Spent a few days struggling with this, and I guess 18.04 just isn't ready for production.
An 18.04 server with lxd over zfs on raw block devices just reboots itself randomly. No panic, no output. I have the kernel logging to console=ttyS0 (over a null modem cable, etc.), and it works fine during normal operation, but there is absolutely no peep when it reboots. No sign of it in ANY of the log files.
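For anyone who wants the same serial-console capture while chasing a reboot like this, here is a rough sketch of the GRUB side of it. The baud rate and port are assumptions (ttyS0 at 115200); adjust for your own null modem link.

```shell
# Sketch of /etc/default/grub entries for a serial console on ttyS0.
# Port and speed are assumptions; match them to your serial link.
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"

# Then regenerate the grub config:
#   sudo update-grub
```

In my case even this captured nothing at the moment of the reboot, which is part of what made it so hard to chase.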
Hard to reproduce, but a no-go for production. It happens when I try to run some of my 25GB of R+D scripts/programs/etc. (which have been running as old 14.04 LXCs for years) that take hours to copy to each new lxd installation I have tried (I tried the generic "lxd init" as well as "root on zfs", and both the "live-server" and plain "server" install ISOs). The combination of one container madly running unbound to answer 100,000s of digs from a second container while a third is doing heavy network I/O seems to cause the mysterious reboot, but not reliably. It takes hours to reproduce, and that makes 18.04 not ready for my production environment.

For context: FreeBSD VIMAGE jails on zfs ran this same setup fine for 7 years, old lxc on 14.04 (non-zfs) ran it for 4 years, and FreeNAS was never a problem. My guess is the Linux zfs port just isn't that stable yet under high load. I tried root zfs with and without LUKS, and went down the rat holes of an underpowered PSU (750W) and overheating (69C max per lm-sensors); all dead ends. Tried it on Phenom X6 and 4GHz FX AMD processors with 32GB ECC RAM (yes, ECC, and the BIOS is configured that way).

This looks like a genuine bug in the 18.04 lxd-over-multi-device-zfs-pool setup (yes, I see the 120-second lock message for "sync" on the console, but I understand that is just a warning). My 2 cents: my gut says this is a memory-overwrite problem between the network bridge (manually created br0, NO local bridge) and the zfs filesystem. I took LUKS off, so it isn't cryptsetup. Giving up for now and going back to my own OS work; maybe I'll revisit in 6 months. Too bad, lxd over zfs sounded like a good story and I would have loved to deploy it. I did learn a lot.
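For reference, the setup and load pattern that triggered the reboots looked roughly like this. This is a sketch, not a verbatim repro: the device names, container names, and the resolver address are placeholders, and my real workload was my R+D scripts rather than a bare dig loop.

```shell
# Pool on raw block devices (device names are placeholders for your disks)
sudo zpool create tank /dev/sdb /dev/sdc

# Point lxd at the existing pool (flags per "lxd init --help" on 18.04)
sudo lxd init --auto --storage-backend zfs --storage-pool tank

# Three containers: a resolver, a query generator, and a network I/O hog
lxc launch ubuntu:18.04 dns      # runs unbound
lxc launch ubuntu:18.04 digger   # hammers the resolver
lxc launch ubuntu:18.04 iohog    # does heavy network I/O

# In "digger": flood the resolver with queries
# (10.0.0.2 is a placeholder for the dns container's address)
lxc exec digger -- sh -c \
  'while true; do dig @10.0.0.2 example.com +short >/dev/null; done'
```

With all three going at once, the box would sometimes reboot after a few hours, with nothing on the serial console or in the logs.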
Conclusions after a few months.
I could not stop these random reboots from happening on 18.04 with consumer motherboards (with ECC + AMD), so I gave up and bought a used Dell R610 (~$175 including 48GB RAM and 2x PSUs), and the problem went away. But I thought I would share a few rat holes for you to avoid should you have the same issues.
1. Dell likes RAID, so I had to replace the hard drive controller with one flashed for "unraid zfs" (see eBay, ~$50). Don't be afraid of breaking off some plastic tabs inside.
2. DO NOT USE the Dell/Broadcom on-board NICs. They do not deal well with IPv6 multicast in bridge mode (read: lost neighbor solicitations). A 4-day rat hole.
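If you suspect the same bridge/multicast problem, here are the checks I would start with. The bridge name br0 matches my setup; whether disabling multicast snooping helps on your NIC/driver combination is an assumption to verify, not a guaranteed fix.

```shell
# Watch whether IPv6 neighbor solicitations (ICMPv6 type 135) actually
# reach the bridge; silence here while a peer is soliciting is the symptom
sudo tcpdump -i br0 -n 'icmp6 and ip6[40] == 135'

# Check whether the bridge is doing multicast snooping
cat /sys/class/net/br0/bridge/multicast_snooping

# Workaround sketch: turn snooping off so multicast is flooded to all ports
echo 0 | sudo tee /sys/class/net/br0/bridge/multicast_snooping
```

In my case the reliable fix was simply an add-in NIC instead of the on-board Broadcoms.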
3. Avoid messing with the Dell DRAC or universal configuration screens. Just try not to change anything that will force a reconfiguration or cause diminished performance. I lost one Dell to this.
Pros: redundant PSUs, good cooling, reasonable power usage (~100W with six 2.5″ drives)
Cons: slower CPU: 2×4-core 2.4GHz Intel versus 1×8-core 4.0GHz AMD
Overall: I did not really gain much by going to 18.04. My 14.04 lxc setup had been up for years non-stop with that same 4GHz AMD/mobo combo and had no IPv6 problems. I can only hope I learned something useful from the month+ I spent upgrading to 18.04.