18.04 zfs+lxd causes container host to reboot under heavy load?

Spent a few days struggling with this, and I guess 18.04 just isn't ready for production.

An 18.04 server with LXD over ZFS on raw block devices just reboots itself randomly. No panic, no output. I have a console=ttyS0 kernel console (over a null modem, etc.) that works fine during normal operation, but there is absolutely no peep from it when the machine reboots. No trace in ANY of the log files.

Hard to reproduce, but a no-go for production. It happens when I run some of my 25 GB of R&D scripts/programs/etc. (which have been running as old 14.04 LXCs for years) that take hours to copy onto each new LXD installation I have tried (I tried a generic “lxd init” as well as root-on-ZFS, and both the “live-server” and plain “server” install ISOs). The combination of one container madly running unbound to answer hundreds of thousands of digs from another container, while a third does heavy network I/O, seems to cause the mysterious reboot, but not reliably. It takes hours to reproduce, which makes 18.04 not ready for my production environment.

FreeBSD VIMAGE jails on ZFS ran the same setup fine for 7 years, and the old LXC setup on 14.04 (non-ZFS) for 4 years (and FreeNAS was never a problem). My guess is the Linux ZFS port just isn't that stable yet under high load. I tried root-on-ZFS with and without LUKS [1], and went down the rathole of chasing an under-powered PSU (750 W) and overheating (69 °C max per lm-sensors); all dead ends. I tried a Phenom X6 and a 4 GHz FX AMD processor, both with 32 GB of ECC RAM (yes, ECC, and the BIOS is set up that way).

This seems like a genuine bug in the 18.04 LXD-over-multi-device-ZFS-pool setup (yes, I see the 120-second blocked-task message for “sync” on the console, but I understand that is just a warning). My 2 cents: my gut says this is a memory-overwrite problem between the network bridge (a manually created br0, NO local bridge) and the ZFS filesystem. I took LUKS off, so it isn't cryptsetup. Giving up for now and going back to my own OS work; maybe I'll revisit in 6 months. Too bad. LXD over ZFS sounded like a good story and I would have loved to deploy it. I did learn a lot.
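For what it is worth, the DNS side of that load is easy to approximate. Below is a rough sketch (Python, calling dig via subprocess) of the kind of query storm described above; the resolver address and hostnames are placeholders, not my actual setup.

    # Rough sketch of the query load: hammer the unbound container with a
    # large number of dig queries from another container. Assumes dig is
    # installed; the resolver address and query names are placeholders.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    RESOLVER = "10.0.3.53"   # placeholder: address of the unbound container
    N_QUERIES = 100_000      # order of magnitude mentioned above

    def one_query(i: int) -> int:
        # +short keeps output minimal; +tries=1 +time=1 stops a wedged
        # resolver from stalling the whole thread pool.
        proc = subprocess.run(
            ["dig", f"@{RESOLVER}", f"host{i % 1000}.example.test",
             "+short", "+tries=1", "+time=1"],
            capture_output=True,
        )
        return proc.returncode

    with ThreadPoolExecutor(max_workers=64) as pool:
        failed = sum(1 for rc in pool.map(one_query, range(N_QUERIES)) if rc != 0)

    print(f"{failed} of {N_QUERIES} queries failed")

Run something like this from a second container against the container running unbound, while a third container does bulk network I/O.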

[1] https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS

Conclusions after a few months.

I could not stop these random reboots from happening on 18.04 with consumer motherboards (ECC + AMD), so I gave up and bought a used Dell R610 (~$175 including 48 GB RAM and dual PSUs) and the problem went away. But I thought I would share a few ratholes for you to avoid should you hit the same issues.
1. Dell likes RAID, so I had to replace the hard drive controller with one flashed for “unraid zfs” (see eBay, ~$50). Don't be afraid of breaking off some plastic tabs inside.
2. DO NOT USE the Dell/Broadcom on-board NICs. They do not handle IPv6 multicast well in bridge mode (read: lost neighbor solicitations). A 4-day rathole.
3. Avoid messing with the Dell DRAC or universal configuration screens. Just try not to change anything that will force a reconfiguration or degrade performance. I lost one Dell to this.

Pros: redundant PSUs, good cooling, reasonable power usage (~100 W with six 2.5″ drives)
Cons: slower CPU: 2×4-core 2.4 GHz Intel versus 1×8-core 4.0 GHz AMD

Overall: I did not really gain much by going to 18.04. My 14.04 LXC setup had been up for years non-stop with that same 4 GHz AMD/mobo combo and had no IPv6 problems. I can only hope that I learned something useful from the month-plus I spent upgrading to 18.04.

How to understand a Kepler telescope under a large aspherical wavefront

To my understanding, a Kepler telescope is designed to conjugate one plane to another, for example to scale the incoming wavefront down or up onto a wavefront sensor for measurement. It works well for small wavefront errors, i.e. when the wavefront is relatively flat (e.g. $<10\lambda$ wavefront RMS).
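For reference (and just to fix notation; none of this is specific to the paper below), the standard paraxial relations for a Keplerian afocal relay with lens focal lengths $f_1$ (input side) and $f_2$ (output side) are:

\begin{align}
m &= -\frac{f_2}{f_1} && \text{(transverse / pupil magnification)}\\
\theta_\text{out} &= \frac{f_1}{f_2}\,\theta_\text{in} && \text{(wavefront slope scaling)}\\
\frac{1}{R_\text{out}} &= \left(\frac{f_1}{f_2}\right)^{2}\frac{1}{R_\text{in}} && \text{(wavefront curvature scaling)}
\end{align}

So a demagnifying relay ($f_2 < f_1$) shrinks the beam but amplifies wavefront slopes by $f_1/f_2$ and curvature by $(f_1/f_2)^2$ at the sensor side.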

What would happen if a large aspherical wavefront impinges on the same Kepler telescope system? The paper “Calibration issues with Shack-Hartmann sensors for metrology applications” suggests placing an aperture at the shared focal plane to improve performance, as below:

[Figure: Kepler telescope with an aperture at the shared focal plane]

How can I understand the intuition behind this? Why would a Kepler telescope fail for large aspherical wavefronts?

How to calculate the concentration of lead(II) liberated from lead(IV) oxide under acidic conditions?…

What is the concentration of $\ce{Pb^2+}$ in a lake that has a pH of $6.0$ and is in equilibrium with $\ce{PbO2(s)}$ and atmospheric oxygen?

\begin{align}
\ce{PbO2(s) + 4H+ + 2e- &-> Pb^2+ + 2H2O} &
\log K &= 49.2
\end{align}

I spent a lot of time trying to solve this, but I could not; my final answer does not appear to make sense:
\begin{align}
\mathrm{pH} &= 6 &
\implies&& [\ce{H+}] &= 10^{-6}\\
\log K &= 49.2 &
\implies&& K &= 10^{49.2}\\
K &= \frac{[\ce{Pb^2+}]}{[\ce{H+}]^4} &
\implies&& 10^{49.2} &= \frac{[\ce{Pb^2+}]}{(10^{-6})^4} \\
&&\implies&& [\ce{Pb^2+}] &= 10^{25.2}
\end{align}
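Numerically, the last step is just exponent arithmetic; here is a quick check (Python) that simply plugs the numbers into the expression exactly as I wrote it above:

    # Redo the exponent arithmetic from the last line above, using the
    # expression exactly as written there: K = [Pb2+] / [H+]^4.
    import math

    K = 10 ** 49.2        # from log K = 49.2
    H = 10 ** -6.0        # [H+] at pH 6
    Pb = K * H ** 4       # solving that expression for [Pb2+]
    print(f"[Pb2+] = 10^{math.log10(Pb):.1f}")   # prints: [Pb2+] = 10^25.2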

What is my mistake?

When applying to master's programs, how important is the GPA from an unrelated undergraduate degree?

For example: I will soon apply for a master's in Statistical Machine Learning after 4 years of relevant work experience. I did my undergraduate degree in Electrical Engineering at a highly reputed Indian university, but I obtained poor grades.

I want to understand whether admissions committees emphasize the GPA from an unrelated program when I have considerable work experience in the field I'm applying to.

Why aren't Clustal alignments stable under deletion of some of the sequences?

I'm new to pairwise and multiple sequence alignment in general, but I thought I understood how Clustal works: the k-tuple distance is a cheap metric that is 'probably approximately' monotonic in the real global pairwise-alignment score, so we can just calculate it between all C(n, 2) pairs of sequences and do a simple UPGMA. (ClustalW algorithm description here.)

But I'm observing odd results. Suppose I have a file with 60 sequences, run Clustal on it, and observe that seqA and seqB are paired terminal child nodes (siblings) in the tree. If I then delete a handful (say 10) of the sequences, but keep seqA and seqB, then in the new Clustal alignment seqA and seqB may not remain paired.

This is strange to me, because if Clustal were actually doing the k-tuple distance + UPGMA procedure I thought it was doing, then all terminal pairings should always be stable under deletions of other sequences; a quick sanity-check sketch of that experiment is below.
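Here is roughly the check I have in mind, as a sketch only: it uses scipy's average-linkage clustering (UPGMA) on a made-up random distance matrix as a stand-in for Clustal's k-tuple distances, so the sequence labels and numbers are invented.

    # Sketch of the stability check: find sibling leaf pairs in a UPGMA tree,
    # delete some leaves, rebuild, and see whether the surviving pairs persist.
    # scipy's average linkage stands in for Clustal's guide-tree step; the
    # distance matrix is random, not real k-tuple distances.
    import numpy as np
    from scipy.cluster.hierarchy import linkage
    from scipy.spatial.distance import squareform

    def sibling_leaf_pairs(dist, labels):
        """Leaf pairs whose first merge is directly with each other (UPGMA)."""
        Z = linkage(squareform(dist, checks=False), method="average")
        n = len(labels)
        pairs = set()
        for a, b, _, _ in Z:
            a, b = int(a), int(b)
            if a < n and b < n:          # both children are original leaves
                pairs.add(frozenset((labels[a], labels[b])))
        return pairs

    rng = np.random.default_rng(0)
    labels = [f"seq{i}" for i in range(12)]
    pts = rng.normal(size=(12, 5))
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

    before = sibling_leaf_pairs(dist, labels)

    # Drop a handful of sequences (but keep everything else) and recompute.
    keep = [i for i in range(12) if i not in {3, 7, 9}]
    after = sibling_leaf_pairs(dist[np.ix_(keep, keep)], [labels[i] for i in keep])

    kept_labels = {labels[i] for i in keep}
    broken = {p for p in before if p <= kept_labels} - after
    print("sibling pairs before:", before)
    print("sibling pairs after :", after)
    print("pairs that did not survive the deletion:", broken)

This is just the delete-and-recluster experiment from the paragraph above in miniature.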

So… what am I missing?