Entropy Broken

In my previous post I noted the presence of Entropy Broker — software designed to gather entropy from a number of sources and distribute it to a number of consumers. My main interest was to use its EGD interface to feed new entropy to OpenSSL, so that it wouldn’t deplete the little entropy available on my two vservers, where I cannot simply push it via timer_entropyd.
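For context, the EGD socket protocol that OpenSSL can consume is tiny: single-byte commands over a Unix socket. Here is a minimal client sketch in Python; the socket path is a made-up example, not anything Entropy Broker actually installs:

```python
import socket
import struct

def egd_entropy_level(sock_path="/var/run/egd-pool"):
    """Ask an EGD daemon how much entropy it currently holds.

    Command byte 0x00 requests the pool level; the daemon answers
    with a 32-bit big-endian integer.
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    s.sendall(b"\x00")                      # 0x00: get entropy level
    level = struct.unpack(">I", s.recv(4))[0]
    s.close()
    return level

def egd_read(nbytes, sock_path="/var/run/egd-pool"):
    """Request nbytes of entropy without blocking (command 0x01).

    The daemon replies with one byte saying how many bytes follow,
    which may be fewer than requested if the pool is low.
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    s.sendall(struct.pack("BB", 0x01, nbytes))
    count = s.recv(1)[0]
    data = s.recv(count)
    s.close()
    return data
```

This is only a sketch of the wire format; a real consumer like OpenSSL's RAND_egd does the equivalent internally.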

Unfortunately, it turned out not to be as simple as building it and writing init scripts for it. The software seems abandoned (the last release was back in 2009), but most importantly, it doesn’t work.

I have already hinted in the other post that the website lies about the language the software is written in. In particular, the package declares itself as being written in C, when it is actually composed of a number of C++ source files and uses the C++ compiler to build. Looking at the code, all of it is written in C style; a quick glance at the compiled results shows that nothing from the STL is used, and the only two C++ language symbols the compiled binaries rely on are the generic new and delete operators (_Znwm and _ZdlPv in mangled form). That alone is a bad sign.

After building, and setting up the basic services needed by Entropy Broker (the eb hub; server_timers, which takes the place of timer_entropyd; and, in a virtual machine, client_linux_kernel), the results aren’t promising. The entropy pool is never replenished on the virtual machine, and network traffic is very limited. More or less the same goes when using the EGD client (which is actually an EGD server acting as an eb client). server_audio is even worse: it seems to exit with an error after reading a few data points. And server_video doesn’t even build, since it relies on V4L1, which was dropped from Linux as of 2.6.38.

Returning for a moment to my EntropyKey problems with the entropy not staying full: I spoke briefly with the EntropyKey developers today. When those downward spikes happen, it is usually because something is consuming entropy faster than the EntropyKey can replenish it; since the EntropyKey can produce around 4KiB/s of entropy, that implies a very fast consumption of random data.

As an example, I was told that spawning a process eats 8 bytes of entropy, so around 500 processes spawned in a second would be enough to beat the Key’s ability to replenish the pool. That might sound like a lot, but it really isn’t, especially when doing parallel builds: a straight gcc invocation in Gentoo spawns about five processes (the gcc-config wrapper, gcc as the real frontend, cpp to preprocess, cc1 as the real compiler, and as, which is the assembler), and libtool definitely calls many more for handling inputs and outputs. And remember that the tinderbox builds with make -j12 whenever it can.
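To put my own numbers on that (a back-of-the-envelope calculation, not a figure from the developers):

```python
KEY_RATE = 4 * 1024   # EntropyKey output: bytes of entropy per second
PER_SPAWN = 8         # bytes of entropy drained per process spawn

# How many spawns per second the Key can just about keep up with.
break_even = KEY_RATE // PER_SPAWN           # 512 spawns/s

# One gcc invocation spawns roughly five processes
# (wrapper, driver, cpp, cc1, as), so:
gcc_invocations_per_sec = break_even / 5     # ~102 compiles/s

print(break_even, gcc_invocations_per_sec)
```

A hundred-odd compiler invocations per second is well within reach of a make -j12 build, which matches the depletion I observe.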

This seems to match the results I see from the Munin graphs, where entropy is depleted when load spikes, for instance when kicking off a build of ChromiumOS. But now I’m also wondering whether the problem is that the ekeyd daemon runs at too low a priority when trying to replenish the pool, which leaves it uncovered; I guess my next step is to add Munin monitoring for the ekeyd data as well as the entropy, to see if I can link the two. Do note that the load on Yamato can easily reach 60 and over…
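The entropy side of that monitoring is only a few lines; this is an illustrative sketch (munin-node may well already ship an equivalent entropy plugin, so treat it as a demonstration of the fetch format rather than something you need to write):

```python
#!/usr/bin/env python
# Munin-style plugin sketch: report the kernel's available entropy.
import os
import sys

PROC = "/proc/sys/kernel/random/entropy_avail"

def munin_line(value):
    """Format a sample the way Munin's fetch protocol expects it."""
    return "entropy.value %d" % value

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "config":
        # "config" run: describe the graph to munin-node.
        print("graph_title Available entropy")
        print("graph_vlabel bits")
        print("entropy.label entropy")
    elif os.path.exists(PROC):
        # Normal fetch run: emit the current pool size in bits.
        with open(PROC) as f:
            print(munin_line(int(f.read())))
```

Linking this against the ekeyd statistics would then be a matter of graphing both on the same timescale.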

And a final word about timer_entropyd: a quick check suggests that it only works well on systems that are mostly idle. My frontend system fits that description, and indeed timer_entropyd seems to do a good job there (load during the day never reached 1). It doesn’t seem to be a good idea for Yamato with its high load.

9 thoughts on “Entropy Broken”

  1. An obvious place where entropy is required for process creation is randomisation of the address-space layout; for PIE binaries, even the load address of the binary itself is randomised.


  2. Something you are not really considering is entropy quality. The EntropyKey will be providing high-quality random data, but you have already noticed that the timer entropy source gets less random with increasing load. If I were attacking your machine, I could easily overload it to reduce the quality of the random data, and thus attack your host more easily. Theoretically speaking, anyway!


  3. sys-apps/rng-tools includes a daemon which can be coaxed into reading from raw streams. Combine that with a FIFO, and you’ve theoretically got a generic, pluggable place to shove a bitstream. I’m facing entropy depletion issues on my VPS node, so I’m looking at how I can use that to shove more entropy into the system when it’s depleted. Three ideas so far:
     1) Collect huge files from my systems at home, upload them to my server, and feed them into rngd.
     2) While the entropy pool is more than 90% full, read bytes into a buffer, which is then fed back in via rngd. This would let me bank entropy during times of surplus, and feed off that bank when I’m short.
     3) Arrange for my VPS hosting provider to configure a virtualized serial port through which it feeds entropy data from the host. rngd could be fed there from the dom0 entropy pool, and from entropy streamed securely on the local network.


  4. Instead of ranting on your weblog, you could’ve contacted the entropy-broker developer. Then he would know there were issues with it and resolve them.


  5. Oh, and by the way: server_audio exiting is probably caused by the input data not being random enough. You could have seen that in the logging it produces, in either syslog or a logfile.
     Furthermore: the kernel client waits until the kernel signals it that the entropy count is too low. The default (at least with 2.6.38) is 128 bits, and it almost never reaches that point. To have the pool filled earlier, write a new threshold (1…4096) to /proc/sys/kernel/random/write_wakeup_threshold, e.g.:
     echo 512 > /proc/sys/kernel/random/write_wakeup_threshold
     I’ll add this to the readme.
     And my last point: the video4linux server will be updated to support current video4linux2; expect a release tonight.
     And please: mentioning problems on your blog is fine, but please also write an e-mail – it was only by accident that I stumbled on your blog.


  6. Folkert, I usually try to contact upstream before posting — I don’t remember in this case whether I did, to be honest, but as email is cheap I usually fire off a mail. If I didn’t, either the address wasn’t within my reach or something just turned me off altogether… Anyway, as for the write_wakeup_threshold: I did change it, don’t worry. I’ve been around this kind of thing for a while, so I know what to look out for. If you’re back developing the project and want some more feedback, let me know. I’m not sure I can give it right away, but I wouldn’t mind looking into it.
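For reference, the /proc threshold trick mentioned in the comments can be wrapped up like this. A sketch only: the path is the standard Linux procfs location, the 1…4096 range is the one quoted above, and writing the file requires root:

```python
# Sketch: raise the kernel's wakeup threshold so entropy feeders
# (like Entropy Broker's client_linux_kernel) get woken earlier.
THRESHOLD = "/proc/sys/kernel/random/write_wakeup_threshold"

def clamp_threshold(bits):
    """Keep a requested threshold inside the accepted 1..4096 range."""
    return max(1, min(4096, bits))

def set_threshold(bits, path=THRESHOLD):
    """Write a new wakeup threshold; needs root on a real system."""
    with open(path, "w") as f:
        f.write(str(clamp_threshold(bits)))

# Reading the current value back works unprivileged:
#   print(open(THRESHOLD).read().strip())
```

Functionally this is the same as the one-line echo, just with the range check made explicit.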

