This Time Self-Hosted

Testing beforehand

Things need to be tested, by developers, before they are ready for public consumption. This is pretty well known in the Free Software development world, but it does not seem to reach all the users, especially the novices coming from the world of proprietary software, Windows software in particular.

This is probably because in that world, stable software does not really change much by default, and as a result most users tend to use experimental software, often in beta state, for their daily work. This is probably also due to how the software is labelled by those companies: Apple’s “beta” Safari 4 is mostly stable compared to the older version, though I guess it’s far from complete behind the scenes; on the other hand, a development version of a piece of Free Software may very well be unusable because of crashes, since it becomes available much sooner.

Similarly, tricks that increase performance are pretty common in software ecosystems like Windows’s, because there is no other way to get better performance (and Microsoft is pretty bad at that; I guess I’ll write about that one day too). At the same time, what we consider a trick in the Free Software world may very well be totally and utterly broken.

Indeed, since I joined Gentoo there have been quite a few different tweaks and tricks that are supposed to either improve your runtime performance tenfold, or make you compile stuff in a fifth of the time. Some of these turned out to be just mindless copy and paste, while others are outright disinformation which I tried to debunk before. On the other hand, I’m the maintainer of one of the most successful tricks (at least I hope it is!).

My problem is that for some users, the important tricks are the ones the developers don’t speak about. I don’t know why this is; maybe they think the developers are out to screw them over. Maybe they think distribution developers like me are just part of a conspiracy to waste their CPU power or something. Or maybe they think we want to feel cool by being able to compile stuff much sooner than they do. Who knows? But the point is that none of this is the case, of course.

What I think is that these kinds of tricks should really be tested by developers first, so that they don’t end up biting people in their bottoms. One of the tricks that lately seems to be pretty popular is in-memory builds with tmpfs. Now, this is something I really should look into doing myself, too. With 16GB of memory, and with the exception of OpenOffice, this could be quite a useful addition to my tinderbox (if I can save and restore the state quickly, that is).
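
For reference, the usual way to set this up is to mount a tmpfs over Portage’s build directory. A minimal sketch, assuming the default /var/tmp/portage location; the size is only an example and needs tuning to your RAM:

# /etc/fstab — build in memory by mounting tmpfs over the Portage build directory
# (8G is just an example; OpenOffice alone can need more than that)
tmpfs   /var/tmp/portage   tmpfs   size=8G,noatime   0 0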

I do have a problem with users telling people to use this right now, as it is. The first problem is that, given that ccache and distcc usage are handled by Portage, this probably should be handled by Portage too. The second problem is what those suggestions lack: the identification of the problems. Because, mind you, you’re not just building in memory, you’re also building on tmpfs!
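
For comparison, this is how those two are toggled, through Portage’s own configuration (a sketch; the cache size is just an example):

# /etc/portage/make.conf — ccache and distcc go through Portage itself
FEATURES="ccache distcc"
CCACHE_SIZE="2G"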

By itself, tmpfs does not have any particular bug that might hinder builds, but it has one interesting feature: sub-second timestamps. These are also available on XFS, so saying that Gentoo does not support building on tmpfs (because it increases the build failure rate) is far from the truth, as we support XFS builds pretty well. Unfortunately neither users nor, I have to say, developers know about this detail. Indeed you can see it here:

flame@yamato ~ % touch foobar /tmp/foobar
flame@yamato ~ % stat -c '%y' foobar /tmp/foobar 
2009-06-02 04:04:14.000000000 +0200
2009-06-02 04:04:21.197995915 +0200

How this relates to builds is easy to understand if you know how make works: by tracking the mtime of dependencies and targets. If they don’t follow in the right sequence, the build may break or enter infinite loops (like in the case of smuxi some time ago), and this is much easier to trigger when the resolution of mtime is finer than a second: with timestamps truncated to the second, any two files touched within the same second look equally old to make, which hides the ordering problem.
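
A minimal sketch of the kind of broken rule involved; gensource and genheader are hypothetical commands standing in for whatever generates the two files:

# The recipe regenerates the source *before* the header, so once it has
# run, foobar.h is newer than foobar.c. With 1-second resolution the two
# files usually land on the same second and make considers them in sync;
# with sub-second resolution foobar.c is always older than its dependency
# and make will try to rebuild it on every run.
foobar.c: foobar.h
foobar.h: foobar.data
	gensource foobar.data   # writes foobar.c
	genheader foobar.data   # writes foobar.h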

I have already written a few posts about fixing make in my “For a Parallel World” series; most of them are useful for fixing this kind of issue too, so you might want to refer to those.

Finally, I want to say that there are other things you should probably know when thinking about using tmpfs to build straight in memory. One of them is that, by default, gcc already builds in memory to some extent. The -pipe compiler flag that almost everybody has in their CFLAGS variable tells the compiler exactly that: to keep temporary data in memory and, for instance, run the assembler directly on it. While the temporary data kept in the build directory and the data kept in memory by -pipe are not the same thing, if you’re limited on memory you could probably just disable -pipe and let the compiler write its temporary files to the build directory, which on tmpfs is in memory (and swappable) anyway.
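
If you want to experiment with that, it’s a one-line change in make.conf (the flags shown are only an example):

# /etc/portage/make.conf — without -pipe, gcc writes its temporaries as
# files in the build area, which on a tmpfs setup is in memory anyway
CFLAGS="-O2 -march=native"      # instead of "-O2 -march=native -pipe"
CXXFLAGS="${CFLAGS}"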

But sincerely, I think there would be a much greater gain if people started helping out with fixing parallel make issues; compiling with just one core can get pretty tiresome even on a warbox like Yamato, and this is the case with Ardour for instance, because scons is not currently called with a job option to build in parallel. Unfortunately, the last time I tried to declare a proper variable for the number of parallel jobs, so that it didn’t have to be hackishly extracted from MAKEOPTS, the issue ended up stalled on gentoo-dev by bikeshed arguments over the name of the variable.
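
For the record, the hackish extraction looks more or less like this (a sketch; ebuilds of the era did variations on the same theme, and this only handles the common -jN / -j N forms):

# pull the -jN value out of MAKEOPTS to hand it to scons;
# fall back to a single job if no count is set
jobs=$(echo "${MAKEOPTS}" | sed -ne 's/.*-j[[:space:]]*\([0-9]\+\).*/\1/p')
scons -j"${jobs:-1}"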

On the other hand this “trick” (if you want to call it that) could be a nice place to start, given that lots of parallel make issues also show up with tmpfs/XFS (the timestamps might go backward); I think I remember ext4 having an option to enable sub-second timestamps, so maybe developers should start by setting up their devbox with that enabled, or with XFS, so that the issues can be found even by those who don’t have enough memory to afford in-memory builds.
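
Checking whether a given filesystem gives you sub-second timestamps is the same one-liner shown above; the path is just an example, point it anywhere on the filesystem you want to test:

touch /path/on/fs/tscheck && stat -c '%y' /path/on/fs/tscheck
# an all-zero fractional part means 1-second resolution;
# anything else means sub-second timestamps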

Further reading: Mike’s testing with in-memory builds for FATE.

Comments
  1. Personally I have been using ramfs for more than 3 years. The only problems I ever had were due to Java checking for sufficient disk space (which fails, because ramfs reports no space). And OOo eating all memory.

  2. I’ve been using this for years as well. The main advantage is actually in the install phase for a package like gentoo-sources, which contains tons of small files. Copying them over from a tmpfs is much faster than copying them around on disk.

  3. Oh yeah, likewise, I have been using tmpfs for years now too. One interesting question has developed from your article that I hope you can clarify: if you use tmpfs, is it advantageous or not to use -pipe? It seems that you could forgo -pipe and make even better use of your RAM. Or am I not thinking straight? thx

  4. Good question; I have no idea, since I have never done benchmarks. I only tried it out once, and I never before had enough memory to do this. I’d like to do it for the tinderbox but it’s a bit of a problem because I need persistence of failed workdirs. In general, -pipe should alleviate the memory hunger of C++ compilers; how that actually works out in real-world situations I have no idea.

  5. I use tmpfs for /var/tmp/ on all machines I administer, Gentoo or otherwise, and never had problems. Even if the machine hasn’t enough RAM to back the whole compile (and some do compile OpenOffice, for example), the tmpfs data just hits swap space. This is still faster than letting the object files hit a real fs, particularly on install and cleanup. Laptops with their slow disks seem to benefit the most.

  6. I never said anything about time going backwards; the problem is not there; I do remember that bug and I remember it was fixed. What I’m saying is that make expecting two targets to be sequential or simultaneous might work with 1-second resolution, but would be broken by sub-second resolution:

     foobar.c: foobar.h
     foobar.h: foobar.data
     	gensource foobar.data
     	genheader foobar.data

     In this case the rule (blatantly broken) has the order wrong, but if the two commands are fast enough, it won’t trigger an error at all with 1-second timestamps, while it certainly would on tmpfs.

  7. There is also one hack for which tmpfs is useful. Assume you sillily have only 2 GB of disk space free on /var but want to compile OpenOffice. What can you do? You create a 6 GB tmpfs and then make sure you have enough RAM+swap. How do you get enough swap if you don’t have it already? E.g. use “dd if=/dev/zero …” to create an empty file on any file system where you have the space (it could even be a USB stick!) and then use mkswap and swapon on it. Now, it sure is not going to make things much faster; tmpfs swapping seems less efficient than normal filesystem caching (probably there are also issues due to large page tables and TLB misses etc.), but it still is great if you absolutely, temporarily, need a bit more space than you have available on any single partition. Hm, I should add that I probably wouldn’t recommend this on 32-bit systems – PAE should be able to handle it, but I am not 100% sure how well.
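
     Spelled out, that dance looks something like this; sizes and paths are only examples:

     # create a 6 GB swap file on whatever filesystem has room, enable it,
     # then mount the oversized tmpfs backed by RAM+swap
     dd if=/dev/zero of=/mnt/spare/swapfile bs=1M count=6144
     mkswap /mnt/spare/swapfile
     swapon /mnt/spare/swapfile
     mount -t tmpfs -o size=6G tmpfs /var/tmp/portage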

  8. _*Flameeyes*: reposted since it was sent to the old blog server._ I’d have to say OpenOffice is the #1 reason to use tmpfs for building.

     I’ve been using tmpfs for builds for the last year or two now, and have only seen benefits, not problems. GCC building in memory is fine and all, but GCC isn’t permanently resident: it writes everything out to disk between calls, and stuff that builds by aggregating a lot of .o files (i.e. normal things that don’t compile everything in one gcc call) suffers because of this.

     Even temporary files that never see the light of day as output have to be written to disk at least once. Reading them isn’t so problematic, because reads will be served out of the clean disk cache, but the underlying filesystem still has to handle dozens, if not hundreds, if not thousands of journalled writes, which all take up IO somewhere.

     OpenOffice is a perfect example of this: most of what makes OpenOffice slow to build is stuck in IO. Even with 3G of memory available and Linux being reasonably effective with disk cache, it’s sub-optimal with OpenOffice. OpenOffice appears to be very highly IO bound, and for a while was so linear in design that it couldn’t even handle parallel compilation (on the machines that had enough memory to run 2 gccs at a time, that is).

     When building OpenOffice, I set up 10G of tmpfs, which is insane, because I only have 4G of real memory. And I back myself up with 10G of swap kept handy, distributed amongst 3 drives with shared priority (mostly because I want to keep working on the computer while OOo builds).

     This sounds like insanity of course, because in theory you’re swapping out to disk a lot, at least some of the time. But in practice it’s highly effective. You don’t suffer any IO penalty at *all* until really, really late in the build, and by then most of the thrashing is done. Only when you’re around 80% complete does it start to bite into IO, but even then the actual IO throughput is minimal, well below the disk throughput rates, and most of the things that get swapped out you never need to see again anyway.

     Real numbers: Core 2 Duo T9300 @ 2.5GHz (amd64). Build time for OpenOffice: 2 hours.

     qlop -tHG openoffice:
     app-office/openoffice: Sat Aug 16 06:38:40 2008: 2 hours, 4 minutes, 36 seconds
     app-office/openoffice: Fri Oct 17 21:00:50 2008: 1 hour, 56 minutes, 56 seconds
     app-office/openoffice: Mon Feb  9 10:48:40 2009: 2 hours, 8 minutes, 3 seconds

     – P.s. I also have ccache stored in my tmpfs to make my life easier if I do multiple builds of something on the same day; it doesn’t survive reboots, and I don’t mind, since it doesn’t help enough to be worth worrying about 😉

  9. Not to destroy the spirit of Gentoo, but me personally, I just use the binary. Nothing worse than hitting an update cycle where you have to upgrade both Firefox and OpenOffice at the same time. Works pretty much just as well… lol, I know I’m probably gonna get flamed for this. Cheers. Gentoo Noob
