This Time Self-Hosted

Tinderbox and logs, some thoughts

While looking at today’s logs I started to wonder whether the current approach is actually sustainable. The main problem is that most of the tinderbox was just created piece by piece as needed to complete a specific task, and it really has never undergone a complete overhaul.

Right before moving to containers, it consisted only of a chroot, which caused quite a few issues in the long run, though nothing extreme (some important problems remain even now, such as the fact that the libvirt test suite seems to trigger something bad in my kernel, which in turn blocks evolution and gwibber).

Beside the change from a simple chroot to a container (which is reminiscent of BSD jails, by the way), the rest of the tinderbox has been kept almost identical: there is a script, written by Zac, that produces a list of packages following these few rules (a rough sketch of the selection follows the list):

  • the package is the highest version available within a slot (the script lists all slots of all packages);
  • neither the package nor its dependencies are masked (this is important because I do mask stuff locally when I know it fails, for instance libvirt above);
  • the ebuild in that particular version is not installed, or it hasn’t been re-installed in the past ten weeks (originally six weeks, now ten because tests need to be executed as well).
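
Just to give a feel for it, here is a minimal sketch of how that kind of selection could be expressed on top of portageq; this is not Zac’s actual script, it skips the dependency-mask check entirely, and the slotted-atoms.list input file is something I’m making up for the example:

    #!/bin/bash
    # Hypothetical sketch of the package selection, not the actual script.
    # For each slotted atom, take the best visible version (which already
    # excludes masked packages), then skip it if that very version was
    # merged less than ten weeks ago.

    MAX_AGE=$(( 10 * 7 * 24 * 3600 ))   # ten weeks, in seconds
    NOW=$(date +%s)

    while read -r atom; do              # e.g. "dev-libs/libxml2:2", one per line
        best=$(portageq best_visible / "${atom}") || continue
        [[ -z ${best} ]] && continue    # nothing visible: masked or keyworded away

        # Skip the package if this exact version is installed and fresh enough.
        vdb="/var/db/pkg/${best}"
        if [[ -f ${vdb}/BUILD_TIME ]]; then
            build_time=$(<"${vdb}/BUILD_TIME")
            (( NOW - build_time < MAX_AGE )) && continue
        fi

        echo "=${best}"
    done < slotted-atoms.list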

This is the core of the tinderbox; beside that, there is a mostly-simple bashrc file for Portage that takes care of a few more tests that Portage itself ignores for now (a sketch of a couple of them follows the list):

  • check for calls to AC_CANONICAL_TARGET; this is a personal beef against over-canonicalisation in autotools;
  • check for bundled common libraries (zlib, libpng, jpeg, ffmpeg, …); this makes it easier to identify possibly bundled libraries, although it has quite a high rate of false positives (having an actual database of the code present in the various packages would make this easier);
  • check for use of insecure functions; this is just a little extra check for functions like tmpnam() and similar, which ld already warns about;
  • single-pass find(1) checks for OS X fork files (unfortunately common in Ruby packages!), for setuid and setgid binaries (to keep a list of them), and for invalid directories (like /usr/X11R6) in the installed packages;
  • an extra QA trick to identify packages calling make rather than emake (which turns out to be quite useful to spot packages that call plain make, instead of an explicit emake -j1, to hide parallel-make failures).
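
To give a rough idea of what a couple of these checks can look like, here is a sketch of a bashrc hook; it is not the tinderbox’s actual bashrc, it only covers the find(1) and AC_CANONICAL_TARGET checks, and the report file names under ${T} are made up:

    # Hypothetical excerpt of /etc/portage/bashrc, not the real tinderbox file.
    # Portage runs the post_src_install hook right after the package has been
    # installed into ${D}.

    post_src_install() {
        # Single-pass find over the installed image: OS X fork files, set*id
        # binaries and files in invalid directories, each listed in its own
        # report file; the "," operator evaluates all branches for every file.
        find "${D}" \
            \( -name '._*' -fprint "${T}/tinderbox-forkfiles.rpt" \) , \
            \( -perm -4000 -fprint "${T}/tinderbox-setuid.rpt" \) , \
            \( -perm -2000 -fprint "${T}/tinderbox-setgid.rpt" \) , \
            \( -path '*/usr/X11R6/*' -fprint "${T}/tinderbox-baddirs.rpt" \)

        # Over-canonicalisation check: list configure sources calling
        # AC_CANONICAL_TARGET.
        grep -l AC_CANONICAL_TARGET "${S}"/configure.{ac,in} 2>/dev/null \
            > "${T}/tinderbox-canonical-target.rpt"

        # Print the non-empty reports so they also end up in the build log.
        for report in "${T}"/tinderbox-*.rpt; do
            [[ -s ${report} ]] || continue
            echo "TINDERBOX ${report##*/}:"
            cat "${report}"
        done
    }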

Now, all this produces a few files in the temporary directory, which are then printed so that the actual build log keeps them. Unfortunately this does not work tremendously well, since it requires quite a bit of work to extract them properly afterwards.

So I was thinking: what if, instead of just running emerge, I ran a wrapper around emerge? This would actually help me gather important information, at the cost of spending more time on each merge. At that point, the wrapper could write up a “report card” after each emerge, containing the following information in a suitable format (XML comes to mind); a minimal sketch of such a wrapper follows the list:

  • status of the emerge (completed or not);
  • the build log of the emerge (by filename; the file gets copied alongside the report card itself);
  • the emerge --info output for this specific merge (likewise); right now the emerge --info I provide is the generic information of the tinderbox, which might differ from the specific settings used by the merge;
  • the versions of the whole dependency tree; likewise, this is something that might change between the time the package fails and the time I file the bug;
  • the CVS revision of the current ebuild; this is very important to debug possibly-fixed and duplicate bugs, and it is one of the crucial points to consider before moving to git, where such a per-file revision is not available, as far as I know;
  • if the package failed within functions that we know leave a log with details, those files should be copied together with the report card, and listed; this works for epatch, the functions from autotools.eclass, and econf;
  • a gathering of all the important information returned by the checks in the bashrc and in Portage proper (pre-stripped files, setXid files, use of make and AC_CANONICAL_TARGET, …).
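
Here is a minimal sketch of what such a wrapper could look like; the paths and the XML layout are made up for illustration, qlist comes from portage-utils, and the CVS revision and the copying of the epatch/econf logs are left out:

    #!/bin/bash
    # Hypothetical emerge wrapper producing a per-merge "report card";
    # paths and report format are illustrative only.

    atom="$1"
    stamp=$(date +%Y%m%d-%H%M%S)
    report_dir="/var/log/tinderbox/${stamp}-${atom//\//_}"
    mkdir -p "${report_dir}"

    # Run the actual merge, keeping the full output as the build log.
    emerge --oneshot --verbose "${atom}" &> "${report_dir}/build.log"
    status=$?

    # Snapshot emerge --info as it was for this specific merge.
    emerge --info > "${report_dir}/emerge-info.txt"

    # Record the installed versions of everything, as a coarse stand-in for
    # the versions of the dependency tree.
    qlist -ICv > "${report_dir}/installed-packages.txt"

    # Write the report card itself.
    [[ ${status} -eq 0 ]] && result=ok || result=failed
    {
        echo '<?xml version="1.0" encoding="UTF-8"?>'
        echo "<report atom=\"${atom}\" status=\"${result}\">"
        echo '  <buildlog>build.log</buildlog>'
        echo '  <emergeinfo>emerge-info.txt</emergeinfo>'
        echo '  <installed>installed-packages.txt</installed>'
        echo '</report>'
    } > "${report_dir}/report.xml"

    exit "${status}"

The exact format matters less than having every merge leave behind a self-contained directory that can be processed later, independently of the tinderbox itself.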

Once this report card is generated, the information from the workdir is pretty unimportant and can mostly be cleaned up; this would allow me to save space on disk, which isn’t exactly free.

More importantly, with this report card it also becomes possible to check for existing bugs and report new ones almost automatically, which would reduce the time needed to handle the tinderbox’s results, and thus the time I need to invest on the reporting side of the job rather than on the fixing side (which thankfully is currently handled very well by Samuli and Victor).

More on this idea in the next few days, hopefully with some proof of concept as well.

Comments (5)
  1. With git or any non-CVS version control system, the revision identifier of the whole repository could be used instead of the one in the ebuild. Then adding the repository revision identifier to “emerge --info” could make it easier to determine whether a bug report is outdated.

  2. The problem with that is that the number of *repository* revision identifiers is the number of commits, which means it is a multiplication of the single revisions. Given that the package.mask file alone has over 10K revisions, identifying which one applies is not as trivial as with the CVS ID of the ebuild file.

  3. git uses SHA-1 sums as the repository revision identifier, so it does not depend in a simple way on the number of commits. git commands support working on specific files; it is possible to diff/log changes to a file since a specific revision. Ebuilds also use eclasses and many other files. A repository revision identifier would allow identifying all of these, so it would help if they influence a bug.

  4. You can communicate with Bugzilla through the XML-RPC API: http://www.bugzilla.org/doc… and send your report as a SOAP message. P.S.: I like your efforts with the Tinderbox, but I continue to prefer a real Continuous Integration solution (like buildbot: http://buildbot.net/trac ) rather than a DIY reimplementation of CI (your tinderbox).

  5. Buildbot would work fine if we had more than ten machines per arch to run it on. Since that really doesn’t seem to be the case, I’m afraid it’s not going to happen anytime soon.
