
Squeezing more performance out of the tinderbox

While I have some contacts that might change the way the tinderbox works in the (not-too-near) future, I’m still doing my best to optimise the whole process to squeeze as much power out of it as possible, both here and wherever it’ll end up in that future.

Interestingly enough, dropping the software RAID1 volume and moving to straight LVM, especially for the log files, made a noticeable difference. This shows especially when you look at the time spent doing the final merge to the livefs. After all, RAID1 wasn’t as important to me as a proper backup of the configuration, which I set up properly with rsnapshot and an external (eSATA) Western Digital RAID-0 drive.

Right now, I’m instead looking for a way to reduce the merge redundancy: while it has gotten much better than in the earlier days, especially since USE-based dependencies can be caught much sooner than with the old hacks, there is still room for improvement.

One such improvement came to me yesterday in the shower (a pretty good place to think about how to solve programming problems). Right now the queue is just a list, printed out by the tinderbox scripts that Zac wrote. To avoid recurring over the same packages over and over, each time I stop the process to sync up I resume the list where it stopped, rather than generating a new one; this way I can be sure it shrinks over time. The “queue list” is randomised.
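
In shell terms the resume logic is tiny; a minimal sketch, assuming the queue lives in a flat file with one package per line (tinderbox-list.sh stands in for Zac’s generator script, and every file name here is made up):

    # Resume the leftover queue if there is one; otherwise build a fresh,
    # shuffled one from the generator's output.
    if [ ! -s tinderbox.queue ]; then
        ./tinderbox-list.sh | shuf > tinderbox.queue
    fi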

In parallel, I still generate the complete list, this time not randomised, to be used to fetch all the packages that are not fetched already (it also provides me with a list of fetch-restricted packages, and helps find packages that, well, fail to fetch for various other reasons). This list is generated from scratch so that it covers all the packages matching the usual requirement (“not installed in its latest version in the past six weeks”), and is thus usually longer (as it also lists the packages that failed the tinderbox run, and those that passed through over six weeks ago).
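
The fetch step itself doesn’t need more than feeding that list to emerge; a rough sketch, with the same made-up file names:

    # Regenerate the complete, non-randomised list and pre-fetch all the
    # distfiles; --fetchonly downloads without building, and whatever
    # fails (fetch-restricted packages included) ends up in the log.
    ./tinderbox-list.sh > tinderbox.full
    xargs --arg-file=tinderbox.full emerge --oneshot --fetchonly \
        > fetch.log 2>&1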

But in that second list you might not find some of the packages from the first: packages installed as a dependency will not be there if they were installed less than six weeks ago, and yet, if they were slated for a later merge, they’ll still be in the queue. When this shortlist of packages includes things like Berkeley DB, it makes very much sense to skip them: the sys-libs/db merge takes a few days to complete. Right now the difference is about 238 packages (well, to be honest this is a rough estimate, since it also accounts for packages masked by failure), but this is when almost all of the packages have been merged, with less than 3000 packages remaining. At the beginning of a run the saving is going to be much bigger.
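
Pruning those entries out of the queue is then a matter of intersecting the two files; a sketch, assuming bash (for the process substitution) and one package per line:

    # Keep only the queue entries that still show up in the fresh complete
    # list; whatever got merged in the meantime (say, as a dependency)
    # drops out. comm(1) wants sorted input, hence the re-shuffle after.
    comm -12 <(sort tinderbox.queue) <(sort tinderbox.full) \
        | shuf > tinderbox.queue.new
    mv tinderbox.queue.new tinderbox.queue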

What happens now, then? Well, I’ll probably fix up the shell script I use to restart the tinderbox, so that it proceeds to generate the new complete list, launches the fetch, then reduces the main queue (re-shuffling it at the same time). This would reduce the work I need to do each time the tinderbox restarts, and also the time needed to make a complete run.
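
Strung together, the restart wrapper would be little more than the sketches above in sequence; hypothetical names throughout, and not the actual script:

    #!/bin/bash
    # Regenerate the complete list, kick off the fetch in the background,
    # then prune and re-shuffle the queue before the tinderbox resumes.
    ./tinderbox-list.sh > tinderbox.full
    xargs --arg-file=tinderbox.full emerge --oneshot --fetchonly \
        > fetch.log 2>&1 &
    comm -12 <(sort tinderbox.queue) <(sort tinderbox.full) \
        | shuf > tinderbox.queue.new
    mv tinderbox.queue.new tinderbox.queue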

On the other hand, I definitely need some way to handle the “remove-head-of-list” step automatically; right now I have to handle that manually, and I also have to kill the tinderbox manually, trying to catch it between merges. So if somebody feels like fetching the git repository, check my notes and replace the current xargs call with a proper script that can be directed in some way (netcat is as good a way as any other to command it), and that can pause-rehash-resume, keeping score of what was merged and what wasn’t…
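
To give an idea of the shape I have in mind: a bash sketch, not working code, with made-up file names, using a FIFO as the control channel (netcat could front it just as well):

    #!/bin/bash
    # Sketch of a merge driver to replace the xargs call: one package per
    # iteration, with a control channel checked between merges so that
    # pause/stop land at a safe point instead of killing emerge mid-build.
    mkfifo -m 600 tinderbox.ctl 2>/dev/null
    exec 3<>tinderbox.ctl   # read-write open so reads don't block forever

    while [ -s tinderbox.queue ]; do
        # Poll the control channel: "echo stop > tinderbox.ctl" (or the
        # same thing piped through netcat) is enough to command it.
        if read -t 1 -u 3 cmd; then
            case "$cmd" in
                stop)  break ;;
                pause) read -u 3 cmd ;;   # block until the next command
            esac
        fi

        # Assume the queue holds one category/package-version per line.
        pkg=$(head -n 1 tinderbox.queue)
        if emerge --oneshot "=$pkg"; then
            echo "$pkg" >> tinderbox.done     # keep score of what merged
        else
            echo "$pkg" >> tinderbox.failed   # ... and of what didn't
        fi
        sed -i 1d tinderbox.queue             # remove-head-of-list, automated
    done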
