I think I’ll keep away from Python still

Last night I ended up in Bizarro World, hacking at Jürgen’s gmaillabelpurge (which he actually wrote on my request, thanks once more Jürgen!). Why? Well, the first reason was that I found out that it hasn’t been running for the past two and a half months, because, for whatever reason, the default Python interpreter on the system where it was running was changed from 2.7 to 3.2.

So I first tried to get it to work with Python 3 while keeping it working with Python 2 at the same time; some of the syntax changes ever so slightly and was easy to fix, but the 2to3 script that Python comes with is completely bogus. Among other things, it adds parentheses to all the print calls… which would be correct if it checked whether said parentheses were already there. In a script like the aforementioned one, the noise in the output is so high that there is really no signal worth reading.

You might be asking how come I didn’t notice this before. The answer is that I’m an idiot! I found out only yesterday that my firewall configuration was such that postfix was not reachable from the containers within Excelsior, which meant I never got the fcron notifications that the job was failing.

While I wasn’t able to fix the Python 3 compatibility, I was at least able to understand the code a little by reading it, and after remembering something about the IMAP4 specs I read a long time ago, I was able to optimize its execution quite a bit, more than halving the runtime on big folders (like most of the ones I have here) by using batch operations and by peeking at, instead of “seeing”, the headers. In the end I spent some three hours on the script, give or take.
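
For reference, the gist of the optimization shows up at the IMAP protocol level: instead of issuing one FETCH per message (which, for body sections, also sets the \Seen flag), you hand the server a whole message set in a single UID FETCH and ask for BODY.PEEK, which leaves the flags alone. Something along these lines, with a made-up message set and header list:

a1 UID FETCH 1:500 (BODY.PEEK[HEADER.FIELDS (DATE)])

The server then streams back one untagged FETCH response per message, so the whole folder is covered in a single round-trip instead of hundreds.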

But at the same time, I ended up having to work around limitations in Python’s imaplib (which is still nice to have by default), such as reporting fetched data as an array where each odd entry is a pair of strings (tag and unparsed headers) and each even entry is a string with a closing parenthesis (coming from the tag). Since I wasn’t able to sleep, at 3.30am I started re-writing the script in Perl (which at this point I know much better than I’ll ever know Python, even though I’m a newbie in it); by 5am I had all the features of the original one, and I was supporting non-English locales for GMail — remember my old complaint about natural language interfaces? Well, it turns out that the solution is to use the Special-Use Extension for IMAP folders; I don’t remember that explanation page being around when we first worked on the script.
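
Just to give an idea of what the bulk, peek-based processing looks like on the Perl side, here is a minimal sketch with Mail::IMAPClient; this is not the actual code of the rewrite, and the account details and label name are obviously placeholders:

use strict;
use warnings;
use Mail::IMAPClient;

my $imap = Mail::IMAPClient->new(
  Server   => 'imap.gmail.com',
  User     => 'someone@gmail.com',
  Password => 'secret',
  Ssl      => 1,
  Uid      => 1,
  Peek     => 1,   # fetch headers and bodies with BODY.PEEK[] so \Seen is never set
) or die "connection failed: $@";

$imap->select('SomeLabel')
  or die 'cannot open folder: ' . $imap->LastError;

# One round-trip for the whole folder instead of one FETCH per message.
my $dates = $imap->fetch_hash('INTERNALDATE')
  or die $imap->LastError;

for my $uid (keys %$dates) {
  my $date = $dates->{$uid}{INTERNALDATE};
  # compare $date against the cut-off here and collect $uid
  # for a batched store/expunge...
}

$imap->logout;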

But this entry is about Python and not the script per se (you can find the Perl version on my fork if you want). I have said before that I dislike Python, and my feeling is still unchanged at this point. It is true that the script in Python required no extra dependencies, as the standard library already covered all the bases… but at the same time, that’s about it: the basics are all it has; for anything more complex you still need new modules. Perl modules are generally easier to find, easier to install, and less error-prone — don’t try to argue this; I’ve got a tinderbox that reports Python test errors more often than even Ruby’s (which are lots), and most of the time for the same reasons, such as the damn unicode errors “because LC_ALL=C is not supported”.

I also still hate the fact that Python forces me to indent code to create blocks. Yes, I agree that indented code is much better than non-indented code, but why on earth should the indentation mandate the blocks rather than the other way around? What I usually do in Emacs when I’m moving stuff in and out of loops (which is what I had to do a lot in this script, as I was replacing per-message operations with bulk operations) is basically to put the curly brackets where they belong, select the region, and C-M-\ (indent-region) it — which means the code is re-indented following my brackets’ placement. If I see an indent I don’t expect, it means I made a mistake with the blocks, and I’m quick to fix it.

With Python, I end up having to manage the whitespace to make it behave the way I want, and it’s quite a bit more bothersome, even with the C-c < and C-c > shortcuts in Emacs. I find the whole thing obnoxious. The other problem is that, while Python does provide basic access to a lot more functionality than Perl, its documentation is… spotty at best. In the case of imaplib, for instance, the only real way to know what it’s going to give you is to print the returned value and check it against the RFC — and it does not seem to have a half-decent way to return the UIDs without having to parse them yourself. This is simply… wrong.

The obvious question from people in the know would be “why did you not write it in Ruby?” — well… recently I’ve started second-guessing my choice of Ruby, at least for simple one-off scripts. For instance, the deptree2dot tool that I wrote for OpenRC – available here – was originally written as a Ruby script… then I converted it to a Perl script half the size and twice the speed. Part of it, I’m sure, is just a matter of age (Perl has been optimized over a much longer time than Ruby), and part of it is due to them being different tools for different targets: Ruby is nowadays mostly a language for long-running software (because of webapps and so on), and it’s much more object-oriented, while Perl is streamlined, top-down execution style…

I do expect to find the time to convert my scan2pdf script to Perl as well (funnily enough, gscan2pdf, which inspired it, is written in Perl), although I have no idea when… in the meantime, though, I doubt I’ll be writing many more Ruby scripts for this kind of processing.

Munin and IPv6

Okay, here comes another post about Munin for those of you using this awesome monitoring solution (okay, I think I’ve been involved in upstream development more than I expected when Jeremy pointed me at it). While the main topic of this post is going to be IPv6 support, I’d first like to spend a few words giving some context on what’s going on.

Munin in Gentoo has been slightly patched in the 2.0 series — most of the patches were sent upstream the moment they were introduced, and most of them have been merged for the following release. Some of them, though, were not merged into the 2.0 branch upstream: among them, the one bringing in my FreeIPMI plugin (or at least its first version) to replace the OpenIPMI plugins, and those dealing with changes that wouldn’t have been kosher for other distributions (namely, Debian) at this point.

But now Steve opened a new branch for 2.0, which means that the development branch (Munin does not use the master branch, for the simple logistic reason of having a master/ directory in Git, I suppose) is directed toward the 2.1 series instead. This means not only that I can finally push some of my recent plugin rewrites, but also that I could make some deeper changes, including rewriting the seven Asterisk plugins into a single one, and working hard on the HTTP-based plugins (for web servers and web services) so that they use a shared backend, like the SNMP ones do. This actually completely solves an issue that, in Gentoo, we had only partially solved before: my ModSecurity ruleset blacklists the default libwww-perl user agent, and with the partial fix Munin at least advertises itself in the request; with the new code it also includes the name of the plugin that is currently making the request, so it’s possible to know which request belongs to what.
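
To make it concrete (the exact agent string upstream ends up with may well differ, so take this just as the shape of the idea), the shared backend boils down to creating the LWP::UserAgent with an explicit agent that names both Munin and the calling plugin:

use LWP::UserAgent;

my $plugin = 'apache_processes';   # in the real code this comes from the plugin itself

my $ua = LWP::UserAgent->new(
  agent   => "munin/2.1 ($plugin)",   # instead of the default libwww-perl/#.## string
  timeout => 30,
);

my $response = $ua->get('http://localhost/server-status?auto');
die $response->status_line unless $response->is_success;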

Speaking of Asterisk, by the way, I have to thank Sysadminman for lending me a test server to work on said plugins — this not only got us the current new Asterisk plugin (7-in-1!) but also let me modify the original seven plugins just a tad, so that instead of using Net::Telnet they just use IO::Socket::INET. This has been merged for 2.0, which in turn means that the next ebuild will have one less dependency, and one less USE flag — the asterisk flag on said ebuild only added the Net::Telnet dependency.
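
For the record, dropping Net::Telnet is no big loss here, because the Asterisk Manager Interface is just a line-based TCP protocol; something along these lines is all a plugin needs (host, credentials and the action are placeholders, and the parsing is deliberately naive):

use IO::Socket::INET;

my $sock = IO::Socket::INET->new(
  PeerAddr => 'localhost',
  PeerPort => 5038,          # default manager port
  Proto    => 'tcp',
  Timeout  => 10,
) or die "cannot connect to the manager interface: $!";

# AMI speaks in CRLF-terminated "key: value" blocks ended by an empty line.
print $sock "Action: Login\r\nUsername: munin\r\nSecret: hunter2\r\nEvents: off\r\n\r\n";
print $sock "Action: CoreStatus\r\n\r\n";

while (my $line = <$sock>) {
  $line =~ s/\r?\n\z//;
  print "$line\n";
  last if $line =~ /^CoreCurrentCalls:/;   # the figure a plugin would actually graph
}

print $sock "Action: Logoff\r\n\r\n";
close $sock;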

To the main topic — how did I get to IPv6 in Munin? Well, I was looking at which other plugins need to be converted to “modernity” – which to me means re-using as much code as possible, collapsing multiple plugins into one through multigraph, and supporting virtual nodes – and I found the squid plugins. This was interesting to me because I actually have one squid instance running, on the tinderbox host, to avoid direct connections to the network from the tinderboxes themselves. These plugins do not use libwww-perl like the other HTTP plugins, I suppose (but I can’t be sure, for what I’m going to explain in a moment) because the cache_object:// requests that have to be made might or might not work with said library. Since, as I said, I have a squid instance, and these (multiple) plugins look exactly like the kind of target I was looking to rewrite, I started looking into them.

But once I started, I had a nasty surprise: my squid instance only replies over IPv6, and that’s intended (the tinderboxes are only assigned IPv6 addresses, which makes it easier for me to access them, and they have no NAT to the outside, as I want to make sure that all network access is filtered through said proxy). Unfortunately, by default, libwww-perl does not support IPv6 at all. And indeed, neither do most of the other plugins, including the Asterisk one I just rewrote, since they use IO::Socket::INET (instead of IO::Socket::INET6). A quick search around, and this article turned up — and then this also turned up, which relates to IPv6 support in Perl core itself.
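
The annoying part is that the fix for a single connection is trivial (IO::Socket::INET6 keeps the very same constructor interface); it just has to be done everywhere. A quick check against my proxy looks like this, with host and port as placeholders for the v6-only squid:

use IO::Socket::INET6;

my $sock = IO::Socket::INET6->new(
  PeerAddr => 'proxy.example.com',   # only has an AAAA record
  PeerPort => 3128,
  Proto    => 'tcp',
) or die "connect failed: $!";

print $sock "GET http://www.gentoo.org/ HTTP/1.0\r\n\r\n";
print scalar <$sock>;   # the status line; the same code with IO::Socket::INET simply fails to connect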

Unfortunately, even with the core itself supporting IPv6, libwww-perl seems to be of a different mind, and that is a showstopper for me, I’m afraid. At the very least, I need to find a way to get libwww-perl to play nicely if I want to use it over IPv6 (yes, I’m going to work around this for the moment and just write the new squid plugins against IPv4). On the other hand, using IO::Socket::IP would probably solve the issue for the remaining parts of the node, and that would at least give us some better support. Even better, it might be possible to abstract it and have a Munin::Plugin::Socket that falls back to whatever we need. As it is, right now it’s a big question mark what we can do there.
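
To be clear, Munin::Plugin::Socket does not exist; it’s just me thinking out loud, but the fall-back I have in mind would be little more than this:

package Munin::Plugin::Socket;   # hypothetical module, nothing like this is in the tree yet
use strict;
use warnings;

# Return a connected TCP socket, preferring IO::Socket::IP, then INET6, then INET.
sub connect_to {
  my ($host, $port) = @_;

  for my $class (qw(IO::Socket::IP IO::Socket::INET6 IO::Socket::INET)) {
    next unless eval "require $class; 1";
    my $sock = $class->new(
      PeerAddr => $host,
      PeerPort => $port,
      Proto    => 'tcp',
    );
    return $sock if $sock;
  }

  return;   # nothing available could connect
}

1;

A plugin would then just call Munin::Plugin::Socket::connect_to($host, $port) and stop caring about which of the three modules is actually installed.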

So what can be said about the current status of IPv6 support in Munin? Well, the node uses Net::Server, which in turn does not use IO::Socket::IP, but rather IO::Socket::INET, or INET6 if installed — which basically means that the node itself will support IPv6 as long as INET6 is installed, and would call for using INET6 ourselves as well, instead of IO::Socket::IP — but the latter is the future and, for most people, will be part of the system anyway… The async support, in 2.0, will always use IPv4 to connect to the local node. This is not much of a problem, as Steve is working on merging the node and the async daemon into a single entity, which makes the most sense. Basically it means that in 2.1 all nodes will be spooled, instead of what we have right now.

The master, of course, also supports IPv6 — via IO::Socket::INET6 – yet another nail in the coffin of IO::Socket::IP? Maybe. – and this covers all the communication between the two main components of Munin, which could be enough to declare it fully IPv6 compatible — and that’s what 2.0 claims. But alas, this is not the case yet. On an interesting note, the fact that right now Munin supports arbitrary commands as transports, as long as they provide an I/O interface to the socket, makes its own IPv6 support quite moot: not only do you just need an IPv6-capable SSH to handle it, you could probably use SCTP instead of TCP simply by using a hacked-up netcat! I’m not sure whether monitoring would get any improvement out of SCTP, although I guess it might overcome some of the overhead related to establishing connections, but… well, that’s a different story.

Of course, Munin’s own framework is only half of what has to support IPv6 for it to be properly supported; the heart of Munin is the plugins, which means that if they don’t support IPv6, we’re dead in the water. Perl plugins, as noted above, have quite a few issues with finding the right combination of modules for IPv6 support. Bash plugins, and indeed plugins in any other language, support IPv6 only as well as the underlying tools do — indeed, even though libwww-perl does not work with IPv6, plugins written around wget would work out of the box with an IPv6-capable wget… but of course, the gains we get from using Perl are big enough that you don’t want to go that route.

All in all, I think what’s going to happen is that as soon as I’m done with the weekend’s work (which is quite a bit, since Friday was filled with a couple of server failures, and with me finding out that one of my backups was not working as intended) I’ll prepare a branch and see how much of IO::Socket::IP we can leverage, and whether wrapping around it would help us with the new plugins. So we’ll see where this leads us; maybe 2.1 will really be 100% IPv6 compatible…

Working outside the bubble

One of the most common mistakes a developer can make is to never look at, or work in, anything outside their usual environment. By never looking outside their own little sandbox, they risk losing sight of the improvements that happen outside their world, improvements they could easily factor in. This is why I always look at what Fedora, Debian and Ubuntu do, for instance.

Given my involvement with Ruby packaging, one of the things I should try to avoid is wrapping myself up in Ruby alone; unfortunately, looking at Python packaging is not something I’d be keen on doing anytime soon, given the state of python.eclass (please guys, rewrite it, sooner rather than later!). But at least tonight I spent some time looking at Perl modules’ ebuilds.

The main reason why I went to look at those is that a user (Grant) asked me to add Google’s AdWords library for Perl to the main tree, but Perl is something I had wanted to look at for a while anyway, since I want to set up RT for my customers and the new versions require a few more dependencies.

At any rate, looking at Perl ebuilds is not too unnatural for me: while the fact that there is a single Perl implementation makes it much easier for them to implement the phases, the rest of the setup is not too different from what we have in Ruby land.

What seems to be more common there is that they also set HOMEPAGE by default if none is given, since it seems like a lot of modules only have a homepage as part of CPAN; this differs from the Ruby world only because most Ruby projects are managed on GitHub, which then becomes their default homepage.

Having finally taken a good look at the g-cpan tool, I have to say that I think all the attempts at a g-gem tool or similar are quite off target: instead of creating a tool that creates and installs ebuilds for the pre-packaged extensions, we should have coordinated with RubyGems upstream to provide a few more details — such as a way to state explicitly the license the extension is released under, which CPAN does and RubyGems doesn’t (you can also see Mandriva struggling with the same issue in the final questions of my FOSDEM talk) — and at that point we could have just created a tool that prepares a skeleton for our own ebuilds, rather than something fully automated like what has been tried before.

Anyway, I really like the idea of trying to package something you’re not used to packaging: it makes it easy to spot what is different and what is similar in the approaches.

Help request: extending Finance::Quote

It’s not very common for me to explicitly ask for help with writing new software, but since this is something I have no experience with, in a language I don’t know, and it’s not mission-critical for any of my jobs, I don’t really feel like working on it myself.

Since right now I not only have a freelancing, registered job, but also have to take care of most, if not all, of the house expenses, I’ve started keeping my money in check through GnuCash, as I said before. This makes it much easier to see how much (actually, how little) money I make and how much I can save away or spend on enjoying myself from time to time (to avoid burning out).

Now, there is one thing that bothers me: to save away the money that I owe the government in taxes (both the VAT I have to pay, and other taxes) I subscribed to a security fund, paying into it regularly (if I have the money available, of course!); unfortunately, I need to explicitly go look up the data on my bank’s website to know exactly how much money I have stashed in there at any given time.

GnuCash obviously has a way to solve this problem: the Finance::Quote Perl module, which fetches the data from a longish series of websites, mostly through scraping. Let’s not even start talking about the chances that the websites changed their structure in the months since the 1.17 release of the module (hint: at least one has, since I tried it manually and it only gets a 404 error); but even Yahoo, which at least accepts the ISIN of the fund, does not give me any data for the current value of the share.

Now, the fund is managed by Pioneer Investments, and they do provide the data, via a very simple, ISIN-based URL! Unfortunately, they provide it only… in PDF. This does not seem too bad: the data is available in text form, since pdftotext extracts it properly, and it’s clearly marked by the fixed string on the previous line; on the other hand, I have no idea how one would go about scraping a PDF, especially in Perl, and even worse from within Finance::Quote!

If somebody feels like helping me out, the URL for the PDF file with the data is the following, and the grep command will tell you what to look for in the PDF’s text. If you can help me out with this I’ll be very glad. Thanks!

# wget 'http://www.pioneerinvestments.it/it/webservice/pdfDispatcher.jhtml?doccode=ilpunto&from=02008FON&isin=IT0000388204'
# pdftotext pioneer_monetario_euro_a.pdf* - | grep 'Valore quota' -A 2
Valore quota

13,158
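
In case it helps whoever wants to pick this up: the plumbing itself is the part I can more or less figure out (download the PDF, run it through pdftotext, pick up the number after the marker); what I can’t do is fit it properly into Finance::Quote’s module structure. A rough sketch of said plumbing, where only the URL above is real and the rest is guesswork on my part:

use strict;
use warnings;
use LWP::UserAgent;
use File::Temp qw(tempfile);

my $isin = 'IT0000388204';
my $url  = 'http://www.pioneerinvestments.it/it/webservice/pdfDispatcher.jhtml'
         . "?doccode=ilpunto&from=02008FON&isin=$isin";

my $ua = LWP::UserAgent->new(timeout => 30);
my (undef, $pdf) = tempfile(SUFFIX => '.pdf', UNLINK => 1);

my $res = $ua->get($url, ':content_file' => $pdf);
die 'download failed: ' . $res->status_line unless $res->is_success;

# pdftotext writes the extracted text to stdout when the output file is '-'.
open my $text, '-|', 'pdftotext', $pdf, '-'
  or die "cannot run pdftotext: $!";

my ($quote, $seen_marker);
while (my $line = <$text>) {
  $seen_marker = 1 if $line =~ /Valore quota/;
  if ($seen_marker and $line =~ /(\d+,\d+)/) {
    $quote = $1;
    last;
  }
}
close $text;

die "share value not found\n" unless defined $quote;
$quote =~ tr/,/./;   # Italian decimal comma
print "NAV for $isin: $quote EUR\n";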

Links checking

I started writing a blog just to keep users updated on the development of Gentoo/FreeBSD and the other projects I worked on; it was never my intention to make it my biggest project, but one thing leads to another, and I’m afraid to say that lately my biggest contribution to free software is this very blog. I’m not proud of this, it really shouldn’t be this way, but lacking time (and having a job that has me working on proprietary rather than free software), this is the best I can come up with.

But I still think that a contribution is only worth as much as it is properly done, and for this reason it bothers me that I cannot go over all the current posts and make sure there aren’t any factual mistakes in them. Usually, if I know I got something wrong for any reason, and I want to explain the mistake and fix it a long time after publication, I just write a new “correction” entry and link to the older post; originally this worked out nicely because Typo would handle the internal trackback, so the two posts would be automatically linked to each other; unfortunately, trackbacks don’t seem to work any more, even though I did enable them when I started the User-Agent filtering (so that the spam could be reduced to a manageable amount).

In addition, there are quite a few posts that are for now only available on the older blog, which bothers me quite a bit, since that blog is actually full of spam, gets my name wrong, and forces users to search two places for the first topics I wrote about. Unfortunately, migrating the posts out of the b2evolution install is quite cumbersome, and I guess I should try to bribe Steve again about that.

Update (2016-04-29): I actually imported the old blog in 2012. I also started merging every other post I wrote anywhere else in the mean time.

Anyway, besides the factual errors in the content, there are a few other things that I can and should deal with on the blog, and one of them is the validity of the external and internal links. Now, I know this is the sort of stuff that falls into the so-called “Search Engine Optimisation” field, and I don’t care. I dislike the whole idea, and I find that calling it “SEO” is just a way for script kiddies to feel important, like a “CEO”; I don’t do this for the search engines, I do this for the users; I don’t like finding a broken link on a site, so I’d like my own sites not to have broken links.

Google Webmaster Tools is a very interesting tool in this regard, since it allows you to find broken inbound links; I already commented about OSGalaxy breaking my links (and in the meantime I don’t get published there any longer, because they don’t handle Atom feeds); for that and other sites, I keep a whole table of redirections for the blog’s URLs, as well as a series of normalisations for URLs that often have trailing garbage characters (like semicolons and other things).

Unfortunately, what GWT lacks is a way to check outbound links, at least as far as I can see; I guess it would be a very useful tool for that, because Google has to index the content anyway, so adding checks for that stuff shouldn’t be much of a problem for them. The nicest thing would be for Typo (the software handling my blog) to check the links before publishing, and alert me about errors (an as-you-type check would help, but it would require a proxy to cache requests for at least a few hours, otherwise I would be hitting the same servers many times while writing). Since that does not seem to be happening for now, and I don’t foresee it happening in the near future, I’m trying to find an alternative approach.

At the time I’m writing this (which is not when you’re going to read it), I’m running a local copy of the W3C LinkChecker (I should package it for Gentoo, but I don’t have much experience with Perl packaging) over my blog; I already ran it over my site and xine’s, and fixed a few of the entries that the link checker spewed out.

Again, this is not the final solution I need; the problem is that it does not allow me to run an actual incremental scan. While I’m currently caching all the pages through polipo, this is not going to work in the long run, just for today’s spree. There are quite a few problems with the current setup, though:

  • it does not allow removing the 1-second delay on requests, not even for localhost (when I’m testing my own static site locally I don’t need any delay at all, I can actually pipeline lots of requests together);
  • it does not have a way to provide a “whitelist of unreachable URLs” (like my Amazon wishlist, which does not respond to HEAD requests);
  • while the output is quite suitable to be sent via email (so I can check each day for new entries), I would have preferred it to output XML, with a provided XSL to convert it to something user-friendly; that would have allowed me to handle the URL filtering in a more semi-automatic way;
  • finally, it does not support IDN, and I like IDN, which makes me a bit sad.

For now, what I gathered from the checker’s output is that my use of Textile for linking causes most of the bad links in the blog (because it keeps the semicolons, closing parentheses and so on as part of the link), and I dislike the workaround of adding spaces (no, the “invisible space” is not a solution, since the parser doesn’t understand that it is whitespace, and adds it to the link as well). And there are lots of broken links because, after the Typo update, the amazon: links don’t work any longer. This actually gives me a bit of a chance: they used to be referral links (even though they never made any difference), and now, after the change of styles, I don’t need those any longer, so I’ll just replace them database-wide with the direct links.
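
The clean-up itself is nothing sophisticated, by the way; on the server side it’s just a table of redirections, but the idea boils down to stripping the trailing junk before looking the path up, something like this (the exact list of characters is my own guess):

# Strip the trailing punctuation that Textile (or whoever quoted the link)
# glued onto the URL before matching it against the real paths.
sub normalise_url {
  my ($url) = @_;
  $url =~ s/[;,.)\]'"]+\z//;
  return $url;
}

print normalise_url('http://example.com/some-post);'), "\n";
# prints: http://example.com/some-post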

Moving on with the nXhtml packaging

So the author of nXhtml is working toward a 1.04 release, and I decided to start working on an ebuild with the betas already.

This version changes a few things. The most important one is that the support scripts are now released under the GPL-2 (or later) license. This solves the main packaging issue I wrote about before when I started packaging nXhtml. Another change is that there are two more Emacs modes included in the package; actually, one more mode and a dependency of it: wikipedia-mode and outline-magic. I wrote ebuilds for both, and they are now in my overlay (I will see about talking to Ulrich to move them to the emacs overlay).

I will also have to hack a bit on the perl html-wtoc script, as it now ships with CSS and templates that I need to relocate; it shouldn’t be much of a problem.

The main issue in all this is that the ebuild has to do a lot of manual fiddling, which makes it difficult to bump. I’d really have to find a way to sync with upstream so that the installation is easier. The author doesn’t seem to answer mails, but he probably received mine before, as he fixed the license problem, so I tried contacting him again to see if it’s possible to move the contributed code into a different directory; that way it would be easier to check whether stuff was added, and I wouldn’t have to remove it manually every time.

New results from my elven script

My work on ruby-elf is continuing, albeit slowly, as I’m currently preparing more envelopes. I’ve improved the nm.rb tool to be a bit more like the real nm utility from binutils, by properly taking Absolute and Common symbols into consideration, and today, while working on the conflicts finder, I also fixed the symbols’ versions parser. Now the parser and the tool should be pretty solid; too bad that by fixing the parser to actually look into the libraries recursively I ended up making it quite a bit slower (well, it has a lot more libraries to look up now), and it now eats more than 70MB for the database alone.

Now, let me first talk about symbol versions in ELF files. This is a GNU extension which, obviously, is not documented in elf(3), as that man page comes out of BSD. There’s little to no documentation available about it; the only thing that is somewhat reliable is a page from Ulrich Drepper on his home site at Red Hat, which of course is the usual hyped version of the truth, in the usual Drepper style everybody is used to. The elf.h header does not contain much information beside the data structures with a few comments, which are not good enough to actually implement anything without a lot of analysis of the data itself, plus some fill-ups from Drepper’s hyped page. One of the entries, commented as «Unused» in the header, and also in the structure definition on Drepper’s page, carries the most important data of the structure: the index used to retrieve the version information.

Three tables are used, plus .dynstr, which carries the symbols’ names and thus also the versions’ names (they are declared as absolute symbols with value 0); as a symbol can have multiple names if it obsoleted other symbols, the records are of variable length rather than fixed length, which makes them harder to parse. The versions for defined (exported) and for undefined (imported) symbols are declared in two different structures, subtly similar enough to confuse you, and then the third table tells you which version every symbol refers to. As the «auxiliary» structures for both defined and undefined symbols are not part of the version definition, but pointed to through an offset, and only carry the name of a symbol, they can be shared. Now don’t ask me why there should be two different version specifications pointing at the same name (the only reason I’d see would be if defined and undefined symbols had the same version name, but the auxiliary structures are not the same between the two either, and are defined in two different sections, so those can’t be shared); the only case I found up to now is Berkeley DB, which uses --default-symver.
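
By the way, if you just want to eyeball those three tables without writing a parser, readelf already dumps them; a trivial wrapper like the following (the library path is only an example) filters out the interesting lines of .gnu.version, .gnu.version_d and .gnu.version_r:

use strict;
use warnings;

# Any versioned shared object will do; the path is just an example.
my $lib = shift @ARGV || '/usr/lib/libc.so.6';

# readelf -V (a.k.a. --version-info) dumps .gnu.version, .gnu.version_d
# and .gnu.version_r, which are the three tables described above.
open my $readelf, '-|', 'readelf', '-V', '--wide', $lib
  or die "cannot run readelf: $!";

while (my $line = <$readelf>) {
  print $line
    if $line =~ /^Version (symbols|definition|needs) section/
    or $line =~ /Name:/;
}
close $readelf;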

After being able to implement an ELF parser that can read symbols’ versions without looking at the glibc source code – because I don’t want to hurt my eyes looking at it, and also because, if I need to, I can implement it again without being restricted by the GPL-2 license – I have to say that I don’t like the format, and I hope never to have to rely on it in my life! It seems to me that it would have been possible to spend a few more bytes and make the load process faster by using fixed-length records; there are also flags variables, pretty much unused, that could have been used to mark defined versus undefined symbols if that mattered so much, but the fact that you find the version through the symbols, rather than the other way around, makes it pretty much pointless: you already know whether a symbol is defined or not!

Anyway, after this somewhat off-topic digression, let me show some of the results coming from the script itself:

  • bugs #178919, #178920, #178921 and #178923: there are three ebuilds (one being Python and two being Python extensions) that bring in their own copy of expat 1.95.8, which is incompatible with the expat 2.0 that has been in ~arch for a while now; Compress-Raw-Zlib instead carries its own copy of zlib, which is a waste, as virtually any system at any given time already has a copy of zlib in memory.
  • I found a few more common libraries in kdepim-kresources, which cut three more megabytes from the total size of the package as installed on my system; note that the size includes full debug information in DWARF format (-ggdb), but no debug code; the memory reduction should be proportionally similar, though of course not the same amount; still, it’s a sensible gain.
  • I also prepared a patch for kmix to install two more libraries, libkmixprivate and libkmixuiprivate, as kmix, kmixctrl and kmix_applet were sharing half of their code to begin with; with my patch they share it effectively on disk, and during build the files are compiled only once.
  • Samba would probably make good use of a libsambainternal library of its own, as a lot of symbols seem to be shared between the different tools in the Samba package. Note that internal libraries are a useful way of sharing code, as you can put in them the functions used by everything, and just declare them private, making sure that users of the project won’t try to use them, or at least will know that the ABI can change from one release to the next.

With the more widespread reach of the script, I also had to extend the suppressions file quite a lot, as there is a lot of software using plugin-based interfaces nowadays.
One problem I’m facing now, though, is that there are a lot of drop-in replacement libraries on a common system, for instance libncurses and libncursesw, or OpenSSL and GnuTLS’s replacement for it, and so on. I need to write a suppression file for those too, so that the symbols common between those libraries are not reported as actual collisions, but skipped over.

One thing I’m not sure about is the yy* symbols that come out of flex/yacc/whatever it is. There are a lot of libraries that export them, and I’m not sure if it’s correct for them to do so, as the different flex versions might be incompatible. Unfortunately I’ve never used flex in my life, so I can’t tell.

If somebody knows how those functions are supposed to work with shared libraries, I’ll be very grateful to hear it; and if they are a problem, I’ll see about reporting and fixing all of them.

bash scripting tiny details

Although I’ve now been an ebuild developer for almost two years, and I contributed for at least another year through Bugzilla before that, I never considered myself a bash expert; the constructs I use are mostly generic ones, a bit more advanced than newbie usage, as often needed in ebuilds. So from time to time, when I learn some new trick that others have known for ages, or I discuss alternatives with other developers, I usually end up posting here, trying to share it with others who might find it useful.

As autoepatch is being written entirely in bash, I end up coping with problems and corner cases that I need to dig into, and thus I’ve ended up learning some more tricks, and thinking about some things for the ebuilds themselves.

The first thing is about sed portability… we already make sure that the “sed” called in ebuild scope is always GNU sed 4, so that the command lines supported are the same everywhere; but there is a portable alternative, which also seems to be a faster one: perl. The command “perl -p -i -e” is a work-alike replacement for “sed -i -e”, and as far as I can see it is also faster than sed… I wonder if, considering we already have perl in the base system, it would be viable to use it as an alternative to sed throughout the Portage tree.
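
For the unconvinced, the two forms really are drop-in replacements for the common substitution case; in-place editing is also exactly where non-GNU seds diverge (classic BSD sed wants -i '' instead), which is the portability itch perl scratches:

# GNU sed, as currently used all over the tree:
sed -i -e 's/foo/bar/g' somefile.txt

# The perl work-alike; same semantics, no reliance on GNU extensions:
perl -p -i -e 's/foo/bar/g' somefile.txt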

For find(1) we instead rely on a portable subset of commands, so that we don’t ask Gentoo/*BSD users to install GNU findutils (which also often breaks on BSD); one of the most used features of find in ebuilds is -print0, to then pipe through xargs and run some process on a list of files. Timothy (drizzt) already suggested some time ago to use -exec cmd {} + instead, as that merges the xargs behaviour into find itself, avoiding one process spawn and a pipe. Unfortunately, this feature, specified in SUSv3, is present in FreeBSD, DragonFlyBSD and NetBSD, but not in OpenBSD… for autoepatch (where I’m going to use that feature pretty often, as it all comes down to find to, well, find the targets) I decided that the find command used has to support that feature, so to run on OpenBSD it will have to depend on GNU findutils (until they implement it). I wonder if this could be required of Portage as a whole, so we could then replace the many xargs calls in ebuilds…

I should ask reb about this latter thing, but he has, uh, disappeared :/ It seems like Gentoo/OpenBSD is one of those projects where the people involved end up disappearing or screaming like crazy (kind of reminds me of PAM).

Talking about the MacBook Pro: today I prepared an ebuild for mactel-sources in my overlay, which I’m going to commit now; it takes gentoo-sources and applies the mactel patches on top, which is easier for me to handle in the long run. This way, the Synaptics driver for the touchpad actually worked fine. Unfortunately, KSynaptics made the touchpad go crazy, so I suggest everybody NOT try it, as it is now.