Mail, SSL and Postfix

In my previous post, I delineated a few reasons why I care about SSL for my blog and the xine bugzilla. What I did not talk about was the email infrastructure for both. The reason is that, according to the very same threat model I delineated in that post, it’s not as important for me to secure that part of the service.

That does not mean, though, that I never considered it, just that I didn’t consider it important enough yet. But after that post I realized it’s time to fix that hole, and I’ve now started working on securing the email coming out of my servers as well as that coming through the xine server. But before going into details, let’s see why I was not as eager to secure the mail servers compared to the low-hanging fruit of my blog.

As I said in the previous post, you have to identify two details: what information is important to defend, and who the attackers would be. In the case of the blog, as I said, the information was the email addresses of the commenters, and the attackers the other users of the open, unencrypted wifi networks in use. In the case of email, the attackers in particular change drastically: the only people in a position to get access to the connections’ streams are the people at the hosting and datacenter companies and, if they made mistakes, your neighbours in the same datacenter. So for sure it’s not the very easy prey of the couple sitting at the Starbucks next to you.

The content, well, is a bit more interesting. We all know that there is no real way to make email completely opaque to service providers unless we use end-to-end encryption such as GnuPG, so if you really don’t want your server admin to ever be able to tell what’s in your email, that’s what you should use. But even then, there is something that (minus protocol-level encryption) is transmitted in cleartext: the headers, the so-called metadata that stirred the press so much last year. So once again it’s the addresses of the people you contact that could easily leak, even with everything else being encrypted. In the case of xine, the mail server handles mostly Bugzilla messaging, and it may well send over, without encryption, the comments on security bugs, so reducing the risk of that information leaking is still a good idea.

Caveat emptor in all of this post, though! In the case of the xine mail server, the server handles both inbound and outbound messages, but at the same time it never lets users access their mailbox; the server itself is a mail router, rather than a full mail service. This is important, because otherwise I wouldn’t be able to justify my sloppiness on covering SSL support for the mail! If your server hosts mailboxes or allows direct mail submission (relay), you most definitely need to support SSL, as then it’s a client-server connection, attackable by the Starbucks example above.

So what needs to be done to implement this? Well, first you need to remember that a mail router like the one I described above requires SSL in two directions: when it receives a message it should be able to offer SSL to the connecting client, and when it sends a message it has to request SSL from the remote server too. In a perfect set up, the client also offers a certificate to prove who it is. This means that you need a certificate that works both as a server and as a client certificate; thankfully, StartSSL supports that for Class 2 certificates: even though they are named for web servers, they work just fine for mail servers too.

Unfortunately, the same caveats that apply to HTTPS certificates apply to mail servers: cipher and protocol version combinations. And while Qualys has SSL Labs to qualify the SSL support of your website, I know of no similar service for mail routers. Coming up with one is not trivial, as you would want to make sure not to become a spam relay by mistake, and the only way to judge the message pushing of the server is to trick it into sending a message back to your own service, which should not be possible on a properly non-open relay.

So the good news is that all of the xine developers with an alias on the domain have a secure server routing mail to them, so the work I’ve been doing is not for nothing. The other note is that a good chunk of the other users in Bugzilla use GMail or similar big hosting providers. And unlike others, I actually find this a good thing, as it’s much more likely that the lonely admin of a personal mail server (like me for xine) would screw up encryption, compared to my colleagues over at GMail. But I digress.

The bad news is that not only is there no way to assess the quality of a mail server’s configuration, but, at least in the case of Postfix, you only have a ternary setting for TLS: always, if the client requests it (or if the server provides the option, when submitting mail), or not at all. There is no way to set up a policy so that e.g. GMail’s servers don’t get spoofed and tricked into sending the messages over a cleartext connection. A second piece of bad news is that I have not been able to get Postfix to validate the certificates either as server or as client, likely because of the use of opportunistic TLS rather than mandatory TLS. And the last problem is that servers connecting to submit mail will not fall back to cleartext if TLS can’t be negotiated (whether because of ciphers or protocols), and will instead keep trying to negotiate TLS the same way.
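To make the downgrade risk above concrete, here is a minimal Python sketch of a STARTTLS-stripping attack, the kind of trick that opportunistic TLS cannot defend against. The banner and hostnames are invented for illustration:

```python
# Sketch of a STARTTLS-stripping downgrade (illustrative banner, not a real
# server): opportunistic TLS only encrypts if the capability is advertised,
# so a man-in-the-middle can simply delete it from the EHLO response and
# the sending server falls back to cleartext without complaint.

def strip_starttls(ehlo_response):
    """Return the EHLO response with the STARTTLS capability removed."""
    return "\r\n".join(
        line for line in ehlo_response.split("\r\n")
        if "STARTTLS" not in line
    )

ehlo = "250-mx.example.org\r\n250-STARTTLS\r\n250 8BITMIME"
print(strip_starttls(ehlo))  # the client now believes TLS is unavailable
```

With the ternary setting described above, nothing distinguishes a stripped response from a server that genuinely has no TLS support.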

Anyway, my current configuration for this is:

smtpd_tls_cert_file = /etc/ssl/postfix/server.crt
smtpd_tls_key_file = /etc/ssl/postfix/server.key
smtpd_tls_received_header = yes
smtpd_tls_loglevel = 1
smtpd_tls_security_level = may
smtpd_tls_ask_ccert = yes
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3

smtp_tls_cert_file = /etc/ssl/postfix/server.crt
smtp_tls_key_file = /etc/ssl/postfix/server.key
smtp_tls_security_level = may
smtp_tls_loglevel = 1
smtp_tls_protocols = !SSLv2, !SSLv3

If you have any suggestions on how to make this more reliable and secure, please do!

Why HTTPS anyway?

You probably noticed that in the past six months I had at least two bothersome incidents related to my use of HTTPS in the blog. An obvious question at this point would be why on earth would I care about making my blog (and my website) HTTPS only.

Well, first of all, the work I do for the blog usually matches fairly closely the work I do for xine’s Bugzilla so it’s not a real doubling of the effort, and actually allows me to test things out more safely than with some website that actually holds information that has some value. In the case of the Bugzilla, there are email addresses and password hashes (hopefully properly salted, I trust Bugzilla for that, although I would have preferred OAuth 2 to avoid storing those credentials), and possibly security bugs reported with exploit information that should not be sent out in the clear.

My blog has much less than that; the only user is me, and while I do want to keep my password private, there is nothing that stops me from using a self-signed certificate only for the admin interface. And indeed I had that setup for a long while. But then I got the proper certificate and made it optionally available on my blog. Unfortunately that made it terrible to deal with internal and external links to the blog, and the loading of resources; sure there were ways around it but it was still quite a pain.

The other reason for that is simply to cover for people who leave comments. Most people connecting through open networks, such as from Starbucks, will have their traffic easily sniffable as no WPA is in use (and I’ve actually seen “secure” networks using WEP, alas), and I could see how people preferred not posting their email in comments. And back last year I was pushing hard for Flattr (I don’t any more) and I was trying to remove reasons for not using your email when commenting, so HTTPS protection was an interesting point to make.

Nowadays I stopped pushing for Flattr, but I still include gravatar integration and I like having a way to contact the people who comment on my blog especially as they make points that I want to explore more properly, so I feel it’s in my duty to protect their comments as they flow by using HTTPS at the very least.

Heartbleed and xine-project

This post comes in way too late, I know, but there have been extenuating circumstances around my delay in clearing this through. First of all, yes, this blog and every other website I maintain were vulnerable to Heartbleed. Yes, they are now completely fixed: new OpenSSL first, new certificates after. For most of the certificates, though, no revocation was issued, as they are issued through StartSSL, which means that they are free to issue and expensive to revoke. The exception to this has been the certificate used by xine’s Bugzilla, which was revoked, free of charge, by StartSSL (huge thanks to the StartSSL people!).

If you have an account on xine’s Bugzilla, please change your passwords NOW. If somebody knows a way to automatically reset all passwords that were not changed before a given date in Bugzilla, please let me know. Also, if somebody knows whether Bugzilla has decent support for (optional) 2FA, I’d also be interested.

More posts on the topic will follow, this is just an announcement.

DVD access libraries and their status

I’ve noted when I posted suggestions for GSoC that we’ve been working on improving the DVD-related libraries that are currently used by most of the open-source DVD players out there: libdvdread and libdvdnav — together with these, me and the Other Diego have been dusting off libdvdcss as well, which takes care of, well, cracking the CSS protection on DVDs so that you can watch your legally owned DVDs on Linux and other operating systems without going crazy.


Yes, I did take the picture just to remind you all that I do pay for content, so if you find me talking about libdvdcss, it’s not because I’m a piracy apologist or because I’m cheap — whenever I do resort to piracy it’s because it’s nigh impossible to get the content legally, like for what concerns J-Drama.

Anyway, the work we’ve been pouring into these libraries will hopefully soon come to fruition; on my part it’s mostly a build system cleanup task: while the first fork, on mplayer, was trying to replace autotools with a generic, FFmpeg-inspired build system, the results have been abysmal enough that we decided to go back to autotools (I mean, with me on board, are you surprised?), so now they have a modern, non-recursive, autotools-based build system. Diego and J-B have been cleaning the code itself of the conditionals for Windows, and Rafaël has now started cleaning up libdvdnav’s code by itself.

One of the interesting parts of all this is that the symbol table exposed by the libraries does not really match what is exposed by the headers themselves. You can easily see this by using exuberant-ctags – part of dev-util/ctags – to produce the list of exported symbols from a set of header files:

% exuberant-ctags --c-kinds=px -f - /usr/include/dvdread/*.h
DVDClose        /usr/include/dvdread/dvd_reader.h       /^void DVDClose( dvd_reader_t * );$/;"  p
DVDCloseFile    /usr/include/dvdread/dvd_reader.h       /^void DVDCloseFile( dvd_file_t * );$/;"        p
DVDDiscID       /usr/include/dvdread/dvd_reader.h       /^int DVDDiscID( dvd_reader_t *, unsigned char * );$/;" p
DVDFileSeek     /usr/include/dvdread/dvd_reader.h       /^int32_t DVDFileSeek( dvd_file_t *, int32_t );$/;"     p
DVDFileSeekForce        /usr/include/dvdread/dvd_reader.h       /^int DVDFileSeekForce( dvd_file_t *, int offset, int force_size);$/;"  p
DVDFileSize     /usr/include/dvdread/dvd_reader.h       /^ssize_t DVDFileSize( dvd_file_t * );$/;"      p
DVDFileStat     /usr/include/dvdread/dvd_reader.h       /^int DVDFileStat(dvd_reader_t *, int, dvd_read_domain_t, dvd_stat_t *);$/;"    p
DVDISOVolumeInfo        /usr/include/dvdread/dvd_reader.h       /^int DVDISOVolumeInfo( dvd_reader_t *, char *, unsigned int,$/;"       p
DVDOpen /usr/include/dvdread/dvd_reader.h       /^dvd_reader_t *DVDOpen( const char * );$/;"    p
DVDOpenFile     /usr/include/dvdread/dvd_reader.h       /^dvd_file_t *DVDOpenFile( dvd_reader_t *, int, dvd_read_domain_t );$/;"        p
DVDReadBlocks   /usr/include/dvdread/dvd_reader.h       /^ssize_t DVDReadBlocks( dvd_file_t *, int, size_t, unsigned char * );$/;"      p
DVDReadBytes    /usr/include/dvdread/dvd_reader.h       /^ssize_t DVDReadBytes( dvd_file_t *, void *, size_t );$/;"     p
DVDUDFCacheLevel        /usr/include/dvdread/dvd_reader.h       /^int DVDUDFCacheLevel( dvd_reader_t *, int );$/;"      p
DVDUDFVolumeInfo        /usr/include/dvdread/dvd_reader.h       /^int DVDUDFVolumeInfo( dvd_reader_t *, char *, unsigned int,$/;"       p
FreeUDFCache    /usr/include/dvdread/dvd_udf.h  /^void FreeUDFCache(void *cache);$/;"   p

You can then compare this list with the content of the library by using nm:

% nm -D --defined-only /usr/lib/
0000000000004480 T DVDClose
0000000000004920 T DVDCloseFile
0000000000005180 T DVDDiscID
0000000000004e60 T DVDFileSeek
0000000000004ec0 T DVDFileSeekForce
0000000000005120 T DVDFileSize
00000000000049c0 T DVDFileStat
000000000021e878 B dvdinput_close
000000000021e888 B dvdinput_error
000000000021e898 B dvdinput_open
000000000021e890 B dvdinput_read
000000000021e880 B dvdinput_seek
000000000000f960 T dvdinput_setup
000000000021e870 B dvdinput_title
0000000000005340 T DVDISOVolumeInfo
0000000000003fa0 T DVDOpen
0000000000004520 T DVDOpenFile
0000000000004d80 T DVDReadBlocks
0000000000004fa0 T DVDReadBytes

But without going into further details, I can tell you that there are two functions that should be exported but are not, while the dvdinput_ series, which shouldn’t have been exposed, is. So there are a few things to fix there for sure.
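The comparison between the two listings can be done mechanically with a set difference; this small Python sketch shows the mechanics on a sample of the symbols above (not the complete tables, so it is an illustration of the method rather than a verdict on any particular symbol):

```python
# Set difference between declared and exported symbols finds both kinds of
# mismatch described above.  These are small samples taken from the
# listings, not the full ctags and nm output.
declared = {"DVDClose", "DVDOpenFile", "FreeUDFCache"}    # from the headers (ctags)
exported = {"DVDClose", "DVDOpenFile", "dvdinput_setup"}  # from the library (nm)

print(sorted(declared - exported))  # declared but not exported
print(sorted(exported - declared))  # exported but not declared
```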

As I said before, my personal preference would be to merge libdvdread and libdvdnav again (they were split a long time ago as some people didn’t need/want the menu support) — if it wasn’t for obvious legal issues I would merge libdvdcss as well, but that’s a different story. I just need to find the motivation to go look at the reverse dependencies of these two libraries, and see if the interface exposed between the two is ever used; it might be possible to reduce their surface as well.

Yes, this would be a relatively big change for a relatively small gain; on the other hand, it might be worth getting this as a new, side-by-side installable library that can be used preferentially, falling back to the old ones when not present. And given the staleness of the code, I wouldn’t really mind having to go through testing from scratch at this point.

Anyway, at least the build system of the three libraries will soon look similar enough that they seem to be part of the same project, instead of each going its own way — among other things the ebuilds for the three should look almost entirely identical, in my opinion, so that should be a good start.

If you want to contribute, given that the only mailing list we have on videolan is for libdvdcss, you can push your branches to Gitorious thanks to the VideoLAN mirror, and from there just contact somebody in #videolan on Freenode to get them reviewed/merged.

Update (2017-04-22): as you may know, Gitorious was acquired by GitLab in 2015, which then shut the service down. So no more VideoLAN mirror, or anything else actually.

It’s that time of the year again…

Which time of the year? The time when Google announces the new Summer of Code!

Okay, so you know I’m not always very positive about the outcome of Summer of Code work, even though I’m extremely grateful to Constanze (and Mike, who got it in tree now!) for the work on filesystem-based capabilities — I’m pretty sure at this point that it has also been instrumental for the Hardened team to get their xattr-based PaX marking (I’m tempted to re-consider Hardened for my laptops now that Skype is no longer a no-go, by the way). Other projects (many of which centred around continuous integration, with no results) ended up in much worse shape.

But since being always overly negative is not a good way to proceed in life, I’m going to propose a few possible things that could be useful to have, both for Gentoo Linux and for libav/VLC (whichever is going to be part of GSoC this year). Hopefully, whatever comes out of them is going to be good.

First of all, a re-iteration of something I’ve been asking of Gentoo for a while: a real alternatives-like system. Debian has a very well implemented tool for selecting among alternative packages supporting multiple tools. In Gentoo we have eselect — and a bunch of modules. My laptop counts 10 different eselect packages installed, and for most of them, the overhead of having another package installed is bigger than the eselect module itself! This also does not really work that well, as for instance you cannot choose the tar command, and pkg-config vs pkgconf require you to make a single selection by installing one or the other (or disabling the flag from pkgconf, but that defeats the point, doesn’t it?).
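The core of what Debian’s tool does is small enough to sketch; this is only an illustration of the priority-based selection (the paths and priorities are made up, not Debian’s or Gentoo’s actual data):

```python
# A registry maps each provider of a generic command to a priority; in
# automatic mode the highest-priority provider wins, which is what makes
# it possible to "choose the tar command" without a dedicated module.

def best_provider(alternatives):
    """Pick the provider with the highest priority (automatic mode)."""
    return max(alternatives, key=alternatives.get)

tar_alternatives = {"/bin/gtar": 50, "/bin/bsdtar": 30}
print(best_provider(tar_alternatives))  # /bin/gtar
```

The real tool then just maintains a symlink from the generic name to the winner; a manual mode pins a provider regardless of priority.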

Speaking of eselect and similar tools, we still have gcc-config and binutils-config as their own tools, without using the framework that we use for about everything else. Okay, the last person who tried to tackle these bit off more than he could chew, and the result has been abysmal, but the reason is likely that the target was set too high: re-doing the whole compiler handling so that it could support non-GCC compilers.. this might actually be too small a project for GSoC, but it might work as a qualification task, similar to the ones we’ve had for libav in the past.

Going to libav, one thing that I was discussing with Luca, J-B and other VLC developers, was the possibility to integrate at least part of the DVD handling that is currently split between libdvdread and libdvdnav into libav itself. VLC already forked the two libraries (and I rewrote the build system) — me and Luca were looking into merging them back into a single libdvd library already… but a rewrite and especially one that can reuse code from libav, or introduce new code that can be shared, would probably be a good thing. I haven’t looked into it but I wouldn’t be surprised if libdvbpsi could follow the same treatment.

Finally, another project that could sound cool for libav would be to create a library, API- and ABI-compatible with xine, that only uses libav. I’m pretty sure that if most of the internals of xine are dropped (including the configuration file and the plugin system), it would be possible to have a shallow wrapper around libav instead of a full-blown project. It might lose support for some files, such as modules, and for DVDs, but it would probably be a nice proof of concept and would show what we still need.. and the moment we can deal with those formats straight in libav, we’ll know we have something better than simply libav.

On a similar note, one of the last things I worked on in xine was the “audio out conversion branch”, see for instance this very old post — it is no more, no less, than what we now know as libavresample, just done much worse. Indeed, libavresample actually has the SIMD-optimized routines I never found out how to write, which makes it much nicer. Since xine, at the moment I left it, was actually quite nicely using libavutil already, it might be interesting to see what happens if all the audio conversion code is killed and replaced with libavresample.

So these are my suggestions for this season of GSoC, at least for the projects I’m involved in… maybe I’ll even have time to mentor them this year, as with a bit of luck I’ll have stable employment when the time comes (more on this to follow, but not yet).

Is xine still alive? Well, let’s say it’s in life support

Personal note first: I’ve been swamped with deliveries these past two weeks, which means that most of my mail is parked in the inbox waiting for a reply; if you read this post and wonder why I haven’t replied to you yet… just know that anything that could be construed as work has to fit in the 8am-7pm time range… and I started writing this at 10pm.

In August I migrated my blog and website (and a few customers’ websites as well) from a vserver to a KVM guest, thanks to my hoster – ios-solutions – having it available, and with IPv6. The move to IPv6 was something I was particularly interested in, since I had deployed it locally to have stable addresses for all systems, and with it on the server I could tell exactly who was connecting to what.

Today, after making sure that the migration to KVM was working quite well, I applied the same migration to the server that hosts xine’s website and Bugzilla installation. This allows me to use a single local guest to build packages for both servers, and keep both Portage world files empty, using Portage 2.2 sets feature to distinguish what to install on one or the other.
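For reference, a set in Portage 2.2 is just a file listing atoms; something like this hypothetical set file (the name and package list are invented for the example):

```
# /etc/portage/sets/xine-server — packages that only the xine host needs,
# so the world file itself can stay empty
www-apps/bugzilla
mail-mta/postfix
```

The set is then installed with emerge @xine-server, and each host carries only its own set file.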

The “new” server (which is actually a hybrid made from a base image of Earhart, this server, and the configurations of Midas, the old xine server) lost its name and naming scheme (given that when I called it Midas it was also “my” server) and is now simply — but at the same time it gained full IPv6 support and, contrary to my own domain, that means that mail can be delivered over a pure IPv6 network as well.

At any rate, I wanted to take this opportunity to remind everyone that xine is not entirely dead, as can be told from me actually spending my personal free time working on its server rather than simply giving up. While I have not followed through with the 1.2 release – mostly because I lost track of xine after my weeks at the hospital three years ago – there should be enough code there to be released, if somebody cared to give it some deserved love. But just doing that is… unlikely to help alone.

If you’ve been following me for long enough, you know that I have worked hard on xine and learnt a lot by doing so. One of the things I learned is that its design and not-invented-here-isms are not something you really want on a modern project. On the other hand, I still think that the overall, general, high-level design of splitting frontends and the library with more tightly coupled plugins is a good idea. Of course this is more or less the same overall, general, high-level design followed by VLC.

What are the problems with continuing with xine-lib the way it is? Well, the plugins right now are too abstracted; while they don’t reach GStreamer’s level of abstraction, which makes that framework obnoxious, and they are still mostly shipped with xine itself, there are limitations, which is why, of all the major users of libav/ffmpeg, xine does not use libavformat to demux files, d’oh. Plugins also have a long list of minor issues with their design, starting with the whole structure handling, which is a total waste of space and CPU cycles.

So if you’re interested in xine, please come to #xine @ OFTC; the project still has potential, but it needs new blood.

Hunting for a SSL certificate

So, in the previous chapter of my personal current odyssey I noted that I was looking into SSL certificates; the last time I wrote about it I was looking into using CACert to provide a few certificates. But CACert has one nasty issue for me: not only is it not supported out of the box by any browser, but I have also failed, up to now, to find a way to get Chromium (my browser of choice) to accept it, which doesn’t make it better than self-signed certificates for most of my aims.

Now, back at that time, Owen suggested I look into StartSSL, which is supported out of the box by most if not all the browsers out there, and offers free Class 1 certificates. Unfortunately Class 1 certificates don’t allow for SNI or wildcard certificates, which I would have liked to have, as I have a number of vhosts on this server. On the other hand, the Class 2 (which does allow that) comes at an affordable price ($50), so I wouldn’t have minded confirming my personal details to get it. The problem is that to get the validation, I need to send a scan of two IDs with a photo, and I only have one. I guess I’ll finally have to get a passport.

As a positive note for them, StartSSL actually replied to my tweet-rant suggesting I could use my birth certificate as a secondary ID for validation. I guess this is easier to procure in the United States – at least judging from the kind of reverence Americans have for them – but here I’d sincerely rather not go looking for it, especially because, as it is, my birth certificate does not report my full name directly (I legally changed it a few years ago, if you remember), but as an amendment.

There are, though, a few other problems that showed up while using StartSSL. The first is that it doesn’t allow you to use Chrome (or Chromium) to handle registration, because of troubles with client-side certificates. Another is that the verification of domain access is not based on DNS hosting, but just on mail addresses: you verify the domain foo by receiving an email directed to webmaster@foo (or other email addresses, both standard ones and those taken from the domain’s WhoIs record). While it’s relatively secure, it only works if the domain can receive email, and it only seems to work to verify second-level domains.

Using the kind of verification that Google uses for domains would make it much nicer to verify domain ownership, and it works with subdomains as well as domains that lack email entirely. For those who don’t know how the Google domain verification works, they provide you with the name of a CNAME you have to add to your domain and point it to “”; since the CNAME they tell you to set up is created from a hash of your account name and the domain itself, they can ensure that you have access to the domain configuration and thus to the domain itself. I guess the problem here is just that DNS takes much longer to propagate than an email takes to arrive, and having a fast way to create a new certificate is definitely a good thing about StartSSL.
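A sketch of the hash-based scheme described above; the derivation here is purely illustrative (Google’s actual token format is different, and the account and domain names are invented):

```python
# Only someone who can edit the zone can publish the CNAME whose name is
# derived from the account and domain, so a matching record proves control
# of the DNS configuration without the domain receiving any email.
import hashlib

def verification_cname(account, domain):
    token = hashlib.sha256(f"{account}:{domain}".encode()).hexdigest()[:16]
    return f"{token}.{domain}"

print(verification_cname("owner@example.com", "blog.example.org"))
```

Note that nothing here needs the domain to be a second-level one: the derived record works just as well under a subdomain.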

At any rate, I got a couple of certificates this way, so I finally don’t get Chrome’s warnings because of invalid certificates when I access this computer’s Transmission web interface (which I secure through an Apache reverse proxy). And I also took the time to finally secure xine’s Bugzilla with an SSL connection and certificate.

Thanks Owen, thanks StartSSL!

Plugins aren’t always a good choice

I’ve been saying this for quite a while; probably the most on-topic post was written a few months ago, but there are some indications about it in posts about xine and others again.

I used to be an enthusiast about plugin interfaces; with time, though, I started having more and more doubts about their actual usefulness — it’s a trait I very much like in myself: I’m fine with reconsidering my own positions over time and deciding that I was wrong; it happened before with other things, like KDE (and C++ in general).

It’s not like I’m totally against the use of plugins altogether. I only think that they are expensive in more ways than one, and that their usefulness is often overstated, or tied to other kinds of artificial limitations. For instance, dividing a software’s features over multiple plugins usually makes it easier for binary distributions to package them: they only have to ship a package with the main body of the software, and many more for the plugins (one package per plugin might actually be too much, so sometimes they are grouped). This works out pretty well for both the distribution and, usually, the user: the plugins that are not installed will not bring in extra dependencies, they won’t take time to load, and they won’t use memory for either code or data. It basically gives binary distributions a flexibility comparable to Gentoo’s USE flags (and similar options in almost any other source-based distribution).

But as I said, this comes with costs that might or might not be worth it in general. For instance, Luca wanted to implement plugins for feng similarly to what Apache and lighttpd have. I can understand his point: let’s not load code for the stuff we don’t have to deal with, which is more or less the same reason why Apache and lighttpd have modules; in the case of feng, if you don’t care about the access log, why should you be loading the access log support at all? I can give you a couple of reasons:

  • because the complexity of managing a plugin to deal with the access log (or any other similar task) is higher than just having a piece of static code that handles that;
  • because the overhead of having a plugin loaded just to do that is higher than that of having the static code built in but not enabled in the configuration.

The first problem is a result of the way a plugin interface is built: the main body of the software cannot know about its plugins in too specific ways. If the interface is a very generic plugin interface, you add some “hook locations” and then it’s the plugin’s task to find how to do its magic, not the software’s. There are some exceptions to this rule: if you have a plugin interface for handling protocols, like the KIO interface (and I think gvfs has the same) you get the protocol from the URL and call the correct plugin, but even then you’re leaving it to the plugin to deal with doing its magic. You can provide a way for the plugin to tell the main body what it needs and what it can do (like which functions it implements) but even that requires the plugins to be quite autonomous. And that means also being able to take care of allocating and freeing the resources as needed.
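The “hook locations” idea can be sketched in a few lines; this is an illustration of the pattern, not feng’s (or Apache’s) actual API:

```python
# The core only knows named hook points; every plugin registers callables
# and manages its own state, which is exactly what makes each plugin
# autonomous -- and what makes the core/plugin contract so generic.

hooks = {"on_request": []}

def register(hook_name, callback):
    hooks[hook_name].append(callback)

# a hypothetical access-log plugin: it alone knows what to do with a line
log_lines = []
register("on_request", log_lines.append)

# the core fires the hook blindly, with no idea what the plugins do
for callback in hooks["on_request"]:
    callback("GET /stream RTSP/1.0")

print(log_lines)
```

Note how the core cannot help the plugin with anything specific: allocation, cleanup, and the meaning of the data all live on the plugin’s side, which is the first cost described above.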

The second problem is tied not only to the cost of invoking the dynamic linker at runtime to load the plugin and its eventual dependencies (which is a non-trivial amount of work, one has to say), but also to the need for code that finds the modules to load, loads them, initialises them, and keeps a list of modules to call at any given interface point, and two more points: the PIC problem and the problem of less-than-page-sized segments. This last problem is often ignored, but it’s my main reason to dislike plugins when they are not warranted for other reasons. Given a page size of 4KiB (which is the norm on Linux as far as I know), if the code is smaller than that size, it’ll still require a full page (it won’t pack with the rest of the software’s code areas); but at least code is disk-backed (if it’s PIC, of course). It’s worse for what concerns variable data, or variable relocated data, since those are not disk-backed, and it’s not rare to use a whole page for something like 100 bytes of actual variables.

In the case of the access log module that Luca wrote for feng, the statistics are as such:

flame@yamato feng % size modules/.libs/
   text    data     bss     dec     hex filename
   4792     704      16    5512    1588 modules/.libs/

Which results in two pages (8KiB) for the bss and data segments, neither disk-backed, and two disk-backed pages for the executable code (text): 16KiB of addressable memory for a mapping that does not reach 6KiB, a 10KiB overhead, which is much higher than 50%. And that’s the memory overhead alone. The whole overhead, as you might guess at this point, is usually within 12KiB (since you’ve got three segments, and each can have at most one byte less than the page size as overhead — it’s actually more complex than this, but let’s assume it’s true).
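The arithmetic above can be checked mechanically; a tiny Python sketch of the page rounding (4KiB pages, one rounding per segment, as in the reasoning above):

```python
# Each segment (text, data, bss) is mapped separately and therefore rounded
# up to a whole page on its own; the waste is the difference between the
# page-rounded total and the actual bytes used.

PAGE = 4096  # 4 KiB pages, the Linux norm

def mapped_size(text, data, bss):
    round_up = lambda n: (n + PAGE - 1) // PAGE * PAGE
    return sum(round_up(segment) for segment in (text, data, bss))

text, data, bss = 4792, 704, 16  # the feng access-log figures from `size` above
total = mapped_size(text, data, bss)
print(total, total - (text + data + bss))  # 16384 mapped, 10872 wasted
```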

It really doesn’t sound like a huge overhead by itself, but you always have to judge it against the size of the plugin itself. In the case of feng’s access log, you’ve got a very young plugin that lacks a lot of functionality, so one might say that with time it’ll be worth it… so I’d like to show you the size statistics for the Apache modules on the very server my blog is hosted on. Before doing so, though, I have to point out one huge difference: feng is built with most optimisations turned off, while Apache is built optimised for size; they are both AMD64, though, so the comparison is quite easy.

flame@vanguard ~ $ size /usr/lib64/apache2/modules/*.so | sort -n -k 4
   text    data     bss     dec     hex filename
   2529     792      16    3337     d09 /usr/lib64/apache2/modules/
   2960     808      16    3784     ec8 /usr/lib64/apache2/modules/
   3499     856      16    4371    1113 /usr/lib64/apache2/modules/
   3617     912      16    4545    11c1 /usr/lib64/apache2/modules/
   3773     808      24    4605    11fd /usr/lib64/apache2/modules/
   4035     888      16    4939    134b /usr/lib64/apache2/modules/
   4161     752      80    4993    1381 /usr/lib64/apache2/modules/
   4136     888      16    5040    13b0 /usr/lib64/apache2/modules/
   5129     952      24    6105    17d9 /usr/lib64/apache2/modules/
   6589    1056      16    7661    1ded /usr/lib64/apache2/modules/
   6826    1024      16    7866    1eba /usr/lib64/apache2/modules/
   7367    1040      16    8423    20e7 /usr/lib64/apache2/modules/
   7519    1064      16    8599    2197 /usr/lib64/apache2/modules/
   8583    1240      16    9839    266f /usr/lib64/apache2/modules/
  11006    1168      16   12190    2f9e /usr/lib64/apache2/modules/
  12269    1184      32   13485    34ad /usr/lib64/apache2/modules/
  12521    1672      24   14217    3789 /usr/lib64/apache2/modules/
  15935    1312      16   17263    436f /usr/lib64/apache2/modules/
  18150    1392     224   19766    4d36 /usr/lib64/apache2/modules/
  18358    2040      16   20414    4fbe /usr/lib64/apache2/modules/
  18996    1544      48   20588    506c /usr/lib64/apache2/modules/
  20406    1592      32   22030    560e /usr/lib64/apache2/modules/
  22593    1504     152   24249    5eb9 /usr/lib64/apache2/modules/
  26494    1376      16   27886    6cee /usr/lib64/apache2/modules/
  27576    1800      64   29440    7300 /usr/lib64/apache2/modules/
  54299    2096      80   56475    dc9b /usr/lib64/apache2/modules/
 268867   13152      80  282099   44df3 /usr/lib64/apache2/modules/
 288868   11520     280  300668   4967c /usr/lib64/apache2/modules/

The list is ordered by the size of the whole plugin (summed up, not counting padding); the last three positions are definitely unsurprising, although the sheer size of the two that are not part of Apache itself surprises me (and I start to wonder whether they statically link something in that I missed). The fact that the rewrite module is likely the most complex plugin in Apache’s distribution was never lost on me.

As you can see, almost all plugins have vast overhead, especially for what concerns the bss segment (all of them have at least 16 bytes used, and that warrants a whole page for them: 4080 bytes wasted each); the data segment is also interesting: only the two external ones have more than a page’s worth of variables (which is also suspicious to me). When all the plugins are loaded (as they most likely are right now on my server) there are at least 100KiB of overhead, just for the sheer fact that these are plugins and thus have their own address space. It might not sound like a lot of overhead, since Apache requests so much memory already, especially with Passenger running, but it definitely doesn’t sound like a good thing for embedded systems.

Now, I have no doubt that a lot of people like the fact that Apache has all of those as plugins, as they can then use the same Apache build across different configurations without risking having more code and data in memory than is actually needed; but is that right? While it’s obvious that it would be impossible to drop the plugin interface from Apache (since it’s used by third-party developers, more on that later), I would be glad if it were possible to build in the modules that come with Apache (given I can already choose which ones to build or not in Gentoo). Of course, I too am using Apache in two configurations; for instance, the other one does not use the authentication system for anything, and this one is not using CGI. But is the overhead caused by the rest of the modules worth the hassle, given that Apache already has a way not to initialise the unused built-ins?

I wrote “third party developers” above, but I have to say now that it wasn’t really a proper definition: it’s not just what third parties would do, it might very well be the original developers who want to make use of plugins to develop separate projects for some (complex) features, with different release handling altogether. For uses like that, the cost of plugins is often justifiable; and I am definitely not against having a plugin interface in feng. My main beef is with plugins created for functions that are part of the basic feature set of a piece of software.

Another, unfortunately not uncommon, problem with plugins is that the interface might be skewed by bad design, as was (and is) the case for xine: when trying to open a file, it has to pass through all the plugins, so it loads all of them into memory, together with the libraries they depend on, to ask each of them to test the current file. Since plugins cannot really be properly unloaded (and it’s not just a xine limitation), the memory will still be used, the libraries will still be mapped into memory (and relocated, causing copy-on-write, and thus more memory), and at least half the point of using plugins has gone away (the ability to only load the code that is actually going to be used). Of course you’re left with the chance that an ABI break does not kill the whole program, just the plugin, but that’s a very small advantage, given the cost involved in plugin handling. And the way xine was designed, it was definitely impossible to have third-party plugins developed properly.

And to finish off: I said before that plugins cannot be cleanly unloaded. The problem is not only that it’s difficult to have proper cleanup functions for the plugins themselves (since often the allocated resources are stored within state variables), but also that some libraries (used as dependencies) have no cleanup at all, and rely (erroneously) on the fact that they won’t be unloaded. And even when they know they could be unloaded, the PulseAudio libraries, for instance, have to remain loaded because there is no proper way to clean up Thread-Local Storage variables (and a re-load would be quite a problem). Which takes away another point of using plugins.

I leave the rest to you.

Some details about our old friends the .la files

Today, I’m going to write about the so-called “libtool archives” that you might have read about in posts like What about those .la files? or Again about .la files (or why should they be killed off sooner rather than later) (anybody picking up the faint and vague citation here is enough of a TV geek).

Before starting, I’m going to say that I haven’t been reading any public Gentoo mailing list lately, which means that if I bring up points that have been raised already, I don’t care. I’m just writing this for the sake of it, and because Jorge asked me for some clarification about what I wrote some longish time ago. Indeed, the first post is almost exactly one year old. I do know that there has been more discussion about whether these files are needed for ebuild-provided stuff, so I’m going to try to be as clear as possible.

The first problem is to identify what these files are: they are simple text files providing metadata about a library, or about a pair of static and shared libraries; this metadata includes some obvious and non-obvious data, like the names of the two types of libraries, the formal name (soname) of the shared library that can be used with dlopen(), and a few more things, including the directory the library is to be found in. The one piece of data that creates a problem for us, though, is the list of dependency libraries that need to be linked in when linking against this library, but I’ll come back to that later. Just please note that it’s there.
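To make this concrete, here is roughly what such a file looks like; this is a hypothetical example for a fictional libfoo, trimmed down to the fields discussed here (real files carry more of them, and the exact set varies between libtool versions):

```
# libfoo.la - a libtool library file (hypothetical example)
dlname='libfoo.so.1'
library_names='libfoo.so.1.0.0 libfoo.so.1 libfoo.so'
old_library='libfoo.a'
dependency_libs=' -lm -lz'
libdir='/usr/lib'
```

The dependency_libs line is the one that causes the trouble described below.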

Let’s go on with what these files are used for: libtool generates them, and libtool consumes them. They are used when linking with libtool to set the proper runpath if needed (by checking the library’s directory), to choose between the static and shared version of the library, and to bring in further dependency libraries. This latter part is controversial and our main issue here: older operating systems had no way to define dependencies between libraries of any kind, and even nowadays on Linux we have no way to define the dependencies of static libraries (archives). So the dependency information is needed when linking statically; unfortunately libtool does not ignore those dependencies for shared objects, which can manage their own dependencies just fine (through the DT_NEEDED tag present in the .dynamic section), and thus pollutes the linking line, causing problems that can be solved by --as-needed.

Another use for libtool archive files is to know how to load or preload modules (plugins). The idea is that when the operating system provides no way to dynamically load and link further shared objects (that is, a dlopen()-like interface), libtool can simulate that facility by linking in the static modules, through the libltdl library. Please note, though, that you don’t need the libtool archives to use libltdl if the operating system does provide dynamic loading and linking.

This is all fine and dandy in theory, but what about practice? Do we need the .la files installed by ebuilds? And, slightly related (as we’ll see later), do we need the static libraries? Unsurprisingly, the answer comes from the developer’s mantra: There is no Answer; it Depends.

To simplify my discussion here, I’m going to reduce the scope to the two official (or semi-official) operating systems supported by Gentoo: Linux and FreeBSD. The situation is likely to be different for some of the operating systems supported by Gentoo/Prefix, and while I don’t want to downplay them, the picture would become much more complicated to explain by adding them in.

Both Linux and FreeBSD use ELF (which stands exactly for “Executable and Linkable Format”) as their primary executable and linkable format; they both support shared objects (libraries), the dlopen() interface, and the DT_NEEDED tag, and they both use ar flat archives for static libraries. Most importantly, they use (for now, since FreeBSD is working toward changing this) the same toolchain for compiling and linking: GCC and GNU binutils.

In this situation, the libltdl “fake dlopen()” is sincerely a huge waste of time, and almost nobody uses it; which means that most people wouldn’t want to use the .la files to open plugins (with the exception of KDE 3, that is), which makes installing libtool archives of, say, PulseAudio’s plugins pointless. Since most software is likely not to use libltdl in the first place (like xine or PAM, to cite two that I maintain somehow), their plugins also don’t need the libtool archive files to be installed. I have already reported some rogue PAM modules that install pointless .la files (even worse, in the root filesystem). The rule of thumb here is that if the application loads its plugins with the standard dlopen() instead of libltdl (or a hacked libltdl, as is the case for KDE 3), the libtool archives for these plugins are futile; this, as far as I know, includes glib’s GModule support (and you can see, by running ls /usr/lib*/gnome-*/*.la, that there are some installed for probably no good reason).

But this only covers what to do with the libtool archive files for plugins (modules), not with libraries; with libraries the situation is a bit more complicated, but not by much, since the rule is even simpler: you can drop all the libtool archives for libraries that are only ever installed as shared objects (no static archive provided), and for those that have no dependency other than the C library itself (after making sure the file does not simply forget to list the dependencies). In those cases, the static library is enough by itself, and you don’t need any extra file to tell you what else needs to be linked in. This already takes care of quite a few libtool files: grepping for the string “dependency_libs=''” (which is present in the files of libraries that have no further dependencies beyond the C library) turns up 62 files on my system.
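If you want to repeat that check on your own system, a grep along these lines does it (a sketch; adjust the library paths to match your setup):

```shell
# List the installed .la files whose dependency list is empty,
# i.e. those that can be dropped right away, and count them.
grep -l "dependency_libs=''" /usr/lib*/*.la 2>/dev/null | wc -l
```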

There is another issue that was brought up last year: libraries whose official discovery method is a -config script, or pkg-config; for these libraries, the need to provide dependencies in the .la file goes away, since they provide that information themselves. Unfortunately this has two nasty issues. The first is that, most likely, someone is not using the correct scripts to find the stuff; I read a blog post a week or two ago from a developer disgruntled because pkg-config was used for a library that didn’t provide it, who suggested not using pkg-config at all (which is quite silly, actually). The other problem is that while pkg-config does provide a --static parameter to use different dependency lists for shared and static linking of a library (to avoid polluting the link line), I know of no way to tell autoconf to use that option during discovery at all. There is also room for binutils to implement an extension to static archives that could carry the needed dependency data, but that’s beside the point now, I guess.
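For reference, the split that --static relies on lives in the .pc file itself, through the Libs and Libs.private fields; here is a hypothetical libfoo.pc for a library that only needs zlib when linked statically:

```
# libfoo.pc (hypothetical example)
prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include

Name: libfoo
Description: Example library
Version: 1.0
Libs: -L${libdir} -lfoo
Libs.private: -lz
Cflags: -I${includedir}
```

With this file, pkg-config --libs libfoo returns only -lfoo, while pkg-config --libs --static libfoo adds -lz, keeping the shared link line clean.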

So let’s sidestep this issue for now and return to the three known cases where we can assert with relative certainty that the libtool archives are unneeded: non-ltdl-fakeloaded plugins (xine, PAM, GModule, …), libraries with no dependencies other than the C library, and libraries that only install shared objects. While the first two are pretty obvious, there is something else to say about that last one.

By Gentoo policy we’re supposed to always install both the static and shared versions of a library; unless, that is, upstream doesn’t think so. The reason for this policy is that static linking is preferred for some mission-critical software that might not allow the system to boot up if the library is somehow broken (think bash and libreadline), and because sometimes, well, you just have to leave the user the option of statically linking stuff. There have been proposals of adding a USE flag to enable or disable the build of static libraries, but that’s nowhere in use yet; one of the problems was to link up the static USE flag with the static-libs USE flag of its dependencies, which EAPI 2 USE dependencies can solve just fine. There are, though, a few cases where you might be better off not providing a static library at all, even if upstream doesn’t say anything outright about it, since most likely they never cared.

This is the case, for instance, of libraries that in turn use the dlopen() interface to load their plugins (static linking with those can produce nasty results); for instance, you won’t find a static library for Linux-PAM. There are a few more cases where having static libraries is not advisable at all, and we might actually decide to take them out entirely, with due caution. In those cases you can remove the libtool archive file as well, since shared objects do take care of themselves.

Now, case in point, Peter took lots of flames for changing libpcre; the flames were mixed, relating on one side to him removing the libtool archive, and on the other to him removing the static library. I haven’t been part of the flames in any way, because I’m still minding my own health first of all (is there any point in having a sick me not working on Gentoo?), yet here is my opinion: Peter made one mistake, and that was to unconditionally remove the static library. Funnily enough, what most people probably shouted at him for is the removal of the libtool archive, which is just nothing useful since, as you can guess, the library has no further dependency beside the C library (it’s different for what regards libpcreposix, though).

My suggestion at this point is for someone to actually finish cleaning up the script that I think I posted to the mailing lists some time ago (and that can almost surely be recreated quite quickly), which takes care of fixing the libtool archive files on the system without requiring a full rebuild of everything. Or, even better, get a post-processing task into Portage that replaces the direct references to libtool archives inside newly installed libtool archives with generic library references (so that /usr/lib/ would become -lpcre straight away, and so on for all libraries; the result would be that breakage from future libtool archive removals wouldn’t exist at all).
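A rough sketch of such a post-processing step could be sed-based, rewriting the full paths to libtool archives inside dependency_libs into the corresponding -l flags; everything here is only illustrative, and a real implementation would need more care (multilib paths, uninstalled libraries, and so on):

```shell
# For each installed .la file, replace direct references to other
# .la files in the dependency_libs line with plain -l<name> flags,
# e.g. /usr/lib/libpcre.la becomes -lpcre.
# Purely a sketch, not a complete implementation.
for la in /usr/lib*/*.la; do
    sed -i -e "/^dependency_libs=/s|/usr/lib[^ ']*/lib\([^ ']*\)\.la|-l\1|g" "$la"
done
```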

Debian, Gentoo, FreeBSD, GNU/kFreeBSD

To shed some light and clear up the confusion that seems to have taken hold of quite a few people who came to ask me what I think about Debian adding GNU/kFreeBSD to the main archive, I’d like to point out, once again, that Gentoo/FreeBSD has never been the same class of project as Debian’s GNU/kFreeBSD port. Interestingly enough, I already said this more than three years ago.

Debian’s GNU/kFreeBSD uses the FreeBSD kernel but keeps the GNU userland, which means the GNU C Library (glibc), the GNU utilities (coreutils) and so on and so forth; on the other hand, Gentoo/FreeBSD uses both the vanilla FreeBSD kernel and a mostly vanilla userland. By mostly, I mean that some parts of the standard FreeBSD userland are replaced with either compatible, selectable or updated packages. For instance, instead of shipping sendmail or the ISC dhcp packages as part of the base system, Gentoo/FreeBSD leaves them to be installed as extra packages, just like you’d do with Gentoo. And you can choose whichever cron software you’d like, instead of using the single default provided by the system.

But if a piece of software is designed to build on FreeBSD, it usually builds just as well on Gentoo/FreeBSD; there is rarely trouble, and most of the time it is with different GCC versions. On the other hand, GNU/kFreeBSD requires most of the system-dependent code to be ported; xine, for instance, has already undergone this at least a couple of times.

I sincerely am glad to see that Debian finally came to the point of accepting GNU/kFreeBSD into main; on the other hand, I have no big interest in it besides as a proof of concept. There are things that are not currently supported by glibc even on Linux, like SCTP, which on FreeBSD are provided by the standard C library; I’m not sure whether they are going to port the Linux SCTP library to kFreeBSD or implement the interface inside glibc. If the latter is the case, though, I’d be glad, because it would finally mean that the code wouldn’t be left stale.

So please, don’t mix in Gentoo/FreeBSD with Debian’s GNU/kFreeBSD. And don’t even try to call it Gentoo GNU/FreeBSD like the Wikipedia people tried to do.