Bugs on random packages

Okay, so it seems that any time I try to scratch an itch, I end up filing bugs all over the tree. In this case the itch is the fact that libpcre installs its C++ bindings without offering any opt-out to disable them.

So I talked with Anant, who seems to be the current maintainer of libpcre, and he’s fine with adding the USE flag; but before doing so, I wanted to make sure that nothing depends on the C++ bindings without having a built_with_use check. So far the only package I found depending on them is mkvtoolnix, which is now fixed in the tree.

That was the first round, covering the packages I could build with -B. The second round is more complex, as some packages had dependencies I didn’t have installed, so I’m now merging those with --onlydeps, adding tons of otherwise useless packages to my system until I finish the list (then a simple --depclean and I’m done).
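
The cycle described above can be sketched roughly like this. The helper below only prints the emerge commands it would run, so nothing actually gets merged; the package atom is just an example:

```shell
# Rough sketch of the reverse-dependency test cycle: pull in build
# dependencies, build without merging (-B / --buildpkgonly), then
# clean up. This function only prints the commands, it does not run
# them; the atom passed in is illustrative.
revdep_test_cycle() {
    atom=$1
    # merge only the dependencies of the package, not the package itself
    echo "emerge --onlydeps ${atom}"
    # build the package without installing it on the live system
    echo "emerge --buildpkgonly ${atom}"
    # afterwards, sweep away the now-useless dependencies
    echo "emerge --depclean"
}

revdep_test_cycle "media-video/mkvtoolnix"
```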

The big problem with reverse-dependency tests, which I’ve done more than once before, is that you find plenty of packages that simply don’t build, or that have minor problems usually overlooked by their maintainer and by most users: multilib-strict failures, pre-stripped files, documentation installed in the wrong directory, and so on.

The result is that today I’m filing bugs like crazy, even against packages whose purpose I can’t even guess. Why oh why am I doing this again?

My personal crusade against C++ abuse

Okay, after my blog about C++ bindings libraries I decided to go a bit deeper. What stops me from removing C++ support from GCC entirely?

Yes, of course, the nocxx USE flag on gcc is certainly not supported: we don’t usually put built_with_use checks on gcc to make sure we actually have a C++ compiler, we just assume we do. But that’s not the point.

For a server, having gcc installed, while sometimes useful, might be a waste of space. Having the C++ standard library installed when you know for sure that you don’t need it is a sure waste of space.

So what did I do? I checked what needed libstdc++ in my vserver’s chroot. Only three packages turned up: libpcre (which I’ll talk about more later), fcgi, and groff. The last one is a problem: groff is used by man and similar tools, and it’s written in C++; yet it doesn’t use the standard library.
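
Checking a single binary for this is easy with ldd. A minimal sketch, reading ldd-style output from stdin so it can be piped (in practice I scanned the whole chroot; the groff path below is only an example):

```shell
# Report whether a binary's dynamic dependencies include libstdc++.
# Reads `ldd`-style output on standard input.
needs_libstdcxx() {
    if grep -q 'libstdc++' ; then
        echo "needs libstdc++"
    else
        echo "no libstdc++"
    fi
}

# Typical use (illustrative path):
#   ldd /usr/bin/groff | needs_libstdcxx
```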

So I decided to try a simple trick that other packages with C++ code but no C++ library also use: I added to my overlay an experimental ebuild that, instead of linking the C++ code as usual, uses gcc to link and adds -lsupc++ to the libraries. libsupc++ is just the minimal subset of C++ support that GCC has to provide; the result is that groff can be built without depending on libstdc++, and runs just fine even where gcc was built with nocxx. Isn’t that great?
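
In terms of link command lines, the trick boils down to swapping the driver and adding the minimal runtime. This helper just assembles the alternative link line for illustration (the output name and object files are placeholders):

```shell
# The usual way: the g++ driver implicitly pulls in the full libstdc++.
# The trick: drive the link with gcc and add only -lsupc++, the minimal
# C++ runtime support library (operator new/delete, exception and RTTI
# plumbing). This function only prints the command; it runs nothing.
link_with_supcxx() {
    out=$1; shift
    echo "gcc -o ${out} $* -lsupc++"
}

link_with_supcxx groff main.o node.o
```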

I also found out that tvtime has the same problem: it links libstdc++ because one of its deinterlacers is written in C++, but it doesn’t actually need the standard library. Again, the fix is to link with -lsupc++. tvtime-1.0.2-r2 is in the tree for whoever wants to try it.

Now I’m starting to wonder how much C++ gets abused every day by adding libstdc++ dependencies to packages that don’t need them at all. So I started looking around, and using the nocxx USE flag more often when I know I won’t have use for the bindings, as with tiff or Berkeley DB.
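
For reference, disabling this per-package is just a couple of lines of package.use; the exact flag names depend on the ebuild version, so treat these as illustrative:

```
# /etc/portage/package.use (illustrative; flag names vary per ebuild)
media-libs/tiff nocxx
sys-libs/db nocxx
```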

As for libpcre, the build system does have a way to disable the C++ bindings, and I have in my overlay a modified ebuild that adds a cxx USE flag (enabled by default through EAPI=1) which I’ll soon commit to the main tree. Before doing that, though, I need to check all the packages depending on libpcre: while most developers would say that nocxx/-cxx USE flags are not supported, I’d like to make sure the user is well informed, rather than finding out through a make failure. Maybe it’s being too good to users who fiddle around when they shouldn’t, but there are two reasons why I think it’s better to do it this way.
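
On the dependent-package side, the check looks roughly like this in an ebuild of that era (a sketch; built_with_use and die come from portage, and the exact atom and message are up to each maintainer):

```shell
# Sketch of the guard a package needing libpcre's C++ bindings would
# carry in pkg_setup, so the user gets a clear message instead of a
# mid-build failure. The atom and wording are illustrative.
pkg_setup() {
    if ! built_with_use dev-libs/libpcre cxx ; then
        die "please re-emerge dev-libs/libpcre with USE=cxx"
    fi
}
```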

The first is of course that I’m the first person to actually disable C++ support on packages which I’m not going to use with C++, or at least not through their C++ bindings; an example of this is kdelibs, which, while written in C++, uses the C interface of both tiff and pcre rather than the C++ bindings.

The second is that I find it better to waste twenty minutes once to make sure the ebuilds are luser-proof than to deal with twenty duplicate bugs because someone disabled an option in a dependency. I see many colleagues getting grumpy and blaming users for shooting themselves in the foot. Sure, they are right, but that attitude often leads to more stress over the open bugs, and the more bugs get opened the grumpier you become; it’s basically a no-way-out cycle. My idea is that if I can do something more to make sure I don’t get bugs, I’ll do it.

So anyway, the builds are working, and I’m now thinking about updating Typo to try the 4.1 version… at least that’s hopefully updated more often, and shouldn’t have the bugs that the 4.0 SVN has. And maybe a decent spam protection.