Bundling libraries for trouble

You might remember that I have been very vocal against bundling libraries in Gentoo and, to a point, against static linking of libraries as well. My reasons have mostly been about security, but I have also written about a few instances where bundled libraries hurt stability: for example, the moment you get symbol collisions between a bundled library and a different version of the same library used by one of the dependencies, like that one time in xine.

But there are other reasons why bundling is bad in most cases, especially for distributions, and it is much worse than just statically linking everything. Unfortunately, while all the major distributions have, as far as I know, a policy against bundled (or even statically linked) libraries, very few people speak out against the practice outside distribution circles.

One such rare gem came from Steve McIntyre a few weeks ago, and it actually makes two different topics I have written about meet in a quite interesting way. Steve worked on finding which software packages make use of CPU-specific assembly in performance-critical code, which would have to be ported for the new 64-bit ARM architecture (AArch64). And this mostly reminded me of x32.

In many ways, AArch64 and x32 share the same problems, which mostly stem from the fact that in both cases you have an architecture (or ABI) that is very similar to a known, well-understood architecture, but not identical to it. The biggest difference, apart from the implementations themselves, is in the way the two were conceived: as I said before, Intel’s public documentation at the ABI’s inception explicitly noted that x32 was designed for closed systems rather than open ones (the definition of an open or closed system has nothing to do with open- or closed-source software; it is about what users are expected to be able to add to the system). The recent stretching of x32 onto open-system environments is, in my opinion, not really a positive thing, but if that’s what people want…

I think Steve’s report is worth a read for anyone interested in what it takes to introduce a new architecture (or ABI). It should be of particular interest to those who maintained that my complaints about x32 breaking assembly code all over the place were a moot point: people with a clue about how GCC works know that sometimes you cannot get away with its optimizations and you actually need to hand-write code; at the same time, as Steve noted, sometimes the handwritten code is so bad that you should drop it and move back to plain compiled C.

There is also a visible amount of software where handwritten assembly gets imported through bundling and direct inclusion. This tends to be relatively common because handwritten assembly is usually tied to performance-critical code, which for many people is exactly the code they bundle because a dynamic link is “not fast enough”. I disagree.

So anyway, give Steve’s report a read, then compare it with some of the points made in my series of x32-related articles, and tell me if I was completely wrong.

2 thoughts on “Bundling libraries for trouble”

  1. There is a world outside Linux distributions and open source code.

     Let’s take sqlite. The sqlite amalgamation exists for a good reason: it makes embedding sqlite hassle-free compared to linking against a shared library. No, don’t reply with pkg-config; in my cross-platform world, pkg-config is worthless (mandatory disclaimer: I have nothing against the project per se). zlib? Fortunately we now have miniz https://code.google.com/p/m… libjpeg? No, stb_image http://nothings.org/stb_ima… Each time, the primary concern is ease of use from a developer’s point of view, and also ease of deployment.

     I had to integrate libffi into a cross-platform project. Our Windows code is compiled with Visual Studio, and I believe most closed-source shops do the same. The fact that libffi comes with an Autotools build complicates things a lot. Sure, they provide the msvcc.sh script, but it’s complicated for most people, and you need Autotools in the PATH on your Windows workstation. Not everybody on the team knows how Autotools works, and not everybody on the team wants to know anyway. It just doesn’t cut it.

     Need another example? Google’s gtest offers an amalgamation script that turns the myriad headers and source files into two files, gtest.h and gtest-all.cc. It makes gtest usable.

     So, to close the loop with the first sentence of my comment, here is some perspective: single-header, single-source-file libraries are so much easier to use, particularly when you support lots of platforms. Sure, that’s what Autotools was designed for, but it only applies to platforms that offer “gcc-like” toolchains. For platforms like Windows, Windows RT, Mac and iOS, there is a significant maintenance cost associated with building the right versions of your dependencies as shared libraries (when possible at all): you need to build them for every platform/architecture combination, and it grows fast :(

     At this point of my comment, I realize the discussion could go on longer. One could argue that tools like CMake are a way to get cheap cross-platform builds. Again, not always, not really: some platforms are just not supported by CMake, and sometimes you even lose fine-grained control of the build options in the process. CMake also generates Visual Studio projects and solutions with absolute filenames (while not necessary), so happy sharing with the team. Don’t get me wrong, I just didn’t mean to switch to calling CMake names; it’s just another piece of the puzzle that explains why I believe bundling libraries sometimes makes the cross-platform developer’s life so much easier.

     Final note: I totally agree with your concerns about the security risks that come with bundling others’ code.


  2. @Gregory I like your comment. I tend to use static linking whenever I try to create a distributable Win32 PE binary from my source. My reason differs a little from yours and is much simpler: managing (native) DLLs on Windows is a can of worms. For “managed” (.NET) code the GAC is great, but when you develop cross-platform in C/C++ you can’t use it.

     I think the best approach would be: bundle libraries for platforms where bundling REALLY makes building easier (e.g. because it is very unlikely the libraries are installed there, or because there is just no proper package management in place), but always prefer the system-wide installed version of the library, if found.

