This Time Self-Hosted

Distributions are becoming irrelevant: difference was our strength and our liability

For someone who has spent the past thirteen years defining himself as a developer of a Linux distribution (whether I really am still a Gentoo Linux developer is up for debate, I'm sure), having to write a title like this is obviously hard. But from the day I started working on open source software to now, I have grown a lot, and I have realized I have been wrong about many things in the past.

One thing I realized recently is that, nowadays, distributions have lost the war. As the title of this post says, difference is our strength, but at the same time it is also the seed of our ruin. Take distributions: Gentoo, Fedora, Debian, SuSE, Archlinux, Ubuntu. They all look and act differently, focusing on different target users, and because of this they differ significantly in which software they make available, which versions they make available, and how much effort is spent on testing, both of the packages themselves and of the system integration.

Described this way, there is nothing that screams «Conflict!», except that at this point we all know that they do conflict, and the solution from many different communities has been to just ignore distributions: developers of libraries for high-level languages built their own packaging (Ruby Gems, PyPI, let's not even talk about Go), business application developers started by using containers and ended up with Docker, and user application developers have now started converging onto Flatpak.

Why the conflicts? A lot of the time the answer is to be found in bickering among developers of different distributions and the «We are better than them!» attitude, which often turned into «We don't need your help!». Sometimes this went all the way to «Oh, it's a Gentoo [or other] developer complaining, it's all their fault and their problem, ignore them.» And let's not forget the enmity between forks (like Gentoo, Funtoo and Exherbo), in which each side tries to prove it is better than the other. A lot of conflict all over the place.

There were of course at least two main attempts to standardise parts of how a distribution works: the Linux Standard Base and FreeDesktop.org. The former is effectively a disaster, the latter is more or less accepted, but the problem lies there: in the more-or-less. Let’s look at these two separately.

The LSB was effectively a commercial effort, aimed at pleasing only the distributors of binary packages. It never really provided much assurance about the environment you could build things in, and it never invited non-commercial entities to discuss the reasoning behind the standard. In an environment like open source, the fact that the LSB became an ISO standard is not a badge of honour, but rather a worry that it's over-specified and over-complicated. Which I think most people agree it is. It also overreached by specifying the presence of particular binary libraries, rather than being a set of guidelines for distributions to follow.

And yes, although technically the LSB is still out there, the last release I could find described on Wikipedia is from 2015, and a first search didn't even turn up whether any distribution version was ever certified. Also, because of the nature of certifications, it's impossible to certify a rolling-release distribution, and rolling releases happen to be much more widespread than they used to be.

I think that one of the problems with the LSB, from both the adoption and the usefulness points of view, is that it focused entirely too much on providing a base platform for binary, commercial applications. Back when it was developed, it seemed like the future of Linux (particularly on the desktop) relied entirely on proprietary applications being able to run on it, the way they do on Windows and OS X. Since many of the distributions didn't really aim to support this particular environment, convincing them to support the LSB was clearly pointless.

FreeDesktop.org is in a much better state in this regard. They point out that what they write are not standards, but de-facto specifications. Because of this de-facto character, they started by effectively writing down whatever GNOME and Red Hat were doing, but then grew to be significantly more cross-desktop, thanks to KDE and other communities. Because of the nature of the open source community, FD.o specifications are much more widely adopted than the LSB's “standards”.

Again, compared with what I said above, FD.o provides specifications that make it easier to write, rather than run, applications. It gives you guarantees of where you should be looking for your files, which icons should render, and which interfaces are exposed. Instead of trying to provide an environment where an in-house application will keep running for the next twenty years (which, admittedly, Windows has provided for a very long time), it provides building-block interfaces so that you can create whatever the heck you want and integrate it with the rest of the desktop environments.
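To make that concrete, here is a minimal sketch of what the FD.o base-directory specification means for an application author; «myapp» is a made-up name, and the fallback paths are the ones the specification itself defines:

```
# Where an application should look for its files, per the XDG
# Base Directory specification ("myapp" is a hypothetical name).
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myapp"    # user configuration
data_dir="${XDG_DATA_HOME:-$HOME/.local/share}/myapp"   # user data
cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}/myapp"       # disposable cache
```

Any distribution following the specification makes these paths work, no matter how different the rest of the system looks.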

As it happens, Lennart and his systemd ended up standardizing distributions a lot more than LSB or FD.o ever did, if nothing else by taking over one of the biggest customization points of them all: the init system. Now, I have complained before that this would probably have been a good topic for a standard even before systemd, and independently of it, one that developers should have been following; but that's another problem. At the end of the day, there is at least some cross-distribution way to provide init system support, and developers know that if they build their daemon in a certain way, they can provide the init system integration themselves, rather than relying on the packagers.
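As a sketch of what «building a daemon in a certain way» means in practice, this is roughly the unit file an upstream project could ship alongside its sources; «mydaemon» and its path are hypothetical:

```
# Hypothetical mydaemon.service, shipped by upstream rather than by packagers.
[Unit]
Description=Example daemon with upstream-provided init integration
After=network.target

[Service]
# The daemon stays in the foreground and lets systemd do the supervision,
# instead of forking and writing PID files for every init system out there.
Type=simple
ExecStart=/usr/bin/mydaemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```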

I feel we should have had much more of that. When I worked on ruby-ng.eclass and fakegem.eclass, I tried getting the Debian Ruby team, who had voiced similar complaints before, to join me on a mailing list so that we could discuss a common interface between Gems developers and Linux distributions, but once again, that did not actually happen. My afterthought is that we should have had a similar discussion for CPAN, CRAN, PyPI, Cargo and so on… and that would probably have spared us the mess that is Go packaging.
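For reference, this is roughly what the result looks like on the Gentoo side; «somegem» is a made-up package and the details are a sketch rather than a polished ebuild, but the point is that the eclass hides the whole Gems-to-ebuild translation from the packager:

```
# Hypothetical somegem-1.2.3.ebuild, sketching the fakegem.eclass interface.
EAPI=8
USE_RUBY="ruby31 ruby32"            # Ruby implementations to build for

RUBY_FAKEGEM_EXTRADOC="README.md"   # extra documentation to install
RUBY_FAKEGEM_RECIPE_TEST="rspec3"   # how to run the upstream test suite

inherit ruby-fakegem                # fetches the .gem from rubygems.org

DESCRIPTION="An example gem, packaged once for all supported Rubies"
HOMEPAGE="https://rubygems.org/gems/somegem"
LICENSE="MIT"
SLOT="0"
KEYWORDS="~amd64"
```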

The problem is not only getting the distributions to overcome their differences, both in technical direction and in marketing; it also requires sitting at a table with the people who built and use those systems, and actually figuring out what they are trying to achieve. Because, in particular in the case of Gems and the other packaging systems, the people you should talk with are not only your distribution's users, but most importantly the library authors (whose main interest is shipping stuff so that people can use it) and the developers who use those libraries (whose main interest is being able to fetch and use a library without waiting for months). The distribution users are, for most of the biggest projects, sysadmins.

This means you have a multi-faceted problem to solve, with different roles and different needs for each. Finding a solution that does not compromise, covers 100% of the needs of all the roles involved, and requires no workflow change on anyone's part is effectively impossible. What you should be doing is focusing on the features that matter most to the roles critical to the environment (in the example above, the developers of the libraries, and the developers of the apps using those libraries), requiring the minimum amount of changes to their workflow (but convincing them to change the workflow where it really is needed, as long as it's not more cumbersome than before for no advantage), and figuring out what can be done to satisfy or change the requirements of the “less important” roles (distribution maintainers usually being that role).

Again going back to the example of Gems: it is clear by now that most of the developers never cared about getting their libraries carried by distributions. They cared about the ability to push new releases of their code fast and seamlessly, without having to learn about distributions at all. The consumers of these libraries don't, and should not, care about how to package them for their distributions or how they even interact with them; they just want to be able to deploy their application with the library versions they tested. And setting aside their trust in distributions, sysadmins only care about a sane handling of dependencies and being able to tell which version of which library is running in production, so they can upgrade it in case of a security issue. Now, the distribution maintainers can become the nexus for all these problems, and solve them once and for all… but they will have to be the ones making the biggest changes to their workflow – which is what we did with ruby-ng – otherwise they will just become irrelevant.

Indeed, Ruby Gems and Bundler, PyPI and VirtualEnv, and now Docker itself, are expressions of that: distributions themselves became a major risk and cost point, by being too different from each other and not providing an easy way to just publish one working library, and use one working library. Those roles are critical to the environment: if nobody publishes libraries, consumers have no libraries to use; if nobody consumes libraries, there is no point in publishing them. If nobody packages libraries, but there are ways to publish and consume them, the environment still stands.
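To show how low the friction is once the distribution is out of the loop, here is a sketch of the two sides of that exchange; «somegem» is again a made-up name:

```
# Publisher side: cutting a release takes minutes, no distribution involved.
gem build somegem.gemspec
gem push somegem-1.2.3.gem     # uploads the gem to rubygems.org

# Consumer side: Bundler pins the exact versions the app was tested with.
bundle install                 # resolves the Gemfile, records the result in Gemfile.lock
bundle exec ruby app.rb        # runs against the locked set, not the system gems
```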

What would I do if I could go back in time, be significantly more charismatic, and change the state of things? (And I’m saying this for future reference, because if it ever becomes relevant to my life again, I’ll do exactly that.)

  • I would try to convince people that even when technical directions diverge, discussing and collaborating is a good thing to do. No idea deserves to be dismissed as stupid, idiotic, or any other string of negative words. The whole point is that even if you don't agree on a given direction, you can still agree on others; it's not a zero-sum game!
  • Speaking of which, “overly complicated” is a valid reason not to accept one direction and take another; “we always did it this way” is not a good reason. You can keep doing it, but then you'll end up like Solaris: a very stagnant project.
  • Talk with the stakeholders of the projects that are bypassing distributions, and figure out why they are doing that. Provide “standard” tooling, or at least a proposal for how to do things in a way that keeps the distributions happy, without causing undue burden.
  • Most importantly, talk. Whether it is by organizing mailing lists, IRC channels, birds-of-a-feather sessions at conferences, or whatever else, people need to discuss the issues at hand in the open, in front of the people building the tooling and making the decisions.

I have not done any of that in the past. If I ever find myself in front of something like this again, I'll do my best to do so. Unfortunately, this is a position that, in the universe we're talking about, would have required more privilege than I had before: not only the personal training and experience to understand what should have been done, but also the means to actually meet with people and organize real-life summits. And while nowadays I have become a globetrotter, I could never have afforded that before.

Comments
  1. Hi, I agree with you on all but one point. I don't think distributions themselves are the main problem; I think the main issue is the different package formats and package names. No one will create 10 different packages for “linux”, let alone specify dependencies correctly. A standard ABI would help a lot too, but many solve this by statically linking all they need.

  2. I do believe that different distributions serve different needs to some extent, but as you say, there is a lot of unnecessary friction that arises from e.g. non-standardized directory structures, configuration files and init systems. I also agree with the problem of packaging things for different distributions. Working mostly with Python stuff, I only publish on PyPI now, since I don't have the tools, time and skills to create .deb, .rpm, .pkg, .ebuild etc. (ok, the last one I could probably manage). Unfortunately PyPI (and likely most language-specific package managers) is really bad at managing dependencies on packages written in other languages and checking for other programs. I haven't got a clue how to solve this, however, with all the different distro package formats and language-specific package managers. Any approach I can think of will have drawbacks, but allowing distro packages to easily import language-specific packages would probably be the least painful solution.

  3. I think that the best example of how things should be done is app-portage/g-cpan: that's a Gentoo package which AUTOMATICALLY creates Gentoo ebuilds from CPAN modules. If distribution authors were able to do the same for PyPI, gems, CRAN, Cargo… and – most importantly – deb and rpm (presumably sans the core system ones), and it worked well enough for most applications, no one would CARE what the native package format of a distribution is.

  4. As a JavaScript and PHP developer, and someone who occasionally plays with Haskell and Rust, I don't see language-specific package managers as a bad thing. The pace of development in JS is so fast I can hardly imagine involving my distribution's package manager in it. Sometimes there is a bug in a dependency. I can fork it, fix the bug, point package.json to my fork and use the forked version in a matter of minutes. Another challenge is incompatible versions of dependencies. I think this was mostly the reason why virtualenv was created. It's hard to imagine solutions for this via global system installs. And how are you supposed to follow the principle of encapsulation when working on a piece of software? I have much more control over dependencies when they are installed separately for each of my projects.

  5. The building of all these separate communities and their tools has been great. The hard part, getting them to talk to each other and come up with common ways of doing things, still needs to happen; it is sorely lacking.

  6. I agree with you. Overloading a distro-specific package manager with language-specific packaging is an anti-pattern IMHO. It's simply not scalable or maintainable given the fast evolution of each language community. I think we need to de-distroify most things at several levels. Most sysadmins would not like that, but, luckily for the majority of the devs, they will die out if they don't convert to DevOps.

  7. Distros' package managers come from an age when most software distribution required compilation on the target host. That is still a reasonable use case for them, possibly the only one I can think of. Having to, say, use Python bindings to Subversion on a machine that also serves Subversion from Apache theoretically requires you to compile together Python, Subversion, Apache, the Apache APR library and the Subversion Python bindings. That's assuming I didn't forget anything. In such cases prepackaging by distros is invaluable; however, it is way less relevant nowadays.

  8. After 20+ years of Linux, my take is that a distro is for the core operating system and, if it's not a server machine, the desktop environment. Security fixes, new versions etc. Everything related to software development is in the domain of language package managers. I never understood why I should get last year's gems and Ruby instead of the ones from yesterday evening. Same for the database etc. Desktop applications are mixed ground. I've got some from the apps' PPAs and some from the distro (Ubuntu). The only thing that matters is that the authors fix security bugs which can't be solved by updating the core of the OS. No need to waste the time of distro maintainers packaging photo managers, video editors, music players, etc.

  9. Being old school, I see my system as a multi-tool platform, where it's important to have stable, shared libraries and other infrastructure that supports everything on it. The alternative, proposed by you and others, is certainly more flexible: just bundle the environment you need, and as a side benefit you can swap it out quickly if needed in the future. This, however, is a tradeoff: if you have multiple such applications, you need to upgrade them and their components separately: libschmoo for appX needs a separate fix from libschmoo for appY. If you have transient, single-purpose systems, that's probably OK, but if you're trying to run a long-lived server with multiple functions, it makes life difficult. There has already been some indication that Docker containers aren't being properly updated, and they start to acquire technical debt quickly.

  10. I know the irony of another distribution being the answer to distributions becoming irrelevant, especially one that diverges so far from the standards adopted by everyone else, but this whole article reads like someone who is looking for NixOS and doesn't know it exists. Conflicts? Non-existent. Rollbacks? Easy and automatic. Reproducibility? What it was originally designed for.

  11. This is not even irony; this is, if anything, an example of how we got to this point. Because instead of saying “We decided to take different compromises so that {things} are not a problem”, you're saying “Fool! You should use us, as we're the One Distribution To Rule Them All!” I sure hope that this attitude is *not* representative of the NixOS development team, because in that case I would feel sad that clearly we've not learnt anything, and that my post did not get across at all. (But since I have seen enough people getting my point, I would venture the guess that it is you who read my post trying to find a way to turn it to your advantage; it failed, though.)

  12. My point is not that NixOS is the one distribution to rule them all (and I don't think the Nix development team thinks this either; the Nix package manager was built to be able to run on any distribution, but you don't get the same service-level/config-level integration you get with NixOS). It's that NixOS is the first distribution to take a declarative approach to defining an operating system and apply it. The FHS is fundamentally flawed because every library lives in the same place. When you upgrade one library, it can break a plethora of other packages that aren't being maintained by the central distribution, even if that change is insignificant, because of dependency pinning; this is what leads to what's called RPM hell, apt-get hell, dependency hell, etc. Some distributions (like Arch and Gentoo) are rolling and encourage people to update everything frequently. This is great, and those are probably the two distributions I used the longest in my life, but with Nix each package lives in its own directory (complete with a dependency-tree hash, so multiple versions of the same package can be in use on the same system). As such, an upgrade doesn't replace the existing package but lives alongside it, and a rollback is just changing some symlinks around. I'm not saying NixOS is a panacea that solves all distribution problems; it has its own fork (Guix) already, and the political and personal conflicts will probably never be fully solved. But it does address, at the operating system library level, the technical problems that pip, bundler, etc. try to solve, and gives you the reproducible results and ability to roll back that containers provide.

  13. Just re-read the post, think about it again, and if you need clarification feel free to ask for it. I feel you completely missed the whole point of the post.

  14. I think you are attempting to act as an agent, or do what the Black Stone is doing right now. Maybe you can learn some from the philosophy of Charlie Munger. Good luck.
