It is difficult to be angry at Linus for anything, on account of how much good he did, and is doing, for us with the whole idea of Linux – it’s not like waiting for a GNU kernel would have helped – but honestly I’m quite bothered that he ended up treating the bump of the kernel’s version number as if it were just a matter of public relations, without considering the technical side of it: software relies on version numbers being, you know, meaningful. Which, honestly, reminded me of something, but let’s not get ahead of ourselves.
Let’s ignore the fact that kernel modules started to fail — they would probably have failed anyway because of API changes, without the need for anything as sophisticated as a version bump to 3. And let me be clear on one thing at least: it’s not the build failures that upset me – as I said last year, I prefer it when packages fail at build time rather than, subtly, at runtime.
At any rate, let’s begin with the first reason why you should not rely on
uname results: cross-compilation. Like it or not, cross-compilation is still a major feature of a good build system. Indeed, my main sources of revenue in the past five years have involved at least some kind of cross-compilation, and not just for embedded systems, which is why I stress so often the importance of build systems that handle cross-compilation well.
So what happens with build systems and Linux 3? Well, let’s just say that if you try to distinguish between Linux 2.4 and 2.6, you should not check for
major == 2 && minor >= 4 or something along those lines: once the major version becomes 3, such a test silently stops matching. A variation of this is what happened with ISC DHCP, whose build system didn’t consider any major version besides 2.
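A sketch of why such checks break – the parsing below is illustrative, not any particular package’s actual configure logic:

```shell
#!/bin/sh
# Hypothetical version check: parse a kernel release string and decide
# whether we are on "2.6 or newer". The release value is a stand-in
# for what `uname -r` reports on a Linux 3 system.
kernel="3.0.4"
major=${kernel%%.*}        # "3"
rest=${kernel#*.}          # "0.4"
minor=${rest%%.*}          # "0"

# Broken: "2.6 or newer" written with the major pinned to 2.
if [ "$major" -eq 2 ] && [ "$minor" -ge 6 ]; then
    broken=linux26
else
    broken=generic         # Linux 3.0 falls through to the generic code!
fi

# More robust: compare the whole (major, minor) tuple numerically.
if [ "$major" -gt 2 ] || { [ "$major" -eq 2 ] && [ "$minor" -ge 6 ]; }; then
    robust=linux26
else
    robust=generic
fi
echo "broken=$broken robust=$robust"
```

On a 3.0.4 release string the first test silently selects the generic code path, while the tuple comparison still recognizes the kernel as “2.6 or newer”.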
Now in the case of DHCP, it’s a simple failure: the build system refuses to build at all when it doesn’t understand the
uname results. But there are a number of other situations where the problem is not as clear-cut, because the test is there to enable features, backends, or special support for Linux 2.6, and hitting an unknown version causes generic (or, even worse, dummy!) code to be built. These situations are almost impossible to identify without actually using the software itself; even tinderbox testing is useless most of the time here, as the test suites are probably also throttled down so that they don’t hit the Linux-specific codepaths.
And don’t worry, there are enough build systems designed so badly that this is not a remote, theoretical risk. Take the build system of upower (and devicekit-power) before today: it decided which of its few backends to enable by checking for the presence of certain header files on the system used for the build – which by itself hinders cross-compilation – and if it found no matching combination of files, it built the dummy backend. For the curious, I’ve sent today a patch – which Richard Hughes applied right away, thanks Richard! – for upower to choose which backend to build based on the
$host value that is handed over by
autoconf, which finally makes it cross-compilable without passing extra parameters to
./configure (even though the override is still available of course).
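The idea behind the fix can be sketched in plain shell; this is an illustrative stand-in for configure logic, with an invented backend list rather than upower’s real configure.ac:

```shell
#!/bin/sh
# Illustrative sketch: select a backend from the autoconf $host triplet
# instead of probing for header files on the build machine, so that
# cross-compilation picks the backend of the *target* system.
host="arm-unknown-linux-gnueabi"   # what autoconf derives from --host

case "$host" in
    *-linux*)   backend=linux ;;
    *-freebsd*) backend=freebsd ;;
    *-darwin*)  backend=darwin ;;
    *)          backend=dummy ;;   # explicit fallback, not a silent default
esac
echo "backend=$backend"
```

With this approach a build configured with --host=arm-unknown-linux-gnueabi gets the Linux backend even when the build machine is not Linux at all, and the dummy backend is only ever an explicit last resort.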
How long will it take for all the bugs to be sorted out? I’m afraid that’s impossible for me to answer. We might end up finding more bugs a year from now, just as we might not find any in the next six months… unfortunately not all projects update at the same pace. I have an example of that right in front of my eyes: my laptop’s HSDPA modem, which includes a GPS module, is well supported by the vanilla kernel as far as the network connection is concerned… but GPS support is still lacking. While there is a project to support these cards, the userland GPS driver still relies on HAL rather than the new udev, which makes it quite useless as far as I’m concerned.
So anyway, next time you write a build system, do not even consider
uname … and if your build system already relies on it, please fix it — and if you write an ebuild that relies on aleatory data such as
uname results or the presence of given files (which is different from checking for given headers, be warned!), make sure that there is an override and… use it.
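The override pattern can be sketched in a few lines of shell; the MY_PKG_BACKEND variable and the detection function are invented here for illustration, they are not taken from any real package:

```shell
#!/bin/sh
# Hypothetical sketch of detection-with-override: autodetection only
# provides a *default*, and a variable can always force the result.
detected_backend() {
    # Ordinary autodetection from `uname -s` — unreliable when
    # cross-compiling or when building in a chroot for another system.
    case "$(uname -s)" in
        Linux) echo linux ;;
        *)     echo dummy ;;
    esac
}

# MY_PKG_BACKEND is an assumed override knob (not a real variable of
# any specific package): if the user sets it, trust it over detection.
backend=${MY_PKG_BACKEND:-$(detected_backend)}
echo "backend=$backend"
```

An ebuild (or any caller) can then pin the result explicitly, e.g. `MY_PKG_BACKEND=linux ./configure`, instead of trusting whatever the build host happens to report.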
Diego, don’t you think that this bump in the version numbering scheme, even though it actually breaks some userspace, is a good thing? Well, I do. If even key system components like udev rely on broken assumptions, forcing them to get fixed is a move for the better. And the fact that this also helps cross-compiling is a nice side-effect bonus, especially in a world that becomes increasingly virtualized, where cross-compiling one way or the other will become more commonplace. If your disagreement is aimed at the addition of that curious legacy compatibility personality added recently, well, that’s a different point. It will help the lazy admins we are to bypass the runtime shortcomings of certain binary packages without hindering the mainstream from proceeding, even though that’s basically a non-Gentoo issue, I assume.
I definitely do not think that it’s a good thing to break compatibility _without a forewarning and without thinking it through_. It was *obvious* to a distribution developer what the change in versioning scheme would entail, but Linus only cared about the public-facing changes, not what would change behind everybody’s back. This is the kind of work that requires planning, but there has been none: compatibility was broken without any planning and without the tools to identify said compatibility issues.
Actually, the fact that quite a lot of hardware that only ever had Linux 2.6(.0) support through some proprietary blob from the early 2000s now looks obsolete is a good sign – have you tried determining the ink levels of an Epson printer? Epson provides software for Linux, only it dates from a time when udev didn’t exist, and it doesn’t work with anything newer than dirt. The fact that there have been such serious changes to both kernel and userland pretty much warrants a major version number bump anyhow.
The kernel guarantees backwards compatibility for its ABI. So if you know you rely on something that was new in 2.6.39, then do: if ($x < 2 || ($x == 2 && ($y < 6 || ($y == 6 && $z < 39)))) FAIL; else succeed. Future versions are guaranteed to work, so don’t even bother with guessing the versioning scheme of future kernels.
If you don’t agree with him, there is Windows.
The fact that lazy programmers don’t consider updates is not a reason to avoid changing the kernel version. That argument SHOULD have ended with the whole Y2K flap.
MBM modems should work without this “driver”; you can start GPS via AT commands (yep, a bit boring) as described in http://www.thinkwiki.org/wi… — my F5521gw works a bit unstably at this time…
I have run into this class of problems (“bug here”:https://bugs.gentoo.org/sho…). This package fails subtly at build time — it silently omits to compile a library, without producing an error. It also fails at runtime, with a nondescriptive NameError. Anyhow, this package can be fixed. What worries me more is that this particular bug is a *second-order* effect of the major number upgrade. Namely, a Python *compiled* under linux-3.x will have ‘linux3’ as its @sys.platform@. At least one package (pybluez) checks Python’s @sys.platform@ at compile time and at runtime and compares it to ‘linux2’. So now the bug’s reproducibility depends on the environment in which the Python interpreter was compiled — it has become detached from _the actual running kernel’s version_! Subtle, indeed.
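A minimal sketch of the difference between the two checks; the function names are invented for illustration, and the ‘linux2’/‘linux3’ values reflect how sys.platform behaved before Python 3.3 normalized it to plain ‘linux’:

```python
import sys

# On Linux, older Pythons baked the *build-time* kernel major version
# into sys.platform: "linux2" under a 2.x kernel, "linux3" under 3.x.
# (From Python 3.3 onwards it is always just "linux".)

def is_linux_broken(platform: str) -> bool:
    # The pybluez-style check: equality against one historical value.
    return platform == "linux2"          # misses "linux3" and "linux"

def is_linux_robust(platform: str) -> bool:
    # Prefix check that matches every variant the interpreter may report.
    return platform.startswith("linux")

for p in ("linux2", "linux3", "linux"):
    print(p, is_linux_broken(p), is_linux_robust(p))
```

The prefix check keeps working regardless of which kernel the interpreter was compiled under, which is exactly the decoupling the comment above is asking for.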