At the end of November I read an interesting in-depth look at what PIC code is, over on Planet Debian. Although it wasn’t perfect in my opinion (as I commented on that post: in particular it gives the impression that the x86 text-relocation option is better than the AMD64 PC-relative approach, which in my opinion is not the case at all), it is a good explanation for most power users still unsure of what the problem is.
Now, as you can see in that post, the problem with PIC and text relocations is usually related just to shared libraries, since executables are always loaded at the same address in memory and thus require no relocation of their symbols. While this is the case in the most common scenarios, there are cases when you want to relocate executables too, for instance if you enable randomisation of the load address as a security mitigation strategy. When that is the case, we say that PIE is enabled: Position Independent Executables.
In PIE-enabled systems, the address used to load an executable can vary between executions, which means that references to jump targets and data will no longer be at a fixed address in memory, just like in a shared library. Again you have the option of using text relocations (but since those conflict with other security mitigation strategies, it’s quite silly to do that), or you just build PIC code for the executable too. Now, this does cause a performance hit, since the symbols need to be resolved once again, but it’s still better for memory than text relocations.
It’s for this reason that I try my best to make sure that even code that is not going to be built into shared libraries is optimised so that it does not cause relocations, if at all possible.
Now, I’m sure that most users don’t need or want a PIE-enabled setup on their desktops; while the question of how useful mitigation strategies are on a desktop is a debatable topic that can go on and on for months, I just accept the fact that most users don’t want that, and that they should thus not be forced to use PIE for their software if they don’t want to. Indeed, it can cause extra dirty RSS pages for the processes using those executables, which is not something you want, especially on embedded systems.
On Gentoo we don’t enable PIE by default, which means one would expect none of the binaries installed by Portage to be built as PIC code, but this does not seem to be the case, at least not always. While I was working on my size(1) replacement last night, I found that there are binaries in /usr/bin on my system with .data.rel.ro sections. The point is that .data.rel.ro only exists when the software is built with PIC; otherwise it makes no sense and the constants are emitted in .rodata instead, so I was not expecting any .data.rel.ro sections in those executables.
So I started looking at the issue; while none of the files are Position-Independent Executables, some of them are built with the -fPIC compiler flag, mostly by mistake, when it was added to properly build shared objects. Add to that two more possible causes: --with-pic being passed to ./configure rather than leaving the choice to libtool, and -fPIC being used to compile static archives so that they can also be linked into shared libraries.
One of the packages doing this was Avahi, for which I sent a patch to Lennart, but it’s far from being the only one; I’m going to work on a few more patches in the next weeks, and hopefully I’ll be able to reduce the number. I will also have to work on an extra check for my bashrc system for the tinderbox, and consider adding that to Portage. Unfortunately just checking for .data.rel.ro is not enough: if a binary built with -fPIC has no constants, but only variables, everything will be merged into .data, with the proper relocation entries. So to decide whether an ELF file is PIC or not, one has to check the relocations too, and even that is not going to give absolute certainty, I’m afraid.
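On x86-64 the relocations themselves give a strong hint: PIC code reaches external symbols through the GOT, so its object files carry GOT-relative relocations that straight non-PIC code doesn’t emit. A minimal sketch, assuming GCC and readelf on an x86-64 system:

```shell
# Access to an external variable: non-PIC code references it
# directly, PIC code goes through the Global Offset Table.
cat > bump.c <<'EOF'
extern int counter;
int bump(void) { return ++counter; }
EOF

gcc -fno-PIC -c bump.c -o plain.o
gcc -fPIC    -c bump.c -o pic.o

# Non-PIC: a direct PC-relative reference to the symbol itself.
readelf -r plain.o | grep counter
# PIC: a GOT-mediated relocation (R_X86_64_GOTPCREL, or its
# relaxable REX_GOTPCRELX variant, depending on binutils).
readelf -r pic.o | grep GOT
```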
At any rate, I’m going to do my usual best to find out how to mitigate these problems, at least for the most common packages. As usual I strive to ensure that free software gets the best coverage even for these little things that seem important only once seen in the grand scheme of things. I guess I’ll have to find time to fetch a copy of Linkers and Loaders and read it, so I can design some more interesting optimisation tests. Maybe one day I’ll be able to find enough time to start adding some of my common tests to binutils itself.
So if “objdump -h <mybinary> | grep .data.rel.ro” prints something, I have found a package that did it wrong? I indeed have a couple of those, even among my own software. Unfortunately I can’t find any -fPIC anywhere. Any other reason you can think of? Any idea how to track it down?
There is another widespread cause that I forgot to mention in the post, and it’s old, buggy libtool. When building convenience libraries inside an autotools-based project with libtool, depending on whether they’ll be used just for binaries or also for shared libraries, they’ll have to be built with or without PIC. I haven’t dug too deep yet, but as far as I can see, libtool 2.2 builds two versions of the libraries to make sure that non-PIC code is used for the executables, while older versions only use the PIC version, causing the obvious problem of introducing PIC code into executables. I’ll try to track the issue down further today and propose a few solutions, since it’s really bad to see it this way.