Ah, LXC! Isn’t that good now?

If you’re using LXC, you might have noticed that there was a 0.8.0 release recently, finally, after two release candidates, one of which was never really released. Would you expect everything to go well with it? Hah!

Well, you might remember that over time the way you’re supposed to specify mount points in the configuration files changed, from the path you intend to use as root to the default root path used by LXC, and every time that happened, no error message told you that you were trying to mount directories outside of the tree the container runs in.

Last time, the problem I hit was that if you try to mount a volume instead of a path, LXC expected you to use the full realpath as the base, which in the case of LVM volumes is quite hard to know by heart. Yes, you can look it up, but it’s a bit of a bother. So I ended up patching LXC to allow using the old format, based on the path LXC mounts the root to (/usr/lib/lxc/rootfs). With the new release this changed again, and the path you’re supposed to use is /var/lib/lxc/${container}/rootfs — again, a change in a micro bump (rc2 to final) which is not documented… sigh. This would be enough to irk me, but there is more.
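To make the change concrete, here is how the same fstab bind mount would look before and after the bump — the source path and container name are made up for illustration:

# what my patched rc2 accepted (old default mount point):
/srv/portage /usr/lib/lxc/rootfs/usr/portage none bind 0 0

# what 0.8.0 final expects instead:
/srv/portage /var/lib/lxc/mycontainer/rootfs/usr/portage none bind 0 0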

The new version also seems to have a bad interaction with the kernel when stopping a container — the virtual ethernet device (the veth pair) is not cleaned up properly, which causes the process to stall, with something insisting on calling into the kernel and failing. The result is a not-so-happy Diego.

Even without adding in the fact that the interactions between LXC and systemd are not clear yet – the maintainers of the two projects are trying to sort out the differences between them, so at least I don’t have to care about it anytime soon – this should be enough to make it explicit that LXC is not ready for prime time, so please don’t ask.

On a different, interesting note: the vulnerability publicized today that can bypass KERNEXEC? Well, unless you disable the net_admin capability in your containers (which also means you can’t set the network parameters, or use iptables), a root user within a container could leverage that particular vulnerability… which is one extremely good reason not to let untrusted users have root on your containers.
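For reference, dropping the capability is a one-line change in the container configuration — a minimal sketch; the price, as said, is losing control of network parameters and iptables from inside:

# root inside the container loses CAP_NET_ADMIN
lxc.cap.drop = net_admin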

Oh well, time to wait for the next release and see if they can fix a few more issues.

Hard Containers — LXC and GrSecurity

Okay, so Excelsior is here and installed, and here starts the new list of troubles, which begins, as usual, with my favourite system ever: LXC, which is the basis the Tinderbox works upon.

The first problem is not strictly tied to LXC itself, but to one of the required dependencies: the lynx browser fails to build in parallel if enough cores are available (which there certainly are here!). This bug should be relatively easy to fix, but I haven’t had time to look into it just yet.

A second issue came up due to the way I was doing the install, outside of office hours: the ebuild uses the kernel sources, I think to identify which capabilities are available on the system. This should be fixed as well, so that it checks the capabilities on the installed linux-headers instead of the sources, which might not be the same.

The third issue is funny: Excelsior is going to use a hardened kernel. The reason is relatively simple to understand: it’s going to run a lot of code of unknown origins, and it’ll allow other people in, so one wants to be as safe as possible… unfortunately it seems like this is not, by default, a good configuration to use with LXC.

In particular, grsecurity is designed by default to limit what you can do within a chroot, by applying a long list of restrictions. This is fine, if not for the fact that LXC also chroots as part of its own setup process. I’m now fixing the ebuild to warn about those options that you have to (or might want to) disable in your GrSec setup to use LXC.

Interestingly, it’s not a good idea to disable all of them, since a few are actually very useful when running LXC, such as the mknod restriction, which is particularly good if you want to make sure that only a subset of the devices are accessible (even when counting in the allowed/denied access lists of the devices cgroup).
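For reference, the devices-cgroup filtering mentioned above looks something like this in a container configuration — the whitelist here is just illustrative; 1:3 and 1:5 are the usual /dev/null and /dev/zero:

lxc.cgroup.devices.deny = a              # deny everything by default
lxc.cgroup.devices.allow = c 1:3 rwm     # /dev/null
lxc.cgroup.devices.allow = c 1:5 rwm     # /dev/zero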

In particular, these have to be disabled:

  • Deny mounts (CONFIG_GRKERNSEC_CHROOT_MOUNT)
  • Deny pivot_root in chroot (CONFIG_GRKERNSEC_CHROOT_PIVOT)
  • Capability restrictions (CONFIG_GRKERNSEC_CHROOT_CAPS)

while the double-chroot restriction would be counter-synergistic, as it would prevent services within the container from further chrooting to provide a defense-in-depth approach. A quick way to check how your kernel is configured is shown below.
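Assuming the kernel was built with CONFIG_IKCONFIG_PROC, you can check all of these on the running system without digging out the sources; the first three should come out as “is not set”, while the mknod restriction can stay enabled:

zgrep -E 'GRKERNSEC_CHROOT_(MOUNT|PIVOT|CAPS|MKNOD)' /proc/config.gz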

Then there is another issue. Before starting to set up the actual tinderbox, I wanted to prepare another container, the one I’ll be using for my own purposes, including bumping Ruby packages and stuff like that. Since the system is supposed to stay very locked down, this time I want to mount the block device straight into the container, which is a supported configuration… but it turns out that the configuration parser, trying to work around old issues (yes, that’s a year-and-a-half-old post), will ignore any mount request that doesn’t have the destination prefixed with the rootfs.

Unfortunately, when your rootfs is a block device, that means you end up with a mount point along the lines of /dev/sdb1/usr/portage. This also collides with the documentation in man lxc.conf:

If the rootfs is an image file or a device block and the fstab is used to mount a point somewhere in this rootfs, the path of the rootfs mount point should be prefixed with the /usr/lib/lxc/rootfs default path or the value of lxc.rootfs.mount if specified.
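Put side by side, the two interpretations lead to rather different fstab entries — a sketch with made-up paths, assuming the Portage tree lives on its own volume:

# prefixing with the lxc.rootfs value (/dev/sdb1), as the parser wanted:
/srv/portage /dev/sdb1/usr/portage none bind 0 0

# prefixing with the default mount point, as the man page says:
/srv/portage /usr/lib/lxc/rootfs/usr/portage none bind 0 0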

Anyway, this should be fixed in 0.8.0_rc2-r2, which is now in tree. I’ve not been able to test it thoroughly yet, so holler at me if something doesn’t work.

Hardened and EFI aren’t buddies

For my work I store a huge amount of data on my systems, starting with a number of customers’ systems’ backups, which take the best part of 1TB of storage, preserved for the year or two that I usually keep them (customers are very happy that way). This unfortunately means that even the 5TB of space present in Yamato is quite restrictive for my business. It’s for this reason, and others, that I ordered the components to build myself a 12TB storage server.

For said server I decided to go with an AMD Fusion board by Sapphire, both because I’ve been disappointed by Intel’s Atom (and nVidia’s ION), and because the price was right for a board with 5 SATA ports and an eSATA one — four 3TB Western Digital Caviar Green disks, one Kingston SSD that I still had lying around, plus a WD MyBook Studio II that is currently connected to Yamato. In particular, for those asking: the fact that most of the “high end” Atoms – such as the D510 I have on my frontend – do not support CPU speed throttling is appalling; the AMD Fusion board I have does not have that problem.

Anyway, the board itself looks good and works fine — and it was curious for me to notice that it uses UEFI 2.1, rather than BIOS, as its firmware… it’s the first desktop-class board I’ve found with such a setup (admittedly I haven’t bought many, lately). Also, while the IOMMU feature is disabled by default, the help text for it in the configuration pages states that it is useful to translate I/O operations under Linux… yes, it explicitly mentions Linux in the help text! Kudos to them!

Out of experience with Yamato, when you have so many disks in a complex setup (RAID1 and LVM), booting can be troublesome; in Yamato’s case a bit more so, as the order the disks are listed in depends, among other things, on the order in which they are detected by the kernel, which can change from version to version since the box uses three different controllers and three different drivers. For this reason I was hoping to use UEFI-style booting through grub2 rather than the classic old BIOS boot…

Turns out it wasn’t that good an idea; even with scarabeus’s guide, I was unable to get it to work. It seems that to set it up properly you need to boot an EFI-capable kernel (which the one in SysRescueCD, that I usually use to install my systems, is not), and boot it EFI-style. If you were to boot it without EFI support, you wouldn’t be able to find the EFI variables in the /sys hierarchy anyway. I ended up discarding the whole idea, since in the end grub2 can make sense of the RAID devices and can find my rootfs just fine.

But there is one thing to note here that is quite important. When I configured my kernel for the first time, I enabled EFI support; then I enabled the hardened features (GRSec and PaX)… and then, when I booted it the first time, I went to look for the EFI variables without finding them — I didn’t yet understand that booting via PC BIOS wouldn’t have shown them anyway! Turns out that the KERNEXEC option that is so troublesome with virtualisation… is also incompatible with EFI. If you enable that option (which both the server and workstation configurations in hardened-sources enable), the EFI support in the kernel is entirely shut off.
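As a quick sanity check: the kernel exposes a directory under /sys only when it was actually booted through EFI, so something along these lines would have saved me the head-scratching:

# if this directory exists, the kernel was booted via EFI;
# if it's missing, you came up through the legacy BIOS path
[ -d /sys/firmware/efi ] && echo "booted via EFI" || echo "booted via BIOS"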

So it appears it’s still too soon for EFI: we lack tools stable enough, to begin with, and documentation (I had to rely on ArchLinux’s documentation for the most part, but even that is not very clear and refers to black-magic tricks; I refrain from writing documentation on the topic myself because I don’t fully understand it yet, even after reading Matthew Garrett’s posts on the topic). And it seems to require the kernel to perform unsafe operations (RWX mappings), which is definitely not a good thing.

I start to understand Matthew’s feelings with regard to EFI: it might end up giving us a more normal boot process once the transition is complete, but between the limitations of backward compatibility and no real good feature that we’re otherwise missing – with the exception of nvram support – it doesn’t look like it’s something we should be paying much attention to, for now.

If my router was a Caterpie, it’s now a Metapod

And this shows just how much of a geek I can be, using as the title for the post one of the Pokémon whose only natural attack is… Harden!

Somebody pointed out that I’m getting more scrupulous about security lately, writing more often about it and tightening my own systems. I guess this is a good thing, as becoming responsible for this kind of stuff is important for each of us: if Richard Clarke scared us with it, we’re now in the midst of an interesting situation with the Stuxnet virus, which gained enough attention that even BBC World News talked about it today.

So what am I actually doing about this? Well, besides insisting on fixing packages when there are even possible security issues, which is a solution for the environment in general, I’ve decided to start hardening my systems, beginning with the main home router.

You might remember my router, as I wrote about it before; but to refresh your mind, and explain it to those who didn’t read about it before: my router is not an off-the-shelf blackbox, nor is it a reflashed off-the-shelf box running OpenWRT or similar firmware. It is, for the most part, a “standard” system: a classic IBM PC-compatible with a Celeron D as CPU, 512MB of RAM and, instead of standard HDDs or SSDs, a good old-fashioned CompactFlash card with a passive adapter to EIDE.

As “firmware” (or I guess in this case we should call it an operating system) it has always used a pre-built Gentoo; I’m not using binpkgs, but rather building the root out of a chroot. Originally it used a 32-bit system without fortified sources — as of tonight it runs Gentoo Hardened, 64-bit, with PaX and ASLR, full PIE and full SSP enabled. I guess a few explanations for the changes are in order.

First of all, why 64-bit? As I described, there is half a gigabyte of RAM, which fits 32-bit just nicely, no need to get over the 4GiB mark; and a network router is definitely not the kind of appliance where you expect powerful CPUs to be needed. So why 64-bit? Common sense says that 64-bit code requires more memory (bigger pointers) and has an increased code size, which both increases disk usage and causes caches to fill up earlier. Indeed, at first glance it seems like this does not fall into the two most common categories for which 64-bit is suggested: databases (for the memory space) and audio/video encoding (for the extra registers and instructions). Well, I’ll add a third category: a security-oriented hardened system of any kind, as long as ASLR is in the mix.

I have written my doubts about ASLR’s usefulness before — well, time passes, and one year later I start to see why ASLR can be useful, mostly when you’re dealing with local exploits. For network services, I still maintain that you cannot solve much with ASLR without occasionally restarting them, since fewer and fewer of them actually fork one process from another, while most will nowadays prefer threads to processes for multiprocessing (especially considering the power of modern multicore, multithread systems). But for ASLR to actually be useful you need two things: relocatable code and enough address space to actually randomize the load addresses; the latter is obviously provided by a 64-bit address space (or is it 48-bit?) versus the 32-bit address space x86 provides. Let’s consider the former a bit.

In the post I linked above, you can see that to have ASLR you end up either with text relocations on all the executables (which are much more memory-hungry than standard executables — and collide with another hardened technique) or with Position-Independent Executables (PIE), which are slightly more memory-hungry than normal (because of relocations) but also slower, because they sacrifice at least one extra register to build PIC. Well, when using x86-64, you’re saved from this problem: PIC is so much part of the architecture that there isn’t really much to sacrifice when building PIC. So the bottom line is that to use ASLR, 64-bit is a win.
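Seeing ASLR in action doesn’t need special tooling either; assuming the kernel has randomization enabled, the same mapping lands at a different address on every run (each grep below is a fresh process reading its own address map):

# run twice: with ASLR active, the stack mapping moves between runs
grep '\[stack\]' /proc/self/maps
grep '\[stack\]' /proc/self/maps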

But please, repeat after me: the security enhancement is ASLR, not PIE.

Okay, so that covers half of it; what about SSP? Well, Stack-Smashing Protection is a great way to… have lots to debug, I’m afraid. While nowadays there should be far fewer bugs, since the wide use of fortified sources has already caused a number of issues to be detected even by those not running a hardened compiler, I’m pretty sure sooner or later I’ll hit some bug that nobody hit before, mostly out of my bad karma, or maybe just because I like using experimental things, who knows. At any rate, it also seems to me the most important protection here: if anything tries to break the stack boundary, kill it before it can become something more serious. If it’s a DoS, well, it’s annoying, but you don’t risk your system being used as a spambot (and we definitely have enough of those!) — at least as far as C code is concerned; it does no good for bad codebases in other languages, unfortunately.

Now, the two techniques combined require a huge amount of random data, and that data is fetched from the system entropy pool; given that the router is not running with an HDD (which has non-predictable seek times and is thus a source of entropy), has no audio or video devices to use, and has no keyboard/mouse to gather entropy from, an entropy-depletion attack wouldn’t be too far-fetched. Thankfully, I’m using an EntropyKey to solve that problem.
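If you want to keep an eye on the pool yourself, the kernel exports its current estimate, and on a headless, diskless box like this one it’s worth watching:

# bits of entropy the kernel currently estimates in its pool
cat /proc/sys/kernel/random/entropy_avail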

Finally, to be on the safe side, I enabled PaX (which, I keep repeating, has a much more meaningful name in the OpenBSD implementation: W^X), which ensures that pages of executable code are non-writeable and, vice versa, that writeable pages are non-executable. This is probably the most important mitigation strategy I can think of. Unfortunately, the Celeron D has no NX bit support (heck, it came out way after my first Athlon64 and it lacks such a feature? Shame!), but PaX does not have that much of a hit on a system like this, which mostly idles at 2% of CPU usage (even though I can’t seem to get the frequency scaler to work at all).
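Whether your CPU has the hardware bit is easy to check; if the flag is missing, PaX falls back to emulating non-executable pages in software, which is where the (small) overhead comes from:

# look for the 'nx' flag among the CPU features
grep -wo nx /proc/cpuinfo | head -1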

One thing I had to be wary of: enabling UDEREF actually caused my router not to start, reporting memory corruption when init started… so if you see a problem like that, try disabling it.

Unfortunately, this only protects me on the LAN side, since the WAN is still handled through a PCI card that is in truth only a glorified Chinese router running a very old 2.4 kernel… which makes me shiver just thinking about it. Luckily there is no “trusted” path from there to the rest of the LAN. On the other hand, if somebody happens to have an ADSL2+ card that can be used with a standard Linux system – with the standard kernel and especially no extra modules – I’d be plenty grateful.

More details on how I proceeded to configure the router will come in further posts; this one is long enough on its own!

The PIE is not exactly a lie…

Update (2017-09-10): The bottom line of this article has changed in the 8 years since it was posted, quite unsurprisingly. Nowadays the vanilla kernel has decent ASLR, so everyone actually benefits from building everything as PIE. Indeed, that’s exactly what Arch Linux, and probably most other binary distributions, do. The rest of the technical description of why this is important, and how, is still perfectly valid.

One very interesting misconception related to Gentoo, and especially to the hardened sub-profile, concerns PIE (Position-Independent Executable) support. This is probably due to the fact that up to now the hardened profile has always contained PIE support, and since it relates directly to PIC (Position-Independent Code), and PIC as well is tied back to hardened support, people tend to confuse which technique is used for which purpose.

Let’s start by remembering that PIC is a compilation option that produces so-called relocatable code; that is, code that is valid no matter what base address it is loaded at. This is a particularly important feature for shared objects: to be loadable by any executable while still sharing their code pages in memory, the code needs to be relocatable; if it’s not, a text relocation has to happen.

Relocating the “text” means changing the executable code segment so that the absolute addresses (of both functions and data — variables and constants) are correct for the base address the segment was loaded at. Doing this causes a copy-on-write for the executable area, which, among other things, wastes memory (each running process will have its own private copy of the executable memory area, as well as of the variable data memory area). This is the reason why shared objects in almost any modern distribution are built relocatable: faster load time and reduced memory consumption, at the cost of sacrificing a register.

An important note here: sacrificing a register – which PIC needs in order to keep the base address of the loaded segment – is a minuscule loss for most architectures, with the notable exception of x86, where there are very few general-purpose registers to use. This means that while PIC code is slightly (but not notably) slower on any other architecture, it is a particularly heavy hit on x86, especially for register-hungry code like multimedia libraries. For this reason, shared objects on x86 might still be built without PIC enabled, at the cost of load time and memory, while for most other architectures the linker will refuse to produce a shared object if the object files are not built with PIC.
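Incidentally, the scanelf tool from pax-utils makes it easy to spot shared objects that were built without PIC; anything it flags here is paying the copy-on-write price described above:

# list shared objects that carry text relocations (i.e. non-PIC builds)
scanelf -qt /usr/lib/*.so*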

Up to now, I said nothing about hardened at all, so let me introduce the first relation between hardened and PIC: it’s called PaX in Hardened Linux, but the same concept is called W^X (Write xor eXecute) in OpenBSD – which is probably the most descriptive name, for a programmer – NX (No eXecute) in CPUs, and DEP (Data Execution Prevention) in Windows. To put it in layman’s terms, what all these technologies do is more or less the same: they make sure that once a memory page is loaded with executable code, it cannot be modified, and, vice versa, that a page that can be modified cannot be executed. This is, like most of the features of Gentoo Hardened, a mitigation strategy that limits the effects of buffer overflows in software.

For NX to be useful, you need to make sure that all the executable memory pages are loaded and set in stone right away; this makes text relocations impossible (since they consist of editing the executable pages to change the absolute addresses), and also hinders some other techniques, such as Just-In-Time (JIT) compilation, where executable code is created at runtime from a higher, more abstract language (both Java and Mono use this technique), and C nested functions (or at least the current GCC implementation, which makes use of trampolines and thus requires an executable stack).
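The nested-function case is worth a tiny example; taking the address of the inner function forces GCC to build a trampoline on the stack, which is exactly the kind of writable-and-executable page PaX forbids (a sketch using the GNU C extension):

#include <stdio.h>

int main(void) {
    int base = 10;

    /* nested function: a GNU C extension, not standard C */
    int add_base(int x) { return x + base; }

    /* taking its address forces GCC to emit a stack trampoline,
       which requires an executable stack — and trips over NX/PaX */
    int (*fp)(int) = add_base;

    printf("%d\n", fp(32)); /* prints 42 */
    return 0;
}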

Does any of this mean that you need PIC-compiled executables (which is what PIE means) to make use of PaX/NX? Not at all. In Linux, by default, all executables are loaded at the same base address, so once the code is built, it doesn’t have to be relocated at all. This also helps optimising the code for the base case of no shared objects being used, as that’s not going to have to deal with PIC-related problems at all (see this old post for more detailed information about the issue).
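As an aside, you can tell the two kinds of executables apart from the ELF header: a PIE is, at the ELF level, a shared object with an entry point:

# a fixed-address executable reports Type: EXEC,
# a PIE reports Type: DYN (like a shared object)
readelf -h /bin/ls | grep 'Type:'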

But in the previous paragraph I already gave some clue as to what the PIE technique is all about: as I said, the reason why PIE is not necessary is that by default all executables are loaded at the same address; but if they weren’t, then they’d need either text relocations or PIC (PIE), wouldn’t they? That’s indeed the reason why PIE exists. Now, the next question would be: how does PIE relate to hardened? Why does the hardened toolchain use PIE? Does using it make it magically possible to have a hardened system?

Once again, no, it’s not that easy. PIE is not, by itself, either a security measure or a mitigation strategy. It is, instead, a requirement for the combined use of two mitigation strategies: the first is the above-described NX idea (which rules out the idea of using text relocations entirely), while the second is ASLR (Address Space Layout Randomization). To put this technique in layman’s terms as well: consider that a lot of exploits require you to change the address a variable points to, so you need to know both the address of that variable and the address to point it to; to find this stuff out, you can usually try and try again until you find the magic values, but if you randomize the addresses where code and data are loaded each time, you make it much harder for the attacker to guess them.
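On Linux the randomization itself is a kernel-side switch, independent of how the binaries were compiled; a quick way to see which mode you’re running in:

# 0 = ASLR off, 1 = randomize stack/mmap/vdso,
# 2 = additionally randomize the heap (brk)
cat /proc/sys/kernel/randomize_va_space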

I’m pretty sure somebody here is already ready to comment that ASLR is not a 100% safe security measure, and that’s absolutely right. Indeed, here we have to note in which situation this really works out decently: local command exploits. When attacking a server, you’re already left to guess the addresses (since you don’t know which of the many possible variants of the same executable the server is using; two Gentoo servers rarely have the same executable either, since they are rebuilt on a case-by-case basis — and even with the same exact settings, the different build time might cause different addresses to be used); and at the same time, ASLR only changes the addresses between two executions of the same program: unless the server uses spawned (not cloned!) processes, like inetd does (or rather did), the address space between two requests on the same server will be just the same (as long as the server doesn’t get restarted).

At any rate, when using ASLR, executables are no longer all loaded at the same address, so you either have to relocate the text (which is denied by NX) or you’ve got to use PIE, to make sure that the addresses are all relative to the chosen base address. Of course, this also means that, at that point, all the code is going to be PIC, losing a register and thus slowing down (a very good reason to use x86-64 instead of x86, even on systems with less than 4GiB of RAM).

Bottom line of the explanation: using the PIE component of the hardened toolchain is only useful when you have ASLR enabled; that’s the reason why the whole hardened profile uses PIE. Without ASLR, you will have no benefit in using PIE, but you’ll have quite a few drawbacks (especially on the old x86 architecture) due to building everything PIC. And this is also the reason why software that enables PIE by itself (even conditionally), like KDE 3, is doing silly stuff on most users’ systems.

And to make it even clearer: if you’re not using hardened-sources as your kernel, PIE will not be useful. This goes for vanilla, gentoo, xen and vserver sources all the same. (I’m sincerely not sure how this behaves when using Linux containers and hardened sources.)

If you liked this explanation, which cost me some three days’ worth of time to write, I’m happy to receive appreciation tokens — yes, this is a shameless plug, but it’s also a reminder that stuff like this is the reason why I don’t write structured documentation and stick to simple, short, to-the-point blogs.

Respecting CFLAGS and CXXFLAGS, reality testing!

As I wrote in my post Flags and flags, I think that one way out of the hardened problem would be to actually respect the CFLAGS and CXXFLAGS the user requests, so that they actually apply to the ebuilds. Unfortunately, not all the ebuilds in the tree respect the flags, and finding out which ones do and which ones don’t hasn’t been, up to now, an easy task.

There are many ways to check for this. The most common one is to look at the build output and spot that the compile lines lack your custom flags, but this is difficult to automate. Another option is to inject a fake definition (-DIWASHERE) and grep for it in the build logs, but that falls apart once you consider that a package might ignore CFLAGS for just a subset of its final outputs.

While I was without Enterprise I spent some time thinking about this and I came to find a possible solution, which I’m going to experiment on Yamato, starting tonight (which is Friday 29th for what it’s worth).

The trick is that GCC provides a flag, -include, that allows you to inject an extra header file, unknown to the rest of the code. With a properly structured file, you can easily inject some beacon that you can later pick up.
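For those who haven’t used it, the flag behaves as if every translation unit started with an #include of the given file; a trivial illustration with made-up file names:

# equivalent to adding ‘#include "beacon.h"’ at the top of foo.c
gcc -include beacon.h -c foo.c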

And with a proper beacon injected into the built object files, it shouldn’t be a problem to check, using scanelf or similar tools, whether the flags were respected.

The trick here is all in the choice of the beacon and in looking it up; the first requirement for a proper beacon is that it does not intrude on or disrupt the code or the compilation. This means it has to have a name that is not common, so that it does not risk colliding with other pieces of code, and it must not clash between different translation units.

To solve the first problem, the name can simply be so long that it’s impractical for somebody to have used it for a function or variable name; let’s say we call our beacon cflags_test_cflags_respected. This is the first step, but it still doesn’t solve the problem of clashing translation units. If we were to write it like this:

const int cflags_test_cflags_respected = 1234;

two translation units with that in them, linked together, would cause a linker error that stops the build. This cannot happen, or it would make our test useless. The solution is to make the symbol a common symbol. In C, common symbols are usually the ones that are declared without an initialisation value, like this:

int cflags_test_cflags_respected;

Unfortunately this syntax doesn’t work in C++, as the notion of a common symbol hasn’t crossed that language barrier. Which means that we have to go deeper in the stack of languages to find a way to create the common symbol. It’s not difficult, once you decide to use assembly:

asm(".comm cflags_test_cflags_respected,1,1");

will create a common symbol of size 1 byte. It won’t be perfect, as it might increase the size of the .bss section of a program by one byte, and thus mess up previously .bss-free programs, but we’re interested in the tests rather than the performance, as of this moment.

There is still one little problem, though: the asm construct is not accepted in C99 mode, so we have to use the alternative keyword __asm__ instead, which works in just the same way.
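A quick sanity check that the beacon behaves as intended — nm marks common symbols with a C (file names made up for the test):

printf '__asm__(".comm cflags_test_cflags_respected,1,1");\n' > beacon-test.c
gcc -c beacon-test.c -o beacon-test.o
nm beacon-test.o | grep cflags_test    # expect a 'C' (common) symbol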

But before we go on with this, there is something else to take care of. As I wrote in the entry linked at the start of this one, there are packages that mix CFLAGS and CXXFLAGS. While we’re at it, it’s easy to add some more test beacons that track down for us whether the package used CFLAGS to build C++ code or CXXFLAGS to build C code. With this in mind, I came to create two files, flagscheck.h and flagscheck.hpp, respectively to be injected through CFLAGS and CXXFLAGS.

flame@yamato ~ % sudo cat /media/chroots/flagstesting/etc/portage/flagscheck.h
#ifdef __cplusplus
__asm__(".comm cflags_test_cxxflags_in_cflags,1,1");
#else
__asm__(".comm cflags_test_cflags_respected,1,1");
#endif
flame@yamato ~ % sudo cat /media/chroots/flagstesting/etc/portage/flagscheck.hpp
#ifndef __cplusplus
__asm__(".comm cflags_test_cflags_in_cxxflags,1,1");
#else
__asm__(".comm cflags_test_cxxflags_respected,1,1");
#endif

And here we are; now it’s just a matter of injecting these through the variables and checking the output. But I’m still not satisfied. There are packages that mistakenly save their own CFLAGS and propagate them to other packages that link against them; to avoid these falsifying our tests, I’m going to make the injection unique at the package level.

Thanks to Portage, we can create two functions in the bashrc file, pre_src_compile and post_src_compile. In the former, we link the two header files into the package’s ${T} directory (the temporary directory), then mess with the flags variables to insert the -include command. This way, each package gets its own particular path; when a library passes its own saved CFLAGS to another package, that package will fail to build.

pre_src_compile() {
    # give each package its own copy of the beacons under its temporary
    # directory, so flags saved and propagated by another package fail
    # the build instead of silently passing the test
    ln -s /etc/portage/flagscheck.{h,hpp} "${T}"

    CFLAGS="${CFLAGS} -include ${T}/flagscheck.h"
    CXXFLAGS="${CXXFLAGS} -include ${T}/flagscheck.hpp"
}

After the build completes, it’s time to check the results. Luckily, pax-utils contains scanelf, which makes it a piece of cake to check whether one of the four symbols is defined, or whether none is (and thus all the flags were ignored); the one-line function is as follows:

post_src_compile() {
    # recursively scan the relocatable objects produced by the build
    # for any of the four beacon symbols
    scanelf "${WORKDIR}" \
        -E ET_REL -R -s \
        cflags_test_cflags_respected,cflags_test_cflags_in_cxxflags,cflags_test_cxxflags_respected,cflags_test_cxxflags_in_cflags
}

At this point you just have to look for the ET_REL output:

ET_REL cflags_test_cflags_respected /var/tmp/portage/sys-apps/which-2.19/work/which-2.19/tilde/tilde.o 
ET_REL cflags_test_cflags_respected /var/tmp/portage/sys-apps/which-2.19/work/which-2.19/tilde/shell.o 
ET_REL  -  /var/tmp/portage/sys-apps/which-2.19/work/which-2.19/getopt.o 
ET_REL cflags_test_cflags_respected /var/tmp/portage/sys-apps/which-2.19/work/which-2.19/bash.o 
ET_REL  -  /var/tmp/portage/sys-apps/which-2.19/work/which-2.19/getopt1.o 
ET_REL cflags_test_cflags_respected /var/tmp/portage/sys-apps/which-2.19/work/which-2.19/which.o

And it’s time to find out why getopt.o and getopt1.o are not respecting CFLAGS while the rest of the build is.