Updating firmware on a Crucial M4 drive, with Linux, no luck

When you think of SSD manufacturers, it might be obvious to think of them as Linux friendly, given they target power users, and Linux users are for the most part power users. It seems this is not quite true for Crucial. My main personal laptop has had, since last year, a 64GB Crucial M4 SSD – given I’ve not been using a desktop computer for a while it does start to feel small, but that’s a different point – which has undergone a couple of firmware updates since then. In particular, there is a new release, 070H, that is supposed to fix some nasty power saving issues.

Crucial only provides firmware update utilities for Windows 7 and 8 (two different versions of them), plus a utility for “Windows and Mac” — the latter is actually a ZIP file that contains an ISO image. Well, I don’t have a CD to burn with me, so my first option was to run the Windows 7 updater from my Windows 7 install, which resides on an external SATA hard drive. No luck with that: from what I read on the forums, the updater simply sets up an EFI application to boot, and then reboots. Unfortunately, in my case there are two EFI partitions, one for each bootable drive, and that most likely confuses the updater.

Okay, strike one; let’s look at the ISO file. The ISO is very simple and very small: it is basically just an ISOLINUX tree that uses memdisk to launch a 2.88MB image file (2.88MB is a semi-standard floppy disk size that never really became popular for physical disks, but has been used by most virtual floppy images on bootable CD-ROMs for the extra room). So what’s in the image file? Nothing surprising: it’s a FreeDOS image with Crucial’s own utility and the firmware.

So, if you remember, I have some experience with updating BIOSes through FreeDOS images, and my trusty USB stick with FreeDOS is arriving on Tuesday with most of the personal effects that were waiting in Italy for me to find a final place to move my stuff to. But I wanted to see if I could boot the image file without waiting for the FreeDOS stick, so I checked. GRUB 2 does not include a direct way to load such an image — the reference to memdisk in its manual covers the situation where you’re loading a standalone or rescue GRUB image, nothing to do with what we care about.

The most obvious way to run the image is through SYSLINUX’s memdisk loader. Debian even has a package that lets you just drop images in /boot/images and adds them to the GRUB menu. Quick and easy, no? Well, no. The problem is that memdisk needs to be loaded with linux16 — and GRUB 2 does not support that when booting via EFI, like I am.
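For reference, on a BIOS system the menu entry would look something like the following sketch — the image path and entry title are my assumptions, not what the Debian package generates:

```
# /etc/grub.d/40_custom — hypothetical entry for a BIOS boot;
# linux16/initrd16 are exactly what GRUB 2 refuses to run under EFI.
menuentry "Crucial M4 firmware update" {
        linux16 /boot/memdisk
        initrd16 /boot/images/crucial-m4-070h.img
}
```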

I guess I’ll wait until Tuesday night, when my BIOS disk will be here, and I’ll just use it to update the SSD. It should solve the issue once and for all.

Protecting yourself from R-U-Dead-Yet attacks on Apache

Do you remember the infamous “slowloris” attack against HTTP web servers? Well, it turns out there is a new variant of the same technique that, rather than making the server wait for headers to arrive, makes it wait for POST data before processing; it’s difficult to explain exactly how it works, so I’ll leave it to the expert explanation from ModSecurity.

Thankfully, since there was a lot of work done to cover the slowloris attack, there are easy protections to put in place, the first of which is the use of mod_reqtimeout… unfortunately, it isn’t currently enabled by the Gentoo configuration of Apache – see bug #347227 – so the first step is to work around this limitation. Until the Gentoo Apache team appears again, you can do so simply by making use of the per-package environment hack, sort of what I’ve described in my previous nasty trick post a few months ago.

# to be created as /etc/portage/env/www-servers/apache

export EXTRA_ECONF="${EXTRA_ECONF} --enable-reqtimeout=static"

*Do note that here I’m building it statically; this is because I’d suggest everybody build all the modules statically: the overhead of having them as plugins is usually higher than the cost of loading a module that you don’t care about.*

Now that you have this set up, you should make sure to set a timeout for requests; the mod_reqtimeout documentation is quite brief, but shows a number of possible configurations. I’d say that in most cases, what you want is simply what is shown in the ModSecurity examples. Please note that they made a mistake there: it’s not RequestReadyTimeout but RequestReadTimeout.
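A slightly more complete configuration also covers the header phase, following the directive’s documented syntax — the actual numbers here are my own suggestion, tune them to your traffic:

```
# Give clients 20 seconds to send all headers, extended up to 40s as
# long as data keeps arriving at 500 bytes/s; the body gets 30 seconds
# with the same minimum rate. Slow-POST clients get cut off with a 408.
RequestReadTimeout header=20-40,MinRate=500 body=30,MinRate=500
```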

Additionally, when using ModSecurity you can stop the attack in its tracks after a few requests have timed out, by blacklisting the IP and dropping its connections, allowing slots to be freed for other requests to arrive; this can be easily configured through this snippet, taken directly from the above-linked post:

RequestReadTimeout body=30

SecRule RESPONSE_STATUS "@streq 408" "phase:5,t:none,nolog,pass, setvar:ip.slow_dos_counter=+1,expirevar:ip.slow_dos_counter=60"
SecRule IP:SLOW_DOS_COUNTER "@gt 5" "phase:1,t:none,log,drop, msg:'Client Connection Dropped due to high # of slow DoS alerts'"

This should let you cover yourself quite nicely, at least if you’re using hardened, with grsecurity enforcing per-user limits. But if you’re on hosting where you have no control over the kernel – as I do – there is one further problem: the init script for Apache does not respect the system limits at all — see bug #347301.

The problem here is that when Apache is started during the standard system init, no limits are set for the session it is running from, and since the script doesn’t use start-stop-daemon to launch the apache process itself, no limits are applied at all. This makes for a quite easy DoS against the whole host, as it will readily exhaust the system’s memory.
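The inheritance is easy to see from any shell: resource limits are per-process attributes passed down across fork(), so a daemon gets exactly whatever its launching session had — or didn’t have. A quick sketch:

```shell
# Limits are inherited by child processes: a daemon launched from an
# unrestricted init session runs unrestricted.
ulimit -u                      # max user processes for the current shell
( ulimit -u 64; ulimit -u )    # a subshell inherits the lowered limit: prints 64
```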

As I posted on the bug, there is a quick and dirty way to fix the situation by editing the init script itself and changing the way Apache is started up:

# Replace the following:
        ${APACHE2} ${APACHE2_OPTS} -k start

# with this:
        start-stop-daemon --start --pidfile "${PIDFILE}" --exec ${APACHE2} -- ${APACHE2_OPTS} -k start

This way, at least the generic system limits are applied properly. Do note, though, that start-stop-daemon’s limitations will not allow you to set per-user limits this way.

On a different note, I’d like to spend a few words on why this particular vulnerability is interesting to me: the attack relies on long-winded POST requests that may use very little bandwidth, because just a few bytes are sent before the timeout is hit… it is not unlike the RTSP-in-HTTP tunnelling that I have designed and documented in feng over the past years.

This also means that application-level firewalls will sooner or later start filtering these long-winded requests, and that will likely put the final nail in the coffin of RTSP-in-HTTP tunnelling. I guess it’s definitely time for feng to move on and implement real HTTP-based pseudo-streaming instead.

Service limits

There is one thing that, as PAM maintainer, I absolutely dread. No, it’s not cross-compilation, but rather the handling of pam_limits for services and, in particular, start-stop-daemon.

I have said before that if you don’t run your services through start-stop-daemon properly, pam_limits is not executed and thus no limits are applied at startup. That is, indeed, the case. Unfortunately, the situation is not as black and white as I hoped, or as most people expected.

Christian reported to me the other day that he was having trouble getting user-based limits properly respected by services; I have had similar situations before, but never went as far as checking them out properly. So I went to look into his particular use case: dovecot.

Dovecot processes have their user and group set to specific limited users; on the other hand, the service has to be started as root to begin with: not only are the runtime directories writeable only by root, but it also fails to bind the standard IMAP ports as a non-root user (since they are lower than 1024), and it fails further on when starting user processes, most likely because those are partly run with the privileges of the user logging in.

So with the following configuration in /etc/security/limits.conf, what happens?

* hard nproc 50
root hard nproc unlimited
root soft nproc unlimited

dovecot hard nproc 300
dovecot soft nproc 100

The first problem is that, as I said, dovecot is started as root, not as the dovecot user; and when the privileges are actually dropped, that happens directly within dovecot, without passing through pam_limits! So the obvious answer is that the processes are started with the limits of root, which, with the previous configuration, are unlimited. Unfortunately, as Christian reported, that was not the case: the nproc limit was set to 50 (low enough that gradm killed it).

The first guess was that the user’s session limits are imposed after starting the service; but this is exactly what we’re trying to avoid by using start-stop-daemon. So, at first glance, we’ve got a problem. A quick check of OpenRC’s start-stop-daemon code shows what the problem is:

                if (changeuser != NULL)
                        pamr = pam_start("start-stop-daemon",
                            changeuser, &conv, &pamh);
                else
                        pamr = pam_start("start-stop-daemon",
                            "nobody", &conv, &pamh);

So here’s the problem: unless we pass the --user parameter, start-stop-daemon applies the limits of user nobody, not root. This means we cannot easily configure per-user limits for the users that services use to drop their privileges, and that is a bad thing. How can we solve this problem?

The first obvious solution is adding something like --user foo --nochuid, which would make start-stop-daemon abide by the limits of the given user while performing no call to setgid() or setuid(), leaving it to the software itself to drop privileges. This is fast, but partly hacky. The second option is not exclusive with the first, and should probably be implemented anyway: set the proper file-based capabilities on the software, then run it directly as the unprivileged user from start-stop-daemon — maybe with a bit of help from pam_cap to grant capabilities per-user rather than per-file.
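To sketch that second option with dovecot as the example — the paths are assumptions, and whether dovecot copes with being started unprivileged is exactly what would need testing:

```
# One-time, at install time: grant the binary the right to bind
# ports below 1024 without being root.
setcap cap_net_bind_service=+ep /usr/sbin/dovecot

# The init script can then start it directly as its own user, so
# pam_limits applies the dovecot user's entries from limits.conf:
start-stop-daemon --start --user dovecot --exec /usr/sbin/dovecot
```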

At any rate, this is one thing that we should be looking into. Sigh!

To finish it off, there is one nasty situation that I haven’t checked yet and that actually worries me: if you set the “standard user” limits lower than those of nobody (since that is what services get), users can probably work around their limits by using start-stop-daemon to start their own code; I’m not sure whether it works, but if it does, we’ve got a security issue on our hands in at least OpenRC!