Another good reason to use 64-bit installations: Large File Support headaches

A couple of months ago I wrote about why I made my router a 64-bit install, listing a series of reasons why 64-bit hardened systems are safer to manage than 32-bit ones, mostly because of the feature set of the CPUs themselves. What I didn’t write about at the time, though, is that 64-bit installs also don’t require you to deal with the curse of large file support (LFS).

It was over two years ago that I last wrote about this, and at the time my motivation was mostly drained by a widely known troll insisting that I got my explanation wrong. Just for the sake of not repeating the same pantomime, I’d like to thank Lars for getting me a copy of Advanced Programming in the Unix Environment, so that I can point said troll to the pages where the diagrams he referred to actually are: 106 to 108. And there is nothing there to corroborate his views against mine.

But now, let’s take a few steps back and look at what I’m talking about altogether.

What is large file support? It is a set of interfaces designed to work around the limits imposed by the original design of the POSIX file API on 32-bit systems. The original implementations of functions like open(), stat(), fseeko() and so on were designed using 32-bit data types, either signed or unsigned depending on the use case. This has the unfortunate effect of limiting a number of attributes to that boundary; the most obvious problem is the size of the files themselves: you cannot use open() to get a descriptor for a file bigger than 2GB, as the offsets would overflow. The inability of some of your software to process files bigger than 2GB isn’t, though, that much of a problem – after all, not all software can work with such files within reasonable resource constraints – but that’s not the worst problem you have to consider.

Because of this limit on file size, the new set of interfaces has always been called “large file”, but the name itself is a bit of a misnomer; this new set of interfaces, with extended 64-bit parameters and data fields, is required for operating on large file systems as well. I might not have expressed it in the most comprehensible of terms two years ago, so let me lay it out from scratch again.

In a filesystem, a file’s data and metadata are tied to a structure called an inode; each inode has an individual number, and this number is what a directory entry stores to link a name to the file it refers to. The number of files that can be created on a filesystem is limited by the number of unique inode numbers that the filesystem can cope with — you need at least one inode per file; you can check the usage with df -i. This amount is in turn tied both to the size of the data field itself and to the data structure used to look up the location of the inode on the filesystem. Because of this, the ext3 filesystem does not even reach the 32-bit size limit. On the other hand, both XFS and ext4, using more modern data structures, can reach that limit just fine… and they are actually designed to overcome it altogether.

Now, the fact that they are designed to support a 64-bit inode number field does not mean that they always will; for what it’s worth, XFS is designed to support block sizes over 4KiB, up to 64KiB, but the Linux kernel does not support that feature. On the other hand, as I said, the support is there to be used in the future. Unfortunately this cannot feasibly be done until we know for sure that userland software will work with such a filesystem. It is one thing to be unable to open a huge file; it is another not to be able to interact in any way with files within a huge filesystem. This is why both Eric and I, in the previous post, focused first on testing which software was still using the old stat() calls with the data structure carrying a 32-bit inode number field. It’s not about the single file size; it’s a matter of huge filesystem support.

Now, let’s wander back to why I wanted to return to this topic. In my current line of work I discovered at least one package in Gentoo (bsdiff) that was supposed to have LFS support, but didn’t because of a simple mistake (append-lfs-flags acts on CPPFLAGS, but that variable wasn’t used in the build at all). I thought a bit about it, and there are many ways to sneak in a mistake that causes a package to lose LFS support even if it was added at first. For instance, in a package based on autotools that uses AC_SYS_LARGEFILE to look for proper largefile support, it is easy to forget to include config.h before any other system library header, and when that happens, the largefile support is lost.

To make it easier to identify packages that might have problems, I’ve decided to implement a tool for this in my Ruby-Elf project, called verify-lfs.rb, which checks for the presence of non-LFS symbols, as well as for a mix of both LFS and non-LFS interfaces. The code is available on Gitorious, although I have yet to write a man page, and I still have to add a recursive scan option as well.

Finally, as the title suggests, if you are using a 64-bit Linux system you don’t have to think about any of this at all: modern 64-bit architectures define the original ABI as 64-bit already, making all the largefile headaches irrelevant. The same goes for FreeBSD, which made the LFS interface its only interface with version 5, avoiding the whole mess of conditionality.

I’m seriously scared of what I could see if I were to run my script over the (32-bit) tinderbox. Sigh.

8 thoughts on “Another good reason to use 64-bit installations: Large File Support headaches”

  1. Don’t even get me started on when you want to write a portable application. To my knowledge, MinGW still does not have largefile support, leaving people the choice of hacking system headers or hacking their code to use crap like lseek64.

     Ok, I feel like giving up on Windows anyway, since it’s not (at least in general) even possible to open files with special characters (e.g. Japanese) using any standard function like fopen or open…

  2. It is somewhat dangerous if a package which does not really need large files adds it by itself, as you seem to suggest: sure, it might avoid problems with huge filesystems, but on the other hand it might cause trouble when libs are used which have e.g. “off_t” in a structure in a header file but which were compiled without LFS – then compiling the package with LFS would break the ABI of the library. Hence, perhaps it is the distribution/administrator who should take care of this, and not the package author.

     IMHO, it should be a global system setting in Gentoo to put -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE into CFLAGS and CXXFLAGS – this seems to be the only way to avoid such problems. I have done this for several years, and with the exception of a few packages (glibc and some other languages), these flags never caused any problems, and perhaps made my system more stable.

  3. Let me be clear: any library relying on off_t in its public API is simply broken by design, and there is no reason for it to be a concern to the rest of the users and developers out there.

     A number of packages have problems when using largefile, which is why *Gentoo will never enable them without checking it first on a per-package basis*; we had too many problems with packages, especially compression software, changing their bitstream because of largefile support. Of course, this usually means they also fail badly on 64-bit systems.

     We have already been adding largefile support to all our packages for years, unconditionally as well. But that’s just not enough.

  4. > any library relying on off_t in its public API is simply broken by design

     LFS is sort of a hack, so I would be very careful about calling a library that uses only C/C++-standard conventions “broken by design”.

     Anyway, “broken by design” or not: quite a few libraries seem to use it. A quick find /usr/include -name '*.h' -exec grep -l -w off_t '{}' '+' | wc gave 92 matches; the same with size_t gave even 1378. A quick check of two random matches (xine.h and zip.h) showed that indeed both use off_t in a structure, and at least at first glance, I couldn’t see any mechanism in these files to make sure LFS is selected. (I did not check too carefully, especially not in the complex xine.h, and of course I did not check whether the structure actually occurs in a documented interface function.) Anyway, the high number of matches suggests that usage of these types in libraries is not an exception.

     > Gentoo will never enable them without checking it first on a per-package basis

     For the mentioned problem with libraries, it might be safer to enable it globally than on a per-package basis. But if you are right and Gentoo already has LFS enabled unconditionally in most packages (especially libs), this explains why I never had problems with setting the mentioned CFLAGS. Anyway, setting these CFLAGS might also be insurance against forgetting to include config.h (or including it too late) in some files.

  5. @mv: no, off_t is truly broken. If you are developing on a 32-bit platform you need -D_FILE_OFFSET_BITS=64 for LFS. Which is just crap when it is part of the ABI. Enabling this globally is just a workaround.

  6. > Which is just crap when it is part of the ABI

     Sure, it is. But the question is whether you should blame the LFS hack for behaving inconsistently, or a completely standard library which is not at fault that some hack redefines the types specified by the standard. The point is that there probably are simply a lot of such libraries, some perhaps even written before the LFS hack was introduced. Just declaring those faulty seems strange and does not solve anything.

     > Enabling this globally is just a workaround.

     Yep. But isn’t it better to have a workaround which works, instead of risking subtle runtime problems which nobody can seriously check? LFS is not a problem if it is used either in all packages or in none – and I bet this is how this hack was meant to be used when it was introduced.

  7. > which is just crap when it is part of the ABI.

     Sure it is. But the question is whether to blame the LFS hack for inconsistent type definitions, or a library which just follows the standard (but not some system-specific hacks). The point is, there are probably many libraries which just follow the standard – perhaps some were even written before the LFS hack was introduced. Just declaring these now as faulty appears strange and does not solve anything.

     > Enabling this globally is just a workaround.

     Yep. But isn’t it better to have a workaround which really works than to risk subtle runtime problems which nobody can really check? The LFS hack works nicely and consistently if either all packages use it or none. And I bet this is how it was meant to be used when it was introduced.

  8. The “hack” was introduced as one of the transitional solutions. Unfortunately GNU/Linux isn’t exactly known for dropping borked legacy things, meaning it will probably be there forever. In short, you can’t rely on off_t.

     Enabling this globally is not a solution at all. It’s like saying: let’s make a plain int 64-bit so we don’t actually have to port any code to 64-bit platforms. The code needs to be adjusted to deal with it properly, plain and simple.

     At least OpenBSD had the guts to actually break the damn thing.
