I always find it at least fascinating, the religiousness (and this is most definitely not a compliment, coming from an atheist) with which some people stand up to defend “classical” (or, in my opinion more properly, “legacy”) choices in the Unix world. I also tend not to give them too much weight; I challenged the use of a separate `/boot` over two years ago, and I still stand behind my opinion: for the most common system configurations, `/boot` is not useful to keep separate. Of course there are catches.
*One of these catches in particular is that you need to have `/boot` on its own partition to use LVM for the root file system, and that in turn is something you’d probably like to have by today’s standards, so that you don’t really have to choose how much space to dedicate to root, which depends heavily on how much software you’re going to put on it. Fedora has been doing that for a while, but then it diverts the problem to how much space to dedicate to `/boot`, and that became quite a problem with the 11→12 update… in general, I think the case might be building up for either using a separate `/boot`, or just using EFI, which, as far as I can tell, can solve the problem at the root… no pun intended.*
For some reason, it seems like a huge lot of legacies relate to filesystems, or maybe it’s just because filesystems are something I struggle with continuously, especially when it comes to combining the classical Unix filesystem hierarchy with my generally less hierarchical use of it. I’m not going to argue here for not splitting the usual `/usr` out of the root file system (while it’s something I’d definitely support, that pretty artificial split makes the whole system startup a messy problem), nor am I going to discuss how to divide your storage space to fit the standard “legacy” hierarchy.
What I’m just wondering about is why `lost+found` has been so strongly defended by somebody who (I read) boasts of having experience in disaster recovery. I’m not doubting its usefulness in general, but I’m also considering that in most “desktop” cases it’s just confusing, or downright irritating in my case, in a particular automated system I’m working on.
First let me start by saying why I find this annoying: try running `initdb` on a newly-created, just-mounted ext3 file system. It will fail, because it finds the `lost+found` directory in the base of the filesystem, and since the directory is not empty, it refuses to work. There isn’t, by the way, any way to tell it to run anyway, such as a `--force` switch, which is the most obnoxious thing in all this. I know what I’m doing, I just want you to do it! So anyway, my choices here are either to remove the `lost+found` directory every time I create and mount a new filesystem (I have to admit I don’t know/don’t remember whether the directory is re-created at mount, or during fsck), or to create a sub-directory to run `initdb` in. Whichever the choice, it requires one further command, which is not much, but in this case it’s a slight problem.
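For concreteness, here’s a minimal sketch of the two workarounds; the device name and mount point are made up for the example, and `initdb` runs as the unprivileged postgres user since it refuses to run as root:

```sh
# Hypothetical device and mount point, for illustration only.
mkfs.ext3 /dev/sdb1
mount /dev/sdb1 /mnt/pgdata

# Option 1: drop lost+found so the filesystem root is empty.
# As far as I can tell, e2fsck recreates the directory if it ever
# needs to reconnect orphaned files, and e2fsprogs ships
# mklost+found(8) to restore it by hand, so this is not destructive.
rmdir /mnt/pgdata/lost+found
chown postgres /mnt/pgdata
su postgres -c 'initdb -D /mnt/pgdata'

# Option 2: leave lost+found alone and point initdb at a sub-directory.
mkdir /mnt/pgdata/data
chown postgres /mnt/pgdata/data
su postgres -c 'initdb -D /mnt/pgdata/data'
```

Either way, it’s that one extra command that the automated system has to carry around.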
So I went wondering: “Is `lost+found` really useful? Can’t I just get rid of it?”, and then hell broke loose.
I’m quite positive I don’t need the directory to be there empty; I can understand it might be useful when stuff is in it, but empty? On a newly-created filesystem? I have my sincere doubts about that. And even when stuff gets into it, is it really useful to have it restored there? Well, almost certainly in some cases, but always? Without any way at all to opt out? It sounds a bit too much to me.
Let me show you a few possible scenarios, drawn from my own experience:
- `/var/cache` is on a separate filesystem for me; the reason is that it’s quite big and it ends up growing a lot because it keeps, among other things, the Portage distfiles for me and the tinderbox. If anything happens to that filesystem, I won’t spend more than five minutes on it: it’ll be destroyed and recreated (see the sketch after this list). The name cache should make it obvious, and the FHS designates it for content that can be dropped, and recreated, without trouble. Do I need orphan files recovered from that filesystem? No, I just need to know whether there is something wrong with the FS; if there is, I’ll recreate it to be on the safe side, so I know the data didn’t corrupt;
- my router’s root file system turned out to be corrupt a couple of times and stuff was added to `lost+found`… did I care about that? Not really. I flashed in a new copy of the filesystem; no data loss for me in there, besides once, before I set up `rsnapshot`, when I lost my network configuration. Oh well, it took me the whole of half an hour to rewrite it from scratch. If you wonder what the corruption was about, it was a faulty CF card; I’ll have to write about those CF cards at some point;
- the rest of my running data, which is all of the rest of my systems… if I were to find corruption on my filesystems, I’d do what I did in the past: clear them out, make sure I hadn’t chosen the wrong filesystem type to begin with, and then recreate them. Do I care about finding the data in `lost+found`? Nope, I’ve got backups.
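To make the first scenario concrete, the throw-away approach boils down to something like this; the volume name is a made-up example, since the exact layout doesn’t matter:

```sh
# Recreate the disposable /var/cache filesystem from scratch;
# /dev/vg/cache is a hypothetical LVM volume name.
umount /var/cache
mkfs.ext3 /dev/vg/cache
mount /dev/vg/cache /var/cache
```

Nothing to salvage, nothing to inspect: the content gets regenerated the next time Portage or the tinderbox needs it.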
The trick here is that I’ve got backups. Of course, if I didn’t have backups, or if my backups were foobar’d, I’d be looking at everything to restore my data, but to be honest, I’ve found it a much better investment to improve your backup strategy rather than spend time recovering data. Of course, I don’t have “down to the microsecond” backups, as somebody told me I’d need to avoid using `lost+found`, but again, I don’t need that kind of redundancy. I have hourly backups for my systems, which is by itself above average, and it works pretty well. I’d be surprised if the vast majority of desktop systems cared about backups more than a week old.
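For reference, a minimal sketch of what an hourly setup looks like in `rsnapshot` terms; the paths are made up for the example, and remember that fields in `rsnapshot.conf` are separated by tabs, not spaces:

```
# /etc/rsnapshot.conf (excerpt); fields must be tab-separated
snapshot_root	/backup/
interval	hourly	24
interval	daily	7
backup	/etc/	localhost/
backup	/home/	localhost/
```

Paired with crontab entries like `0 * * * * /usr/bin/rsnapshot hourly` and `30 23 * * * /usr/bin/rsnapshot daily`, that keeps a day of hourly snapshots and a week of dailies, which is more than enough to make `lost+found` irrelevant for me.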
Now, this should cover most of my points: `lost+found` is not indispensable. You can live perfectly well without it. I don’t think I ever used it myself; when faced with corrupted filesystems (and trust me, it happened to me more than once) my solution was one of: get the backup, re-do the little work lost, or discard the data altogether. Sure, over the years I might have lost bits and pieces of stuff that I might have cared about, but nothing major. The worst thing that happened to me in the past three years has been the need to re-download the updates and drivers for Windows (both XP and Vista) that I keep around for when customers bring me their computers to fix. Okay, I have no experience with enterprise-grade post-apocalyptic disaster recovery, so what? It doesn’t change the fact that in my case (and, I’d say, in a lot of users’ cases) it doesn’t matter.
I’m not asking to get rid of the feature altogether, but making it optional would be nice, or at least not forcing me to have the directory around. Interestingly enough, `xfs_repair` does not need the directory to be present; it’ll use it if it’s present and full, and it’ll create and populate it if orphan files are found, but otherwise it’s invisible. Apple’s HFS+ is more or less on the same page. I admit ignorance for what concerns the Reiser family, JFS and ZFS.
Whatever the case, can we just stop asserting that what was good in the ’70s, or what is good for enterprise-grade systems, is good for desktop systems as well? Can we stop accepting legacies just because they are there? I’m not for breaking a hell of a lot of compatibility at every turn (and please, nobody say HAL, ’kay? Okay, here the pun was most definitely intended), but yes, it takes challenging the status quo to get something better out of it!
P.S.: if somebody can suggest a possible option to `mkfs` or `mount` to avoid that directory, I’m still eager to hear about it!