Hi (sorry about the long post, but please read until the end),
I think you are doing a *really* great job bringing XFS to Linux.
But I think you know what I'm about to talk about ;)
Generally, I've never had any problems with XFS (I've been using it since 2.4.x,
and I'm running vanilla 2.6.7 now).
However there's still one thing where I think it doesn't work very well: when
there's a power failure or a kernel panic/freeze.
A few days ago I was trying to make a flaky driver work with my ADSL modem,
and at this point it crashes every hour or so.
Well, out of every 10 crashes, I lose a file at least 9 times..
In 2 days I've lost all my aMule downloads and configuration, most of my KDE
configuration (several times), my uptime records (I thought those were supposed
to be crash-resistant, argh..), and other things I can't remember right now.
The last time it happened, I lost all of the above in a single reboot..
which is why I stopped testing the damn thing!
As you can imagine, I wasn't very happy..
The problem is that unfortunately I will have to do this again in a few days,
for a longer period (!).
The thing I'm trying to understand is why this happens with XFS. I know it
can happen with any filesystem, but the fact is that on other filesystems it
doesn't happen nearly as often as it does with XFS.
Well, for example, the aMule program uses a small file called xxx.part.met to
track downloaded parts, which is usually about 200-800 bytes.
I tried the trick you mentioned (chattr +S, if I'm not mistaken) on these
files, because they were the ones that came back empty after a reboot.
The problem is that after about a minute or so, lsattr shows that the file
doesn't have the attribute anymore. So I suppose these files get deleted and
recreated every so often.
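(That would explain why the attribute vanishes: chattr +S marks the inode, not
the name, so if the program rewrites the file by creating a new one and
renaming it over the old name, the flag is left behind on the discarded inode.
A quick sketch of that effect — the filenames are just my example, I don't know
exactly how aMule saves its files:)

```python
import os
import tempfile

# chattr +S sets a flag on the inode. If an application replaces a file
# by writing a new one and renaming it over the old name, the name now
# points at a brand-new inode, and the flag set on the old inode is gone.
d = tempfile.mkdtemp()
path = os.path.join(d, "xxx.part.met")  # hypothetical aMule-style file

with open(path, "wb") as f:
    f.write(b"old contents")
ino_before = os.stat(path).st_ino  # the inode that chattr +S would mark

# Rewrite by create-new-then-rename (what I suspect aMule does):
tmp = path + ".new"
with open(tmp, "wb") as f:
    f.write(b"new contents")
os.rename(tmp, path)

ino_after = os.stat(path).st_ino   # a different inode: the attribute is lost
print(ino_before != ino_after)
```

(The two inodes exist simultaneously just before the rename, so the new one is
guaranteed to be a different inode from the old one.)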
So does this really happen because XFS only journals metadata?
From what I understand, XFS puts small files inside the inode when they fit
(ls -sh shows 0 KB used).
If the .met file gets deleted and recreated almost instantly, shouldn't both
changes hit the disk (more or less) simultaneously at the next periodic
disk-sync, even on a journaled filesystem?
It would make sense for the file contents to be lost only when the power is
cut between these (quick) changes, not every time!
Or does XFS truncate the file on-disk immediately after unlink()? That doesn't
make much sense to me.
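(For comparison, the usual way an application makes this kind of rewrite
survive a power cut, on any filesystem, is: write the new contents to a
temporary file, fsync() it, and only then rename() it over the old name.
A sketch — the function name and filenames are mine, not aMule's:)

```python
import os

def save_atomically(path, data):
    """Write-to-temp + fsync + rename: after a crash the file holds either
    the old contents or the new contents, never an empty/garbage file."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # force the new data to disk first
    os.rename(tmp, path)       # atomically replace the old name
    # Also sync the directory so the rename itself is durable:
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)

save_atomically("xxx.part.met", b"downloaded-parts metadata")
```

(The rename is what makes it safe: the old contents stay on disk under the old
name until the new contents have already been fsync()ed.)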
So why does this happen? Is it for security reasons? I don't think it's that..
there are lots of single-user systems out there which don't need it.
Or is it really by design? Can't it be changed (optionally, if you must)?
I don't think 'chattr +S', even if it works on whole directories, can be the
solution to this.
This will be very frustrating, as you can imagine.. Help! :-)