I have "allocsize=64m" (or similar sizes, such as 1m, 16m etc.) on many of my
xfs filesystems, in an attempt to fight fragmentation on logfiles.
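For reference, the option is set at mount time; the device and mount point
below are placeholders, not taken from my actual setup:

```shell
# /etc/fstab entry (device and mount point are examples):
# /dev/sdb1  /var/log  xfs  allocsize=64m,noatime  0 0

# or equivalently when mounting by hand:
mount -t xfs -o allocsize=64m /dev/sdb1 /var/log
```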
I am not sure about its effectiveness, but in 2.6.38 (though not in 2.6.32),
this leads to very unexpected and weird behaviour, namely that files being
written keep semi-permanently allocated chunks of allocsize attached to them.
I realised this when I did a make clean and a make in a buildroot directory,
which cross-compiles uclibc, gcc, and lots of other packages, leading to a
lot of mostly small files.
After a few minutes, the job stopped because it ate 180GB of disk space and
the disk was full. When I came back in the morning (about 8 hours later), the
disk was still full, and investigation showed that even 3kb files were
allocated the full 64m (as seen with du).
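A quick way to see the discrepancy (a sketch; the filename is a placeholder,
and the inflated allocation will of course only show up on an XFS mount with
a large allocsize exhibiting this behaviour):

```shell
# Create a ~3 KB file, then compare its apparent size with the space
# actually allocated to it. On an affected allocsize=64m filesystem,
# the allocated size can stay near 64M long after the file is closed.
head -c 3072 /dev/zero > alloc_test_file

ls -l alloc_test_file     # apparent size: 3072 bytes
du -k alloc_test_file     # allocated size in KB
stat -c 'size=%s bytes, %b blocks of %B bytes' alloc_test_file
```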
After I deleted some files to get some space and rebooted, I suddenly had
180GB of free space again, so it seems an unmount "fixes" this issue.
I often do these kinds of builds, and I have had allocsize at these high
values for a very long time, without ever having run into this kind of problem.
It seems that files get temporarily allocated much larger chunks (which is
expected behaviour), but xfs doesn't free them until there is an unmount
(which is unexpected).
Is this the desired behaviour? I would assume that any allocsize > 0 could
lead to a lot of fragmentation if files that are closed and no longer in
use always keep extra space allocated for expansion for extremely long
periods of time.
The choice of a Deliantra, the free code+content MORPG
-----==- _GNU_ http://www.deliantra.net
----==-- _ generation
---==---(_)__ __ ____ __ Marc Lehmann
--==---/ / _ \/ // /\ \/ / schmorp@xxxxxxxxxx