On Fri, Jan 23, 2009 at 11:21:30AM +0100, Carsten Oberscheid wrote:
> Hi there,
> I am experiencing my XFS filesystem degrading over time in quite a
> strange and annoying way. Googling "XFS fragmentation" tells me either
> that this does not happen or to use xfs_fsr, which doesn't really help
> me anymore -- see below. I'd appreciate any help on this.
> Background: I am using two VMware virtual machines on my Linux
> desktop. These virtual machines store images of their main memory in
> .vmem files, which are about half a gigabyte in size for each of my
> VMs. The .vmem files are created when starting the VM, written when
> suspending it and read when resuming. I prefer suspending and resuming
> over shutting down and booting again, so with my VMs these files can
> have a lifetime of several weeks.
Oh, that's vmware being incredibly stupid about how they write
out the memory images. They only write pages that are allocated,
so the file is a sparse file full of holes. Effectively this
guarantees file fragmentation over time as random holes are
filled. For
example, a .vmem file on a recent VM I built:
$ xfs_bmap -vvp foo.vmem |grep hole |wc -l
$ xfs_bmap -vvp foo.vmem |grep -v hole |wc -l
Contains 675 holes and almost 900 real extents in a 512MB memory
image that has only 160MB of data blocks allocated.
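To see what such a suspend image looks like at the filesystem level, here is a
small sketch (not from the original mail; the file name, size and offsets are
made up) that writes a few scattered pages into an otherwise-empty file, the
same way the .vmem is written, and checks the allocated blocks against the
apparent size:

```python
import os
import tempfile

# Hypothetical stand-in for a .vmem suspend image: apparent size 64MB,
# but data written only at a handful of scattered page offsets.
path = os.path.join(tempfile.mkdtemp(), "fake.vmem")
page = 4096

with open(path, "wb") as f:
    f.truncate(64 * 1024 * 1024)       # set the apparent size: 64MB
    for n in (3, 100, 5000, 9000):     # write only four "allocated" pages
        f.seek(n * page)
        f.write(b"\xff" * page)

st = os.stat(path)
print(st.st_size)                      # 67108864 -- the apparent size
print(st.st_blocks * 512)              # far smaller: the rest is holes
```

st_blocks counts 512-byte units actually allocated, so the gap between the two
numbers is the "sparse file full of holes" described above; filling those holes
later, in random order, is what fragments the file.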
In reality, this is a classic case of the application doing a "smart
optimisation" that looks good in the short term (i.e. saves some
disk space), but that has very bad long term side effects (i.e.
guaranteed fragmentation of the file in the long term).
You might be able to pre-allocate the .vmem file with an xfs_io
hack after the file is created, prior to it being badly
fragmented; that should avoid the worst-case fragmentation caused
by writing randomly to a sparse file.
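A sketch of that pre-allocation idea, using posix_fallocate() rather than any
specific xfs_io invocation (the mail doesn't spell one out); the file name and
size below are assumptions matching the 512MB example above:

```python
import os

# Hypothetical suspend image and size -- adjust to the real .vmem file.
path = "foo.vmem"
size = 512 * 1024 * 1024   # 512MB

# Reserve blocks for the whole range without writing data. Later random
# writes then land in already-allocated space instead of filling holes.
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
try:
    os.posix_fallocate(fd, 0, size)
finally:
    os.close(fd)

print(os.stat(path).st_size)   # 536870912
```

With xfs_io the equivalent hack would be something like
xfs_io -c "resvsp 0 512m" foo.vmem, run once after VMware creates the file.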
In summary, this is an application problem, not a filesystem
problem.