
Strange fragmentation in nearly empty filesystem

To: xfs@xxxxxxxxxxx
Subject: Strange fragmentation in nearly empty filesystem
From: Carsten Oberscheid <oberscheid@xxxxxxxxxxxx>
Date: Fri, 23 Jan 2009 11:21:30 +0100
User-agent: Mutt/1.5.13 (2006-08-11)

Hi there,

I am experiencing my XFS filesystem degrading over time in quite a
strange and annoying way. Googling "XFS fragmentation" tells me either
that this does not happen or to use xfs_fsr, which doesn't really help
me anymore -- see below. I'd appreciate any help on this.

Background: I am using two VMware virtual machines on my Linux
desktop. These virtual machines store images of their main memory in
.vmem files, which are about half a gigabyte in size for each of my
VMs. The .vmem files are created when starting the VM, written when
suspending it and read when resuming. I prefer suspending and resuming
over shutting down and booting again, so with my VMs these files can
have a lifetime of several weeks.

On Ubuntu's default ext3 filesystem the vmem files showed heavy
fragmentation problems right from the start, so I switched over to
XFS. At first everything seemed fine, but after some time (about two
weeks, in the beginning) it took longer and longer to suspend the VM
in the evening and to resume it in the morning. 'sudo filefrag *'
showed heavy fragmentation (up to 50,000 extents or more) of the
.vmem files. Reading a 512M file thus fragmented takes several
minutes, and writing it takes at least twice as long.
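
For reference, this is roughly how I check the fragmentation (the
directory and file names below are just examples of where my VMs
live):

[co@tangchai]~ sudo filefrag /home/co/vmware/*.vmem        # extent count per file
[co@tangchai]~ sudo xfs_bmap -v /home/co/vmware/xp/xp.vmem # detailed extent map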

For some time, 'sudo xfs_fsr *.vmem' used to fix this (back down to
one single extent). Replacing the files with copies of themselves or
rebooting the VM (thus creating new .vmem files) fixed fragmentation
as well.
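
For completeness, this is how I have been defragmenting them (again,
the path is only an example):

[co@tangchai]~ cd /home/co/vmware/xp
[co@tangchai]~ sudo xfs_fsr -v *.vmem   # reorganize just the listed files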

After some months I observed that the time it took to reach a
perceptible level of fragmentation (slow reading/writing) got shorter
and shorter. Instead of, say, two weeks, it now took only a few days
to reach 10,000 extents or more.

Again some months later, xfs_fsr was unable to get the files back to
one single extent, and even new files started with at least a handful
of extents right away.

Today, a freshly created .vmem file is badly fragmented (about 20,000
extents) after the first VM suspend, and accordingly slow to read and
write. Suspending the VM can take 15 minutes or more.

Some more facts:

 -  The filesystem is nearly empty (8% of about 500GB used, see
    below).

 -  System information, xfs_info output appended below.

 -  Other large files created in this filesystem show fragmentation
    from the start as well, but they are not rewritten as often as the
    .vmem files, so they don't deteriorate as much or as quickly. So I
    don't think this is a VMware-specific problem; I think it just
    shows up more obviously there.

 -  A few weeks ago, I did a fresh mkfs.xfs on the filesystem and
    restored the contents from a tar backup (roughly the commands
    sketched below) -- observing the same heavy fragmentation as
    before. Perhaps this did not really create a fresh filesystem
    structure?
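
Roughly what I did for the rebuild (device as in the df output below,
backup file name is just an example):

[co@tangchai]~ sudo umount /home
[co@tangchai]~ sudo mkfs.xfs -f /dev/sdb5              # recreate the filesystem
[co@tangchai]~ sudo mount -o inode64 /dev/sdb5 /home
[co@tangchai]~ sudo tar xpf /backup/home.tar -C /home  # restore from the tar backup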

What is happening here? I thought fragmentation would become serious
only on nearly full filesystems, when there's not enough contiguous
free space left. Or can free space also become fragmented over time by
some usage patterns? Are there any XFS parameters I could try to make
this more robust against fragmentation?
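
In case it helps with the diagnosis: as far as I understand I can look
at the free space layout read-only with xfs_db, e.g.

[co@tangchai]~ sudo xfs_db -r -c "freesp -s" /dev/sdb5   # summary of free extent sizes

I can post that output if it is useful.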

Thanks in advance


Carsten Oberscheid



Running Ubuntu 8.10 with current XFS package (where can I find the XFS
version?)

[co@tangchai]~ uname -a
Linux tangchai 2.6.27-7-generic #1 SMP Tue Nov 4 19:33:06 UTC 2008 x86_64 
GNU/Linux

[co@tangchai]~ df
Filesystem           1K-blocks      Used Available Use% Mounted on
...
/dev/sdb5            488252896  40028724 448224172   9% /home

[co@tangchai]~ xfs_info /home
meta-data=/dev/sdb5              isize=256    agcount=4, agsize=30523998 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=122095992, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096  
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

[co@tangchai]~ cat /etc/mtab
...
/dev/sdb5 /home xfs rw,inode64 0 0
