I have been backing up my main Linux server onto a secondary machine via
NFS. I use xfsdump like this:
xfsdump -l 9 -f /machine/backups/fs.9.xfsdump /
Over on the server machine, xfs_bmap shows an *extreme* amount of
fragmentation in the backup file. 20,000+ extents are not uncommon, with
many extents consisting of a single allocation block (8x 512B sectors,
i.e. 4 KiB).
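For reference, this is roughly how I count the extents (the path is just an example; xfs_bmap prints the filename on its first line and one extent per line after that):

```shell
# Count extents in the backup file: skip xfs_bmap's header line
# (the filename) and count the remaining per-extent lines.
xfs_bmap /backups/fs.9.xfsdump | tail -n +2 | wc -l
```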
I do notice, while the backup file is being written, that holes often
appear in the extent map towards the end of the file. My theory is that
the individual writes are reaching the file system out of order, and
that this causes both the temporary holes and the extreme fragmentation.
I can work around the fragmentation manually: I take xfsdump's estimate
of the backup size and use the fallocate command locally on the file
server to preallocate somewhat more than that amount of space for the
backup file. When the backup is done, I check xfsdump's report of the
actual size of the dump and use the truncate command locally on the
server to trim off the excess.
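Concretely, the workaround looks like this (paths and sizes are made-up examples; substitute xfsdump's estimated and reported sizes):

```shell
f=/backups/fs.9.xfsdump

# Preallocate somewhat more than xfsdump's estimated dump size,
# so XFS can lay the file out in a few large extents up front:
fallocate -l 55G "$f"

# ...run xfsdump from the client over NFS as usual...

# Afterwards, trim the file back to the actual dump size that
# xfsdump reported (in bytes):
truncate -s 48123456789 "$f"
```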
Is this kind of fragmentation when writing to XFS over NFS a known
problem?