
Re: suddenly slow writes on XFS Filesystem

To: Stefan Priebe <s.priebe@xxxxxxxxxxxx>
Subject: Re: suddenly slow writes on XFS Filesystem
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Wed, 09 May 2012 01:57:01 -0500
Cc: "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>
In-reply-to: <4FA82B07.1020102@xxxxxxxxxxxx>
References: <4FA63DDA.9070707@xxxxxxxxxxxx> <20120507013456.GW5091@dastard> <4FA76E11.1070708@xxxxxxxxxxxx> <20120507071713.GZ5091@dastard> <4FA77842.5010703@xxxxxxxxxxxx> <4FA7FA14.6080700@xxxxxxxxxxxxxxxxx> <4FA82B07.1020102@xxxxxxxxxxxx>
Reply-to: stan@xxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:12.0) Gecko/20120428 Thunderbird/12.0.1
On 5/7/2012 3:05 PM, Stefan Priebe wrote:
> Am 07.05.2012 18:36, schrieb Stan Hoeppner:

>> Stefan, at this point in your filesystem's aging process, it may not
>> matter how much space you keep freeing up, as your deletion of small
>> files simply adds more heavily fragmented free space to the pool.  It's
>> the nature of your workload causing this.
> This makes sense - do you have any idea or solution for this? Are
> Filesystems, Block layers or something else which suits this problem /
> situation?

The problem isn't the block layer nor the filesystem.  The problem is a
combination of the workload and filling the FS to near capacity.

Any workload that regularly allocates and then deletes large quantities
of small files and fills up the filesystem is going to suffer poor
performance from free space fragmentation as the water in the FS gets
close to the rim of the glass.  Two other example workloads are large
mail spools on very busy internet mail servers, and maildir storage on
IMAP/POP servers.

In your case there are two solutions to this problem, the second of
which is also the solution for these mail workloads:

1.  Use a backup methodology that writes larger files
2.  Give your workload a much larger sandbox to play in

Regarding #1, if you're using rsnapshot your disk space shouldn't be
continuously growing, which it does seem to be.  If you're not using
rsnapshot, look into it.

Regarding #2 ...

>> What I would suggest is doing an xfsdump to a filesystem on another LUN
>> or machine, expand the size of this LUN by 50% or more (I gather this is
>> an external RAID), format it appropriately, then xfsrestore.  This will
>> eliminate your current free space fragmentation, and the 50% size
>> increase will delay the next occurrence of this problem.  If you can't
>> expand the LUN, simply do the xfsdump/format/xfsrestore, which will give
>> you contiguous free space.
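To put commands to that, the dump/restore cycle would look roughly like
the following (device name and mount points are placeholders for your
actual layout, so adjust before running anything):

```
# /dev/sdX1 = new, larger LUN; /mnt/backup = current filesystem
mkfs.xfs /dev/sdX1
mount /dev/sdX1 /mnt/new

# pipe a level-0 dump straight into the restore on the new FS
xfsdump -l 0 - /mnt/backup | xfsrestore - /mnt/new
```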
> But this will only help for a few month or perhaps a year.

So you are saying your backup solution will fill up an additional 2.3TB
in less than a year?  In that case I'd say you have dramatically
undersized your backup storage and/or are not using file compression to
your advantage.  And you're obviously not using archiving to your
advantage or you'd not have the free space fragmentation issue because
you'd be dealing with much larger files.

So the best solution to your current problem, and one that will save you
disk space and thus $$, is to use a backup solution that makes use of
both tar and gzip/bzip2.
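For illustration, packing each backup run into one compressed tarball
turns thousands of small allocations into a single large sequential
write (paths below are placeholders, not from your setup):

```shell
# Pack many small files into one large compressed archive -- one big
# sequential allocation on XFS instead of thousands of tiny ones.
# Placeholder paths for demonstration.
set -e
work=$(mktemp -d)
mkdir -p "$work/data"
for i in 1 2 3; do echo "record $i" > "$work/data/small-$i.txt"; done

# one large file replaces the pile of small ones
tar -czf "$work/backup-$(date +%F).tar.gz" -C "$work" data

# verify the archive contents
tar -tzf "$work"/backup-*.tar.gz
```

Large contiguous files like this also fragment free space far less when
they're eventually deleted.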

You can't fix this fundamental small file free space fragmentation
problem by tuning/tweaking XFS, or switching to another filesystem, as
again, the problem is the workload, not the block layer or FS.

-- 
Stan
