
Re: suddenly slow writes on XFS Filesystem

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: suddenly slow writes on XFS Filesystem
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Wed, 09 May 2012 02:49:56 -0500
Cc: "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>, Stefan Priebe <s.priebe@xxxxxxxxxxxx>
In-reply-to: <20120509070450.GP5091@dastard>
References: <4FA63DDA.9070707@xxxxxxxxxxxx> <20120507013456.GW5091@dastard> <4FA76E11.1070708@xxxxxxxxxxxx> <20120507071713.GZ5091@dastard> <4FA77842.5010703@xxxxxxxxxxxx> <4FA7FA14.6080700@xxxxxxxxxxxxxxxxx> <4FA82B07.1020102@xxxxxxxxxxxx> <4FAA153D.1030606@xxxxxxxxxxxxxxxxx> <20120509070450.GP5091@dastard>
Reply-to: stan@xxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:12.0) Gecko/20120428 Thunderbird/12.0.1
On 5/9/2012 2:04 AM, Dave Chinner wrote:
> On Wed, May 09, 2012 at 01:57:01AM -0500, Stan Hoeppner wrote:
>> On 5/7/2012 3:05 PM, Stefan Priebe wrote:
>>> Am 07.05.2012 18:36, schrieb Stan Hoeppner:
> ....
>>>> What I would suggest is doing an xfsdump to a filesystem on another LUN
>>>> or machine, expand the size of this LUN by 50% or more (I gather this is
>>>> an external RAID), format it appropriately, then xfsrestore.  This will
>>>> eliminate your current free space fragmentation, and the 50% size
>>>> increase will delay the next occurrence of this problem.  If you can't
>>>> expand the LUN, simply do the xfsdump/format/xfsrestore, which will give
>>>> you contiguous free space.
>>> But this will only help for a few months or perhaps a year.
>>
>> So you are saying your backup solution will fill up an additional 2.3TB
>> in less than a year?  In that case I'd say you have dramatically
>> undersized your backup storage and/or are not using file compression to
>> your advantage.  And you're obviously not using archiving to your
>> advantage or you'd not have the free space fragmentation issue because
>> you'd be dealing with much larger files.
>>
>> So the best solution to your current problem, and one that will save you
>> disk space and thus $$, is to use a backup solution that makes use of
>> both tar and gzip/bzip2.
> 
> Why use the slow method? xfsdump will be much faster than tar. :)

Certainly faster.  I've assumed Stefan is backing up heterogeneous
systems so xfsdump is probably not an option across the board.  My
thinking directly above related to the possibility that his current
software may have integrated functions for tar'ing then compressing at
the directory level.  This would save space, cause less free space
fragmentation when old files are deleted, and allow decompressing files
that aren't insanely large in the event a single file needs to be
restored from a compressed tar file.  Of course the latter can be
accomplished with xfsdump/restore, but not on heterogeneous systems.
And AFAIK there isn't any nifty FOSS backup software that makes use of
xfsdump/restore, requiring some serious custom scripting.  If there
exists such backup software please do share. :)
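For what it's worth, the per-directory tar+compress scheme I'm describing
is trivial to sketch in shell.  This is just a minimal illustration, not
any particular backup product; the /tmp/backup_demo paths and file names
are made up for the example:

```shell
#!/bin/sh
set -e

# Hypothetical layout: one source directory per backup set, one
# restore target.  All names here are invented for the demo.
mkdir -p /tmp/backup_demo/data /tmp/backup_demo/restore
printf 'hello\n' > /tmp/backup_demo/data/file.txt

# Archive the whole directory as one large compressed file.  A single
# big file causes far less free space fragmentation when it is later
# deleted than thousands of small files would.
tar -czf /tmp/backup_demo/data.tar.gz -C /tmp/backup_demo data

# Restoring a single file only requires unpacking that one member,
# not the entire backup set.
tar -xzf /tmp/backup_demo/data.tar.gz -C /tmp/backup_demo/restore \
    data/file.txt

cat /tmp/backup_demo/restore/data/file.txt
```

The point being: keep each compressed archive small enough that a
single-file restore doesn't force you to decompress something insanely
large, while still writing large contiguous files to the filesystem.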

-- 
Stan
