
Re: I/O hang, possibly XFS, possibly general

To: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Subject: Re: I/O hang, possibly XFS, possibly general
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Sat, 4 Jun 2011 20:32:47 +1000
Cc: Paul Anderson <pha@xxxxxxxxx>, Christoph Hellwig <hch@xxxxxxxxxxxxx>, xfs-oss <xfs@xxxxxxxxxxx>
In-reply-to: <4DE9E97D.30500@xxxxxxxxxxxxxxxxx>
References: <BANLkTim_BCiKeqi5gY_gXAcmg7JgrgJCxQ@xxxxxxxxxxxxxx> <20110603004247.GA28043@xxxxxxxxxxxxx> <20110603013948.GX561@dastard> <BANLkTi=FjSzSZJXGofVjtiUe2ZNvki2R-Q@xxxxxxxxxxxxxx> <4DE9E97D.30500@xxxxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.20 (2009-06-14)
On Sat, Jun 04, 2011 at 03:14:53AM -0500, Stan Hoeppner wrote:
> On 6/3/2011 10:59 AM, Paul Anderson wrote:
> > Not sure what I can do about the log - man page says xfs_growfs
> > doesn't implement log moving.  I can rebuild the filesystems, but for
> > the one mentioned in this thread, this will take a long time.
> 
> See the logdev mount option.  Using two mirrored drives was recommended;
> I'd go a step further and use two quality "consumer grade", i.e. MLC-based,
> SSDs, such as:
> 
> http://www.cdw.com/shop/products/Corsair-Force-Series-F40-solid-state-drive-40-GB-SATA-300/2181114.aspx
> 
> Rated at 50K 4K write IOPS, about 150 times greater than a 15K SAS drive.

If you are using delayed logging, then a pair of mirrored 7200rpm
SAS or SATA drives would be sufficient for most workloads as the log
bandwidth rarely gets above 50MB/s in normal operation.
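
(For reference, a minimal sketch of that setup; device names here are
placeholders, and on kernels where delayed logging is not yet the
default it is enabled with the delaylog mount option:

    # external log on a mirrored pair, delayed logging enabled
    mkfs.xfs -l logdev=/dev/md0,size=128m /dev/sdb1
    mount -o logdev=/dev/md0,delaylog /dev/sdb1 /data

Note that the log device has to be given on every mount.)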

If you have fsync-heavy workloads, or are not using delayed logging,
then you really need to put the log on the RAID5/6 device behind a
BBWC (battery-backed write cache), because the log is -seriously-
bandwidth intensive. I can drive >500MB/s of log throughput on
metadata intensive workloads on 2.6.39 when not using delayed logging
or when regularly forcing the log via fsync. You sure as hell don't
want to run a sustained long-term write load like that on consumer
grade SSDs.....
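
If you want to see what your own workload generates, watch the write
bandwidth to the log device during a metadata-heavy run, e.g.
(assuming the external log is on /dev/md0 as above):

    # extended device stats in MB/s, 5-second intervals
    iostat -xm 5 /dev/md0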

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
