
Re: xfs and raid5 - "Structure needs cleaning for directory open"

To: Doug Ledford <dledford@xxxxxxxxxx>
Subject: Re: xfs and raid5 - "Structure needs cleaning for directory open"
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Tue, 18 May 2010 07:45:32 +1000
Cc: Rainer Fuegenstein <rfu@xxxxxxxxxxxxxxxxxxxxxxxx>, xfs@xxxxxxxxxxx, linux-raid@xxxxxxxxxxxxxxx
In-reply-to: <4BF1B4FE.7020503@xxxxxxxxxx>
References: <20100510022033.GB7165@dastard> <4BF1B4FE.7020503@xxxxxxxxxx>
User-agent: Mutt/1.5.20 (2009-06-14)
On Mon, May 17, 2010 at 05:28:30PM -0400, Doug Ledford wrote:
> On 05/09/2010 10:20 PM, Dave Chinner wrote:
> > On Sun, May 09, 2010 at 08:48:00PM +0200, Rainer Fuegenstein wrote:
> >>
> >> today in the morning some daemon processes terminated because of
> >> errors in the xfs file system on top of a software raid5, consisting
> >> of 4*1.5TB WD caviar green SATA disks.
> > 
> > Reminds me of a recent(-ish) md/dm readahead cancellation fix - that
> > would fit the symptoms (btree corruption showing up under heavy IO
> > load but no corruption on disk). However, I can't seem to find any
> > references to it at the moment (can't remember the bug title), but
> > perhaps your distro doesn't have the fix in it?
> > 
> > Cheers,
> > 
> > Dave.
> That sounds plausible, as does hardware error.  A memory bit flip under
> heavy load would cause the in memory data to be corrupt while the on
> disk data is good.

The data dumps from the bad blocks weren't wrong by a single bit -
they were unrecognisable garbage - so it is very unlikely that a
memory error is causing the problem.
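The distinction is easy to see in miniature: a DRAM bit flip changes exactly one bit of the buffer, so comparing a bad in-memory copy against the good on-disk copy tells the two failure modes apart. A hypothetical sketch (the buffers and helper name are invented for illustration):

```python
def differing_bits(bad: bytes, good: bytes) -> int:
    """Count how many bits differ between two equal-length buffers."""
    return sum(bin(a ^ b).count("1") for a, b in zip(bad, good))

good = bytes([0x00, 0xFF, 0x42, 0x10])
bit_flip = bytes([0x00, 0xFF, 0x42, 0x30])  # exactly one bit flipped
garbage = bytes([0xDE, 0xAD, 0xBE, 0xEF])   # unrecognisable junk

print(differing_bits(bit_flip, good))  # 1 -> consistent with a memory error
print(differing_bits(garbage, good))   # many bits -> not a single-bit event
```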

> By waiting to check it until later, the bad memory
> was flushed at some point and when the data was reloaded it came in ok
> this time.

Yup - XFS needs to do a better job of catching this case - the
prototype metadata checksumming patch caught most of these cases...
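The idea behind that checksumming work can be sketched as: store a CRC alongside each metadata block at write time and verify it on every read, so a block that was mangled in memory or in transit is caught immediately rather than interpreted as garbage. The layout and helper names below are invented for illustration, and `zlib.crc32` stands in for the CRC32c the real patches use:

```python
import zlib

def write_block(payload: bytes) -> bytes:
    # Prepend a 4-byte CRC of the payload, as a self-describing block would.
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    return crc.to_bytes(4, "little") + payload

def read_block(block: bytes) -> bytes:
    # Recompute the CRC on read; a mismatch means the copy is corrupt.
    stored = int.from_bytes(block[:4], "little")
    payload = block[4:]
    if zlib.crc32(payload) & 0xFFFFFFFF != stored:
        raise IOError("metadata CRC mismatch: block is corrupt")
    return payload

blk = write_block(b"btree node contents")
assert read_block(blk) == b"btree node contents"

corrupt = blk[:8] + b"\x00" + blk[9:]  # clobber a byte mid-block
# read_block(corrupt) now raises IOError instead of returning garbage
```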


Dave Chinner
