On Mon, May 17, 2010 at 06:18:28PM -0400, Doug Ledford wrote:
> On 05/17/2010 05:45 PM, Dave Chinner wrote:
> > On Mon, May 17, 2010 at 05:28:30PM -0400, Doug Ledford wrote:
> >> On 05/09/2010 10:20 PM, Dave Chinner wrote:
> >>> On Sun, May 09, 2010 at 08:48:00PM +0200, Rainer Fuegenstein wrote:
> >>>> today in the morning some daemon processes terminated because of
> >>>> errors in the xfs file system on top of a software raid5, consisting
> >>>> of 4*1.5TB WD caviar green SATA disks.
> >>> Reminds me of a recent(-ish) md/dm readahead cancellation fix - that
> >>> would fit the symptoms (btree corruption showing up under heavy IO
> >>> load but no corruption on disk). However, I can't seem to find any
> >>> references to it at the moment (can't remember the bug title), but
> >>> perhaps your distro doesn't have the fix in it?
> >>> Cheers,
> >>> Dave.
> >> That sounds plausible, as does a hardware error. A memory bit flip
> >> under heavy load would cause the in-memory data to be corrupt while
> >> the on-disk data is good.
> > The data dumps from the bad blocks weren't wrong by a single bit -
> > they were unrecognisable garbage - so it is very unlikely to be
> > a memory error causing the problem.
> Not true. It can still be a single bit error, just one higher up in
> the chain - e.g. a single bit error in the SCSI command used to read
> various sectors. Then you read in all sorts of wrong data and
> everything from there is totally whacked.
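To illustrate that point with some back-of-the-envelope arithmetic (a hypothetical sketch, not from the original thread - the LBA, the flipped bit position, and the sector size are all made-up values):

```python
# Hypothetical illustration: a single bit flip in the LBA field of a
# read command redirects the whole read to a distant part of the disk,
# so the data that comes back is unrelated garbage - not data that is
# off by one bit.

SECTOR_SIZE = 512  # bytes per sector, typical for these SATA disks

lba = 0x1000               # intended sector address (made-up example)
flipped = lba ^ (1 << 20)  # same address with bit 20 flipped in transit

offset_error = abs(flipped - lba) * SECTOR_SIZE
print(f"intended LBA:  {lba:#x}")
print(f"corrupted LBA: {flipped:#x}")
print(f"read lands {offset_error // (1024 * 1024)} MiB away")  # 512 MiB
```

So a one-bit transmission error can move the read half a gigabyte away, which is consistent with the dumped blocks looking like arbitrary garbage rather than nearly-correct data.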
I didn't say it *couldn't be* a bit error, just that it was _very
unlikely_. A hardware error that results only in repeated XFS btree
corruption in memory, without causing any other errors in the system,
is something I've never seen, even on machines with known bad memory,
HBAs, interconnects, etc. Applying Occam's Razor to this case
indicates that it is going to be caused by a software problem.
Yes, it's still possible that it's a hardware issue, just very, very
unlikely. And if it is hardware and you can prove that it was the
cause, then I suggest we all buy a lottery ticket.... ;)