
To: Jirka Kosina <jikos@xxxxxxxx>
Subject: Re: 2.6.0-test8 XFS bug
From: Nathan Scott <nathans@xxxxxxx>
Date: Sat, 25 Oct 2003 14:28:41 +1000
Cc: linux-kernel@xxxxxxxxxxxxxxx, linux-xfs@xxxxxxxxxxx
In-reply-to: <Pine.LNX.4.58.0310240853500.30731@xxxxxxxxxxxxx>; from jikos@xxxxxxxx on Fri, Oct 24, 2003 at 09:03:01AM +0200
References: <Pine.LNX.4.58.0310232336180.6971@xxxxxxxxxxxxx> <20031024000951.GH858@frodo> <Pine.LNX.4.58.0310240853500.30731@xxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.2.5i
On Fri, Oct 24, 2003 at 09:03:01AM +0200, Jirka Kosina wrote:
> On Fri, 24 Oct 2003, Nathan Scott wrote:
> > > Oct 22 13:12:56 storage2 kernel: Filesystem "dm-0": XFS internal error 
> > > xfs_alloc_read_agf at line 2208 of file fs/xfs/xfs_alloc.c.  Caller 
> > > 0xc01e8f5c
> > You're allocating real disk space for delayed allocate file data
> > down this path, and the read of the allocation group header found
> > something that didn't look at all like metadata on disk.
> > So, you definitely have corruption and will need to xfs_repair.
> > Any ideas as to what operations triggered the initial problem?
> > (is it reproducible for you?)
> 
> I can reproduce it easily - all that's needed is to NFS-mount this
> partition from clients and start writing a file to it, as I wrote
> before. The crash occurs immediately after the transfer begins.

OK, I missed that information in your last mail; I thought you had
run successful NFS tests and then failed on a local cp.  Looks like
we need to focus more on XFS/NFS testing in 2.6; will do.
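
For reference, I'll be trying to reproduce with something along
these lines (the server name, export path and file size below are
illustrative placeholders, not taken from your report):

    # on an NFS client -- a sketch of the reproduction
    mount -t nfs storage2:/export /mnt/test   # placeholder host/paths
    dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=1024
    umount /mnt/test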

> As I've said, I wouldn't blame HW failure or anything like that,
> because ext3 does the same job flawlessly, as far as I can see.

Understood -- it's good to have it narrowed down to XFS.  Was that
ext3 test on the same DM device, also at 5.5 TB with LBD, and with
NFS involved too?  Could you send me your xfs_repair output and
your filesystem geometry (xfs_info)?
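
That is, something along these lines, with the device and mount
point substituted for your own (the names below are placeholders):

    # geometry can be taken while the filesystem is mounted
    xfs_info /mnt/storage > xfs_info.out        # placeholder mount point
    # unmount, then do a no-modify repair pass and capture the output
    umount /mnt/storage
    xfs_repair -n /dev/mapper/dm-0 > repair.out 2>&1   # placeholder device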

thanks.

-- 
Nathan

