
Re: xfs_repair 3.1.2 crashing

To: Michael Monnerie <michael.monnerie@xxxxxxxxxxxxxxxxxxx>
Subject: Re: xfs_repair 3.1.2 crashing
From: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Fri, 11 Jun 2010 20:38:40 -0500
Cc: xfs@xxxxxxxxxxx
In-reply-to: <201006120138.22265@xxxxxx>
References: <201006101306.07587@xxxxxx> <4C11127C.3030907@xxxxxxxxxxx> <201006120138.22265@xxxxxx>
User-agent: Thunderbird 2.0.0.24 (Macintosh/20100228)
Michael Monnerie wrote:
> On Thursday, 10 June 2010, Eric Sandeen wrote:
>> It'd be great to at least capture the issue by creating an
>>  xfs_metadump image for analysis...
> 
> I sent it to you in private.
> 
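(For anyone else following along: a metadump image is captured with
something along the lines of

  # xfs_metadump /dev/<your-device> /some/path/fs.metadump

ideally with the filesystem unmounted, and xfs_mdrestore can turn that
back into a sparse image file that xfs_repair can be pointed at, so a
problem like this can be poked at offline without touching the real
disks.  The device and output paths above are just placeholders.)
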
> But now I'm really puzzled: I bought two 2TB drives, set up LVM with
> XFS on them to get 4TB, and copied the contents from the server to
> that 4TB volume via rsync -aHAX. And now I have a broken XFS on those
> brand-new drives, without any crash, not even a reboot!
> 
> I got this message after running a "du -s" on the new disks:
>  du: cannot access `samba/backup/uranus/WindowsImageBackup/uranus/Backup 
> 2010-06-05 010014/852c2690-cf1a-11de-b09b-806e6f6e6963.vhd': Structure 
> needs cleaning

dmesg would be the right thing to do here ...
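For example, something along these lines should pull the kernel-side
corruption report back out (exact syslog file location varies by
distro):

  # dmesg | grep -i xfs
  # grep -i xfs /var/log/messages

The "Structure needs cleaning" that du printed is just EUCLEAN; the
kernel log message is what actually says where XFS tripped over the
corruption.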

> So I unmounted it and ran xfs_repair (v3.1.2):
> # xfs_repair -V
> xfs_repair version 3.1.2

which kernel, again?  The fork offset problems smell like something
that's already been fixed.
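A quick

  # uname -r

on that box would settle it.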

-Eric
                                                                               
> # xfs_repair /dev/swraid0/backup
> Phase 1 - find and verify superblock...
> Phase 2 - using internal log
>         - zero log...
>         - scan filesystem freespace and inode maps...
>         - found root inode chunk
> Phase 3 - for each AG...                             
>         - scan and clear agi unlinked lists...
>         - process known inodes and perform inode discovery...
>         - agno = 0
>         - agno = 1
> local inode 2195133988 attr too small (size = 3, min size = 4)
> bad attribute fork in inode 2195133988, clearing attr fork
> clearing inode 2195133988 attributes
> cleared inode 2195133988
>         - agno = 2
>         - agno = 3
>         - agno = 4
>         - agno = 5
>         - agno = 6
>         - agno = 7
>         - process newly discovered inodes...
> Phase 4 - check for duplicate blocks...
>         - setting up duplicate extent list...
>         - check for inodes claiming duplicate blocks...
>         - agno = 2
>         - agno = 4
>         - agno = 5
>         - agno = 6
>         - agno = 7
>         - agno = 3
>         - agno = 1
>         - agno = 0
> data fork in inode 2195133988 claims metadata block 537122652
> xfs_repair: dinode.c:2101: process_inode_data_fork: Assertion `err == 0' 
> failed.
> Aborted
> 
> What's this now? Did I copy the error from the source via rsync? ;-)
