
Re: ls -l versus du -sk after xfs_fsr

To: Eric Sandeen <sandeen@xxxxxxx>
Subject: Re: ls -l versus du -sk after xfs_fsr
From: Ludek Finstrle <luf@xxxxxxxxxx>
Date: Thu, 29 Sep 2005 07:44:10 +0200
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <433976C5.1000104@sgi.com>
References: <20050926071451.GA3751@soptik.pzkagis.cz> <4338128F.8000707@sgi.com> <20050927163531.GA19652@soptik.pzkagis.cz> <433976C5.1000104@sgi.com>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.4i
> Well, if you run into this again we'll dig some more :)

I tried xfs_fsr again and the problem is back.

> ># xfs_bmap -v Drafts
> >Drafts:
> > EXT: FILE-OFFSET      BLOCK-RANGE        AG AG-OFFSET        TOTAL FLAGS
> >   0: [0..151]:        14693632..14693783  7 (13568..13719)     152 00101
> 
> 152 * 512 = 77824, so that's fine... bmap reports only 78k used.  Not sure why
> du is reporting more...

# ls -l Inbox
-rwxrwx---  1 user  group  5038423 Sep 28 15:43 Inbox
# du -b Inbox
1347219456      Inbox
# xfs_bmap -v Inbox
Inbox:
 EXT: FILE-OFFSET      BLOCK-RANGE        AG AG-OFFSET          TOTAL FLAGS
   0: [0..9847]:       58182528..58192375 27 (1559424..1569271)  9848 00111
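
Just to relate the numbers: the single data-fork extent covers 9848 basic
blocks of 512 bytes, i.e. 9848 * 512 = 5042176 bytes, which is close to the
5038423 bytes from ls -l and nowhere near the 1347219456 that du reports.
du -sk (as in the subject) goes by the allocated block count (st_blocks),
which stat can show directly; the format string below is only an
illustration and assumes GNU coreutils stat:

# echo $((9848 * 512))
5042176
# stat -c '%s bytes apparent, %b blocks of %B bytes allocated' Inbox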

> if you still have a problematic file around, try xfs_bmap with the -a option.

# xfs_bmap -a Inbox
Inbox:
        0: [0..7]: 23962832..23962839
        1: [8..8388607]: hole
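
(If I read that right, only 8 * 512 = 4096 bytes are actually mapped in the
attribute fork, and the rest is a hole running out to the 8388608 * 512 =
4 GiB mark, so the attribute fork alone doesn't seem to explain the large
du figure.)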

> Do you happen to still have the xfs_repair output in scrollback somewhere?

Unfortunately I haven't. But there were a lot of lines about a bad block
count (the counts had 9 to 12 digits, maybe more, and were corrected e.g.
to 2 blocks). Some of the FS contents were also moved to lost+found.
If you are interested in the xfs_repair output I can run it again in the
evening (I'm in timezone GMT+2). The problem is on a production machine,
so I have to wait.
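
When I do, I can start with a read-only pass and capture the whole output.
A minimal sketch, with the mount point and device name only as placeholders:

# umount /mnt/data
# xfs_repair -n /dev/sdXN 2>&1 | tee xfs_repair-n.log

xfs_repair -n only scans and doesn't modify anything, but the filesystem
still has to be unmounted.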

> I suppose we should have checked the attribute fork.... any idea if you're
> using extended attributes?

I use ACLs (extended attributes).
# chacl -l Inbox
Inbox [u::rwx,u:user:rwx,g::---,g:group1:rwx,g:group2:rwx,m::rwx,o::---]
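
If the raw attribute names are useful, I can also dump them with getfattr
from the attr package (the ACL lives in the system namespace), e.g.:

# getfattr -d -m - Inbox

That is just a suggestion from my side, in case the on-disk attributes
matter here.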

Thanks,

Luf

