Re: XFS shutdown with 1.3.0

To: Simon Matter <simon.matter@xxxxxxxxxxxxxxxx>
Subject: Re: XFS shutdown with 1.3.0
From: Nathan Scott <nathans@xxxxxxx>
Date: Mon, 8 Sep 2003 09:24:46 +1000
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <2588.>
References: <41782.> <20030902071613.GB1378@frodo> <43946.> <20030905052032.GD1126@frodo> <2588.>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.5.3i
On Fri, Sep 05, 2003 at 11:22:13AM +0200, Simon Matter wrote:
> >> [root@xxl root]# xfs_info /home
> >> meta-data=/home                  isize=256    agcount=160, agsize=262144 blks
> >>          =                       sectsz=512
> >> data     =                       bsize=4096   blocks=41889120, imaxpct=25
> >>          =                       sunit=32     swidth=96 blks, unwritten=0
> >> naming   =version 2              bsize=4096
> >> log      =external               bsize=4096   blocks=25600, version=1
> >>          =                       sectsz=512   sunit=0 blks
> >> realtime =none                   extsz=393216 blocks=0, rtextents=0
> >>
> ...
> Unfortunately the problem looks like a timebomb to me. Is there a way to
> find out whether a filesystem has ever been grown? This would help me to
> find out whether the growing was the culprit here.

The above filesystem has almost certainly been grown.  You can
tell because the agsize is fairly small and the agcount is quite
large (the only case where this would not indicate growth is if
that agsize/agcount combination was explicitly requested at mkfs
time, and that's unlikely, I think).
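A rough sanity check, using the numbers from the xfs_info output
above (agsize is in filesystem blocks, so multiply by bsize to
get bytes per allocation group):

```shell
# Values taken from the xfs_info output quoted above.
bsize=4096        # filesystem block size in bytes
agsize=262144     # allocation group size in blocks
agcount=160       # number of allocation groups

ag_bytes=$((agsize * bsize))                        # bytes per AG
fs_gib=$((ag_bytes / 1024 / 1024 / 1024 * agcount)) # approx. total size

echo "AG size: $((ag_bytes / 1024 / 1024)) MiB, ~${fs_gib} GiB total"
```

That works out to 1 GiB per AG across a ~160 GiB filesystem;
mkfs would normally pick far fewer, larger AGs for a device that
size, which is why the geometry looks grown.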

To contradict Eric - ;) - there is a cute trick you can use to
tell: if you run "mkfs.xfs -N /dev/XXX", it will just print the
geometry that mkfs _would_ have used (-N means "don't actually
write anything"), so if that doesn't match the actual filesystem
geometry (in particular, the agcount= field), then it has likely
been grown.
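The comparison that trick boils down to can be sketched like
this (the device and mount point are placeholders, and the two
agcount values are illustrative, not captured from a real run):

```shell
# In practice you would capture the two values like so:
#   mkfs.xfs -N /dev/XXX   # prints the geometry mkfs WOULD use, writes nothing
#   xfs_info /mnt/point    # prints the geometry the filesystem actually has
# Here the agcount values are filled in by hand for illustration.

mkfs_agcount=16    # hypothetical: what mkfs -N would choose for this device today
real_agcount=160   # what xfs_info reported in the output quoted above

if [ "$real_agcount" -gt "$mkfs_agcount" ]; then
    echo "agcount differs: filesystem has likely been grown"
else
    echo "agcount matches: no sign of growfs"
fi
```

The check is only a heuristic, for the same reason given below:
unusual mkfs options at creation time can produce a mismatch
without any growfs ever having run.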

Both this method and Eric's method can be fooled if unusual mkfs
options were used when the filesystem was created, however.


