
Re: shutting down f/s

To: Steve Lord <lord@xxxxxxx>
Subject: Re: shutting down f/s
From: kris buggenhout <kris.buggenhout@xxxxxxxxx>
Date: Thu, 04 Oct 2001 23:30:36 +0200
Cc: "linux-xfs@xxxxxxxxxxx" <linux-xfs@xxxxxxxxxxx>
References: <200110041515.f94FFW607488@jen.americas.sgi.com>
Sender: owner-linux-xfs@xxxxxxxxxxx
Steve Lord wrote:

> > Steve Lord wrote:
> > >
> > > > Might this be related to shutdowns taking a long time to actually
> > > > unmount filesystems? I noticed that since I upped the kernel to a
> > > > 2.4.10 release (don't know the exact checkout), shutdowns hang on
> > > > umount for a significantly longer period, as in taking 2-3 minutes
> > > > instead of 20-30 secs.
> > >
> > > Hmm, that's a new one to me too; my shutdowns on all XFS boxes do not
> > > appear to have become any longer - however, the systems are usually
> > > not busy before the unmount. The new VM showed up later in the
> > > 2.4.10-pre series; possibly this was the cause.
> > >
> > > Can you characterize how much activity there is on the system before
> > > shutdown - would there typically be a lot of dirty data in filesystems?
> > > Also, you mention write activity - is this during shutdown?
> > >
> > After a whole day of writing files (FTP logs, FTP data, archiving data
> > to CD, removing data) - approx. 1-3 GB/day - and with some processes
> > still holding a lock on that dir (they should get killed before umount
> > gets called), I would say this fs should hold a high percentage of
> > dirty data, as it is a very active fs (read/write/delete).
>
> So this probably relates mostly to how the VM changes have affected the
> flushing of data out to disk.
>

I suppose so. I am not that proficient in kernel hacking (I am not a coder),
but it seems logical: flushing to disk gets delayed. Given that I upgraded
the memory too, to 400 MB, there is room for a larger fs cache. As not a lot
of processes run, not much system memory has to be devoted to running tasks,
which lets the cache grow to its maximum.
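
A minimal sketch of what I mean by the flush being the expensive part
(untested, and "/data" is just a placeholder mount point): force the dirty
data out with sync(2) before calling umount(2), so the unmount itself has
less writeback left to do.

/*
 * Minimal sketch (untested): flush dirty data before unmounting,
 * so the umount(2) call itself has less writeback left to do.
 * "/data" is a hypothetical mount point - substitute your own.
 */
#include <stdio.h>
#include <unistd.h>     /* sync(2) */
#include <sys/mount.h>  /* umount(2) */

int main(void)
{
    /* Schedule (and on Linux, wait for) writeback of all dirty buffers. */
    sync();

    /* With the cache already flushed, the unmount should return quickly. */
    if (umount("/data") != 0) {
        perror("umount /data");
        return 1;
    }
    return 0;
}

Running sync from the shutdown scripts just before the umount lines would
have the same effect without any code.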

I don't remember the defaults in the kernel, but if they are set like in
HP-UX (default fs cache = 50% on HP-UX 11), this could amount to a cache
holding 200 MB of data (50% of the 400 MB of RAM), which could account for
the longer delay in unmounting (flushing) - the disk is actually rattling
on umount...
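
Rather than guess, something like the following would show how much of the
400 MB the cache is actually holding (untested; the field names assume the
2.4-style /proc/meminfo layout):

/*
 * Minimal sketch (untested): report what the buffer and page caches
 * currently hold, by scanning /proc/meminfo for the Buffers: and
 * Cached: lines (values are in kB on 2.4 kernels).
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];

    if (f == NULL) {
        perror("/proc/meminfo");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "Buffers:", 8) == 0 ||
            strncmp(line, "Cached:", 7) == 0)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}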

I will look into the settings; a quick way to dump them is sketched below.
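
On 2.4, the knobs that govern when dirty buffers get flushed live in
/proc/sys/vm/bdflush. A minimal sketch to dump them (untested; the meaning
of the individual fields varies between kernel versions, so treat the
comment about nfract as an assumption on my part):

/*
 * Minimal sketch (untested): dump the bdflush tunables on a 2.4 kernel.
 * The first field (nfract) is, as far as I know, the percentage of dirty
 * buffers that wakes the flushing daemon - treat that as an assumption.
 */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/vm/bdflush", "r");
    char buf[256];

    if (f == NULL) {
        perror("/proc/sys/vm/bdflush");
        return 1;
    }
    if (fgets(buf, sizeof(buf), f) != NULL)
        printf("bdflush parameters: %s", buf);
    fclose(f);
    return 0;
}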


kind regards...

