
Re: Kernel hangs with mongo.pl benchmark.

To: Seth Mos <knuffie@xxxxxxxxx>
Subject: Re: Kernel hangs with mongo.pl benchmark.
From: Paul Schutte <paul@xxxxxxxxxxx>
Date: Fri, 31 Aug 2001 00:24:11 +0200
Cc: XFS mailing list <linux-xfs@xxxxxxxxxxx>
References: <Pine.BSI.4.10.10108302244090.17576-100000@xxxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
Seth Mos wrote:

> On Thu, 30 Aug 2001, Paul Schutte wrote:
> > Seth Mos wrote:
> > > On Thu, 30 Aug 2001, Paul Schutte wrote:
> > > If you toggle the mode in the bios to scsi mode it works as a normal scsi
> > > controller. It is detected in this case by the normal adaptec driver.
> > >
> >
> > It is operating in the RAID mode in the BIOS and not the SCSI mode.
> > There is also a 36GBx4 RAID 5 array which I didn't use for the test. It is
> > not mounted at the moment.
> You have the latest driver I assume? Their performance was good but we
> decided to switch them for the AMI MegaRaid cards for compatibility
> reasons.
> > > Do you mean packing as in gzip or something similar? XFS uses delayed
> > > allocation which means that it might allocate more space than it would
> > > normally need to reduce fragmentation. So after a slight period XFS will
> > > be returning the overallocated space.
> > >
> >
> > No, there is no compression like gzip or anything. When the files are
> > created, the usage shoots up to about 2GB. It then starts dropping down to
> > about 900MB. It is probably the delayed allocation. I see this often on XFS
> > when there are a lot of small files.
> Ok, I am running the tests here too on a spare 4.3GB disk that I had
> around. It's not screaming fast but it works.
> I have the mongo.pl running now on my home box with 6 processes for over
> an hour and the machine has not crashed yet.
> The home machine is an AMD Athlon 1.4GHz with 256MB DDR RAM and an AMD760
> chipset with a VIA IDE UDMA100 controller. The disk is operating in UDMA33
> mode. The process is still running but it looks like we need adaptec
> hardware to confirm. Eric was testing this IIRC.

I checked straight away, and the driver I am using is still the latest.

I managed to complete the creation part on the 4400. It is still percolating
away on the copy ...

kernel-2.4.10-pre2 and egcs-1.1.2 are used here.

1) If I mount with logbufs=8, it dies.
2) If I do mkfs.xfs with -i maxpct=90,size=2048, it also dies, irrespective of
the logbufs setting.

3) mkfs.xfs -f -l size=8192b -i maxpct=90,size=2048 and mount -o logbufs=8 is a
working combo ;-)

4) mkfs.xfs -f -l size=8192b and mount -o logbufs=2 is stable.

These observations hold true for the Dell 4400 in my test environment.
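For clarity, the stable combination in 4) above can be reproduced roughly as
follows (the device and mount point here are placeholders; substitute your own):

```shell
# Hypothetical device and mount point -- adjust for your system.
DEV=/dev/sdb1
MNT=/mnt/test

# 8192-block log, default inode settings: stable with logbufs=2 in my tests.
mkfs.xfs -f -l size=8192b $DEV
mount -t xfs -o logbufs=2 $DEV $MNT
```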

The 1400C just died on me using the setting in 4) above.
I am going to rebuild the kernel using the old style driver to see if it helps.

When should one use logbufs=8 and when not?
I messed around with the -i option because ext2 ran out of inodes on the same ...
I have 4 mailservers running XFS, on Adaptec hardware, in production, mounted ...
(Butterflies are eating chunks out of my stomach as I am typing ...)

They have been running for 2 months now without any problems.
I think I should remount them with logbufs=2 just in case.
Will I lose anything by doing that?
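For the record, switching an existing filesystem to logbufs=2 is only a mount
option change, not an on-disk format change, so a sketch like the following
should suffice (mount point and device are hypothetical):

```shell
# Hypothetical mount point and device -- substitute your own.
# logbufs tunes the number of in-memory log buffers, not the on-disk
# format, so changing it should only affect performance, not stored data.
umount /var/spool/mail
mount -t xfs -o logbufs=2 /dev/sda5 /var/spool/mail
```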

For those interested:
The backup speed of these servers increased from about 50Mb/min to about
85Mb/min on full backups when I switched from ext2 to XFS.

I really like XFS, am very impressed with the quality, and appreciate the effort
put into making it happen.


