
Re: Kernel hangs with mongo.pl benchmark.

To: Paul Schutte <paul@xxxxxxxxxxx>
Subject: Re: Kernel hangs with mongo.pl benchmark.
From: Seth Mos <knuffie@xxxxxxxxx>
Date: Thu, 30 Aug 2001 22:50:33 +0200 (CEST)
Cc: XFS mailing list <linux-xfs@xxxxxxxxxxx>
In-reply-to: <3B8EA1A0.F3EC2326@it.up.ac.za>
Sender: owner-linux-xfs@xxxxxxxxxxx
On Thu, 30 Aug 2001, Paul Schutte wrote:

> Seth Mos wrote:
> > On Thu, 30 Aug 2001, Paul Schutte wrote:

> > If you toggle the mode in the bios to scsi mode it works as a normal scsi
> > controller. It is detected in this case by the normal adaptec driver.
> >
> 
> It is operating in RAID mode in the BIOS and not SCSI mode.
> There is also a 36GBx4 RAID 5 array which I didn't use for the test. It is
> not mounted at the moment.

You have the latest driver I assume? Their performance was good but we
decided to switch them for the AMI MegaRaid cards for compatibility
reasons.

> > Do you mean packing as in gzip or something similar? XFS uses delayed
> > allocation, which means that it might allocate more space than it would
> > normally need in order to reduce fragmentation. After a short period
> > XFS will return the overallocated space.
> >
> 
> No, there is no compression like gzip or anything. When the files are
> created, the usage shoots up to about 2GB. It then starts dropping down
> to about 900MB. It is probably the delayed allocation. I see this often
> on XFS when there are a lot of small files.
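
That sounds like delayed allocation indeed. If you want to watch the
overallocated space being handed back, a quick sketch along these lines
should do it (Python; the mount point, file count and sizes below are
made-up values, adjust them for your setup):

#!/usr/bin/env python
# Minimal sketch: watch XFS hand back overallocated space after a
# small-file workload. MOUNT is an assumed XFS mount point; point it
# at a real XFS filesystem before running.
import os
import time

MOUNT = "/mnt/xfs"                      # assumed XFS mount point
WORKDIR = os.path.join(MOUNT, "smallfiles")

def used_kb(path):
    # Space in use on the filesystem holding 'path', in KB.
    st = os.statvfs(path)
    return (st.f_blocks - st.f_bfree) * st.f_frsize // 1024

os.makedirs(WORKDIR, exist_ok=True)
before = used_kb(MOUNT)

# Create a pile of small files - the workload that shows the effect.
for i in range(20000):
    with open(os.path.join(WORKDIR, "f%05d" % i), "wb") as f:
        f.write(b"x" * 1024)            # 1KB payload per file

print("right after create: +%d KB in use" % (used_kb(MOUNT) - before))

# Poll for a couple of minutes; the usage should drop back down as
# the delayed allocations get flushed out.
for n in range(12):
    time.sleep(10)
    print("after %3ds: +%d KB in use" % ((n + 1) * 10,
                                         used_kb(MOUNT) - before))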

OK, I am running the tests here too on a spare 4.3GB disk that I had lying
around. It's not screaming fast, but it works.

I have had mongo.pl running on my home box with 6 processes for over an
hour now and the machine has not crashed yet.

The home machine is an AMD Athlon 1.4GHz with 256MB of DDR RAM and an
AMD760 chipset with a VIA IDE UDMA100 controller. The disk is operating in
UDMA33 mode. The process is still running, but it looks like we need
Adaptec hardware to confirm. Eric was testing this IIRC.

Cheers
Seth

