

To: Robin Humble <rjh@xxxxxxxxxxxxxxxxxxxxxxxxxxx>, linux-xfs@xxxxxxxxxxx
Subject: Re: Everyone's favorite offtopic topic
From: Seth Mos <knuffie@xxxxxxxxx>
Date: Sun, 29 Jul 2001 20:43:52 +0200
In-reply-to: <200107291147.LAA16285@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
References: <4.3.2.7.2.20010729113909.03a12bc8@xxxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
At 21:47 29-7-2001 +1000, Robin Humble wrote:

>Seth Mos writes:
>>There recently was a message about building a $5000 dollar IDE raid on
>>slashdot IIRC.
>>Don't know the URL anymore.

>  http://staff.sdsc.edu/its/terafile/

>>Basic configuration was purchasing a large case which can house 16 disks
>>Purchase 16 100GB Maxtor IDE disks (the 540DX uses less power)
>>Purchase 2 3ware Escalade 6800 controllers
>>A PC for booting the machine.

>I was disappointed by this article. It was way more than $5k (probably
>that was Slashdot's mistake), which is to be expected for 16 disks.

They had 3 different models. The one they built was $6000 or so.

>I guess. However, their performance barchart (which looks impressive)
>bears little relation to the numbers in their spreadsheet.
>In particular their RAID5 write numbers are kinda low at ~18MB/s.
>The newer 7000 series 3ware cards might be better???

Write performance of RAID5 always sucks. Have you ever tried doing heavy database work on a RAID5 volume? Don't!
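
The back-of-the-envelope reason, with purely illustrative numbers: every
small RAID5 write costs four I/Os - read the old data, read the old
parity, write the new data, write the new parity. So if each spindle
manages, say, 100 random I/Os per second, an 8-disk RAID5 tops out
around 8*100/4 = 200 small writes per second, where a RAID0 of the same
disks would do 800.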

Every database server I have built uses either RAID1 or RAID10.
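
For what it's worth, a layered RAID10 (a RAID0 stripe over RAID1 mirror
pairs) can be built with mdadm along these lines - just a sketch, all
device names made up:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hde1 /dev/hdg1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdi1 /dev/hdk1
  mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

You lose half the raw capacity, but writes pay no parity penalty.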

>Anyway - our latest datapoint:
>Initial setup tests of our XFS RAID box with a lowly pentium 450, eight
>75G 7200rpm IBM drives, and two Promise Ultra100 Tx2 controller cards,
>cvs xfs kernel, seem to indicate that we can get around 40MB/s RAID5

Aha, a different RAID controller.

>writes. Alternatively we can double their 50-60MB/s RAID0 reads to around
>110MB/s(!). We were pretty happy to see those numbers :)

But you probably can't go much faster; the PCI bus does not go faster.
Otherwise use a ServerWorks chipset with a dual PCI bus, then you can
get past the elusive 133MB/s.
I believe that there are other motherboards out there that have 2 PCI
busses.

Even a Dell PowerEdge 1300 with a ServerWorks chipset had a dual PCI bus (a $2000 machine).
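
For reference, the arithmetic: standard PCI is 32 bits wide at 33MHz,
so 4 bytes * 33MHz = 133MB/s theoretical peak for the whole bus, shared
by every card on it. In practice you rarely see much more than
100-110MB/s of real data through a single bus, which is about where
your 110MB/s reads sit.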

>We're still setting up the machine (these are from late last night so
>should maybe be taken with a grain of salt) and are moving to faster
>memory and cpu soon, so hope to see them improve.  Also one of our
>disks is misbehaving and probably is dragging the numbers down.

How is it misbehaving? A bad cable perhaps (I've seen this with IDE before).

>I should try bonnie++ with 4x memory size and not just the default 2x
>also - that may lower our numbers a bit and remove more caching effects.
>We haven't played with chunk sizes, mkfs options, mount options, or
>external logs yet.
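
To force the 4x size instead of the default, something like this should
do it (sizes and paths are only an example, assuming 256MB of RAM and
the array mounted on /raid):

  bonnie++ -d /raid/scratch -s 1024 -u nobody

-d is the scratch directory, -s the file size in MB, and -u the user to
run as, since bonnie++ refuses to run as root.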

External logs make a huge difference, and using more logbufs during mount will probably help if you have a busy filesystem. Make sure you have more than 256MB of RAM.
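
A sketch of what that looks like, with made-up device names; the
external log device is named at mkfs time and again at mount time:

  mkfs.xfs -l logdev=/dev/hdc1,size=32m /dev/md0
  mount -t xfs -o logdev=/dev/hdc1,logbufs=8 /dev/md0 /raid

logbufs=8 is the maximum number of in-core log buffers; they cost some
memory, hence the 256MB remark.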

>I also tried the SDSC folks' 'fastest' 2xRAID5 + RAID0 configuration,
>but it was no quicker than just a RAID5 over all 8 disks. I guess
>they're seeing some artifacts from their 3ware cards/drivers.

With 2 cards and 2 different raidsets there should be no problems; I think their cards are having a hard time living together on the bus.

>XFS seemed slightly slower (5%?) than ext2 for the tests we've run so
>far. We're running some NFS tests over 100Mbit now. Gigabit will follow
>once we have another cat5 gigabit card somewhere to write from! :)

It might be slightly slower, but it is possible that you are getting bitten by the internal log on the RAID device. How many ethernet cards does the machine contain, and what network are you connected to? It might well be that you are not hitting the stalls yet because you are not pushing harder than 10MB/s over NFS.
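
100Mbit works out to 12.5MB/s on the wire at best, and after ethernet
and NFS overhead you typically land around 10-11MB/s, so a single
100Mbit client will not come close to stressing that array.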

Cheers
Seth
--
Seth
Every program has two purposes: one for which
it was written and another for which it wasn't.
I use the last kind.

