
Re: Configuring large XFS filesystem

To: linux-xfs@xxxxxxxxxxx
Subject: Re: Configuring large XFS filesystem
From: "Net Llama!" <netllama@xxxxxxxxxxxxx>
Date: Thu, 11 Sep 2003 20:43:50 -0700
In-reply-to: <3F613BE3.6000507@xxxxxxx>
Organization: HAL III
References: <3F613BE3.6000507@xxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.5b) Gecko/20030827
On 09/11/03 20:22, Kevin P. Fleming wrote:

> I've been using XFS for a while now on Linux, but never on a filesystem larger than about 60GB. Tomorrow I need to configure a disk array for a client with a single XFS filesystem. The total size of the filesystem will be approximately 300GB, and it needs to be able to hold 1.5-2.0 million files at any one time. There are no database or transaction-processing workloads; it's just a big fat file server for their network. They store a wide range of file sizes, but at least 50% of the files will be less than 32KB in size. The filesystem will be shared out using Samba 3.0, and there will be limited usage of extended attributes and ACLs through Samba (probably no more than a couple thousand files, unless Samba decides to put extended attributes on things I'm not aware of yet).
>
> The server is running kernel 2.6.0-test5. Does anyone have suggestions on configuring the filesystem? I hesitate to just use the mkfs.xfs defaults for something this large, and I certainly don't want them to run out of space for new files/directories while the filesystem is not yet full.
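[The question above is essentially a mkfs.xfs tuning question. As a minimal sketch (not from this thread), the kind of invocation being asked about might look like the following, using flags documented in mkfs.xfs(8); `/dev/sdX` is a placeholder device, and the specific values are illustrative assumptions, not recommendations from the list:]

```shell
#!/bin/sh
# Hypothetical mkfs.xfs invocation (NOT executed here; /dev/sdX is a
# placeholder). 512-byte inodes leave inline room for extended
# attributes/ACLs; maxpct raises the ceiling on space used for inodes;
# a larger directory block size can help directories with many entries.
MKFS_CMD="mkfs.xfs -i size=512,maxpct=25 -n size=8192 /dev/sdX"
echo "$MKFS_CMD"

# Back-of-envelope inode headroom: 2,000,000 files at 512 bytes per
# inode is under 1GB, a small fraction of a 300GB filesystem.
INODE_MB=$(( 2000000 * 512 / 1024 / 1024 ))
echo "${INODE_MB} MB of inode space for 2 million files"
```

[The arithmetic suggests inode space itself is not the constraint; the worry in the question is more about defaults chosen at mkfs time that cannot be changed later.]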

I'd have to wonder why you're using an unstable kernel on a production server.

L. Friedman                                    netllama@xxxxxxxxxxxxx
Linux Step-by-step & TyGeMo:                    http://netllama.ipfox.com

  8:40pm  up 6 days,  7:33,  1 user,  load average: 0.12, 0.25, 0.30
