To: Nikita Danilov <NikitaDanilov@xxxxxxxxx>, Xuan Baldauf <xuan--reiserfs@xxxxxxxxxxx>
Subject: Re: [reiserfs-list] Re: benchmarks
From: Russell Coker <russell@xxxxxxxxxxxx>
Date: Tue, 17 Jul 2001 00:29:27 +0200
Cc: Chris Wedgwood <cw@xxxxxxxx>, rsharpe@xxxxxxxxxx, Seth Mos <knuffie@xxxxxxxxx>, Federico Sevilla III <jijo@xxxxxxxxxxxxxxxxxxxx>, linux-xfs@xxxxxxxxxxx, reiserfs-list@xxxxxxxxxxx
In-reply-to: <15187.18225.196286.123754@beta.namesys.com>
References: <Pine.BSI.4.10.10107141752080.18419-100000@xs3.xs4all.nl> <3B5341BA.1F68F755@baldauf.org> <15187.18225.196286.123754@beta.namesys.com>
Reply-to: Russell Coker <russell@xxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
On Mon, 16 Jul 2001 21:57, Nikita Danilov wrote:
>  > > If you have 10000 clients each opening 100 files you got 1e6 opened
>  > > files on the server---it wouldn't work. NFS was designed to be
>  > > stateless to be scalable.
>  >
>  > Every existing file has at least one name (or is a member of the hidden
>  > to-be-deleted-directory, and so has a name, too), and an object_id.
>  > Suppose the object_id is 32 bytes long. A virtual filedescriptor may be
>  > 4 bytes long, some housekeeping metadata 28 bytes, so we will have 64MB
>  > occupied in your scenario. Where's the problem? 80% of those 64MB can be
>  > swapped out.
>
> For each open file you have:
>
>  struct file (96b)
>  struct inode (460b)
>  struct dentry (112b)
>
> at least. This totals to 668M of kernel memory, that is, unpageable.
> All files are kept in several hash tables and hash-tables are known to
> degrade. Well, actually, I am afraid current Linux kernels cannot open
> 1e6 files.

Last time I tested this, one process could only open slightly more than 80,000 
files.  (I am not about to re-run that test at the moment, as doing so 
allocates a lot of kernel memory that can't be swapped out, and therefore 
requires a reboot to regain good performance.)

I am not sure why the limit is around 80K files, but I suspect that if I had 
more RAM the limit would be higher.
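
For illustration, here is a rough sketch of that sort of test (not the actual 
test program, just the general idea).  It creates and holds open as many 
distinct files as it can in the current directory, so each one costs a struct 
file, inode and dentry, and it reports the count when open() finally fails.  
It assumes the fd limits have already been raised, e.g. with 
"echo 1000000 > /proc/sys/fs/file-max" and "ulimit -n 1000000".

/* open-many.c - rough sketch of the test described above.
 * Creates and holds open as many distinct files as possible in the
 * current directory and reports how many it managed before open()
 * failed.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(void)
{
    char name[64];
    long count = 0;
    int fd;

    for (;;) {
        /* each iteration creates a new file and leaks the descriptor
         * on purpose, so the kernel structures stay allocated */
        snprintf(name, sizeof(name), "file-%08ld", count);
        fd = open(name, O_CREAT | O_RDWR, 0600);
        if (fd < 0) {
            printf("gave up after %ld open files: %s\n",
                   count, strerror(errno));
            break;
        }
        count++;
    }
    return 0;
}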

80K files takes 80M of unswappable RAM if the hash tables you refer to add 
another 356 bytes per file on top of the 668 bytes for the file, inode, and 
dentry structures (1024 bytes per open file in total).
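
If you want to check those structure sizes against your own kernel, the slab 
caches show them.  A throwaway sketch along these lines (it does the same job 
as grepping /proc/slabinfo by hand; on 2.4 kernels the third numeric column 
is the object size in bytes) prints the filp, inode and dentry cache lines:

/* slab-sizes.c - hypothetical helper: prints the /proc/slabinfo
 * lines for the slab caches that back open files (struct file lives
 * in "filp", plus the inode and dentry caches).
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/slabinfo", "r");
    char line[256];

    if (f == NULL) {
        perror("/proc/slabinfo");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        if (strstr(line, "filp") ||
            strstr(line, "inode_cache") ||
            strstr(line, "dentry_cache"))
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}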

80M of unswappable RAM is not a crushing burden on my Thinkpad, which has 256M 
of RAM.  On a large server machine 80M is no big deal at all.

However, most large servers will be killed by having even 10K files open and 
active at one time.

I doubt that there is a realistic situation where a server has so much disk 
capacity that it can handle 100K active files (how big would that be, 100 
spindles?  more?), yet so little RAM (by server standards) that it can't 
handle the file structures.

When you have server machines with 8G of RAM, if kernel memory is the issue 
then you should be able to have 2M or 3M files open (at roughly 1K of kernel 
memory per open file, 8G would cover 8M files in theory, but the kernel needs 
most of that RAM for other things)...

-- 
http://www.coker.com.au/bonnie++/     Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/       Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/     My home page

