On Fri, 26 Apr 2002, Paul Schutte wrote:
> I know that it grows with the size, but the rate is much too slow.
> If you create a 2Gb filesystem, you will have a 1200b log.
> If you create a 64Gb filesystem, you still have the same 1200b log.
> (That was still the case when I set up my mail server a month ago.)
> If you have 80 clients logging in on an 8Gb partition, as in his case,
> you can be sure to have your performance limited by your
> 1200b log.
> 1200b is good for a workstation, not for a high-performance server.
> How was he supposed to know that?
> The size of the filesystem does not matter as much as the amount
> of I/O that you expect, as Steve pointed out.
> Larger filesystems obviously have potential for more I/O.
> Maybe I put this a bit harsh, but I am trying to defend XFS's honour.
> All the people I have encountered who said XFS's performance sucks
> used the default log size.
> After correcting that mistake, they were impressed by XFS.
> I bet that we have lost a lot of people because of a too-small
> log size.
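To make the numbers above concrete, here is a sketch of what the default log amounts to and how a larger one can be requested at mkfs time. The device name is a placeholder, and the 32768-block figure is just an illustrative choice, not a recommendation:

```shell
# The default log discussed above is 1200 filesystem blocks.
# With 4096-byte blocks that is under 5 MB:
echo $((1200 * 4096))   # 4915200 bytes, roughly 4.7 MB

# A larger log can be requested when the filesystem is created
# (placeholder device -- do NOT run this against a disk with data on it):
#   mkfs.xfs -l size=32768b /dev/sdX
# Afterwards, xfs_info on the mount point reports the log section,
# where "blocks=" should show the requested size.
```

The point is simply that the log does not scale with the amount of expected I/O unless you size it yourself.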
Excuse me for dropping in, but I wanted to say some things about this.
We are using an XFS partition for a high I/O load (many IOPs, not too
much throughput). The partition is 100Gb and it's using its default log
size (as reported by xfs_info) of 3200 blocks (4096-byte blocks).
Also, I am monitoring the IOPs (MRTG graphs) from /proc/stat | grep
disk_io. Would it be right to consider the first value from /proc/stat
as the number of I/O transactions?
Would 52710982 be the number of I/O transactions for device 48,0?
If yes, then it would help very much to gather statistics from them and
maybe answer the question of whether a log of size X would lower the
performance of an XFS partition of size Y at Z transactions/s.
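For what it's worth, on 2.4 kernels each disk_io entry in /proc/stat has the form (major,minor):(total_ios,reads,read_blocks,writes,write_blocks), so the first field inside the tuple should indeed be the running total of I/O operations. A minimal sketch of pulling it out (the sample line below is made up, reusing the 52710982 figure from above; the other numbers are invented):

```shell
# Hypothetical /proc/stat disk_io line for device (48,0); in practice
# you would use:  line=$(grep disk_io /proc/stat)
line='disk_io: (48,0):(52710982,30000000,240000000,22710982,181687856)'

# Extract the first tuple field (total I/O operations) for device 48,0:
total=$(echo "$line" | sed -n 's/.*(48,0):(\([0-9]*\),.*/\1/p')
echo "$total"   # 52710982
```

Sampling that counter periodically and taking the difference gives transactions per interval, which is what MRTG would graph.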
PS: there is a single XFS partition on device 48,0.