Thanks, this helps explain what the log is actually doing. The FAQ
says that xfs for Linux is still largely experimental and should not be
put on any production system. Is anyone running xfs on a 500+GB file
system with a fairly heavy user load? I would be interested in your
successes/failures with xfs.
Steve Lord wrote:
> > On Thu, 31 May 2001 at 2:21pm, C. J. Keist wrote
> > > Is there a standard formula for determining the log size for an
> > > xfs filesystem of a given size?
> > > I'll be looking at creating a 500GB xfs file system.
> > >
> > I asked the same question back in March (for a 560GB hardware IDE-SCSI
> > RAID), and Steve Lord suggested 16384b or 32768b. There was mention of
> > adding heuristics to mkfs.xfs for this at some point, but I don't recall
> > seeing any TAKEs for that...
> > Steve also suggested mounting with -o logbufs=8 if you expect heavy
> > traffic.
> > --
> > Joshua Baker-LePain
> > Department of Biomedical Engineering
> > Duke University
> We have done some more thinking about this since then, and the heuristics
> are in the latest mkfs, but I do not think it will bump the log size for
> a filesystem this small ;-) - it probably does not kick in until you are
> in the terabyte range.
> Anyway, the size of the log (those are 4K blocks by the way) governs
> how much metadata you can have in modified state without having to
> flush it to disk. A bigger log means there will be fewer occasions
> where you end up in what we call tail pushing, where each new
> transaction going into the filesystem has to push some metadata
> out to disk before it can get log space. Of course, constant sustained
> activity can always get you there - unless your disk runs at memory
> speeds. So a bigger log makes the filesystem run faster more of the
> time, but it also increases how long it takes to mount, especially if
> recovery is involved. You pays your money and takes your choice.
> Log size is not a function of filesystem size, but a function of how
> much metadata is changing per second.
> I would maybe go for 4096b (which is 16 Mbytes) for a fairly active
> large filesystem (I am calling yours large), but you might want to
> benchmark a bit if you really care; mkfs will not take too long to
> run (there are no inodes to create).
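For anyone following along, Steve's arithmetic can be sanity-checked with a quick
blocks-to-bytes conversion. The mkfs.xfs invocation sketched in the comment is
hypothetical (the device name is made up, not from this thread):

```shell
# Log size is given to mkfs.xfs in filesystem blocks; with 4 KiB blocks,
# Steve's suggested 4096 blocks works out to 16 MiB.
log_blocks=4096
block_size=4096
log_bytes=$(( log_blocks * block_size ))
echo "log size: $(( log_bytes / 1024 / 1024 )) MiB"   # prints "log size: 16 MiB"

# A hypothetical invocation for a large filesystem (placeholder device):
# mkfs.xfs -b size=4096 -l size=4096b /dev/sdb1
```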
> The iclog buffers setting is also useful - it controls how many log writes
> can be in flight to disk at once. If all your buffers are in transit
> then transactions will get backed up waiting for them.
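The logbufs option Steve and Joshua mention is set at mount time. A sketch of
how that might look (device and mount point are placeholders, not from this
thread):

```shell
# mount -t xfs -o logbufs=8 /dev/sdb1 /export
#
# or persistently in /etc/fstab:
# /dev/sdb1  /export  xfs  logbufs=8  0 2
```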
C. J. Keist Email: cjay@xxxxxxxxxxxxxxxxxx
UNIX/Network Manager Phone: 970-491-0630
Engineering Network Services Fax: 970-491-2465
College of Engineering, CSU
Ft. Collins, CO 80523-1301