>>> On Wed, 19 Jul 2006 09:11:10 -0400, Ming Zhang
>>> <mingz@xxxxxxxxxxx> said:
[ ... ]
mingz> what kind of "ram vs fs" size ratio here will be a
mingz> safe/good/proper one? any rule of thumb? thanks! hope
mingz> not 1:1. :)
This is driven mostly by the space required by check/repair
(which can well be above 4GiB, so 64 bit systems are often
necessary):
«e.g. it took 1.5GiB RAM for 32bit xfs_check and 2.7GiB RAM
for a 64bit xfs_check on a 1.1TiB filesystem with 3million
inodes in it.»
This suggests that a 10TB filesystem might need about 15
gigabytes of RAM (or swap, with a corresponding slowdown),
which is after all less than 0.2% of its size.
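The extrapolation above can be sketched as a back-of-envelope calculation; a minimal sketch, assuming the quoted 1.5GiB figure for a 1.1TiB filesystem scales roughly linearly with size (real usage depends heavily on inode count as well, so this is only a rough rule of thumb, not a guarantee):

```python
# Rough linear extrapolation of check/repair RAM needs from the
# datapoint quoted above: 1.5GiB RAM for a 1.1TiB filesystem
# (32-bit xfs_check, 3 million inodes). Illustrative only.
MEASURED_FS_TIB = 1.1
MEASURED_RAM_GIB = 1.5

def estimated_check_ram_gib(fs_tib):
    """Scale the measured RAM/size ratio linearly to fs_tib."""
    return MEASURED_RAM_GIB * (fs_tib / MEASURED_FS_TIB)

# ~13.6 GiB for a 10TiB filesystem, in line with the "about 15
# gigabytes" figure in the text.
ram_needed = estimated_check_ram_gib(10.0)

# RAM as a fraction of filesystem size: ~0.13%, i.e. under 0.2%.
ratio = MEASURED_RAM_GIB / (MEASURED_FS_TIB * 1024)
```

The same ratio applied to any filesystem size gives a quick sanity check before attempting a repair on a big volume.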
Anyhow, a system with lots of RAM to speedily check/repair an
XFS filesystem also benefits from the same RAM for caching and
delayed writing, so it is all to the good (as long as one has a
perfectly reliable block IO subsystem).
Note that the 15 gigabytes in the example above are well above
what a 32 bit process can address, thus for multi-terabyte
filesystems one should really have a 64 bit system (from the
same article mentioned above):
«> > Your filesystem (8TiB) may simply be too large for your
> > system to be able to repair. Try mounting it on a 64bit
> > system with more RAM in it and repairing it from there.
> Sorry, but is this a joke?
A joke? Absolutely not.
Achievable XFS filesystem sizes outgrew the capability of 32
bit Irix systems to repair them several years ago. Now that
Linux supports larger than 2TiB filesystems on 32 bit
systems, this is true for Linux as well.»
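The address-space point above is the crux: no amount of installed RAM or swap helps a 32 bit repair process past its addressing ceiling. A minimal sketch of that check, assuming the usual 4GiB hard limit for a 32 bit process (in practice user space is often capped lower still, around 3GiB on Linux):

```python
# A 32-bit process can address at most 4GiB, regardless of how
# much physical RAM or swap the machine has. If the estimated
# working set of check/repair exceeds that, only a 64-bit build
# on a 64-bit system can do the job.
ADDRESS_LIMIT_32BIT_GIB = 4.0  # hard ceiling for a 32-bit process

def needs_64bit(estimated_ram_gib):
    """True if the repair working set cannot fit in 32-bit address space."""
    return estimated_ram_gib > ADDRESS_LIMIT_32BIT_GIB

needs_64bit(15.0)  # True: the ~15GiB example above cannot fit
needs_64bit(2.7)   # False: the 1.1TiB example's 2.7GiB still fit
```

This is why the advice quoted above is to move the disks to a 64 bit system rather than to add more swap.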