I guess I'm just starting to get back into this because I'm
starting to run into real system integration projects where
data volumes are exceeding 4+TB. I know this is not a staple
market for Red Hat, which mostly caters to grid computing
clusters, web servers, and possibly Oracle databases using
"raw" slices (instead of filesystems); but in my book they are
still the "flagship" distribution when it comes to file
services with NFS (or NFS+SMB).
I'm not looking to deal with x86 issues. The
configurations I'm talking about are 4+TB disk arrays on
systems with 4+GiB mem (pretty much 1GiB per DDR channel is
commodity, and that's a minimum of 4GiB for 4 DDR channels on
Opteron 2xx). I'm also not doing software RAID, but using
3Ware and LSI Logic hardware**. From all I've read, x86-64 in
"Long Mode" (48-bit virtual, 52-bit physical addressing)
defaults to 4K pages, although I do note 2M large pages as
well (the 4M pages belong to legacy 32-bit PAE operation).
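To sanity-check those page sizes on a given box, a quick sketch
(Linux-specific; the exact Hugepagesize value varies by kernel and
architecture, 2048 kB being typical on x86-64):

```shell
# Base page size as the C library reports it -- typically 4096 on x86-64.
getconf PAGE_SIZE

# Large ("huge") page size the kernel supports -- typically 2048 kB on x86-64.
grep Hugepagesize /proc/meminfo
```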
Again, I'm just getting back into XFS as a solution; my last
serious set of runs was in the XFS 1.2 / Red Hat Linux 7.x
time-frame. Fedora Core 3 has been a good test, but I'm
always leery of deploying anything that is not SGI blessed --
not even stock kernels -- and I believe Fedora Core 3 offers
little in an XFS implementation beyond the stock kernel. XFS
is the perfect complement to Ext3 for 100+GB volumes,
especially 1+TB volumes.
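For reference, a minimal sketch of putting XFS on a hardware-RAID
volume of that class. The device name (/dev/sdb1), mount point, and
stripe geometry here are all hypothetical -- su/sw must be matched to
the controller's actual stripe unit and data-disk count:

```shell
# Hypothetical: 8-disk RAID-5 set (7 data disks) with a 64k stripe unit.
# Align XFS allocation to the hardware stripe so the controller sees
# full-stripe writes.
mkfs.xfs -d su=64k,sw=7 -l size=64m /dev/sdb1

# Mount with extra in-core log buffers for metadata-heavy NFS workloads.
mount -t xfs -o logbufs=8 /dev/sdb1 /export/data
```

These are configuration commands against a raw device, so treat them as
a sketch to adapt, not something to paste onto a live system.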
Red Hat can choose to ignore us system integrators and lose a
lot of business. In fact, I'm really getting to the point
I'm half-way serious about getting some investors to build a
new enterprise distribution and offer Service Level
Agreements (SLAs). The distribution would always be based on
a fork of the 2nd or 3rd Fedora Core release, as I typically
do agree with Red Hat's design decisions at the core -- but
not the end-focus of Red Hat Enterprise Linux. I cannot
trust Red Hat's focus as of late, and I 100% agree with Sun
when they say that Red Hat is not addressing the filesystem
aspect (among other things).
Otherwise, if I had to implement a commodity 2 or 4-way
Opteron 2xx/8xx solution today, I would go Solaris 10.
**SIDE NOTE: I rather tire of people who make claims about
how "bad" hardware RAID is and how "good" software RAID is by
using "not even current 5 years ago" i960-based hardware RAID
solutions as examples, along with the farce of "universal
compatibility", which LVM/MD has _not_ delivered over time --
especially with volumes on LSI Logic, let alone 3Ware,
solutions. There are those of us with serious, multiple
8+ disk volumes who have been using 3Ware and LSI Logic
solutions for at least 5 years with hardware mirroring/XORs.
We don't have the "race conditions" between software
volume/filesystem layers, and we don't overload our system
interconnect with needless data replication.
Even Intel is now starting to put its XScale I/O Processors
(IOPs) on server chipsets/mainboards as _standard_ for
off-loading network/storage (RAID, network, iSCSI HBA,
etc...), because an IOP at 500+MHz off-loads that work better
than the traditional CPU/memory interconnect can. I.e., there
is a reason we don't use PCs as network switches and routers,
and today's serial ATA (SATA) and serial attached SCSI (SAS)
storage controllers are basically "storage switches" in
exactly the same regard, overhead/performance-wise.
Bryan J. Smith
mailto:b.j.smith@xxxxxxxx
http://thebs413.blogspot.com/
(Sent from Yahoo Mail -- please excuse any missing headers)