
Re: creating a new 80 TB XFS

To: xfs@xxxxxxxxxxx
Subject: Re: creating a new 80 TB XFS
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Sat, 25 Feb 2012 20:57:05 -0600
In-reply-to: <20297.22833.759182.360340@xxxxxxxxxxxxxxxxxx>
References: <4F478818.4050803@xxxxxxxxxxxxxxxxx> <20120224150805.243e4906@xxxxxxxxxxxxxxxxxxxx> <4F47B020.4000202@xxxxxxxxxxxxxxxxx> <20297.22833.759182.360340@xxxxxxxxxxxxxxxxxx>
Reply-to: stan@xxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:10.0.2) Gecko/20120216 Thunderbird/10.0.2
On 2/25/2012 3:57 PM, Peter Grandi wrote:

>> There are always failures. But again, this is a backup system.
> 
> Sure, but the last thing you want is for your backup system to
> fail.

Putting an exclamation point on Peter's wisdom requires nothing more
than browsing the list archive:

Subject: xfs_repair of critical volume
Date: Sun, 31 Oct 2010 00:54:13 -0700
To: xfs@xxxxxxxxxxx

I have a large XFS filesystem (60 TB) that is composed of 5 hardware
RAID 6 volumes. One of those volumes had several drives fail in a very
short time and we lost that volume. However, four of the volumes seem
OK. We are in a worse state because our backup unit failed a week later
when four drives simultaneously went offline. So we are in a very bad state.
[...]


This saga is available in these two XFS list threads:
http://oss.sgi.com/archives/xfs/2010-07/msg00077.html
http://oss.sgi.com/archives/xfs/2010-10/msg00373.html

Lessons:
1.  Don't use cheap hardware for a backup server
2.  Make sure your backup system is reliable
    Do test restore operations regularly


I suggest you get the dual active/active controller configuration and
use two PCIe SAS HBAs, one connected to each controller, with SCSI
multipath.  This prevents a failed HBA from leaving you dead in the
water until a replacement arrives.  How long does it take, and at what
cost to operations, if your single HBA fails during a critical restore?
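As a rough sketch, a Linux dm-multipath setup over two HBA paths might look
like the following (package names and device output are assumptions, not from
any specific distro in this thread):

```shell
# Install the multipath tools (Debian/Ubuntu package name assumed)
apt-get install multipath-tools

# Minimal /etc/multipath.conf -- group all paths to a LUN into one
# load-balanced path group so either HBA can carry the I/O
cat > /etc/multipath.conf <<'EOF'
defaults {
    user_friendly_names yes
    path_grouping_policy multibus
}
EOF

systemctl restart multipathd

# Verify that each LUN shows two active paths, one per HBA
multipath -ll
```

With both paths active, losing one HBA degrades throughput but does not stop
a restore in progress.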

Get the battery backed cache option.  Verify the controllers disable the
drive write caches.
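A hedged way to verify the drive write-cache state from the host (the device
name is a placeholder, and whether the drives are visible to the OS depends on
the controller):

```shell
# SAS drives: query the Write Cache Enable (WCE) bit in the caching mode page
sdparm --get=WCE /dev/sdb

# SATA drives: hdparm reports the write-caching setting
hdparm -W /dev/sdb

# If the controller does not disable the drive cache itself, clear WCE
# persistently on the drive
sdparm --clear=WCE --save /dev/sdb
```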

Others have recommended stitching 2 small arrays together with mdadm and
using a single XFS on the volume instead of one big array and one XFS.
I suggest using two XFS, one on each small array.  This ensures you can
still access some of your backups in the event of a problem with one
array or one filesystem.
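A sketch of the two-array, two-filesystem layout (device names, drive counts,
and mount points below are placeholders for illustration):

```shell
# Two independent RAID 6 arrays, each with its own filesystem
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
mdadm --create /dev/md1 --level=6 --raid-devices=8 /dev/sd[j-q]

# mkfs.xfs reads the md geometry and sets stripe unit/width automatically
mkfs.xfs /dev/md0
mkfs.xfs /dev/md1

mount /dev/md0 /backup1
mount /dev/md1 /backup2
```

If /dev/md0 or its filesystem has a problem, everything under /backup2 remains
accessible.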

As others mentioned, an xfs_[check|repair] can take many hours or even
days on a multi-terabyte filesystem with lots of metadata.  If you need
to do a restore during that period you're out of luck.  With two
filesystems, and critical images/files duplicated on each, you're still
in business.

-- 
Stan
