

To: Michael Sinz <msinz@xxxxxxxxx>
Subject: Re: LBA to File?
From: Ingo Juergensmann <ij@xxxxxxxxxxxxxxxx>
Date: Fri, 7 Mar 2003 16:17:32 +0100
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <3E68A99D.4060301@xxxxxxxxx>
References: <Pine.LNX.4.44.0303071150220.30312-100000@xxxxxxxxxxxxxxxxxxxxxxxxxxx> <3E68A99D.4060301@xxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.5.3i
On Fri, Mar 07, 2003 at 09:15:57AM -0500, Michael Sinz wrote:

> >No, not only for them. The frequent access exists for all drives and 
> >will destroy a high-end SCSI disk just as well, but maybe not as fast. 
> I would think that all of this "worry" is not a major issue.  One
> of the reasons you may find more "remapped" blocks in the highly used
> areas is that they drive had more chance to notice a questionable
> sector.  (That is a sector where error correction was needed to
> recover the data beyond the norm and thus it chose to relocate the
> sector)

Hmm, well, I have some disks with large grown defect tables and others
without any grown defects at all, regardless of how old the disks are.
But I have also had more IDE disks fail than SCSI disks. Only one SCA disk
has failed in an array in the last few days, while some IDE disks died after
just a few days (yes, IBM ones ;). The failed SCA disk, by contrast, was an
old 2 GB Seagate Barracuda, just to give an idea of how old that disk was...
 
> If you are worried that such usage patterns can be a problem, just look
> at filesystems such as FAT/FAT32/VFAT/etc.  They always must write to
> the front of the disk (fat table) and read from that same area (except
> when cached).  There is no noticeably higher failure rate of drives due
> to FAT file system usage (don't count Windows re-install and file
> corruption as disk failures :-) 
> NTFS also has the central allocation bitmap and control structures.

Most users will re-install Windows on a broken disk anyway, until they
realise that it is actually the disk that is broken and not Windows... ;)

> The main problem with the "low end" IDE drives is that they are very
> fragile bits of mechanical equipment.  Very small head to disk spacing
> and very high rotational rates and very high bit density means that
> failures will happen due to mechanical reasons.  As you cut your costs
> and try to pump out millions of these units, the MTBF will tend to
> drop until you get some new technique to make them more reliable.
> Then they push the limits and make the drives double their bit
> density and halve the head flying height, and the cycle starts over.

Trial and error, or "public beta testing"...
There is no time anymore to test the hardware and software in-house; instead,
many vendors use the crowd out there as public beta testers... *sigh*
IBM disks are a good example of these QA problems.
 
> Given the track record of XFS and other file systems and usage
> patterns that cause certain areas of disks to be used much more
> often than others, I have not seen a major failure mode in many
> years.  (There was the lubrication failure mode from many
> years ago, but that has long since been solved)
> 
> That is not to say that we should not look at the fact that there
> are hot zones in where writing happens.  The fact that there are
> hot zones (multiple) means that there is more seeking going on than
> you might want (albeit it may be needed).  (And each seek does
> cause some mechanical wear and poses potential mechanical failure)

For me as a user it's something like this:
I don't care much about the filesystem internals as long as I can get my data
back in some way. I know that every filesystem can and certainly will fail
at some point.
I have made this experience with Amiga OFS and FFS, with Amiga AFS (now SFS),
FAT, NTFS, ext2/ext3 and XFS (both Irix and x86).
As you may know, FFS accesses data in a way that makes it easy to regain data
after a disk error (disk validation error): quick-format the disk and let some
undelete tools do their job (such as DiskSalv from Dave H.).
Back then I also tried AFS (Advanced FileSystem), which worked fine for some
time, until it failed. The bundled AFS salvage tools failed to validate the
disk and its 400 MB of data (back in 1995 that was a big block of data ;).

My experience with FAT and NTFS is not as deep as with other operating
systems or file systems, but it's similar: as long as there are tools that
do their job well, you don't worry much about your data - as long as the
operating system/file system doesn't try to be too clever and do stuff it
had better not do.

Ext2/3 runs fsck regularly, but when fsck stumbles over an error it can't
solve on its own, you're doomed as an average user.

XFS, in contrast, is somewhat different. Not that I haven't had disks fail
under XFS - indeed I had the most disk failures (IBMs, that is ;) under
XFS, but I was always able to recover my data (with the help of esandeen and
others from #xfs). The reason is simple (for me): I can make a disk image of
that filesystem and try to salvage the data from that "backup"... if dd
stumbles across a hardware error, I can skip that block, and XFS is robust
enough to give me a clean file system despite the missing block. And because
XFS is on-disk compatible across platforms, I can use the XFS tools on my SGI
to rule out bugs in the Linux XFS tools.
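
Just to illustrate the "skip the bad block" imaging step, here is a rough
Python sketch (mine, not what I actually run - plain dd with conv=noerror,sync
and an xfs_repair pass on the resulting image do the same job; the paths, the
64 KiB block size and the function name are just placeholders):

#!/usr/bin/env python3
# Rough sketch: image a device block by block, zero-filling any block that
# cannot be read, so the XFS repair tools can later work on a complete image.
import os
import sys

BLOCK_SIZE = 64 * 1024  # arbitrary chunk size for this sketch

def image_with_skips(src_path, dst_path, block_size=BLOCK_SIZE):
    """Copy src_path to dst_path, replacing unreadable blocks with zeros."""
    bad = 0
    with open(src_path, "rb", buffering=0) as src, open(dst_path, "wb") as dst:
        total = src.seek(0, os.SEEK_END)   # size of the device/file in bytes
        offset = 0
        while offset < total:
            want = min(block_size, total - offset)
            try:
                src.seek(offset)
                chunk = src.read(want)
            except OSError:
                chunk = b"\0" * want       # skip the bad block, keep the layout
                bad += 1
            if not chunk:                  # unexpected end of data
                break
            dst.write(chunk)
            offset += len(chunk)
    return bad

if __name__ == "__main__":
    # e.g.: python3 image_with_skips.py /dev/sdb1 /space/sdb1.img
    skipped = image_with_skips(sys.argv[1], sys.argv[2])
    print("done, %d unreadable block(s) zero-filled" % skipped, file=sys.stderr)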

To make it short:
A good file system is not the fastest one or the newest one, but the one
that gives me my data back after an error has happened - and errors will
happen for sure, whether FS errors or hardware errors on the disk. So the
FS tools are somewhat more important than the FS itself.

For me, XFS and Amiga FFS are the most robust file systems I have come
across over the years... :-)

-- 
Ciao...              // 
      Ingo         \X/

