I ran an FTP server on a Pentium II 333 MHz with 256 MB RAM, using
4 x 120 GB IDE drives in a RAID 5 array on an Adaptec 2400 hardware
RAID controller. There is a 4 GB root partition and a roughly 320 GB
data partition.
One of the drives failed and the machine crashed.
We replaced the drive and rebuilt the array.
I booted up with a CD that I created a while ago with
2.4.19-pre9-20020604 and mounted an
NFS root partition with all the XFS tools on it.
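For anyone trying to reproduce the setup: an NFS root is selected with kernel boot parameters along these lines (the server address and export path below are placeholders, not the real ones):

```shell
# Boot parameters for an NFS root (syntax per the kernel's nfsroot docs);
# server IP and export path are assumptions -- substitute your own:
root=/dev/nfs nfsroot=192.168.0.1:/export/xfsroot ip=dhcp
```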
We ran xfs_repair (version 2.2.1) on the root partition of the RAID
array.
A lot of the files have the dreaded zero problem, but apart from that it
is mountable and usable.
We ran xfs_repair on the 320 GB data partition.
After about 15 minutes xfs_repair died, with 'Terminated' printed on the
console and the kernel logging:
Out of Memory: Killed process 269 (xfs_repair).
I recreated the swap partition and activated it.
Ran xfs_repair again.
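For the record, the extra virtual memory can come from the re-initialised swap partition or from a temporary swap file; a minimal sketch (the file path, the 512 MB size, and the device names are assumptions, not what I actually used):

```shell
# Sketch: add ~512 MB of swap so xfs_repair is not killed by the OOM killer.
# /tmp/repair.swap and the size are arbitrary; a dedicated swap partition
# works the same way (mkswap /dev/<partition> && swapon /dev/<partition>).
dd if=/dev/zero of=/tmp/repair.swap bs=1M count=512
chmod 600 /tmp/repair.swap       # swap files should not be world-readable
mkswap /tmp/repair.swap          # write the swap signature
# swapon /tmp/repair.swap        # needs root: activate the extra swap
# xfs_repair /dev/<device>       # then re-run the repair on the partition
```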
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- ensuring existence of lost+found directory
- traversing filesystem starting at / ...
- traversal finished ...
- traversing all unattached subtrees ...
fatal error -- can't read block 0 for directory inode 2097749
When you mount the filesystem, it is empty (except for lost+found, which
is also empty).
The output of xfs_repair is large, about 300 KB bzip2'ed, so it would be
best if interested parties download it.
Have I lost the 320 GB partition, or does someone still have a trick up
their sleeve?
Would it be possible to make xfs_repair use a lot less memory?
My guess is that the filesystem got its final blow from xfs_repair.
Any suggestions are welcome.