On Tue, 2002-10-29 at 16:30, yoros@xxxxxxxxxx wrote:
> On Tue, Oct 29, 2002 at 11:57:35AM -0800, Chris Wedgwood wrote:
> > On Mon, Oct 28, 2002 at 11:51:13PM +0100, yoros@xxxxxxxxxx wrote:
> > > Yes, I know that the files I'm deleting have a lot of extents but I
> > > also know that other filesystems are faster at deleting files.
> > It really depends. For large files, XFS is *much* faster than
> > anything else presently available for Linux.
> When a file is very fragmented... ext2 only has to remove a few blocks
> (inode, single-indirect, double-indirect, etc...). This is the historic
> Tanenbaum standard, and a lot of UNIX filesystems implemented these
> methods in the past.
For every block in the file, ext2 needs to free that block and mark it
free in the bitmaps. The same is true for XFS, except that it frees each
extent rather than each block.
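A rough sketch of the difference (hypothetical simulation, not real ext2
or XFS code; the bitmap, block counts, and operation counting are all
assumptions for illustration):

```python
# Hypothetical sketch: freeing a file's space means clearing bits in an
# allocation bitmap. A block-based filesystem (ext2-style) does one
# update per data block; an extent-based one (XFS-style) does one update
# per contiguous (start, length) run.

def free_blocks(bitmap, blocks):
    """Block-based: one bitmap update per block number."""
    ops = 0
    for b in blocks:
        bitmap[b] = 0
        ops += 1
    return ops

def free_extents(bitmap, extents):
    """Extent-based: one bitmap update per (start, length) extent."""
    ops = 0
    for start, length in extents:
        bitmap[start:start + length] = [0] * length
        ops += 1
    return ops

SIZE = 4096  # blocks in our toy file

bitmap_a = [1] * SIZE
ops_blocks = free_blocks(bitmap_a, range(SIZE))       # 4096 operations

bitmap_b = [1] * SIZE
ops_extents = free_extents(bitmap_b, [(0, SIZE)])     # 1 operation

print(ops_blocks, ops_extents)
```

Note that this is exactly why fragmentation hurts: a badly fragmented
file degenerates into many tiny extents, and the extent-based free
approaches one operation per block anyway.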
One of the issues with a journalled filesystem is that we need to keep
the filesystem consistent between transactions. The amount of work in
removing a file is unbounded, but a transaction needs to have a bounded
size (don't ask, it gets really complicated). What this means is that
removing a file takes multiple transactions, and those end up causing
disk I/O.
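The cost structure can be sketched like this (a hypothetical model, not
XFS's actual transaction accounting; the per-transaction bound of 16
extents is an invented number):

```python
# Hypothetical sketch: each transaction may only log a bounded amount of
# metadata, so unlinking a file with N extents is chopped into
# ceil(N / PER_TXN) transactions, each of which must be committed to the
# on-disk log -- that commit is the extra I/O you pay for journalling.

PER_TXN = 16  # assumed bound on extents freed per transaction

def unlink_transactions(extent_count):
    """Return how many transactions (log writes) the unlink needs."""
    txns = 0
    remaining = extent_count
    while remaining > 0:
        batch = min(PER_TXN, remaining)
        # ...log the frees for this batch, then commit: one log write...
        remaining -= batch
        txns += 1
    return txns

print(unlink_transactions(3))     # small file: a single transaction
print(unlink_transactions(1000))  # fragmented file: many log writes
```

So a contiguous file is removed in one cheap transaction, while a file
fragmented into a thousand extents pays for dozens of log commits.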
In the case of ext2, if you crash between the remove and all the
metadata getting flushed out to disk, you need to run fsck; on a
journalled filesystem you do not. You are seeing one of the costs
of a journalled filesystem.
Now we wait for Stephen Tweedie to jump in with some brainwave
about how to make it cheap.