
Re: file system defragmentation

To: Cosmo Nova <cs_mcc98@xxxxxxxxxxx>
Subject: Re: file system defragmentation
From: Chris Wedgwood <cw@xxxxxxxx>
Date: Mon, 17 Jul 2006 12:08:27 -0700
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <5356806.post@xxxxxxxxxxxxxxx>
References: <4f52331f050826001612f8e323@xxxxxxxxxxxxxx> <20050826101131.GA24544@xxxxxxxxx> <4f52331f0508260848782f240a@xxxxxxxxxxxxxx> <43128F82.4010004@xxxxxxxxx> <4312913F.6040205@xxxxxxxxxxxxxxx> <43311567.3060208@xxxxxxxxx> <5356806.post@xxxxxxxxxxxxxxx>
Sender: xfs-bounce@xxxxxxxxxxx
On Mon, Jul 17, 2006 at 12:36:09AM -0700, Cosmo Nova wrote:

> If I have a DVR system of 16 channels. They keep writing data to the
> disk in pieces of video files.

I did some work for someone who does a similar thing (they write 96
channels in parallel and have to be able to read back up to 32 of
them at the same time, or something like that).

By default, concurrent writes into the same directory will cause the
files to become badly interleaved, and trying to get one file per AG
doesn't work so well if agcount < n-files.

I ended up getting them to change their code to preallocate, and
AFAIK it works very well now (you can also throw away any space you
over-allocated when closing the file).
