On Sat, 12 Feb 2005, Andi Kleen wrote:
> David J N Begley <d.begley@xxxxxxxxxx> writes:
> > Are you talking here about implementing an "ordered journal" (similar to
> > ext3) where data is written before metadata updates,
> Ordered data just guarantees that there is never a window where
> the machine crashes that you can see "raw" disk blocks after recovery.
> "Raw" means blocks that are not under control of the file system
> and can be arbitrarily old data. This can be a theoretical security hole
> (although in practice you usually only see some garbage).
Agreed, "raw" data in the above sense is most definitely a security hole and
something to be avoided.
> As far as I know XFS guarantees this already, so it supports "ordered
> data" in the JBD sense.
This, I believe, is where we get to the heart of people's confusion/concern
regarding XFS: the method used to ensure "raw" data is avoided.
If an "ordered journal" (as above, in the ext3 sense) ensures data is written
before its associated metadata, then any interruption will leave you in one of
two states:

- either the metadata has not been updated, in which case it points to the
  old file data (as opposed to "random" raw data from any file); or,

- the metadata has been updated, and points to the correct new file data
  (because the data was flushed to disk before the metadata).
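Put concretely, that two-state guarantee can be shown with a toy userspace
simulation (purely illustrative; this is not actual ext3/JBD or XFS code, and
the block names are made up). It walks every possible crash point of a file
rewrite under both orderings:

```python
# Hypothetical simulation of why writing data before metadata prevents a
# crash from exposing stale "raw" blocks. Not kernel code.

GARBAGE = "stale-blocks-from-deleted-file"  # whatever was on disk before

def write_file(disk, ordered):
    """Yield the disk state after each individual block write, i.e. simulate
    a crash at every intermediate point. 'ordered' = data before metadata."""
    if ordered:
        steps = [("data", "new-data"), ("meta", "points-to-new")]
    else:
        steps = [("meta", "points-to-new"), ("data", "new-data")]
    for key, value in steps:
        disk = dict(disk, **{key: value})
        yield disk

def visible_data(disk):
    # What a reader sees after recovery: follow the metadata pointer.
    return disk["data"] if disk["meta"] == "points-to-new" else "old-data"

initial = {"meta": "points-to-old", "data": GARBAGE}

# Ordered: every crash point shows either the old or the new file data.
for state in write_file(initial, ordered=True):
    assert visible_data(state) in ("old-data", "new-data")

# Unordered: there is a crash window where metadata points at garbage.
exposures = [visible_data(s) for s in write_file(initial, ordered=False)]
assert GARBAGE in exposures
```

The unordered case is exactly the "raw blocks" window Andi describes: metadata
lands first, so a crash in between leaves it pointing at whatever old blocks
happened to be allocated.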
My understanding was that XFS did not impose this ordering (i.e., data first,
metadata second) and that was why writing blocks of zeros was so important for
stopping potential security/information leaks; if this ordering were imposed,
then the extra write-zero step might not be so important. Have I misunderstood?