
Re: [Evms-devel] question on some evms features

To: Kwon SoonSon <ksoonson@xxxxxxxxxxx>, evms-devel@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: [Evms-devel] question on some evms features
From: Kevin Corry <kevcorry@xxxxxxxxxx>
Date: Wed, 30 Apr 2003 10:25:31 -0500
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <20030430012539.89870.qmail@xxxxxxxxxxxxxxxxxxxxxxxxx>
References: <20030430012539.89870.qmail@xxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: KMail/1.5
On Tuesday 29 April 2003 20:25, Kwon SoonSon wrote:
> >
> > There has been a report of a problem with running
> > XFS (in 2.4 kernels) on
> > RAID-1 and/or RAID-5 devices that have been
> > converted into EVMS volumes. The
> > problem stems from XFS not following some implicit
> > block-I/O restrictions.
> > Work-arounds have been added to XFS to recognize MD,
> > LVM1, and EVMS 1.2, but
> > it does not yet recognize Device-Mapper.
> >
> > If you aren't using RAID-1 or RAID-5 (or possibly
> > striped LVM volumes), you
> > shouldn't have any problems.
>
> Cc'ing xfs-devel just in case....
>
> Then, does this mean that XFS cannot be used with
> RAID-1 or RAID-5, whether it is software RAID or
> hardware RAID?
>
> Not being able to recognize device mapper seems to be
> a big problem as of now because that sounds like
> it does not recognize LVM2 which works with device
> mapper.
>
> If I am wrong, please let me know....

[My previous email should have used RAID-0 as an example instead of RAID-1, as 
I'll explain below. Sorry for any possible confusion. Also - XFS guys - if I 
mangle this explanation, feel free to slap me around.]

As I understand the issue, XFS may do unaligned I/O when mounting the 
filesystem. This works fine on devices that are a single, linear, physically 
contiguous range of sectors. However, on striped devices (RAID-0, RAID-5, and 
some EVMS/LVM/Device-Mapper devices), the device being mounted is not 
physically contiguous on disk. These block device drivers are not expecting 
unaligned I/Os, so if an I/O happens to cross some internal boundary 
(i.e. the I/O spans two stripe chunks), the data is most likely going to get 
mangled.

Thus, XFS (for 2.4) currently checks the major number of the device being 
mounted to see if it is an MD, LVM1, or EVMS-1.2 volume. In the version of 
XFS that I currently have (snapshot from early April, I believe), this check 
is performed in fs/xfs/linux/xfs_super.c::xfs_alloc_buftarg(). If one of 
these devices is detected, a flag is set indicating that all I/O should be 
aligned. So a similar check needs to be added to detect Device-Mapper 
devices.
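The shape of that check might look something like the sketch below. The MD 
and LVM1 majors (9 and 58) are the standard assignments from the kernel's 
Documentation/devices.txt; I'm assuming 117 for EVMS 1.2, and the function 
name is hypothetical, not the actual XFS code:

```c
/* Standard Linux block major numbers (see Documentation/devices.txt). */
#define MD_MAJOR    9    /* MD software RAID                */
#define LVM1_MAJOR  58   /* LVM1 logical volumes            */
#define EVMS_MAJOR  117  /* EVMS 1.2 volumes (assumed here) */

/* Hypothetical sketch of the work-around: decide from the major number
 * of the device being mounted whether all I/O must be aligned. */
static int needs_aligned_io(int major)
{
    switch (major) {
    case MD_MAJOR:
    case LVM1_MAJOR:
    case EVMS_MAJOR:
        return 1;   /* the driver may remap across stripe chunks */
    default:
        return 0;   /* assume a plain, physically contiguous device */
    }
}
```

A static table like this is exactly what breaks down for Device-Mapper, as 
described next.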

The tricky part about this check is that Device-Mapper doesn't have a 
statically defined major number (the way MD, LVM1, and EVMS-1.2 have). It 
simply asks the block-layer for a major number when the driver is loaded. And 
what will make it even trickier is that Device-Mapper will eventually support 
multiple major numbers. So XFS will somehow need to probe Device-Mapper for 
the current list of registered major numbers, and compare against the major 
number of the device being mounted.
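One way this could work is for Device-Mapper to keep a list of the majors it 
has been handed and export a query function. To be clear, none of these names 
exist in the real dm driver; dynamic majors are normally obtained via 
register_blkdev(). This is only a sketch of the kind of interface XFS would 
need:

```c
/* Hypothetical interface Device-Mapper could export so a filesystem
 * can ask "does this major belong to you?". Entirely made up. */
#define DM_MAX_MAJORS 16

static int dm_majors[DM_MAX_MAJORS];
static int dm_major_count;

/* Called by dm each time the block layer assigns it a dynamic major. */
static void dm_register_major(int major)
{
    if (dm_major_count < DM_MAX_MAJORS)
        dm_majors[dm_major_count++] = major;
}

/* What XFS would call at mount time with the major of the device. */
static int dm_is_dm_major(int major)
{
    int i;
    for (i = 0; i < dm_major_count; i++)
        if (dm_majors[i] == major)
            return 1;
    return 0;
}
```

XFS could then OR the result of dm_is_dm_major() into the same alignment 
check it already does for the statically numbered drivers.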

So that's the gist of the issue. And none of this is a problem in 2.5, since 
the block layer is now expected to handle large I/O requests at any offset.

-- 
Kevin Corry
kevcorry@xxxxxxxxxx
http://evms.sourceforge.net/

