To: Alex Elder <aelder@xxxxxxx>
Subject: Re: [PATCH 20/34] xfs: remove all the inodes on a buffer from the AIL in bulk
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Wed, 22 Dec 2010 14:49:02 +1100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <1292984446.2408.358.camel@doink>
References: <1292916570-25015-1-git-send-email-david@xxxxxxxxxxxxx> <1292916570-25015-21-git-send-email-david@xxxxxxxxxxxxx> <1292984446.2408.358.camel@doink>
User-agent: Mutt/1.5.20 (2009-06-14)
On Tue, Dec 21, 2010 at 08:20:46PM -0600, Alex Elder wrote:
> On Tue, 2010-12-21 at 18:29 +1100, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > 
> > When inode buffer IO completes, usually all of the inodes are removed from
> > the AIL. This involves processing them one at a time and taking the AIL
> > lock once for every inode. When all CPUs are processing inode IO
> > completions, this causes excessive amounts of contention on the AIL lock.
> > 
> > Instead, change the way we process inode IO completion in the buffer
> > IO done callback. Allow the inode IO done callback to walk the list
> > of IO done callbacks and pull all the inodes off the buffer in one
> > go and then process them as a batch.
> > 
> > Once all the inodes for removal are collected, take the AIL lock
> > once and do a bulk removal operation to minimise traffic on the AIL
> > lock.
> > 
> > Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> > Reviewed-by: Christoph Hellwig <hch@xxxxxx>
> 
> One question, below.          -Alex
> 
> . . .
> 
> > @@ -861,28 +910,37 @@ xfs_iflush_done(
> >      * the lock since it's cheaper, and then we recheck while
> >      * holding the lock before removing the inode from the AIL.
> >      */
> > -   if (iip->ili_logged && lip->li_lsn == iip->ili_flush_lsn) {
> > +   if (need_ail) {
> > +           struct xfs_log_item *log_items[need_ail];
> 
> What's the worst-case value of need_ail we might see here?

The number of inodes in a cluster. That's 32 for 256 byte inodes
with the current 8k cluster size.
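
For illustration, here is a minimal user-space sketch of the pattern the
patch describes: collect the flushed items into a local array without the
lock, then take the AIL lock once and remove them in bulk. All of the
names, types, and helpers below (ail_lock, ail_delete_bulk, log_item) are
simplified stand-ins, not the real xfs_iflush_done()/xfs_trans_ail_delete_bulk()
code.

/*
 * Simplified model of bulk AIL removal: gather candidates unlocked,
 * then take the lock once for the whole batch. Illustrative only.
 */
#include <pthread.h>
#include <stdio.h>

#define MAX_CLUSTER_INODES 32	/* worst case: 8k cluster / 256-byte inodes */

struct log_item {
	int lsn;	/* LSN the item currently sits at in the AIL */
	int flush_lsn;	/* LSN recorded when the flush was issued */
	int in_ail;	/* non-zero while the item is in the AIL */
};

static pthread_mutex_t ail_lock = PTHREAD_MUTEX_INITIALIZER;

/* Remove a batch of items from the AIL under a single lock round trip. */
static void
ail_delete_bulk(struct log_item **items, int count)
{
	pthread_mutex_lock(&ail_lock);
	for (int i = 0; i < count; i++) {
		/* Recheck under the lock, as the commit message describes. */
		if (items[i]->in_ail && items[i]->lsn == items[i]->flush_lsn)
			items[i]->in_ail = 0;
	}
	pthread_mutex_unlock(&ail_lock);
}

/* Buffer IO completion: walk the flushed inodes, batch up the AIL work. */
static void
iflush_done(struct log_item *flushed[], int nr_flushed)
{
	struct log_item	*need_removal[MAX_CLUSTER_INODES];
	int		need_ail = 0;

	/* Unlocked pre-check: cheap filter before taking the AIL lock. */
	for (int i = 0; i < nr_flushed; i++) {
		if (flushed[i]->in_ail &&
		    flushed[i]->lsn == flushed[i]->flush_lsn)
			need_removal[need_ail++] = flushed[i];
	}

	if (need_ail)
		ail_delete_bulk(need_removal, need_ail);
}

int
main(void)
{
	struct log_item a = { .lsn = 100, .flush_lsn = 100, .in_ail = 1 };
	struct log_item b = { .lsn = 105, .flush_lsn = 100, .in_ail = 1 };
	struct log_item *flushed[] = { &a, &b };

	iflush_done(flushed, 2);
	printf("a in AIL: %d, b in AIL: %d\n", a.in_ail, b.in_ail);
	return 0;
}

The point of the pattern is that the lock is taken once per buffer rather
than once per inode, which is what matters when many CPUs are completing
inode buffer IO at the same time.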

Cheers,

Dave
-- 
Dave Chinner
david@xxxxxxxxxxxxx
