On Wed, Feb 03, 2010 at 03:56:48PM -0500, Christoph Hellwig wrote:
> On Wed, Feb 03, 2010 at 06:27:53AM -0500, Christoph Hellwig wrote:
> > Still not entirely happy with this one. My first concern is that
> > the barriers in fsync are still too heavy for the normal sync use
> > case. I'd be happier exporting the body of xfs_fsync without
> > the cache flushes (and with a better name than xfs_fsync) and using
> > that for write_inode.
> > That leaves open the NFSD case, though. I'd prefer to have that fixed
> > if possible. Ben, any chance you could send your patch to use fsync
> > to the nfs list ASAP? I think we'd be even better off to just force
> > -o wsync and disable ->write_inode entirely for NFS, any chance you
> > could test such a patch on your setup?
> Thinking about it, we usually do cause a log buffer write from ->fsync
> which means we submit the barrier anyway. That might be the reason
> why you're not seeing the performance hit in your testing. With that
> I'm okay with the patch as-is for now, we can micro-optimize it later.
OK. I was thinking that an "inode cluster fsync" function might be
appropriate here, i.e. a transaction that takes all the dirty inodes
in the inode cluster and logs/forces them all in one transaction.
That would substantially reduce the number of log writes needed,
I think. I'll look into this over the next few days.
Thanks for the reviews, Christoph.