To: Nathan Scott <nathans@xxxxxxx>
Subject: Re: Page cache write performance issue
From: Nick Piggin <piggin@xxxxxxxxxxxxxxx>
Date: Wed, 13 Oct 2004 18:15:31 +1000
Cc: Andrew Morton <akpm@xxxxxxxx>, linux-kernel@xxxxxxxxxxxxxxx, linux-mm@xxxxxxxxx, linux-xfs@xxxxxxxxxxx
References: <20041013054452.GB1618@frodo> <20041012231945.2aff9a00.akpm@xxxxxxxx> <20041013063955.GA2079@frodo> <20041013000206.680132ad.akpm@xxxxxxxx> <20041013172352.B4917536@xxxxxxxxxxxxxxxxxxxxxxxx>
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.2) Gecko/20040820 Debian/1.7.2-4
Nathan Scott wrote:
> On Wed, Oct 13, 2004 at 12:02:06AM -0700, Andrew Morton wrote:
>> Well something else is fishy: how can you possibly achieve only 4MB/sec?
>
> These are 1K writes too remember, so it feels a bit like we write 'em
> out one at a time, sync (though no O_SYNC, or fsync, or such involved
> here).  This is on an i686, so 4K pages, and using 4K filesystem
> blocksizes (both xfs and ext2).
That still shouldn't cause such a big slowdown. It seems like they might be
getting written out off the end of the page reclaim LRU (although in that
case it is a bit odd that raising the dirty thresholds improves
performance). I don't think we have any vmscan metrics for this... kswapd
has definitely become more active in 2.6.9-rc.

If you're stuck for ideas, try editing mm/vmscan.c:may_write_to_queue and
commenting out the if (current_is_kswapd()) check. It is a long shot
though. Andrew probably has better ideas.
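Something like the following, roughly (untested, against 2.6.9-rc; the
exact context lines may differ in your tree):

	--- a/mm/vmscan.c
	+++ b/mm/vmscan.c
	@@ static int may_write_to_queue(struct backing_dev_info *bdi)
	 static int may_write_to_queue(struct backing_dev_info *bdi)
	 {
	-	if (current_is_kswapd())
	-		return 1;
	+	/* XXX: test hack -- make kswapd obey queue congestion too */
	+	/* if (current_is_kswapd())
	+		return 1; */
	 	if (!bdi_write_congested(bdi))
	 		return 1;

The idea is just to see whether kswapd writing back dirty pages over
congested queues is what's hurting you; if throughput changes noticeably
with that check disabled, we'll know where to look next.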