To: Linux-MM <linux-mm@xxxxxxxxx>
Subject: [PATCH 7/8] mm: vmscan: Immediately reclaim end-of-LRU dirty pages when writeback completes
From: Mel Gorman <mgorman@xxxxxxx>
Date: Thu, 21 Jul 2011 17:28:49 +0100
Cc: LKML <linux-kernel@xxxxxxxxxxxxxxx>, XFS <xfs@xxxxxxxxxxx>, Dave Chinner <david@xxxxxxxxxxxxx>, Christoph Hellwig <hch@xxxxxxxxxxxxx>, Johannes Weiner <jweiner@xxxxxxxxxx>, Wu Fengguang <fengguang.wu@xxxxxxxxx>, Jan Kara <jack@xxxxxxx>, Rik van Riel <riel@xxxxxxxxxx>, Minchan Kim <minchan.kim@xxxxxxxxx>, Mel Gorman <mgorman@xxxxxxx>
In-reply-to: <1311265730-5324-1-git-send-email-mgorman@xxxxxxx>
References: <1311265730-5324-1-git-send-email-mgorman@xxxxxxx>

When direct reclaim encounters a dirty page, it is currently recycled
around the LRU for another full cycle. This patch marks the page with
PageReclaim, similar to deactivate_page(), so that the page is
reclaimed almost immediately after it has been cleaned by writeback.
This avoids reclaiming clean pages that are younger than a dirty page
encountered at the end of the LRU, which may well have been a
use-once page.
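
For reference (not part of this diff): setting PageReclaim on an
isolated dirty page is sufficient because the writeback completion
path already checks the bit and rotates such pages to the tail of
the inactive LRU. A sketch, paraphrased from mm/filemap.c of this
era (details may vary between kernel versions):

	/* mm/filemap.c */
	void end_page_writeback(struct page *page)
	{
		/*
		 * The page was tagged PageReclaim while dirty; now
		 * that writeback is ending, move it to the tail of
		 * the inactive LRU so reclaim finds it next.
		 */
		if (TestClearPageReclaim(page))
			rotate_reclaimable_page(page);

		if (!test_clear_page_writeback(page))
			BUG();

		smp_mb__after_clear_bit();
		wake_up_page(page, PG_writeback);
	}

rotate_reclaimable_page() (mm/swap.c) batches the page onto a per-CPU
pagevec and moves it to the tail of the inactive list, so a page
marked here is considered for reclaim on the next pass instead of
surviving another full trip around the LRU.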

Signed-off-by: Mel Gorman <mgorman@xxxxxxx>
---
 include/linux/mmzone.h |    2 +-
 mm/vmscan.c            |   10 +++++++++-
 mm/vmstat.c            |    2 +-
 3 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b70a0c0..30d1dd1 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -100,7 +100,7 @@ enum zone_stat_item {
        NR_UNSTABLE_NFS,        /* NFS unstable pages */
        NR_BOUNCE,
        NR_VMSCAN_WRITE,
-       NR_VMSCAN_WRITE_SKIP,
+       NR_VMSCAN_INVALIDATE,
        NR_WRITEBACK_TEMP,      /* Writeback using temporary buffers */
        NR_ISOLATED_ANON,       /* Temporary isolated pages from anon lru */
        NR_ISOLATED_FILE,       /* Temporary isolated pages from file lru */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b0060f8..c3d8341 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -834,7 +834,15 @@ static unsigned long shrink_page_list(struct list_head *page_list,
                         */
                        if (page_is_file_cache(page) &&
                                        (!current_is_kswapd() || priority >= DEF_PRIORITY - 2)) {
-                               inc_zone_page_state(page, NR_VMSCAN_WRITE_SKIP);
+                               /*
+                                * Immediately reclaim when written back.
+                                * Similar in principle to deactivate_page()
+                                * except we already have the page isolated
+                                * and know it is dirty.
+                                */
+                               inc_zone_page_state(page, NR_VMSCAN_INVALIDATE);
+                               SetPageReclaim(page);
+
                                goto keep_locked;
                        }
 
diff --git a/mm/vmstat.c b/mm/vmstat.c
index fd109f3..5bd2043 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -702,7 +702,7 @@ const char * const vmstat_text[] = {
        "nr_unstable",
        "nr_bounce",
        "nr_vmscan_write",
-       "nr_vmscan_write_skip",
+       "nr_vmscan_invalidate",
        "nr_writeback_temp",
        "nr_isolated_anon",
        "nr_isolated_file",
-- 
1.7.3.4
