author     Mel Gorman <mgorman@suse.de>    2012-10-08 16:32:45 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>    2012-10-09 16:22:50 +0900
commit     c89511ab2f8fe2b47585e60da8af7fd213ec877e (patch)
tree       c6b04cb5335957e8409edc77ca23ef012d9d326d /include
parent     mm: compaction: cache if a pageblock was scanned and no pages were isolated (diff)
download   linux-dev-c89511ab2f8fe2b47585e60da8af7fd213ec877e.tar.xz
           linux-dev-c89511ab2f8fe2b47585e60da8af7fd213ec877e.zip
mm: compaction: Restart compaction from near where it left off
This is almost entirely based on Rik's previous patches and discussions with him about how this might be implemented.

Order > 0 compaction stops when enough free pages of the correct page order have been coalesced. When doing subsequent higher order allocations, it is possible for compaction to be invoked many times.

However, the compaction code always starts out looking for things to compact at the start of the zone, and for free pages to compact things to at the end of the zone. This can cause quadratic behaviour, with isolate_freepages starting at the end of the zone each time, even though previous invocations of the compaction code already filled up all free memory on that end of the zone. This can cause isolate_freepages to take enormous amounts of CPU with certain workloads on larger memory systems.

This patch caches where the migration and free scanner should start from on subsequent compaction invocations using the pageblock-skip information. When compaction starts it begins from the cached restart points and will update the cached restart points until a page is isolated or a pageblock is skipped that would have been scanned by synchronous compaction.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Richard Davies <richard@arachsys.com>
Cc: Shaohua Li <shli@kernel.org>
Cc: Avi Kivity <avi@redhat.com>
Acked-by: Rafael Aquini <aquini@redhat.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
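The mm/compaction.c side of this change is not shown here (this view is limited to 'include'), but a minimal sketch of the restart behaviour described above could look like the following. The helper name compact_zone_init_scanners is hypothetical, and the migrate_pfn/free_pfn fields of struct compact_control are assumed from the existing compaction code; only the two compact_cached_* fields come from this patch.

    /*
     * Illustrative sketch only: seed the two compaction scanners from the
     * pfns cached in struct zone instead of always restarting from the
     * zone boundaries, and fall back to the boundaries if the cached
     * values have gone stale (e.g. after the skip information expired).
     */
    static void compact_zone_init_scanners(struct zone *zone,
                                           struct compact_control *cc)
    {
            unsigned long start_pfn = zone->zone_start_pfn;
            unsigned long end_pfn = zone->zone_start_pfn + zone->spanned_pages;

            /* Resume roughly where the previous invocation left off */
            cc->migrate_pfn = zone->compact_cached_migrate_pfn;
            cc->free_pfn = zone->compact_cached_free_pfn;

            /* Reset the free scanner to the end of the zone if needed */
            if (cc->free_pfn < start_pfn || cc->free_pfn > end_pfn) {
                    cc->free_pfn = end_pfn & ~(pageblock_nr_pages - 1);
                    zone->compact_cached_free_pfn = cc->free_pfn;
            }

            /* Reset the migration scanner to the start of the zone if needed */
            if (cc->migrate_pfn < start_pfn || cc->migrate_pfn > end_pfn) {
                    cc->migrate_pfn = start_pfn;
                    zone->compact_cached_migrate_pfn = cc->migrate_pfn;
            }
    }

With the scanners seeded this way, repeated invocations for higher-order allocations stop rescanning memory that earlier passes already exhausted, which is what removes the quadratic behaviour of isolate_freepages.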
Diffstat (limited to 'include')
-rw-r--r--  include/linux/mmzone.h | 4
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index f85ecc9cfa1b..c8b3abc97a1e 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -371,6 +371,10 @@ struct zone {
int all_unreclaimable; /* All pages pinned */
#if defined CONFIG_COMPACTION || defined CONFIG_CMA
unsigned long compact_blockskip_expire;
+
+ /* pfns where compaction scanners should start */
+ unsigned long compact_cached_free_pfn;
+ unsigned long compact_cached_migrate_pfn;
#endif
#ifdef CONFIG_MEMORY_HOTPLUG
/* see spanned/present_pages for more description */
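The consumers of the two new fields live in mm/compaction.c and are not part of this hunk. A hedged sketch of the "update the cached restart points until a page is isolated" behaviour from the commit message is shown below; update_cached_restart is a hypothetical name used purely for illustration.

    /*
     * Illustrative sketch only: when a scanned pageblock yields no isolated
     * pages, remember how far the scanner got so the next compaction pass
     * can start past it.  The migration scanner walks forward through the
     * zone, so its cached pfn only advances; the free scanner walks
     * backward, so its cached pfn only retreats.
     */
    static void update_cached_restart(struct zone *zone, unsigned long pfn,
                                      unsigned long nr_isolated,
                                      bool migrate_scanner)
    {
            if (nr_isolated)
                    return;         /* progress was made; keep the old hint */

            if (migrate_scanner) {
                    if (pfn > zone->compact_cached_migrate_pfn)
                            zone->compact_cached_migrate_pfn = pfn;
            } else {
                    if (pfn < zone->compact_cached_free_pfn)
                            zone->compact_cached_free_pfn = pfn;
            }
    }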