author     David Hildenbrand <david@redhat.com>          2019-09-23 15:36:05 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org> 2019-09-24 15:54:09 -0700
commit     bd02cc01d342b43618c86e25552150f7a7e09080 (patch)
tree       a7fb742c9cb692c07ff29e5d856900981e341896 /mm/memory_hotplug.c
parent     mm/memory_hotplug: simplify online_pages_range() (diff)
mm/memory_hotplug: make sure the pfn is aligned to the order when onlining
Commit a9cd410a3d29 ("mm/page_alloc.c: memory hotplug: free pages as higher order") assumed that any PFN we get via memory resources is aligned to MAX_ORDER - 1, but I am not convinced that is always true. Let's play safe, check the alignment, and fall back to single pages.

akpm: warn in this situation so we get to find out if and why this ever occurs.

[akpm@linux-foundation.org: add WARN_ON_ONCE()]
Link: http://lkml.kernel.org/r/20190814154109.3448-5-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Arun KS <arunks@codeaurora.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Nadav Amit <namit@vmware.com>
Cc: Wei Yang <richardw.yang@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
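For illustration only, here is a minimal userspace sketch of the concern described in the message: a start PFN that is not aligned to the order derived from the remaining range. PAGE_SHIFT, MAX_ORDER, the helper macros, the simplified get_order() and the example PFNs are all assumptions mocked for this sketch, not the kernel implementations.

/*
 * Minimal userspace sketch of the concern: a PFN that is not aligned to
 * the order derived from the remaining range. All constants, helpers and
 * example PFNs below are mocked assumptions, not kernel code.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define MAX_ORDER	11	/* assumed typical configuration */
#define PFN_PHYS(pfn)	((unsigned long)(pfn) << PAGE_SHIFT)
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

/* Simplified model of get_order(): smallest order whose block covers size bytes. */
static int get_order(unsigned long size)
{
	unsigned long pages = (size + (1ul << PAGE_SHIFT) - 1) >> PAGE_SHIFT;
	int order = 0;

	while ((1ul << order) < pages)
		order++;
	return order;
}

int main(void)
{
	/* Hypothetical hot-added range that is only aligned to 2^8 pages. */
	unsigned long pfn = 0x40100, end_pfn = 0x48100;
	int order = get_order(PFN_PHYS(end_pfn - pfn));

	if (order > MAX_ORDER - 1)
		order = MAX_ORDER - 1;

	printf("pfn 0x%lx, chosen order %d, aligned to that order: %s\n",
	       pfn, order, IS_ALIGNED(pfn, 1ul << order) ? "yes" : "no");

	/* The patch falls back to order 0 (single pages) in this case. */
	if (!IS_ALIGNED(pfn, 1ul << order))
		order = 0;

	printf("order actually used for the first chunk: %d\n", order);
	return 0;
}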
Diffstat (limited to 'mm/memory_hotplug.c')
-rw-r--r--  mm/memory_hotplug.c  3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index b5ad646df86b..aa54e15ea830 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -646,6 +646,9 @@ static int online_pages_range(unsigned long start_pfn, unsigned long nr_pages,
 	 */
 	for (pfn = start_pfn; pfn < end_pfn; pfn += 1ul << order) {
 		order = min(MAX_ORDER - 1, get_order(PFN_PHYS(end_pfn - pfn)));
+		/* __free_pages_core() wants pfns to be aligned to the order */
+		if (WARN_ON_ONCE(!IS_ALIGNED(pfn, 1ul << order)))
+			order = 0;
 		(*online_page_callback)(pfn_to_page(pfn), order);
 	}