author	Hugh Dickins <hughd@google.com>	2022-03-24 18:14:02 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2022-03-24 19:06:51 -0700
commit	2c8659951654fc14c0351e33ca7604cdf670341e (patch)
tree	a8d77327771bd3036ca68ee82dd808a3bed4436f	/mm/memory.c
parent	mm: warn on deleting redirtied only if accounted (diff)
mm: unmap_mapping_range_tree() with i_mmap_rwsem shared
Revert 48ec833b7851 ("Revert "mm/memory.c: share the i_mmap_rwsem"") to
reinstate c8475d144abb ("mm/memory.c: share the i_mmap_rwsem"): the
unmap_mapping_range family of functions do the unmapping of user pages
(ultimately via zap_page_range_single) without modifying the interval tree
itself, and unmapping races are necessarily guarded by the page table lock,
thus the i_mmap_rwsem should be shared in unmap_mapping_pages() and
unmap_mapping_folio().

Commit 48ec833b7851 was intended as a short-term measure, allowing the
other shared lock changes into 3.19 final, before investigating three
trinity crashes, one of which had been bisected to commit c8475d144ab:

[1] https://lkml.org/lkml/2014/11/14/342
    https://lore.kernel.org/lkml/5466142C.60100@oracle.com/
[2] https://lkml.org/lkml/2014/12/22/213
    https://lore.kernel.org/lkml/549832E2.8060609@oracle.com/
[3] https://lkml.org/lkml/2014/12/9/741
    https://lore.kernel.org/lkml/5487ACC5.1010002@oracle.com/

Two of those were Bad page states: free_pages_prepare() found PG_mlocked
still set - almost certain to have been fixed by 4.4 commit b87537d9e2fe
("mm: rmap use pte lock not mmap_sem to set PageMlocked").  The NULL deref
on rwsem in [2]: unclear, only happened once, not bisected to c8475d144ab.

No change to the i_mmap_lock_write() around __unmap_hugepage_range_final()
in unmap_single_vma(): IIRC that's a special usage, helping to serialize
hugetlbfs page table sharing, not to be dabbled with lightly.  No change
to other uses of i_mmap_lock_write() by hugetlbfs.

I am not aware of any significant gains from the concurrency allowed by
this commit: it is submitted more to resolve an ancient misunderstanding.

Link: https://lkml.kernel.org/r/e4a5e356-6c87-47b2-3ce8-c2a95ae84e20@google.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Sasha Levin <sashal@kernel.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
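For reference, the read-side helpers this patch switches to are thin wrappers
around the i_mmap_rwsem in struct address_space; roughly, as they appear in
include/linux/fs.h of this era (shown only to make the shared-vs-exclusive
distinction concrete, not part of this change):

	static inline void i_mmap_lock_read(struct address_space *mapping)
	{
		/* shared: sufficient for walkers that do not modify the tree */
		down_read(&mapping->i_mmap_rwsem);
	}

	static inline void i_mmap_unlock_read(struct address_space *mapping)
	{
		up_read(&mapping->i_mmap_rwsem);
	}

Paths that actually insert or remove VMAs from the interval tree continue to
take the rwsem exclusively with i_mmap_lock_write()/i_mmap_unlock_write().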
Diffstat (limited to 'mm/memory.c')
-rw-r--r--	mm/memory.c	8
1 file changed, 4 insertions, 4 deletions
diff --git a/mm/memory.c b/mm/memory.c
index f721735ff947..be44d0b36b18 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3388,11 +3388,11 @@ void unmap_mapping_folio(struct folio *folio)
 	details.even_cows = false;
 	details.single_folio = folio;
 
-	i_mmap_lock_write(mapping);
+	i_mmap_lock_read(mapping);
 	if (unlikely(!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root)))
 		unmap_mapping_range_tree(&mapping->i_mmap, first_index,
 					 last_index, &details);
-	i_mmap_unlock_write(mapping);
+	i_mmap_unlock_read(mapping);
 }
 
 /**
@@ -3418,11 +3418,11 @@ void unmap_mapping_pages(struct address_space *mapping, pgoff_t start,
 	if (last_index < first_index)
 		last_index = ULONG_MAX;
 
-	i_mmap_lock_write(mapping);
+	i_mmap_lock_read(mapping);
 	if (unlikely(!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root)))
 		unmap_mapping_range_tree(&mapping->i_mmap, first_index,
 					 last_index, &details);
-	i_mmap_unlock_write(mapping);
+	i_mmap_unlock_read(mapping);
 }
 EXPORT_SYMBOL_GPL(unmap_mapping_pages);
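As the commit message argues, the read lock suffices because
unmap_mapping_range_tree() only walks the interval tree and never modifies
it; a simplified sketch of that helper from mm/memory.c of this era
(abbreviated, not line-for-line) shows the read-only walk:

	static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
						    pgoff_t first_index,
						    pgoff_t last_index,
						    struct zap_details *details)
	{
		struct vm_area_struct *vma;
		pgoff_t vba, vea, zba, zea;

		/* Read-only walk: no VMA is inserted into or erased from the tree. */
		vma_interval_tree_foreach(vma, root, first_index, last_index) {
			vba = vma->vm_pgoff;
			vea = vba + vma_pages(vma) - 1;
			zba = max(first_index, vba);
			zea = min(last_index, vea);

			/* PTE zapping itself is serialized by the page table locks. */
			unmap_mapping_range_vma(vma,
				((zba - vba) << PAGE_SHIFT) + vma->vm_start,
				((zea - vba + 1) << PAGE_SHIFT) + vma->vm_start,
				details);
		}
	}

Since nothing here mutates mapping->i_mmap, concurrent unmappers can share
the rwsem, while VMA insert/remove still excludes them by taking it for write.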