author | Vlastimil Babka <vbabka@suse.cz> | 2014-01-02 12:58:44 -0800 |
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2014-01-02 14:40:30 -0800 |
commit | 3b25df93c6e37e323b86a2a8c1e00c0a2821c6c9 (patch) | |
tree | 69737469efd0313ffa2b3fcbb93daf04c72a5dc6 /mm | |
parent | mm: munlock: fix a bug where THP tail page is encountered (diff) | |
mm: munlock: fix deadlock in __munlock_pagevec()
Commit 7225522bb429 ("mm: munlock: batch non-THP page isolation and
munlock+putback using pagevec") introduced __munlock_pagevec() to speed
up munlock by holding lru_lock over multiple isolated pages. Pages that
fail to be isolated are put_page()d immediately, also within the lock.
This can lead to deadlock when __munlock_pagevec() becomes the holder of
the last page pin and put_page() leads to __page_cache_release() which
also locks lru_lock. The deadlock has been observed by Sasha Levin
using trinity.
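To see the lock recursion in isolation, consider a minimal userspace sketch
(not kernel code; pthread_mutex_t stands in for the non-recursive lru_lock
spinlock, and page_pin, put_pin and release_last_pin are invented names for
this illustration):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;

struct page_pin { int refcount; };

/* Analogue of __page_cache_release(): freeing the page needs lru_lock. */
static void release_last_pin(struct page_pin *p)
{
	(void)p;
	/* A real non-recursive spinlock would spin here forever; trylock
	 * lets the demo report the self-deadlock instead of hanging. */
	if (pthread_mutex_trylock(&lru_lock) != 0) {
		printf("lru_lock already held: this is the deadlock\n");
		return;
	}
	/* ... remove the page from its LRU list and free it ... */
	pthread_mutex_unlock(&lru_lock);
}

/* Analogue of put_page(): dropping the last pin triggers the release path. */
static void put_pin(struct page_pin *p)
{
	if (--p->refcount == 0)
		release_last_pin(p);
}

int main(void)
{
	struct page_pin page = { .refcount = 1 };

	pthread_mutex_lock(&lru_lock);	/* batch work under the "LRU lock" */
	put_pin(&page);			/* drops the last pin under the lock */
	pthread_mutex_unlock(&lru_lock);
	return 0;
}

With a real spinlock the thread would simply spin forever on the lock it
already holds; the trylock above only makes the self-deadlock visible without
hanging the demo.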
This patch avoids the deadlock by deferring put_page() operations until
lru_lock is released. Another pagevec (which is also used by the later
phases of the function) is reused to gather the pages for the
put_page() operation.
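The deferral can be sketched with the same userspace analogue (again with
invented names; the actual patch below uses pagevec_add() and
pagevec_release() rather than an ad-hoc array): pages that fail isolation are
parked while the lock is held, and their pins are dropped only once the lock
is released.

#include <pthread.h>

#define BATCH 14	/* roughly one pagevec worth of pages */

static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;

struct page_pin { int refcount; };

/* Dropping the last pin may take lru_lock, so it must happen unlocked. */
static void put_pin(struct page_pin *p)
{
	if (--p->refcount == 0) {
		pthread_mutex_lock(&lru_lock);	/* safe: caller holds no lock */
		/* ... remove the page from its LRU list and free it ... */
		pthread_mutex_unlock(&lru_lock);
	}
}

static void munlock_batch(struct page_pin **pages, int nr)
{
	struct page_pin *putback[BATCH];
	int nr_putback = 0, i;

	pthread_mutex_lock(&lru_lock);
	for (i = 0; i < nr; i++) {
		/* In this sketch every page "fails isolation": park it for
		 * later instead of dropping its pin under the lock. */
		putback[nr_putback++] = pages[i];
		pages[i] = NULL;
	}
	pthread_mutex_unlock(&lru_lock);

	/* Pins are released only after lru_lock has been dropped. */
	for (i = 0; i < nr_putback; i++)
		put_pin(putback[i]);
}

int main(void)
{
	struct page_pin a = { .refcount = 1 }, b = { .refcount = 1 };
	struct page_pin *batch[BATCH] = { &a, &b };

	munlock_batch(batch, 2);
	return 0;
}

This mirrors what phase 1 of __munlock_pagevec() now does with pvec_putback
in the patch below.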
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat
-rw-r--r-- | mm/mlock.c | 15 |
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/mm/mlock.c b/mm/mlock.c
index c59c420fd6e1..192e6eebe4f2 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -298,10 +298,12 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 {
 	int i;
 	int nr = pagevec_count(pvec);
-	int delta_munlocked = -nr;
+	int delta_munlocked;
 	struct pagevec pvec_putback;
 	int pgrescued = 0;
 
+	pagevec_init(&pvec_putback, 0);
+
 	/* Phase 1: page isolation */
 	spin_lock_irq(&zone->lru_lock);
 	for (i = 0; i < nr; i++) {
@@ -330,18 +332,21 @@ skip_munlock:
 			/*
 			 * We won't be munlocking this page in the next phase
 			 * but we still need to release the follow_page_mask()
-			 * pin.
+			 * pin. We cannot do it under lru_lock however. If it's
+			 * the last pin, __page_cache_release would deadlock.
 			 */
+			pagevec_add(&pvec_putback, pvec->pages[i]);
 			pvec->pages[i] = NULL;
-			put_page(page);
-			delta_munlocked++;
 		}
 	}
+	delta_munlocked = -nr + pagevec_count(&pvec_putback);
 	__mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
 	spin_unlock_irq(&zone->lru_lock);
 
+	/* Now we can release pins of pages that we are not munlocking */
+	pagevec_release(&pvec_putback);
+
 	/* Phase 2: page munlock */
-	pagevec_init(&pvec_putback, 0);
 	for (i = 0; i < nr; i++) {
 		struct page *page = pvec->pages[i];
 