author | Hugh Dickins <hughd@google.com> | 2022-02-14 18:33:17 -0800
committer | Matthew Wilcox (Oracle) <willy@infradead.org> | 2022-02-17 11:57:06 -0500
commit | c3096e6782b733158bf34f6bbb4567808d4e0740 (patch)
tree | a28708da7662fc586a0ad8df19d29ccc162ecb12 /mm/mlock.c
parent | mm/munlock: mlock_pte_range() when mlocking or munlocking (diff)
download | wireguard-linux-c3096e6782b733158bf34f6bbb4567808d4e0740.tar.xz wireguard-linux-c3096e6782b733158bf34f6bbb4567808d4e0740.zip
mm/migrate: __unmap_and_move() push good newpage to LRU
Compaction, NUMA page movement, THP collapse/split, and memory failure
do isolate unevictable pages from their "LRU", losing the record of
mlock_count in doing so (isolators are likely to use page->lru for their
own private lists, so mlock_count has to be presumed lost).
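For context, the series keeps the mlock_count in the same words as the LRU
linkage, which is why an isolator reusing page->lru for a private list
destroys it. A simplified sketch of that overlay (abbreviated from struct
folio in include/linux/mm_types.h; page->mlock_count aliases the folio field):

	struct folio {
		unsigned long flags;
		union {
			struct list_head lru;	/* linkage while on an LRU list */
			struct {
				void *__filler;	/* shares storage with lru.next */
				unsigned int mlock_count; /* valid only while unevictable */
			};
		};
		/* ... */
	};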
That's unfortunate, and we should put in some work to correct that: one
can imagine a function to build up the mlock_count again - but it would
require i_mmap_rwsem for read, so be careful where it's called. Or
page_referenced_one() and try_to_unmap_one() might do that extra work.
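Purely illustrative, and not part of this series: such a rebuild would amount
to an rmap walk (which takes i_mmap_rwsem for read on file pages), counting
mappings found in VM_LOCKED vmas; the function name and the exact counting
rule below are hypothetical.

	/* Hypothetical sketch: recount mlock_count via an rmap walk. */
	static bool mlock_count_one(struct page *page, struct vm_area_struct *vma,
				    unsigned long address, void *arg)
	{
		unsigned int *count = arg;

		if (vma->vm_flags & VM_LOCKED)
			(*count)++;
		return true;			/* keep walking */
	}

	static unsigned int rebuild_mlock_count(struct page *page)
	{
		unsigned int count = 0;
		struct rmap_walk_control rwc = {
			.rmap_one = mlock_count_one,
			.arg = &count,
		};

		rmap_walk(page, &rwc);		/* i_mmap_rwsem held for read on file pages */
		return count;
	}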
But one place that can very easily be improved is page migration's
__unmap_and_move(): a small adjustment to where the successful new page
is put back on LRU, and its mlock_count (if any) is built back up by
remove_migration_ptes().
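Roughly, the adjustment (paraphrased, not the literal diff to mm/migrate.c) is
to add the successful newpage to the LRU before remove_migration_ptes() runs,
draining the per-cpu pagevec if the old page was mapped, so that mlocking the
new page during migration-entry replacement finds it on the LRU and rebuilds
its mlock_count; the later putback of newpage then reduces to a plain
put_page():

	/* Sketch of __unmap_and_move() after the change (paraphrased): */
	if (!page_mapped(page))
		rc = move_to_new_page(newpage, page, mode);

	/*
	 * Push the good newpage to LRU now, so that remove_migration_ptes()
	 * can build up its mlock_count if it turns out to be mlocked.
	 */
	if (rc == MIGRATEPAGE_SUCCESS) {
		lru_cache_add(newpage);
		if (page_was_mapped)
			lru_add_drain();
	}

	if (page_was_mapped)
		remove_migration_ptes(page,
			rc == MIGRATEPAGE_SUCCESS ? newpage : page, false);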
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Diffstat (limited to 'mm/mlock.c')
0 files changed, 0 insertions, 0 deletions