author		Hugh Dickins <hughd@google.com>	2018-07-20 17:53:45 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2018-07-21 12:50:46 -0700
commit		e1f1b1572e8db87a56609fd05bef76f98f0e456a (patch)
tree		97c419c35f7bea38037c6ffa77017fa6ddf1232c
parent		fat: fix memory allocation failure handling of match_strdup() (diff)
download	linux-dev-e1f1b1572e8db87a56609fd05bef76f98f0e456a.tar.xz
		linux-dev-e1f1b1572e8db87a56609fd05bef76f98f0e456a.zip
mm/huge_memory.c: fix data loss when splitting a file pmd
__split_huge_pmd_locked() must check if the cleared huge pmd was dirty,
and propagate that to PageDirty: otherwise, data may be lost when a huge
tmpfs page is modified then split then reclaimed.

How has this taken so long to be noticed?  Because there was no problem
when the huge page is written by a write system call (shmem_write_end()
calls set_page_dirty()), nor when the page is allocated for a write fault
(fault_dirty_shared_page() calls set_page_dirty()); but when allocated
for a read fault (which MAP_POPULATE simulates), there is no
set_page_dirty().

Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1807111741430.1106@eggly.anvils
Fixes: d21b9e57c74c ("thp: handle file pages in split_huge_pmd()")
Signed-off-by: Hugh Dickins <hughd@google.com>
Reported-by: Ashwin Chaugule <ashwinch@google.com>
Reviewed-by: Yang Shi <yang.shi@linux.alibaba.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: <stable@vger.kernel.org>	[4.8+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
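As the message describes it, the fix amounts to carrying the pmd's dirty
bit over to the struct page when a file-backed huge pmd is torn down in
__split_huge_pmd_locked().  Below is a minimal sketch of that check; the
helper name and surrounding context are illustrative assumptions, not the
verbatim patch, though the calls used (vma_is_anonymous(), pmd_page(),
pmd_dirty(), set_page_dirty()) are the standard kernel helpers.

/*
 * Illustrative sketch (not the verbatim patch): when a file-backed huge
 * pmd is cleared during split, transfer its dirty bit to the page so
 * reclaim does not silently drop modified data.
 */
static void sketch_propagate_pmd_dirty(struct vm_area_struct *vma,
				       pmd_t old_pmd)
{
	struct page *page;

	if (vma_is_anonymous(vma))
		return;		/* anonymous THP takes a different path */

	page = pmd_page(old_pmd);

	/* The missing step: pmd dirty bit -> PageDirty before the pmd is gone. */
	if (pmd_dirty(old_pmd))
		set_page_dirty(page);
}

Without this, a huge tmpfs page faulted in read-only (e.g. via
MAP_POPULATE), dirtied through the pmd mapping, then split and reclaimed,
would be written back as clean and its modifications lost.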