From 152d7bd80dca5ce77ec2d7313149a2ab990e808e Mon Sep 17 00:00:00 2001
From: Dan Williams
Date: Thu, 12 Nov 2015 18:33:54 -0800
Subject: dax: fix __dax_pmd_fault crash

Since 4.3 introduced devm_memremap_pages() the pfns handled by DAX may
optionally have a struct page backing.  When a mapped pfn reaches
vmf_insert_pfn_pmd() it fails with a crash signature like the following:

 kernel BUG at mm/huge_memory.c:905!
 [..]
 Call Trace:
  [] __dax_pmd_fault+0x2ea/0x5b0
  [] xfs_filemap_pmd_fault+0x92/0x150 [xfs]
  [] handle_mm_fault+0x312/0x1b50

Fix this by falling back to 4K mappings in the pfn_valid() case.  Longer
term, vmf_insert_pfn_pmd() needs to grow support for architectures that
can provide a 'pmd_special' capability.

Cc:
Cc: Andrew Morton
Reported-by: Ross Zwisler
Signed-off-by: Dan Williams
---
 fs/dax.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/fs/dax.c b/fs/dax.c
index 131fd35ae39d..bff20cc56130 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -627,6 +627,13 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	if ((length < PMD_SIZE) || (pfn & PG_PMD_COLOUR))
 		goto fallback;
 
+	/*
+	 * TODO: teach vmf_insert_pfn_pmd() to support
+	 * 'pte_special' for pmds
+	 */
+	if (pfn_valid(pfn))
+		goto fallback;
+
 	if (buffer_unwritten(&bh) || buffer_new(&bh)) {
 		int i;
 		for (i = 0; i < PTRS_PER_PMD; i++)
-- 
cgit v1.2.3-59-g8ed1b