author    Thomas Hellstrom <thellstrom@vmware.com>  2019-11-22 09:34:35 +0100
committer Thomas Hellstrom <thellstrom@vmware.com>  2020-01-16 10:32:41 +0100
commit    5379e4dd3220e23f68ce70b76b3a52a9a68cee05 (patch)
tree      36945924b0141ad317be66d73676423a17be6c1b /mm
parent    mm: Add a vmf_insert_mixed_prot() function (diff)
download  linux-dev-5379e4dd3220e23f68ce70b76b3a52a9a68cee05.tar.xz
          linux-dev-5379e4dd3220e23f68ce70b76b3a52a9a68cee05.zip
mm, drm/ttm: Fix vm page protection handling
TTM graphics buffer objects may, transparently to user-space, move
between IO and system memory. When that happens, all PTEs pointing to
the old location are zapped before the move and then faulted in again
if needed. When that happens, the page protection caching mode- and
encryption bits may change and be different from those of
struct vm_area_struct::vm_page_prot.

We were using an ugly hack to set the page protection correctly. Fix
that and instead export and use vmf_insert_mixed_prot() or use
vmf_insert_pfn_prot(). Also get the default page protection from
struct vm_area_struct::vm_page_prot rather than using
vm_get_page_prot(). This way we catch modifications done by the vm
system for drivers that want write-notification.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: "Christian König" <christian.koenig@amd.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
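To illustrate the scheme described above, here is a minimal, hedged sketch of a
driver fault handler. It is not taken from the patch; my_fault(), my_object_pfn()
and my_object_is_iomem() are hypothetical helpers, while vma->vm_page_prot,
pgprot_noncached() and vmf_insert_pfn_prot() are the real kernel interfaces
involved (the actual TTM code lives in ttm_bo_vm_fault_reserved()).

    #include <linux/mm.h>

    /*
     * Hedged sketch only: a simplified fault handler following the scheme in
     * the commit message. my_object_pfn() and my_object_is_iomem() are
     * hypothetical driver helpers.
     */
    static vm_fault_t my_fault(struct vm_fault *vmf)
    {
            struct vm_area_struct *vma = vmf->vma;
            /*
             * Start from vma->vm_page_prot so that write-notification bits
             * set by the vm system are kept, rather than calling
             * vm_get_page_prot().
             */
            pgprot_t prot = vma->vm_page_prot;
            unsigned long pfn = my_object_pfn(vmf);

            /* Buffer currently backed by IO memory: map it non-cached. */
            if (my_object_is_iomem(vmf))
                    prot = pgprot_noncached(prot);

            /*
             * Insert the PTE with the per-fault protection rather than the
             * possibly stale caching/encryption bits in vm_page_prot.
             */
            return vmf_insert_pfn_prot(vma, vmf->address, pfn, prot);
    }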
Diffstat (limited to '')
-rw-r--r--   mm/memory.c   1
1 file changed, 1 insertion, 0 deletions
diff --git a/mm/memory.c b/mm/memory.c
index f5e1fe1d5331..17aadc751e5c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1816,6 +1816,7 @@ vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
 {
 	return __vm_insert_mixed(vma, addr, pfn, pgprot, false);
 }
+EXPORT_SYMBOL(vmf_insert_mixed_prot);
 
 vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
 		pfn_t pfn)
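With the EXPORT_SYMBOL() above in place, modular drivers can call
vmf_insert_mixed_prot() directly. A hedged usage sketch follows;
my_mixed_fault() and its callers are hypothetical, the rest are existing
kernel interfaces.

    #include <linux/mm.h>
    #include <linux/pfn_t.h>

    /*
     * Hypothetical wrapper in a modular driver: insert a PTE for memory that
     * may not have a struct page (a "mixed" mapping), with a caller-chosen
     * protection instead of vma->vm_page_prot.
     */
    static vm_fault_t my_mixed_fault(struct vm_fault *vmf, unsigned long pfn,
                                     pgprot_t prot)
    {
            return vmf_insert_mixed_prot(vmf->vma, vmf->address,
                                         __pfn_to_pfn_t(pfn, PFN_DEV), prot);
    }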