author     venkatesh.pallipadi@intel.com <venkatesh.pallipadi@intel.com>  2008-12-19 13:47:26 -0800
committer  H. Peter Anvin <hpa@zytor.com>  2008-12-19 15:40:30 -0800
commit     6bd9cd50c830eb88d571c492ec370a30bf999e15 (patch)
tree       4232d9aacd16e524644e8a259a35d99efec97ea4 /include/linux/mm.h
parent     x86: PAT: update documentation to cover pgprot and remap_pfn related changes - v3 (diff)
x86: PAT: clarify is_linear_pfn_mapping() interface
Impact: Documentation only

Incremental patches to address the review comments from Nick Piggin on the
v3 version of the x86 PAT pfnmap changes patchset, here:

http://lkml.indiana.edu/hypermail/linux/kernel/0812.2/01330.html

This patch: clarify is_linear_pfn_mapping() and its usage. It is used by
x86 PAT code for performance reasons: identifying a pfnmap as linear over
the entire vma speeds up reserving and freeing the memtype for the region.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
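[Editorial note] The fast path the message describes can be illustrated with a
small standalone C sketch of how a PAT-style caller might branch on
is_linear_pfn_mapping(). The struct layout, the VM_PFNMAP value, the assumed
PAGE_SHIFT of 12, and the reserve_* helpers are simplified stand-ins chosen
for illustration, not the real kernel definitions:

#include <stdio.h>

/* Simplified stand-ins for the kernel types (illustrative only). */
#define VM_PFNMAP 0x00000400

struct vm_area_struct {
	unsigned long vm_start, vm_end;  /* virtual address range */
	unsigned long vm_flags;
	unsigned long vm_pgoff;          /* for a linear pfnmap: the starting pfn */
};

static inline int is_linear_pfn_mapping(struct vm_area_struct *vma)
{
	return ((vma->vm_flags & VM_PFNMAP) && vma->vm_pgoff);
}

/* Hypothetical helpers standing in for the PAT memtype reserve logic. */
static void reserve_whole_range(unsigned long first_pfn, unsigned long npages)
{
	printf("fast path: reserve pfns %lu..%lu with one memtype entry\n",
	       first_pfn, first_pfn + npages - 1);
}

static void reserve_page_by_page(unsigned long npages)
{
	printf("slow path: reserve %lu pages one at a time\n", npages);
}

static void reserve_memtype_for(struct vm_area_struct *vma)
{
	unsigned long npages = (vma->vm_end - vma->vm_start) >> 12;

	if (is_linear_pfn_mapping(vma))
		reserve_whole_range(vma->vm_pgoff, npages);  /* one range op */
	else
		reserve_page_by_page(npages);                /* per-page ops */
}

int main(void)
{
	/* A linear pfnmap whose physical range starts at a nonzero pfn. */
	struct vm_area_struct vma = {
		.vm_start = 0x10000000UL, .vm_end = 0x10004000UL,
		.vm_flags = VM_PFNMAP,    .vm_pgoff = 0x80000UL,
	};

	reserve_memtype_for(&vma);   /* takes the fast path */

	/* The same mapping starting at physical address 0: vm_pgoff is 0,
	 * so the check fails and the code falls back to the slow path. */
	vma.vm_pgoff = 0;
	reserve_memtype_for(&vma);   /* takes the slow path */
	return 0;
}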
Diffstat (limited to 'include/linux/mm.h')
-rw-r--r--  include/linux/mm.h  |  8 ++++++++
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 87ecb40e11a0..35f811b0cd69 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -145,6 +145,14 @@ extern pgprot_t protection_map[16];
 #define FAULT_FLAG_WRITE 0x01 /* Fault was a write access */
 #define FAULT_FLAG_NONLINEAR 0x02 /* Fault was via a nonlinear mapping */
 
+/*
+ * This interface is used by x86 PAT code to identify a pfn mapping that is
+ * linear over the entire vma. This is to optimize PAT code that deals with
+ * marking the physical region with a particular prot. This is not for generic
+ * mm use. Note also that this check will not work if the pfn mapping is
+ * linear for a vma starting at physical address 0, in which case PAT code
+ * falls back to the slow path of reserving the physical range page by page.
+ */
 static inline int is_linear_pfn_mapping(struct vm_area_struct *vma)
 {
 	return ((vma->vm_flags & VM_PFNMAP) && vma->vm_pgoff);
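[Editorial note] On the caveat in the comment above: for a linear pfnmap,
remap_pfn_range()-style code records the starting pfn in vma->vm_pgoff, so
the pfn backing any address in the vma can be recovered arithmetically.
Reusing the stand-in struct from the earlier sketch (and again assuming a
PAGE_SHIFT of 12), the hypothetical helper below shows the relationship. A
physical range starting at pfn 0 leaves vm_pgoff at 0, which the boolean
test in is_linear_pfn_mapping() cannot tell apart from "no linear pfn
recorded"; hence the page-by-page fallback the comment mentions.

/* Illustrative only: how the backing pfn falls out of vm_pgoff for a
 * linear pfnmap (this helper is hypothetical, not a kernel API). */
static unsigned long linear_pfn(struct vm_area_struct *vma, unsigned long addr)
{
	/* offset of addr into the vma, in pages, plus the starting pfn */
	return ((addr - vma->vm_start) >> 12) + vma->vm_pgoff;
}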