author     Kirill A. Shutemov <kirill.shutemov@linux.intel.com>  2015-02-10 14:09:49 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>        2015-02-10 14:30:30 -0800
commit     8a5f14a23177061ec11daeaa3d09d0765d785c47
tree       5199ffd75455cc98b652767813d07e64e6895c4e /include
parent     mm: replace remap_file_pages() syscall with emulation
mm: drop support of non-linear mapping from unmap/zap codepath
We have had remap_file_pages(2) emulation in the -mm tree for a few release cycles and plan to have it mainline in v3.20. This patchset removes the rest of the VM_NONLINEAR infrastructure.

Patches 1-8 take care of generic code. They are pretty straightforward and can be applied without the rest of the patches.

The remaining patches remove pte_file()-related stuff from architecture-specific code. This usually frees up one bit in the non-present pte. I've tried to reuse that bit for the swap offset where I was able to figure out how to do so. For obvious reasons I cannot test all of that arch-specific code and would like to see acks from maintainers.

In total, remap_file_pages(2) required about 1.4K lines of not-so-trivial kernel code. That's too much for functionality nobody uses.

Tested-by: Felipe Balbi <balbi@ti.com>

This patch (of 38):

We don't create non-linear mappings anymore. Let's drop the code which handles them on unmap/zap.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
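For readers unfamiliar with the interface being retired, below is a minimal, self-contained userspace sketch of what remap_file_pages(2) lets a program do: map file pages into an existing VMA out of order. It is illustrative only; the temp-file path and sizes are arbitrary choices, and after this series the same call is satisfied by the emulation (separate linear VMAs) rather than a VM_NONLINEAR mapping backed by pte_file() entries.

/*
 * Illustrative use of remap_file_pages(2), the interface whose in-kernel
 * VM_NONLINEAR support this series removes.  The syscall itself survives
 * as an emulation built on ordinary linear mappings.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

int main(void)
{
	long psz = sysconf(_SC_PAGESIZE);
	int fd = open("/tmp/remap-demo", O_RDWR | O_CREAT | O_TRUNC, 0600);
	if (fd < 0 || ftruncate(fd, 2 * psz) < 0) {
		perror("setup");
		return 1;
	}

	char *map = mmap(NULL, 2 * psz, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memcpy(map, "page0", 6);
	memcpy(map + psz, "page1", 6);

	/*
	 * Map file page 1 at the first virtual page of the mapping: the
	 * virtual layout no longer follows the file offset linearly.
	 * Before this series the kernel tracked that with VM_NONLINEAR
	 * and pte_file(); afterwards it is emulated with linear VMAs.
	 */
	if (remap_file_pages(map, psz, 0, 1, 0) < 0) {
		perror("remap_file_pages");
		return 1;
	}
	printf("first virtual page now reads: %s\n", map);  /* "page1" */

	munmap(map, 2 * psz);
	close(fd);
	return 0;
}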
Diffstat (limited to 'include')
-rw-r--r--  include/linux/mm.h | 1 -
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2c6fd3c5424a..600ef5ed4698 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1146,7 +1146,6 @@ extern void user_shm_unlock(size_t, struct user_struct *);
* Parameter block passed down to zap_pte_range in exceptional cases.
*/
struct zap_details {
- struct vm_area_struct *nonlinear_vma; /* Check page->index if set */
struct address_space *check_mapping; /* Check page->mapping if set */
pgoff_t first_index; /* Lowest page->index to unmap */
pgoff_t last_index; /* Highest page->index to unmap */
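For reference, here is a self-contained sketch of how a zap caller fills the slimmed-down structure once nonlinear_vma is gone. The struct layout mirrors the post-patch diff above; the stub types and the caller function are hypothetical stand-ins for illustration, not the actual kernel code (which lives around unmap_mapping_range() in mm/memory.c).

/*
 * Sketch only: stub types stand in for the kernel's, and the caller
 * function is hypothetical.  The zap_details layout matches the diff.
 */
#include <stdbool.h>
#include <stddef.h>

typedef unsigned long pgoff_t;
struct address_space;                    /* opaque stand-in */

struct zap_details {
	struct address_space *check_mapping; /* Check page->mapping if set */
	pgoff_t first_index;                 /* Lowest page->index to unmap */
	pgoff_t last_index;                  /* Highest page->index to unmap */
};

/* Hypothetical caller, loosely in the shape of unmap_mapping_range(). */
static void zap_file_range_sketch(struct address_space *mapping,
				  pgoff_t first, pgoff_t last, bool even_cows)
{
	struct zap_details details = {
		/*
		 * With non-linear VMAs gone, only the mapping and the
		 * page-index window remain to be checked while zapping.
		 */
		.check_mapping = even_cows ? NULL : mapping,
		.first_index   = first,
		.last_index    = last,
	};

	(void)details;   /* ... hand off to the zap_page_range machinery ... */
}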