author		Will Deacon <will.deacon@arm.com>	2018-08-23 21:01:46 +0100
committer	Will Deacon <will.deacon@arm.com>	2018-09-04 11:08:27 +0100
commit		a6d60245d6d9b1caf66b0d94419988c4836980af (patch)
tree		7c09c92c42b987a72d5c1c2410cd0918d2c76c17 /mm/memory.c
parent		asm-generic/tlb: Track freeing of page-table directories in struct mmu_gather (diff)
asm-generic/tlb: Track which levels of the page tables have been cleared
It is common for architectures with hugepage support to require only a
single TLB invalidation operation per hugepage during unmap(), rather than
iterating through the mapping at a PAGE_SIZE increment. Currently,
however, the level in the page table where the unmap() operation occurs
is not stored in the mmu_gather structure, therefore forcing
architectures to issue additional TLB invalidation operations or to give
up and over-invalidate by e.g. invalidating the entire TLB.

Ideally, we could add an interval rbtree to the mmu_gather structure,
which would allow us to associate the correct mapping granule with the
various sub-mappings within the range being invalidated. However, this
is costly in terms of book-keeping and memory management, so instead we
approximate by keeping track of the page table levels that are cleared
and provide a means to query the smallest granule required for
invalidation.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
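
The interesting half of this patch lives in include/asm-generic/tlb.h,
which is outside the mm/memory.c diffstat below. As a sketch of the
technique described above (reconstructed from the commit message, so
treat it as an illustration rather than a verbatim quote of the patch):
struct mmu_gather gains one bit per page-table level, and a query helper
reports the smallest granule required for invalidation:

/*
 * Sketch of the include/asm-generic/tlb.h side of this patch: one bit
 * per page-table level, set by the tlb_remove_*_tlb_entry() and
 * *_free_tlb() paths when an entry at that level is cleared.
 */
struct mmu_gather {
	/* ... existing fields (mm, start, end, freed_tables, ...) ... */

	/* Which levels of the page tables have been cleared: */
	unsigned int		cleared_ptes : 1;
	unsigned int		cleared_pmds : 1;
	unsigned int		cleared_puds : 1;
	unsigned int		cleared_p4ds : 1;
};

/*
 * The lowest level that saw a clearing operation bounds the mapping
 * size, so a gather covering only PMD-level (hugepage) entries can be
 * invalidated in PMD_SIZE steps instead of PAGE_SIZE steps.
 */
static inline unsigned long tlb_get_unmap_shift(struct mmu_gather *tlb)
{
	if (tlb->cleared_ptes)
		return PAGE_SHIFT;
	if (tlb->cleared_pmds)
		return PMD_SHIFT;
	if (tlb->cleared_puds)
		return PUD_SHIFT;
	if (tlb->cleared_p4ds)
		return P4D_SHIFT;

	return PAGE_SHIFT;
}

static inline unsigned long tlb_get_unmap_size(struct mmu_gather *tlb)
{
	return 1UL << tlb_get_unmap_shift(tlb);
}

An architecture's tlb_flush() can then walk the gathered range in
tlb_get_unmap_size(tlb) increments, issuing one invalidation per
hugepage where that suffices.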
Diffstat (limited to 'mm/memory.c')
-rw-r--r--	mm/memory.c	4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index c467102a5cbc..9135f48e8d84 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -267,8 +267,10 @@ void arch_tlb_finish_mmu(struct mmu_gather *tlb,
 {
 	struct mmu_gather_batch *batch, *next;
 
-	if (force)
+	if (force) {
+		__tlb_reset_range(tlb);
 		__tlb_adjust_range(tlb, start, end - start);
+	}
 
 	tlb_flush_mmu(tlb);
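
Why the force path now needs the reset (not spelled out in the commit
message, so this is a reading rather than a quote): with the new
per-level bits, stale range/level state in the gather may not describe
[start, end), so it is wiped before __tlb_adjust_range() widens the
range again; leaving the cleared_* bits unset makes the forced flush
fall back to the conservative PAGE_SHIFT granule. For reference,
__tlb_reset_range() in include/asm-generic/tlb.h looks roughly like this
after the series (a sketch, not part of this diffstat):

static inline void __tlb_reset_range(struct mmu_gather *tlb)
{
	if (tlb->fullmm) {
		tlb->start = tlb->end = ~0;
	} else {
		tlb->start = TASK_SIZE;
		tlb->end = 0;
	}
	tlb->freed_tables = 0;		/* from the parent commit */
	tlb->cleared_ptes = 0;		/* added by this commit */
	tlb->cleared_pmds = 0;
	tlb->cleared_puds = 0;
	tlb->cleared_p4ds = 0;
}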