author     Hugh Dickins <hugh@veritas.com>            2005-10-29 18:16:28 -0700
committer  Linus Torvalds <torvalds@g5.osdl.org>      2005-10-29 21:40:40 -0700
commit     663b97f7efd001b0c56bd5fce059c5272725b86f (patch)
tree       c80088db3514bf7f1749243e81fc3abaf7252ebd /include/asm-parisc
parent     [PATCH] mm: pte_offset_map_lock loops (diff)
[PATCH] mm: flush_tlb_range outside ptlock
There was one small but very significant change in the previous patch: mprotect's
flush_tlb_range fell outside the page_table_lock: as it is in 2.4, but that doesn't
prove it safe in 2.6.

On some architectures flush_tlb_range comes to the same as flush_tlb_mm, which has
always been called from outside page_table_lock in dup_mmap, and is so proved safe.
Others required a deeper audit: I could find no reliance on page_table_lock in any;
but in ia64 and parisc found some code which looks a bit as if it might want
preemption disabled.  That won't do any actual harm, so pending a decision from the
maintainers, disable preemption there.

Remove comments on page_table_lock from flush_tlb_mm, flush_tlb_range and
flush_tlb_page entries in cachetlb.txt: they were rather misleading (what generic
code does is different from what usually happens), the rules are now changing, and
it's not yet clear where we'll end up (will the generic tlb_flush_mmu happen always
under lock?  never under lock?  or sometimes under and sometimes not?).

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
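For illustration, a minimal sketch (not the actual mm/mprotect.c code) of the
ordering described above: the page table entries are rewritten while
page_table_lock is held, and flush_tlb_range() is issued only after the lock has
been dropped.  The function name change_prot_range() and its simplified body are
assumptions made for this sketch; only the lock/flush ordering reflects the patch.

static void change_prot_range(struct vm_area_struct *vma,
		unsigned long start, unsigned long end, pgprot_t newprot)
{
	struct mm_struct *mm = vma->vm_mm;

	spin_lock(&mm->page_table_lock);
	/* walk the range and rewrite each pte with newprot ... */
	spin_unlock(&mm->page_table_lock);

	/*
	 * The flush now happens outside the lock, so an architecture's
	 * flush_tlb_range() can no longer rely on page_table_lock being
	 * held; ia64 and parisc get preempt_disable()/preempt_enable()
	 * around their flush code instead.
	 */
	flush_tlb_range(vma, start, end);
}

On architectures where flush_tlb_range comes to the same as flush_tlb_mm, this
ordering was already exercised by dup_mmap calling it outside the lock, which is
the "proved safe" case the message refers to.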
Diffstat (limited to 'include/asm-parisc')
-rw-r--r--   include/asm-parisc/tlbflush.h   3
1 file changed, 2 insertions, 1 deletion
diff --git a/include/asm-parisc/tlbflush.h b/include/asm-parisc/tlbflush.h
index 84af4ab1fe51..e97aa8d1eff5 100644
--- a/include/asm-parisc/tlbflush.h
+++ b/include/asm-parisc/tlbflush.h
@@ -88,7 +88,7 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
 	if (npages >= 512) /* 2MB of space: arbitrary, should be tuned */
 		flush_tlb_all();
 	else {
-
+		preempt_disable();
 		mtsp(vma->vm_mm->context,1);
 		purge_tlb_start();
 		if (split_tlb) {
@@ -102,6 +102,7 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
 				pdtlb(start);
 				start += PAGE_SIZE;
 			}
+			preempt_enable();
 		}
 		purge_tlb_end();
 	}