From badea125d7cbd93f1678a95cf009b3bdfe6065cd Mon Sep 17 00:00:00 2001
From: David Mosberger-Tang
Date: Mon, 25 Jul 2005 22:23:00 -0700
Subject: [IA64] Fix race in mm-context wrap-around logic.

The patch below should fix a race which could cause stale TLB entries,
specifically when 2 CPUs ended up racing for entrance to
wrap_mmu_context().  The losing CPU would find that by the time it
acquired ctx.lock, mm->context already had a valid value, but it then
failed to (re-)check the delayed TLB flushing logic and hence could end
up using a context number while there were still stale entries in its
TLB.  The fix is to check for delayed TLB flushes only after
mm->context is valid (non-zero).

The patch also makes GCC v4.x happier by defining a non-volatile
variant of mm_context_t called nv_mm_context_t.

Signed-off-by: David Mosberger-Tang
Signed-off-by: Tony Luck
---
 include/asm-ia64/mmu.h | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

(limited to 'include/asm-ia64/mmu.h')

diff --git a/include/asm-ia64/mmu.h b/include/asm-ia64/mmu.h
index ae1525352a25..611432ba579c 100644
--- a/include/asm-ia64/mmu.h
+++ b/include/asm-ia64/mmu.h
@@ -2,10 +2,12 @@
 #define __MMU_H
 
 /*
- * Type for a context number.  We declare it volatile to ensure proper ordering when it's
- * accessed outside of spinlock'd critical sections (e.g., as done in activate_mm() and
- * init_new_context()).
+ * Type for a context number.  We declare it volatile to ensure proper
+ * ordering when it's accessed outside of spinlock'd critical sections
+ * (e.g., as done in activate_mm() and init_new_context()).
  */
 typedef volatile unsigned long mm_context_t;
 
+typedef unsigned long nv_mm_context_t;
+
 #endif
-- 
cgit v1.2.3-59-g8ed1b
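
Note: this cgit view is limited to the mmu.h typedef change; the locking
fix described in the commit message lives in the IA64 mmu-context code,
which is not shown here.  The following is a minimal user-space sketch of
the pattern the message describes (re-check mm->context after taking the
lock, and only then handle any delayed TLB flush).  It is not the kernel's
code: the identifiers ctx_lock, next_ctx, delayed_flush_needed, get_context,
and struct mm, as well as the wrap-around handling, are invented purely for
illustration.

    /*
     * Simplified user-space model of the fixed lookup path; names and
     * wrap-around handling are illustrative, not the kernel's.
     */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned long next_ctx = 1;   /* next context number to hand out */
    static int delayed_flush_needed;     /* set when a wrap-around deferred a TLB flush */

    struct mm {
        volatile unsigned long context;  /* 0 means "no context assigned yet" */
    };

    static void local_tlb_flush(void)
    {
        /* stand-in for flushing this CPU's TLB */
        printf("flushing local TLB\n");
    }

    static unsigned long get_context(struct mm *mm)
    {
        unsigned long context = mm->context;

        if (context == 0) {
            pthread_mutex_lock(&ctx_lock);
            /* re-check: another CPU may have assigned a context while we waited */
            context = mm->context;
            if (context == 0) {
                if (next_ctx == 0) {     /* model a wrap-around of the context space */
                    delayed_flush_needed = 1;
                    next_ctx = 1;
                }
                mm->context = context = next_ctx++;
            }
            pthread_mutex_unlock(&ctx_lock);
        }

        /*
         * The point of the fix: check for a delayed flush only after
         * mm->context is known to be valid, so a CPU that lost the race
         * into the wrap-around path still flushes any stale TLB entries
         * before it starts using the context number.
         */
        if (delayed_flush_needed) {
            local_tlb_flush();
            delayed_flush_needed = 0;
        }

        return context;
    }

    int main(void)
    {
        struct mm mm = { .context = 0 };
        printf("context = %lu\n", get_context(&mm));
        return 0;
    }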