path: root/arch/s390/mm
author    Martin Schwidefsky <schwidefsky@de.ibm.com>    2017-08-17 08:15:16 +0200
committer Martin Schwidefsky <schwidefsky@de.ibm.com>    2017-09-06 09:24:42 +0200
commit    60f07c8ec5fae06c23e9fd7bab67dabce92b3414
tree      3c189bb7d158caba68c36b467603f94b243eea8f /arch/s390/mm
parent    s390/mm: fix local TLB flushing vs. detach of an mm address space
s390/mm: fix race on mm->context.flush_mm
The order in __tlb_flush_mm_lazy is to flush the TLB first and then clear the mm->context.flush_mm bit. This can lead to missed flushes, as the bit can be set at any time; the order needs to be the other way around. But this leads to a different race: __tlb_flush_mm_lazy may be called on two CPUs concurrently. If mm->context.flush_mm is cleared first, another CPU can bypass __tlb_flush_mm_lazy although the first CPU has not done the flush yet. In a virtualized environment the time until the flush is finally completed can be arbitrarily long.

Add a spinlock to serialize __tlb_flush_mm_lazy and use the function in finish_arch_post_lock_switch as well.

Cc: <stable@vger.kernel.org>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
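A minimal sketch of the serialized helper described above, assuming a spinlock field named context.lock is added to the s390 mm_context_t and initialized in init_new_context; the field name and exact placement are illustrative, not taken verbatim from the patch:

    static inline void __tlb_flush_mm_lazy(struct mm_struct *mm)
    {
    	/* Serialize concurrent callers: only the CPU that clears
    	 * flush_mm performs the flush, and any other caller waits
    	 * on the lock until that flush has completed. */
    	spin_lock(&mm->context.lock);	/* assumed new lock in mm_context_t */
    	if (mm->context.flush_mm) {
    		mm->context.flush_mm = 0;	/* clear the bit first ... */
    		__tlb_flush_mm(mm);		/* ... then flush, still under the lock */
    	}
    	spin_unlock(&mm->context.lock);
    }

With this shape, finish_arch_post_lock_switch can simply call __tlb_flush_mm_lazy instead of testing mm->context.flush_mm itself, so it gets the same serialization for free.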
Diffstat (limited to 'arch/s390/mm')
0 files changed, 0 insertions, 0 deletions