path: root/include/linux/spinlock.h
author     Will Deacon <will.deacon@arm.com>               2015-03-31 09:39:41 +0100
committer  Paul E. McKenney <paulmck@linux.vnet.ibm.com>   2015-05-27 12:57:27 -0700
commit     d956028e99b30726b0bce0ca684b40b1ad67b514 (patch)
tree       9ac343191ce631a313cf1ac810b61d15b73bea8e /include/linux/spinlock.h
parent     rcu: Convert ACCESS_ONCE() to READ_ONCE() and WRITE_ONCE() (diff)
documentation: memory-barriers: Fix smp_mb__before_spinlock() semantics
Our current documentation claims that, when followed by an ACQUIRE,
smp_mb__before_spinlock() orders prior loads against subsequent loads
and stores, which isn't the intent.  This commit therefore fixes the
documentation to state that this sequence orders only prior stores
against subsequent loads and stores.

In addition, the original intent of smp_mb__before_spinlock() was to
only order prior loads against subsequent stores, however, people have
started using it as if it ordered prior loads against subsequent loads
and stores.  This commit therefore also updates
smp_mb__before_spinlock()'s header comment to reflect this new reality.

Cc: Oleg Nesterov <oleg@redhat.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
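For context, a minimal sketch of the usage pattern the corrected comment
describes; this is not part of the patch, and the names (wakeup_lock,
cond, need_wakeup, publish_and_check) are made up for illustration:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(wakeup_lock);
    static int cond;            /* condition published by this CPU          */
    static int need_wakeup;     /* set by the other CPU before it sleeps    */

    static void publish_and_check(void)
    {
            WRITE_ONCE(cond, 1);            /* prior STORE                  */
            smp_mb__before_spinlock();      /* order it before the section  */
            spin_lock(&wakeup_lock);
            if (READ_ONCE(need_wakeup)) {
                    /* LOAD inside the critical section saw the sleeper;
                     * issue the wakeup here.                               */
            }
            spin_unlock(&wakeup_lock);
    }

With the documented semantics, the STORE to cond cannot be reordered with
the LOAD of need_wakeup inside the critical section.  Paired with the
other CPU storing need_wakeup and then re-reading cond under a full
barrier, at least one side is guaranteed to observe the other's store,
so a wakeup cannot be lost.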
Diffstat (limited to 'include/linux/spinlock.h')
 include/linux/spinlock.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 3e18379dfa6f..0063b24b4f36 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -120,7 +120,7 @@ do { \
/*
* Despite its name it doesn't necessarily has to be a full barrier.
* It should only guarantee that a STORE before the critical section
- * can not be reordered with a LOAD inside this section.
+ * can not be reordered with LOADs and STOREs inside this section.
* spin_lock() is the one-way barrier, this LOAD can not escape out
* of the region. So the default implementation simply ensures that
* a STORE can not move into the critical section, smp_wmb() should