author    | Will Deacon <will.deacon@arm.com> | 2013-07-02 14:54:33 +0100
committer | Will Deacon <will.deacon@arm.com> | 2013-09-30 16:42:55 +0100
commit    | 9bb17be062de6f5a9c9643258951aa0935652ec3
tree      | cf430be919c709f50752dc856a5384d329abcaee /arch/arm/include/asm/spinlock_types.h
parent    | ARM: prefetch: add support for prefetchw using pldw on SMP ARMv7+ CPUs
ARM: locks: prefetch the destination word for write prior to strex
The cost of changing a cacheline from shared to exclusive state can be
significant, especially when this is triggered by an exclusive store,
since it may result in having to retry the transaction.
This patch prefixes our {spin,read,write}_[try]lock implementations with
pldw instructions (on CPUs which support them) to try and grab the line
in exclusive state from the start. arch_rwlock_t is changed to avoid
using a volatile member, since this generates compiler warnings when
falling back on the __builtin_prefetch intrinsic which expects a const
void * argument.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Diffstat (limited to 'arch/arm/include/asm/spinlock_types.h')
-rw-r--r--  arch/arm/include/asm/spinlock_types.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/spinlock_types.h b/arch/arm/include/asm/spinlock_types.h
index b262d2f8b478..47663fcb10ad 100644
--- a/arch/arm/include/asm/spinlock_types.h
+++ b/arch/arm/include/asm/spinlock_types.h
@@ -25,7 +25,7 @@ typedef struct {
 #define __ARCH_SPIN_LOCK_UNLOCKED	{ { 0 } }
 
 typedef struct {
-	volatile unsigned int lock;
+	u32 lock;
 } arch_rwlock_t;
 
 #define __ARCH_RW_LOCK_UNLOCKED		{ 0 }