path: root/arch/arm/include/asm/Kbuild
author     Arnd Bergmann <arnd@arndb.de>               2017-10-20 21:17:05 +0100
committer  Russell King <rmk+kernel@armlinux.org.uk>   2017-10-24 10:33:23 +0100
commit     1cce91dfc8f7990ca3aea896bfb148f240b12860 (patch)
tree       2c9e0861ee2ef00c46585b289ac2c030f1a24792 /arch/arm/include/asm/Kbuild
parent     ARM: 8704/1: semihosting: use proper instruction on v7m processors (diff)
ARM: 8715/1: add a private asm/unaligned.h
The asm-generic/unaligned.h header provides two different implementations for accessing unaligned variables: the access_ok.h version, used when CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is set, pretends that all pointers are in fact aligned, while the le_struct.h version convinces gcc that the alignment of a pointer is '1', to make it issue the correct load/store instructions depending on the architecture flags.

On ARMv5 and older, we always use the second version, to let the compiler use byte accesses. On ARMv6 and newer, we currently use the access_ok.h version, so the compiler can use any instruction, including stm/ldm and ldrd/strd, which will cause an alignment trap. This trap can significantly impact performance when we have to do a lot of fixups and, worse, has led to crashes in the LZ4 decompressor code, which does not have a trap handler.

This adds an ARM-specific version of asm/unaligned.h that uses the le_struct.h/be_struct.h implementation unconditionally. This should lead to essentially the same code on ARMv6+ as before, except that regular load/store instructions are used instead of the trapping multi-register variants.

The crash in the LZ4 decompressor code was probably introduced by the patch replacing the LZ4 implementation, commit 4e1a33b105dd ("lib: update LZ4 compressor module"), so linux-4.11 and higher would be affected most. However, we probably want to have this backported to all older stable kernels as well, to help with the performance issues.

There are two follow-ups that I think we should also work on, but not backport to stable kernels: first, change the asm-generic version of the header to remove the ARM special case; second, review all other uses of CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS to see if they might be affected by the same problem on ARM.

Cc: stable@vger.kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
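
To make the difference between the two implementations concrete, here is a minimal user-space sketch of the le_struct.h-style technique that this patch switches ARM to. The struct and helper names (una_u32, get_unaligned_u32, put_unaligned_u32, get_unaligned_u32_trapping) are illustrative only, not the kernel's own identifiers; the point is how the packed struct changes what the compiler is allowed to assume:

#include <stdint.h>

/*
 * Packed wrapper: tells the compiler the 32-bit field may live at any
 * byte address, i.e. its effective alignment is 1.
 */
struct una_u32 {
	uint32_t x;
} __attribute__((packed));

/* le_struct.h-style read: the compiler must emit accesses that are safe
 * for a byte-aligned pointer (byte loads, or a single unaligned-capable
 * ldr on ARMv6+), never ldm/ldrd. */
static inline uint32_t get_unaligned_u32(const void *p)
{
	const struct una_u32 *ptr = p;

	return ptr->x;
}

static inline void put_unaligned_u32(uint32_t val, void *p)
{
	struct una_u32 *ptr = p;

	ptr->x = val;
}

/* access_ok.h-style read, for contrast: a plain dereference that lets the
 * compiler assume natural alignment and pick multi-register or doubleword
 * instructions, which trap if p is in fact misaligned. */
static inline uint32_t get_unaligned_u32_trapping(const void *p)
{
	return *(const uint32_t *)p;
}

On ARMv6+ the packed-struct accessors still typically compile down to single ldr/str instructions, which those cores can perform on unaligned addresses, so the generated code stays essentially the same as before, minus the trapping multi-register forms.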
Diffstat (limited to 'arch/arm/include/asm/Kbuild')
-rw-r--r--  arch/arm/include/asm/Kbuild  1 -
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/Kbuild b/arch/arm/include/asm/Kbuild
index 721ab5ecfb9b..0f2c8a2a8131 100644
--- a/arch/arm/include/asm/Kbuild
+++ b/arch/arm/include/asm/Kbuild
@@ -20,7 +20,6 @@ generic-y += simd.h
 generic-y += sizes.h
 generic-y += timex.h
 generic-y += trace_clock.h
-generic-y += unaligned.h
 generated-y += mach-types.h
 generated-y += unistd-nr.h
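
Dropping the generic-y line means the asm-generic wrapper is no longer generated for ARM, so the patch must supply a real arch/arm/include/asm/unaligned.h. That file is not shown in this Kbuild-limited diffstat; the following is only a hedged sketch of what such a private header could look like, based on the commit message's description of using the le_struct.h/be_struct.h implementation unconditionally (the linux/unaligned/* include paths and the __get_unaligned_le/__put_unaligned_le names are the ones those wrappers provided at the time, but their use here is an assumption, not the patch's actual content):

/*
 * Hedged sketch only -- not the file added by the patch. It selects the
 * struct-based accessors for the CPU's endianness and never includes
 * linux/unaligned/access_ok.h, so the compiler cannot emit ldm/stm or
 * ldrd/strd for pointers that might be unaligned.
 */
#ifndef __ASM_ARM_UNALIGNED_H
#define __ASM_ARM_UNALIGNED_H

#include <asm/byteorder.h>

#if defined(__LITTLE_ENDIAN)
# include <linux/unaligned/le_struct.h>     /* native-endian, struct-based */
# include <linux/unaligned/be_byteshift.h>  /* other endianness via byte shifts */
# include <linux/unaligned/generic.h>
# define get_unaligned	__get_unaligned_le
# define put_unaligned	__put_unaligned_le
#elif defined(__BIG_ENDIAN)
# include <linux/unaligned/be_struct.h>
# include <linux/unaligned/le_byteshift.h>
# include <linux/unaligned/generic.h>
# define get_unaligned	__get_unaligned_be
# define put_unaligned	__put_unaligned_be
#endif

#endif /* __ASM_ARM_UNALIGNED_H */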