author: Linus Walleij <linus.walleij@linaro.org> 2021-08-09 12:57:19 +0100
committer: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> 2021-08-10 12:17:25 +0100
commit: 463dbba4d189750c2f576449d0bbb11c5413712e
tree: f207401d96165739964822b4230795da11fecb35
parent: Linux 5.14-rc1
ARM: 9104/2: Fix Keystone 2 kernel mapping regression
This fixes a Keystone 2 regression discovered as a side effect of defining and passing the physical start/end sections of the kernel to the MMU remapping code.

As Keystone 2 applies an offset to all physical addresses, including those identified and patched by phys2virt, we fail to account for this offset in the kernel_sec_start and kernel_sec_end variables. Further, these offsets can extend into the 64bit range on LPAE systems such as the Keystone 2.

Fix it like this:
- Extend kernel_sec_start and kernel_sec_end to be 64bit
- Add the offset also to kernel_sec_start and kernel_sec_end

As passing kernel_sec_start and kernel_sec_end as 64bit invariably incurs BE8 endianness issues, I have attempted to dry-code around these.

Tested on the Vexpress QEMU model both with and without LPAE enabled.

Fixes: 6e121df14ccd ("ARM: 9090/1: Map the lowmem and kernel separately")
Reported-by: Nishanth Menon <nmenon@kernel.org>
Suggested-by: Russell King <rmk+kernel@armlinux.org.uk>
Tested-by: Grygorii Strashko <grygorii.strashko@ti.com>
Tested-by: Nishanth Menon <nmenon@kernel.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
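For readers tripped up by the BE8 remark above: the two 32-bit halves of a 64-bit variable land at different addresses depending on data endianness, so 32-bit assembly that stores kernel_sec_start/kernel_sec_end one word at a time must pick the word order per endianness. Below is a minimal standalone illustration of that point only (plain userspace C, not kernel code or part of this patch; the address value is just an example above the 32bit limit):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        /* Example LPAE physical address above 4 GiB: 0x0000_0008_0000_0000 */
        uint64_t sec_start = 0x800000000ULL;
        uint32_t words[2];

        /* Copy the 64-bit value out as two consecutive 32-bit words */
        memcpy(words, &sec_start, sizeof(words));

        /*
         * Little-endian: the low word (0x00000000) sits at the lower address.
         * Big-endian (BE8 data): the high word (0x00000008) sits at the lower
         * address, so assembly that blindly stores "low word, then high word"
         * ends up with the halves swapped.
         */
        printf("word at lower address : 0x%08x\n", (unsigned)words[0]);
        printf("word at higher address: 0x%08x\n", (unsigned)words[1]);

        return 0;
}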
Diffstat (limited to 'arch/arm/include/asm/memory.h')
-rw-r--r--  arch/arm/include/asm/memory.h | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index cfc9dfd70aad..f673e13e0f94 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -160,10 +160,11 @@ extern unsigned long vectors_base;
 
 /*
  * Physical start and end address of the kernel sections. These addresses are
- * 2MB-aligned to match the section mappings placed over the kernel.
+ * 2MB-aligned to match the section mappings placed over the kernel. We use
+ * u64 so that LPAE mappings beyond the 32bit limit will work out as well.
  */
-extern u32 kernel_sec_start;
-extern u32 kernel_sec_end;
+extern u64 kernel_sec_start;
+extern u64 kernel_sec_end;
 
 /*
  * Physical vs virtual RAM address space conversion. These are