path: root/arch/arm64/mm/init.c
author    Ard Biesheuvel <ardb@kernel.org>  2020-10-08 17:36:01 +0200
committer Catalin Marinas <catalin.marinas@arm.com>  2020-11-09 17:15:37 +0000
commit    8c96400d6a39be763130a5c493647c57726f7013 (patch)
tree      0794db8f11e674731069015b34a5c593f8a77ac7 /arch/arm64/mm/init.c
parent    arm64: mm: extend linear region for 52-bit VA configurations (diff)
arm64: mm: make vmemmap region a projection of the linear region
Now that we have reverted the introduction of the vmemmap struct page
pointer and the separate physvirt_offset, we can simplify things further,
and place the vmemmap region in the VA space in such a way that virtual
to page translations and vice versa can be implemented using a single
arithmetic shift.

One happy coincidence resulting from this is that the 48-bit/4k and
52-bit/64k configurations (which are assumed to be the two most
prevalent) end up with the same placement of the vmemmap region. In a
subsequent patch, we will take advantage of this, and unify the memory
maps even more.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Link: https://lore.kernel.org/r/20201008153602.9467-4-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Diffstat (limited to '')
-rw-r--r--	arch/arm64/mm/init.c	2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 7e15d92836d8..3a5e9f9298e9 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -502,6 +502,8 @@ static void __init free_unused_memmap(void)
*/
void __init mem_init(void)
{
+ BUILD_BUG_ON(!is_power_of_2(sizeof(struct page)));
+
if (swiotlb_force == SWIOTLB_FORCE ||
max_pfn > PFN_DOWN(arm64_dma_phys_limit ? : arm64_dma32_phys_limit))
swiotlb_init(1);