author    Will Deacon <will@kernel.org>  2020-10-29 14:47:16 +0000
committer Marc Zyngier <maz@kernel.org>  2020-10-29 19:49:03 +0000
commit    e2fc6a9f686d037cbd9b08b9fb657685b4a722d3 (patch)
tree      c7362c1a7526b438a27376f8333ee9560921ed9d /arch/arm64/kvm
parent    KVM: arm64: Fix AArch32 handling of DBGD{CCINT,SCRext} and DBGVCR (diff)
KVM: arm64: Fix masks in stage2_pte_cacheable()
stage2_pte_cacheable() tries to figure out whether the mapping installed
in its 'pte' parameter is cacheable or not. Unfortunately, it fails
miserably because it extracts the memory attributes from the entry using
FIELD_GET(), which returns the attributes shifted down to bit 0, but
then compares this with the unshifted value generated by the
PAGE_S2_MEMATTR() macro.

A direct consequence of this bug is that cache maintenance is silently
skipped, which in turn causes 32-bit guests to crash early on when their
set/way maintenance is trapped but not emulated correctly.

Fix the broken masks by avoiding the use of FIELD_GET() altogether.

Fixes: 6d9d2115c480 ("KVM: arm64: Add support for stage-2 map()/unmap() in generic page-table")
Reported-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Link: https://lore.kernel.org/r/20201029144716.30476-1-will@kernel.org
Diffstat (limited to 'arch/arm64/kvm')
-rw-r--r--  arch/arm64/kvm/hyp/pgtable.c | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 95141b0d6088..0271b4a3b9fe 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -635,7 +635,7 @@ static void stage2_flush_dcache(void *addr, u64 size)
 
 static bool stage2_pte_cacheable(kvm_pte_t pte)
 {
-	u64 memattr = FIELD_GET(KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR, pte);
+	u64 memattr = pte & KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR;
 
 	return memattr == PAGE_S2_MEMATTR(NORMAL);
 }
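
For reference, a minimal user-space sketch of why the original comparison could
never succeed: a FIELD_GET()-style extraction returns the MemAttr field shifted
down to bit 0, while PAGE_S2_MEMATTR() encodes the attribute at its in-place
field position, so the two representations can only agree when the field is
zero. The mask, shift and "normal memory" values below are illustrative
assumptions, not the kernel's actual definitions.

/*
 * Illustrative sketch only: the mask, shift and "normal" encoding below are
 * assumed values for demonstration, not the arm64 KVM definitions.
 */
#include <stdint.h>
#include <stdio.h>

#define S2_MEMATTR_MASK    0x3cULL                              /* assume MemAttr occupies bits [5:2] */
#define S2_MEMATTR_SHIFT   2
#define S2_MEMATTR_NORMAL  (0xfULL << S2_MEMATTR_SHIFT)         /* in-place encoding, as PAGE_S2_MEMATTR() would produce */

int main(void)
{
	uint64_t pte = S2_MEMATTR_NORMAL;       /* a cacheable stage-2 mapping */

	/* Broken check: FIELD_GET()-style extraction shifts the field down to bit 0 ... */
	uint64_t shifted = (pte & S2_MEMATTR_MASK) >> S2_MEMATTR_SHIFT;
	/* ... so comparing it with the in-place encoding can never match (0xf != 0x3c). */
	printf("FIELD_GET-style compare: %s\n", shifted == S2_MEMATTR_NORMAL ? "match" : "no match");

	/* Fixed check: mask without shifting, so both sides use the in-place encoding. */
	uint64_t masked = pte & S2_MEMATTR_MASK;
	printf("masked compare:          %s\n", masked == S2_MEMATTR_NORMAL ? "match" : "no match");

	return 0;
}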