path: root/arch/arm/include
2019-05-17  powerpc/mm/hash: Fix get_region_id() for invalid addresses  (Aneesh Kumar K.V; 1 file changed, -0/+4)
Accesses by userspace to random addresses outside the user or kernel address range will generate an SLB fault. When we handle that fault we classify the effective address into several classes, e.g. user, kernel linear, kernel virtual etc. For addresses that are completely outside of any valid range, we should not insert an SLB entry at all, and instead immediately take an exception.

In the past this was handled in two ways. Firstly we would check the top nibble of the address (using REGION_ID(ea)) and that would tell us if the address was user (0), kernel linear (c), kernel virtual (d), or vmemmap (f). If the address didn't match any of these it was invalid. Then for each type of address we would do a secondary check. For the user region we check against H_PGTABLE_RANGE, for kernel linear we would mask the top nibble of the address and then check the address against MAX_PHYSMEM_BITS.

As part of commit 0034d395f89d ("powerpc/mm/hash64: Map all the kernel regions in the same 0xc range") we replaced REGION_ID() with get_region_id() and changed the masking of the top nibble to only mask the top two bits, which introduced a bug.

Addresses less than (4 << 60) are still handled correctly: they are either less than (1 << 60), in which case they are subject to the H_PGTABLE_RANGE check, or they are correctly checked against MAX_PHYSMEM_BITS.

However addresses from (4 << 60) to ((0xc << 60) - 1) are incorrectly treated as kernel linear addresses in get_region_id(). Then the top two bits are cleared by EA_MASK in slb_allocate_kernel() and the address is checked against MAX_PHYSMEM_BITS, which it passes due to the masking. The end result is that we incorrectly insert SLB entries for those addresses.

That is not actually catastrophic: having inserted the SLB entry we will then go on to take a page fault for the address, and at that point we detect the problem and report it as a bad fault. Still, we should not be inserting those entries, or treating them as kernel linear addresses in the first place.

So fix get_region_id() to detect addresses in that range and return an invalid region id, which we then use to avoid inserting an SLB entry and to directly report an exception.

Fixes: 0034d395f89d ("powerpc/mm/hash64: Map all the kernel regions in the same 0xc range")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
[mpe: Drop change to EA_MASK for now, rewrite change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
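A simplified, standalone sketch of the corrected classification described above; the region id values, the H_KERN_VIRT_START boundary and the printed examples are illustrative stand-ins, not the kernel's real definitions:

  /*
   * Standalone sketch only -- not the kernel's actual get_region_id().
   * Compile with a 64-bit toolchain; the constants mimic the layout
   * described in the change log above.
   */
  #include <stdio.h>

  #define USER_REGION_ID        0
  #define INVALID_REGION_ID     1   /* stand-in value for this sketch */
  #define LINEAR_MAP_REGION_ID  2
  #define VMALLOC_REGION_ID     3

  #define PAGE_OFFSET           (0xcULL << 60)               /* kernel regions live in the 0xc range */
  #define H_KERN_VIRT_START     (PAGE_OFFSET + (1ULL << 59)) /* illustrative boundary */

  static int get_region_id(unsigned long long ea)
  {
          int id = ea >> 60;

          if (id == 0)
                  return USER_REGION_ID;

          /*
           * The fix: a non-zero top nibble that is not 0xc cannot be a
           * valid kernel address, so report an invalid region instead
           * of falling through to the linear-map handling.
           */
          if (id != (PAGE_OFFSET >> 60))
                  return INVALID_REGION_ID;

          if (ea < H_KERN_VIRT_START)
                  return LINEAR_MAP_REGION_ID;

          return VMALLOC_REGION_ID;
  }

  int main(void)
  {
          /* 0x5000... was misclassified as kernel linear before the fix. */
          printf("%d\n", get_region_id(0x5000000000000000ULL)); /* invalid */
          printf("%d\n", get_region_id(0xc000000000000000ULL)); /* linear map */
          return 0;
  }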
2019-05-17  kvm: fix compilation on aarch64  (Paolo Bonzini; 1 file changed, -1/+1)
Commit e45adf665a53 ("KVM: Introduce a new guest mapping API", 2019-01-31) introduced a build failure on aarch64 defconfig:

  $ make -j$(nproc) ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- O=out defconfig \
        Image.gz
  ...
  ../arch/arm64/kvm/../../../virt/kvm/kvm_main.c: In function '__kvm_map_gfn':
  ../arch/arm64/kvm/../../../virt/kvm/kvm_main.c:1763:9: error: implicit declaration of function 'memremap'; did you mean 'memset_p'?
  ../arch/arm64/kvm/../../../virt/kvm/kvm_main.c:1763:46: error: 'MEMREMAP_WB' undeclared (first use in this function)
  ../arch/arm64/kvm/../../../virt/kvm/kvm_main.c: In function 'kvm_vcpu_unmap':
  ../arch/arm64/kvm/../../../virt/kvm/kvm_main.c:1795:3: error: implicit declaration of function 'memunmap'; did you mean 'vm_munmap'?

because these functions are declared in <linux/io.h> rather than <asm/io.h>, and the former was being pulled in already on x86 but not on aarch64.

Reported-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
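Based on the explanation and the one-line diffstat above, the fix presumably just switches kvm_main.c to the generic header; a sketch of that kind of change (assumed, not the verbatim hunk from the commit):

  /*
   * Sketch of the include change implied above: <linux/io.h> declares
   * memremap(), memunmap() and MEMREMAP_WB on every architecture,
   * whereas relying on <asm/io.h> to pull them in only happened to
   * work on x86.
   */
  #include <linux/io.h>   /* memremap(), memunmap(), MEMREMAP_WB */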
2019-05-16  riscv: fix locking violation in page fault handler  (Andreas Schwab; 1 file changed, -1/+2)
When a user mode process accesses an address in the vmalloc area, do_page_fault tries to unlock the mmap semaphore even though it isn't locked.

Signed-off-by: Andreas Schwab <schwab@suse.de>
[Palmer: Duplicated code instead of a goto]
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
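The locking rule at issue can be illustrated with a rough sketch (this is not the actual arch/riscv/mm/fault.c code, and the two handle_*() helpers are hypothetical placeholders): the vmalloc fast path never takes mmap_sem, so it must not share an exit path that drops it.

  /* Kernel-style sketch against 5.1-era APIs; helpers are hypothetical. */
  static void do_page_fault_sketch(struct mm_struct *mm, unsigned long addr)
  {
          if (addr >= VMALLOC_START && addr < VMALLOC_END) {
                  handle_vmalloc_fault(addr);
                  return;         /* mmap_sem was never taken: no up_read() here */
          }

          down_read(&mm->mmap_sem);
          handle_user_fault(mm, addr);
          up_read(&mm->mmap_sem); /* only release what this path acquired */
  }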
2019-05-16  RISC-V: sifive_l2_cache: Add L2 cache controller driver for SiFive SoCs  (Yash Shah; 3 files changed, -0/+192)
The driver currently supports only the SiFive FU540-C000 platform. The initial version of the L2 cache controller driver includes:

- Initial configuration reporting at boot up.
- Support for ECC related functionality.

Signed-off-by: Yash Shah <yash.shah@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
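For illustration, boot-time configuration reporting in a driver like this could be structured roughly as below; the register offset, the log message and the platform-driver shape are assumptions for the sketch, not the real driver's definitions (only the compatible string is taken from the FU540-C000 platform named above, and even that should be checked against the binding).

  /* Minimal sketch of boot-time configuration reporting; L2_CONFIG is
   * a hypothetical register offset and the real driver may be
   * structured quite differently. */
  #include <linux/err.h>
  #include <linux/io.h>
  #include <linux/of.h>
  #include <linux/platform_device.h>

  #define L2_CONFIG 0x00  /* hypothetical config register offset */

  static int sifive_l2_probe(struct platform_device *pdev)
  {
          struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
          void __iomem *base = devm_ioremap_resource(&pdev->dev, res);
          u32 cfg;

          if (IS_ERR(base))
                  return PTR_ERR(base);

          /* Report the controller configuration once at boot. */
          cfg = readl(base + L2_CONFIG);
          dev_info(&pdev->dev, "L2 cache config register: %#x\n", cfg);
          return 0;
  }

  static const struct of_device_id sifive_l2_ids[] = {
          { .compatible = "sifive,fu540-c000-ccache" },   /* assumed binding string */
          { /* sentinel */ }
  };

  static struct platform_driver sifive_l2_driver = {
          .probe = sifive_l2_probe,
          .driver = {
                  .name = "sifive_l2_cache",
                  .of_match_table = sifive_l2_ids,
          },
  };
  builtin_platform_driver(sifive_l2_driver);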