path: root/arch/powerpc
Age  Commit message  [Author, files changed, lines -/+]
2022-09-30  powerpc: Add hardware description string  [Michael Ellerman, 2 files, -1/+20]
Create a hardware description string, which we will use to record various details of the hardware platform we are running on. Print the accumulated description at boot, and use it to set the generic description which is printed in oopses. To begin with add ppc_md.name, aka the "machine description".

Example output at boot with the full series applied:

  Linux version 6.0.0-rc2-gcc-11.1.0-00199-g893f9007a5ce-dirty (michael@alpine1-p1) (powerpc64-linux-gcc (GCC) 11.1.0, GNU ld (GNU Binutils) 2.36.1) #844 SMP Thu Sep 29 22:29:53 AEST 2022
  Hardware name: IBM pSeries (emulated by qemu) POWER9 (raw) 0x4e1200 0xf000005 of:SLOF,git-5b4c5a pSeries
  printk: bootconsole [udbg0] enabled

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220930082709.55830-1-mpe@ellerman.id.au
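As a rough illustration of the mechanism (a hedged sketch, not the actual implementation; the buffer size, function name and placement are assumptions), the description can be accumulated in a seq_buf and fed to the generic oops description:

  /* Sketch only: names and sizes are illustrative. */
  static char hw_desc_buf[128] __initdata;
  static struct seq_buf ppc_hw_desc __initdata;

  static void __init hw_desc_add_machine(void)
  {
          seq_buf_init(&ppc_hw_desc, hw_desc_buf, sizeof(hw_desc_buf));
          seq_buf_printf(&ppc_hw_desc, "%s ", ppc_md.name);   /* the "machine description" */
          dump_stack_set_arch_desc("%s", hw_desc_buf);        /* shown as "Hardware name:" in oopses */
          pr_info("Hardware name: %s\n", hw_desc_buf);        /* printed once at boot */
  }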
2022-09-30  powerpc/configs: Enable PPC_UV in powernv_defconfig  [Michael Ellerman, 1 file, -0/+3]
Make sure the ultravisor code at least gets some build testing by enabling it in powernv_defconfig. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220929051517.1903079-1-mpe@ellerman.id.au
2022-09-30  powerpc/configs: Update config files for removed/renamed symbols  [Lukas Bulwahn, 17 files, -33/+1]
Clean up config files by:
 - removing configs that were deleted in the past
 - removing configs not in tree and without recently pending patches
 - adding new configs that are replacements for old configs in the file

For some detailed information, see: https://lore.kernel.org/kernel-janitors/20220929090645.1389-1-lukas.bulwahn@gmail.com/

Renamed:
 - CONFIG_PPC_PTDUMP -> CONFIG_GENERIC_PTDUMP
   e084728393a5 ("powerpc/ptdump: Convert powerpc to GENERIC_PTDUMP")

Removed:
 - CONFIG_BLK_DEV_CRYPTOLOOP
   47e9624616c8 ("block: remove support for cryptoloop and the xor transfer")
 - CONFIG_CRYPTO_RMD128
   b21b9a5e0aef ("crypto: rmd128 - remove RIPE-MD 128 hash algorithm")
 - CONFIG_CRYPTO_RMD256
   c15d4167f0b0 ("crypto: rmd256 - remove RIPE-MD 256 hash algorithm")
 - CONFIG_CRYPTO_RMD320
   93f64202926f ("crypto: rmd320 - remove RIPE-MD 320 hash algorithm")
 - CONFIG_CRYPTO_SALSA20
   663f63ee6d9c ("crypto: salsa20 - remove Salsa20 stream cipher algorithm")
 - CONFIG_CRYPTO_TGR192
   87cd723f8978 ("crypto: tgr192 - remove Tiger 128/160/192 hash algorithms")
 - CONFIG_HARDENED_USERCOPY_PAGESPAN
   1109a5d90701 ("usercopy: Remove HARDENED_USERCOPY_PAGESPAN")
 - CONFIG_RAPIDIO_TSI568, CONFIG_RAPIDIO_TSI57X
   612d4904191f ("rapidio: remove not used code about RIO_VID_TUNDRA")
 - CONFIG_RAW_DRIVER
   603e4922f1c8 ("remove the raw driver")
 - CONFIG_ROCKETPORT
   3b00b6af7a5b ("tty: rocket, remove the driver")
 - CONFIG_ENABLE_MUST_CHECK
   196793946264 ("Compiler Attributes: remove CONFIG_ENABLE_MUST_CHECK")

Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
[mpe: Add documentation of relevant commit for each symbol change]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220929101502.32527-1-lukas.bulwahn@gmail.com
2022-09-30  powerpc/mm: Fix UBSAN warning reported on hugetlb  [Aneesh Kumar K.V, 1 file, -3/+3]
The powerpc architecture supports 16GB hugetlb pages with hash translation. For a 4K page size this is implemented as a hugepage directory entry at PGD level, and for 64K it is implemented as a huge page pte at PUD level. With a 16GB hugetlb size, the offset within a page is greater than 32 bits, so switch to using an unsigned long type with hugepd_shift(). To keep things simple, always use unsigned long with hugepd_shift() even though not every hugetlb page size requires it.

The hugetlb_free_p*d_range changes are all related to nohash usage, where we can have multiple pgd entries pointing to the same hugepd entries. Hence on book3s64, where we can have > 4GB hugetlb page sizes, we will always find more < next even if we compute the value of more correctly. So there is no functional change in this patch except that it fixes the below warning:

  UBSAN: shift-out-of-bounds in arch/powerpc/mm/hugetlbpage.c:499:21
  shift exponent 34 is too large for 32-bit type 'int'
  CPU: 39 PID: 1673 Comm: a.out Not tainted 6.0.0-rc2-00327-gee88a56e8517-dirty #1
  Call Trace:
    dump_stack_lvl+0x98/0xe0 (unreliable)
    ubsan_epilogue+0x18/0x70
    __ubsan_handle_shift_out_of_bounds+0x1bc/0x390
    hugetlb_free_pgd_range+0x5d8/0x600
    free_pgtables+0x114/0x290
    exit_mmap+0x150/0x550
    mmput+0xcc/0x210
    do_exit+0x420/0xdd0
    do_group_exit+0x4c/0xd0
    sys_exit_group+0x24/0x30
    system_call_exception+0x250/0x600
    system_call_common+0xec/0x250

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
[mpe: Drop generic change to be sent separately, change 1ULL to 1UL]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220908072440.258301-1-aneesh.kumar@linux.ibm.com
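The gist of the type change, as a hedged sketch (not the exact hunks from the patch):

  /* Sketch: with a 16GB page size hugepd_shift() can be >= 32, so the
   * shift must be evaluated in unsigned long (64-bit) arithmetic. */
  unsigned long sz;

  sz = 1UL << hugepd_shift(hpd);     /* OK: 64-bit shift */
  /* sz = 1 << hugepd_shift(hpd);       BAD: 32-bit int shift, UBSAN trips at exponent >= 32 */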
2022-09-30  powerpc/mm: Always update max/min_low_pfn in mem_topology_setup()  [Aneesh Kumar K.V, 1 file, -3/+3]
For both CONFIG_NUMA enabled/disabled use mem_topology_setup() to update max/min_low_pfn. This also adds min_low_pfn update to CONFIG_NUMA which was initialized to zero before. (mpe: Though MEMORY_START is == 0 for PPC64=y which is all possible NUMA=y systems) Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220704063851.295482-1-aneesh.kumar@linux.ibm.com
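A minimal sketch of the idea (illustrative, not the verbatim function):

  /* Sketch: set the pfn limits in mem_topology_setup() for both the
   * NUMA and non-NUMA configurations. */
  void __init mem_topology_setup(void)
  {
          max_low_pfn = max_pfn = memblock_end_of_DRAM() >> PAGE_SHIFT;
          min_low_pfn = MEMORY_START >> PAGE_SHIFT;
          /* ... node / online-memory setup ... */
  }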
2022-09-30  powerpc/mm/book3s/hash: Rename flush_tlb_pmd_range  [Aneesh Kumar K.V, 3 files, -5/+3]
This function does the hash page table update. Hence rename it to better indicate this, and to avoid confusion with flush_pmd_tlb_range(). Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> [mpe: Drop unnecessary extern] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220907081941.209501-1-aneesh.kumar@linux.ibm.com
2022-09-30  powerpc: Drop STABS_DEBUG from linker scripts  [Michael Ellerman, 3 files, -3/+0]
No toolchain we support should be generating stabs debug information anymore. Drop the sections entirely from our linker scripts. We removed all the manual stabs annotations in commit 12318163737c ("powerpc/32: Remove remaining .stabs annotations"). Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220928130951.1732983-1-mpe@ellerman.id.au
2022-09-30  powerpc/64s: Remove lost/old comment  [Michael Ellerman, 1 file, -10/+0]
The bulk of this was moved/reworded in: 57f266497d81 ("powerpc: Use gas sections for arranging exception vectors") And now appears around line 700 in arch/powerpc/kernel/exceptions-64s.S. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220928130941.1732818-1-mpe@ellerman.id.au
2022-09-30  powerpc/64s: Remove old STAB comment  [Michael Ellerman, 1 file, -6/+0]
This used to be about the 0x4300 handler, but that was moved in commit da2bc4644c75 ("powerpc/64s: Add new exception vector macros"). Note that "STAB" here refers to "Segment Table" not the debug format. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220928130912.1732466-1-mpe@ellerman.id.au
2022-09-30  powerpc: remove orphan systbl_chk.sh  [Nicholas Piggin, 1 file, -30/+0]
arch/powerpc/kernel/systbl_chk.sh has not been referenced since commit ab66dcc76d6a ("powerpc: generate uapi header and system call table files"). Remove it. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220929032120.3592593-1-npiggin@gmail.com
2022-09-30  powerpc/pseries/vas: Pass hw_cpu_id to node associativity HCALL  [Haren Myneni, 1 file, -1/+1]
Generally the hypervisor decides to allocate a window on different VAS instances. But if user space wishes to allocate on the current VAS instance where the process is executing, the kernel has to pass associativity domain IDs to allocate VAS window HCALL. To determine the associativity domain IDs for the current CPU, smp_processor_id() is passed to node associativity HCALL which may return H_P2 (-55) error during DLPAR CPU event. This is because Linux CPU numbers (smp_processor_id()) are not the same as the hypervisor's view of CPU numbers. Fix the issue by passing hard_smp_processor_id() with VPHN_FLAG_VCPU flag (PAPR 14.11.6.1 H_HOME_NODE_ASSOCIATIVITY). Fixes: b22f2d88e435 ("powerpc/pseries/vas: Integrate API with open/close windows") Reviewed-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Haren Myneni <haren@linux.ibm.com> [mpe: Update change log to mention Linux vs HV CPU numbers] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/55380253ea0c11341824cd4c0fc6bbcfc5752689.camel@linux.ibm.com
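Conceptually the fix is the following sketch (assuming the existing hcall_vphn() wrapper; variable names are illustrative):

  /* Sketch: pass the hypervisor's view of the CPU number, not the Linux
   * one, when asking for the current CPU's associativity. */
  rc = hcall_vphn(hard_smp_processor_id(), VPHN_FLAG_VCPU, associativity);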
2022-09-30  KVM: PPC: Book3S HV: Implement scheduling wait interval counters in the VPA  [Nicholas Piggin, 1 file, -21/+41]
PAPR specifies accumulated virtual processor wait intervals that relate to partition scheduling interval times. Implement these counters in the same way as they are reported by dtl. Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220908132545.4085849-5-npiggin@gmail.com
2022-09-30  Merge branch 'topic/ppc-kvm' into next  [Michael Ellerman, 2 files, -33/+76]
Merge some KVM commits we are keeping in our topic branch.
2022-09-29  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  [Jakub Kicinski, 1 file, -9/+0]
No conflicts. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-29  Merge branch 'v6.0-rc7'  [Peter Zijlstra, 8 files, -72/+93]
Merge upstream to get RAPTORLAKE_S Signed-off-by: Peter Zijlstra <peterz@infradead.org>
2022-09-29  kbuild: build init/built-in.a just once  [Masahiro Yamada, 1 file, -1/+1]
Kbuild builds init/built-in.a twice; first during the ordinary directory descending, second from scripts/link-vmlinux.sh. We do this because UTS_VERSION contains the build version and the timestamp. We cannot update it during the normal directory traversal since we do not yet know if we need to update vmlinux. UTS_VERSION is temporarily calculated, but omitted from the update check. Otherwise, vmlinux would be rebuilt every time.

When Kbuild results in running link-vmlinux.sh, it increments the version number in the .version file and takes the timestamp at that time to really fix UTS_VERSION. However, updating the same file twice is a footgun. To avoid nasty timestamp issues, all build artifacts that depend on init/built-in.a are atomically generated in link-vmlinux.sh, where some of them do not need rebuilding.

To fix this issue, this commit changes as follows:

[1] Split UTS_VERSION out to include/generated/utsversion.h from include/generated/compile.h

    include/generated/utsversion.h is generated just before the vmlinux link. It is generated under include/generated/ because some decompressors (s390, x86) use UTS_VERSION.

[2] Split init_uts_ns and linux_banner out to init/version-timestamp.c from init/version.c

    init_uts_ns and linux_banner contain UTS_VERSION. During the ordinary directory descending, they are compiled with __weak and used to determine if vmlinux needs relinking. Just before the vmlinux link, they are compiled without __weak to embed the real version and timestamp.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2022-09-28  powerpc: Ignore DSI error caused by the copy/paste instruction  [Haren Myneni, 3 files, -12/+46]
The data storage interrupt (DSI) error will be generated when the paste operation is issued on a suspended Nest Accelerator (NX) window due to NX state changes. The hypervisor expects the partition to ignore this error during page fault handling. To differentiate a DSI caused by an actual HW configuration from one caused by the NX window, a new “ibm,pi-features” type value is defined. Byte 0, bit 3 of pi-attribute-specifier-type is now defined to indicate this DSI error. If this error is not ignored, user space can get SIGBUS when the NX request is issued. This patch adds changes to read the ibm,pi-features property and ignore the DSI error during page fault handling if MMU_FTR_NX_DSI is defined. Signed-off-by: Haren Myneni <haren@linux.ibm.com> [mpe: Mention PAPR version in comment] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/b9cd844b85eb8f70459109ce1b14e44c4cc85fa7.camel@linux.ibm.com
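As a purely illustrative sketch of the page-fault-side check (the helper name here is hypothetical; only the feature bit name comes from the change log):

  /* Hypothetical sketch: on a fault caused by paste to a suspended NX
   * window, return without signalling SIGBUS so user space retries the paste. */
  if (mmu_has_feature(MMU_FTR_NX_DSI) && nx_paste_dsi(regs, error_code))
          return 0;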
2022-09-28  powerpc: Reverse stack frame marker on little endian  [Michael Ellerman, 1 file, -0/+5]
On little endian the stack frame marker appears reversed when dumping memory sequentially, as is typical in xmon or gdb, eg:

  c000000004733e40 0000000000000000 0000000000000000 |................|
  c000000004733e50 0000000000000000 0000000000000000 |................|
  c000000004733e60 0000000000000000 0000000000000000 |................|
  c000000004733e70 5347455200000000 0000000000000000 |SGER............|
  c000000004733e80 a700000000000000 708897f7ff7f0000 |........p.......|
  c000000004733e90 0073428fff7f0000 208997f7ff7f0000 |.sB..... .......|
  c000000004733ea0 0100000000000000 ffffffffffffffff |................|
  c000000004733eb0 0000000000000000 0000000000000000 |................|

To make it easier to recognise, reverse the value on little endian, so it always appears as "REGS", eg:

  c000000004733e70 5245475300000000 0000000000000000 |REGS............|

Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220927150419.1503001-2-mpe@ellerman.id.au
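A hedged sketch of what such an endian-dependent definition can look like (constants chosen so the stored bytes spell "REGS"; the real define may differ):

  /* Sketch: pick the constant per endianness so a sequential dump shows "REGS". */
  #ifdef __LITTLE_ENDIAN__
  #define STACK_FRAME_REGS_MARKER ASM_CONST(0x53474552)   /* bytes in memory: 52 45 47 53 */
  #else
  #define STACK_FRAME_REGS_MARKER ASM_CONST(0x52454753)   /* bytes in memory: 52 45 47 53 */
  #endif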
2022-09-28  powerpc: Make stack frame marker upper case  [Michael Ellerman, 1 file, -1/+1]
Now that the stack frame regs marker is only 32-bits it is not as obvious in memory dumps and easier to miss, eg:

  c000000004733e40 0000000000000000 0000000000000000 |................|
  c000000004733e50 0000000000000000 0000000000000000 |................|
  c000000004733e60 0000000000000000 0000000000000000 |................|
  c000000004733e70 7367657200000000 0000000000000000 |sger............|
  c000000004733e80 a700000000000000 708897f7ff7f0000 |........p.......|
  c000000004733e90 0073428fff7f0000 208997f7ff7f0000 |.sB..... .......|
  c000000004733ea0 0100000000000000 ffffffffffffffff |................|
  c000000004733eb0 0000000000000000 0000000000000000 |................|

So make it upper case to make it stand out a bit more:

  c000000004733e70 5347455200000000 0000000000000000 |SGER............|

Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220927150419.1503001-1-mpe@ellerman.id.au
2022-09-28  powerpc/kprobes: Fix null pointer reference in arch_prepare_kprobe()  [Li Huafei, 1 file, -1/+7]
I found a null pointer reference in arch_prepare_kprobe():

  # echo 'p cmdline_proc_show' > kprobe_events
  # echo 'p cmdline_proc_show+16' >> kprobe_events

  Kernel attempted to read user page (0) - exploit attempt? (uid: 0)
  BUG: Kernel NULL pointer dereference on read at 0x00000000
  Faulting instruction address: 0xc000000000050bfc
  Oops: Kernel access of bad area, sig: 11 [#1]
  LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=2048 NUMA PowerNV
  Modules linked in:
  CPU: 0 PID: 122 Comm: sh Not tainted 6.0.0-rc3-00007-gdcf8e5633e2e #10
  NIP: c000000000050bfc LR: c000000000050bec CTR: 0000000000005bdc
  REGS: c0000000348475b0 TRAP: 0300 Not tainted (6.0.0-rc3-00007-gdcf8e5633e2e)
  MSR: 9000000000009033 <SF,HV,EE,ME,IR,DR,RI,LE> CR: 88002444 XER: 20040006
  CFAR: c00000000022d100 DAR: 0000000000000000 DSISR: 40000000 IRQMASK: 0
  ...
  NIP arch_prepare_kprobe+0x10c/0x2d0
  LR arch_prepare_kprobe+0xfc/0x2d0
  Call Trace:
    0xc0000000012f77a0 (unreliable)
    register_kprobe+0x3c0/0x7a0
    __register_trace_kprobe+0x140/0x1a0
    __trace_kprobe_create+0x794/0x1040
    trace_probe_create+0xc4/0xe0
    create_or_delete_trace_kprobe+0x2c/0x80
    trace_parse_run_command+0xf0/0x210
    probes_write+0x20/0x40
    vfs_write+0xfc/0x450
    ksys_write+0x84/0x140
    system_call_exception+0x17c/0x3a0
    system_call_vectored_common+0xe8/0x278
  --- interrupt: 3000 at 0x7fffa5682de0
  NIP: 00007fffa5682de0 LR: 0000000000000000 CTR: 0000000000000000
  REGS: c000000034847e80 TRAP: 3000 Not tainted (6.0.0-rc3-00007-gdcf8e5633e2e)
  MSR: 900000000280f033 <SF,HV,VEC,VSX,EE,PR,FP,ME,IR,DR,RI,LE> CR: 44002408 XER: 00000000

The addresses being probed are special:

  cmdline_proc_show: probe based on ftrace
  cmdline_proc_show+16: probe for the next instruction at the ftrace location

An ftrace-based kprobe does not generate kprobe::ainsn::insn; it gets set to NULL. In arch_prepare_kprobe() the following check is made:

  ...
  prev = get_kprobe(p->addr - 1);
  preempt_enable_no_resched();
  if (prev && ppc_inst_prefixed(ppc_inst_read(prev->ainsn.insn))) {
  ...

If prev is ftrace-based, 'ppc_inst_read(prev->ainsn.insn)' dereferences a null pointer. At this point prev->addr will not be a prefixed instruction, so the check can be skipped. Check whether prev is an ftrace-based kprobe before reading 'prev->ainsn.insn' to fix this problem.

Fixes: b4657f7650ba ("powerpc/kprobes: Don't allow breakpoints on suffixes")
Signed-off-by: Li Huafei <lihuafei1@huawei.com>
[mpe: Trim oops]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220923093253.177298-1-lihuafei1@huawei.com
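Sketched out, the fix adds an ftrace-kprobe test to the check quoted above (hedged; not necessarily the literal hunk):

  /* Sketch: skip the prefixed-instruction check when the previous kprobe
   * is ftrace-based, since it has no copied instruction (ainsn.insn == NULL). */
  prev = get_kprobe(p->addr - 1);
  preempt_enable_no_resched();
  if (prev && !kprobe_ftrace(prev) &&
      ppc_inst_prefixed(ppc_inst_read(prev->ainsn.insn))) {
          printk("Cannot register a kprobe on the second word of prefixed instruction\n");
          ret = -EINVAL;
  }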
2022-09-28  ocxl: Remove the unneeded result variable  [ye xingchen, 1 file, -3/+1]
Return the value of opal_npu_spa_clear_cache() directly instead of storing it in another redundant variable. Reported-by: Zeal Robot <zealci@zte.com.cn> Signed-off-by: ye xingchen <ye.xingchen@zte.com.cn> Acked-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220906072006.337099-1-ye.xingchen@zte.com.cn
2022-09-28  powerpc/pseries/vas: Remove the unneeded result variable  [ye xingchen, 1 file, -5/+1]
Return the value of vas_register_coproc_api() directly instead of storing it in another redundant variable. Reported-by: Zeal Robot <zealci@zte.com.cn> Signed-off-by: ye xingchen <ye.xingchen@zte.com.cn> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220825072657.229168-1-ye.xingchen@zte.com.cn
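Both this and the ocxl cleanup above follow the same small pattern, roughly (argument names here are illustrative, not the real signature):

  /* Sketch: previously the result was stored in a local "rc" and then
   * returned; now the call's value is returned directly. */
  return vas_register_coproc_api(mod, cop_type, name, vops);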
2022-09-28  powerpc/smp: poll cpu_callin_map more aggressively in __cpu_up()  [Nathan Lynch, 1 file, -16/+22]
At boot time, it is not necessary to delay between polls of cpu_callin_map when waiting for a kicked CPU to come up. Remove the delay intervals, but preserve the overall deadline (five seconds).

At run time, the first poll result is usually negative and we incur a sleeping wait. If we spin on the callin word for a short time first, we can reduce __cpu_up() from dozens of milliseconds to under 1ms in the common case on a P9 LPAR:

  $ ppc64_cpu --smt=off
  $ bpftrace -e 'kprobe:__cpu_up { @start[tid] = nsecs; } kretprobe:__cpu_up /@start[tid]/ { @us = hist((nsecs - @start[tid]) / 1000); delete(@start[tid]); }' -c 'ppc64_cpu --smt=on'

Before:

  @us:
  [16K, 32K)  85 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
  [32K, 64K)  13 |@@@@@@@                                              |

After:

  @us:
  [128, 256)  95 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
  [256, 512)   3 |@                                                    |

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220926220250.157022-1-nathanl@linux.ibm.com
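A hedged sketch of the spin-then-sleep wait described above (budgets and structure are illustrative, not the exact patch):

  /* Sketch: spin briefly on the callin word, then fall back to sleeping,
   * keeping an overall deadline of about five seconds. */
  unsigned int spin = 10000;                 /* illustrative spin budget */
  unsigned long deadline = jiffies + 5 * HZ;

  while (spin-- && !cpu_callin_map[cpu])
          cpu_relax();

  while (!cpu_callin_map[cpu] && time_before(jiffies, deadline))
          msleep(1);

  if (!cpu_callin_map[cpu])
          pr_err("Processor %u is stuck.\n", cpu);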
2022-09-28  powerpc/rtas: block error injection when locked down  [Nathan Lynch, 1 file, -1/+24]
The error injection facility on pseries VMs allows corruption of arbitrary guest memory, potentially enabling a sufficiently privileged user to disable lockdown or perform other modifications of the running kernel via the rtas syscall. Block the PAPR error injection facility from being opened or called when locked down. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Acked-by: Paul Moore <paul@paul-moore.com> (LSM) Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926131643.146502-3-nathanl@linux.ibm.com
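Conceptually the guard looks like this sketch (the specific lockdown reason constant is an assumption here, not quoted from the patch):

  /* Sketch: refuse to open or call the error injection facility when the
   * kernel is locked down. */
  if (security_locked_down(LOCKDOWN_RTAS_ERROR_INJECTION))   /* assumed reason name */
          return -EPERM;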
2022-09-28  powerpc/pseries: block untrusted device tree changes when locked down  [Nathan Lynch, 1 file, -0/+5]
The /proc/powerpc/ofdt interface allows the root user to freely alter the in-kernel device tree, enabling arbitrary physical address writes via drivers that could bind to malicious device nodes, thus making it possible to disable lockdown. Historically this interface has been used on the pseries platform to facilitate the runtime addition and removal of processor, memory, and device resources (aka Dynamic Logical Partitioning or DLPAR). Years ago, the processor and memory use cases were migrated to designs that happen to be lockdown-friendly: device tree updates are communicated directly to the kernel from firmware without passing through untrusted user space. I/O device DLPAR via the "drmgr" command in powerpc-utils remains the sole legitimate user of /proc/powerpc/ofdt, but it is already broken in lockdown since it uses /dev/mem to allocate argument buffers for the rtas syscall. So only illegitimate uses of the interface should see a behavior change when running on a locked down kernel. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Acked-by: Paul Moore <paul@paul-moore.com> (LSM) Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926131643.146502-2-nathanl@linux.ibm.com
2022-09-28  powerpc/udbg: Remove extern function prototypes  [Pali Rohár, 1 file, -27/+27]
'extern' keyword is pointless and deprecated for function prototypes. Signed-off-by: Pali Rohár <pali@kernel.org> Suggested-by: Gabriel Paubert <paubert@iram.es> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220822231751.16973-1-pali@kernel.org
2022-09-28  powerpc/boot: Explicitly disable usage of SPE instructions  [Pali Rohár, 1 file, -0/+1]
The uImage boot wrapper should not use SPE instructions, just like the kernel itself. The boot wrapper already disables Altivec and VSX instructions, but not SPE. The -mno-spe and -mspe=no options are already set when compiling the kernel, but not yet when compiling the uImage wrapper. Fix it. Cc: stable@vger.kernel.org Signed-off-by: Pali Rohár <pali@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220827134454.17365-1-pali@kernel.org
2022-09-28  powerpc: Include e500v1_power_isa.dtsi for remaining e500v1 platforms  [Pali Rohár, 7 files, -0/+14]
There are still some board device tree files without Power ISA properties which have Freescale e500v1 cores, namely those based on the Freescale mpc8540, mpc8541, mpc8555 and mpc8560 processors. So include the newly introduced e500v1_power_isa.dtsi file in the device tree files for those processors. Signed-off-by: Pali Rohár <pali@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220902212103.22534-2-pali@kernel.org
2022-09-28  powerpc: Fix SPE Power ISA properties for e500v1 platforms  [Pali Rohár, 5 files, -4/+55]
Commit 2eb28006431c ("powerpc/e500v2: Add Power ISA properties to comply with ePAPR 1.1") introduced new include file e500v2_power_isa.dtsi and should have used it for all e500v2 platforms. But apparently it was used also for e500v1 platforms mpc8540, mpc8541, mpc8555 and mpc8560. e500v1 cores compared to e500v2 do not support double precision floating point SPE instructions. Hence power-isa-sp.fd should not be set on e500v1 platforms, which is in e500v2_power_isa.dtsi include file. Fix this issue by introducing a new e500v1_power_isa.dtsi include file and use it in all e500v1 device tree files. Fixes: 2eb28006431c ("powerpc/e500v2: Add Power ISA properties to comply with ePAPR 1.1") Signed-off-by: Pali Rohár <pali@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220902212103.22534-1-pali@kernel.org
2022-09-28  powerpc/perf: Fix branch_filter support for multiple filters  [Athira Rajeev, 1 file, -0/+17]
For the PERF_SAMPLE_BRANCH_STACK sample type, different branch_sample_type values, ie branch filters, are supported. The branch filters are requested via the event attribute "branch_sample_type". Multiple branch filters can be passed in the event attribute, eg:

  $ perf record -b -o- -B --branch-filter any,ind_call true

None of the Power PMUs support having multiple branch filters at the same time. Branch filters for branch stack sampling are set via MMCRA IFM bits [32:33]. But currently, when requesting multiple filter types, the "perf record" command does not report any error, eg:

  $ perf record -b -o- -B --branch-filter any,save_type true
  $ perf record -b -o- -B --branch-filter any,ind_call true

The "bhrb_filter_map" function in the PMU driver code does the validity check for supported branch filters, but this check is done for a single filter. Hence "perf record" will proceed here without reporting any error. Fix power_pmu_event_init() to return EOPNOTSUPP when multiple branch filters are requested in the event attr.

After the fix:

  $ perf record --branch-filter any,ind_call -- ls
  Error:
  cycles: PMU Hardware doesn't support sampling/overflow-interrupts. Try 'perf stat'

Reported-by: Disha Goel <disgoel@linux.vnet.ibm.com>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Tested-by: Disha Goel <disgoel@linux.vnet.ibm.com>
Reviewed-by: Madhavan Srinivasan <maddy@linux.ibm.com>
Reviewed-by: Kajol Jain <kjain@linux.ibm.com>
[mpe: Tweak comment and change log wording]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220921145255.20972-1-atrajeev@linux.vnet.ibm.com
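A hedged sketch of the kind of validation implied above (placement and helper details may differ from the actual patch):

  /* Sketch: MMCRA IFM can encode only one filter, so reject requests
   * that set more than one branch filter bit. */
  u64 filters = event->attr.branch_sample_type & ~PERF_SAMPLE_BRANCH_PLM_ALL;

  if (hweight64(filters) > 1)
          return -EOPNOTSUPP;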
2022-09-28  powerpc/64s/interrupt: halt early boot interrupts if paca is not set up  [Nicholas Piggin, 3 files, -2/+17]
Ensure r13 is zero from very early in boot until it gets set to the boot paca pointer. This allows early program and mce handlers to halt if there is no valid paca, rather than potentially run off into the weeds. This preserves register and memory contents for low level debugging tools. Nothing could be printed to console at this point in any case because even udbg is only set up after the boot paca is set, so this shouldn't be missed. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926055620.2676869-6-npiggin@gmail.com
2022-09-28  powerpc/64: don't set boot CPU's r13 to paca until the structure is set up  [Nicholas Piggin, 1 file, -10/+10]
The idea is to get to the point where if r13 is non-zero, then it should contain a reasonable paca. This can be used in early boot program check and machine check handlers to avoid running off into the weeds if they hit before r13 has a paca. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926055620.2676869-5-npiggin@gmail.com
2022-09-28  powerpc/64: avoid using r13 in relocate  [Nicholas Piggin, 1 file, -7/+7]
relocate() uses r13 in early boot before it is used for the paca. Use a different register for this so r13 is kept unchanged until it is set to the paca pointer. Avoid r14 as well while we're here: there's no reason not to use the volatile registers, which is a bit less surprising, and r14 could be used as another fixed register one day. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926055620.2676869-4-npiggin@gmail.com
2022-09-28  powerpc/64s: early boot machine check handler  [Nicholas Piggin, 4 files, -1/+34]
Use the early boot interrupt fixup in the machine check handler to allow the machine check handler to run before interrupt endian is set up. Branch to an early boot handler that just does a basic crash, which allows it to run before ppc_md is set up. MSR[ME] is enabled on the boot CPU earlier, and the machine check stack is temporarily set to the middle of the init task stack. This allows machine checks (e.g., due to invalid data access in real mode) to print something useful earlier in boot (as soon as udbg is set up, if CONFIG_PPC_EARLY_DEBUG=y). Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926055620.2676869-3-npiggin@gmail.com
2022-09-28  powerpc/64s/interrupt: move early boot ILE fixup into a macro  [Nicholas Piggin, 1 file, -45/+56]
In preparation for using this sequence in machine check interrupt, move it into a macro, with a small change to make it position independent. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926055620.2676869-2-npiggin@gmail.com
2022-09-28  powerpc/64e: provide an addressing macro for use with TOC in alternate register  [Nicholas Piggin, 3 files, -8/+19]
The interrupt entry code carefully saves a minimal number of registers, so in some places where the TOC is required it is loaded into a different register. Provide a macro that can supply an alternate TOC register. This continues to use got addressing because TOC-relative results in "got/toc optimization is not supported" messages by the linker. Having r2 be one of the saved registers and using that for TOC addressing may be the best way to avoid that and switch this to TOC addressing. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926034057.2360083-6-npiggin@gmail.com
2022-09-28  powerpc/64: provide a helper macro to load r2 with the kernel TOC  [Nicholas Piggin, 13 files, -27/+32]
A later change stops the kernel using r2 and loads it with a poison value. Provide a PACATOC loading abstraction which can hide this detail. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926034057.2360083-5-npiggin@gmail.com
2022-09-28  powerpc/64: switch asm helpers from GOT to TOC relative addressing  [Nicholas Piggin, 2 files, -2/+4]
There is no need to use GOT addressing within the kernel. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926034057.2360083-4-npiggin@gmail.com
2022-09-28  powerpc/64: asm use consistent global variable declaration and access  [Nicholas Piggin, 9 files, -39/+30]
Use helper macros to access global variables, and place them in .data sections rather than in .toc. Putting addresses in TOC is not required because the kernel is linked with a single TOC. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926034057.2360083-3-npiggin@gmail.com
2022-09-28  powerpc/64: use 32-bit immediate for STACK_FRAME_REGS_MARKER  [Nicholas Piggin, 6 files, -23/+10]
Using a 32-bit constant for this marker allows it to be loaded with two ALU instructions, as on 32-bit. This avoids a TOC entry and a TOC load that depends on the r2 value that has just been loaded from the PACA. This changes the value for 32-bit as well, so both have the same value in the low 4 bytes and 64-bit has 0 in the top bytes. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926034057.2360083-2-npiggin@gmail.com
2022-09-28  powerpc/64s: POWER10 CPU Kconfig build option  [Nicholas Piggin, 2 files, -1/+12]
This adds a basic POWER10_CPU option, which builds with -mcpu=power10. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220923033004.536127-1-npiggin@gmail.com
2022-09-28  powerpc/pseries: Move vas_migration_handler early during migration  [Haren Myneni, 1 file, -3/+12]
When the migration is initiated, the hypervisor changes VAS mappings as part of the pre-migration event. Then the OS gets the migration event, which closes all VAS windows before the migration starts. NX generates continuous faults until the windows are closed, and user space cannot tell that these NX faults are coming from the migration. So to reduce this time window, close VAS windows first in pseries_migrate_partition(). Signed-off-by: Haren Myneni <haren@linux.ibm.com> Reviewed-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d8efade91dda831c9ed4abb226dab627da594c5f.camel@linux.ibm.com
2022-09-28  powerpc/64/irq: tidy soft-masked irq replay and improve documentation  [Nicholas Piggin, 1 file, -32/+61]
irq replay is quite complicated because of softirq processing which itself enables and disables irqs. Several considerations need to be accounted for due to this, and they are not clearly documented. Refactor the irq replay code a bit to tidy and deduplicate some common functions. Add comments, debug checks. This has a minor functional change that irq tracing enable/disable is done after each interrupt replayed, rather than after a batch. It also re-sets state to IRQS_ALL_DISABLED after an interrupt, which doesn't matter much because interrupts are hard disabled at this point, but it is more consistent with how interrupt handlers are called. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926054305.2671436-8-npiggin@gmail.com
2022-09-28  powerpc/64/interrupt: avoid BUG/WARN recursion in interrupt entry  [Nicholas Piggin, 1 file, -13/+20]
BUG/WARN are handled with a program interrupt which can turn into an infinite recursion when there are bugs in interrupt handler entry (which can be irritated by bugs in other parts of the code). There is one feeble attempt to avoid this recursion, but it misses several cases. Make a tidier macro for this and switch most bugs in the interrupt entry wrapper over to use it. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926054305.2671436-7-npiggin@gmail.com
2022-09-28  powerpc/64s/interrupt: masked handler debug check for previous hard disable  [Nicholas Piggin, 1 file, -0/+10]
Prior changes eliminated cases of masked PACA_IRQ_MUST_HARD_MASK interrupts that re-fire due to MSR[EE] being enabled while they are pending. Add a debug check in the masked interrupt handler to catch if this occurs. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926054305.2671436-6-npiggin@gmail.com
2022-09-28  powerpc/64s: Fix irq state management in runlatch functions  [Nicholas Piggin, 1 file, -4/+2]
When irqs are soft-disabled, MSR[EE] is volatile and can change from 1 to 0 asynchronously (if a PACA_IRQ_MUST_HARD_MASK interrupt hits). So it can not be used to check hard IRQ enabled status, except to confirm it is disabled. ppc64_runlatch_on/off functions use MSR this way to decide whether to re-enable MSR[EE] after disabling it, which leads to MSR[EE] being enabled when it shouldn't be (when a PACA_IRQ_MUST_HARD_MASK had disabled it between reading the MSR and clearing EE). This has been tolerated in the kernel previously, and it doesn't seem to cause a problem, but it is unexpected and may trip warnings or cause other problems as we tighten up this state management. Fix this by only re-enabling if PACA_IRQ_HARD_DIS is clear. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926054305.2671436-5-npiggin@gmail.com
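A sketch of the fixed condition (illustrative; the real change is in the runlatch macros):

  /* Sketch: re-enable MSR[EE] only if it was not hard-disabled in the
   * meantime, i.e. PACA_IRQ_HARD_DIS is still clear. */
  if (!(local_paca->irq_happened & PACA_IRQ_HARD_DIS))
          __hard_irq_enable();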
2022-09-28  powerpc/64/interrupt: Fix return to masked context after hard-mask irq becomes pending  [Nicholas Piggin, 2 files, -13/+31]
If a synchronous interrupt (e.g., hash fault) is taken inside an irqs-disabled region which has MSR[EE]=1, and then an asynchronous interrupt that is PACA_IRQ_MUST_HARD_MASK (e.g., PMI) is taken inside the synchronous interrupt handler, then the synchronous interrupt will return with MSR[EE]=1 and the asynchronous interrupt fires again. If the asynchronous interrupt is a PMI and the original context does not have PMIs disabled (only Linux IRQs), the asynchronous interrupt will fire despite having the PMI marked soft pending. This can confuse the perf code and cause warnings.

This patch changes the interrupt return so that irqs-disabled MSR[EE]=1 contexts will be returned to with MSR[EE]=0 if a PACA_IRQ_MUST_HARD_MASK interrupt has become pending in the meantime.

The longer explanation for what happens:
 1. local_irq_disable()
 2. Hash fault interrupt fires, do_hash_fault handler runs
 3. interrupt_enter_prepare() sets IRQS_ALL_DISABLED
 4. interrupt_enter_prepare() sets MSR[EE]=1
 5. PMU interrupt fires, masked handler runs
 6. Masked handler marks PMI pending
 7. Masked handler returns with PACA_IRQ_HARD_DIS set, MSR[EE]=0
 8. do_hash_fault interrupt return handler runs
 9. interrupt_exit_kernel_prepare() clears PACA_IRQ_HARD_DIS
 10. interrupt returns with MSR[EE]=1
 11. PMU interrupt fires, perf handler runs

Fixes: 4423eb5ae32e ("powerpc/64/interrupt: make normal synchronous interrupts enable MSR[EE] if possible")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220926054305.2671436-4-npiggin@gmail.com
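In rough pseudo-C, the interrupt-return change amounts to (hedged sketch, not the literal code):

  /* Sketch: when returning to a soft-masked context, clear EE in the saved
   * MSR if a must-hard-mask interrupt became pending in the meantime. */
  if (irq_soft_mask_return() != IRQS_ENABLED &&
      (local_paca->irq_happened & PACA_IRQ_MUST_HARD_MASK))
          regs->msr &= ~MSR_EE;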
2022-09-28  powerpc/64: mark irqs hard disabled in boot paca  [Nicholas Piggin, 1 file, -1/+3]
This prevents interrupts in early boot (e.g., program check) from enabling MSR[EE], potentially causing endian mismatch or other crashes when reporting early boot traps. Fixes: 4423eb5ae32ec ("powerpc/64/interrupt: make normal synchronous interrupts enable MSR[EE] if possible") Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926054305.2671436-3-npiggin@gmail.com
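A minimal sketch of the boot-paca initialisation this describes (field names as in struct paca_struct; the exact placement is an assumption):

  /* Sketch: make the boot CPU start with interrupts marked both
   * soft-masked and hard-disabled. */
  boot_paca->irq_soft_mask = IRQS_DISABLED;
  boot_paca->irq_happened = PACA_IRQ_HARD_DIS;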
2022-09-28  powerpc/64/interrupt: Fix false warning in context tracking due to idle state  [Nicholas Piggin, 1 file, -1/+2]
Commit 171476775d32 ("context_tracking: Convert state to atomic_t") added a CONTEXT_IDLE state which can be encountered by interrupts from kernel mode in the idle thread, causing a false positive warning. Fixes: 171476775d32 ("context_tracking: Convert state to atomic_t") Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926054305.2671436-2-npiggin@gmail.com
2022-09-28  powerpc/64s: Enable KFENCE on book3s64  [Nicholas Miehlbradt, 6 files, -11/+30]
KFENCE support was added for ppc32 in commit 90cbac0e995d ("powerpc: Enable KFENCE for PPC32"). Enable KFENCE on ppc64 architecture with hash and radix MMUs. It uses the same mechanism as debug pagealloc to protect/unprotect pages. All KFENCE kunit tests pass on both MMUs. KFENCE memory is initially allocated using memblock but is later marked as SLAB allocated. This necessitates the change to __pud_free to ensure that the KFENCE pages are freed appropriately. Based on previous work by Christophe Leroy and Jordan Niethe. Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com> Reviewed-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220926075726.2846-4-nicholas@linux.ibm.com