path: root/tools/perf/scripts/python/export-to-postgresql.py
2025-05-22  arm64: el2_setup.h: Make __init_el2_fgt labels consistent, again  [Rob Herring (Arm), 1 file, -3/+7]
Commit 5b39db6037e7 ("arm64: el2_setup.h: Rename some labels to be more diff-friendly") reworked the labels in __init_el2_fgt to say what's skipped rather than what the target location is. The exception was "set_fgt_" which is where registers are written. In reviewing the BRBE additions, Will suggested "set_debug_fgt_" where HDFGxTR_EL2 are written. Doing that would partially revert commit 5b39db6037e7 undoing the goal of minimizing additions here, but it would follow the convention for labels where registers are written. So let's do both. Branches that skip something go to a "skip" label and places that set registers have a "set" label. This results in some double labels, but it makes things entirely consistent. While we're here, the SME skip label was incorrectly named, so fix it. Reported-by: Will Deacon <will@kernel.org> Cc: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Rob Herring (Arm) <robh@kernel.org> Link: https://lore.kernel.org/r/20250520-arm-brbe-v19-v22-2-c1ddde38e7f8@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2025-05-19  perf/arm-cmn: Add CMN S3 ACPI binding  [Robin Murphy, 1 file, -0/+1]
An ACPI binding for CMN S3 was not yet finalised when the driver support was originally written, but v1.2 of DEN0093 "ACPI for Arm Components" has at last been published; support ACPI systems using the proper HID. Cc: stable@vger.kernel.org Fixes: 0dc2f4963f7e ("perf/arm-cmn: Support CMN S3") Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/7dafe147f186423020af49d7037552ee59c60e97.1747652164.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-05-16  arm64/boot: Disallow BSS exports to startup code  [Ard Biesheuvel, 2 files, -28/+34]
BSS might be uninitialized when entering the startup code, so forbid the startup code from using any variables that live after __bss_start in the linker map. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Yeoreum Yun <yeoreum.yun@arm.com> Reviewed-by: Yeoreum Yun <yeoreum.yun@arm.com> Link: https://lore.kernel.org/r/20250508114328.2460610-8-ardb+git@google.com [will: Drop export of 'memstart_offset_seed', as this has been removed] Signed-off-by: Will Deacon <will@kernel.org>
2025-05-16  arm64/boot: Move global CPU override variables out of BSS  [Ard Biesheuvel, 1 file, -11/+11]
Accessing BSS will no longer be permitted from the startup code in arch/arm64/kernel/pi, as some of it executes before BSS is cleared. Clearing BSS earlier would involve managing cache coherency explicitly in software, which is a hassle we prefer to avoid. So move some variables that are assigned by the startup code out of BSS and into .data. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Yeoreum Yun <yeoreum.yun@arm.com> Reviewed-by: Yeoreum Yun <yeoreum.yun@arm.com> Link: https://lore.kernel.org/r/20250508114328.2460610-7-ardb+git@google.com Signed-off-by: Will Deacon <will@kernel.org>
2025-05-16  arm64/boot: Move init_pgdir[] and init_idmap_pgdir[] into __pi_ namespace  [Ard Biesheuvel, 5 files, -13/+8]
init_pgdir[] is only referenced from the startup code, but lives after BSS in the linker map. Before tightening the rules about accessing BSS from startup code, move init_pgdir[] into the __pi_ namespace, so it does not need to be exported explicitly. For symmetry, do the same with init_idmap_pgdir[], although it lives before BSS. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Yeoreum Yun <yeoreum.yun@arm.com> Reviewed-by: Yeoreum Yun <yeoreum.yun@arm.com> Link: https://lore.kernel.org/r/20250508114328.2460610-6-ardb+git@google.com Signed-off-by: Will Deacon <will@kernel.org>
2025-05-16  perf/arm-cmn: Initialise cmn->cpu earlier  [Robin Murphy, 1 file, -1/+1]
For all the complexity of handling affinity for CPU hotplug, what we've apparently managed to overlook is that arm_cmn_init_irqs() has in fact always been setting the *initial* affinity of all IRQs to CPU 0, not the CPU we subsequently choose for event scheduling. Oh dear. Cc: stable@vger.kernel.org Fixes: 0ba64770a2f2 ("perf: Add Arm CMN-600 PMU driver") Signed-off-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Ilkka Koskinen <ilkka@os.amperecomputing.com> Link: https://lore.kernel.org/r/b12fccba6b5b4d2674944f59e4daad91cd63420b.1747069914.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-05-16  arm64: Update comment regarding values in __boot_cpu_mode  [Ben Horgan, 1 file, -1/+2]
The values stored in __boot_cpu_mode were changed without updating the comment. Rectify that. Signed-off-by: Ben Horgan <ben.horgan@arm.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20250513124525.677736-1-ben.horgan@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-05-16  arm64: mm: Drop redundant check in pmd_trans_huge()  [Gavin Shan, 1 file, -2/+1]
pmd_val(pmd) is redundant because a positive pmd_present(pmd) guarantees a positive pmd_val(pmd), per the definitions below:

  #define pmd_val(x)        ((x).pmd)
  #define pmd_present(pmd)  pte_present(pmd_pte(pmd))
  #define pte_present(pte)  (pte_valid(pte) || pte_present_invalid(pte))
  #define pte_valid(pte)    (!!(pte_val(pte) & PTE_VALID))
  #define pte_present_invalid(pte) \
          ((pte_val(pte) & (PTE_VALID | PTE_PRESENT_INVALID)) == PTE_PRESENT_INVALID)

pte_present() can't be positive unless either the PTE_VALID or the PTE_PRESENT_INVALID flag is set, in which case pmd_val(pmd) must be positive too. So let's drop the redundant pmd_val(pmd) check; no functional change intended. Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Dev Jain <dev.jain@arm.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Link: https://lore.kernel.org/r/20250508085251.204282-1-gshan@redhat.com Signed-off-by: Will Deacon <will@kernel.org>
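For illustration, a before/after sketch of the check in question (the exact in-tree definition may differ slightly):

  /* Before: the first operand is already implied by pmd_present(). */
  #define pmd_trans_huge(pmd)   (pmd_val(pmd) && pmd_present(pmd) && \
                                 !(pmd_val(pmd) & PMD_TABLE_BIT))

  /* After: rely on pmd_present() alone to imply a non-zero pmd_val(). */
  #define pmd_trans_huge(pmd)   (pmd_present(pmd) && \
                                 !(pmd_val(pmd) & PMD_TABLE_BIT))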
2025-05-16  arm64/mm: Re-organise setting up FEAT_S1PIE registers PIRE0_EL1 and PIR_EL1  [Anshuman Khandual, 2 files, -17/+4]
mov_q cannot move the PIE_E[0|1] macros into a general purpose register as expected if those macro constants contain 128-bit layout elements, which are required for D128 page tables. The primary issue is that for D128, PIE_E[0|1] are defined in terms of 128-bit types with shifting and masking, which the assembler cannot accommodate. Instead, pre-calculate these PIRE0_EL1/PIR_EL1 constants as asm-offsets.h based PIE_E0_ASM/PIE_E1_ASM, which can then be used in arch/arm64/mm/proc.S. While here, also drop the PTE_MAYBE_NG/PTE_MAYBE_SHARED assembly overrides, which are no longer required, as the compiler toolchains are smart enough to compute both PIE_[E0|E1]_ASM constants in all scenarios. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Link: https://lore.kernel.org/r/20250429050511.1663235-1-anshuman.khandual@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-05-14  arm64/mm: Permit lazy_mmu_mode to be nested  [Ryan Roberts, 1 file, -2/+12]
lazy_mmu_mode is not supposed to permit nesting. But in practice this does happen with CONFIG_DEBUG_PAGEALLOC, where a page allocation inside a lazy_mmu_mode section (such as zap_pte_range()) will change permissions on the linear map with apply_to_page_range(), which re-enters lazy_mmu_mode (see the stack trace below). The warning that checked for nesting was therefore being triggered. So let's relax the rule by removing the warning and tolerating nesting in the arm64 implementation. The first (inner) call to arch_leave_lazy_mmu_mode() will flush and clear the flag such that the remainder of the work in the outer nest behaves as if outside of lazy mmu mode. This is safe and keeps tracking simple. Code review suggests powerpc deals with this issue in the same way.

------------[ cut here ]------------
WARNING: CPU: 6 PID: 1 at arch/arm64/include/asm/pgtable.h:89 __apply_to_page_range+0x85c/0x9f8
Modules linked in: ip_tables x_tables ipv6
CPU: 6 UID: 0 PID: 1 Comm: systemd Not tainted 6.15.0-rc5-00075-g676795fe9cf6 #1 PREEMPT
Hardware name: QEMU KVM Virtual Machine, BIOS 2024.08-4 10/25/2024
pstate: 40400005 (nZcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : __apply_to_page_range+0x85c/0x9f8
lr : __apply_to_page_range+0x2b4/0x9f8
sp : ffff80008009b3c0
x29: ffff80008009b460 x28: ffff0000c43a3000 x27: ffff0001ff62b108
x26: ffff0000c43a4000 x25: 0000000000000001 x24: 0010000000000001
x23: ffffbf24c9c209c0 x22: ffff80008009b4d0 x21: ffffbf24c74a3b20
x20: ffff0000c43a3000 x19: ffff0001ff609d18 x18: 0000000000000001
x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000003
x14: 0000000000000028 x13: ffffbf24c97c1000 x12: ffff0000c43a3fff
x11: ffffbf24cacc9a70 x10: ffff0000c43a3fff x9 : ffff0001fffff018
x8 : 0000000000000012 x7 : ffff0000c43a4000 x6 : ffff0000c43a4000
x5 : ffffbf24c9c209c0 x4 : ffff0000c43a3fff x3 : ffff0001ff609000
x2 : 0000000000000d18 x1 : ffff0000c03e8000 x0 : 0000000080000000
Call trace:
 __apply_to_page_range+0x85c/0x9f8 (P)
 apply_to_page_range+0x14/0x20
 set_memory_valid+0x5c/0xd8
 __kernel_map_pages+0x84/0xc0
 get_page_from_freelist+0x1110/0x1340
 __alloc_frozen_pages_noprof+0x114/0x1178
 alloc_pages_mpol+0xb8/0x1d0
 alloc_frozen_pages_noprof+0x48/0xc0
 alloc_pages_noprof+0x10/0x60
 get_free_pages_noprof+0x14/0x90
 __tlb_remove_folio_pages_size.isra.0+0xe4/0x140
 __tlb_remove_folio_pages+0x10/0x20
 unmap_page_range+0xa1c/0x14c0
 unmap_single_vma.isra.0+0x48/0x90
 unmap_vmas+0xe0/0x200
 vms_clear_ptes+0xf4/0x140
 vms_complete_munmap_vmas+0x7c/0x208
 do_vmi_align_munmap+0x180/0x1a8
 do_vmi_munmap+0xac/0x188
 __vm_munmap+0xe0/0x1e0
 __arm64_sys_munmap+0x20/0x38
 invoke_syscall+0x48/0x104
 el0_svc_common.constprop.0+0x40/0xe0
 do_el0_svc+0x1c/0x28
 el0_svc+0x4c/0x16c
 el0t_64_sync_handler+0x10c/0x140
 el0t_64_sync+0x198/0x19c
irq event stamp: 281312
hardirqs last enabled at (281311): [<ffffbf24c780fd04>] bad_range+0x164/0x1c0
hardirqs last disabled at (281312): [<ffffbf24c89c4550>] el1_dbg+0x24/0x98
softirqs last enabled at (281054): [<ffffbf24c752d99c>] handle_softirqs+0x4cc/0x518
softirqs last disabled at (281019): [<ffffbf24c7450694>] __do_softirq+0x14/0x20
---[ end trace 0000000000000000 ]---

Fixes: 5fdd05efa1cd ("arm64/mm: Batch barriers when updating kernel mappings") Reported-by: Catalin Marinas <catalin.marinas@arm.com> Closes: https://lore.kernel.org/linux-arm-kernel/aCH0TLRQslXHin5Q@arm.com/ Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20250512150333.5589-1-ryan.roberts@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-05-14  arm64/mm: Disable barrier batching in interrupt contexts  [Ryan Roberts, 1 file, -2/+14]
Commit 5fdd05efa1cd ("arm64/mm: Batch barriers when updating kernel mappings") enabled arm64 kernels to track "lazy mmu mode" using TIF flags in order to defer barriers until exiting the mode. At the same time, it added warnings to check that pte manipulations were never performed in interrupt context, because the tracking implementation could not deal with nesting. But it turns out that some debug features (e.g. KFENCE, DEBUG_PAGEALLOC) do manipulate ptes in softirq context, which triggered the warnings. So let's take the simplest and safest route and disable the batching optimization in interrupt contexts. This makes these users no worse off than prior to the optimization. Additionally, the known offenders are debug features that only manipulate a single PTE, so there is no performance gain anyway. There may be some obscure case of encrypted/decrypted DMA with dma_free_coherent() called from an interrupt context, but again, this is no worse off than prior to the commit. Some options for supporting nesting were considered, but there is a difficult-to-solve problem if any code manipulates ptes within interrupt context but *outside of* a lazy mmu region. If this case exists, the code would expect the updates to be immediate, but because the task context may have already been in lazy mmu mode, the updates would be deferred, which could cause incorrect behaviour. This problem is avoided by always ensuring updates within interrupt context are immediate. Fixes: 5fdd05efa1cd ("arm64/mm: Batch barriers when updating kernel mappings") Reported-by: syzbot+5c0d9392e042f41d45c5@syzkaller.appspotmail.com Closes: https://lore.kernel.org/linux-arm-kernel/681f2a09.050a0220.f2294.0006.GAE@google.com/ Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20250512102242.4156463-1-ryan.roberts@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-05-09  arm64/cpuinfo: only show one cpu's info in c_show()  [Ye Bin, 1 file, -54/+53]
Currently, when ARM64 displays CPU information, every call to c_show() assembles all CPUs' information. However, as the number of CPUs increases, assembling everything in a single call can exceed the available buffer space, forcing repeated buffer expansion and repeated calls to c_show(). To prevent this wasted work, assemble only one CPU's information each time c_show() is called. Signed-off-by: Ye Bin <yebin10@huawei.com> Link: https://lore.kernel.org/r/20250421062947.4072855-1-yebin@huaweicloud.com Signed-off-by: Will Deacon <will@kernel.org>
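For illustration, a hedged sketch of the per-CPU seq_file pattern this implies (names follow the arm64 cpuinfo code; the in-tree version differs in detail):

  static void *c_start(struct seq_file *m, loff_t *pos)
  {
          /* Hand a single CPU's data to each c_show() invocation. */
          return *pos < nr_cpu_ids ? &per_cpu(cpu_data, *pos) : NULL;
  }

  static void *c_next(struct seq_file *m, void *v, loff_t *pos)
  {
          ++*pos;
          return c_start(m, pos);
  }

  static int c_show(struct seq_file *m, void *v)
  {
          struct cpuinfo_arm64 *cpuinfo = v;

          /* Format one CPU only; seq_file calls us once per iterator step,
           * so the buffer never has to hold all CPUs at once. */
          seq_printf(m, "CPU implementer\t: 0x%02x\n",
                     MIDR_IMPLEMENTOR(cpuinfo->reg_midr));
          return 0;
  }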
2025-05-09  arm64/mm: Batch barriers when updating kernel mappings  [Ryan Roberts, 3 files, -20/+72]
Because the kernel can't tolerate page faults for kernel mappings, when setting a valid, kernel space pte (or pmd/pud/p4d/pgd), it emits a dsb(ishst) to ensure that the store to the pgtable is observed by the table walker immediately. Additionally it emits an isb() to ensure that any already speculatively determined invalid mapping fault gets canceled. We can improve the performance of vmalloc operations by batching these barriers until the end of a set of entry updates. arch_enter_lazy_mmu_mode() and arch_leave_lazy_mmu_mode() provide the required hooks. vmalloc improves by up to 30% as a result. Two new TIF_ flags are created; TIF_LAZY_MMU tells us if the task is in the lazy mode and can therefore defer any barriers until exit from the lazy mode. TIF_LAZY_MMU_PENDING is used to remember if any pte operation was performed while in the lazy mode that required barriers. Then when leaving lazy mode, if that flag is set, we emit the barriers. Since arch_enter_lazy_mmu_mode() and arch_leave_lazy_mmu_mode() are used for both user and kernel mappings, we need the second flag to avoid emitting barriers unnecessarily if only user mappings were updated. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Tested-by: Luiz Capitulino <luizcap@redhat.com> Link: https://lore.kernel.org/r/20250422081822.1836315-12-ryan.roberts@arm.com Signed-off-by: Will Deacon <will@kernel.org>
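For illustration, a simplified sketch of the two-flag scheme; queue_pte_barriers() and emit_pte_barriers() are assumed helper names wrapping the dsb/isb pair:

  static inline void arch_enter_lazy_mmu_mode(void)
  {
          set_thread_flag(TIF_LAZY_MMU);
  }

  static inline void queue_pte_barriers(void)
  {
          /* Kernel-mapping update: defer if in lazy mode, else emit now. */
          if (test_thread_flag(TIF_LAZY_MMU))
                  set_thread_flag(TIF_LAZY_MMU_PENDING);
          else
                  emit_pte_barriers();    /* dsb(ishst); isb(); */
  }

  static inline void arch_leave_lazy_mmu_mode(void)
  {
          /* Flush once on exit if any deferred update needs it. */
          if (test_and_clear_thread_flag(TIF_LAZY_MMU_PENDING))
                  emit_pte_barriers();
          clear_thread_flag(TIF_LAZY_MMU);
  }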
2025-05-09  mm/vmalloc: Enter lazy mmu mode while manipulating vmalloc ptes  [Ryan Roberts, 1 file, -0/+14]
Wrap vmalloc's pte table manipulation loops with arch_enter_lazy_mmu_mode() / arch_leave_lazy_mmu_mode(). This provides the arch code with the opportunity to optimize the pte manipulations. Note that vmap_pfn() already uses lazy mmu mode since it delegates to apply_to_page_range() which enters lazy mmu mode for both user and kernel mappings. These hooks will shortly be used by arm64 to improve vmalloc performance. Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Tested-by: Luiz Capitulino <luizcap@redhat.com> Link: https://lore.kernel.org/r/20250422081822.1836315-11-ryan.roberts@arm.com Signed-off-by: Will Deacon <will@kernel.org>
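For illustration, a simplified sketch of one of the affected loops (the in-tree vmap_pte_range() carries more bookkeeping and checks):

  static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
                            pgprot_t prot, struct page **pages, int *nr)
  {
          pte_t *pte = pte_alloc_kernel(pmd, addr);

          if (!pte)
                  return -ENOMEM;
          arch_enter_lazy_mmu_mode();     /* let the arch defer barriers */
          do {
                  set_pte_at(&init_mm, addr, pte,
                             mk_pte(pages[(*nr)++], prot));
          } while (pte++, addr += PAGE_SIZE, addr != end);
          arch_leave_lazy_mmu_mode();     /* one flush for the whole range */
          return 0;
  }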
2025-05-09  arm64/mm: Support huge pte-mapped pages in vmap  [Ryan Roberts, 2 files, -1/+49]
Implement the required arch functions to enable use of contpte in the vmap when VM_ALLOW_HUGE_VMAP is specified. This speeds up vmap operations due to only having to issue a DSB and ISB per contpte block instead of per pte. It also reduces TLB pressure, since only a single TLB entry is needed for the whole contpte block. Since vmap uses set_huge_pte_at() to set the contpte, that API is now used for kernel mappings for the first time. Although in the vmap case we never expect it to be called to modify a valid mapping, so clear_flush() should never be called, it's still wise to make it robust for the kernel case, so amend the tlb flush function if the mm is for kernel space. Tested with the vmalloc performance selftests:

  # kself/mm/test_vmalloc.sh \
        run_test_mask=1 \
        test_repeat_count=5 \
        nr_pages=256 \
        test_loop_count=100000 \
        use_huge=1

Duration reduced from 1274243 usec to 1083553 usec on Apple M2, for a 15% reduction in time taken. Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Tested-by: Luiz Capitulino <luizcap@redhat.com> Link: https://lore.kernel.org/r/20250422081822.1836315-10-ryan.roberts@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-05-09  mm/vmalloc: Gracefully unmap huge ptes  [Ryan Roberts, 2 files, -2/+24]
Commit f7ee1f13d606 ("mm/vmalloc: enable mapping of huge pages at pte level in vmap") added its support by reusing the set_huge_pte_at() API, which is otherwise only used for user mappings. But when unmapping those huge ptes, it continued to call ptep_get_and_clear(), which is a layering violation. To date, the only arch to implement this support is powerpc, and it all happens to work ok for it. But arm64's implementation of ptep_get_and_clear() cannot be safely used to clear a previous set_huge_pte_at(). So let's introduce a new arch opt-in function, arch_vmap_pte_range_unmap_size(), which can provide the size of a (present) pte. Then we can call huge_ptep_get_and_clear() to tear it down properly. Note that if vunmap_range() is called with a range that starts in the middle of a huge pte-mapped page, we must unmap the entire huge page so the behaviour is consistent with pmd and pud block mappings. In this case emit a warning just like we do for pmd/pud mappings. Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Tested-by: Luiz Capitulino <luizcap@redhat.com> Link: https://lore.kernel.org/r/20250422081822.1836315-9-ryan.roberts@arm.com Signed-off-by: Will Deacon <will@kernel.org>
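A sketch of the opt-in hook's generic default; an arch that opts in (e.g. arm64 contpte) returns the real mapping size instead, letting the unmap loop switch to huge_ptep_get_and_clear() for sizes above PAGE_SIZE:

  /* Default: a present pte covers a single page. */
  #ifndef arch_vmap_pte_range_unmap_size
  static inline unsigned long arch_vmap_pte_range_unmap_size(unsigned long addr,
                                                             pte_t *ptep)
  {
          return PAGE_SIZE;
  }
  #endif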
2025-05-09  mm/vmalloc: Warn on improper use of vunmap_range()  [Ryan Roberts, 1 file, -2/+6]
A call to vmalloc_huge() may cause memory blocks to be mapped at pmd or pud level. But it is possible to subsequently call vunmap_range() on a sub-range of the mapped memory, which partially overlaps a pmd or pud. In this case, vmalloc unmaps the entire pmd or pud, so that the non-overlapping portion is also unmapped. Clearly that would have a bad outcome, but it's not something that any callers do today as far as I can tell. So I guess it's just expected that callers will not do this. However, it would be useful to know if this happened in future; let's add a warning to cover the eventuality. Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Tested-by: Luiz Capitulino <luizcap@redhat.com> Link: https://lore.kernel.org/r/20250422081822.1836315-8-ryan.roberts@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-05-09  arm64/mm: Hoist barriers out of set_ptes_anysz() loop  [Ryan Roberts, 1 file, -5/+11]
set_ptes_anysz() previously called __set_pte() for each PTE in the range, which would conditionally issue a DSB and ISB to make the new PTE value immediately visible to the table walker if the new PTE was valid and for kernel space. We can do better than this; let's hoist those barriers out of the loop so that they are only issued once at the end of the loop, amortising the cost over the number of PTEs in the range. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Tested-by: Luiz Capitulino <luizcap@redhat.com> Link: https://lore.kernel.org/r/20250422081822.1836315-7-ryan.roberts@arm.com Signed-off-by: Will Deacon <will@kernel.org>
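A condensed sketch of the hoist; __set_pte_nosync(), pte_valid_not_user() and emit_pte_barriers() are helper names assumed from this series, not verbatim code:

  static void __set_ptes_anysz(struct mm_struct *mm, pte_t *ptep, pte_t pte,
                               unsigned int nr, unsigned long pgsize)
  {
          unsigned long stride = pgsize >> PAGE_SHIFT;
          bool kernel_valid = pte_valid_not_user(pte);

          for (; nr; nr--, ptep++, pte = pte_advance_pfn(pte, stride))
                  __set_pte_nosync(ptep, pte);    /* no dsb/isb per entry */

          /* One barrier pass for the whole batch instead of nr passes. */
          if (kernel_valid)
                  emit_pte_barriers();
  }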
2025-05-09  arm64: hugetlb: Use __set_ptes_anysz() and __ptep_get_and_clear_anysz()  [Ryan Roberts, 1 file, -43/+10]
Refactor the huge_pte helpers to use the new common __set_ptes_anysz() and __ptep_get_and_clear_anysz() APIs. This provides two benefits: first, when page_table_check=on, hugetlb is now properly/fully checked (previously only the first page of a hugetlb folio was checked); second, instead of having to call __set_ptes(nr=1) for each pte in a loop, the whole contiguous batch can now be set in one go, which enables some efficiencies and cleans up the code. One detail to note is that huge_ptep_clear_flush() was previously calling ptep_clear_flush() for a non-contiguous pte (i.e. a pud or pmd block mapping). This has a couple of disadvantages: first, ptep_clear_flush() calls ptep_get_and_clear(), which transparently handles contpte; given we only call it for non-contiguous ptes, it would be safe, but a waste of effort, and it's preferable to go straight to the layer below. More problematic, however, is that ptep_get_and_clear() is for PAGE_SIZE entries, so it calls page_table_check_pte_clear() and would not clear the whole hugetlb folio. So let's stop special-casing the non-cont case and just rely on get_clear_contig_flush() to do the right thing for non-cont entries. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Tested-by: Luiz Capitulino <luizcap@redhat.com> Link: https://lore.kernel.org/r/20250422081822.1836315-6-ryan.roberts@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-05-09  arm64/mm: Refactor __set_ptes() and __ptep_get_and_clear()  [Ryan Roberts, 1 file, -41/+73]
Refactor __set_ptes(), set_pmd_at() and set_pud_at() so that they are all a thin wrapper around a new common __set_ptes_anysz(), which takes a pgsize parameter. Additionally, refactor __ptep_get_and_clear() and pmdp_huge_get_and_clear() to use a new common __ptep_get_and_clear_anysz(), which also takes a pgsize parameter. These changes will permit the huge_pte API to efficiently batch-set pgtable entries and take advantage of the future barrier optimizations. Additionally, since the new *_anysz() helpers call the correct page_table_check_*_set() API based on pgsize, huge_ptes will be able to get proper coverage. Currently the huge_pte API always uses the pte API, which assumes an entry only covers a single page. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Tested-by: Luiz Capitulino <luizcap@redhat.com> Link: https://lore.kernel.org/r/20250422081822.1836315-5-ryan.roberts@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-05-09  mm/page_table_check: Batch-check pmds/puds just like ptes  [Ryan Roberts, 2 files, -26/+38]
Convert page_table_check_p[mu]d_set(...) to page_table_check_p[mu]ds_set(..., nr) to allow checking a contiguous set of pmds/puds in a single batch. We retain page_table_check_p[mu]d_set(...) as macros that call the new batch functions with nr=1 for compatibility. arm64 is about to reorganise its pte/pmd/pud helpers to reuse more code and to allow the implementation for huge_pte to more efficiently set ptes/pmds/puds in batches. We need these batch-helpers to make the refactoring possible. Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Tested-by: Luiz Capitulino <luizcap@redhat.com> Link: https://lore.kernel.org/r/20250422081822.1836315-4-ryan.roberts@arm.com Signed-off-by: Will Deacon <will@kernel.org>
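The compatibility shims described above, roughly (signatures abbreviated):

  /* Batch variants take an explicit count... */
  void page_table_check_pmds_set(struct mm_struct *mm, pmd_t *pmdp, pmd_t pmd,
                                 unsigned int nr);
  void page_table_check_puds_set(struct mm_struct *mm, pud_t *pudp, pud_t pud,
                                 unsigned int nr);

  /* ...while the old single-entry names become nr=1 wrappers. */
  #define page_table_check_pmd_set(mm, pmdp, pmd) \
          page_table_check_pmds_set(mm, pmdp, pmd, 1)
  #define page_table_check_pud_set(mm, pudp, pud) \
          page_table_check_puds_set(mm, pudp, pud, 1)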
2025-05-09  arm64: hugetlb: Refine tlb maintenance scope  [Ryan Roberts, 2 files, -13/+25]
When operating on contiguous blocks of ptes (or pmds) for some hugetlb sizes, we must honour break-before-make requirements and clear down the block to invalid state in the pgtable, then invalidate the relevant tlb entries before making the pgtable entries valid again. However, the tlb maintenance is currently always done assuming the worst case stride (PAGE_SIZE), last_level (false) and tlb_level (TLBI_TTL_UNKNOWN). We can do much better with hinting: in reality, we know the stride from the huge_pte pgsize, we are always operating only on the last level, and we always know the tlb_level, again based on pgsize. So let's start providing these hints. Additionally, avoid tlb maintenance in set_huge_pte_at(). Break-before-make is only required if we are transitioning the contiguous pte block from valid -> valid. So let's elide the clear-and-flush ("break") if the pte range was previously invalid. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Tested-by: Luiz Capitulino <luizcap@redhat.com> Link: https://lore.kernel.org/r/20250422081822.1836315-3-ryan.roberts@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-05-09  arm64: hugetlb: Cleanup huge_pte size discovery mechanisms  [Ryan Roberts, 1 file, -5/+15]
Not all huge_pte helper APIs explicitly provide the size of the huge_pte, so the helpers have to rely on various methods to determine it. Some of these methods are dubious. Let's clean up the code to use the preferred methods and retire the dubious ones. The options, in order of preference:

- If size is provided as a parameter, use it together with num_contig_ptes(). This is explicit and works for both present and non-present ptes.
- If vma is provided as a parameter, retrieve the size via huge_page_size(hstate_vma(vma)) and use it together with num_contig_ptes(). This is explicit and works for both present and non-present ptes.
- If the pte is present and contiguous, use find_num_contig() to walk the pgtable to find the level, and infer the number of ptes from the level. Only works for *present* ptes.
- If the pte is present and not contiguous, infer that only 1 pte needs to be operated on. This is ok if you don't care about the absolute size and just want to know the number of ptes.
- NEVER rely on resolving the PFN of a present pte to a folio and reading the folio's size. This is fragile at best, because there is nothing to stop the core-mm from allocating a folio twice as big as the huge_pte and then mapping it across 2 consecutive huge_ptes, or just partially mapping it.

Where we require that the pte is present, add warnings if it is not. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Tested-by: Luiz Capitulino <luizcap@redhat.com> Link: https://lore.kernel.org/r/20250422081822.1836315-2-ryan.roberts@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-05-09  perf/amlogic: Replace smp_processor_id() with raw_smp_processor_id() in meson_ddr_pmu_create()  [Anand Moon, 1 file, -1/+1]
The Amlogic DDR PMU driver's meson_ddr_pmu_create() function incorrectly uses smp_processor_id(), which assumes preemption is disabled. This leads to kernel warnings during module loading, because meson_ddr_pmu_create() can be called in a preemptible context. The following kernel warning and stack trace result:

[ 31.745138] [ T2289] BUG: using smp_processor_id() in preemptible [00000000] code: (udev-worker)/2289
[ 31.745154] [ T2289] caller is debug_smp_processor_id+0x28/0x38
[ 31.745172] [ T2289] CPU: 4 UID: 0 PID: 2289 Comm: (udev-worker) Tainted: GW 6.14.0-0-MANJARO-ARM #1 59519addcbca6ba8de735e151fd7b9e97aac7ff0
[ 31.745181] [ T2289] Tainted: [W]=WARN
[ 31.745183] [ T2289] Hardware name: Hardkernel ODROID-N2Plus (DT)
[ 31.745188] [ T2289] Call trace:
[ 31.745191] [ T2289]  show_stack+0x28/0x40 (C)
[ 31.745199] [ T2289]  dump_stack_lvl+0x4c/0x198
[ 31.745205] [ T2289]  dump_stack+0x20/0x50
[ 31.745209] [ T2289]  check_preemption_disabled+0xec/0xf0
[ 31.745213] [ T2289]  debug_smp_processor_id+0x28/0x38
[ 31.745216] [ T2289]  meson_ddr_pmu_create+0x200/0x560 [meson_ddr_pmu_g12 8095101c49676ad138d9961e3eddaee10acca7bd]
[ 31.745237] [ T2289]  g12_ddr_pmu_probe+0x20/0x38 [meson_ddr_pmu_g12 8095101c49676ad138d9961e3eddaee10acca7bd]
[ 31.745246] [ T2289]  platform_probe+0x98/0xe0
[ 31.745254] [ T2289]  really_probe+0x144/0x3f8
[ 31.745258] [ T2289]  __driver_probe_device+0xb8/0x180
[ 31.745261] [ T2289]  driver_probe_device+0x54/0x268
[ 31.745264] [ T2289]  __driver_attach+0x11c/0x288
[ 31.745267] [ T2289]  bus_for_each_dev+0xfc/0x160
[ 31.745274] [ T2289]  driver_attach+0x34/0x50
[ 31.745277] [ T2289]  bus_add_driver+0x160/0x2b0
[ 31.745281] [ T2289]  driver_register+0x78/0x120
[ 31.745285] [ T2289]  __platform_driver_register+0x30/0x48
[ 31.745288] [ T2289]  init_module+0x30/0xfe0 [meson_ddr_pmu_g12 8095101c49676ad138d9961e3eddaee10acca7bd]
[ 31.745298] [ T2289]  do_one_initcall+0x11c/0x438
[ 31.745303] [ T2289]  do_init_module+0x68/0x228
[ 31.745311] [ T2289]  load_module+0x118c/0x13a8
[ 31.745315] [ T2289]  __arm64_sys_finit_module+0x274/0x390
[ 31.745320] [ T2289]  invoke_syscall+0x74/0x108
[ 31.745326] [ T2289]  el0_svc_common+0x90/0xf8
[ 31.745330] [ T2289]  do_el0_svc+0x2c/0x48
[ 31.745333] [ T2289]  el0_svc+0x60/0x150
[ 31.745337] [ T2289]  el0t_64_sync_handler+0x80/0x118
[ 31.745341] [ T2289]  el0t_64_sync+0x1b8/0x1c0

This change replaces smp_processor_id() with raw_smp_processor_id() to ensure safe CPU ID retrieval in preemptible contexts. Cc: Jiucheng Xu <jiucheng.xu@amlogic.com> Fixes: 2016e2113d35 ("perf/amlogic: Add support for Amlogic meson G12 SoC DDR PMU driver") Signed-off-by: Anand Moon <linux.amoon@gmail.com> Link: https://lore.kernel.org/r/20250407063206.5211-1-linux.amoon@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
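The fix itself is a one-liner; in a minimal context it looks roughly like this (the struct layout is abbreviated, not the driver's real one):

  #include <linux/platform_device.h>
  #include <linux/perf_event.h>
  #include <linux/smp.h>

  struct ddr_pmu {
          struct pmu pmu;
          int cpu;        /* CPU used for event scheduling (illustrative) */
  };

  static int meson_ddr_pmu_create(struct platform_device *pdev)
  {
          struct ddr_pmu *pmu = devm_kzalloc(&pdev->dev, sizeof(*pmu),
                                             GFP_KERNEL);

          if (!pmu)
                  return -ENOMEM;
          /* raw_ variant: no preemption-disabled check. The value is only
           * an initial affinity hint, rebound later via hotplug callbacks. */
          pmu->cpu = raw_smp_processor_id();
          return 0;
  }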
2025-05-09  perf/arm-cmn: Fix REQ2/SNP2 mixup  [Robin Murphy, 1 file, -4/+4]
Somehow the encodings for REQ2/SNP2 channels in XP events got mixed up... Unmix them. CC: stable@vger.kernel.org Fixes: 23760a014417 ("perf/arm-cmn: Add CMN-700 support") Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/087023e9737ac93d7ec7a841da904758c254cb01.1746717400.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-05-08  firmware: SDEI: Allow sdei initialization without ACPI_APEI_GHES  [Huang Yiwei, 5 files, -7/+12]
SDEI is usually initialized with the ACPI table, but on platforms where ACPI is not used, the SDEI feature can still be used to handle specific firmware calls or for other customized purposes. Therefore, it is not necessary for ARM_SDE_INTERFACE to depend on ACPI_APEI_GHES. In commit dc4e8c07e9e2 ("ACPI: APEI: explicit init of HEST and GHES in acpi_init()"), to make APEI ready earlier, sdei_init was moved into acpi_ghes_init() instead of being a standalone initcall, adding an ACPI_APEI_GHES dependency to ARM_SDE_INTERFACE. This restricts the flexibility and usability of SDEI. This patch corrects the dependency in Kconfig and splits sdei_init() into two separate functions: sdei_init() and acpi_sdei_init(). sdei_init() will be called by arch_initcall and will only initialize the platform driver, while acpi_sdei_init() will initialize the device from acpi_ghes_init() when ACPI is ready. This allows the initialization of SDEI without ACPI_APEI_GHES enabled. Fixes: dc4e8c07e9e2 ("ACPI: APEI: explicit init of HEST and GHES in apci_init()") Cc: Shuai Xue <xueshuai@linux.alibaba.com> Signed-off-by: Huang Yiwei <quic_hyiwei@quicinc.com> Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Link: https://lore.kernel.org/r/20250507045757.2658795-1-quic_hyiwei@quicinc.com Signed-off-by: Will Deacon <will@kernel.org>
2025-05-06  arm64: cpufeature: Move arm64_use_ng_mappings to the .data section to prevent wrong idmap generation  [Yeoreum Yun, 1 file, -1/+8]
The PTE_MAYBE_NG macro sets the nG page table bit according to the value of "arm64_use_ng_mappings". This variable is currently placed in the .bss section. create_init_idmap() is called before the .bss section initialisation, which is done in early_map_kernel(). Therefore, data/test_prot in create_init_idmap() could be set incorrectly via the PAGE_KERNEL -> PROT_DEFAULT -> PTE_MAYBE_NG macros.

  # llvm-objdump-21 --syms vmlinux-gcc | grep arm64_use_ng_mappings
  ffff800082f242a8 g O .bss 0000000000000001 arm64_use_ng_mappings

The create_init_idmap() function disassembly, compiled with llvm-21:

  // create_init_idmap()
  ffff80008255c058: d10103ff  sub sp, sp, #0x40
  ffff80008255c05c: a9017bfd  stp x29, x30, [sp, #0x10]
  ffff80008255c060: a90257f6  stp x22, x21, [sp, #0x20]
  ffff80008255c064: a9034ff4  stp x20, x19, [sp, #0x30]
  ffff80008255c068: 910043fd  add x29, sp, #0x10
  ffff80008255c06c: 90003fc8  adrp x8, 0xffff800082d54000
  ffff80008255c070: d280e06a  mov x10, #0x703 // =1795
  ffff80008255c074: 91400409  add x9, x0, #0x1, lsl #12 // =0x1000
  ffff80008255c078: 394a4108  ldrb w8, [x8, #0x290] ------------- (1)
  ffff80008255c07c: f2e00d0a  movk x10, #0x68, lsl #48
  ffff80008255c080: f90007e9  str x9, [sp, #0x8]
  ffff80008255c084: aa0103f3  mov x19, x1
  ffff80008255c088: aa0003f4  mov x20, x0
  ffff80008255c08c: 14000000  b 0xffff80008255c08c <__pi_create_init_idmap+0x34>
  ffff80008255c090: aa082d56  orr x22, x10, x8, lsl #11 -------- (2)

Note: (1) loads the arm64_use_ng_mappings value into w8, and (2) sets the text or data prot, using w8 to set the PTE_NG bit. If the .bss section isn't initialized, x8 could contain a garbage value and generate an incorrect mapping. Annotate arm64_use_ng_mappings as __read_mostly so that it is placed in the .data section. Fixes: 84b04d3e6bdb ("arm64: kernel: Create initial ID map from C code") Cc: stable@vger.kernel.org # 6.9.x Tested-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com> Link: https://lore.kernel.org/r/20250502180412.3774883-1-yeoreum.yun@arm.com [catalin.marinas@arm.com: use __read_mostly instead of __ro_after_init] [catalin.marinas@arm.com: slight tweaking of the code comment] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
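The resulting declaration, as described (a sketch):

  /* Lives in .data thanks to __read_mostly, so create_init_idmap() reads
   * a defined value even before early_map_kernel() zeroes the BSS. */
  bool arm64_use_ng_mappings __read_mostly = false;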
2025-05-01  arm64: errata: Add missing sentinels to Spectre-BHB MIDR arrays  [Will Deacon, 1 file, -0/+2]
Commit a5951389e58d ("arm64: errata: Add newer ARM cores to the spectre_bhb_loop_affected() lists") added some additional CPUs to the Spectre-BHB workaround, including some new arrays for designs that require new 'k' values for the workaround to be effective. Unfortunately, the new arrays omitted the sentinel entry and so is_midr_in_range_list() will walk off the end when it doesn't find a match. With UBSAN enabled, this leads to a crash during boot when is_midr_in_range_list() is inlined (which was more common prior to c8c2647e69be ("arm64: Make  _midr_in_range_list() an exported function")):

| Internal error: aarch64 BRK: 00000000f2000001 [#1] PREEMPT SMP
| pstate: 804000c5 (Nzcv daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
| pc : spectre_bhb_loop_affected+0x28/0x30
| lr : is_spectre_bhb_affected+0x170/0x190
| [...]
| Call trace:
|  spectre_bhb_loop_affected+0x28/0x30
|  update_cpu_capabilities+0xc0/0x184
|  init_cpu_features+0x188/0x1a4
|  cpuinfo_store_boot_cpu+0x4c/0x60
|  smp_prepare_boot_cpu+0x38/0x54
|  start_kernel+0x8c/0x478
|  __primary_switched+0xc8/0xd4
| Code: 6b09011f 54000061 52801080 d65f03c0 (d4200020)
| ---[ end trace 0000000000000000 ]---
| Kernel panic - not syncing: aarch64 BRK: Fatal exception

Add the missing sentinel entries. Cc: Lee Jones <lee@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Doug Anderson <dianders@chromium.org> Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Cc: <stable@vger.kernel.org> Reported-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Fixes: a5951389e58d ("arm64: errata: Add newer ARM cores to the spectre_bhb_loop_affected() lists") Signed-off-by: Will Deacon <will@kernel.org> Reviewed-by: Lee Jones <lee@kernel.org> Reviewed-by: Douglas Anderson <dianders@chromium.org> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lore.kernel.org/r/20250501104747.28431-1-will@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
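The shape of the fix, for illustration (the listed cores are examples, not the exact affected lists):

  static const struct midr_range spectre_bhb_k132_list[] = {
          MIDR_ALL_VERSIONS(MIDR_CORTEX_X3),
          MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V2),
          {},     /* sentinel: is_midr_in_range_list() stops walking here */
  };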
2025-04-29  arm64: pageattr: Explicitly bail out when changing permissions for vmalloc_huge mappings  [Dev Jain, 1 file, -3/+3]
arm64 uses apply_to_page_range to change permissions for kernel vmalloc mappings, which does not support changing permissions for block mappings. This function will change permissions until it encounters a block mapping, and will bail out with a warning. Since there are no reports of this triggering, it implies that there are currently no cases of code doing a vmalloc_huge() followed by partial permission change. But this is a footgun waiting to go off, so let's detect it early and avoid the possibility of leaving permissions in an intermediate state. So, explicitly disallow changing permissions for VM_ALLOW_HUGE_VMAP mappings. Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Signed-off-by: Dev Jain <dev.jain@arm.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20250403052844.61818-1-dev.jain@arm.com Signed-off-by: Will Deacon <will@kernel.org>
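A simplified sketch of the early bail-out; change_memory_common() is the arm64 pageattr entry point, but the body here is assumed, not verbatim:

  static int change_memory_common(unsigned long addr, int numpages,
                                  pgprot_t set_mask, pgprot_t clear_mask)
  {
          struct vm_struct *area = find_vm_area((void *)addr);

          if (!area)
                  return -EINVAL;

          /* apply_to_page_range() cannot split block mappings, so refuse
           * huge-vmap regions up front instead of warning part-way through
           * and leaving permissions in a mixed state. */
          if (area->flags & VM_ALLOW_HUGE_VMAP)
                  return -EINVAL;

          /* ...apply set_mask/clear_mask with apply_to_page_range()... */
          return 0;
  }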
2025-04-29  arm64: Extend pr_crit message on invalid FDT  [Bartosz Szczepanek, 1 file, -5/+5]
Log the size in addition to the physical and virtual addresses. This has the potential to be helpful when a DTB exceeds the 2 MB limit. Initialize size to 0 so that a sane value is printed if fixmap_remap_fdt() fails without setting the size. Signed-off-by: Bartosz Szczepanek <bsz@amazon.de> Link: https://lore.kernel.org/r/20250423084851.26449-1-bsz@amazon.de Signed-off-by: Will Deacon <will@kernel.org>
2025-04-29  arm64: Kconfig: remove unnecessary selection of CRC32  [Eric Biggers, 1 file, -1/+0]
The selection of CRC32 by ARM64 was added by commit 7481cddf29ed ("arm64/lib: add accelerated crc32 routines") as a workaround for the fact that, at the time, the CRC32 library functions used weak symbols to allow architecture-specific overrides. That only worked when CRC32 was built-in, and thus ARM64 was made to just force CRC32 to built-in. Now that the CRC32 library no longer uses weak symbols, that no longer applies. And the selection does not fulfill a user dependency either; those all have their own selections from other options. Therefore, the selection of CRC32 by ARM64 is no longer necessary. Remove it. Note that this does not necessarily result in CRC32 no longer being set to y, as it still tends to get selected by something else anyway. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250414174018.6359-1-ebiggers@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2025-04-29  arm64: Add missing includes for mem_encrypt  [Jason Gunthorpe, 2 files, -0/+4]
Doing:

  #include <linux/mem_encrypt.h>

causes a bunch of compiler failures due to missing implicit includes that don't happen on x86:

  ../arch/arm64/include/asm/rsi_cmds.h:117:2: error: call to undeclared library function 'memcpy' with type 'void *(void *, const void *, unsigned long)'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
    117 | memcpy(&regs.a1, challenge, size);

  ../arch/arm64/include/asm/mem_encrypt.h:19:49: warning: declaration of 'struct device' will not be visible outside of this function [-Wvisibility]
    19 | static inline bool force_dma_unencrypted(struct device *dev)

  ../arch/arm64/include/asm/rsi_cmds.h:44:38: error: call to undeclared function 'virt_to_phys'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
    44 | arm_smccc_smc(SMC_RSI_REALM_CONFIG, virt_to_phys(cfg),

Add the missing includes to the arch/arm headers to avoid this. Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/0-v1-47aadfbd64cd+25795-arm_memenc_h_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
2025-04-29  arm64: Support ARM64_VA_BITS=52 when setting ARCH_MMAP_RND_BITS_MAX  [Kornel Dulęba, 1 file, -3/+3]
When 52-bit virtual addressing was introduced, the Kconfig logic that selects ARCH_MMAP_RND_BITS_MAX was never updated to account for it. Because of that, the rnd max bits knob is set to the default value of 18 when ARM64_VA_BITS=52. Fix this by setting ARCH_MMAP_RND_BITS_MAX to the same value that would be used if 48-bit addressing was used. Higher values can't be used here because 52-bit addressing is used only if the caller provides a hint to mmap, with a fallback to 48-bit. The knob in question is an upper bound for what the user can set in /proc/sys/vm/mmap_rnd_bits, which in turn is used to determine how many random bits can be inserted into the base address used for mmap allocations. Since 48-bit allocations are legal with ARM64_VA_BITS=52, we need to make sure that the base address is small enough to facilitate this. Fixes: b6d00d47e81a ("arm64: mm: Introduce 52-bit Kernel VAs") Signed-off-by: Kornel Dulęba <korneld@google.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20250417114754.3238273-1-korneld@google.com Signed-off-by: Will Deacon <will@kernel.org>
2025-04-29  arm64: Expose AIDR_EL1 via sysfs  [Oliver Upton, 4 files, -6/+12]
The KVM PV ABI recently added a feature that allows the VM to discover the set of physical CPU implementations, identified by a tuple of {MIDR_EL1, REVIDR_EL1, AIDR_EL1}. Unlike other KVM PV features, the expectation is that the VMM implements the hypercall instead of KVM as it has the authoritative view of where the VM gets scheduled. To do this the VMM needs to know the values of these registers on any CPU in the system. While MIDR_EL1 and REVIDR_EL1 are already exposed, AIDR_EL1 is not. Provide it in sysfs along with the other identification registers. Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20250403231626.3181116-1-oliver.upton@linux.dev Signed-off-by: Will Deacon <will@kernel.org>
2025-04-29  arm64: enable PREEMPT_LAZY  [Mark Rutland, 3 files, -8/+11]
For an architecture to enable CONFIG_ARCH_HAS_RESCHED_LAZY, two things are required:

1) Adding a TIF_NEED_RESCHED_LAZY flag definition
2) Checking for TIF_NEED_RESCHED_LAZY in the appropriate locations

2) is handled in a generic manner by CONFIG_GENERIC_ENTRY, which isn't (yet) implemented for arm64. However, outside of core scheduler code, TIF_NEED_RESCHED_LAZY only needs to be checked on a kernel exit, meaning:

o return/entry to userspace
o return/entry to guest

The return/entry to a guest is all handled by xfer_to_guest_mode_handle_work() which already does the right thing, so it can be left as-is. arm64 doesn't use common entry's exit_to_user_mode_prepare(), so update its return to user path to check for TIF_NEED_RESCHED_LAZY and call into schedule() accordingly. Link: https://lore.kernel.org/linux-rt-users/20241216190451.1c61977c@mordecai.tesarici.cz/ Link: https://lore.kernel.org/all/xhsmh4j0fl0p3.mognet@vschneid-thinkpadt14sgen2i.remote.csb/ Signed-off-by: Mark Rutland <mark.rutland@arm.com> [testdrive, _TIF_WORK_MASK fixlet and changelog.] Signed-off-by: Mike Galbraith <efault@gmx.de> [Another round of testing; changelog faff] Signed-off-by: Valentin Schneider <vschneid@redhat.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Link: https://lore.kernel.org/r/20250305104925.189198-2-vschneid@redhat.com Signed-off-by: Will Deacon <will@kernel.org>
2025-04-29  arm64/cpufeature: Add missing id_aa64mmfr4 feature reg update  [Yicong Yang, 1 file, -0/+2]
Add missing id_aa64mmfr4 feature register check and update in update_cpu_features(). Update the taint status as well. Signed-off-by: Yicong Yang <yangyicong@hisilicon.com> Link: https://lore.kernel.org/r/20250329034409.21354-2-yangyicong@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2025-04-29  arm64/mm: Remove randomization of the linear map  [Ard Biesheuvel, 4 files, -27/+0]
Since commit 97d6786e0669 ("arm64: mm: account for hotplug memory when randomizing the linear region") the decision whether or not to randomize the placement of the system's DRAM inside the linear map is based on the capabilities of the CPU rather than how much memory is present at boot time. This change was necessary because memory hotplug may result in DRAM appearing in places that are not covered by the linear region at all (and therefore unusable) if the decision is solely based on the memory map at boot. In the Android GKI kernel, which requires support for memory hotplug, and is built with a reduced virtual address space of only 39 bits wide, randomization of the linear map never happens in practice as a result. And even on arm64 kernels built with support for 48 bit virtual addressing, the wider PArange of recent CPUs means that linear map randomization is slowly becoming a feature that only works on systems that will soon be obsolete. So let's just remove this feature. We can always bring it back in an improved form if there is a real need for it. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Kees Cook <kees@kernel.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250318134949.3194334-2-ardb+git@google.com Signed-off-by: Will Deacon <will@kernel.org>
2025-04-29  arm64/fpsimd: Avoid unnecessary per-CPU buffers for EFI runtime calls  [Ard Biesheuvel, 2 files, -31/+27]
The EFI specification has some elaborate rules about which runtime services may be called while another runtime service call is already in progress. In Linux, however, for simplicity, all EFI runtime service invocations are serialized via the efi_runtime_lock semaphore. This implies that calls to the helper pair arch_efi_call_virt_setup() and arch_efi_call_virt_teardown() are serialized too, and are guaranteed not to nest. Furthermore, the arm64 arch code has its own spinlock to serialize use of the EFI runtime stack, of which only a single instance exists. This all means that the FP/SIMD and SVE state preserve/restore logic in __efi_fpsimd_begin() and __efi_fpsimd_end() are also serialized, and only a single instance of the associated per-CPU variables can ever be in use at the same time. There is therefore no need at all for per-CPU variables here, and they can all be replaced with singleton instances. This saves a non-trivial amount of memory on systems with many CPUs. To be more robust against potential future changes in the core EFI code that may invalidate the reasoning above, move the invocations of __efi_fpsimd_begin() and __efi_fpsimd_end() into the critical section covered by the efi_rt_lock spinlock. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20250318132421.3155799-2-ardb+git@google.com Signed-off-by: Will Deacon <will@kernel.org>
2025-04-17  perf: Do not enable by default during compile testing  [Krzysztof Kozlowski, 1 file, -1/+1]
Enabling compile testing should not cause automatic enabling of all drivers; it should only make them available to be compiled. Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Link: https://lore.kernel.org/r/20250417074650.81561-1-krzysztof.kozlowski@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2025-04-17  perf: arm-ni: Fix missing platform_set_drvdata()  [Hongbo Yao, 1 file, -0/+1]
Add the missing platform_set_drvdata() in arm_ni_probe(); otherwise, calling platform_get_drvdata() in the remove path returns NULL. Fixes: 4d5a7680f2b4 ("perf: Add driver for Arm NI-700 interconnect PMU") Signed-off-by: Hongbo Yao <andy.xu@hj-micro.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20250401054248.3985814-1-andy.xu@hj-micro.com Signed-off-by: Will Deacon <will@kernel.org>
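A minimal probe/remove pair showing the missing call (struct contents abbreviated for illustration):

  #include <linux/platform_device.h>

  struct arm_ni {
          struct device *dev;
  };

  static int arm_ni_probe(struct platform_device *pdev)
  {
          struct arm_ni *ni = devm_kzalloc(&pdev->dev, sizeof(*ni), GFP_KERNEL);

          if (!ni)
                  return -ENOMEM;
          ni->dev = &pdev->dev;
          platform_set_drvdata(pdev, ni);         /* the missing call */
          return 0;
  }

  static void arm_ni_remove(struct platform_device *pdev)
  {
          struct arm_ni *ni = platform_get_drvdata(pdev); /* NULL before the fix */

          dev_dbg(ni->dev, "removing\n");         /* would oops on NULL */
  }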
2025-04-17  perf: arm-ni: Unregister PMUs on probe failure  [Hongbo Yao, 1 file, -18/+21]
When a resource allocation fails in one clock domain of an NI device, we need to properly roll back all previously registered perf PMUs in the other clock domains of the same device. Otherwise, it can lead to kernel panics:

Calling arm_ni_init+0x0/0xff8 [arm_ni] @ 2374
arm-ni ARMHCB70:00: Failed to request PMU region 0x1f3c13000
arm-ni ARMHCB70:00: probe with driver arm-ni failed with error -16
list_add corruption: next->prev should be prev (fffffd01e9698a18), but was 0000000000000000. (next=ffff10001a0decc8).
pstate: 6340009 (nZCv daif +PAN -UAO +TCO +DIT -SSBS BTYPE=--)
pc : list_add_valid_or_report+0x7c/0xb8
lr : list_add_valid_or_report+0x7c/0xb8
Call trace:
 __list_add_valid_or_report+0x7c/0xb8
 perf_pmu_register+0x22c/0x3a0
 arm_ni_probe+0x554/0x70c [arm_ni]
 platform_probe+0x70/0xe8
 really_probe+0xc6/0x4d8
 driver_probe_device+0x48/0x170
 __driver_attach+0x8e/0x1c0
 bus_for_each_dev+0x64/0xf0
 driver_add+0x138/0x260
 bus_add_driver+0x68/0x138
 __platform_driver_register+0x2c/0x40
 arm_ni_init+0x14/0x2a [arm_ni]
 do_init_module+0x36/0x298
---[ end trace 0000000000000000 ]---
Kernel panic - not syncing: Oops - BUG: Fatal exception
SMP: stopping secondary CPUs

Fixes: 4d5a7680f2b4 ("perf: Add driver for Arm NI-700 interconnect PMU") Signed-off-by: Hongbo Yao <andy.xu@hj-micro.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20250403070918.4153839-1-andy.xu@hj-micro.com Signed-off-by: Will Deacon <will@kernel.org>
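The rollback pattern this calls for, as a sketch with illustrative struct and field names:

  #include <linux/perf_event.h>

  struct arm_ni_cd {
          struct pmu pmu;                 /* one registered PMU per clock domain */
  };

  struct arm_ni {
          struct arm_ni_cd cds[8];        /* illustrative bound */
  };

  /* Called from the probe error path: unwind every clock-domain PMU
   * registered before the failure, so a partially probed device leaves
   * no stale entries on the global pmus list. */
  static void arm_ni_unregister_cds(struct arm_ni *ni, int nr_registered)
  {
          while (nr_registered--)
                  perf_pmu_unregister(&ni->cds[nr_registered].pmu);
  }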
2025-04-17  perf/arm-cmn: Remove CMN-600 DTC domain special case  [Robin Murphy, 1 file, -7/+0]
The special case for trying to infer the DTC domain for DTC-adjacent nodes on CMN-600 is fragile and buggy - currently resulting in subtly messed up DTC counter allocation - and the theoretical benefit it offers to a tiny minority of use-cases arguably doesn't outweigh the inconsistency it offers to others anyway. Just get rid of it. Fixes: ab33c66fd8f1 ("perf/arm-cmn: Enable per-DTC counter allocation") Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/67985e39f53b56385d79a4f1264cf7f9cacedb58.1742308248.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2025-04-06  Linux 6.15-rc1  [Linus Torvalds, 1 file, -2/+2]
2025-04-06  tools/include: make uapi/linux/types.h usable from assembly  [Thomas Weißschuh, 1 file, -0/+3]
The "real" linux/types.h UAPI header gracefully degrades to a NOOP when included from assembly code. Mirror this behaviour in the tools/ variant. Test for __ASSEMBLER__ over __ASSEMBLY__ as the former is provided by the toolchain automatically. Reported-by: Mark Brown <broonie@kernel.org> Closes: https://lore.kernel.org/lkml/af553c62-ca2f-4956-932c-dd6e3a126f58@sirena.org.uk/ Fixes: c9fbaa879508 ("selftests: vDSO: parse_vdso: Use UAPI headers instead of libc headers") Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Link: https://patch.msgid.link/20250321-uapi-consistency-v1-1-439070118dc0@linutronix.de Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Mark Brown <broonie@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2025-04-06  tools/power turbostat: v2025.05.06  [Len Brown, 1 file, -1/+1]
Support up to 8192 processors.
Add cpuidle governor debug telemetry, disabled by default.
Update default output to exclude cpuidle invocation counts.
Bug fixes.
Signed-off-by: Len Brown <len.brown@intel.com>
2025-04-06  tools/power turbostat: disable "cpuidle" invocation counters, by default  [Len Brown, 2 files, -13/+33]
Create "pct_idle" counter group, the sofware notion of residency so it can now be singled out, independent of other counter groups. Create "cpuidle" group, the cpuidle invocation counts. Disable "cpuidle", by default. Create "swidle" = "cpuidle" + "pct_idle". Undocument "sysfs", the old name for "swidle", but keep it working for backwards compatibilty. Create "hwidle", all the HW idle counters Modify "idle", enabled by default "idle" = "hwidle" + "pct_idle" (and now excludes "cpuidle") Signed-off-by: Len Brown <len.brown@intel.com>
2025-04-06  Disable SLUB_TINY for build testing  [Linus Torvalds, 2 files, -2/+2]
... and don't error out so hard on missing module descriptions. Before commit 6c6c1fc09de3 ("modpost: require a MODULE_DESCRIPTION()") we used to warn about missing module descriptions, but only when building with extra warnings (ie 'W=1'). After that commit the warning became an unconditional hard error. And it turns out not all modules have been converted despite the claims to the contrary. As reported by Damian Tometzki, the slub KUnit test didn't have a module description, and apparently nobody ever really noticed. The reason nobody noticed seems to be that the slub KUnit tests get disabled by SLUB_TINY, which also ends up disabling a lot of other code, both in tests and in slub itself. And so anybody doing full build tests didn't actually see this failure. So let's disable SLUB_TINY for build-only tests, since it clearly ends up limiting build coverage. Also turn the missing module descriptions error back into a warning, but let's keep it around for non-'W=1' builds. Reported-by: Damian Tometzki <damian@riscv-rocks.de> Link: https://lore.kernel.org/all/01070196099fd059-e8463438-7b1b-4ec8-816d-173874be9966-000000@eu-central-1.amazonses.com/ Cc: Masahiro Yamada <masahiroy@kernel.org> Cc: Jeff Johnson <jeff.johnson@oss.qualcomm.com> Fixes: 6c6c1fc09de3 ("modpost: require a MODULE_DESCRIPTION()") Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2025-04-06  tools/power turbostat: re-factor sysfs code  [Len Brown, 1 file, -10/+21]
Probe cpuidle "sysfs" residency and counts separately, since soon we will make one disabled on, and the other disabled off. Clarify that some BIC (build-in-counters) are actually "groups". since we're about to re-name some of those groups. no functional change. Signed-off-by: Len Brown <len.brown@intel.com>
2025-04-06  tools/power turbostat: Restore GFX sysfs fflush() call  [Zhang Rui, 1 file, -0/+1]
Do fflush() to discard buffered data before each read of the graphics sysfs knobs. Fixes: ba99a4fc8c24 ("tools/power turbostat: Remove unnecessary fflush() call") Signed-off-by: Zhang Rui <rui.zhang@intel.com> Signed-off-by: Len Brown <len.brown@intel.com>
2025-04-06  tools/power turbostat: Document GNR UncMHz domain convention  [Len Brown, 1 file, -0/+1]
Document that on Intel Granite Rapids Systems, Uncore domains 0-2 are CPU domains, and uncore domains 3-4 are IO domains. Signed-off-by: Len Brown <len.brown@intel.com>