|
Simplify the JIT code by replacing the custom expolines with the ones
defined in the kernel text.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Link: https://lore.kernel.org/r/20250519223646.66382-4-iii@linux.ibm.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
After the V!=R rework (commit c98d2ecae08f ("s390/mm: Uncouple physical
vs virtual address spaces")), kernel and BPF programs are allocated
within a 4G region, making it possible to use relative addressing to
directly use kernel functions from BPF code.
Add two new macros for calling kernel functions from BPF code:
EMIT6_PCREL_RILB_PTR() and EMIT6_PCREL_RILC_PTR(). Factor out parts
of the existing macros that are helpful for implementing the new ones.
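For illustration, a hedged sketch (not the actual macro; the helper name
and buffer handling are simplified assumptions) of how such a direct call
can be emitted as a brasl whose RIL immediate is the halfword-scaled,
PC-relative distance to a kernel function in the same 4G region:
  /* Hypothetical helper, not the real EMIT6_PCREL_RILB_PTR(). */
  static void emit_brasl_r14(u8 *insn, const void *target)
  {
          /* brasl %r14,target: 0xc0e5 followed by a signed 32-bit
           * immediate counting halfwords relative to the instruction. */
          s64 rel = ((const u8 *)target - insn) / 2;

          insn[0] = 0xc0;
          insn[1] = 0xe5;
          insn[2] = rel >> 24;
          insn[3] = rel >> 16;
          insn[4] = rel >> 8;
          insn[5] = rel;
  }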
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Link: https://lore.kernel.org/r/20250519223646.66382-3-iii@linux.ibm.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
It would be convenient to use the following pattern in the BPF JIT:
  if (nospec_uses_trampoline())
          emit_call(__s390_indirect_jump_r1);
Unfortunately with CONFIG_EXPOLINE=n the compiler complains about the
missing prototype of __s390_indirect_jump_r1(). One could wrap the
whole "if" statement in an #ifdef, but this clutters the code.
Instead, declare expoline thunk prototypes even when compiling without
expolines. When using the above code structure and compiling without
expolines, references to them are optimized away, and there are no
linker errors.
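A minimal sketch of the result (emit_call() is the helper from the pattern
above; the header holding the prototype is an assumption):
  /* Prototype visible even with CONFIG_EXPOLINE=n (assumed to live in a
   * header such as asm/nospec-insn.h). */
  void __s390_indirect_jump_r1(void);

  static void emit_indirect_call(void)
  {
          /* With CONFIG_EXPOLINE=n nospec_uses_trampoline() is false, the
           * branch is optimized away and no reference to the thunk
           * remains for the linker. */
          if (nospec_uses_trampoline())
                  emit_call(__s390_indirect_jump_r1);
  }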
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Link: https://lore.kernel.org/r/20250519223646.66382-2-iii@linux.ibm.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Prior changes ensured that once zpci_release_device() has been called
and has removed the zdev from the zpci_list, this instance can no longer
be found via the zpci_list, even while re-adding reserved devices is
allowed.
This only accounts for the overall lifetime and for zpci_list addition
and removal; it does not yet prevent the concurrent add of a new
instance for the same underlying device. Such a concurrent add would
subsequently cause issues such as attempted re-use of the same IOMMU
sysfs directory and is generally undesired.
Introduce a new zpci_add_remove_lock mutex to serialize adding a new
device with its removal. Together, this ensures that if a struct
zpci_dev is not found in the zpci_list, it has either already been
removed and torn down, or its removal and teardown is in progress with
the zpci_add_remove_lock held.
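A hedged sketch of the serialization (helper names are hypothetical, not
the real add path); the final removal path takes the same mutex:
  static DEFINE_MUTEX(zpci_add_remove_lock);

  static int example_zpci_add_device(struct zpci_dev *zdev)
  {
          int rc;

          mutex_lock(&zpci_add_remove_lock);
          rc = example_do_add(zdev);      /* hypothetical helper */
          mutex_unlock(&zpci_add_remove_lock);
          return rc;
  }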
Cc: stable@vger.kernel.org
Fixes: a46044a92add ("s390/pci: fix zpci_zdev_put() on reserve")
Reviewed-by: Gerd Bayer <gbayer@linux.ibm.com>
Tested-by: Gerd Bayer <gbayer@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
|
|
The architecture assumes that PCI functions can be removed synchronously
as PCI events are processed. This however clashes with the reference
counting of struct pci_dev which allows device drivers to hold on to a
struct pci_dev reference even as the underlying device is removed. To
bridge this gap commit 2a671f77ee49 ("s390/pci: fix use after free of
zpci_dev") keeps the struct zpci_dev in ZPCI_FN_STATE_RESERVED state
until common code releases the struct pci_dev. Only when all references
have been dropped can the struct zpci_dev be removed and freed.
Later commit a46044a92add ("s390/pci: fix zpci_zdev_put() on reserve")
moved the deletion of the struct zpci_dev from the zpci_list in
zpci_release_device() to the point where the device is reserved. This
was done to prevent handling events for a device that is already being
removed, e.g. when the platform generates both PCI event codes 0x304
and 0x308. In retrospect, deletion from the zpci_list in the release
function without holding the zpci_list_lock was also racy.
A side effect of this handling is that if the underlying device
re-appears while the struct zpci_dev is in the ZPCI_FN_STATE_RESERVED
state, the new and old instances of the struct zpci_dev and/or struct
pci_dev may clash, for example when trying to create the IOMMU sysfs
files for the new instance. In this case, re-adding the new instance is
aborted. The old instance is removed, and the device will remain absent
until the platform issues another event.
Fix this by allowing the struct zpci_dev to be brought back up right
until it is finally removed. To this end also keep the struct zpci_dev
in the zpci_list until it is finally released when all references have
been dropped.
Deletion from the zpci_list from within the release function is made
safe by using kref_put_lock() with the zpci_list_lock. This ensures that
the releasing code holds the last reference.
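A hedged sketch of this pattern (field and helper names are assumptions,
not the exact code):
  /* kref_put_lock() takes zpci_list_lock only when dropping the last
   * reference; the release callback runs with the lock held, deletes the
   * zdev from zpci_list and drops the lock itself. */
  static void example_release(struct kref *kref)
  {
          struct zpci_dev *zdev = container_of(kref, struct zpci_dev, kref);

          list_del(&zdev->entry);         /* list field name assumed */
          spin_unlock(&zpci_list_lock);
          /* ... tear down and free zdev ... */
  }

  static void example_zpci_zdev_put(struct zpci_dev *zdev)
  {
          kref_put_lock(&zdev->kref, example_release, &zpci_list_lock);
  }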
Cc: stable@vger.kernel.org
Fixes: a46044a92add ("s390/pci: fix zpci_zdev_put() on reserve")
Reviewed-by: Gerd Bayer <gbayer@linux.ibm.com>
Tested-by: Gerd Bayer <gbayer@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
|
|
Remove the zpci_bus_remove_device() and zpci_disable_device() calls
from zpci_release_device(). These calls were already done when the
device transitioned into the ZPCI_FN_STATE_STANDBY state, which is
guaranteed to happen before it enters the ZPCI_FN_STATE_RESERVED state.
When zpci_release_device() is called, the device is known to be in the
ZPCI_FN_STATE_RESERVED state, which is also checked by a WARN_ON().
Cc: stable@vger.kernel.org
Fixes: a46044a92add ("s390/pci: fix zpci_zdev_put() on reserve")
Reviewed-by: Gerd Bayer <gbayer@linux.ibm.com>
Reviewed-by: Julian Ruess <julianr@linux.ibm.com>
Tested-by: Gerd Bayer <gbayer@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
|
|
Add MIDR encoding for HiSilicon HIP12 which is used on HiSilicon
HIP12 SoCs.
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Link: https://lore.kernel.org/r/20250425033845.57671-2-yangyicong@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Commit 5b39db6037e7 ("arm64: el2_setup.h: Rename some labels to be more
diff-friendly") reworked the labels in __init_el2_fgt to say what's
skipped rather than what the target location is. The exception was
"set_fgt_" which is where registers are written. In reviewing the BRBE
additions, Will suggested "set_debug_fgt_" where HDFGxTR_EL2 are
written. Doing that would partially revert commit 5b39db6037e7 undoing
the goal of minimizing additions here, but it would follow the
convention for labels where registers are written.
So let's do both. Branches that skip something go to a "skip" label and
places that set registers have a "set" label. This results in some
double labels, but it makes things entirely consistent.
While we're here, the SME skip label was incorrectly named, so fix it.
Reported-by: Will Deacon <will@kernel.org>
Cc: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Rob Herring (Arm) <robh@kernel.org>
Link: https://lore.kernel.org/r/20250520-arm-brbe-v19-v22-2-c1ddde38e7f8@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
|
|
mvebu fixes for 6.15 (part 1)
Fix uDPU board LEDs by correcting pinctrl state
* tag 'mvebu-fixes-6.15-1' of https://git.kernel.org/pub/scm/linux/kernel/git/gclement/mvebu:
arm64: dts: marvell: uDPU: define pinctrl state for alarm LEDs
Link: https://lore.kernel.org/r/87wmagpr0a.fsf@BLaptop.bootlin.com
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
|
|
Allwinner fixes for 6.15
Only one fix:
Switch back to I2C for PMICs on Allwinner H6 devices. Apparently using
Allwinner's proprietary bus ended up causing issues when the PMIC was
sharing the bus with other devices.
* tag 'sunxi-fixes-for-6.15' of https://git.kernel.org/pub/scm/linux/kernel/git/sunxi/linux:
Revert "arm64: dts: allwinner: h6: Use RSB for AXP805 PMIC connection"
Link: https://lore.kernel.org/r/aCaeLgjZllV7bauX@wens.tw
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
|
|
Since commit 17ec3e71ba79 ("crypto: lib/Kconfig - Hide arch options from
user"), the CRYPTO_CHACHA20_NEON option is no longer selected by default
due to changes in how crypto library options are exposed and selected.
To restore the previous behavior and ensure CRYPTO_CHACHA20_NEON is
enabled, explicitly select CONFIG_CRYPTO_CHACHA20 in the defconfig. This
pulls in CRYPTO_LIB_CHACHA_INTERNAL and allows CRYPTO_CHACHA20_NEON to be
selected automatically as before.
Fixes: 17ec3e71ba79 ("crypto: lib/Kconfig - Hide arch options from user")
Signed-off-by: Fabio Estevam <festevam@denx.de>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
|
|
The throttle support has been added in the generic code. Remove
the driver-specific throttle support.
Besides the throttle, perf_event_overflow may return true because of
event_limit. It already does an inatomic event disable. The pmu->stop
is not required either.
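A hedged before/after sketch of a driver overflow handler (not any
specific driver's code):
  static void example_pmu_overflow(struct perf_event *event,
                                   struct perf_sample_data *data,
                                   struct pt_regs *regs)
  {
          /* Before: if (perf_event_overflow(event, data, regs))
           *                 example_pmu_stop(event, 0);
           * After: the generic code throttles the event, and for
           * event_limit perf_event_overflow() already disables it
           * inatomic, so no pmu->stop() is needed here. */
          perf_event_overflow(event, data, regs);
  }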
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250520181644.2673067-17-kan.liang@linux.intel.com
|
|
The throttle support has been added in the generic code. Remove
the driver-specific throttle support.
Besides the throttle, perf_event_overflow may return true because of
event_limit. It already does an inatomic event disable. The pmu->stop
is not required either.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>
Link: https://lore.kernel.org/r/20250520181644.2673067-16-kan.liang@linux.intel.com
|
|
The throttle support has been added in the generic code. Remove
the driver-specific throttle support.
Besides the throttle, perf_event_overflow may return true because of
event_limit. It already does an inatomic event disable. The pmu->stop
is not required either.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250520181644.2673067-15-kan.liang@linux.intel.com
|
|
The throttle support has been added in the generic code. Remove
the driver-specific throttle support.
Besides the throttle, perf_event_overflow may return true because of
event_limit. It already does an inatomic event disable. The pmu->stop
is not required either.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250520181644.2673067-14-kan.liang@linux.intel.com
|
|
The throttle support has been added in the generic code. Remove
the driver-specific throttle support.
Besides the throttle, perf_event_overflow may return true because of
event_limit. It already does an inatomic event disable. The pmu->stop
is not required either.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Guo Ren <guoren@kernel.org>
Link: https://lore.kernel.org/r/20250520181644.2673067-13-kan.liang@linux.intel.com
|
|
The throttle support has been added in the generic code. Remove
the driver-specific throttle support.
Besides the throttle, perf_event_overflow may return true because of
event_limit. It already does an inatomic event disable. The pmu->stop
is not required either.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Vineet Gupta <vgupta@kernel.org>
Link: https://lore.kernel.org/r/20250520181644.2673067-12-kan.liang@linux.intel.com
|
|
The throttle support has been added in the generic code. Remove
the driver-specific throttle support.
Besides the throttle, perf_event_overflow may return true because of
event_limit. It already does an inatomic event disable. The pmu->stop
is not required either.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250520181644.2673067-11-kan.liang@linux.intel.com
|
|
The throttle support has been added in the generic code. Remove
the driver-specific throttle support.
Besides the throttle, perf_event_overflow may return true because of
event_limit. It already does an inatomic event disable. The pmu->stop
is not required either.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Thomas Richter <tmricht@linux.ibm.com>
Link: https://lore.kernel.org/r/20250520181644.2673067-8-kan.liang@linux.intel.com
|
|
The throttle support has been added in the generic code. Remove
the driver-specific throttle support.
Besides the throttle, perf_event_overflow may return true because of
event_limit. It already does an inatomic event disable. The pmu->stop
is not required either.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250520181644.2673067-7-kan.liang@linux.intel.com
|
|
The throttle support has been added in the generic code. Remove
the driver-specific throttle support.
Besides the throttle, perf_event_overflow may return true because of
event_limit. It already does an inatomic event disable. The pmu->stop
is not required either.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250520181644.2673067-6-kan.liang@linux.intel.com
|
|
The throttle support has been added in the generic code. Remove
the driver-specific throttle support.
Besides the throttle, perf_event_overflow may return true because of
event_limit. It already does an inatomic event disable. The pmu->stop
is not required either.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Ravi Bangoria <ravi.bangoria@amd.com>
Link: https://lore.kernel.org/r/20250520181644.2673067-5-kan.liang@linux.intel.com
|
|
The throttle support has been added in the generic code. Remove
the driver-specific throttle support.
Besides the throttle, perf_event_overflow may return true because of
event_limit. It already does an inatomic event disable. The pmu->stop
is not required either.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250520181644.2673067-4-kan.liang@linux.intel.com
|
|
CI runs show that the protected key conversion retry loop
runs into a timeout if a master key change was initiated on
the addressed crypto resource shortly before the conversion
request.
This patch extends the retry logic to run a total of 5 attempts
with increasing delays (200, 400, 800 and 1600 ms) in case of
a busy card.
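A hedged sketch of the retry policy (the conversion helper is a
hypothetical stand-in, not the real pkey code):
  static int example_convert_key(void)
  {
          unsigned int delay_ms = 200;
          int i, rc;

          for (i = 0; i < 5; i++) {
                  rc = example_try_convert();     /* hypothetical stand-in */
                  if (rc != -EBUSY || i == 4)
                          break;
                  msleep(delay_ms);               /* 200, 400, 800, 1600 ms */
                  delay_ms *= 2;
          }
          return rc;
  }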
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Ingo Franzki <ifranzki@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
|
|
Use the "a" constraint for the shift operand of the __pcilg_mio_inuser()
inline assembly. The currently used "d" constraint allows the compiler to
use any general purpose register for the shift operand, including register
zero.
If register zero is used this may result in incorrect code generation:
8f6: a7 0a ff f8 ahi %r0,-8
8fa: eb 32 00 00 00 0c srlg %r3,%r2,0 <----
If register zero is selected to contain the shift value, the srlg
instruction ignores the contents of the register and always shifts zero
bits. Therefore use the "a" constraint, which does not permit selecting
register zero.
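A hedged illustration of the constraint (not the actual
__pcilg_mio_inuser() code):
  /* The srlg shift amount is taken from a base register; the "a"
   * constraint prevents the compiler from picking %r0, which srlg would
   * treat as "no base register", i.e. a shift of zero bits. */
  static inline unsigned long example_shift_right(unsigned long val,
                                                  unsigned long shift)
  {
          unsigned long out;

          asm volatile(
                  "       srlg    %[out],%[val],0(%[shift])\n"
                  : [out] "=d" (out)
                  : [val] "d" (val), [shift] "a" (shift));
          return out;
  }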
Fixes: f058599e22d5 ("s390/pci: Fix s390_mmio_read/write with MIO")
Cc: stable@vger.kernel.org
Reported-by: Niklas Schnelle <schnelle@linux.ibm.com>
Reviewed-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
|
|
Commit
480e803dacf8 ("x86/bugs: Restructure spectre_v2 mitigation")
inadvertently changed the spectre-v2 mitigation default from eIBRS to IBRS on
Intel. While splitting the spectre_v2 mitigation into select/update/apply
functions, the eIBRS and IBRS selection logic was separated into select and
update. This caused the IBRS selection to not consider that the eIBRS
mitigation is already selected. Fix it.
Fixes: 480e803dacf8 ("x86/bugs: Restructure spectre_v2 mitigation")
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/20250520-eibrs-fix-v1-1-91bacd35ed09@linux.intel.com
|
|
Restructure the ITS mitigation to use select/update/apply functions like
the other mitigations.
There is a particularly complex interaction between ITS and Retbleed as CDT
(Call Depth Tracking) is a mitigation for both, and either its=stuff or
retbleed=stuff will attempt to enable CDT.
retbleed_update_mitigation() runs first and will check the necessary
pre-conditions for CDT if either ITS or Retbleed stuffing is selected. If
checks pass and ITS stuffing is selected, it will select stuffing for
Retbleed as well.
its_update_mitigation() runs after and will either select stuffing if
retbleed stuffing was enabled, or fall back to the default (aligned thunks)
if stuffing could not be enabled.
Enablement of CDT is done exclusively in retbleed_apply_mitigation().
its_apply_mitigation() is only used to enable aligned thunks.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/20250516193212.128782-1-david.kaplan@amd.com
|
|
Pick up build fixes from upstream to make this tree more testable.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
xen_read_msr_safe() currently passes an uninitialized argument 'err' to
xen_do_read_msr(). But as xen_do_read_msr() may not set the argument,
xen_read_msr_safe() could return with 'err' holding an unpredictable value.
To ensure correctness, initialize err to 0 (representing success)
in xen_read_msr_safe().
Do the same in xen_read_msr(), even though err is not used after being
passed to xen_do_read_msr().
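A hedged sketch of the pattern (names are hypothetical stand-ins, not
the Xen functions):
  static int example_read_msr_safe(u32 msr, u64 *val)
  {
          int err = 0;    /* 0 == success; the helper may not set it */

          *val = example_do_read_msr(msr, &err);  /* hypothetical stand-in */
          return err;
  }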
Closes: https://lore.kernel.org/xen-devel/aBxNI_Q0-MhtBSZG@stanley.mountain/
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Juergen Gross <jgross@suse.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Link: https://lore.kernel.org/r/20250517165713.935384-1-xin@zytor.com
|
|
All callers pass PHY_POLL, therefore remove irq argument from
fixed_phy_add().
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com>
Acked-by: Greg Ungerer <gerg@linux-m68k.org>
Link: https://patch.msgid.link/b3b9b3bc-c310-4a54-b376-c909c83575de@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
As discussed in the thread containing
https://lore.kernel.org/linux-crypto/20250510053308.GB505731@sol/, the
Power10-optimized Poly1305 code is currently not safe to call in softirq
context. Disable it for now. It can be re-enabled once it is fixed.
Fixes: ba8f8624fde2 ("crypto: poly1305-p10 - Glue code for optmized Poly1305 implementation for ppc64le")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This reverts commit c66d7ebbe2fa14e41913adb421090a7426f59786.
Link: https://lore.kernel.org/all/02c22854-eebf-4ad1-b89e-8c2b65ab8236@csgroup.eu/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
- Drop CONFIG_PRIME_NUMBERS=m (auto-modular since commit
313b38a6ecb46db4 ("lib/prime_numbers: convert self-test to KUnit")),
- Drop CONFIG_TEST_PRINTF=m (Replaced by auto-modular
CONFIG_PRINTF_KUNIT_TEST in commit 7a79e7daa84e2302 ("printf:
convert self-test to KUnit")),
- Drop CONFIG_TEST_SCANF=m (Replaced by auto-modular
CONFIG_SCANF_KUNIT_TEST in commit 97c1f302f2bc318e ("scanf: convert
self-test to KUnit")),
- Drop CONFIG_TEST_BLACKHOLE_DEV=m (replaced by auto-modular
CONFIG_BLACKHOLE_DEV_KUNIT_TEST in commit b341f6fd45abb188
("blackhole_dev: convert self-test to KUnit")).
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Link: https://lore.kernel.org/e7d9ff0ef58efaa3e0adcb7207e6b44d948caa25.1744615371.git.geert@linux-m68k.org
|
|
Updates for v6.16
CI:
- uprev mesa
GPU:
- ACD (Adaptive Clock Distribution) support for X1-85. This is required to
enable the higher frequencies.
- Drop fictional `address_space_size`. For some older devices, the address
space size is limited to 4GB to avoid potential 64b rollover math problems
in the fw. For these, an `ADRENO_QUIRK_4GB_VA` quirk is added. For
everyone else we get the address space size from the SMMU `ias` (input
address sizes), which is usually 48b.
- Improve robustness when GMU HFI responses time out
- Fix crash when throttling GPU immediately during boot
- Fix for rgb565_predicator on Adreno 7c3
- Remove `MODULE_FIRMWARE()`s for GPU; the GPU can load the firmware after
probe, and having a partial set of fw (i.e. sqe+gmu but not zap) causes
problems
MDSS:
- Added SAR2130P support to MDSS driver
DPU:
- Changed to use single CTL path for flushing on DPU 5.x+
- Improved SSPP allocation code to allow sharing of SSPP between planes
- Enabled SmartDMA on SM8150, SC8180X, SC8280XP, SM8550
- Added SAR2130P support
- Disabled DSC support on MSM8937, MSM8917, MSM8953, SDM660
- Misc fixes
DP:
- Switch to use new helpers for DP Audio / HDMI codec handling
- Fixed LTTPR handling
DSI:
- Added support for SA8775P
- Added SAR2130P support
MDP4:
- Fixed LCDC / LVDS controller on
HDMI:
- Switched to use new helpers for ACR data
- Fixed old standing issue of HPD not working in some cases
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Rob Clark <robdclark@gmail.com>
Link: https://lore.kernel.org/r/CAF6AEGv2Go+nseaEwRgeZbecet-h+Pf2oBKw1CobCF01xu2XVg@mail.gmail.com
|
|
Pull misc x86 fixes from Ingo Molnar:
- Fix SEV-SNP kdump bugs
- Update the email address of Alexey Makhalov in MAINTAINERS
- Add the CPU feature flag for the Zen6 microarchitecture
- Fix typo in system message
* tag 'x86-urgent-2025-05-17' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/mm: Remove duplicated word in warning message
x86/CPU/AMD: Add X86_FEATURE_ZEN6
x86/sev: Make sure pages are not skipped during kdump
x86/sev: Do not touch VMSA pages during SNP guest memory kdump
MAINTAINERS: Update Alexey Makhalov's email address
x86/sev: Fix operator precedence in GHCB_MSR_VMPL_REQ_LEVEL macro
|
|
Pull x86 perf event fix from Ingo Molnar:
"Fix PEBS-via-PT crash"
* tag 'perf-urgent-2025-05-17' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/x86/intel: Fix segfault with PEBS-via-PT with sample_freq
|
|
Pull LoongArch fixes from Huacai Chen:
"Fix some bugs in kernel-fpu, cpu idle function, hibernation and
uprobes"
* tag 'loongarch-fixes-6.15-2' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson:
LoongArch: uprobes: Remove redundant code about resume_era
LoongArch: uprobes: Remove user_{en,dis}able_single_step()
LoongArch: Save and restore CSR.CNTC for hibernation
LoongArch: Move __arch_cpu_idle() to .cpuidle.text section
LoongArch: Fix MAX_REG_OFFSET calculation
LoongArch: Prevent cond_resched() occurring within kernel-fpu
|
|
Both regs_get_kernel_stack_nth() and regs_get_register() are not
inlined. With the new ftrace funcgraph-args feature they show up in
function graph tracing:
4) | sched_core_idle_cpu(cpu=4) {
4) 0.257 us | regs_get_register(regs=0x37fe00afa10, offset=2);
4) 0.218 us | regs_get_register(regs=0x37fe00afa10, offset=3);
4) 0.225 us | regs_get_register(regs=0x37fe00afa10, offset=4);
4) 0.239 us | regs_get_register(regs=0x37fe00afa10, offset=5);
4) 0.239 us | regs_get_register(regs=0x37fe00afa10, offset=6);
4) 0.245 us | regs_get_kernel_stack_nth(regs=0x37fe00afa10, n=20);
This is suboptimal, since both functions are supposed to be ftrace-internal
helper functions. If they appear in ftrace traces this reduces readability
significantly, and it also adds tons of useless extra entries.
Address this by moving both functions and the required helpers to ptrace.h
and always inlining them. This way they don't appear in traces anymore. In
addition, the overhead that comes with function calls is also reduced.
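A hedged sketch of the shape of such an always-inlined helper
(simplified, not the exact s390 implementation):
  static __always_inline unsigned long example_regs_get_register(struct pt_regs *regs,
                                                                 unsigned int offset)
  {
          /* Simplified bound check on the byte offset into pt_regs. */
          if (offset > sizeof(struct pt_regs) - sizeof(unsigned long))
                  return 0;
          return *(unsigned long *)((unsigned long)regs + offset);
  }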
Reviewed-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
|
|
asm/thread_info.h requires PAGE_SIZE, which is defined in vdso/page.h,
but doesn't need to include asm/lowcore.h or asm/page.h.
Therefore change the includes accordingly and reduce header dependencies.
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
|
|
When calling the diag for DCSS unload on a non-IPL CPU, the sclp maximum
memory detection on the next IPL would falsely return the end of the
previously loaded DCSS.
This is because of an issue in z/VM, so work around it by always calling
the diag for DCSS unload on IPL CPU 0. That CPU cannot be set offline,
so the dcss_diag() call can directly be scheduled to CPU 0.
The wrong maximum memory value returned by sclp would only affect KASAN
kernels. When a DCSS within the falsely reported extra memory range is
loaded and accessed again, it would result in a kernel crash:
Unable to handle kernel pointer dereference in virtual kernel address space
Failing address: 001c0000a3ffe000 TEID: 001c0000a3ffe803
Fault in home space mode while using kernel ASCE.
AS:000000039955400b R2:00000003fe3b400b R3:000000037a2a8007 S:0000000000000020
Oops: 0010 ilc:3 [#1]SMP
[...]
CPU: 2 UID: 0 PID: 1563 Comm: mount Kdump: loaded Not tainted 6.15.0-rc5-11546-g3ea93fb3d026-dirty #7 NONE
Hardware name: IBM 3931 A01 704 (z/VM 7.4.0)
Krnl PSW : 0704c00180000000 000da6f2b338faf2 (kasan_check_range+0x172/0x310)
R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
Krnl GPRS: 0000000000000040 001c0000a3ffe000 000000051fff0000 0000000000001000
0000000000000000 000da6f233380ff6 00000000000001f8 0000000000000000
001c0000a3ffe200 0000000000000040 001c0000a3ffe200 0000000000000200
000003ff97a2cfa8 0000000000000000 0000000000000010 000da672b58af070
Krnl Code: 000da6f2b338fae2: 41101008 la %r1,8(%r1)
000da6f2b338fae6: eca100268064 cgrj %r10,%r1,8,000da6f2b338fb32
#000da6f2b338faec: ebe00002000c srlg %r14,%r0,2
>000da6f2b338faf2: e3b010000002 ltg %r11,0(%r1)
000da6f2b338faf8: a77400a8 brc 7,000da6f2b338fc48
000da6f2b338fafc: 41b01008 la %r11,8(%r1)
000da6f2b338fb00: b904001b lgr %r1,%r11
000da6f2b338fb04: e3a0b0000002 ltg %r10,0(%r11)
Call Trace:
[<000da6f2b338faf2>] kasan_check_range+0x172/0x310
[<000da6f2b3390b3c>] __asan_memcpy+0x3c/0x90
[<000da6f233380ff6>] dcssblk_submit_bio+0x3a6/0x620 [dcssblk]
[<000da6f2b3eb403c>] __submit_bio+0x25c/0x4a0
[<000da6f2b3eb43bc>] __submit_bio_noacct+0x13c/0x450
[<000da6f2b3eb4bde>] submit_bio_noacct_nocheck+0x50e/0x620
[<000da6f2b34f4978>] mpage_readahead+0x318/0x3f0
[<000da6f2b31edbe6>] read_pages+0x156/0x740
[<000da6f2b31ee594>] page_cache_ra_unbounded+0x3c4/0x610
[<000da6f2b31ef094>] force_page_cache_ra+0x1f4/0x2d0
[<000da6f2b31d092e>] filemap_get_pages+0x2ce/0xaa0
[<000da6f2b31d1428>] filemap_read+0x328/0x9a0
[<000da6f2b3e9b7e8>] blkdev_read_iter+0x228/0x3b0
[<000da6f2b340f7a6>] vfs_read+0x5b6/0x7f0
[<000da6f2b34110be>] ksys_read+0x10e/0x1e0
[<000da6f2b4e7acb2>] __do_syscall+0x122/0x1f0
[<000da6f2b4e93ffe>] system_call+0x6e/0x90
Last Breaking-Event-Address:
[<000da6f2b338faac>] kasan_check_range+0x12c/0x310
Kernel panic - not syncing: Fatal exception: panic_on_oops
Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
|
|
This is a complete rework of the protected key AES (PAES) implementation.
The goal of this rework is to implement the 4 modes (ecb, cbc, ctr, xts)
in a real asynchronous fashion:
- init(), exit() and setkey() are synchronous and don't allocate any
memory.
- the encrypt/decrypt functions first try to do the job in a synchronous
manner. If this fails, for example the protected key got invalid caused
by a guest suspend/resume or guest migration action, the encrypt/decrypt
is transferred to an instance of the crypto engine (see below) for
asynchronous processing.
These postponed requests are then handled by the crypto engine by
invoking the do_one_request() callback but may of course again run into
a still not converted key or the key is getting invalid. If the key is
still not converted, the first thread does the conversion and updates
the key status in the transformation context. The conversion is
invoked via pkey API with a new flag PKEY_XFLAG_NOMEMALLOC.
Note that once there is an active requests enqueued to get async
processed via crypto engine, further requests also need to go via
crypto engine to keep the request sequence.
This patch together with the pkey/zcrypt/AP extensions to support
the new PKEY_XFLAG_NOMEMMALOC should toughen the paes crypto algorithms
to truly meet the requirements for in-kernel skcipher implementations
and the usage patterns for the dm-crypt and dm-integrity layers.
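A hedged sketch of the synchronous-first/fall-back-to-engine flow
(context layout and helper names are assumptions, not the real paes code;
the engine callback would decrement the counter again when done):
  struct example_paes_ctx {                       /* hypothetical context */
          struct crypto_engine *engine;
          atomic_t via_engine_ctr;
          /* ... protected key material ... */
  };

  static int example_paes_encrypt(struct skcipher_request *req)
  {
          struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
          struct example_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
          int rc;

          /* Synchronous attempt only while nothing is queued on the
           * engine, so the request order is preserved. */
          if (!atomic_read(&ctx->via_engine_ctr)) {
                  rc = example_cpacf_crypt(ctx, req);     /* hypothetical */
                  if (rc != -EKEYEXPIRED)
                          return rc;
          }

          /* Protected key invalid or requests already queued: go async. */
          atomic_inc(&ctx->via_engine_ctr);
          return crypto_transfer_skcipher_request_to_engine(ctx->engine, req);
  }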
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Link: https://lore.kernel.org/r/20250514090955.72370-3-freude@linux.ibm.com
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
|
|
Some of the pcc sub-functions have a protected key as
input and thus may run into the situation that this
key is invalid, for example due to live guest migration
to different physical hardware.
Rework the inline assembler function cpacf_pcc() to
return the condition code (cc) as return value:
0 - cc code 0 (normal completion)
1 - cc code 1 (prot key wkvp mismatch or src op out of range)
2 - cc code 2 (something invalid, scalar multiply infinity, ...)
Note that cc 3 (partial completion) is handled within the asm code
and never returned.
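A hedged sketch of the reworked inline assembly (simplified; the real
cpacf_pcc() differs in details such as opcode macros and CC handling):
  static inline int example_cpacf_pcc(unsigned long func, void *param)
  {
          register unsigned long r0 asm("0") = func;                 /* function code */
          register unsigned long r1 asm("1") = (unsigned long)param; /* param block */
          int cc;

          asm volatile(
                  "0:     .insn   rre,0xb92c0000,0,0\n"   /* pcc */
                  "       brc     1,0b\n"   /* cc 3: partial completion, retry */
                  "       ipm     %[cc]\n"
                  "       srl     %[cc],28\n"
                  : [cc] "=d" (cc)
                  : "d" (r0), "a" (r1)
                  : "cc", "memory");
          return cc;
  }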
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Link: https://lore.kernel.org/r/20250514090955.72370-2-freude@linux.ibm.com
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
|
|
PARAVIRT_XXL is exclusively utilized by XEN_PV, which is only compatible
with 64-bit machines.
Clearly designate PARAVIRT_XXL as 64-bit only and remove ifdefs to
support CONFIG_PGTABLE_LEVELS < 5.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Juergen Gross <jgross@suse.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20250516123306.3812286-5-kirill.shutemov@linux.intel.com
|
|
Both Intel and AMD CPUs support 5-level paging, which is expected to
become more widely adopted in the future. All major x86 Linux
distributions have the feature enabled.
Remove CONFIG_X86_5LEVEL and the related #ifdeffery for it to make the
code more readable.
Suggested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20250516123306.3812286-4-kirill.shutemov@linux.intel.com
|
|
5-level paging only supports SPARSEMEM_VMEMMAP. CONFIG_X86_5LEVEL is
being phased out, making 5-level paging support mandatory.
Make CONFIG_SPARSEMEM_VMEMMAP mandatory for x86-64 and eliminate
any associated conditional statements.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20250516123306.3812286-3-kirill.shutemov@linux.intel.com
|
|
Dynamic memory layout is used by KASLR and 5-level paging.
CONFIG_X86_5LEVEL is going to be removed, making 5-level paging support
unconditional which requires unconditional support of dynamic memory
layout.
Remove CONFIG_DYNAMIC_MEMORY_LAYOUT.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Cc: Jan Kiszka <jan.kiszka@siemens.com>
Cc: Kieran Bingham <kbingham@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20250516123306.3812286-2-kirill.shutemov@linux.intel.com
|
|
No functional changes.
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
irq_linear_revmap() is deprecated, so remove all its uses and supersede
them by an identical call to irq_find_mapping().
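A hedged illustration of the mechanical replacement (hypothetical caller):
  static unsigned int example_hwirq_to_virq(struct irq_domain *domain,
                                            irq_hw_number_t hwirq)
  {
          /* was: return irq_linear_revmap(domain, hwirq); */
          return irq_find_mapping(domain, hwirq);
  }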
[ tglx: Fix up subject prefix ]
Signed-off-by: Jiri Slaby (SUSE) <jirislaby@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20250319092951.37667-43-jirislaby@kernel.org
|
|
irq_linear_revmap() is deprecated, so remove all its uses and supersede
them by an identical call to irq_find_mapping().
[ tglx: Fix up subject prefix ]
Signed-off-by: Jiri Slaby (SUSE) <jirislaby@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Christophe Leroy <christophe.leroy@csgroup.eu> # for 8xx
Link: https://lore.kernel.org/all/20250319092951.37667-42-jirislaby@kernel.org
|
|
All irq_domain_add_*() functions are going away. PowerPC is the only
user of irq_domain_add_nomap() and there is no irq_domain_create_nomap()
complement.
Therefore, to align with the rest of the kernel, rename
irq_domain_add_nomap() to irq_domain_create_nomap() and accept a
fwnode_handle instead of a device_node.
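A hedged illustration of a converted call site (hypothetical caller;
the max_irq value is a placeholder):
  static struct irq_domain *example_create_nomap_domain(struct device_node *np,
                                                        const struct irq_domain_ops *ops)
  {
          /* was: return irq_domain_add_nomap(np, 64, ops, NULL); */
          return irq_domain_create_nomap(of_node_to_fwnode(np), 64, ops, NULL);
  }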
[ tglx: Fix up subject prefix ]
Signed-off-by: Jiri Slaby (SUSE) <jirislaby@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20250319092951.37667-40-jirislaby@kernel.org
|