| author | 2024-03-13 07:30:33 -0700 |
|---|---|
| committer | 2024-03-15 10:17:14 -0700 |
| commit | 099dbac6e90c620d8ce0bbf75bbdc94da1feb4fb (patch) |
| tree | 8b2c14469a09e3f7ada3b4b5dbab5e5f44d08419 /arch/riscv/lib/csum.c |
| parent | Merge patch series "Support Andes PMU extension" (diff) |
| parent | riscv: Set unaligned access speed at compile time (diff) |
Merge patch series "riscv: Use Kconfig to set unaligned access speed"
Charlie Jenkins <charlie@rivosinc.com> says:
If the hardware unaligned access speed is known at compile time, it is
possible to avoid running the unaligned access speed probe, which speeds
up boot time.
* b4-shazam-merge:
  riscv: Set unaligned access speed at compile time
  riscv: Decouple emulated unaligned accesses from access speed
  riscv: Only check online cpus for emulated accesses
  riscv: lib: Introduce has_fast_unaligned_access()
Link: https://lore.kernel.org/r/20240308-disable_misaligned_probe_config-v9-0-a388770ba0ce@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
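Knowing the speed at compile time lets the has_fast_unaligned_accesses() check used in the diff below collapse to a constant, so the compiler can drop the slow path entirely and no probe ever runs at boot. Below is a minimal sketch (not the series' code verbatim) of how such a helper can fold the decision into Kconfig; the symbol names CONFIG_RISCV_PROBE_UNALIGNED_ACCESS and CONFIG_RISCV_EFFICIENT_UNALIGNED_ACCESS are assumptions for illustration:

```c
/*
 * Sketch only: when the unaligned-access speed is probed at boot, the
 * helper reads a static key; when it is fixed by Kconfig, it reduces to
 * a compile-time constant and the probe (and key) are not built at all.
 */
#include <linux/jump_label.h>
#include <linux/types.h>

#ifdef CONFIG_RISCV_PROBE_UNALIGNED_ACCESS
DECLARE_STATIC_KEY_FALSE(fast_unaligned_access_speed_key);

static __always_inline bool has_fast_unaligned_accesses(void)
{
	/* Speed was measured by the boot-time probe. */
	return static_branch_likely(&fast_unaligned_access_speed_key);
}
#else
static __always_inline bool has_fast_unaligned_accesses(void)
{
	/* Speed was declared at build time; no probe is run. */
	return IS_ENABLED(CONFIG_RISCV_EFFICIENT_UNALIGNED_ACCESS);
}
#endif
```

Either way, callers such as do_csum() simply test has_fast_unaligned_accesses() and let the build configuration decide whether that is a runtime static branch or a constant.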
Diffstat (limited to 'arch/riscv/lib/csum.c')
-rw-r--r-- | arch/riscv/lib/csum.c | 7
1 file changed, 2 insertions, 5 deletions
diff --git a/arch/riscv/lib/csum.c b/arch/riscv/lib/csum.c
index af3df5274ccb..7178e0acfa22 100644
--- a/arch/riscv/lib/csum.c
+++ b/arch/riscv/lib/csum.c
@@ -3,7 +3,7 @@
  * Checksum library
  *
  * Influenced by arch/arm64/lib/csum.c
- * Copyright (C) 2023 Rivos Inc.
+ * Copyright (C) 2023-2024 Rivos Inc.
  */
 #include <linux/bitops.h>
 #include <linux/compiler.h>
@@ -318,10 +318,7 @@ unsigned int do_csum(const unsigned char *buff, int len)
 	 * branches. The largest chunk of overlap was delegated into the
 	 * do_csum_common function.
 	 */
-	if (static_branch_likely(&fast_misaligned_access_speed_key))
-		return do_csum_no_alignment(buff, len);
-
-	if (((unsigned long)buff & OFFSET_MASK) == 0)
+	if (has_fast_unaligned_accesses() || (((unsigned long)buff & OFFSET_MASK) == 0))
 		return do_csum_no_alignment(buff, len);
 
 	return do_csum_with_alignment(buff, len);
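The net effect of the hunk above is that do_csum() takes the no-alignment fast path either when unaligned accesses are known (or configured) to be fast, or when the buffer already starts on an unsigned-long boundary; only otherwise does it fall back to do_csum_with_alignment(). The alignment test is just a mask of the low pointer bits. The standalone snippet below illustrates it, assuming OFFSET_MASK is (sizeof(unsigned long) - 1) as the surrounding code suggests; is_word_aligned() is a hypothetical helper, not part of the kernel source:

```c
/* Illustration only: the word-alignment test used by do_csum(). */
#include <stdio.h>

#define OFFSET_MASK (sizeof(unsigned long) - 1)

static int is_word_aligned(const unsigned char *buff)
{
	/* True when the pointer sits on an unsigned-long boundary. */
	return ((unsigned long)buff & OFFSET_MASK) == 0;
}

int main(void)
{
	unsigned long words[4] = { 0 };
	const unsigned char *p = (const unsigned char *)words;

	printf("aligned:   %d\n", is_word_aligned(p));     /* prints 1 */
	printf("unaligned: %d\n", is_word_aligned(p + 1)); /* prints 0 */
	return 0;
}
```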