Bail out instead of trying to perform bpf_arch_text_copy() if
__arch_prepare_bpf_trampoline() failed.
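A minimal sketch of the intended control flow (variable names and the
error label are illustrative, not the exact diff):

    /* build the trampoline into a writable buffer first */
    ret = __arch_prepare_bpf_trampoline(&ctx, im, m, tlinks, func_addr, flags);
    if (ret < 0)
        goto out;  /* bail out: never copy a half-built image */

    /* only a successfully prepared trampoline reaches the copy */
    ret = bpf_arch_text_copy(ro_image, image, size);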
Cc: stable@vger.kernel.org
Tested-by: Vincent Li <vincent.mc.li@gmail.com>
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
When attaching fentry/fexit BPF programs, __arch_prepare_bpf_trampoline()
is called twice with different `struct bpf_tramp_image *im`:
bpf_trampoline_update()
  -> arch_bpf_trampoline_size()
     -> __arch_prepare_bpf_trampoline()
  -> arch_prepare_bpf_trampoline()
     -> __arch_prepare_bpf_trampoline()
Because the two im addresses differ, move_imm() may emit instruction
sequences of different lengths in the two passes, so the size computed
by arch_bpf_trampoline_size() may not match what is actually emitted.
Use move_addr(), which always emits a fixed-length sequence, instead to
prevent such subtle bugs.
(I observed this while debugging other issues with printk.)
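For reference, a fixed-length address build on LoongArch looks roughly
like this (a sketch using the JIT's emit_insn() helpers; the shifts and
masks follow the usual lu12i.w/ori/lu32i.d/lu52i.d pattern):

    /* always 4 instructions, regardless of the address value */
    emit_insn(ctx, lu12iw, rd, (addr >> 12) & 0xfffff);
    emit_insn(ctx, ori, rd, rd, addr & 0xfff);
    emit_insn(ctx, lu32id, rd, (addr >> 32) & 0xfffff);
    emit_insn(ctx, lu52id, rd, rd, (addr >> 52) & 0xfff);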
Cc: stable@vger.kernel.org
Tested-by: Vincent Li <vincent.mc.li@gmail.com>
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Currently, arch_alloc_bpf_trampoline() uses bpf_prog_pack_alloc(), which
packs multiple trampolines into a huge page. So there is no need to
assume the trampoline size is page aligned.
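Illustratively, the size handling reduces to the following (a sketch,
not the verbatim diff):

    /* before: assumed a whole page per trampoline */
    size = round_up(ret, PAGE_SIZE);
    /* after: bpf_prog_pack_alloc() packs trampolines, keep the real size */
    size = ret;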
Cc: stable@vger.kernel.org
Tested-by: Vincent Li <vincent.mc.li@gmail.com>
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
The current implementation does not support struct arguments. This
causes an oops when running the bpf selftest:
$ ./test_progs -a tracing_struct
Oops[#1]:
CPU -1 Unable to handle kernel paging request at virtual address 0000000000000018, era == 9000000085bef268, ra == 90000000844f3938
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 1-...0: (19 ticks this GP) idle=1094/1/0x4000000000000000 softirq=1380/1382 fqs=801
rcu: (detected by 0, t=5252 jiffies, g=1197, q=52 ncpus=4)
Sending NMI from CPU 0 to CPUs 1:
rcu: rcu_preempt kthread starved for 2495 jiffies! g1197 f0x0 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=2
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:I stack:0 pid:15 tgid:15 ppid:2 task_flags:0x208040 flags:0x00000800
Stack : 9000000100423e80 0000000000000402 0000000000000010 90000001003b0680
9000000085d88000 0000000000000000 0000000000000040 9000000087159350
9000000085c2b9b0 0000000000000001 900000008704a000 0000000000000005
00000000ffff355b 00000000ffff355b 0000000000000000 0000000000000004
9000000085d90510 0000000000000000 0000000000000002 7b5d998f8281e86e
00000000ffff355c 7b5d998f8281e86e 000000000000003f 9000000087159350
900000008715bf98 0000000000000005 9000000087036000 900000008704a000
9000000100407c98 90000001003aff80 900000008715c4c0 9000000085c2b9b0
00000000ffff355b 9000000085c33d3c 00000000000000b4 0000000000000000
9000000007002150 00000000ffff355b 9000000084615480 0000000007000002
...
Call Trace:
[<9000000085c2a868>] __schedule+0x410/0x1520
[<9000000085c2b9ac>] schedule+0x34/0x190
[<9000000085c33d38>] schedule_timeout+0x98/0x140
[<90000000845e9120>] rcu_gp_fqs_loop+0x5f8/0x868
[<90000000845ed538>] rcu_gp_kthread+0x260/0x2e0
[<900000008454e8a4>] kthread+0x144/0x238
[<9000000085c26b60>] ret_from_kernel_thread+0x28/0xc8
[<90000000844f20e4>] ret_from_kernel_thread_asm+0xc/0x88
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 0 to CPUs 2:
NMI backtrace for cpu 2 skipped: idling at idle_exit+0x0/0x4
Reject it for now.
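A minimal sketch of the rejection, using the generic btf_func_model flag
that other JITs check for struct arguments:

    /* reject functions with struct arguments until they are supported */
    for (i = 0; i < m->nr_args; i++) {
        if (m->arg_flags[i] & BTF_FMODEL_STRUCT_ARG)
            return -EOPNOTSUPP;
    }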
Cc: stable@vger.kernel.org
Fixes: f9b6b41f0cf3 ("LoongArch: BPF: Add basic bpf trampoline support")
Tested-by: Vincent Li <vincent.mc.li@gmail.com>
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
The current implementation of bpf_arch_text_poke() requires 5 nops at
the patch site, which is not applicable to kernel/module functions
because LoongArch reserves ONLY 2 nops at the function entry. With
CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y, the patching can be done by
ftrace instead.
See the following commits for details:
* commit b91e014f078e ("bpf: Make BPF trampoline use register_ftrace_direct() API")
* commit 9cdc3b6a299c ("LoongArch: ftrace: Add direct call support")
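Conceptually, bpf_arch_text_poke() can then refuse anything that is not
BPF program text, leaving kernel/module functions to the ftrace
direct-call path (a hypothetical guard, not the exact diff):

    /* only BPF images carry the full 5-nop patch site; kernel and
     * module functions are patched via ftrace direct calls instead
     */
    if (!is_bpf_text_address((unsigned long)ip))
        return -EINVAL;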
Cc: stable@vger.kernel.org
Tested-by: Vincent Li <vincent.mc.li@gmail.com>
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
bpf_flush_icache() is already called by bpf_arch_text_copy(), so remove
the redundant call. The same cleanup has been done on arm64 and riscv.
Cc: stable@vger.kernel.org
Tested-by: Vincent Li <vincent.mc.li@gmail.com>
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
The check for (BPF_TRAMP_F_ORIG_STACK | BPF_TRAMP_F_SHARE_IPMODIFY) is
duplicated in __arch_prepare_bpf_trampoline(), so remove the duplicate.
While at it, make sure stack_size and nargs are initialized only once.
Cc: stable@vger.kernel.org
Tested-by: Vincent Li <vincent.mc.li@gmail.com>
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
In __arch_prepare_bpf_trampoline(), retval_off is meaningful only when
save_ret is not 0, so the current logic is correct. But it may cause a
build warning:
arch/loongarch/net/bpf_jit.c:1547 __arch_prepare_bpf_trampoline() error: uninitialized symbol 'retval_off'.
So initialize retval_off unconditionally to fix it.
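A minimal sketch of the fix (stack-layout details abbreviated; the
16-byte slot size is an assumption for illustration):

    save_ret = flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET);
    if (save_ret)
        stack_size += 16;        /* room for the saved return value */
    retval_off = stack_size;     /* now initialized unconditionally */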
Cc: stable@vger.kernel.org
Fixes: f9b6b41f0cf3 ("LoongArch: BPF: Add basic bpf trampoline support")
Closes: https://lore.kernel.org/r/202508191020.PBBh07cK-lkp@intel.com/
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
For 8-bit and 16-bit sign-extension mov instructions, the native
instructions ext.w.b and ext.w.h can be used directly; there is no need
to go through the temporary t1 register, so remove the redundant
operations.
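A sketch of the change in the JIT (emit helpers as used elsewhere in
the LoongArch BPF JIT; the surrounding switch is omitted):

    /* before: bounce through the temporary t1 register */
    move_reg(ctx, t1, src);
    emit_insn(ctx, extwb, dst, t1);

    /* after: ext.w.b sign-extends src into dst directly */
    emit_insn(ctx, extwb, dst, src);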
Here are the test results:
# modprobe test_bpf test_range=81,84
# dmesg -t | tail -5
test_bpf: #81 ALU_MOVSX | BPF_B jited:1 5 PASS
test_bpf: #82 ALU_MOVSX | BPF_H jited:1 5 PASS
test_bpf: #83 ALU64_MOVSX | BPF_B jited:1 5 PASS
test_bpf: #84 ALU64_MOVSX | BPF_H jited:1 5 PASS
test_bpf: Summary: 4 PASSED, 0 FAILED, [4/4 JIT'ed]
Acked-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
The atomic instructions sc.q, llacq.{w/d} and screl.{w/d} were newly
added in the LoongArch Reference Manual v1.10. It is necessary to handle
them in insns_not_supported() to avoid placing a breakpoint in the
middle of an ll/sc atomic sequence; otherwise kprobes and uprobes would
loop forever.
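The failure mode is worth spelling out (an illustrative ll/sc sequence;
any exception taken between the paired instructions clears the link, so
the store-conditional fails and the loop retries):

    /*
     * 1: ll.w   $t0, $a0, 0   # load-linked, sets the LL bit
     *    ...                  # a breakpoint here traps and clears the LL bit
     *    sc.w   $t1, $a0, 0   # store-conditional then fails every time
     *    beqz   $t1, 1b       # so the sequence retries forever
     */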
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Attempt VMA lock-based page fault handling first, and fall back to the
existing mmap_lock-based handling if that fails.
The "ebizzy -mTRp" test on Loongson-3A6000 shows that PER_VMA_LOCK can
improve the benchmark by about 17.9% (97837.7 to 115430.8).
This is the LoongArch variant of "x86/mm: try VMA lock-based page fault
handling first".
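The arch-side pattern is roughly as follows (a sketch around the
generic lock_vma_under_rcu() API; permission checks and the surrounding
fault plumbing are abbreviated):

    vma = lock_vma_under_rcu(mm, address);
    if (!vma)
        goto lock_mmap;                /* fall back to the mmap_lock path */

    fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
    if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
        vma_end_read(vma);
    if (!(fault & VM_FAULT_RETRY))
        goto done;                     /* handled without mmap_lock */

    lock_mmap:
        /* existing mmap_lock-based handling */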
Signed-off-by: Wentao Guan <guanwentao@uniontech.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Automatically disable KASLR when the kernel is loaded via kexec_file.
kexec_file loads the secondary kernel image to a non-linked address,
inherently providing KASLR-like randomization.
However, on LoongArch, where System RAM may be non-contiguous, enabling
KASLR for the second kernel may relocate it to an invalid memory region
and cause a boot failure. Thus, disable KASLR when "kexec_file" is
detected in the command line.
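A minimal sketch of the check (assuming the early boot code scans
boot_command_line, as the existing "nokaslr" handling does):

    /* a kexec_file-loaded kernel must not be relocated again */
    if (strstr(boot_command_line, "kexec_file"))
        return false;   /* treat like nokaslr */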
To ensure compatibility with older kernels loaded via kexec_file, this
patch should be backported to stable branches.
Cc: stable@vger.kernel.org
Signed-off-by: Youling Tang <tangyouling@kylinos.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Enabling crash dump (kdump) includes:
- Prepare the contents of the ELF header of the core dump file,
  /proc/vmcore, using crash_prepare_elf64_headers().
- Add a "mem=size@start" parameter to the command line and pass it to
  the capture kernel, limiting the capture kernel's runtime memory
  region so that it does not disturb the crashed production kernel's
  memory.
- Add an "elfcorehdr=size@start" parameter to the cmdline.
The basic usage for kdump (add the cmdline parameter crashkernel=512M
to grub.cfg for the production kernel):
1) Load capture kernel image (vmlinux.efi or vmlinux can both be used):
# kexec -s -p vmlinuz.efi --initrd=initrd.img --reuse-cmdline
2) Do something to crash, like:
# echo c > /proc/sysrq-trigger
Signed-off-by: Youling Tang <tangyouling@kylinos.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
This patch creates kexec_elf_ops to load an ELF binary file for the
kexec_file_load() syscall.
However, `kbuf->memsz` and `kbuf->buf_min` require special handling, so
the generic `kexec_elf_load()` cannot be used directly.
$ readelf -l vmlinux
...
Type Offset VirtAddr PhysAddr
FileSiz MemSiz Flags Align
LOAD 0x0000000000010000 0x9000000000200000 0x9000000000200000
0x0000000002747a00 0x000000000287a0d8 RWE 0x10000
NOTE 0x0000000000000000 0x0000000000000000 0x0000000000000000
0x0000000000000000 0x0000000000000000 R 0x8
phdr->p_paddr should be a physical address, but on current LoongArch it
is a virtual address. This causes kexec_file to fail when loading the
kernel, so it needs to be converted to a physical address.
From the above MemSiz, it can be seen that 0x287a0d8 is not page
aligned. Although kexec_add_buffer() rounds kbuf->memsz up to PAGE_SIZE,
the loaded kernel and the initrd can still trample on each other, and
initrd parsing then fails when the second kernel starts.
As can be seen in the linker script vmlinux.lds.S:
BSS_SECTION(0, SZ_64K, 8)
. = ALIGN(PECOFF_SEGMENT_ALIGN);
the image size needs to be aligned to SZ_64K so that, after alignment,
it is consistent with _kernel_asize.
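A sketch of the two adjustments (field names from the generic
kexec_file API; the address conversion shown with __pa() is
illustrative):

    /* p_paddr holds a virtual address on LoongArch: convert it */
    kbuf->buf_min = __pa(phdr->p_paddr);

    /* align to SZ_64K so the in-memory size matches _kernel_asize */
    kbuf->memsz = ALIGN(phdr->p_memsz, SZ_64K);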
The basic usage (vmlinux):
1) Load second kernel image:
# kexec -s -l vmlinux --initrd=initrd.img --reuse-cmdline
2) Start the second kernel:
# kexec -e
Signed-off-by: Youling Tang <tangyouling@kylinos.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
This patch creates kexec_efi_ops to load an EFI binary file for the
kexec_file_load() syscall.
efi_kexec_load() has two parts:
- the first part loads the kernel image (vmlinuz.efi or vmlinux.efi)
- the second part loads the other segments (e.g. initrd, cmdline)
Currently, pez (vmlinuz.efi) and pei (vmlinux.efi) format images are
supported.
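A sketch of the ops wiring (the probe callback name is hypothetical;
the struct kexec_file_ops interface is the generic one):

    const struct kexec_file_ops kexec_efi_ops = {
        .probe = efi_kexec_probe,   /* hypothetical: accept pez/pei images */
        .load  = efi_kexec_load,    /* kernel image, then initrd/cmdline */
    };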
The basic usage (vmlinuz.efi or vmlinux.efi):
1) Load second kernel image:
# kexec -s -l vmlinuz.efi --initrd=initrd.img --reuse-cmdline
2) Start the second kernel:
# kexec -e
Signed-off-by: Youling Tang <tangyouling@kylinos.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Add some preparatory infrastructure:
- Add command line processing.
- Add support for loading other segments.
- Other minor modifications.
The initrd will be passed to the second kernel via the command-line
parameter 'initrd=start,size'.
The 'kexec_file' command-line parameter indicates that the kernel was
loaded via kexec_file.
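A sketch of how the extra parameters can be appended (buffer and
variable names are hypothetical; the parameter format is the one named
above):

    /* tell the second kernel where the initrd is, and mark the load method */
    snprintf(cmdline + strlen(cmdline), COMMAND_LINE_SIZE - strlen(cmdline),
             " initrd=0x%lx,0x%lx kexec_file",
             (unsigned long)initrd_addr, (unsigned long)initrd_len);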
Signed-off-by: Youling Tang <tangyouling@kylinos.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>