2025-02-07  bpf: selftests: Test constant key extraction on irrelevant maps  (Daniel Xu, 1 file changed, +15/-0)

Test that very high constant map keys are not interpreted as an error value by the verifier. This would previously fail.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Link: https://lore.kernel.org/r/c0590b62eb9303f389b2f52c0c7e9cf22a358a30.1738689872.git.dxu@dxuuu.xyz
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-02-07  bpf: verifier: Do not extract constant map keys for irrelevant maps  (Daniel Xu, 1 file changed, +7/-3)

Previously, we were trying to extract constant map keys for all bpf_map_lookup_elem(), regardless of map type. This is an issue if the map has a u64 key and the value is very high, as it can be interpreted as a negative signed value. This in turn is treated as an error value by check_func_arg(), which causes a valid program to be incorrectly rejected.

Fix by only extracting constant map keys for relevant maps. This fix works because nullness elision is only allowed for {PERCPU_}ARRAY maps, and keys for these are within u32 range. See next commit for an example via selftest.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Reported-by: Marc Hartmayer <mhartmay@linux.ibm.com>
Reported-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Marc Hartmayer <mhartmay@linux.ibm.com>
Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Link: https://lore.kernel.org/r/aa868b642b026ff87ba6105ea151bc8693b35932.1738689872.git.dxu@dxuuu.xyz
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
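A minimal sketch, in libbpf style, of the pattern the fix addresses; the map layout and section name here are illustrative assumptions, not the actual selftest:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_HASH);   /* not ARRAY: no nullness elision */
        __uint(max_entries, 1);
        __type(key, __u64);
        __type(value, __u64);
    } hash_map SEC(".maps");

    SEC("socket")
    int lookup_high_key(void *ctx)
    {
        /* As a signed 64-bit value this constant is negative; before the
         * fix the verifier's constant-key extraction fed it through
         * check_func_arg() where it looked like an error code, rejecting
         * a valid program. */
        __u64 key = 0xffffffff00000000ULL;
        __u64 *val = bpf_map_lookup_elem(&hash_map, &key);

        return val ? 0 : 1;
    }

    char _license[] SEC("license") = "GPL";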
2025-02-06  bpf: Fix softlockup in arena_map_free on 64k page kernel  (Alan Maguire, 1 file changed, +1/-1)

On an aarch64 kernel with CONFIG_PAGE_SIZE_64KB=y, arena_htab tests cause a segmentation fault and soft lockup. The same failure is not observed with 4k pages on aarch64.

It turns out arena_map_free() is calling apply_to_existing_page_range() with the address returned by bpf_arena_get_kern_vm_start(). If this address is not page-aligned the code ends up calling apply_to_pte_range() with that unaligned address, causing soft lockup.

Fix it by rounding up GUARD_SZ to PAGE_SIZE << 1 so that the division by 2 in bpf_arena_get_kern_vm_start() returns a page-aligned value.

Fixes: 317460317a02 ("bpf: Introduce bpf_arena.")
Reported-by: Colm Harrington <colm.harrington@oracle.com>
Suggested-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Link: https://lore.kernel.org/r/20250205170059.427458-1-alan.maguire@oracle.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
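A sketch of the alignment reasoning, assuming the GUARD_SZ/vm-start layout described above (not the literal diff):

    /* If GUARD_SZ is a multiple of PAGE_SIZE << 1, then GUARD_SZ / 2 is a
     * multiple of PAGE_SIZE, so the kernel VM start below stays
     * page-aligned even with 64k pages. */
    #define GUARD_SZ	round_up(1ull << 32, PAGE_SIZE << 1)

    static u64 bpf_arena_get_kern_vm_start(struct bpf_arena *arena)
    {
        return arena ? (u64)arena->kern_vm->addr + GUARD_SZ / 2 : 0;
    }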
2025-02-06  net: Add rx_skb of kfree_skb to raw_tp_null_args[].  (Kuniyuki Iwashima, 1 file changed, +2/-0)

Yan Zhai reported a BPF prog could trigger a null-ptr-deref [0] in trace_kfree_skb if the prog does not check if rx_sk is NULL.

Commit c53795d48ee8 ("net: add rx_sk to trace_kfree_skb") added rx_sk to trace_kfree_skb, but rx_sk is optional and could be NULL.

Let's add kfree_skb to raw_tp_null_args[] to let the BPF verifier validate such a prog and prevent the issue.

Now we fail to load such a prog:

  libbpf: prog 'drop': -- BEGIN PROG LOAD LOG --
  0: R1=ctx() R10=fp0
  ; int BPF_PROG(drop, struct sk_buff *skb, void *location, @ kfree_skb_sk_null.bpf.c:21
  0: (79) r3 = *(u64 *)(r1 +24)
  func 'kfree_skb' arg3 has btf_id 5253 type STRUCT 'sock'
  1: R1=ctx() R3_w=trusted_ptr_or_null_sock(id=1)
  ; bpf_printk("sk: %d, %d\n", sk, sk->__sk_common.skc_family); @ kfree_skb_sk_null.bpf.c:24
  1: (69) r4 = *(u16 *)(r3 +16)
  R3 invalid mem access 'trusted_ptr_or_null_'
  processed 2 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
  -- END PROG LOAD LOG --

Note this fix requires commit 838a10bd2ebf ("bpf: Augment raw_tp arguments with PTR_MAYBE_NULL").

[0]:
  BUG: kernel NULL pointer dereference, address: 0000000000000010
  #PF: supervisor read access in kernel mode
  #PF: error_code(0x0000) - not-present page
  PGD 0 P4D 0
  PREEMPT SMP
  RIP: 0010:bpf_prog_5e21a6db8fcff1aa_drop+0x10/0x2d
  Call Trace:
   <TASK>
   ? __die+0x1f/0x60
   ? page_fault_oops+0x148/0x420
   ? search_bpf_extables+0x5b/0x70
   ? fixup_exception+0x27/0x2c0
   ? exc_page_fault+0x75/0x170
   ? asm_exc_page_fault+0x22/0x30
   ? bpf_prog_5e21a6db8fcff1aa_drop+0x10/0x2d
   bpf_trace_run4+0x68/0xd0
   ? unix_stream_connect+0x1f4/0x6f0
   sk_skb_reason_drop+0x90/0x120
   unix_stream_connect+0x1f4/0x6f0
   __sys_connect+0x7f/0xb0
   __x64_sys_connect+0x14/0x20
   do_syscall_64+0x47/0xc30
   entry_SYSCALL_64_after_hwframe+0x4b/0x53

Fixes: c53795d48ee8 ("net: add rx_sk to trace_kfree_skb")
Reported-by: Yan Zhai <yan@cloudflare.com>
Closes: https://lore.kernel.org/netdev/Z50zebTRzI962e6X@debian.debian/
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Tested-by: Yan Zhai <yan@cloudflare.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20250201030142.62703-1-kuniyu@amazon.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
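With the tracepoint listed in raw_tp_null_args[], a program like this sketch (argument order per the trace_kfree_skb prototype quoted in the load log above) must test rx_sk before dereferencing it:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    SEC("tp_btf/kfree_skb")
    int BPF_PROG(drop, struct sk_buff *skb, void *location,
                 enum skb_drop_reason reason, struct sock *rx_sk)
    {
        /* rx_sk is now PTR_MAYBE_NULL: loading through it without this
         * check makes the verifier reject the program at load time
         * instead of letting it oops at run time. */
        if (!rx_sk)
            return 0;

        bpf_printk("family: %d", rx_sk->__sk_common.skc_family);
        return 0;
    }

    char _license[] SEC("license") = "GPL";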
2025-01-29  bpf: Fix deadlock when freeing cgroup storage  (Abel Wu, 1 file changed, +1/-1)

The following commit bc235cdb423a ("bpf: Prevent deadlock from recursive bpf_task_storage_[get|delete]") first introduced deadlock prevention for fentry/fexit programs attaching on bpf_task_storage helpers. That commit also employed the logic in the map free path in its v6 version.

Later bpf_cgrp_storage was first introduced in c4bcfb38a95e ("bpf: Implement cgroup storage available to non-cgroup-attached bpf progs"), which faces the same issue as bpf_task_storage; instead of its busy counter, NULL was passed to bpf_local_storage_map_free(), which opened a window to cause deadlock:

  <TASK>
  (acquiring local_storage->lock)
  _raw_spin_lock_irqsave+0x3d/0x50
  bpf_local_storage_update+0xd1/0x460
  bpf_cgrp_storage_get+0x109/0x130
  bpf_prog_a4d4a370ba857314_cgrp_ptr+0x139/0x170
  ? __bpf_prog_enter_recur+0x16/0x80
  bpf_trampoline_6442485186+0x43/0xa4
  cgroup_storage_ptr+0x9/0x20
  (holding local_storage->lock)
  bpf_selem_unlink_storage_nolock.constprop.0+0x135/0x160
  bpf_selem_unlink_storage+0x6f/0x110
  bpf_local_storage_map_free+0xa2/0x110
  bpf_map_free_deferred+0x5b/0x90
  process_one_work+0x17c/0x390
  worker_thread+0x251/0x360
  kthread+0xd2/0x100
  ret_from_fork+0x34/0x50
  ret_from_fork_asm+0x1a/0x30
  </TASK>

Progs:
 - A: SEC("fentry/cgroup_storage_ptr")
   - cgid (BPF_MAP_TYPE_HASH): Record the id of the cgroup the current task belongs to in this hash map, using the address of the cgroup as the map key.
   - cgrpa (BPF_MAP_TYPE_CGRP_STORAGE): If the current task is a kworker, look up the above hash map using function parameter @owner as the key to get its corresponding cgroup id, which is then used to get a trusted pointer to the cgroup through bpf_cgroup_from_id(). This trusted pointer can then be passed to bpf_cgrp_storage_get() to finally trigger the deadlock issue.
 - B: SEC("tp_btf/sys_enter")
   - cgrpb (BPF_MAP_TYPE_CGRP_STORAGE): The only purpose of this prog is to fill Prog A's hash map by calling bpf_cgrp_storage_get() for as many userspace tasks as possible.

Steps to reproduce:
 - Run A;
 - while (true) { Run B; Destroy B; }

Fix this issue by passing its busy counter to the free procedure so it can be properly incremented before storage/smap locking.

Fixes: c4bcfb38a95e ("bpf: Implement cgroup storage available to non-cgroup-attached bpf progs")
Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
Acked-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://lore.kernel.org/r/20241221061018.37717-1-wuyun.abel@bytedance.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-29  selftests/bpf: Add strparser test for bpf  (Jiayuan Chen, 3 files changed, +507/-53)

Add test cases for bpf + strparser and separate them from sockmap_basic, as strparser has more encapsulation and parsing capabilities compared to standard sockmap.

Signed-off-by: Jiayuan Chen <mrpre@163.com>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://patch.msgid.link/20250122100917.49845-6-mrpre@163.com
2025-01-29  selftests/bpf: Fix invalid flag of recv()  (Jiayuan Chen, 1 file changed, +3/-3)

The SOCK_NONBLOCK flag is only effective during socket creation, not during recv(). Use MSG_DONTWAIT instead.

Signed-off-by: Jiayuan Chen <mrpre@163.com>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://patch.msgid.link/20250122100917.49845-5-mrpre@163.com
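The distinction, as a small userspace sketch (fd is assumed to be a connected socket):

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    static ssize_t try_recv(int fd, char *buf, size_t len)
    {
        /* SOCK_NONBLOCK only applies at socket creation time, i.e. to
         * socket()/accept4(); per-call non-blocking behaviour needs
         * MSG_DONTWAIT on recv() itself. */
        ssize_t n = recv(fd, buf, len, MSG_DONTWAIT);

        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            return 0;	/* no data ready yet */
        return n;
    }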
2025-01-29  bpf: Disable non stream socket for strparser  (Jiayuan Chen, 1 file changed, +4/-1)

Currently, only TCP supports strparser, but sockmap doesn't intercept non-TCP connections to attach strparser. For example, with UDP, although the read/write handlers are replaced, strparser is not executed due to the lack of a read_sock operation.

Furthermore, in udp_bpf_recvmsg(), it checks whether the psock has data, and if not, it falls back to the native UDP read interface, making UDP + strparser appear to read correctly. According to its commit history, this behavior is unexpected.

Moreover, since UDP lacks the concept of streams, we intercept it directly.

Fixes: 1fa1fe8ff161 ("bpf, sockmap: Test shutdown() correctly exits epoll and recv()=0")
Signed-off-by: Jiayuan Chen <mrpre@163.com>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://patch.msgid.link/20250122100917.49845-4-mrpre@163.com
2025-01-29  bpf: Fix wrong copied_seq calculation  (Jiayuan Chen, 5 files changed, +77/-5)

'sk->copied_seq' was updated in the tcp_eat_skb() function when the action of a BPF program was SK_REDIRECT. For other actions, like SK_PASS, the update logic for 'sk->copied_seq' was moved to tcp_bpf_recvmsg_parser() to ensure the accuracy of the 'fionread' feature.

It works for a single stream_verdict scenario, as it also modified sk_data_ready->sk_psock_verdict_data_ready->tcp_read_skb to remove updating 'sk->copied_seq'.

However, for programs where both stream_parser and stream_verdict are active (strparser purpose), tcp_read_sock() was used instead of tcp_read_skb() (sk_data_ready->strp_data_ready->tcp_read_sock). tcp_read_sock() now still updates 'sk->copied_seq', leading to duplicate updates.

In summary, for strparser + SK_PASS, copied_seq is redundantly calculated in both tcp_read_sock() and tcp_bpf_recvmsg_parser(). The issue causes incorrect copied_seq calculations, which prevent correct data reads from the recv() interface in user-land.

We do not want to add new proto_ops to implement a new version of tcp_read_sock, as this would introduce code complexity [1]. We could have added noack and copied_seq to desc and then called ops->read_sock. However, unfortunately, other modules didn’t fully initialize desc to zero. So, for now, we are directly calling tcp_read_sock_noack() in tcp_bpf.c.

[1]: https://lore.kernel.org/bpf/20241218053408.437295-1-mrpre@163.com

Fixes: e5c6de5fa025 ("bpf, sockmap: Incorrectly handling copied_seq")
Suggested-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Jiayuan Chen <mrpre@163.com>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://patch.msgid.link/20250122100917.49845-3-mrpre@163.com
2025-01-29  strparser: Add read_sock callback  (Jiayuan Chen, 3 files changed, +19/-3)

Added a new read_sock handler, allowing users to customize read operations instead of relying on the native socket's read_sock.

Signed-off-by: Jiayuan Chen <mrpre@163.com>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://patch.msgid.link/20250122100917.49845-2-mrpre@163.com
2025-01-29  bpf: avoid holding freeze_mutex during mmap operation  (Andrii Nakryiko, 1 file changed, +10/-7)

We use map->freeze_mutex to prevent races between map_freeze() and memory mapping BPF map contents with writable permissions. The way we naively do this means we'll hold freeze_mutex for the entire duration of all the mm and VMA manipulations, which is completely unnecessary. This can potentially also lead to deadlocks, as reported by syzbot in [0].

So, instead, hold freeze_mutex only during writeability checks, bump (proactively) the "write active" count for the map, unlock the mutex and proceed with mmap logic. And only if something went wrong during mmap logic, then undo that "write active" counter increment.

[0] https://lore.kernel.org/bpf/678dcbc9.050a0220.303755.0066.GAE@google.com/

Fixes: fc9702273e2e ("bpf: Add mmap() support for BPF_MAP_TYPE_ARRAY")
Reported-by: syzbot+4dc041c686b7c816a71e@syzkaller.appspotmail.com
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20250129012246.1515826-2-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
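A sketch of the narrowed critical section; the helper names follow kernel/bpf/syscall.c, but the surrounding checks are simplified:

    static int bpf_map_mmap_sketch(struct bpf_map *map,
                                   struct vm_area_struct *vma)
    {
        int err;

        mutex_lock(&map->freeze_mutex);
        if ((vma->vm_flags & VM_WRITE) && map->frozen) {
            mutex_unlock(&map->freeze_mutex);
            return -EPERM;
        }
        if (vma->vm_flags & VM_WRITE)
            bpf_map_write_active_inc(map);	/* bump proactively */
        mutex_unlock(&map->freeze_mutex);	/* drop before mm/VMA work */

        err = map->ops->map_mmap(map, vma);	/* mmap logic, lock-free */
        if (err && (vma->vm_flags & VM_WRITE))
            bpf_map_write_active_dec(map);	/* undo only on failure */
        return err;
    }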
2025-01-29  bpf: unify VM_WRITE vs VM_MAYWRITE use in BPF map mmaping logic  (Andrii Nakryiko, 2 files changed, +8/-6)

For all BPF maps we ensure that VM_MAYWRITE is cleared when memory-mapping BPF map contents as an initially read-only VMA. This is because in some cases the BPF verifier relies on the underlying data to not be modified afterwards by user space, so once something is mapped read-only, it shouldn't be re-mmap'ed as read-write.

As such, it's not necessary to check VM_MAYWRITE in bpf_map_mmap() and map->ops->map_mmap() callbacks: VM_WRITE should be consistently set for read-write mappings, and if VM_WRITE is not set, there is no way for user space to upgrade a read-only mapping to a read-write one.

This patch cleans up this VM_WRITE vs VM_MAYWRITE handling within bpf_map_mmap(), which is an entry point for any BPF map mmap()-ing logic. We also drop unnecessary sanitization of VM_MAYWRITE in BPF ringbuf's map_mmap() callback implementation, as it is already performed by common code in bpf_map_mmap().

Note, though, that in the bpf_map_mmap_{open,close}() callbacks we can't drop VM_MAYWRITE use, because it's possible (and is outside of the subsystem's control) to have an initially read-write memory mapping which is subsequently dropped to read-only by user space through mprotect(). In such a case, from the BPF verifier's POV it's read-write data throughout the lifetime of the BPF map, and is counted as an "active writer". But its VMAs will start out as VM_WRITE|VM_MAYWRITE, then mprotect() can change it to just VM_MAYWRITE (and no VM_WRITE), so when it's finally munmap()'ed and bpf_map_mmap_close() is called, vm_flags will be just VM_MAYWRITE, but we still need to decrement the active writer count with bpf_map_write_active_dec() as it's still considered to be a read-write mapping by the rest of the BPF subsystem.

Similar reasoning applies to bpf_map_mmap_open(), which is called whenever mmap(), munmap(), and/or mprotect() forces the mm subsystem to split the original VMA into multiple discontiguous VMAs.

Memory-mapping handling is a bit tricky, yes.

Cc: Jann Horn <jannh@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20250129012246.1515826-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-29  selftests/bpf: Adjust data size to have ETH_HLEN  (Shigeru Yoshida, 2 files changed, +6/-6)

The function bpf_test_init() now returns an error if user_size (.data_size_in) is less than ETH_HLEN, causing the tests to fail. Adjust the data size to ensure it meets the requirement of ETH_HLEN.

Signed-off-by: Shigeru Yoshida <syoshida@redhat.com>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://patch.msgid.link/20250121150643.671650-2-syoshida@redhat.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-29  bpf, test_run: Fix use-after-free issue in eth_skb_pkt_type()  (Shigeru Yoshida, 1 file changed, +1/-4)

KMSAN reported a use-after-free issue in eth_skb_pkt_type() [1]. The cause of the issue was that eth_skb_pkt_type() accessed skb's data that didn't contain an Ethernet header. This occurs when bpf_prog_test_run_xdp() passes an invalid value as the user_data argument to bpf_test_init().

Fix this by returning an error when user_data is less than ETH_HLEN in bpf_test_init(). Additionally, remove the check for "if (user_size > size)" as it is unnecessary.

[1]
  BUG: KMSAN: use-after-free in eth_skb_pkt_type include/linux/etherdevice.h:627 [inline]
  BUG: KMSAN: use-after-free in eth_type_trans+0x4ee/0x980 net/ethernet/eth.c:165
   eth_skb_pkt_type include/linux/etherdevice.h:627 [inline]
   eth_type_trans+0x4ee/0x980 net/ethernet/eth.c:165
   __xdp_build_skb_from_frame+0x5a8/0xa50 net/core/xdp.c:635
   xdp_recv_frames net/bpf/test_run.c:272 [inline]
   xdp_test_run_batch net/bpf/test_run.c:361 [inline]
   bpf_test_run_xdp_live+0x2954/0x3330 net/bpf/test_run.c:390
   bpf_prog_test_run_xdp+0x148e/0x1b10 net/bpf/test_run.c:1318
   bpf_prog_test_run+0x5b7/0xa30 kernel/bpf/syscall.c:4371
   __sys_bpf+0x6a6/0xe20 kernel/bpf/syscall.c:5777
   __do_sys_bpf kernel/bpf/syscall.c:5866 [inline]
   __se_sys_bpf kernel/bpf/syscall.c:5864 [inline]
   __x64_sys_bpf+0xa4/0xf0 kernel/bpf/syscall.c:5864
   x64_sys_call+0x2ea0/0x3d90 arch/x86/include/generated/asm/syscalls_64.h:322
   do_syscall_x64 arch/x86/entry/common.c:52 [inline]
   do_syscall_64+0xd9/0x1d0 arch/x86/entry/common.c:83
   entry_SYSCALL_64_after_hwframe+0x77/0x7f

  Uninit was created at:
   free_pages_prepare mm/page_alloc.c:1056 [inline]
   free_unref_page+0x156/0x1320 mm/page_alloc.c:2657
   __free_pages+0xa3/0x1b0 mm/page_alloc.c:4838
   bpf_ringbuf_free kernel/bpf/ringbuf.c:226 [inline]
   ringbuf_map_free+0xff/0x1e0 kernel/bpf/ringbuf.c:235
   bpf_map_free kernel/bpf/syscall.c:838 [inline]
   bpf_map_free_deferred+0x17c/0x310 kernel/bpf/syscall.c:862
   process_one_work kernel/workqueue.c:3229 [inline]
   process_scheduled_works+0xa2b/0x1b60 kernel/workqueue.c:3310
   worker_thread+0xedf/0x1550 kernel/workqueue.c:3391
   kthread+0x535/0x6b0 kernel/kthread.c:389
   ret_from_fork+0x6e/0x90 arch/x86/kernel/process.c:147
   ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

  CPU: 1 UID: 0 PID: 17276 Comm: syz.1.16450 Not tainted 6.12.0-05490-g9bb88c659673 #8
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-3.fc41 04/01/2014

Fixes: be3d72a2896c ("bpf: move user_size out of bpf_test_init")
Reported-by: syzkaller <syzkaller@googlegroups.com>
Suggested-by: Martin KaFai Lau <martin.lau@linux.dev>
Signed-off-by: Shigeru Yoshida <syoshida@redhat.com>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://patch.msgid.link/20250121150643.671650-1-syoshida@redhat.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-29  bpf: Remove unnecessary BTF lookups in bpf_sk_storage_tracing_allowed  (Jared Kangas, 1 file changed, +1/-12)

When loading BPF programs, bpf_sk_storage_tracing_allowed() does a series of lookups to get a type name from the program's attach_btf_id, making the assumption that the type is present in the vmlinux BTF along the way. However, this results in btf_type_by_id() returning a null pointer if a non-vmlinux kernel BTF is attached to.

Proof-of-concept on a kernel with CONFIG_IPV6=m:

  $ cat bpfcrash.c
  #include <unistd.h>
  #include <linux/bpf.h>
  #include <sys/syscall.h>

  static int bpf(enum bpf_cmd cmd, union bpf_attr *attr)
  {
          return syscall(__NR_bpf, cmd, attr, sizeof(*attr));
  }

  int main(void)
  {
          const int btf_fd = bpf(BPF_BTF_GET_FD_BY_ID, &(union bpf_attr) {
                  .btf_id = BTF_ID,
          });
          if (btf_fd < 0)
                  return 1;

          const int bpf_sk_storage_get = 107;
          const struct bpf_insn insns[] = {
                  { .code = BPF_JMP | BPF_CALL, .imm = bpf_sk_storage_get },
                  { .code = BPF_JMP | BPF_EXIT },
          };
          return bpf(BPF_PROG_LOAD, &(union bpf_attr) {
                  .prog_type = BPF_PROG_TYPE_TRACING,
                  .expected_attach_type = BPF_TRACE_FENTRY,
                  .license = (unsigned long)"GPL",
                  .insns = (unsigned long)&insns,
                  .insn_cnt = sizeof(insns) / sizeof(insns[0]),
                  .attach_btf_obj_fd = btf_fd,
                  .attach_btf_id = TYPE_ID,
          });
  }

  $ sudo bpftool btf list | grep ipv6
  2: name [ipv6]  size 928200B
  $ sudo bpftool btf dump id 2 | awk '$3 ~ /inet6_sock_destruct/'
  [130689] FUNC 'inet6_sock_destruct' type_id=130677 linkage=static
  $ gcc -D_DEFAULT_SOURCE -DBTF_ID=2 -DTYPE_ID=130689 \
          bpfcrash.c -o bpfcrash
  $ sudo ./bpfcrash

This causes a null pointer dereference:

  Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000
  Call trace:
   bpf_sk_storage_tracing_allowed+0x8c/0xb0 P
   check_helper_call.isra.0+0xa8/0x1730
   do_check+0xa18/0xb40
   do_check_common+0x140/0x640
   bpf_check+0xb74/0xcb8
   bpf_prog_load+0x598/0x9a8
   __sys_bpf+0x580/0x980
   __arm64_sys_bpf+0x28/0x40
   invoke_syscall.constprop.0+0x54/0xe8
   do_el0_svc+0xb4/0xd0
   el0_svc+0x44/0x1f8
   el0t_64_sync_handler+0x13c/0x160
   el0t_64_sync+0x184/0x188

Resolve this by using prog->aux->attach_func_name and removing the lookups.

Fixes: 8e4597c627fb ("bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP")
Suggested-by: Martin KaFai Lau <martin.lau@linux.dev>
Signed-off-by: Jared Kangas <jkangas@redhat.com>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://patch.msgid.link/20250121142504.1369436-1-jkangas@redhat.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-28  x86/sev: Disable jump tables in SEV startup code  (Ard Biesheuvel, 1 file changed, +4/-0)

When retpolines and IBT are both disabled, the compiler is free to use jump tables to optimize switch instructions. However, these are emitted by Clang as absolute references into .rodata:

  jmp    *-0x7dfffe90(,%r9,8)
         R_X86_64_32S  .rodata+0x170

Given that this code will execute before that address in .rodata has even been mapped, it is guaranteed to crash a SEV-SNP guest in a way that is difficult to diagnose. So disable jump tables when building this code.

It would be better if we could attach this annotation to the __head macro but this appears to be impossible.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Tested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250127114334.1045857-6-ardb+git@google.com
2025-01-28  tools/bootconfig: Fix the wrong format specifier  (Luo Yifan, 1 file changed, +2/-2)

Use '%u' instead of '%d' for unsigned int.

Link: https://lore.kernel.org/all/20241105011048.201629-1-luoyifan@cmss.chinamobile.com/
Fixes: 973780011106 ("tools/bootconfig: Suppress non-error messages")
Signed-off-by: Luo Yifan <luoyifan@cmss.chinamobile.com>
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2025-01-27  vfio/nvgrace-gpu: Add GB200 SKU to the devid table  (Ankit Agrawal, 1 file changed, +2/-0)

NVIDIA is productizing the new Grace Blackwell superchip SKU bearing device ID 0x2941. Add the SKU devid to nvgrace_gpu_vfio_pci_table.

CC: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Ankit Agrawal <ankita@nvidia.com>
Link: https://lore.kernel.org/r/20250124183102.3976-5-ankita@nvidia.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2025-01-27  vfio/nvgrace-gpu: Check the HBM training and C2C link status  (Ankit Agrawal, 1 file changed, +72/-0)

In contrast to Grace Hopper systems, the HBM training has been moved out of the UEFI on the Grace Blackwell systems. This reduces the system bootup time significantly. The onus of checking whether the HBM training has completed thus falls on the module.

The HBM training status can be determined from a BAR0 register. Similarly, another BAR0 register exposes the status of the CPU-GPU chip-to-chip (C2C) cache coherent interconnect.

Based on testing, 30s is determined to be sufficient to ensure initialization completion on all the Grace based systems. Thus poll these registers for up to 30s. If the HBM training is not complete or if the C2C link is not ready, fail the probe.

While the wait is not required on Grace Hopper systems, it is beneficial to make the check to ensure the device is in an expected state. Hence keep it generalized to both generations.

Ensure that the BAR0 is enabled before accessing the registers.

CC: Alex Williamson <alex.williamson@redhat.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Ankit Agrawal <ankita@nvidia.com>
Link: https://lore.kernel.org/r/20250124183102.3976-4-ankita@nvidia.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
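A hedged sketch of the probe-time poll; the two status helpers and the poll constants are placeholders, not the driver's real register layout:

    #define STATUS_POLL_TIMEOUT_MS	(30 * 1000)	/* covers all Grace systems */
    #define STATUS_POLL_STEP_MS	100

    static int nvgrace_gpu_wait_device_ready(void __iomem *bar0)
    {
        unsigned long timeout = jiffies +
                                msecs_to_jiffies(STATUS_POLL_TIMEOUT_MS);

        do {
            /* hypothetical helpers reading the two BAR0 status registers */
            if (hbm_training_done(bar0) && c2c_link_ready(bar0))
                return 0;
            msleep(STATUS_POLL_STEP_MS);
        } while (time_before(jiffies, timeout));

        return -ENODEV;	/* not ready after 30s: fail the probe */
    }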
2025-01-27  vfio/nvgrace-gpu: Expose the blackwell device PF BAR1 to the VM  (Ankit Agrawal, 1 file changed, +45/-22)

There is a HW defect on Grace Hopper (GH) to support the Multi-Instance GPU (MIG) feature [1] that necessitated the presence of a 1G region carved out from the device memory and mapped as uncached. The 1G region is shown as a fake BAR (comprising region 2 and 3) to work around the issue.

The Grace Blackwell systems (GB) differ from GH systems in the following aspects:
1. The aforementioned HW defect is fixed on GB systems.
2. There is a usable BAR1 (region 2 and 3) on GB systems for the GPUdirect RDMA feature [2].

This patch accommodates those GB changes by showing the 64b physical device BAR1 (region 2 and 3) to the VM instead of the fake one. This takes care of both differences. Moreover, the entire device memory is exposed on GB as cacheable to the VM as there is no carveout required.

Link: https://www.nvidia.com/en-in/technologies/multi-instance-gpu/ [1]
Link: https://docs.nvidia.com/cuda/gpudirect-rdma/ [2]
Cc: Kevin Tian <kevin.tian@intel.com>
CC: Jason Gunthorpe <jgg@nvidia.com>
Suggested-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Ankit Agrawal <ankita@nvidia.com>
Link: https://lore.kernel.org/r/20250124183102.3976-3-ankita@nvidia.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2025-01-27  vfio/nvgrace-gpu: Read dvsec register to determine need for uncached resmem  (Ankit Agrawal, 1 file changed, +28/-0)

NVIDIA's recently introduced Grace Blackwell (GB) Superchip is a continuation of the Grace Hopper (GH) superchip that provides cache coherent access between the CPU and GPU to each other's memory over an internal proprietary chip-to-chip cache coherent interconnect.

There is a HW defect on GH systems to support the Multi-Instance GPU (MIG) feature [1] that necessitated the presence of a 1G region with uncached mapping carved out from the device memory. The 1G region is shown as a fake BAR (comprising region 2 and 3) to work around the issue. This is fixed on the GB systems.

The presence of the fix for the HW defect is communicated by the device firmware through the DVSEC PCI config register with ID 3. The module reads this to take a different codepath on GB vs GH.

Scan through the DVSEC registers to identify the correct one and use it to determine the presence of the fix. Save the value in the device's nvgrace_gpu_pci_core_device structure.

Link: https://www.nvidia.com/en-in/technologies/multi-instance-gpu/ [1]
CC: Jason Gunthorpe <jgg@nvidia.com>
CC: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Ankit Agrawal <ankita@nvidia.com>
Link: https://lore.kernel.org/r/20250124183102.3976-2-ankita@nvidia.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
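A sketch of the capability scan, assuming the kernel's standard DVSEC helper; the structure field name and the way the fix is encoded in the capability are assumptions:

    static void nvgrace_gpu_check_dvsec(struct pci_dev *pdev,
                                        struct nvgrace_gpu_pci_core_device *nvdev)
    {
        /* DVSEC ID 3 carries the "GH MIG defect fixed" indication per the
         * description above; has_mig_hw_bug_fix is a placeholder name. */
        u16 pos = pci_find_dvsec_capability(pdev, PCI_VENDOR_ID_NVIDIA, 3);

        nvdev->has_mig_hw_bug_fix = pos != 0;	/* GB vs GH codepath */
    }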
2025-01-27  virtio_blk: Add support for transport error recovery  (Israel Rukshin, 1 file changed, +25/-3)

Add support for proper cleanup and re-initialization of virtio-blk devices during the transport reset error recovery flow. This enhancement includes:

- Pre-reset handler (reset_prepare) to perform device-specific cleanup
- Post-reset handler (reset_done) to re-initialize the device

These changes allow the device to recover from various reset scenarios, ensuring proper functionality after a reset event occurs. Without this implementation, the device cannot properly recover from resets, potentially leading to undefined behavior or device malfunction.

This feature has been tested using PCI transport with Function Level Reset (FLR) as an example reset mechanism. The reset can be triggered manually via sysfs:

  echo 1 > /sys/bus/pci/devices/$PCI_ADDR/reset

Signed-off-by: Israel Rukshin <israelr@nvidia.com>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Message-Id: <1732690652-3065-3-git-send-email-israelr@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-01-27  virtio_pci: Add support for PCIe Function Level Reset  (Israel Rukshin, 3 files changed, +118/-25)

Implement support for Function Level Reset (FLR) in virtio_pci devices. This change adds reset_prepare and reset_done callbacks, allowing drivers to properly handle FLR operations. Without this patch, performing and recovering from an FLR is not possible for virtio_pci devices.

This implementation ensures proper FLR handling and recovery for both physical and virtual functions. The device reset can be triggered in case of error or manually via sysfs:

  echo 1 > /sys/bus/pci/devices/$PCI_ADDR/reset

Signed-off-by: Israel Rukshin <israelr@nvidia.com>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Message-Id: <1732690652-3065-2-git-send-email-israelr@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-01-27  vhost/net: Set num_buffers for virtio 1.0  (Akihiko Odaki, 1 file changed, +4/-1)

The specification says the device MUST set num_buffers to 1 if VIRTIO_NET_F_MRG_RXBUF has not been negotiated.

Fixes: 41e3e42108bc ("vhost/net: enable virtio 1.0")
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-Id: <20240915-v1-v1-1-f10d2cb5e759@daynix.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
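The rule in a vhost-style sketch; the surrounding header-copy logic is elided and the variable name is illustrative:

    /* In virtio 1.0 the num_buffers field exists in the header even
     * without VIRTIO_NET_F_MRG_RXBUF, and the spec says it MUST then
     * be set to 1 (only legacy devices omit it entirely). */
    if (!vhost_has_feature(vq, VIRTIO_NET_F_MRG_RXBUF))
        num_buffers = cpu_to_vhost16(vq, 1);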
2025-01-27  vdpa/octeon_ep: read vendor-specific PCI capability  (Shijith Thotton, 3 files changed, +58/-2)

Added support to read the vendor-specific PCI capability to identify the type of device being emulated.

Reviewed-by: Dan Carpenter <dan.carpenter@linaro.org>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Message-Id: <20250103153226.1933479-4-sthotton@marvell.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-01-27  virtio-pci: define type and header for PCI vendor data  (Shijith Thotton, 1 file changed, +14/-0)

Added macro definition for VIRTIO_PCI_CAP_VENDOR_CFG to identify the PCI vendor data type in the virtio_pci_cap structure. Defined a new struct virtio_pci_vndr_data for the vendor data capability header as per the specification.

Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Message-Id: <20250103153226.1933479-3-sthotton@marvell.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-01-27  vdpa/octeon_ep: handle device config change events  (Satha Rao, 1 file changed, +8/-0)

The first interrupt of the device is used to notify the host about device configuration changes, such as link status updates. The ISR configuration area is updated to indicate a config change event when triggered.

Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Reviewed-by: Dan Carpenter <dan.carpenter@linaro.org>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Message-Id: <20250103153226.1933479-2-sthotton@marvell.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-01-27  vdpa/octeon_ep: enable support for multiple interrupts per device  (Shijith Thotton, 3 files changed, +62/-39)

Updated the driver to utilize all the MSI-X interrupt vectors supported by each OCTEON endpoint VF, instead of relying on a single vector. Enabling more interrupts allows packets from multiple rings to be distributed across multiple cores, improving parallelism and performance.

Reviewed-by: Dan Carpenter <dan.carpenter@linaro.org>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Message-Id: <20250103153226.1933479-1-sthotton@marvell.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-01-27  vdpa: solidrun: Replace deprecated PCI functions  (Philipp Stanner, 1 file changed, +28/-29)

The PCI functions

  pcim_iomap_regions()
  pcim_iounmap_regions()
  pcim_iomap_table()

have been deprecated by the PCI subsystem. Replace these functions with their successors pcim_iomap_region() and pcim_iounmap_region().

Signed-off-by: Philipp Stanner <pstanner@redhat.com>
Message-Id: <20241219094428.21511-2-phasta@kernel.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Stefano Garzarella <sgarzare@redhat.com>
2025-01-27  s390/kdump: virtio-mem kdump support (CONFIG_PROC_VMCORE_DEVICE_RAM)  (David Hildenbrand, 2 files changed, +32/-8)

Let's add support for including virtio-mem device RAM in the crash dump, setting NEED_PROC_VMCORE_DEVICE_RAM, and implementing elfcorehdr_fill_device_ram_ptload_elf64().

To avoid code duplication, factor out the code to fill a PT_LOAD entry.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20241204125444.1734652-13-david@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-01-27  virtio-mem: support CONFIG_PROC_VMCORE_DEVICE_RAM  (David Hildenbrand, 2 files changed, +89/-0)

Let's implement the get_device_ram() vmcore callback, so architectures that select NEED_PROC_VMCORE_DEVICE_RAM, like s390 soon, can include that memory in a crash dump.

Merge ranges, and process ranges that might contain a mixture of plugged and unplugged memory, to reduce the total number of ranges.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20241204125444.1734652-12-david@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-01-27  virtio-mem: remember usable region size  (David Hildenbrand, 1 file changed, +7/-3)

Let's remember the usable region size, which will be helpful in kdump mode next.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20241204125444.1734652-11-david@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-01-27  virtio-mem: mark device ready before registering callbacks in kdump mode  (David Hildenbrand, 1 file changed, +3/-2)

After the callbacks are registered we may immediately get a callback. So mark the device ready before registering the callbacks.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20241204125444.1734652-10-david@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-01-27  fs/proc/vmcore: introduce PROC_VMCORE_DEVICE_RAM to detect device RAM ranges in 2nd kernel  (David Hildenbrand, 3 files changed, +183/-0)

s390 allocates+prepares the elfcore hdr in the dump (2nd) kernel, not in the crashed kernel. RAM provided by memory devices such as virtio-mem can only be detected using the device driver; when vmcore_init() is called, these device drivers are usually not loaded yet, or the devices did not get probed yet. Consequently, on s390 these RAM ranges will not be included in the crash dump, which makes the dump partially corrupt and is unfortunate.

Instead of deferring the vmcore_init() call to an (unclear?) later point, let's reuse the vmcore_cb infrastructure to obtain device RAM ranges as the device drivers probe the device and get access to this information. Then, we'll add these ranges to the vmcore, adding more PT_LOAD entries and updating the offsets+vmcore size.

Use a separate Kconfig option to be set by an architecture to include this code only if the arch really needs it. Further, we'll make the config depend on the relevant drivers (i.e., virtio_mem) once they implement support (next). The alternative of having a PROVIDE_PROC_VMCORE_DEVICE_RAM config option was dropped for now for simplicity.

The current target use case is s390, which only creates an elf64 elfcore, so focusing on elf64 is sufficient.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20241204125444.1734652-9-david@redhat.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-01-27  fs/proc/vmcore: factor out freeing a list of vmcore ranges  (David Hildenbrand, 2 files changed, +12/-8)

Let's factor it out into include/linux/crash_dump.h, from where we can use it also outside of vmcore.c later.

Acked-by: Baoquan He <bhe@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20241204125444.1734652-8-david@redhat.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-01-27  fs/proc/vmcore: factor out allocating a vmcore range and adding it to a list  (David Hildenbrand, 2 files changed, +16/-19)

Let's factor it out into include/linux/crash_dump.h, from where we can use it also outside of vmcore.c later.

Acked-by: Baoquan He <bhe@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20241204125444.1734652-7-david@redhat.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-01-27  fs/proc/vmcore: move vmcore definitions out of kcore.h  (David Hildenbrand, 3 files changed, +23/-23)

These vmcore defines are not related to /proc/kcore, so move them out.

We'll move "struct vmcoredd_node" to vmcore.c, because it is only used internally. While "struct vmcore" is only used internally for now, we're planning on using it from inline functions in crash_dump.h next, so move it to crash_dump.h.

While at it, rename "struct vmcore" to "struct vmcore_range", which is a more suitable name and will make the usage of it outside of vmcore.c clearer.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20241204125444.1734652-6-david@redhat.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
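The renamed descriptor, per the fields vmcore.c has historically tracked (a sketch; check crash_dump.h for the final form):

    struct vmcore_range {
        struct list_head list;
        unsigned long long paddr;	/* physical start of the range */
        unsigned long long size;
        loff_t offset;			/* position within /proc/vmcore */
    };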
2025-01-27  fs/proc/vmcore: prefix all pr_* with "vmcore:"  (David Hildenbrand, 1 file changed, +3/-1)

Let's use "vmcore: " as a prefix, converting the single "Kdump: vmcore not initialized" one to effectively be "vmcore: not initialized".

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20241204125444.1734652-5-david@redhat.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-01-27  fs/proc/vmcore: disallow vmcore modifications while the vmcore is open  (David Hildenbrand, 1 file changed, +34/-23)

The vmcoredd_update_size() call and its effects (size/offset changes) are currently completely unsynchronized, and will cause trouble when performed concurrently, or when done while someone is already reading the vmcore.

Let's protect all vmcore modifications by the vmcore_mutex, disallow vmcore modifications while the vmcore is open, and warn on vmcore modifications after the vmcore was already opened once: modifications while the vmcore is open are unsafe, and modifications after the vmcore was opened indicates trouble. Properly synchronize against concurrent opening of the vmcore.

No need to grab the mutex during mmap()/read(): after we opened the vmcore, modifications are impossible. It's worth noting that modifications after the vmcore was opened are completely unexpected, so failing if open, and warning if already opened (+closed again) is good enough.

This change not only handles concurrent adding of device dumps + concurrent reading of the vmcore properly, it also prepares for other mechanisms that will modify the vmcore.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20241204125444.1734652-4-david@redhat.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-01-27  fs/proc/vmcore: replace vmcoredd_mutex by vmcore_mutex  (David Hildenbrand, 1 file changed, +8/-9)

Now that we have a mutex that synchronizes against opening of the vmcore, let's use that one to replace vmcoredd_mutex: there is no need to have two separate ones.

This is a preparation for properly preventing vmcore modifications after the vmcore was opened.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20241204125444.1734652-3-david@redhat.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-01-27  fs/proc/vmcore: convert vmcore_cb_lock into vmcore_mutex  (David Hildenbrand, 1 file changed, +8/-7)

We want to protect vmcore modifications from concurrent opening of the vmcore, and also serialize vmcore modification.

(a) We can currently modify the vmcore after it was opened. This can happen if a vmcoredd is added after the vmcore module was initialized and already opened by user space. We want to fix that and prepare for new code wanting to serialize against concurrent opening.

(b) To handle it cleanly we need to protect the modifications against concurrent opening. As the modifications end up allocating memory and can sleep, we cannot rely on the spinlock.

Let's convert the spinlock into a mutex to prepare for further changes.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20241204125444.1734652-2-david@redhat.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-01-26  LoongArch: Extend the maximum number of watchpoints  (Tiezhu Yang, 2 files changed, +13/-3)

The maximum number of load/store watchpoints and fetch instruction watchpoints is 14 each according to the LoongArch Reference Manual, so extend the maximum number of watchpoints from 8 to 14 for ptrace.

The straightforward way would be to simply change 8 to 14 in the definition of struct user_watch_state, but that could break the UAPI, so add a new struct user_watch_state_v2 directly instead.

As far as I can tell, the only users of this struct in userspace are GDB and LLDB, and there are no software compatibility problems between the application and kernel according to the analysis. The compatibility problem has been considered while developing and testing: when applications in userspace get the watchpoint state, the specified length is no bigger than the size of struct user_watch_state or user_watch_state_v2, and the actual length is assigned as the minimal value of the application and kernel in the generic code of ptrace:

kernel/ptrace.c: ptrace_regset():

  kiov->iov_len = min(kiov->iov_len,
                      (__kernel_size_t) (regset->n * regset->size));

  if (req == PTRACE_GETREGSET)
          return copy_regset_to_user(task, view, regset_no, 0,
                                     kiov->iov_len, kiov->iov_base);
  else
          return copy_regset_from_user(task, view, regset_no, 0,
                                       kiov->iov_len, kiov->iov_base);

For example, there are four kinds of combinations, and all of them work well:

(1) "older kernel + older gdb": the actual length is 8+(8+8+4+4)*8=200;
(2) "newer kernel + newer gdb": the actual length is 8+(8+8+4+4)*14=344;
(3) "older kernel + newer gdb": the actual length is 8+(8+8+4+4)*8=200;
(4) "newer kernel + older gdb": the actual length is 8+(8+8+4+4)*8=200.

Link: https://loongson.github.io/LoongArch-Documentation/LoongArch-Vol1-EN.html#control-and-status-registers-related-to-watchpoints
Cc: stable@vger.kernel.org
Fixes: 1a69f7a161a7 ("LoongArch: ptrace: Expose hardware breakpoints to debuggers")
Reviewed-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Xi Ruoyao <xry111@xry111.site>
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
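A sketch of the layout implied by the size arithmetic above (8 + (8+8+4+4)*n); the field names follow the existing struct user_watch_state:

    struct user_watch_state_v2 {
        __u64 dbg_info;			/* 8 bytes */
        struct {
            __u64 addr;			/* 8 bytes */
            __u64 mask;			/* 8 bytes */
            __u32 ctrl;			/* 4 bytes */
            __u32 pad;			/* 4 bytes */
        } dbg_regs[14];			/* 8 + 14 * 24 = 344 bytes total */
    };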
2025-01-26  LoongArch: Change 8 to 14 for LOONGARCH_MAX_{BRP,WRP}  (Tiezhu Yang, 3 files changed, +76/-4)

The maximum number of load/store watchpoints and fetch instruction watchpoints is 14 each according to the LoongArch Reference Manual, so change 8 to 14 for the related code.

Link: https://loongson.github.io/LoongArch-Documentation/LoongArch-Vol1-EN.html#control-and-status-registers-related-to-watchpoints
Cc: stable@vger.kernel.org
Fixes: edffa33c7bb5 ("LoongArch: Add hardware breakpoints/watchpoints support")
Reviewed-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-01-26  LoongArch: Add debugfs entries to switch SFB/TSO state  (Huacai Chen, 4 files changed, +186/-7)

We need to switch SFB (Store Fill Buffer) and TSO (Total Store Order) state at runtime to debug memory management and KVM virtualization, so add two debugfs entries "sfb_state" and "tso_state" under the directory /sys/kernel/debug/loongarch.

Query SFB:   cat /sys/kernel/debug/loongarch/sfb_state
Enable SFB:  echo 1 > /sys/kernel/debug/loongarch/sfb_state
Disable SFB: echo 0 > /sys/kernel/debug/loongarch/sfb_state

Query TSO:   cat /sys/kernel/debug/loongarch/tso_state
Switch TSO:  echo [TSO] > /sys/kernel/debug/loongarch/tso_state

Available [TSO] states:
  0 (No Load No Store)    1 (All Load No Store)    3 (Same Load No Store)
  4 (No Load All Store)   5 (All Load All Store)   7 (Same Load All Store)

Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-01-26  LoongArch: Fix warnings during S3 suspend  (Huacai Chen, 3 files changed, +2/-3)

The enable_gpe_wakeup() function calls acpi_enable_all_wakeup_gpes(), and the latter may call the preempt_schedule_common() function, resulting in a thread switch and causing the CPU to be in an interrupt enabled state after the enable_gpe_wakeup() function returns, leading to the warnings as follows:

  [ C0] WARNING: ... at kernel/time/timekeeping.c:845 ktime_get+0xbc/0xc8
  [ C0] ...
  [ C0] Call Trace:
  [ C0] [<90000000002243b4>] show_stack+0x64/0x188
  [ C0] [<900000000164673c>] dump_stack_lvl+0x60/0x88
  [ C0] [<90000000002687e4>] __warn+0x8c/0x148
  [ C0] [<90000000015e9978>] report_bug+0x1c0/0x2b0
  [ C0] [<90000000016478e4>] do_bp+0x204/0x3b8
  [ C0] [<90000000025b1924>] exception_handlers+0x1924/0x10000
  [ C0] [<9000000000343bbc>] ktime_get+0xbc/0xc8
  [ C0] [<9000000000354c08>] tick_sched_timer+0x30/0xb0
  [ C0] [<90000000003408e0>] __hrtimer_run_queues+0x160/0x378
  [ C0] [<9000000000341f14>] hrtimer_interrupt+0x144/0x388
  [ C0] [<9000000000228348>] constant_timer_interrupt+0x38/0x48
  [ C0] [<90000000002feba4>] __handle_irq_event_percpu+0x64/0x1e8
  [ C0] [<90000000002fed48>] handle_irq_event_percpu+0x20/0x80
  [ C0] [<9000000000306b9c>] handle_percpu_irq+0x5c/0x98
  [ C0] [<90000000002fd4a0>] generic_handle_domain_irq+0x30/0x48
  [ C0] [<9000000000d0c7b0>] handle_cpu_irq+0x70/0xa8
  [ C0] [<9000000001646b30>] handle_loongarch_irq+0x30/0x48
  [ C0] [<9000000001646bc8>] do_vint+0x80/0xe0
  [ C0] [<90000000002aea1c>] finish_task_switch.isra.0+0x8c/0x2a8
  [ C0] [<900000000164e34c>] __schedule+0x314/0xa48
  [ C0] [<900000000164ead8>] schedule+0x58/0xf0
  [ C0] [<9000000000294a2c>] worker_thread+0x224/0x498
  [ C0] [<900000000029d2f0>] kthread+0xf8/0x108
  [ C0] [<9000000000221f28>] ret_from_kernel_thread+0xc/0xa4
  [ C0]
  [ C0] ---[ end trace 0000000000000000 ]---

The root cause is that acpi_enable_all_wakeup_gpes() uses a mutex to protect acpi_hw_enable_all_wakeup_gpes(), and acpi_ut_acquire_mutex() may cause a thread switch. Since there is no longer concurrent execution during loongarch_acpi_suspend(), we can call acpi_hw_enable_all_wakeup_gpes() directly in enable_gpe_wakeup().

The solution is similar to commit 22db06337f590d01 ("ACPI: sleep: Avoid breaking S3 wakeup due to might_sleep()").

Fixes: 366bb35a8e48 ("LoongArch: Add suspend (ACPI S3) support")
Signed-off-by: Qunqin Zhao <zhaoqunqin@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-01-26  module: sign with sha512 instead of sha1 by default  (Thorsten Leemhuis, 1 file changed, +1/-0)

Switch away from using sha1 for module signing by default and use the more modern sha512 instead, which is what among others Arch, Fedora, RHEL, and Ubuntu are currently using for their kernels.

Sha1 has not been considered secure against well-funded opponents since 2005 [1]; since 2011 the NIST and other organizations furthermore recommended its replacement [2]. This is why OpenSSL on RHEL9, Fedora Linux 41+ [3], and likely some other current and future distributions reject the creation of sha1 signatures, which leads to a build error of allmodconfig configurations:

  80A20474797F0000:error:03000098:digital envelope routines:do_sigver_init:invalid digest:crypto/evp/m_sigver.c:342:
  make[4]: *** [.../certs/Makefile:53: certs/signing_key.pem] Error 1
  make[4]: *** Deleting file 'certs/signing_key.pem'
  make[4]: *** Waiting for unfinished jobs....
  make[3]: *** [.../scripts/Makefile.build:478: certs] Error 2
  make[2]: *** [.../Makefile:1936: .] Error 2
  make[1]: *** [.../Makefile:224: __sub-make] Error 2
  make[1]: Leaving directory '...'
  make: *** [Makefile:224: __sub-make] Error 2

This change makes allmodconfig work again and sets a default that is more appropriate for current and future users, too.

Link: https://www.schneier.com/blog/archives/2005/02/cryptanalysis_o.html [1]
Link: https://csrc.nist.gov/projects/hash-functions [2]
Link: https://fedoraproject.org/wiki/Changes/OpenSSLDistrustsha1SigVer [3]
Signed-off-by: Thorsten Leemhuis <linux@leemhuis.info>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Tested-by: kdevops <kdevops@lists.linux.dev> [0]
Link: https://github.com/linux-kdevops/linux-modules-kpd/actions/runs/11420092929/job/31775404330 [0]
Link: https://lore.kernel.org/r/52ee32c0c92afc4d3263cea1f8a1cdc809728aff.1729088288.git.linux@leemhuis.info
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
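The resulting default corresponds to a configuration fragment like this sketch (option names per the module-signing Kconfig):

    CONFIG_MODULE_SIG=y
    CONFIG_MODULE_SIG_SHA512=y
    CONFIG_MODULE_SIG_HASH="sha512"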
2025-01-26  module: Don't fail module loading when setting ro_after_init section RO failed  (Christophe Leroy, 1 file changed, +4/-3)

Once module init has succeeded, it is too late to cancel loading. If setting the ro_after_init data section to read-only fails, all we can do is inform the user through a warning.

Reported-by: Thomas Gleixner <tglx@linutronix.de>
Closes: https://lore.kernel.org/all/20230915082126.4187913-1-ruanjinjie@huawei.com/
Fixes: d1909c022173 ("module: Don't ignore errors from set_memory_XX()")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Link: https://lore.kernel.org/r/d6c81f38da76092de8aacc8c93c4c65cb0fe48b8.1733427536.git.christophe.leroy@csgroup.eu
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
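A sketch of the warn-don't-fail behaviour described above; the helper name follows the split in the next entry, and the message wording is illustrative:

    /* After successful init there is no way back: report, don't fail. */
    ret = module_enable_rodata_ro_after_init(mod);
    if (ret)
        pr_warn("%s: failed to set ro_after_init section read-only, "
                "the data may remain writable\n", mod->name);
    ret = 0;	/* loading continues regardless */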
2025-01-26  module: Split module_enable_rodata_ro()  (Christophe Leroy, 3 files changed, +13/-7)

module_enable_rodata_ro() is called twice, once before module init to set rodata sections read-only and once after module init to set the rodata_after_init section read-only.

The second time, only the rodata_after_init section needs to be set to read-only; there is no need to re-apply it to already set rodata. Split module_enable_rodata_ro() in two.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Tested-by: Daniel Gomez <da.gomez@samsung.com>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Link: https://lore.kernel.org/r/e3b6ff0df7eac281c58bb02cecaeb377215daff3.1733427536.git.christophe.leroy@csgroup.eu
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-01-26  module: sysfs: Use const 'struct bin_attribute'  (Thomas Weißschuh, 1 file changed, +10/-10)

The sysfs core is switching to 'const struct bin_attribute's. Prepare for that.

Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
Reviewed-by: Petr Pavlu <petr.pavlu@suse.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Link: https://lore.kernel.org/r/20241227-sysfs-const-bin_attr-module-v2-6-e267275f0f37@weissschuh.net
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
2025-01-26  module: sysfs: Add notes attributes through attribute_group  (Thomas Weißschuh, 1 file changed, +28/-26)

A kobject is meant to manage the lifecycle of some resource. However the module sysfs code only creates a kobject to get a "notes" subdirectory in sysfs. This can be achieved easier and cheaper by using a sysfs group.

Switch the notes attribute code to such a group, similar to how the section allocation in the same file already works.

Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
Reviewed-by: Petr Pavlu <petr.pavlu@suse.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Link: https://lore.kernel.org/r/20241227-sysfs-const-bin_attr-module-v2-5-e267275f0f37@weissschuh.net
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
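A sketch of the group-based approach (names assumed), which yields the same /sys/module/<name>/notes/ directory without managing a kobject lifecycle:

    static struct bin_attribute *notes_bin_attrs[] = {
        /* filled at load time from the module's SHT_NOTE sections */
        NULL,
    };

    static const struct attribute_group notes_group = {
        .name      = "notes",	/* creates the "notes" subdirectory */
        .bin_attrs = notes_bin_attrs,
    };

    /* registered with sysfs_create_group(&mod->mkobj.kobj, &notes_group) */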