path: root/tools/perf/scripts/python/export-to-postgresql.py
2024-09-27  [tree-wide] finally take no_llseek out  (Al Viro, 221 files, -270/+0)
no_llseek had been defined to NULL two years ago, in commit 868941b14441 ("fs: remove no_llseek") To quote that commit, At -rc1 we'll need do a mechanical removal of no_llseek - git grep -l -w no_llseek | grep -v porting.rst | while read i; do sed -i '/\<no_llseek\>/d' $i done would do it. Unfortunately, that hadn't been done. Linus, could you do that now, so that we could finally put that thing to rest? All instances are of the form .llseek = no_llseek, so it's obviously safe. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
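To illustrate what the mechanical removal amounts to, here is a hypothetical driver (the struct and handler names are invented, not from the patch); every affected file simply loses its .llseek initializer:

    /* before the tree-wide sed */
    static const struct file_operations foo_fops = {
            .owner  = THIS_MODULE,
            .open   = foo_open,
            .read   = foo_read,
            .llseek = no_llseek,    /* the only form that ever appears */
    };

    /* after: the line is simply deleted; behaviour is unchanged since
     * no_llseek has been NULL since commit 868941b14441 */
    static const struct file_operations foo_fops = {
            .owner  = THIS_MODULE,
            .open   = foo_open,
            .read   = foo_read,
    };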
2024-09-26  smb: client: make SHA-512 TFM ephemeral  (Enzo Matsumiya, 6 files, -47/+17)
The SHA-512 shash TFM is used only briefly during Session Setup stage, when computing SMB 3.1.1 preauth hash. There's no need to keep it allocated in servers' secmech the whole time, so keep its lifetime inside smb311_update_preauth_hash(). This also makes smb311_crypto_shash_allocate() redundant, so expose smb3_crypto_shash_allocate() and use that. Signed-off-by: Enzo Matsumiya <ematsumiya@suse.de> Signed-off-by: Steve French <stfrench@microsoft.com>
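A minimal sketch of the ephemeral-TFM pattern described above, using the generic crypto shash API; the helper name and parameters are illustrative, not the actual cifs code:

    #include <crypto/hash.h>

    static int compute_preauth_hash(const u8 *data, unsigned int len, u8 *hash)
    {
            struct crypto_shash *sha512;
            int rc;

            /* allocate the TFM only for the duration of this call ... */
            sha512 = crypto_alloc_shash("sha512", 0, 0);
            if (IS_ERR(sha512))
                    return PTR_ERR(sha512);

            rc = crypto_shash_tfm_digest(sha512, data, len, hash);

            /* ... and free it before returning instead of caching it in secmech */
            crypto_free_shash(sha512);
            return rc;
    }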
2024-09-26  smb: client: make HMAC-MD5 TFM ephemeral  (Enzo Matsumiya, 2 files, -84/+50)
The HMAC-MD5 shash TFM is used only briefly during Session Setup stage, when computing NTLMv2 hashes. There's no need to keep it allocated in servers' secmech the whole time, so keep its lifetime inside setup_ntlmv2_rsp(). Signed-off-by: Enzo Matsumiya <ematsumiya@suse.de> Signed-off-by: Steve French <stfrench@microsoft.com>
2024-09-26  smb: client: stop flooding dmesg in smb2_calc_signature()  (Paulo Alcantara, 1 file, -1/+1)
When several mounts share the same credential and the client can't re-establish an SMB session due to an expired Kerberos ticket or a rotated password, smb2_calc_signature() will end up flooding dmesg when it can't find an SMB session to calculate signatures with. Signed-off-by: Paulo Alcantara (Red Hat) <pc@manguebit.com> Signed-off-by: Steve French <stfrench@microsoft.com>
2024-09-26  smb: client: allocate crypto only for primary server  (Enzo Matsumiya, 2 files, -9/+18)
For extra channels, point ->secmech.{enc,dec} to the primary server ones. Signed-off-by: Enzo Matsumiya <ematsumiya@suse.de> Signed-off-by: Steve French <stfrench@microsoft.com>
2024-09-26  smb: client: fix UAF in async decryption  (Enzo Matsumiya, 2 files, -19/+34)
Doing an async decryption (large read) crashes with a slab-use-after-free way down in the crypto API. Reproducer: # mount.cifs -o ...,seal,esize=1 //srv/share /mnt # dd if=/mnt/largefile of=/dev/null ... [ 194.196391] ================================================================== [ 194.196844] BUG: KASAN: slab-use-after-free in gf128mul_4k_lle+0xc1/0x110 [ 194.197269] Read of size 8 at addr ffff888112bd0448 by task kworker/u77:2/899 [ 194.197707] [ 194.197818] CPU: 12 UID: 0 PID: 899 Comm: kworker/u77:2 Not tainted 6.11.0-lku-00028-gfca3ca14a17a-dirty #43 [ 194.198400] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.2-3-gd478f380-prebuilt.qemu.org 04/01/2014 [ 194.199046] Workqueue: smb3decryptd smb2_decrypt_offload [cifs] [ 194.200032] Call Trace: [ 194.200191] <TASK> [ 194.200327] dump_stack_lvl+0x4e/0x70 [ 194.200558] ? gf128mul_4k_lle+0xc1/0x110 [ 194.200809] print_report+0x174/0x505 [ 194.201040] ? __pfx__raw_spin_lock_irqsave+0x10/0x10 [ 194.201352] ? srso_return_thunk+0x5/0x5f [ 194.201604] ? __virt_addr_valid+0xdf/0x1c0 [ 194.201868] ? gf128mul_4k_lle+0xc1/0x110 [ 194.202128] kasan_report+0xc8/0x150 [ 194.202361] ? gf128mul_4k_lle+0xc1/0x110 [ 194.202616] gf128mul_4k_lle+0xc1/0x110 [ 194.202863] ghash_update+0x184/0x210 [ 194.203103] shash_ahash_update+0x184/0x2a0 [ 194.203377] ? __pfx_shash_ahash_update+0x10/0x10 [ 194.203651] ? srso_return_thunk+0x5/0x5f [ 194.203877] ? crypto_gcm_init_common+0x1ba/0x340 [ 194.204142] gcm_hash_assoc_remain_continue+0x10a/0x140 [ 194.204434] crypt_message+0xec1/0x10a0 [cifs] [ 194.206489] ? __pfx_crypt_message+0x10/0x10 [cifs] [ 194.208507] ? srso_return_thunk+0x5/0x5f [ 194.209205] ? srso_return_thunk+0x5/0x5f [ 194.209925] ? srso_return_thunk+0x5/0x5f [ 194.210443] ? srso_return_thunk+0x5/0x5f [ 194.211037] decrypt_raw_data+0x15f/0x250 [cifs] [ 194.212906] ? __pfx_decrypt_raw_data+0x10/0x10 [cifs] [ 194.214670] ? srso_return_thunk+0x5/0x5f [ 194.215193] smb2_decrypt_offload+0x12a/0x6c0 [cifs] This is because TFM is being used in parallel. Fix this by allocating a new AEAD TFM for async decryption, but keep the existing one for synchronous READ cases (similar to what is done in smb3_calc_signature()). Also remove the calls to aead_request_set_callback() and crypto_wait_req() since it's always going to be a synchronous operation. Signed-off-by: Enzo Matsumiya <ematsumiya@suse.de> Signed-off-by: Steve French <stfrench@microsoft.com>
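A hedged sketch of the approach described: the offloaded (async) path gets its own AEAD TFM instead of reusing the one cached in server->secmech, so the two can never run concurrently. Function name, key handling and algorithm selection are simplified:

    #include <crypto/aead.h>

    static struct crypto_aead *alloc_private_decrypt_tfm(const u8 *key,
                                                         unsigned int keylen,
                                                         bool gcm)
    {
            struct crypto_aead *tfm;

            tfm = crypto_alloc_aead(gcm ? "gcm(aes)" : "ccm(aes)", 0, 0);
            if (IS_ERR(tfm))
                    return tfm;

            if (crypto_aead_setkey(tfm, key, keylen)) {
                    crypto_free_aead(tfm);
                    return ERR_PTR(-EINVAL);
            }
            /* caller uses it for this request only, then crypto_free_aead() */
            return tfm;
    }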
2024-09-26  netfs: Fix write oops in generic/346 (9p) and generic/074 (cifs)  (David Howells, 3 files, -22/+65)
In netfslib, a buffered writeback operation has a 'write queue' of folios that are being written, held in a linear sequence of folio_queue structs. The 'issuer' adds new folio_queues on the leading edge of the queue and populates each one progressively; the 'collector' pops them off the trailing edge and discards them and the folios they point to as they are consumed. The queue is required to always retain at least one folio_queue structure. This allows the queue to be accessed without locking and with just a bit of barriering. When a new subrequest is prepared, its ->io_iter iterator is pointed at the current end of the write queue and then the iterator is extended as more data is added to the queue until the subrequest is committed. Now, the problem is that the folio_queue at the leading edge of the write queue when a subrequest is prepared might have been entirely consumed - but not yet removed from the queue as it is the only remaining one and is preventing the queue from collapsing. So, what happens is that subreq->io_iter is pointed at the spent folio_queue, then a new folio_queue is added, and, at that point, the collector is at entirely at liberty to immediately delete the spent folio_queue. This leaves the subreq->io_iter pointing at a freed object. If the system is lucky, iterate_folioq() sees ->io_iter, sees the as-yet uncorrupted freed object and advances to the next folio_queue in the queue. In the case seen, however, the freed object gets recycled and put back onto the queue at the tail and filled to the end. This confuses iterate_folioq() and it tries to step ->next, which may be NULL - resulting in an oops. Fix this by the following means: (1) When preparing a write subrequest, make sure there's a folio_queue struct with space in it at the leading edge of the queue. A function to make space is split out of the function to append a folio so that it can be called for this purpose. (2) If the request struct iterator is pointing to a completely spent folio_queue when we make space, then advance the iterator to the newly allocated folio_queue. The subrequest's iterator will then be set from this. The oops could be triggered using the generic/346 xfstest with a filesystem on9P over TCP with cache=loose. The oops looked something like: BUG: kernel NULL pointer dereference, address: 0000000000000008 #PF: supervisor read access in kernel mode #PF: error_code(0x0000) - not-present page ... RIP: 0010:_copy_from_iter+0x2db/0x530 ... Call Trace: <TASK> ... p9pdu_vwritef+0x3d8/0x5d0 p9_client_prepare_req+0xa8/0x140 p9_client_rpc+0x81/0x280 p9_client_write+0xcf/0x1c0 v9fs_issue_write+0x87/0xc0 netfs_advance_write+0xa0/0xb0 netfs_write_folio.isra.0+0x42d/0x500 netfs_writepages+0x15a/0x1f0 do_writepages+0xd1/0x220 filemap_fdatawrite_wbc+0x5c/0x80 v9fs_mmap_vm_close+0x7d/0xb0 remove_vma+0x35/0x70 vms_complete_munmap_vmas+0x11a/0x170 do_vmi_align_munmap+0x17d/0x1c0 do_vmi_munmap+0x13e/0x150 __vm_munmap+0x92/0xd0 __x64_sys_munmap+0x17/0x20 do_syscall_64+0x80/0xe0 entry_SYSCALL_64_after_hwframe+0x71/0x79 This also fixed a similar-looking issue with cifs and generic/074. 
Fixes: cd0277ed0c18 ("netfs: Use new folio_queue data type and iterator instead of xarray iter") Reported-by: kernel test robot <oliver.sang@intel.com> Closes: https://lore.kernel.org/oe-lkp/202409180928.f20b5a08-oliver.sang@intel.com Closes: https://lore.kernel.org/oe-lkp/202409131438.3f225fbf-oliver.sang@intel.com Signed-off-by: David Howells <dhowells@redhat.com> Tested-by: kernel test robot <oliver.sang@intel.com> cc: Eric Van Hensbergen <ericvh@kernel.org> cc: Latchesar Ionkov <lucho@ionkov.net> cc: Dominique Martinet <asmadeus@codewreck.org> cc: Christian Schoenebeck <linux_oss@crudebyte.com> cc: Paulo Alcantara <pc@manguebit.com> cc: Jeff Layton <jlayton@kernel.org> cc: v9fs@lists.linux.dev cc: linux-cifs@vger.kernel.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org Signed-off-by: Steve French <stfrench@microsoft.com>
2024-09-26  ocfs2: fix uninit-value in ocfs2_get_block()  (Joseph Qi, 1 file, -3/+2)
syzbot reported an uninit-value BUG: BUG: KMSAN: uninit-value in ocfs2_get_block+0xed2/0x2710 fs/ocfs2/aops.c:159 ocfs2_get_block+0xed2/0x2710 fs/ocfs2/aops.c:159 do_mpage_readpage+0xc45/0x2780 fs/mpage.c:225 mpage_readahead+0x43f/0x840 fs/mpage.c:374 ocfs2_readahead+0x269/0x320 fs/ocfs2/aops.c:381 read_pages+0x193/0x1110 mm/readahead.c:160 page_cache_ra_unbounded+0x901/0x9f0 mm/readahead.c:273 do_page_cache_ra mm/readahead.c:303 [inline] force_page_cache_ra+0x3b1/0x4b0 mm/readahead.c:332 force_page_cache_readahead mm/internal.h:347 [inline] generic_fadvise+0x6b0/0xa90 mm/fadvise.c:106 vfs_fadvise mm/fadvise.c:185 [inline] ksys_fadvise64_64 mm/fadvise.c:199 [inline] __do_sys_fadvise64 mm/fadvise.c:214 [inline] __se_sys_fadvise64 mm/fadvise.c:212 [inline] __x64_sys_fadvise64+0x1fb/0x3a0 mm/fadvise.c:212 x64_sys_call+0xe11/0x3ba0 arch/x86/include/generated/asm/syscalls_64.h:222 do_syscall_x64 arch/x86/entry/common.c:52 [inline] do_syscall_64+0xcd/0x1e0 arch/x86/entry/common.c:83 entry_SYSCALL_64_after_hwframe+0x77/0x7f This is because when ocfs2_extent_map_get_blocks() fails, p_blkno is uninitialized. So the error log will trigger the above uninit-value access. The error log is out-of-date since get_blocks() was removed long time ago. And the error code will be logged in ocfs2_extent_map_get_blocks() once ocfs2_get_cluster() fails, so fix this by only logging inode and block. Link: https://syzkaller.appspot.com/bug?extid=9709e73bae885b05314b Link: https://lkml.kernel.org/r/20240925090600.3643376-1-joseph.qi@linux.alibaba.com Fixes: ccd979bdbce9 ("[PATCH] OCFS2: The Second Oracle Cluster Filesystem") Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com> Reported-by: syzbot+9709e73bae885b05314b@syzkaller.appspotmail.com Tested-by: syzbot+9709e73bae885b05314b@syzkaller.appspotmail.com Cc: Heming Zhao <heming.zhao@suse.com> Cc: Mark Fasheh <mark@fasheh.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Junxiao Bi <junxiao.bi@oracle.com> Cc: Changwei Ge <gechangwei@live.cn> Cc: Gang He <ghe@suse.com> Cc: Jun Piao <piaojun@huawei.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-09-26  zram: don't free statically defined names  (Andrey Skvortsov, 1 file, -2/+4)
When CONFIG_ZRAM_MULTI_COMP isn't set ZRAM_SECONDARY_COMP can hold default_compressor, because it's the same offset as ZRAM_PRIMARY_COMP, so we need to make sure that we don't attempt to kfree() the statically defined compressor name. This is detected by KASAN. ================================================================== Call trace: kfree+0x60/0x3a0 zram_destroy_comps+0x98/0x198 [zram] zram_reset_device+0x22c/0x4a8 [zram] reset_store+0x1bc/0x2d8 [zram] dev_attr_store+0x44/0x80 sysfs_kf_write+0xfc/0x188 kernfs_fop_write_iter+0x28c/0x428 vfs_write+0x4dc/0x9b8 ksys_write+0x100/0x1f8 __arm64_sys_write+0x74/0xb8 invoke_syscall+0xd8/0x260 el0_svc_common.constprop.0+0xb4/0x240 do_el0_svc+0x48/0x68 el0_svc+0x40/0xc8 el0t_64_sync_handler+0x120/0x130 el0t_64_sync+0x190/0x198 ================================================================== Link: https://lkml.kernel.org/r/20240923164843.1117010-1-andrej.skvortzov@gmail.com Fixes: 684826f8271a ("zram: free secondary algorithms names") Signed-off-by: Andrey Skvortsov <andrej.skvortzov@gmail.com> Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org> Reported-by: Venkat Rao Bagalkote <venkat88@linux.vnet.ibm.com> Closes: https://lore.kernel.org/lkml/57130e48-dbb6-4047-a8c7-ebf5aaea93f4@linux.vnet.ibm.com/ Tested-by: Venkat Rao Bagalkote <venkat88@linux.vnet.ibm.com> Cc: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Cc: Jens Axboe <axboe@kernel.dk> Cc: Minchan Kim <minchan@kernel.org> Cc: Sergey Senozhatsky <senozhatsky@chromium.org> Cc: Venkat Rao Bagalkote <venkat88@linux.vnet.ibm.com> Cc: Chris Li <chrisl@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
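A simplified sketch of the guard described above (the real zram_destroy_comps() does more work; this only shows the kfree() check against the statically defined default_compressor name):

    static void zram_destroy_comps(struct zram *zram)
    {
            u32 prio;

            for (prio = ZRAM_PRIMARY_COMP; prio < ZRAM_MAX_COMPS; prio++) {
                    /* never kfree() the statically defined default name */
                    if (zram->comp_algs[prio] &&
                        zram->comp_algs[prio] != default_compressor)
                            kfree(zram->comp_algs[prio]);
                    zram->comp_algs[prio] = NULL;
            }
    }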
2024-09-26  memory tiers: use default_dram_perf_ref_source in log message  (Huang Ying, 1 file, -3/+3)
Commit 3718c02dbd4c ("acpi, hmat: calculate abstract distance with HMAT") added a default_dram_perf_ref_source variable that was initialized but never used. This causes kmemleak to report the following memory leak: unreferenced object 0xff11000225a47b60 (size 16): comm "swapper/0", pid 1, jiffies 4294761654 hex dump (first 16 bytes): 41 43 50 49 20 48 4d 41 54 00 c1 4b 7d b7 75 7c ACPI HMAT..K}.u| backtrace (crc e6d0e7b2): [<ffffffff95d5afdb>] __kmalloc_node_track_caller_noprof+0x36b/0x440 [<ffffffff95c276d6>] kstrdup+0x36/0x60 [<ffffffff95dfabfa>] mt_set_default_dram_perf+0x23a/0x2c0 [<ffffffff9ad64733>] hmat_init+0x2b3/0x660 [<ffffffff95203cec>] do_one_initcall+0x11c/0x5c0 [<ffffffff9ac9cfc4>] do_initcalls+0x1b4/0x1f0 [<ffffffff9ac9d52e>] kernel_init_freeable+0x4ae/0x520 [<ffffffff97c789cc>] kernel_init+0x1c/0x150 [<ffffffff952aecd1>] ret_from_fork+0x31/0x70 [<ffffffff9520b18a>] ret_from_fork_asm+0x1a/0x30 This reminds us that we forget to use the performance data source information. So, use the variable in the error log message to help identify the root cause of inconsistent performance number. Link: https://lkml.kernel.org/r/87y13mvo0n.fsf@yhuang6-desk2.ccr.corp.intel.com Fixes: 3718c02dbd4c ("acpi, hmat: calculate abstract distance with HMAT") Signed-off-by: "Huang, Ying" <ying.huang@intel.com> Reported-by: Waiman Long <longman@redhat.com> Acked-by: Waiman Long <longman@redhat.com> Cc: Alistair Popple <apopple@nvidia.com> Cc: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-09-26Revert "list: test: fix tests for list_cut_position()"Guenter Roeck1-6/+0
This reverts commit e620799c414a035dea1208bcb51c869744931dbb. The commit introduces unit test failures. Expected cur == &entries[i], but cur == 0000037fffadfd80 &entries[i] == 0000037fffadfd60 # list_test_list_cut_position: pass:0 fail:1 skip:0 total:1 not ok 21 list_test_list_cut_position # list_test_list_cut_before: EXPECTATION FAILED at lib/list-test.c:444 Expected cur == &entries[i], but cur == 0000037fffa9fd70 &entries[i] == 0000037fffa9fd60 # list_test_list_cut_before: EXPECTATION FAILED at lib/list-test.c:444 Expected cur == &entries[i], but cur == 0000037fffa9fd80 &entries[i] == 0000037fffa9fd70 Revert it. Link: https://lkml.kernel.org/r/20240922150507.553814-1-linux@roeck-us.net Fixes: e620799c414a ("list: test: fix tests for list_cut_position()") Signed-off-by: Guenter Roeck <linux@roeck-us.net> Cc: I Hsin Cheng <richard120310@gmail.com> Cc: David Gow <davidgow@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-09-26  kselftests: mm: fix wrong __NR_userfaultfd value  (Muhammad Usama Anjum, 1 file, -1/+1)
grep -rnIF "#define __NR_userfaultfd"
tools/include/uapi/asm-generic/unistd.h:681:#define __NR_userfaultfd 282
arch/x86/include/generated/uapi/asm/unistd_32.h:374:#define __NR_userfaultfd 374
arch/x86/include/generated/uapi/asm/unistd_64.h:327:#define __NR_userfaultfd 323
arch/x86/include/generated/uapi/asm/unistd_x32.h:282:#define __NR_userfaultfd (__X32_SYSCALL_BIT + 323)
arch/arm/include/generated/uapi/asm/unistd-eabi.h:347:#define __NR_userfaultfd (__NR_SYSCALL_BASE + 388)
arch/arm/include/generated/uapi/asm/unistd-oabi.h:359:#define __NR_userfaultfd (__NR_SYSCALL_BASE + 388)
include/uapi/asm-generic/unistd.h:681:#define __NR_userfaultfd 282

The syscall number depends on the architecture; the data above shows 374 on x86 and 323 on x86_64. The value of __NR_userfaultfd changed to 282 when asm-generic/unistd.h was included, which makes the test fail every time, since the correct number of this syscall on x86_64 is 323. Fix this by including the asm/unistd.h header instead. Link: https://lkml.kernel.org/r/20240923053836.3270393-1-usama.anjum@collabora.com Fixes: a5c6bc590094 ("selftests/mm: remove local __NR_* definitions") Signed-off-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Reviewed-by: Shuah Khan <skhan@linuxfoundation.org> Reviewed-by: David Hildenbrand <david@redhat.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-09-26  compiler.h: specify correct attribute for .rodata..c_jump_table  (Tiezhu Yang, 1 file, -1/+1)
Currently, there is an assembler message when generating kernel/bpf/core.o under CONFIG_OBJTOOL with LoongArch compiler toolchain: Warning: setting incorrect section attributes for .rodata..c_jump_table This is because the section ".rodata..c_jump_table" should be readonly, but there is a "W" (writable) part of the flags: $ readelf -S kernel/bpf/core.o | grep -A 1 "rodata..c" [34] .rodata..c_j[...] PROGBITS 0000000000000000 0000d2e0 0000000000000800 0000000000000000 WA 0 0 8 There is no above issue on x86 due to the generated section flag is only "A" (allocatable). In order to silence the warning on LoongArch, specify the attribute like ".rodata..c_jump_table,\"a\",@progbits #" explicitly, then the section attribute of ".rodata..c_jump_table" must be readonly in the kernel/bpf/core.o file. Before: $ objdump -h kernel/bpf/core.o | grep -A 1 "rodata..c" 21 .rodata..c_jump_table 00000800 0000000000000000 0000000000000000 0000d2e0 2**3 CONTENTS, ALLOC, LOAD, RELOC, DATA After: $ objdump -h kernel/bpf/core.o | grep -A 1 "rodata..c" 21 .rodata..c_jump_table 00000800 0000000000000000 0000000000000000 0000d2e0 2**3 CONTENTS, ALLOC, LOAD, RELOC, READONLY, DATA By the way, AFAICT, maybe the root cause is related with the different compiler behavior of various archs, so to some extent this change is a workaround for LoongArch, and also there is no effect for x86 which is the only port supported by objtool before LoongArch with this patch. Link: https://lkml.kernel.org/r/20240924062710.1243-1-yangtiezhu@loongson.cn Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Cc: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: <stable@vger.kernel.org> [6.9+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-09-26  mm/damon/Kconfig: update DAMON doc URL  (Diederik de Haas, 1 file, -1/+1)
The old URL doesn't really work anymore and as the documentation has been integrated in the main kernel documentation site, change the URL to point to that. Link: https://lkml.kernel.org/r/20240924082331.11499-1-didi.debian@cknow.org Signed-off-by: Diederik de Haas <didi.debian@cknow.org> Reviewed-by: SeongJae Park <sj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-09-26  mm: kfence: fix elapsed time for allocated/freed track  (qiwu.chen, 1 file, -1/+1)
Fix elapsed time for the allocated/freed track introduced by commit 62e73fd85d7bf. Link: https://lkml.kernel.org/r/20240924085004.75401-1-qiwu.chen@transsion.com Fixes: 62e73fd85d7b ("mm: kfence: print the elapsed time for allocated/freed track") Signed-off-by: qiwu.chen <qiwu.chen@transsion.com> Reviewed-by: Marco Elver <elver@google.com> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-09-26  ocfs2: fix deadlock in ocfs2_get_system_file_inode  (Mohammed Anees, 1 file, -1/+7)
syzbot has found a possible deadlock in ocfs2_get_system_file_inode [1]. The scenario is the classic ABBA ordering problem:

CPU0                                     CPU1
lock(&ocfs2_file_ip_alloc_sem_key);
                                         lock(&osb->system_file_mutex);
                                         lock(&ocfs2_file_ip_alloc_sem_key);
lock(&osb->system_file_mutex);

The call chains which can lead to this are:

CPU0: ocfs2_mknod
        - lock(&ocfs2_file_ip_alloc_sem_key);
        ...
        ocfs2_get_system_file_inode
        - lock(&osb->system_file_mutex);

CPU1: ocfs2_fill_super
        - lock(&osb->system_file_mutex);
        ...
        ocfs2_read_virt_blocks
        - lock(&ocfs2_file_ip_alloc_sem_key);

Resolve this by converting the down_read in ocfs2_read_virt_blocks to a down_read_trylock. [1] https://syzkaller.appspot.com/bug?extid=e0055ea09f1f5e6fabdd Link: https://lkml.kernel.org/r/20240924093257.7181-1-pvmohammedanees2003@gmail.com Signed-off-by: Mohammed Anees <pvmohammedanees2003@gmail.com> Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com> Reported-by: <syzbot+e0055ea09f1f5e6fabdd@syzkaller.appspotmail.com> Closes: https://syzkaller.appspot.com/bug?extid=e0055ea09f1f5e6fabdd Tested-by: syzbot+e0055ea09f1f5e6fabdd@syzkaller.appspotmail.com Cc: Mark Fasheh <mark@fasheh.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Junxiao Bi <junxiao.bi@oracle.com> Cc: Changwei Ge <gechangwei@live.cn> Cc: Gang He <ghe@suse.com> Cc: Jun Piao <piaojun@huawei.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
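A rough sketch of that conversion in ocfs2_read_virt_blocks(), assuming the rwsem involved is the inode's ip_alloc_sem referenced by the lockdep key above; the error code and surrounding details are illustrative, not the exact patch:

    /* back off instead of blocking, so the AB-BA ordering can't deadlock */
    if (!down_read_trylock(&OCFS2_I(inode)->ip_alloc_sem)) {
            rc = -EAGAIN;
            goto out;
    }

    rc = ocfs2_extent_map_get_blocks(inode, v_block, &p_block, NULL, NULL);

    up_read(&OCFS2_I(inode)->ip_alloc_sem);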
2024-09-26  ocfs2: reserve space for inline xattr before attaching reflink tree  (Gautham Ananthakrishna, 2 files, -12/+25)
One of our customers reported a crash and a corrupted ocfs2 filesystem. The crash was due to the detection of corruption. Upon troubleshooting, the fsck -fn output showed the below corruption [EXTENT_LIST_FREE] Extent list in owner 33080590 claims 230 as the next free chain record, but fsck believes the largest valid value is 227. Clamp the next record value? n The stat output from the debugfs.ocfs2 showed the following corruption where the "Next Free Rec:" had overshot the "Count:" in the root metadata block. Inode: 33080590 Mode: 0640 Generation: 2619713622 (0x9c25a856) FS Generation: 904309833 (0x35e6ac49) CRC32: 00000000 ECC: 0000 Type: Regular Attr: 0x0 Flags: Valid Dynamic Features: (0x16) HasXattr InlineXattr Refcounted Extended Attributes Block: 0 Extended Attributes Inline Size: 256 User: 0 (root) Group: 0 (root) Size: 281320357888 Links: 1 Clusters: 141738 ctime: 0x66911b56 0x316edcb8 -- Fri Jul 12 06:02:30.829349048 2024 atime: 0x66911d6b 0x7f7a28d -- Fri Jul 12 06:11:23.133669517 2024 mtime: 0x66911b56 0x12ed75d7 -- Fri Jul 12 06:02:30.317552087 2024 dtime: 0x0 -- Wed Dec 31 17:00:00 1969 Refcount Block: 2777346 Last Extblk: 2886943 Orphan Slot: 0 Sub Alloc Slot: 0 Sub Alloc Bit: 14 Tree Depth: 1 Count: 227 Next Free Rec: 230 ## Offset Clusters Block# 0 0 2310 2776351 1 2310 2139 2777375 2 4449 1221 2778399 3 5670 731 2779423 4 6401 566 2780447 ....... .... ....... ....... .... ....... The issue was in the reflink workfow while reserving space for inline xattr. The problematic function is ocfs2_reflink_xattr_inline(). By the time this function is called the reflink tree is already recreated at the destination inode from the source inode. At this point, this function reserves space for inline xattrs at the destination inode without even checking if there is space at the root metadata block. It simply reduces the l_count from 243 to 227 thereby making space of 256 bytes for inline xattr whereas the inode already has extents beyond this index (in this case up to 230), thereby causing corruption. The fix for this is to reserve space for inline metadata at the destination inode before the reflink tree gets recreated. The customer has verified the fix. Link: https://lkml.kernel.org/r/20240918063844.1830332-1-gautham.ananthakrishna@oracle.com Fixes: ef962df057aa ("ocfs2: xattr: fix inlined xattr reflink") Signed-off-by: Gautham Ananthakrishna <gautham.ananthakrishna@oracle.com> Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com> Cc: Mark Fasheh <mark@fasheh.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Junxiao Bi <junxiao.bi@oracle.com> Cc: Changwei Ge <gechangwei@live.cn> Cc: Gang He <ghe@suse.com> Cc: Jun Piao <piaojun@huawei.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-09-26  mm: migrate: annotate data-race in migrate_folio_unmap()  (Jeongjun Park, 1 file, -1/+1)
I found a report from syzbot [1] This report shows that the value can be changed, but in reality, the value of __folio_set_movable() cannot be changed because it holds the folio refcount. Therefore, it is appropriate to add an annotate to make KCSAN ignore that data-race. [1] ================================================================== BUG: KCSAN: data-race in __filemap_remove_folio / migrate_pages_batch write to 0xffffea0004b81dd8 of 8 bytes by task 6348 on cpu 0: page_cache_delete mm/filemap.c:153 [inline] __filemap_remove_folio+0x1ac/0x2c0 mm/filemap.c:233 filemap_remove_folio+0x6b/0x1f0 mm/filemap.c:265 truncate_inode_folio+0x42/0x50 mm/truncate.c:178 shmem_undo_range+0x25b/0xa70 mm/shmem.c:1028 shmem_truncate_range mm/shmem.c:1144 [inline] shmem_evict_inode+0x14d/0x530 mm/shmem.c:1272 evict+0x2f0/0x580 fs/inode.c:731 iput_final fs/inode.c:1883 [inline] iput+0x42a/0x5b0 fs/inode.c:1909 dentry_unlink_inode+0x24f/0x260 fs/dcache.c:412 __dentry_kill+0x18b/0x4c0 fs/dcache.c:615 dput+0x5c/0xd0 fs/dcache.c:857 __fput+0x3fb/0x6d0 fs/file_table.c:439 ____fput+0x1c/0x30 fs/file_table.c:459 task_work_run+0x13a/0x1a0 kernel/task_work.c:228 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline] exit_to_user_mode_loop kernel/entry/common.c:114 [inline] exit_to_user_mode_prepare include/linux/entry-common.h:328 [inline] __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline] syscall_exit_to_user_mode+0xbe/0x130 kernel/entry/common.c:218 do_syscall_64+0xd6/0x1c0 arch/x86/entry/common.c:89 entry_SYSCALL_64_after_hwframe+0x77/0x7f read to 0xffffea0004b81dd8 of 8 bytes by task 6342 on cpu 1: __folio_test_movable include/linux/page-flags.h:699 [inline] migrate_folio_unmap mm/migrate.c:1199 [inline] migrate_pages_batch+0x24c/0x1940 mm/migrate.c:1797 migrate_pages_sync mm/migrate.c:1963 [inline] migrate_pages+0xff1/0x1820 mm/migrate.c:2072 do_mbind mm/mempolicy.c:1390 [inline] kernel_mbind mm/mempolicy.c:1533 [inline] __do_sys_mbind mm/mempolicy.c:1607 [inline] __se_sys_mbind+0xf76/0x1160 mm/mempolicy.c:1603 __x64_sys_mbind+0x78/0x90 mm/mempolicy.c:1603 x64_sys_call+0x2b4d/0x2d60 arch/x86/include/generated/asm/syscalls_64.h:238 do_syscall_x64 arch/x86/entry/common.c:52 [inline] do_syscall_64+0xc9/0x1c0 arch/x86/entry/common.c:83 entry_SYSCALL_64_after_hwframe+0x77/0x7f value changed: 0xffff888127601078 -> 0x0000000000000000 Link: https://lkml.kernel.org/r/20240924130053.107490-1-aha310510@gmail.com Fixes: 7e2a5e5ab217 ("mm: migrate: use __folio_test_movable()") Signed-off-by: Jeongjun Park <aha310510@gmail.com> Reported-by: syzbot <syzkaller@googlegroups.com> Acked-by: David Hildenbrand <david@redhat.com> Cc: Kefeng Wang <wangkefeng.wang@huawei.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Zi Yan <ziy@nvidia.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
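The annotation itself is small; a sketch assuming the racy read is the __folio_test_movable() check on the source folio, as in the KCSAN report (variable name illustrative):

    #include <linux/compiler.h>     /* data_race() */

    /*
     * The refcount held on the folio means the value cannot really change
     * under us; data_race() only tells KCSAN the lockless read is intended.
     */
    bool is_lru = data_race(!__folio_test_movable(src));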
2024-09-26  mm/hugetlb: simplify refs in memfd_alloc_folio  (Steve Sistare, 2 files, -5/+2)
The folio_try_get in memfd_alloc_folio is not necessary. Delete it, and delete the matching folio_put in memfd_pin_folios. This also avoids leaking a ref if the memfd_alloc_folio call to hugetlb_add_to_page_cache fails. That error path is also broken in a second way -- when its folio_put causes the ref to become 0, it will implicitly call free_huge_folio, but then the path *explicitly* calls free_huge_folio. Delete the latter. This is a continuation of the fix "mm/hugetlb: fix memfd_pin_folios free_huge_pages leak" [steven.sistare@oracle.com: remove explicit call to free_huge_folio(), per Matthew] Link: https://lkml.kernel.org/r/Zti-7nPVMcGgpcbi@casper.infradead.org Link: https://lkml.kernel.org/r/1725481920-82506-1-git-send-email-steven.sistare@oracle.com Link: https://lkml.kernel.org/r/1725478868-61732-1-git-send-email-steven.sistare@oracle.com Fixes: 89c1905d9c14 ("mm/gup: introduce memfd_pin_folios() for pinning memfd folios") Signed-off-by: Steve Sistare <steven.sistare@oracle.com> Suggested-by: Vivek Kasireddy <vivek.kasireddy@intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Muchun Song <muchun.song@linux.dev> Cc: Peter Xu <peterx@redhat.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-09-26  mm/gup: fix memfd_pin_folios alloc race panic  (Steve Sistare, 1 file, -0/+1)
If memfd_pin_folios tries to create a hugetlb page, but someone else already did, then folio gets the value -EEXIST here: folio = memfd_alloc_folio(memfd, start_idx); if (IS_ERR(folio)) { ret = PTR_ERR(folio); if (ret != -EEXIST) goto err; then on the next trip through the "while start_idx" loop we panic here: if (folio) { folio_put(folio); To fix, set the folio to NULL on error. Link: https://lkml.kernel.org/r/1725373521-451395-6-git-send-email-steven.sistare@oracle.com Fixes: 89c1905d9c14 ("mm/gup: introduce memfd_pin_folios() for pinning memfd folios") Signed-off-by: Steve Sistare <steven.sistare@oracle.com> Acked-by: Vivek Kasireddy <vivek.kasireddy@intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Muchun Song <muchun.song@linux.dev> Cc: Peter Xu <peterx@redhat.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
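Condensed, the fix adds a single assignment to the loop quoted above (the added line is marked; the rest is as in the changelog):

    folio = memfd_alloc_folio(memfd, start_idx);
    if (IS_ERR(folio)) {
            ret = PTR_ERR(folio);
            if (ret != -EEXIST)
                    goto err;
            folio = NULL;   /* added: don't carry the error value into the next iteration */
    }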
2024-09-26  mm/gup: fix memfd_pin_folios hugetlb page allocation  (Steve Sistare, 1 file, -2/+6)
When memfd_pin_folios -> memfd_alloc_folio creates a hugetlb page, the index is wrong. The subsequent call to filemap_get_folios_contig thus cannot find it, and fails, and memfd_pin_folios loops forever. To fix, adjust the index for the huge_page_order. memfd_alloc_folio also forgets to unlock the folio, so the next touch of the page calls hugetlb_fault which blocks forever trying to take the lock. Unlock it. Link: https://lkml.kernel.org/r/1725373521-451395-5-git-send-email-steven.sistare@oracle.com Fixes: 89c1905d9c14 ("mm/gup: introduce memfd_pin_folios() for pinning memfd folios") Signed-off-by: Steve Sistare <steven.sistare@oracle.com> Acked-by: Vivek Kasireddy <vivek.kasireddy@intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Muchun Song <muchun.song@linux.dev> Cc: Peter Xu <peterx@redhat.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-09-26  mm/hugetlb: fix memfd_pin_folios resv_huge_pages leak  (Steve Sistare, 3 files, -5/+31)
memfd_pin_folios followed by unpin_folios leaves resv_huge_pages elevated if the pages were not already faulted in. During a normal page fault, resv_huge_pages is consumed here: hugetlb_fault() alloc_hugetlb_folio() dequeue_hugetlb_folio_vma() dequeue_hugetlb_folio_nodemask() dequeue_hugetlb_folio_node_exact() free_huge_pages-- resv_huge_pages-- During memfd_pin_folios, the page is created by calling alloc_hugetlb_folio_nodemask instead of alloc_hugetlb_folio, and resv_huge_pages is not modified: memfd_alloc_folio() alloc_hugetlb_folio_nodemask() dequeue_hugetlb_folio_nodemask() dequeue_hugetlb_folio_node_exact() free_huge_pages-- alloc_hugetlb_folio_nodemask has other callers that must not modify resv_huge_pages. Therefore, to fix, define an alternate version of alloc_hugetlb_folio_nodemask for this call site that adjusts resv_huge_pages. Link: https://lkml.kernel.org/r/1725373521-451395-4-git-send-email-steven.sistare@oracle.com Fixes: 89c1905d9c14 ("mm/gup: introduce memfd_pin_folios() for pinning memfd folios") Signed-off-by: Steve Sistare <steven.sistare@oracle.com> Acked-by: Vivek Kasireddy <vivek.kasireddy@intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Muchun Song <muchun.song@linux.dev> Cc: Peter Xu <peterx@redhat.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-09-26  mm/hugetlb: fix memfd_pin_folios free_huge_pages leak  (Steve Sistare, 1 file, -1/+3)
memfd_pin_folios followed by unpin_folios fails to restore free_huge_pages if the pages were not already faulted in, because the folio refcount for pages created by memfd_alloc_folio never goes to 0. memfd_pin_folios needs another folio_put to undo the folio_try_get below: memfd_alloc_folio() alloc_hugetlb_folio_nodemask() dequeue_hugetlb_folio_nodemask() dequeue_hugetlb_folio_node_exact() folio_ref_unfreeze(folio, 1); ; adds 1 refcount folio_try_get() ; adds 1 refcount hugetlb_add_to_page_cache() ; adds 512 refcount (on x86) With the fix, after memfd_pin_folios + unpin_folios, the refcount for the (unfaulted) page is 512, which is correct, as the refcount for a faulted unpinned page is 513. Link: https://lkml.kernel.org/r/1725373521-451395-3-git-send-email-steven.sistare@oracle.com Fixes: 89c1905d9c14 ("mm/gup: introduce memfd_pin_folios() for pinning memfd folios") Signed-off-by: Steve Sistare <steven.sistare@oracle.com> Acked-by: Vivek Kasireddy <vivek.kasireddy@intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Muchun Song <muchun.song@linux.dev> Cc: Peter Xu <peterx@redhat.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-09-26  mm/filemap: fix filemap_get_folios_contig THP panic  (Steve Sistare, 1 file, -0/+4)
Patch series "memfd-pin huge page fixes". Fix multiple bugs that occur when using memfd_pin_folios with hugetlb pages and THP. The hugetlb bugs only bite when the page is not yet faulted in when memfd_pin_folios is called. The THP bug bites when the starting offset passed to memfd_pin_folios is not huge page aligned. See the commit messages for details. This patch (of 5): memfd_pin_folios on memory backed by THP panics if the requested start offset is not huge page aligned: BUG: kernel NULL pointer dereference, address: 0000000000000036 RIP: 0010:filemap_get_folios_contig+0xdf/0x290 RSP: 0018:ffffc9002092fbe8 EFLAGS: 00010202 RAX: 0000000000000002 RBX: 0000000000000002 RCX: 0000000000000002 The fault occurs here, because xas_load returns a folio with value 2: filemap_get_folios_contig() for (folio = xas_load(&xas); folio && xas.xa_index <= end; folio = xas_next(&xas)) { ... if (!folio_try_get(folio)) <-- BOOM "2" is an xarray sibling entry. We get it because memfd_pin_folios does not round the indices passed to filemap_get_folios_contig to huge page boundaries for THP, so we load from the middle of a huge page range see a sibling. (It does round for hugetlbfs, at the is_file_hugepages test). To fix, if the folio is a sibling, then return the next index as the starting point for the next call to filemap_get_folios_contig. Link: https://lkml.kernel.org/r/1725373521-451395-1-git-send-email-steven.sistare@oracle.com Link: https://lkml.kernel.org/r/1725373521-451395-2-git-send-email-steven.sistare@oracle.com Fixes: 89c1905d9c14 ("mm/gup: introduce memfd_pin_folios() for pinning memfd folios") Signed-off-by: Steve Sistare <steven.sistare@oracle.com> Cc: David Hildenbrand <david@redhat.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Muchun Song <muchun.song@linux.dev> Cc: Peter Xu <peterx@redhat.com> Cc: Vivek Kasireddy <vivek.kasireddy@intel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-09-26  mm: make SPLIT_PTE_PTLOCKS depend on SMP  (Guenter Roeck, 1 file, -0/+1)
SPLIT_PTE_PTLOCKS depends on "NR_CPUS >= 4". Unfortunately, that evaluates to true if there is no NR_CPUS configuration option. This results in CONFIG_SPLIT_PTE_PTLOCKS=y for mac_defconfig. This in turn causes the m68k "q800" and "virt" machines to crash in qemu if debugging options are enabled. Making CONFIG_SPLIT_PTE_PTLOCKS dependent on the existence of NR_CPUS does not work since a dependency on the existence of a numeric Kconfig entry always evaluates to false. Example: config HAVE_NO_NR_CPUS def_bool y depends on !NR_CPUS After adding this to a Kconfig file, "make defconfig" includes: $ grep NR_CPUS .config CONFIG_NR_CPUS=64 CONFIG_HAVE_NO_NR_CPUS=y Defining NR_CPUS for m68k does not help either since many architectures define NR_CPUS only for SMP configurations. Make SPLIT_PTE_PTLOCKS depend on SMP instead to solve the problem. Link: https://lkml.kernel.org/r/20240924154205.1491376-1-linux@roeck-us.net Fixes: 394290cba966 ("mm: turn USE_SPLIT_PTE_PTLOCKS / USE_SPLIT_PTE_PTLOCKS into Kconfig options") Signed-off-by: Guenter Roeck <linux@roeck-us.net> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org> Tested-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-09-26  tools: fix shared radix-tree build  (Lorenzo Stoakes, 4 files, -1/+15)
The shared radix-tree build is not correctly recompiling when lib/maple_tree.c and lib/test_maple_tree.c are modified - fix this by adding these core components to the SHARED_DEPS list. Additionally, add missing header guards to shared header files. Link: https://lkml.kernel.org/r/20240924180724.112169-1-lorenzo.stoakes@oracle.com Fixes: 74579d8dab47 ("tools: separate out shared radix-tree components") Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Tested-by: Sidhartha Kumar <sidhartha.kumar@oracle.com> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-09-26Revert "binfmt_elf, coredump: Log the reason of the failed core dumps"Linus Torvalds4-150/+34
This reverts commit fb97d2eb542faf19a8725afbd75cbc2518903210. The logging was questionable to begin with, but it seems to actively deadlock on the task lock. "On second thought, let's not log core dump failures. 'Tis a silly place" because if you can't tell your core dump is truncated, maybe you should just fix your debugger instead of adding bugs to the kernel. Reported-by: Vegard Nossum <vegard.nossum@oracle.com> Link: https://lore.kernel.org/all/d122ece6-3606-49de-ae4d-8da88846bef2@oracle.com/ Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-09-26  dm verity: fallback to platform keyring also if key in trusted keyring is rejected  (Luca Boccassi, 1 file, -1/+1)
If enabled, we fallback to the platform keyring if the trusted keyring doesn't have the key used to sign the roothash. But if pkcs7_verify() rejects the key for other reasons, such as usage restrictions, we do not fallback. Do so. Follow-up for 6fce1f40e95182ebbfe1ee3096b8fc0b37903269 Suggested-by: Serge Hallyn <serge@hallyn.com> Signed-off-by: Luca Boccassi <bluca@debian.org> Acked-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2024-09-26  dm-verity: restart or panic on an I/O error  (Mikulas Patocka, 1 file, -2/+21)
Maxim Suhanov reported that dm-verity doesn't crash if an I/O error happens. In theory, this could be used to subvert security, because an attacker can create sectors that return error with the Write Uncorrectable command. Some programs may misbehave if they have to deal with EIO. This commit fixes dm-verity, so that if "panic_on_corruption" or "restart_on_corruption" was specified and an I/O error happens, the machine will panic or restart. This commit also changes kernel_restart to emergency_restart - kernel_restart calls reboot notifiers and these reboot notifiers may wait for the bio that failed. emergency_restart doesn't call the notifiers. Reported-by: Maxim Suhanov <dfirblog@gmail.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Cc: stable@vger.kernel.org
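Roughly, the error handling described looks like this (names simplified, not the exact hunk); the key point is emergency_restart(), which skips the reboot notifiers that could block on the failed bio:

    #include <linux/reboot.h>

    static void verity_handle_io_error(struct dm_verity *v)
    {
            if (v->mode == DM_VERITY_MODE_PANIC)
                    panic("dm-verity device has I/O error");
            if (v->mode == DM_VERITY_MODE_RESTART)
                    /* not kernel_restart(): notifiers might wait on the failed bio */
                    emergency_restart();
    }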
2024-09-26  dm: fix spelling errors  (Shen Lichuan, 2 files, -2/+2)
Fixed some confusing spelling errors that were recently identified in code comments:

  dm-cache-target.c:1371: exclussive ==> exclusive
  dm-raid.c:2522: repective ==> respective

Signed-off-by: Shen Lichuan <shenlichuan@vivo.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2024-09-26  dm-cache: remove pointless error check  (Dipendra Khadka, 1 file, -4/+0)
Smatch reported the following:

'''
drivers/md/dm-cache-target.c:3204 parse_cblock_range() warn: sscanf doesn't return error codes
drivers/md/dm-cache-target.c:3217 parse_cblock_range() warn: sscanf doesn't return error codes
'''

sscanf() doesn't return negative values at all, so these error checks can never trigger; remove them. Signed-off-by: Dipendra Khadka <kdipendra88@gmail.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
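For reference, kernel sscanf() returns the number of fields converted and never a negative errno, so the only meaningful check is on that count; roughly, as in parse_cblock_range() (details simplified):

    uint64_t b, e;
    char dummy;

    /* correct check: compare the number of converted fields, not "< 0" */
    if (sscanf(str, "%llu-%llu%c", &b, &e, &dummy) == 2) {
            result->begin = to_cblock(b);
            result->end = to_cblock(e);
            return 0;
    }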
2024-09-26  sh: intc: Replace simple_strtoul() with kstrtoul()  (Hongbo Li, 1 file, -1/+4)
The function simple_strtoul() performs no error checking in scenarios where the input value overflows the intended output variable. We can replace the use of simple_strtoul() with the safer alternative kstrtoul(). This also allows us to print an error message in case of failure. Signed-off-by: Hongbo Li <lihongbo22@huawei.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Signed-off-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
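The shape of the conversion, roughly (buffer and variable names are illustrative):

    unsigned long value;
    int ret;

    /* kstrtoul() rejects overflow and trailing garbage, unlike simple_strtoul() */
    ret = kstrtoul(buf, 10, &value);
    if (ret) {
            pr_err("invalid input: %s\n", buf);
            return ret;
    }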
2024-09-26  sh: Remove unused declarations for make_maskreg_irq() and irq_mask_register  (Gaosheng Cui, 1 file, -6/+0)
make_maskreg_irq() and irq_mask_register have been removed since commit 5a4053b23262 ("sh: Kill off dead boards."), so remove the unused declarations. Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com> Reviewed-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Signed-off-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
2024-09-26  dt-bindings: gpio: ep9301: Add missing "#interrupt-cells" to examples  (Rob Herring, 1 file, -3/+6)
Enabling dtc interrupt_provider check reveals the examples are missing the "#interrupt-cells" property as it is a dependency of "interrupt-controller". Some of the indentation is off, so fix that too. Signed-off-by: Rob Herring (Arm) <robh@kernel.org> Reviewed-by: Nikita Shubin <nikita.shubin@maquefel.me> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-09-26  MAINTAINERS: Update EP93XX ARM ARCHITECTURE maintainer  (Nikita Shubin, 1 file, -0/+1)
Add myself as maintainer of EP93XX ARCHITECTURE. CC: Alexander Sverdlin <alexander.sverdlin@gmail.com> CC: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Nikita Shubin <nikita.shubin@maquefel.me> Acked-by: Alexander Sverdlin <alexander.sverdlin@gmail.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-09-26  soc: ep93xx: drop reference to removed EP93XX_SOC_COMMON config  (Lukas Bulwahn, 1 file, -1/+1)
Commit 6eab0ce6e1c6 ("soc: Add SoC driver for Cirrus ep93xx") adds the config EP93XX_SOC referring to the config EP93XX_SOC_COMMON. Within the same patch series of the commit above, the commit 046322f1e1d9 ("ARM: ep93xx: DT for the Cirrus ep93xx SoC platforms") then removes the config EP93XX_SOC_COMMON. With that the reference to this config is obsolete. Simplify the expression in the EP93XX_SOC config definition. Signed-off-by: Lukas Bulwahn <lukas.bulwahn@redhat.com> Reviewed-by: Nikita Shubin <nikita.shubin@maquefel.me> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-09-26  selftests: netfilter: Avoid hanging ipvs.sh  (Phil Sutter, 1 file, -1/+1)
If the client can't reach the server, the latter remains listening forever. Kill it after 5s of waiting. Fixes: 867d2190799a ("selftests: netfilter: add ipvs test script") Signed-off-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-09-26  kselftest: add test for nfqueue induced conntrack race  (Florian Westphal, 1 file, -1/+91)
The netfilter race happens when two packets with the same tuple are DNATed and enqueued with nfqueue in the postrouting hook. Once one of the packets is reinjected it may be DNATed again to a different destination, but the conntrack entry remains the same and the return packet is dropped. Based on an earlier patch from Antonio Ojea. Link: https://bugzilla.netfilter.org/show_bug.cgi?id=1766 Co-developed-by: Antonio Ojea <aojea@google.com> Signed-off-by: Antonio Ojea <aojea@google.com> Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-09-26  netfilter: nfnetlink_queue: remove old clash resolution logic  (Florian Westphal, 3 files, -90/+0)
For historical reasons there are two clash resolution spots in netfilter, one in nfnetlink_queue and one in the conntrack core. The nfnetlink_queue one was added first: if a colliding entry is found, the NAT transformation is reversed by calling the NAT engine again with an altered tuple. See commit 368982cd7d1b ("netfilter: nfnetlink_queue: resolve clash for unconfirmed conntracks") for details. One problem is that nf_reroute() won't take any action if the queueing doesn't occur in the OUTPUT hook, i.e. when queueing in forward or postrouting the packet will be sent via the wrong path. Another problem is that the scenario addressed (a second UDP packet sent with identical addresses while the first packet is still being processed) can also occur without any nfqueue involvement, due to threaded resolvers doing A and AAAA requests back-to-back. This led us to add clash resolution logic to the conntrack core, see commit 6a757c07e51f ("netfilter: conntrack: allow insertion of clashing entries"). Instead of fixing the nfqueue-based logic, remove it and let the conntrack core handle this instead. Retain the ->update hook for the sake of nfqueue-based conntrack helpers. We could axe this hook completely, but we'd have to split confirm and helper logic again, see commit ee04805ff54a ("netfilter: conntrack: make conntrack userspace helpers work again"). This SHOULD NOT be backported to kernels earlier than v5.6; they lack adequate clash resolution handling. Patch was originally written by Pablo Neira Ayuso. Reported-by: Antonio Ojea <aojea@google.com> Closes: https://bugzilla.netfilter.org/show_bug.cgi?id=1766 Signed-off-by: Florian Westphal <fw@strlen.de> Tested-by: Antonio Ojea <aojea@google.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-09-26  netfilter: nf_tables: missing objects with no memcg accounting  (Pablo Neira Ayuso, 7 files, -15/+17)
Several ruleset objects are still not using GFP_KERNEL_ACCOUNT for memory accounting, update them. This includes: - catchall elements - compat match large info area - log prefix - meta secctx - numgen counters - pipapo set backend datastructure - tunnel private objects Fixes: 33758c891479 ("memcg: enable accounting for nft objects") Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-09-26  netfilter: nf_tables: use rcu chain hook list iterator from netlink dump path  (Pablo Neira Ayuso, 1 file, -1/+1)
Lockless iteration over hook list is possible from netlink dump path, use rcu variant to iterate over the hook list as is done with flowtable hooks. Fixes: b9703ed44ffb ("netfilter: nf_tables: support for adding new devices to an existing netdev chain") Reported-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-09-26  netfilter: ctnetlink: compile ctnetlink_label_size with CONFIG_NF_CONNTRACK_EVENTS  (Simon Horman, 1 file, -5/+2)
Only provide ctnetlink_label_size when it is used, which is when CONFIG_NF_CONNTRACK_EVENTS is configured. Flagged by clang-18 W=1 builds as: .../nf_conntrack_netlink.c:385:19: warning: unused function 'ctnetlink_label_size' [-Wunused-function] 385 | static inline int ctnetlink_label_size(const struct nf_conn *ct) | ^~~~~~~~~~~~~~~~~~~~ The condition on CONFIG_NF_CONNTRACK_LABELS being removed by this patch guards compilation of non-trivial implementations of ctnetlink_dump_labels() and ctnetlink_label_size(). However, this is not necessary as each of these functions will always return 0 if CONFIG_NF_CONNTRACK_LABELS is not defined as each function starts with the equivalent of: struct nf_conn_labels *labels = nf_ct_labels_find(ct); if (!labels) return 0; And nf_ct_labels_find always returns NULL if CONFIG_NF_CONNTRACK_LABELS is not enabled. So I believe that the compiler optimises the code away in such cases anyway. Found by inspection. Compile tested only. Originally splitted in two patches, Pablo Neira Ayuso collapsed them and added Fixes: tag. Fixes: 0ceabd83875b ("netfilter: ctnetlink: deliver labels to userspace") Link: https://lore.kernel.org/netfilter-devel/20240909151712.GZ2097826@kernel.org/ Signed-off-by: Simon Horman <horms@kernel.org> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
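Schematically, the fix moves ctnetlink_label_size() under the same conditional as its only caller; the body follows the pattern quoted in the changelog (return value details simplified):

    #ifdef CONFIG_NF_CONNTRACK_EVENTS
    static inline int ctnetlink_label_size(const struct nf_conn *ct)
    {
            struct nf_conn_labels *labels = nf_ct_labels_find(ct);

            if (!labels)
                    return 0;
            return nla_total_size(sizeof(labels->bits));
    }
    #endif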
2024-09-26  netfilter: nf_reject: Fix build warning when CONFIG_BRIDGE_NETFILTER=n  (Simon Horman, 2 files, -9/+6)
If CONFIG_BRIDGE_NETFILTER is not enabled, which is the case for x86_64 defconfig, then building nf_reject_ipv4.c and nf_reject_ipv6.c with W=1 using gcc-14 results in the following warnings, which are treated as errors: net/ipv4/netfilter/nf_reject_ipv4.c: In function 'nf_send_reset': net/ipv4/netfilter/nf_reject_ipv4.c:243:23: error: variable 'niph' set but not used [-Werror=unused-but-set-variable] 243 | struct iphdr *niph; | ^~~~ cc1: all warnings being treated as errors net/ipv6/netfilter/nf_reject_ipv6.c: In function 'nf_send_reset6': net/ipv6/netfilter/nf_reject_ipv6.c:286:25: error: variable 'ip6h' set but not used [-Werror=unused-but-set-variable] 286 | struct ipv6hdr *ip6h; | ^~~~ cc1: all warnings being treated as errors Address this by reducing the scope of these local variables to where they are used, which is code only compiled when CONFIG_BRIDGE_NETFILTER enabled. Compile tested and run through netfilter selftests. Reported-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Closes: https://lore.kernel.org/netfilter-devel/20240906145513.567781-1-andriy.shevchenko@linux.intel.com/ Signed-off-by: Simon Horman <horms@kernel.org> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-09-26  netfilter: nf_tables: Keep deleted flowtable hooks until after RCU  (Phil Sutter, 1 file, -1/+1)
Documentation of list_del_rcu() warns callers to not immediately free the deleted list item. While it seems not necessary to use the RCU-variant of list_del() here in the first place, doing so seems to require calling kfree_rcu() on the deleted item as well. Fixes: 3f0465a9ef02 ("netfilter: nf_tables: dynamically allocate hooks per net_device in flowtables") Signed-off-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-09-26  docs: tproxy: ignore non-transparent sockets in iptables  (谢致邦 (XIE Zhibang), 1 file, -1/+1)
The iptables example was added in commit d2f26037a38a (netfilter: Add documentation for tproxy, 2008-10-08), but xt_socket 'transparent' option was added in commit a31e1ffd2231 (netfilter: xt_socket: added new revision of the 'socket' match supporting flags, 2009-06-09). Now add the 'transparent' option to the iptables example to ignore non-transparent sockets, which is also consistent with the nft example. Signed-off-by: 谢致邦 (XIE Zhibang) <Yeking@Red54.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-09-26  netfilter: ctnetlink: Guard possible unused functions  (Andy Shevchenko, 1 file, -1/+1)
Some of the functions may be unused (CONFIG_NETFILTER_NETLINK_GLUE_CT=n and CONFIG_NF_CONNTRACK_EVENTS=n), it prevents kernel builds with clang, `make W=1` and CONFIG_WERROR=y: net/netfilter/nf_conntrack_netlink.c:657:22: error: unused function 'ctnetlink_acct_size' [-Werror,-Wunused-function] 657 | static inline size_t ctnetlink_acct_size(const struct nf_conn *ct) | ^~~~~~~~~~~~~~~~~~~ net/netfilter/nf_conntrack_netlink.c:667:19: error: unused function 'ctnetlink_secctx_size' [-Werror,-Wunused-function] 667 | static inline int ctnetlink_secctx_size(const struct nf_conn *ct) | ^~~~~~~~~~~~~~~~~~~~~ net/netfilter/nf_conntrack_netlink.c:683:22: error: unused function 'ctnetlink_timestamp_size' [-Werror,-Wunused-function] 683 | static inline size_t ctnetlink_timestamp_size(const struct nf_conn *ct) | ^~~~~~~~~~~~~~~~~~~~~~~~ Fix this by guarding possible unused functions with ifdeffery. See also commit 6863f5643dd7 ("kbuild: allow Clang to find unused static inline functions for W=1 build"). Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-09-26  selftests: netfilter: nft_tproxy.sh: add tcp tests  (Antonio Ojea, 4 files, -0/+623)
The TPROXY functionality is widely used; however, only the mptcp selftests cover this feature. The new selftests represent the most common scenarios and can also serve as self-documentation of the feature. The UDP and TCP testcases are split into different files because of the different nature of the protocols, especially the challenges of testing UDP reliably given its connectionless nature. The UDP tests only cover scenarios involving the prerouting hook. The UDP tests are significantly slower than the TCP ones, hence they use a larger timeout; it takes 20 seconds to run the full UDP suite on a 48 vCPU Intel(R) Xeon(R) CPU @2.60GHz. Signed-off-by: Antonio Ojea <aojea@google.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-09-26  selftests: netfilter: add reverse-clash resolution test case  (Florian Westphal, 3 files, -0/+178)
Add test program that is sending UDP packets in both directions and check that packets arrive without source port modification. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-09-26  netfilter: conntrack: add clash resolution for reverse collisions  (Florian Westphal, 1 file, -5/+51)
Given the existing entry:

  ORIGIN: a:b -> c:d
  REPLY:  c:d -> a:b

and the colliding entry:

  ORIGIN: c:d -> a:b
  REPLY:  a:b -> c:d

the colliding ct (and the associated skb) gets dropped on insert. Permit this instead by checking if the colliding entry matches the reply direction of the existing one. This happens when both ends send packets at the same time and both requests are picked up as NEW, rather than NEW for the 'first' and 'ESTABLISHED' for the second packet. This is an esoteric condition: the ruleset must permit NEW connections in either direction and both peers must already have a bidirectional traffic flow at the time conntrack gets enabled. Allow the 'reverse' skb to pass and assign it the existing (clashing) entry. While at it, also drop the extra 'dying' check; this is already tested earlier by the calling function. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
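A hedged sketch of the check described: during clash resolution, if the colliding tuple is the exact reverse of the existing entry, attach the skb to the existing conntrack instead of dropping it (placement, reference handling and return value are illustrative):

    /* colliding ORIGINAL == existing REPLY: both peers fired at the same time */
    if (nf_ct_tuple_equal(&loser_ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple,
                          &ct->tuplehash[IP_CT_DIR_REPLY].tuple)) {
            nf_conntrack_get(&ct->ct_general);              /* reuse the existing entry */
            nf_ct_set(skb, ct, IP_CT_ESTABLISHED_REPLY);
            nf_ct_put(loser_ct);                            /* discard our unconfirmed one */
            return NF_ACCEPT;
    }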
2024-09-26  netfilter: nf_nat: don't try nat source port reallocation for reverse dir clash  (Florian Westphal, 1 file, -2/+118)
A conntrack entry can be inserted to the connection tracking table if there is no existing entry with an identical tuple in either direction. Example: INITIATOR -> NAT/PAT -> RESPONDER Initiator passes through NAT/PAT ("us") and SNAT is done (saddr rewrite). Then, later, NAT/PAT machine itself also wants to connect to RESPONDER. This will not work if the SNAT done earlier has same IP:PORT source pair. Conntrack table has: ORIGINAL: $IP_INITATOR:$SPORT -> $IP_RESPONDER:$DPORT REPLY: $IP_RESPONDER:$DPORT -> $IP_NAT:$SPORT and new locally originating connection wants: ORIGINAL: $IP_NAT:$SPORT -> $IP_RESPONDER:$DPORT REPLY: $IP_RESPONDER:$DPORT -> $IP_NAT:$SPORT This is handled by the NAT engine which will do a source port reallocation for the locally originating connection that is colliding with an existing tuple by attempting a source port rewrite. This is done even if this new connection attempt did not go through a masquerade/snat rule. There is a rare race condition with connection-less protocols like UDP, where we do the port reallocation even though its not needed. This happens when new packets from the same, pre-existing flow are received in both directions at the exact same time on different CPUs after the conntrack table was flushed (or conntrack becomes active for first time). With strict ordering/single cpu, the first packet creates new ct entry and second packet is resolved as established reply packet. With parallel processing, both packets are picked up as new and both get their own ct entry. In this case, the 'reply' packet (picked up as ORIGINAL) can be mangled by NAT engine because a port collision is detected. This change isn't enough to prevent a packet drop later during nf_conntrack_confirm(), the existing clash resolution strategy will not detect such reverse clash case. This is resolved by a followup patch. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>