path: root/tools/perf/scripts/python/export-to-postgresql.py
Age | Commit message | Author | Files | Lines
2023-07-04 | Revert ".gitignore: ignore *.cover and *.mbx" | Linus Torvalds | 1 | -2/+0

This reverts commit 534066a983df0935847061c844eb178f8a53a9e7.

It's actively detrimental in that it hides files that shouldn't be hidden. If I have some b4 mbx file in my git directory, it either was already applied with "git am" and is now stale, or maybe it's waiting for that to happen. In neither case is "ignore it" the right option.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2023-07-04 | afs: Fix accidental truncation when storing data | David Howells | 1 | -3/+5

When an AFS FS.StoreData RPC call is made, amongst other things it is given the resultant file size to be. On the server, this is processed by truncating the file to the new size and then writing the data.

Now, kafs has a lock (vnode->io_lock) that serves to serialise operations against a specific vnode (ie. inode), but the parameters for the op are set before the lock is taken. This allows two writebacks (say sync and kswapd) to race - and if writes are ongoing, the writeback for a later write could occur before the writeback for an earlier one if the latter gets interrupted.

Note that afs_writepages() cannot take i_mutex and only takes a shared lock on vnode->validate_lock. Also note that the server does the truncation and the write inside a lock, so there's no problem at that end.

Fix this by moving the calculation for the proposed new i_size inside the vnode->io_lock. Also reset the iterator (which we might have read from) and update the mtime setting there.

Fixes: bd80d8a80e12 ("afs: Use ITER_XARRAY for writing")
Reported-by: Marc Dionne <marc.dionne@auristor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Jeffrey Altman <jaltman@auristor.com>
Reviewed-by: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org
cc: linux-fsdevel@vger.kernel.org
Link: https://lore.kernel.org/r/3526895.1687960024@warthog.procyon.org.uk/
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

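As a rough sketch of the locking pattern the fix adopts (illustrative only, not the actual kafs code; the op field name is hypothetical):

    /* Compute the store op's target i_size only after taking the
     * per-vnode I/O lock, so two racing writebacks cannot publish
     * their proposed sizes out of order. */
    mutex_lock(&vnode->io_lock);
    i_size = max_t(loff_t, i_size_read(&vnode->netfs.inode), pos + count);
    op->store.i_size = i_size;	/* hypothetical field */
    /* ... reset the iterator, update mtime, issue FS.StoreData ... */
    mutex_unlock(&vnode->io_lock);
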
2023-07-04 | module: fix init_module_from_file() error handling | Linus Torvalds | 1 | -16/+23

Vegard Nossum pointed out two different problems with the error handling in init_module_from_file():

 (a) the idempotent loading code didn't clean up properly in some error cases, leaving the on-stack 'struct idempotent' element still in the hash table

 (b) failure to read the module file would nonsensically update the 'invalid_kread_bytes' stat counter with the error value

The first error is quite nasty, in that it can then cause subsequent idempotent loads of that same file to access stale stack contents of the previous failure. The case may not happen in any normal situation (explaining all the "Tested-by"s on the original change), and requires admin privileges, but syzkaller triggers random bad behavior as a result:

    BUG: soft lockup in sys_finit_module
    BUG: unable to handle kernel paging request in init_module_from_file
    general protection fault in init_module_from_file
    INFO: task hung in init_module_from_file
    KASAN: out-of-bounds Read in init_module_from_file
    KASAN: slab-out-of-bounds Read in init_module_from_file
    ...

The second error is fairly benign and just leads to nonsensical stats (and has been around since the debug stats were added).

Vegard also provided a patch for the idempotent loading issue, but I'd rather re-organize the code and make it more legible using another level of helper functions than add the usual "goto out" error handling.

Link: https://lore.kernel.org/lkml/20230704100852.23452-1-vegard.nossum@oracle.com/
Fixes: 9b9879fc0327 ("modules: catch concurrent module loads, treat them as idempotent")
Reported-by: Vegard Nossum <vegard.nossum@oracle.com>
Reported-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
Reported-by: syzbot+9c2bdc9d24e4a7abe741@syzkaller.appspotmail.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2023-07-04 | clk: tegra: Avoid calling an uninitialized function | Thierry Reding | 1 | -3/+12

Commit 493ffb046cf5 ("clk: tegra: super: Switch to determine_rate") replaced clk_super_round_rate() by clk_super_determine_rate(), but didn't update one callsite that was explicitly calling the old tegra_clk_super_ops.round_rate() function, which was now NULL. This resulted in a crash on Tegra30 systems during early boot.

Switch this callsite over to the clk_super_determine_rate() equivalent to avoid the crash.

Fixes: 493ffb046cf5 ("clk: tegra: super: Switch to determine_rate")
Tested-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Link: https://lore.kernel.org/r/20230630130748.840729-1-thierry.reding@gmail.com
Signed-off-by: Stephen Boyd <sboyd@kernel.org>

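As a sketch of the replacement call pattern: struct clk_rate_request and clk_hw_init_rate_request() are the standard clk framework API, but the surrounding code here is illustrative rather than the literal driver change:

    struct clk_rate_request req;
    int err;

    /* Ask the determine_rate implementation what the hardware can do,
     * instead of calling the removed .round_rate() callback. */
    clk_hw_init_rate_request(hw, &req, rate);
    err = clk_super_determine_rate(hw, &req);
    if (!err)
            rate = req.rate;	/* the rounded rate the old call returned */
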
2023-07-04 | mm: don't do validate_mm() unnecessarily and without mmap locking | Linus Torvalds | 1 | -4/+2

This is an addition to commit ae80b4041984 ("mm: validate the mm before dropping the mmap lock"), because it turns out there were two problems, but lockdep just stopped complaining after finding the first one.

The do_vmi_align_munmap() function now drops the mmap lock after doing the validate_mm() call, but it turns out that one of the callers then immediately calls validate_mm() again. That's both a bit silly, and now (again) happens without the mmap lock held.

So just remove that validate_mm() call from the caller, but make sure to not lose any coverage by doing that mm sanity checking in the error path of do_vmi_align_munmap() too.

Reported-and-tested-by: kernel test robot <oliver.sang@intel.com>
Link: https://lore.kernel.org/lkml/ZKN6CdkKyxBShPHi@xsang-OptiPlex-9020/
Fixes: 408579cd627a ("mm: Update do_vmi_align_munmap() return semantics")
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2023-07-03 | arch/arm64/mm/fault: Fix undeclared variable error in do_page_fault() | SeongJae Park | 1 | -2/+0

Commit ae870a68b5d1 ("arm64/mm: Convert to using lock_mm_and_find_vma()") made do_page_fault() use 'vma' even if CONFIG_PER_VMA_LOCK is not defined, but the declaration is still in the ifdef. As a result, building the kernel without the config fails with an undeclared variable error as below:

    arch/arm64/mm/fault.c: In function 'do_page_fault':
    arch/arm64/mm/fault.c:624:2: error: 'vma' undeclared (first use in this function); did you mean 'vmap'?
      624 |         vma = lock_mm_and_find_vma(mm, addr, regs);
          |         ^~~
          |         vmap

Fix it by moving the declaration out of the ifdef.

Fixes: ae870a68b5d1 ("arm64/mm: Convert to using lock_mm_and_find_vma()")
Signed-off-by: SeongJae Park <sj@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

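Schematically, the fix looks like this (simplified from the actual fault.c):

    struct vm_area_struct *vma;	/* declaration moved out of the ifdef */

    #ifdef CONFIG_PER_VMA_LOCK
    	/* per-VMA-lock fast path also uses vma here ... */
    #endif

    	vma = lock_mm_and_find_vma(mm, addr, regs);	/* reached unconditionally */
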
2023-07-03 | rdma: fix INFINIBAND_USER_ACCESS dependency | Arnd Bergmann | 1 | -2/+4

After a change to the bnxt_re driver, it fails to link when CONFIG_INFINIBAND_USER_ACCESS is disabled:

    aarch64-linux-ld: drivers/infiniband/hw/bnxt_re/ib_verbs.o: in function `bnxt_re_handler_BNXT_RE_METHOD_ALLOC_PAGE':
    ib_verbs.c:(.text+0xd64): undefined reference to `ib_uverbs_get_ucontext_file'
    aarch64-linux-ld: drivers/infiniband/hw/bnxt_re/ib_verbs.o:(.rodata+0x168): undefined reference to `uverbs_idr_class'
    aarch64-linux-ld: drivers/infiniband/hw/bnxt_re/ib_verbs.o:(.rodata+0x1a8): undefined reference to `uverbs_destroy_def_handler'

The problem is that the 'bnxt_re_uapi_defs' structure is built unconditionally and references a couple of functions that are never really called in this configuration but instead require other functions that are left out.

Adding an #ifdef around the new code, or a Kconfig dependency, would address this problem, but adding the compile-time check inside of the UAPI_DEF_CHAIN_OBJ_TREE_NAMED() macro seems best because that also addresses the problem in other drivers that may run into the same dependency.

Fixes: 360da60d6c6ed ("RDMA/bnxt_re: Enable low latency push")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

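The general shape of such a compile-time guard, sketched with a hypothetical macro name (the literal UAPI_DEF_CHAIN_OBJ_TREE_NAMED() internals differ):

    /* Make the reference resolve to NULL when user access is compiled
     * out, so drivers never pull in uverbs symbols in that config. */
    #if IS_ENABLED(CONFIG_INFINIBAND_USER_ACCESS)
    #define DRIVER_UAPI_OBJ_TREE(tree)	(&(tree))
    #else
    #define DRIVER_UAPI_OBJ_TREE(tree)	NULL	/* reference compiles out */
    #endif
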
2023-07-03 | gfs2: Add quota_change type | Bob Peterson | 2 | -11/+15

Function do_qc has two main uses: (1) to re-sync the local quota changes (qd) to the master quotas, and (2) normal quota changes. In the case of normal quota changes, the change can be positive or negative, as the quota usage goes up and down.

Before this patch, function do_qc distinguished one from the other by whether the resulting value is or isn't zero: in the case of a re-sync (called do_sync), the quota value is moved from the temporary value to a master value, so the amount is added to one and subtracted from the other. The problem is that since the values can be positive or negative, we can occasionally run into situations where we are not doing a re-sync but the quota change just happens to cancel out the previous value.

In the case of a re-sync, extra references and locks are taken, and so do_qc needs to release them. In the case of a normal quota change, no extra references and locks are taken, so it must not try to release them.

The problem is: if the quota change is not a re-sync but the value just happens to cancel out the original quota change, the resulting zero value fools do_qc into thinking this is a re-sync and therefore it must release the extra references. This results in problems, mainly having to do with slot reference numbers going smaller than zero.

This patch introduces new constants, QC_SYNC and QC_CHANGE, so do_qc can really tell the difference. For QC_SYNC calls it must release the extra references acquired by gfs2_quota_unlock's call to qd_check_sync. For QC_CHANGE calls it does not have extra references to put.

Note that this allows quota changes back to a value of zero, and so I removed an assert warning related to that.

Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>

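A sketch of the distinction: QC_SYNC and QC_CHANGE come from the patch description, but the function body below is illustrative, not the actual gfs2 code:

    #define QC_CHANGE 0
    #define QC_SYNC   1

    static void do_qc(struct gfs2_quota_data *qd, s64 change, int qc_type)
    {
            /* ... record the quota change ... */
            if (qc_type == QC_SYNC) {
                    /* re-sync: drop the extra reference and slot taken
                     * earlier by qd_check_sync() */
                    qd_put(qd);
            }
            /* QC_CHANGE: nothing extra to put, even if the resulting
             * value happens to be zero */
    }
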
2023-07-03 | gfs2: Use memcpy_{from,to}_page where appropriate | Andreas Gruenbacher | 3 | -15/+7

Replace kmap_local_page() + memcpy() + kunmap_local() sequences with memcpy_{from,to}_page() where we are not doing anything else with the mapped page.

Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>

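The transformation looks like this (buffer names illustrative; memcpy_to_page() is the standard linux/highmem.h helper):

    /* Before: open-coded map/copy/unmap. */
    dst = kmap_local_page(page);
    memcpy(dst + offset, buf, len);
    kunmap_local(dst);

    /* After: a single highmem helper does the same. */
    memcpy_to_page(page, offset, buf, len);
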
2023-07-03 | gfs2: Convert remaining kmap_atomic calls to kmap_local_page | Andreas Gruenbacher | 2 | -8/+9

Replace the remaining instances of kmap_atomic() ... kunmap_atomic() with kmap_local_page() ... kunmap_local(). In gfs2_write_buf_to_page(), we can call flush_dcache_page() after unmapping the page.

Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>

2023-07-03 | gfs2: Replace deprecated kmap_atomic with kmap_local_page | Deepak R Varma | 1 | -4/+4

kmap_atomic() is deprecated in favor of kmap_local_{folio,page}(). Therefore, replace kmap_atomic() with kmap_local_page() in gfs2_internal_read() and stuffed_readpage().

kmap_atomic() disables page faults and preemption (the latter only for !PREEMPT_RT kernels). However, the code within the mapping/un-mapping in gfs2_internal_read() and stuffed_readpage() does not depend on the above-mentioned side effects. Therefore, a mere replacement of the old API with the new one is all that is required (i.e., there is no need to explicitly add any calls to pagefault_disable() and/or preempt_disable()).

Signed-off-by: Deepak R Varma <drv@mailo.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>

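The mechanical replacement, shown diff-style (variable names illustrative); it is valid precisely because nothing in the mapped section relies on page faults or preemption being disabled:

    -	kaddr = kmap_atomic(page);
    +	kaddr = kmap_local_page(page);
     	memcpy(buf, kaddr + offset, size);
    -	kunmap_atomic(kaddr);
    +	kunmap_local(kaddr);
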
2023-07-03 | gfs2: Get rid of unnecessary locking in inode_go_dump | Andreas Gruenbacher | 2 | -12/+7

Commit 27a2660f1ef9 ("gfs2: Dump nrpages for inodes and their glocks") added some locking around reading inode->i_data.nrpages. That locking doesn't do anything really, so get rid of it. With that, the glock argument to ->go_dump() can be made const again as well.

Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>

2023-07-03 | gfs2: gfs2_freeze_lock_shared cleanup | Andreas Gruenbacher | 4 | -12/+7

All the remaining users of gfs2_freeze_lock_shared() set freeze_gh to &sdp->sd_freeze_gh and flags to 0, so remove those two parameters.

Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>

2023-07-03 | gfs2: Replace sd_freeze_state with SDF_FROZEN flag | Andreas Gruenbacher | 7 | -31/+17

Replace sd_freeze_state with a new SDF_FROZEN flag. There is no longer a need for indicating that a freeze is in progress (SDF_STARTING_FREEZE); we are now protecting the critical sections with the sd_freeze_mutex.

Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>

2023-07-03 | gfs2: Rework freeze / thaw logic | Andreas Gruenbacher | 7 | -110/+178

So far, at mount time, gfs2 would take the freeze glock in shared mode and then immediately drop it again, turning it into a cached glock that can be reclaimed at any time. To freeze the filesystem cluster-wide, the node initiating the freeze would take the freeze glock in exclusive mode, which would cause the freeze glock's freeze_go_sync() callback to run on each node. There, gfs2 would freeze the filesystem and schedule gfs2_freeze_func() to run. gfs2_freeze_func() would re-acquire the freeze glock in shared mode, thaw the filesystem, and drop the freeze glock again. The initiating node would keep the freeze glock held in exclusive mode. To thaw the filesystem, the initiating node would drop the freeze glock again, which would allow gfs2_freeze_func() to resume on all nodes, leaving the filesystem in the thawed state.

It turns out that in freeze_go_sync(), we cannot reliably and safely freeze the filesystem. This is primarily because the final unmount of a filesystem takes a write lock on the s_umount rw semaphore before calling into gfs2_put_super(), and freeze_go_sync() needs to call freeze_super(), which also takes a write lock on the same semaphore, causing a deadlock. We could work around this by trying to take an active reference on the super block first, which would prevent unmount from running at the same time. But that can fail, and freeze_go_sync() isn't actually allowed to fail.

To get around this, this patch changes the freeze glock locking scheme as follows:

At mount time, each node takes the freeze glock in shared mode. To freeze a filesystem, the initiating node first freezes the filesystem locally and then drops and re-acquires the freeze glock in exclusive mode. All other nodes notice that there is contention on the freeze glock in their go_callback callbacks, and they schedule gfs2_freeze_func() to run. There, they freeze the filesystem locally and drop and re-acquire the freeze glock before re-thawing the filesystem. This is happening outside of the glock state engine, so there, we are allowed to fail.

From a cluster point of view, taking and immediately dropping a glock is indistinguishable from taking the glock and only dropping it upon contention, so this new scheme is compatible with the old one.

Thanks to Li Dong <lidong@vivo.com> for reporting a locking bug in gfs2_freeze_func() in a previous version of this commit.

Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>

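Condensed into pseudocode, the new scheme reads roughly as follows (freeze_super()/thaw_super() are the real VFS entry points; the glock steps are paraphrased from the description above, not lifted from the patch):

    /*
     * mount:     every node holds the freeze glock in SH.
     *
     * freeze (initiating node):
     *     freeze_super(sb);                 // freeze locally first; may fail safely
     *     drop SH, re-acquire EX on the freeze glock;
     *
     * freeze (other nodes, go_callback -> gfs2_freeze_func()):
     *     freeze_super(sb);                 // freeze locally
     *     drop SH, re-acquire SH on the freeze glock;
     *     thaw_super(sb);                   // re-thaw once the glock is held again
     *
     * thaw (initiating node):
     *     drop EX, re-acquire SH; thaw_super(sb);
     */
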
2023-07-03 | mm: validate the mm before dropping the mmap lock | Linus Torvalds | 1 | -2/+1

Commit 408579cd627a ("mm: Update do_vmi_align_munmap() return semantics") made the return value and locking semantics of do_vmi_align_munmap() more straightforward, but in the process it ended up unlocking the mmap lock just a tad too early: the debug code doing the mmap layout validation still needs to run with the lock held, or things might change under it while it's trying to validate things.

So just move the unlocking to after the validate_mm() call.

Reported-by: kernel test robot <oliver.sang@intel.com>
Link: https://lore.kernel.org/lkml/ZKIsoMOT71uwCIZX@xsang-OptiPlex-9020/
Fixes: 408579cd627a ("mm: Update do_vmi_align_munmap() return semantics")
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

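The corrected ordering at the end of the function, simplified to its two essential lines:

    /* Run the debug-mode layout validation while the mmap lock is
     * still held, and only then drop it. */
    validate_mm(mm);
    mmap_write_unlock(mm);
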
2023-07-03 | parisc: syscalls: Avoid compiler warnings with W=1 | Helge Deller | 1 | -0/+3

We do not want to add prototypes for all parisc specific syscalls, so simply drop such warnings when building the kernel.

Signed-off-by: Helge Deller <deller@gmx.de>

2023-07-03 | parisc: math-emu: Avoid compiler warnings with W=1 | Helge Deller | 1 | -1/+2

The math-emu code is a snapshot from the HP-UX kernel. It has been modified as little as possible. See arch/parisc/math-emu/README.

Signed-off-by: Helge Deller <deller@gmx.de>

2023-07-03 | parisc: Raise minimal GCC version to 12.0.0 | Helge Deller | 1 | -2/+2

Raise the minimum gcc version for parisc64 to 12.0.0 (for the __int128 type) and keep 5.1.0 as the minimum for the 32-bit parisc target.

Fixes: 8664645ade97 ("parisc: Raise minimal GCC version")
Signed-off-by: Helge Deller <deller@gmx.de>

2023-07-03 | parisc: unwind: Avoid missing prototype warning for handle_interruption() | Helge Deller | 2 | -2/+4

Signed-off-by: Helge Deller <deller@gmx.de>

2023-07-03 | execve: always mark stack as growing down during early stack setup | Linus Torvalds | 1 | -1/+3

While our user stacks can grow either down (all common architectures) or up (parisc and the ia64 register stack), the initial stack setup when we copy the argument and environment strings to the new stack at execve() time is always done by extending the stack downwards.

But it turns out that in commit 8d7071af8907 ("mm: always expand the stack with the mmap write lock held"), as part of making the stack growing code more robust, 'expand_downwards()' was now made to actually check the vma flags:

    if (!(vma->vm_flags & VM_GROWSDOWN))
            return -EFAULT;

and that meant that this execve-time stack expansion started failing on parisc, because on that architecture, the stack flags do not contain the VM_GROWSDOWN bit.

At the same time the new check in expand_downwards() is clearly correct, and simplified the callers, so let's not remove it.

The solution is instead to just codify the fact that yes, during execve(), the stack grows down. This not only matches reality, it ends up being particularly simple: we already have special execve-time flags for the stack (VM_STACK_INCOMPLETE_SETUP) and use those flags to avoid page migration during this setup time (see vma_is_temporary_stack() and invalid_migration_vma()).

So just add VM_GROWSDOWN to that set of temporary flags, and now our stack flags automatically match reality, and the parisc stack expansion works again.

Note that the VM_STACK_INCOMPLETE_SETUP bits will be cleared when the stack is finalized, so we only add the extra VM_GROWSDOWN bit on CONFIG_STACK_GROWSUP architectures (ie parisc) rather than adding it in general.

Link: https://lore.kernel.org/all/612eaa53-6904-6e16-67fc-394f4faa0e16@bell.net/
Link: https://lore.kernel.org/all/5fd98a09-4792-1433-752d-029ae3545168@gmx.de/
Fixes: 8d7071af8907 ("mm: always expand the stack with the mmap write lock held")
Reported-by: John David Anglin <dave.anglin@bell.net>
Reported-and-tested-by: Helge Deller <deller@gmx.de>
Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

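The flag change described above, sketched in simplified form (the real #define lives in mm internals and may differ slightly):

    /* Temporary execve-time stack flags; on GROWSUP architectures the
     * set now includes VM_GROWSDOWN so expand_downwards() accepts the
     * not-yet-finalized stack vma. */
    #ifdef CONFIG_STACK_GROWSUP
    #define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ | VM_GROWSDOWN)
    #else
    #define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ)
    #endif
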
2023-07-03 | parisc: smp: Add declaration for start_cpu_itimer() | Helge Deller | 2 | -2/+1

Avoid gcc warning about missing prototype for start_cpu_itimer().

Signed-off-by: Helge Deller <deller@gmx.de>

2023-07-03 | parisc: pdt: Get prototype for arch_report_meminfo() | Helge Deller | 1 | -0/+1

Include linux/proc_fs.h to avoid a compiler warning about a missing prototype for 'arch_report_meminfo'.

Signed-off-by: Helge Deller <deller@gmx.de>

2023-07-03 | vhost: Make parameter names match for vhost_get_vq_desc() | Xianting Tian | 1 | -1/+1

The parameter name in the function declaration and definition should be the same:

    drivers/vhost/vhost.h: int vhost_get_vq_desc(..., unsigned int iov_count, ...);
    drivers/vhost/vhost.c: int vhost_get_vq_desc(..., unsigned int iov_size, ...)

Signed-off-by: Xianting Tian <xianting.tian@linux.alibaba.com>
Message-Id: <20230621093835.36878-1-xianting.tian@linux.alibaba.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | vduse: fix NULL pointer dereference | Maxime Coquelin | 1 | -1/+5

The vduse_vdpa_set_vq_affinity callback can be called with a NULL value as cpu_mask when deleting the vduse device. This patch resets the virtqueue's IRQ affinity mask value to set all CPUs instead of dereferencing the NULL cpu_mask.

    [ 4760.952149] BUG: kernel NULL pointer dereference, address: 0000000000000000
    [ 4760.959110] #PF: supervisor read access in kernel mode
    [ 4760.964247] #PF: error_code(0x0000) - not-present page
    [ 4760.969385] PGD 0 P4D 0
    [ 4760.971927] Oops: 0000 [#1] PREEMPT SMP PTI
    [ 4760.976112] CPU: 13 PID: 2346 Comm: vdpa Not tainted 6.4.0-rc6+ #4
    [ 4760.982291] Hardware name: Dell Inc. PowerEdge R640/0W23H8, BIOS 2.8.1 06/26/2020
    [ 4760.989769] RIP: 0010:memcpy_orig+0xc5/0x130
    [ 4760.994049] Code: 16 f8 4c 89 07 4c 89 4f 08 4c 89 54 17 f0 4c 89 5c 17 f8 c3 cc cc cc cc 66 66 2e 0f 1f 84 00 00 00 00 00 66 90 83 fa 08 72 1b <4c> 8b 06 4c 8b 4c 16 f8 4c 89 07 4c 89 4c 17 f8 c3 cc cc cc cc 66
    [ 4761.012793] RSP: 0018:ffffb1d565abb830 EFLAGS: 00010246
    [ 4761.018020] RAX: ffff9f4bf6b27898 RBX: ffff9f4be23969c0 RCX: ffff9f4bcadf6400
    [ 4761.025152] RDX: 0000000000000008 RSI: 0000000000000000 RDI: ffff9f4bf6b27898
    [ 4761.032286] RBP: 0000000000000000 R08: 0000000000000008 R09: 0000000000000000
    [ 4761.039416] R10: 0000000000000000 R11: 0000000000000600 R12: 0000000000000000
    [ 4761.046549] R13: 0000000000000000 R14: 0000000000000080 R15: ffffb1d565abbb10
    [ 4761.053680] FS:  00007f64c2ec2740(0000) GS:ffff9f635f980000(0000) knlGS:0000000000000000
    [ 4761.061765] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [ 4761.067513] CR2: 0000000000000000 CR3: 0000001875270006 CR4: 00000000007706e0
    [ 4761.074645] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    [ 4761.081775] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    [ 4761.088909] PKRU: 55555554
    [ 4761.091620] Call Trace:
    [ 4761.094074]  <TASK>
    [ 4761.096180]  ? __die+0x1f/0x70
    [ 4761.099238]  ? page_fault_oops+0x171/0x4f0
    [ 4761.103340]  ? exc_page_fault+0x7b/0x180
    [ 4761.107265]  ? asm_exc_page_fault+0x22/0x30
    [ 4761.111460]  ? memcpy_orig+0xc5/0x130
    [ 4761.115126]  vduse_vdpa_set_vq_affinity+0x3e/0x50 [vduse]
    [ 4761.120533]  virtnet_clean_affinity.part.0+0x3d/0x90 [virtio_net]
    [ 4761.126635]  remove_vq_common+0x1a4/0x250 [virtio_net]
    [ 4761.131781]  virtnet_remove+0x5d/0x70 [virtio_net]
    [ 4761.136580]  virtio_dev_remove+0x3a/0x90
    [ 4761.140509]  device_release_driver_internal+0x19b/0x200
    [ 4761.145742]  bus_remove_device+0xc2/0x130
    [ 4761.149755]  device_del+0x158/0x3e0
    [ 4761.153245]  ? kernfs_find_ns+0x35/0xc0
    [ 4761.157086]  device_unregister+0x13/0x60
    [ 4761.161010]  unregister_virtio_device+0x11/0x20
    [ 4761.165543]  device_release_driver_internal+0x19b/0x200
    [ 4761.170770]  bus_remove_device+0xc2/0x130
    [ 4761.174782]  device_del+0x158/0x3e0
    [ 4761.178276]  ? __pfx_vdpa_name_match+0x10/0x10 [vdpa]
    [ 4761.183336]  device_unregister+0x13/0x60
    [ 4761.187260]  vdpa_nl_cmd_dev_del_set_doit+0x63/0xe0 [vdpa]

Fixes: 28f6288eb63d ("vduse: Support set_vq_affinity callback")
Cc: xieyongji@bytedance.com
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Message-Id: <20230622204851.318125-1-maxime.coquelin@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Xie Yongji <xieyongji@bytedance.com>

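A sketch consistent with the description (struct and field names are assumptions, not lifted from the patch):

    static int vduse_vdpa_set_vq_affinity(struct vdpa_device *vdpa, u16 idx,
                                          const struct cpumask *cpu_mask)
    {
            struct vduse_dev *dev = vdpa_to_vduse(vdpa);

            /* A NULL mask means "no constraint": fall back to all CPUs
             * instead of copying from a NULL pointer. */
            if (cpu_mask)
                    cpumask_copy(&dev->vqs[idx]->irq_affinity, cpu_mask);
            else
                    cpumask_setall(&dev->vqs[idx]->irq_affinity);

            return 0;
    }
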
2023-07-03 | vhost: Allow worker switching while work is queueing | Mike Christie | 3 | -46/+115

This patch drops the requirement that we can only switch workers if work has not been queued, by using RCU for the vq based queueing paths and a mutex for the device wide flush.

We can also use this to support SIGKILL properly in the future, where we should exit almost immediately after getting that signal. With this patch, when get_signal returns true, we can set the vq->worker to NULL and do a synchronize_rcu to prevent new work from being queued to the vhost_task that has been killed.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-18-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

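The shape of the RCU protection described above, sketched with illustrative helper names (not the literal vhost code):

    /* Queueing side: dereference the worker under the RCU read lock,
     * so a concurrent switch cannot free it underneath us. */
    rcu_read_lock();
    worker = rcu_dereference(vq->worker);
    if (worker)
            queue_on_worker(worker, work);	/* hypothetical helper */
    rcu_read_unlock();

    /* Switching (or SIGKILL) side: publish the new pointer, then wait
     * for in-flight queuers before touching the old worker/task. */
    rcu_assign_pointer(vq->worker, new_worker);	/* or NULL on kill */
    synchronize_rcu();
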
2023-07-03 | vhost_scsi: add support for worker ioctls | Mike Christie | 1 | -0/+8

This has vhost-scsi support the worker ioctls by calling the vhost_worker_ioctl helper.

With a single worker, the single thread becomes a bottleneck when trying to use 3 or more virtqueues like:

    fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
        --ioengine=libaio --iodepth=128 --numjobs=3

With the patches and doing a worker per vq, we can scale to at least 16 vCPUs/vqs (that's my system limit) with the same fio command above with numjobs=16:

    fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
        --ioengine=libaio --iodepth=64 --numjobs=16

which gives around 2002K IOPs.

Note that for testing I dropped depth to 64 above because the vhost/virt layer supports only 1024 total commands per device. And the only tuning I did was set LIO's emulate_pr to 0 to avoid LIO's PR lock in the main IO path, which becomes an issue at around 12 jobs/virtqueues.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-17-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | vhost: allow userspace to create workers | Mike Christie | 4 | -1/+193

For vhost-scsi with 3 vqs or more and a workload that tries to use them in parallel like:

    fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
        --ioengine=libaio --iodepth=128 --numjobs=3

the single vhost worker thread will become a bottleneck and we are stuck at around 500K IOPs no matter how many jobs, virtqueues, and CPUs are used.

To better utilize virtqueues and available CPUs, this patch allows userspace to create workers and bind them to vqs. You can have N workers per dev and also share N workers with M vqs on that dev. A usage sketch follows the list below.

This patch adds the interface related code and the next patch will hook vhost-scsi into it. The patches do not try to hook net and vsock into the interface because:

1. Multiple workers don't seem to help vsock. The problem is that with only 2 virtqueues we never fully use the existing worker when doing bidirectional tests. This seems to match vhost-scsi, where we don't see the worker as a bottleneck until 3 virtqueues are used.

2. net already has a way to use multiple workers.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-16-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

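A userspace sketch of the intended flow, using the ioctl names this series introduces (VHOST_NEW_WORKER / VHOST_ATTACH_VRING_WORKER); struct layouts are abbreviated and error handling is omitted:

    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    struct vhost_worker_state w = { 0 };
    struct vhost_vring_worker vw = { 0 };

    ioctl(dev_fd, VHOST_NEW_WORKER, &w);            /* kernel fills w.worker_id */
    vw.index = vq_index;                            /* bind this vq ...        */
    vw.worker_id = w.worker_id;                     /* ... to the new worker   */
    ioctl(dev_fd, VHOST_ATTACH_VRING_WORKER, &vw);
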
2023-07-03 | vhost: replace single worker pointer with xarray | Mike Christie | 2 | -17/+50

The next patch allows userspace to create multiple workers per device, so this patch replaces the vhost_worker pointer with an xarray so we can store multiple workers and look them up.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-15-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

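Generic xarray usage of the kind described (not the literal vhost code): allocate an ID for each worker, look it up later, erase on free:

    struct xarray workers;
    u32 id;

    xa_init_flags(&workers, XA_FLAGS_ALLOC1);	/* IDs start at 1 */
    if (!xa_alloc(&workers, &id, worker, xa_limit_32b, GFP_KERNEL)) {
            /* worker stored; id now identifies it to userspace */
    }
    worker = xa_load(&workers, id);		/* lookup by ID */
    xa_erase(&workers, id);			/* removal */
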
2023-07-03 | vhost: add helper to parse userspace vring state/file | Mike Christie | 1 | -7/+22

The next patches add new vhost worker ioctls which will need to get a vhost_virtqueue from a userspace struct which specifies the vq's index. This moves the vhost_vring_ioctl code to do this to a helper so it can be shared.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-14-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | vhost: remove vhost_work_queue | Mike Christie | 2 | -9/+2

vhost_work_queue is no longer used. Each driver is using the poll or vq based queueing, so remove vhost_work_queue.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-13-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | vhost_scsi: flush IO vqs then send TMF rsp | Mike Christie | 1 | -3/+18

With one worker, we will always send the scsi cmd responses and then send the TMF rsp, because LIO will always complete the scsi cmds first and then call into us to send the TMF response.

With multiple workers, the IO vq workers could be running while the TMF/ctl vq worker is running, so this has us do a flush before completing the TMF to make sure cmds are completed when its work is later queued and run.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-12-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | vhost_scsi: convert to vhost_vq_work_queue | Mike Christie | 1 | -9/+9

Convert from vhost_work_queue to vhost_vq_work_queue so we can remove vhost_work_queue.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-11-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | vhost_scsi: make SCSI cmd completion per vq | Mike Christie | 1 | -30/+26

This patch separates the scsi cmd completion code paths so we can complete cmds based on their vq instead of having all cmds complete on the same worker/CPU. This will be useful with the next patches that allow us to create multiple worker threads and bind them to different vqs, and we can have completions running on different threads/CPUs.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230626232307.97930-10-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | vhost_sock: convert to vhost_vq_work_queue | Mike Christie | 1 | -2/+2

Convert from vhost_work_queue to vhost_vq_work_queue, so we can drop vhost_work_queue.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-9-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | vhost: convert poll work to be vq based | Mike Christie | 3 | -6/+12

This has the drivers pass in their poll to vq mapping and then converts the core poll code to use the vq based helpers. In the next patches we will allow vqs to be handled by different workers, so to allow drivers to execute operations like queue, stop, flush, etc. on specific polls/vqs, we need to know the mappings.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-8-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | vhost: take worker or vq for flushing | Mike Christie | 2 | -2/+14

This patch has the core work flush function take a worker. When we support multiple workers, we can then flush each worker during device removal, stoppage, etc. It also adds a helper to flush specific virtqueues, so vhost-scsi can flush IO vqs from its ctl vq.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-7-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | vhost: take worker or vq instead of dev for queueing | Mike Christie | 2 | -16/+29

This patch has the core work queueing function take a worker, for when we support multiple workers. It also adds a helper that takes a vq during queueing, so modules can control which vq/worker to queue work on.

This temporarily leaves vhost_work_queue in place. It will be removed when the drivers are converted in the next patches.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-6-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | vhost, vhost_net: add helper to check if vq has work | Mike Christie | 3 | -5/+5

In the next patches each vq might have different workers, so one could have work but others do not. For net, we only want to check specific vqs, so this adds a helper to check if a vq has work pending and converts vhost-net to use it.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230626232307.97930-5-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | vhost: add vhost_worker pointer to vhost_virtqueue | Mike Christie | 2 | -7/+15

This patchset allows userspace to map vqs to different workers. This patch adds a worker pointer to the vq so in later patches in this set we can queue/flush specific vqs and their workers.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-4-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | vhost: dynamically allocate vhost_worker | Mike Christie | 2 | -25/+45

This patchset allows us to allocate multiple workers, so this has us move from the vhost_worker that's embedded in the vhost_dev to dynamically allocating it.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-3-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | vhost: create worker at end of vhost_dev_set_owner | Mike Christie | 1 | -6/+13

vsock can start queueing work after VHOST_VSOCK_SET_GUEST_CID, so after we have called vhost_worker_create it can be calling vhost_work_queue and trying to access the vhost worker/task. If vhost_dev_alloc_iovecs fails, then vhost_worker_free could free the worker/task from under vsock.

This moves vhost_worker_create to the end of vhost_dev_set_owner, where we know we can no longer fail in that path. If it fails after the VHOST_SET_OWNER and userspace closes the device, then the normal vsock release handling will do the right thing.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-2-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | virtio_bt: call scheduler when we free unused buffs | Xianting Tian | 1 | -0/+1

For virtio-net we were getting CPU stall warnings, and fixed it by calling the scheduler: see f8bb51043945 ("virtio_net: suppress cpu stall when free_unused_bufs"). This driver is similar, so theoretically the same logic applies.

Signed-off-by: Xianting Tian <xianting.tian@linux.alibaba.com>
Message-Id: <20230609131817.712867-4-xianting.tian@linux.alibaba.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

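The pattern, following the virtio-net fix (virtqueue_detach_unused_buf() and cond_resched() are the real kernel APIs; the loop is simplified):

    void *buf;

    while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {
            kfree(buf);
            cond_resched();	/* yield periodically to avoid CPU stall warnings */
    }
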
2023-07-03 | virtio-console: call scheduler when we free unused buffs | Xianting Tian | 1 | -0/+1

For virtio-net we were getting CPU stall warnings, and fixed it by calling the scheduler: see f8bb51043945 ("virtio_net: suppress cpu stall when free_unused_bufs"). This driver is similar, so theoretically the same logic applies.

Signed-off-by: Xianting Tian <xianting.tian@linux.alibaba.com>
Message-Id: <20230609131817.712867-3-xianting.tian@linux.alibaba.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | virtio-crypto: call scheduler when we free unused buffs | Xianting Tian | 1 | -0/+1

For virtio-net we were getting CPU stall warnings, and fixed it by calling the scheduler: see f8bb51043945 ("virtio_net: suppress cpu stall when free_unused_bufs"). This driver is similar, so theoretically the same logic applies.

Signed-off-by: Xianting Tian <xianting.tian@linux.alibaba.com>
Message-Id: <20230609131817.712867-2-xianting.tian@linux.alibaba.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | vDPA/ifcvf: implement new accessors for vq_state | Zhu Lingshan | 3 | -33/+17

This commit implements a better layout of the live migration bar; therefore, the accessors for virtqueue state have been refactored. This commit also adds a comment to the probing-ids list, indicating that this driver drives the F2000X-PL virtio-net device.

Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
Message-Id: <20230612151420.1019504-4-lingshan.zhu@intel.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | vDPA/ifcvf: detect and report max allowed vq size | Zhu Lingshan | 3 | -2/+35

Rather than using a hardcoded value, this commit detects and reports the maximum allowed virtqueue size.

Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
Message-Id: <20230612151420.1019504-3-lingshan.zhu@intel.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | vDPA/ifcvf: dynamically allocate vq data stores | Zhu Lingshan | 3 | -1/+6

This commit dynamically allocates the data stores for the virtqueues based on virtio_pci_common_cfg.num_queues.

Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
Message-Id: <20230612151420.1019504-2-lingshan.zhu@intel.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

2023-07-03 | ovl: move all parameter handling into params.{c,h} | Christian Brauner | 4 | -564/+581

While initially I thought that we couldn't move all new mount api handling into params.{c,h}, it turns out it is possible. So this just moves a good chunk of code out of super.c and into params.{c,h}.

Signed-off-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Amir Goldstein <amir73il@gmail.com>

2023-07-03 | kdb: move kdb_send_sig() declaration to a better header file | Daniel Thompson | 2 | -1/+2

kdb_send_sig() is defined in the signal code and called from kdb, but the declaration is part of the kdb internal code. Move the declaration to the shared header to avoid the warning:

    kernel/signal.c:4789:6: error: no previous prototype for 'kdb_send_sig' [-Werror=missing-prototypes]

Reported-by: Arnd Bergmann <arnd@arndb.de>
Closes: https://lore.kernel.org/lkml/20230517125423.930967-1-arnd@kernel.org/
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Link: https://lore.kernel.org/r/20230630201206.2396930-1-daniel.thompson@linaro.org
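
The shape of the fix: the prototype now lives in a header visible to both kernel/signal.c (the definer) and the kdb callers. The exact header location is per the commit subject; the signature shown matches the warning above:

    /* in a shared header such as include/linux/kdb.h */
    void kdb_send_sig(struct task_struct *p, int sig);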