path: root/tools/perf/scripts/python/export-to-postgresql.py
Age | Commit message | Author | Files | Lines
2023-07-03 | execve: always mark stack as growing down during early stack setup | Linus Torvalds | 1 | -1/+3

While our user stacks can grow either down (all common architectures) or up (parisc and the ia64 register stack), the initial stack setup when we copy the argument and environment strings to the new stack at execve() time is always done by extending the stack downwards.

But it turns out that in commit 8d7071af8907 ("mm: always expand the stack with the mmap write lock held"), as part of making the stack growing code more robust, 'expand_downwards()' was now made to actually check the vma flags:

    if (!(vma->vm_flags & VM_GROWSDOWN))
        return -EFAULT;

and that meant that this execve-time stack expansion started failing on parisc, because on that architecture, the stack flags do not contain the VM_GROWSDOWN bit.

At the same time the new check in expand_downwards() is clearly correct, and simplified the callers, so let's not remove it.

The solution is instead to just codify the fact that yes, during execve(), the stack grows down. This not only matches reality, it ends up being particularly simple: we already have special execve-time flags for the stack (VM_STACK_INCOMPLETE_SETUP) and use those flags to avoid page migration during this setup time (see vma_is_temporary_stack() and invalid_migration_vma()).

So just add VM_GROWSDOWN to that set of temporary flags, and now our stack flags automatically match reality, and the parisc stack expansion works again.

Note that the VM_STACK_INCOMPLETE_SETUP bits will be cleared when the stack is finalized, so we only add the extra VM_GROWSDOWN bit on CONFIG_STACK_GROWSUP architectures (ie parisc) rather than adding it in general.

Link: https://lore.kernel.org/all/612eaa53-6904-6e16-67fc-394f4faa0e16@bell.net/
Link: https://lore.kernel.org/all/5fd98a09-4792-1433-752d-029ae3545168@gmx.de/
Fixes: 8d7071af8907 ("mm: always expand the stack with the mmap write lock held")
Reported-by: John David Anglin <dave.anglin@bell.net>
Reported-and-tested-by: Helge Deller <deller@gmx.de>
Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
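An editor's sketch of what the description amounts to. Only VM_STACK_INCOMPLETE_SETUP, VM_GROWSDOWN, VM_RAND_READ/VM_RAND_WRITE, and CONFIG_STACK_GROWSUP are taken from the message and the mm headers; the VM_STACK_EARLY helper name is illustrative:

    /* Sketch: mark the early execve stack as growing down only on
     * architectures where the stack normally grows up (parisc). */
    #ifdef CONFIG_STACK_GROWSUP
    #define VM_STACK_EARLY	VM_GROWSDOWN	/* illustrative name */
    #else
    #define VM_STACK_EARLY	0
    #endif

    /* Temporary flags while the execve() stack is being set up; they
     * are cleared once the stack is finalized, so VM_GROWSDOWN never
     * leaks into the final parisc stack flags. */
    #define VM_STACK_INCOMPLETE_SETUP \
    	(VM_RAND_READ | VM_RAND_WRITE | VM_STACK_EARLY)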
2023-07-03 | vhost: Make parameter name match of vhost_get_vq_desc() | Xianting Tian | 1 | -1/+1

The parameter name in the function declaration and definition should be the same.

    drivers/vhost/vhost.h:
        int vhost_get_vq_desc(..., unsigned int iov_count, ...);
    drivers/vhost/vhost.c:
        int vhost_get_vq_desc(..., unsigned int iov_size, ...)

Signed-off-by: Xianting Tian <xianting.tian@linux.alibaba.com>
Message-Id: <20230621093835.36878-1-xianting.tian@linux.alibaba.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | vduse: fix NULL pointer dereference | Maxime Coquelin | 1 | -1/+5

vduse_vdpa_set_vq_affinity callback can be called with NULL value as cpu_mask when deleting the vduse device.

This patch resets virtqueue's IRQ affinity mask value to set all CPUs instead of dereferencing NULL cpu_mask.

[ 4760.952149] BUG: kernel NULL pointer dereference, address: 0000000000000000
[ 4760.959110] #PF: supervisor read access in kernel mode
[ 4760.964247] #PF: error_code(0x0000) - not-present page
[ 4760.969385] PGD 0 P4D 0
[ 4760.971927] Oops: 0000 [#1] PREEMPT SMP PTI
[ 4760.976112] CPU: 13 PID: 2346 Comm: vdpa Not tainted 6.4.0-rc6+ #4
[ 4760.982291] Hardware name: Dell Inc. PowerEdge R640/0W23H8, BIOS 2.8.1 06/26/2020
[ 4760.989769] RIP: 0010:memcpy_orig+0xc5/0x130
[ 4760.994049] Code: 16 f8 4c 89 07 4c 89 4f 08 4c 89 54 17 f0 4c 89 5c 17 f8 c3 cc cc cc cc 66 66 2e 0f 1f 84 00 00 00 00 00 66 90 83 fa 08 72 1b <4c> 8b 06 4c 8b 4c 16 f8 4c 89 07 4c 89 4c 17 f8 c3 cc cc cc cc 66
[ 4761.012793] RSP: 0018:ffffb1d565abb830 EFLAGS: 00010246
[ 4761.018020] RAX: ffff9f4bf6b27898 RBX: ffff9f4be23969c0 RCX: ffff9f4bcadf6400
[ 4761.025152] RDX: 0000000000000008 RSI: 0000000000000000 RDI: ffff9f4bf6b27898
[ 4761.032286] RBP: 0000000000000000 R08: 0000000000000008 R09: 0000000000000000
[ 4761.039416] R10: 0000000000000000 R11: 0000000000000600 R12: 0000000000000000
[ 4761.046549] R13: 0000000000000000 R14: 0000000000000080 R15: ffffb1d565abbb10
[ 4761.053680] FS: 00007f64c2ec2740(0000) GS:ffff9f635f980000(0000) knlGS:0000000000000000
[ 4761.061765] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 4761.067513] CR2: 0000000000000000 CR3: 0000001875270006 CR4: 00000000007706e0
[ 4761.074645] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 4761.081775] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 4761.088909] PKRU: 55555554
[ 4761.091620] Call Trace:
[ 4761.094074]  <TASK>
[ 4761.096180]  ? __die+0x1f/0x70
[ 4761.099238]  ? page_fault_oops+0x171/0x4f0
[ 4761.103340]  ? exc_page_fault+0x7b/0x180
[ 4761.107265]  ? asm_exc_page_fault+0x22/0x30
[ 4761.111460]  ? memcpy_orig+0xc5/0x130
[ 4761.115126]  vduse_vdpa_set_vq_affinity+0x3e/0x50 [vduse]
[ 4761.120533]  virtnet_clean_affinity.part.0+0x3d/0x90 [virtio_net]
[ 4761.126635]  remove_vq_common+0x1a4/0x250 [virtio_net]
[ 4761.131781]  virtnet_remove+0x5d/0x70 [virtio_net]
[ 4761.136580]  virtio_dev_remove+0x3a/0x90
[ 4761.140509]  device_release_driver_internal+0x19b/0x200
[ 4761.145742]  bus_remove_device+0xc2/0x130
[ 4761.149755]  device_del+0x158/0x3e0
[ 4761.153245]  ? kernfs_find_ns+0x35/0xc0
[ 4761.157086]  device_unregister+0x13/0x60
[ 4761.161010]  unregister_virtio_device+0x11/0x20
[ 4761.165543]  device_release_driver_internal+0x19b/0x200
[ 4761.170770]  bus_remove_device+0xc2/0x130
[ 4761.174782]  device_del+0x158/0x3e0
[ 4761.178276]  ? __pfx_vdpa_name_match+0x10/0x10 [vdpa]
[ 4761.183336]  device_unregister+0x13/0x60
[ 4761.187260]  vdpa_nl_cmd_dev_del_set_doit+0x63/0xe0 [vdpa]

Fixes: 28f6288eb63d ("vduse: Support set_vq_affinity callback")
Cc: xieyongji@bytedance.com
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Message-Id: <20230622204851.318125-1-maxime.coquelin@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Xie Yongji <xieyongji@bytedance.com>
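The shape of the fix, as an editor's hedged sketch. The callback signature follows the vdpa set_vq_affinity op; the vduse-internal field and helper names are assumptions, not lifted from the driver source:

    /* Sketch: if no mask is supplied (as happens on device deletion),
     * fall back to "all CPUs" instead of copying through a NULL pointer. */
    static int vduse_vdpa_set_vq_affinity(struct vdpa_device *vdpa, u16 idx,
    				      const struct cpumask *cpu_mask)
    {
    	struct vduse_dev *dev = vdpa_to_vduse(vdpa);	/* assumed helper */

    	if (cpu_mask)
    		cpumask_copy(&dev->vqs[idx]->irq_affinity, cpu_mask);
    	else
    		cpumask_setall(&dev->vqs[idx]->irq_affinity);	/* reset: all CPUs */
    	return 0;
    }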
2023-07-03 | vhost: Allow worker switching while work is queueing | Mike Christie | 3 | -46/+115

This patch drops the requirement that we can only switch workers if work has not been queued, by using RCU for the vq based queueing paths and a mutex for the device wide flush.

We can also use this to support SIGKILL properly in the future, where we should exit almost immediately after getting that signal. With this patch, when get_signal returns true, we can set the vq->worker to NULL and do a synchronize_rcu to prevent new work from being queued to the vhost_task that has been killed.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-18-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
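The queueing half of the pattern described above, as an editor's hedged sketch; the vhost_worker_queue() helper and the exact field layout are assumptions, not lifted from the patch:

    /* Sketch: vq-based queueing dereferences the worker under RCU, so
     * the worker can be swapped out (or cleared on SIGKILL) while work
     * is being queued. */
    static bool vhost_vq_queue_sketch(struct vhost_virtqueue *vq,
    				  struct vhost_work *work)
    {
    	struct vhost_worker *worker;
    	bool queued = false;

    	rcu_read_lock();
    	worker = rcu_dereference(vq->worker);
    	if (worker)
    		queued = vhost_worker_queue(worker, work);	/* assumed helper */
    	rcu_read_unlock();
    	return queued;
    }

    /* Switching side (sketch): publish the new worker, then wait out
     * any readers still holding the old one:
     *	rcu_assign_pointer(vq->worker, new_worker);
     *	synchronize_rcu();
     */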
2023-07-03 | vhost_scsi: add support for worker ioctls | Mike Christie | 1 | -0/+8

This has vhost-scsi support the worker ioctls by calling the vhost_worker_ioctl helper.

With a single worker, the single thread becomes a bottleneck when trying to use 3 or more virtqueues like:

    fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
        --ioengine=libaio --iodepth=128 --numjobs=3

With the patches and doing a worker per vq, we can scale to at least 16 vCPUs/vqs (that's my system limit) with the same fio command above with numjobs=16:

    fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
        --ioengine=libaio --iodepth=64 --numjobs=16

which gives around 2002K IOPs.

Note that for testing I dropped depth to 64 above because the vhost/virt layer supports only 1024 total commands per device. And the only tuning I did was set LIO's emulate_pr to 0 to avoid LIO's PR lock in the main IO path, which becomes an issue at around 12 jobs/virtqueues.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-17-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | vhost: allow userspace to create workers | Mike Christie | 4 | -1/+193

For vhost-scsi with 3 vqs or more and a workload that tries to use them in parallel, like:

    fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
        --ioengine=libaio --iodepth=128 --numjobs=3

the single vhost worker thread will become a bottleneck and we are stuck at around 500K IOPs no matter how many jobs, virtqueues, and CPUs are used.

To better utilize virtqueues and available CPUs, this patch allows userspace to create workers and bind them to vqs. You can have N workers per dev and also share N workers with M vqs on that dev.

This patch adds the interface related code, and the next patch will hook vhost-scsi into it. The patches do not try to hook net and vsock into the interface because:

1. Multiple workers don't seem to help vsock. The problem is that with only 2 virtqueues we never fully use the existing worker when doing bidirectional tests. This seems to match vhost-scsi, where we don't see the worker as a bottleneck until 3 virtqueues are used.

2. net already has a way to use multiple workers.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-16-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
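An editor's hedged sketch of how userspace might drive the new interface. The ioctl names and structs (VHOST_NEW_WORKER, VHOST_ATTACH_VRING_WORKER, struct vhost_worker_state, struct vhost_vring_worker) match this series' uapi additions as the editor understands them, but treat them as assumptions rather than a verified reference:

    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    /* Sketch: create one extra worker and bind virtqueue 2 to it. */
    static int bind_vq2_to_new_worker(int vhost_fd)
    {
    	struct vhost_worker_state state = { 0 };
    	struct vhost_vring_worker ring_worker = { 0 };

    	if (ioctl(vhost_fd, VHOST_NEW_WORKER, &state) < 0)
    		return -1;

    	ring_worker.index = 2;			/* vq to rebind */
    	ring_worker.worker_id = state.worker_id;
    	return ioctl(vhost_fd, VHOST_ATTACH_VRING_WORKER, &ring_worker);
    }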
2023-07-03 | vhost: replace single worker pointer with xarray | Mike Christie | 2 | -17/+50

The next patch allows userspace to create multiple workers per device, so this patch replaces the vhost_worker pointer with an xarray so we can store multiple workers and look them up.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-15-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | vhost: add helper to parse userspace vring state/file | Mike Christie | 1 | -7/+22

The next patches add new vhost worker ioctls which will need to get a vhost_virtqueue from a userspace struct which specifies the vq's index. This moves the code in vhost_vring_ioctl that does this into a helper so it can be shared.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-14-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | vhost: remove vhost_work_queue | Mike Christie | 2 | -9/+2

vhost_work_queue is no longer used. Each driver is using the poll or vq based queueing, so remove vhost_work_queue.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-13-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | vhost_scsi: flush IO vqs then send TMF rsp | Mike Christie | 1 | -3/+18

With one worker, we will always send the scsi cmd responses and then send the TMF rsp, because LIO will always complete the scsi cmds first and then call into us to send the TMF response.

With multiple workers, the IO vq workers could be running while the TMF/ctl vq worker is running, so this has us do a flush before completing the TMF to make sure cmds are completed when its work is later queued and run.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-12-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | vhost_scsi: convert to vhost_vq_work_queue | Mike Christie | 1 | -9/+9

Convert from vhost_work_queue to vhost_vq_work_queue so we can remove vhost_work_queue.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-11-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | vhost_scsi: make SCSI cmd completion per vq | Mike Christie | 1 | -30/+26

This patch separates the scsi cmd completion code paths so we can complete cmds based on their vq instead of having all cmds complete on the same worker/CPU. This will be useful with the next patches that allow us to create multiple worker threads and bind them to different vqs, and we can have completions running on different threads/CPUs.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230626232307.97930-10-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | vhost_sock: convert to vhost_vq_work_queue | Mike Christie | 1 | -2/+2

Convert from vhost_work_queue to vhost_vq_work_queue, so we can drop vhost_work_queue.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-9-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | vhost: convert poll work to be vq based | Mike Christie | 3 | -6/+12

This has the drivers pass in their poll-to-vq mapping and then converts the core poll code to use the vq based helpers. In the next patches we will allow vqs to be handled by different workers, so to allow drivers to execute operations like queue, stop, flush, etc. on specific polls/vqs we need to know the mappings.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-8-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | vhost: take worker or vq for flushing | Mike Christie | 2 | -2/+14

This patch has the core work flush function take a worker. When we support multiple workers, we can then flush each worker during device removal, stoppage, etc.

It also adds a helper to flush specific virtqueues, so vhost-scsi can flush IO vqs from its ctl vq.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-7-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
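The classic vhost flush idiom the helper builds on, sketched by the editor: vhost_work_init() and the flush-struct pattern exist in drivers/vhost/vhost.c, while vhost_worker_queue() is an assumed name for the worker-scoped queueing this series adds:

    struct vhost_flush_struct {
    	struct vhost_work work;
    	struct completion wait_event;
    };

    static void vhost_flush_work(struct vhost_work *work)
    {
    	struct vhost_flush_struct *s =
    		container_of(work, struct vhost_flush_struct, work);

    	complete(&s->wait_event);
    }

    /* Sketch: flushing a worker = queueing a marker work and waiting
     * for it to run, which implies all earlier work has finished. */
    static void vhost_worker_flush_sketch(struct vhost_worker *worker)
    {
    	struct vhost_flush_struct flush;

    	init_completion(&flush.wait_event);
    	vhost_work_init(&flush.work, vhost_flush_work);
    	vhost_worker_queue(worker, &flush.work);	/* assumed helper */
    	wait_for_completion(&flush.wait_event);
    }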
2023-07-03 | vhost: take worker or vq instead of dev for queueing | Mike Christie | 2 | -16/+29

This patch has the core work queueing function take a worker, for when we support multiple workers. It also adds a helper that takes a vq during queueing so modules can control which vq/worker to queue work on.

This temporarily leaves vhost_work_queue in place. It will be removed when the drivers are converted in the next patches.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-6-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | vhost, vhost_net: add helper to check if vq has work | Mike Christie | 3 | -5/+5

In the next patches each vq might have different workers, so one could have work but others do not. For net, we only want to check specific vqs, so this adds a helper to check if a vq has work pending and converts vhost-net to use it.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230626232307.97930-5-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | vhost: add vhost_worker pointer to vhost_virtqueue | Mike Christie | 2 | -7/+15

This patchset allows userspace to map vqs to different workers. This patch adds a worker pointer to the vq so in later patches in this set we can queue/flush specific vqs and their workers.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-4-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | vhost: dynamically allocate vhost_worker | Mike Christie | 2 | -25/+45

This patchset allows us to allocate multiple workers, so this has us move from the vhost_worker that's embedded in the vhost_dev to dynamically allocating it.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-3-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | vhost: create worker at end of vhost_dev_set_owner | Mike Christie | 1 | -6/+13

vsock can start queueing work after VHOST_VSOCK_SET_GUEST_CID, so after we have called vhost_worker_create it can be calling vhost_work_queue and trying to access the vhost worker/task. If vhost_dev_alloc_iovecs fails, then vhost_worker_free could free the worker/task from under vsock.

This moves vhost_worker_create to the end of vhost_dev_set_owner where we know we can no longer fail in that path. If it fails after the VHOST_SET_OWNER and userspace closes the device, then the normal vsock release handling will do the right thing.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230626232307.97930-2-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | virtio_bt: call scheduler when we free unused buffs | Xianting Tian | 1 | -0/+1

For virtio-net we were getting CPU stall warnings, and fixed it by calling the scheduler: see f8bb51043945 ("virtio_net: suppress cpu stall when free_unused_bufs"). This driver is similar so theoretically the same logic applies.

Signed-off-by: Xianting Tian <xianting.tian@linux.alibaba.com>
Message-Id: <20230609131817.712867-4-xianting.tian@linux.alibaba.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
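An editor's sketch of the pattern this patch and the two sibling patches below (virtio-console, virtio-crypto) apply: yield the CPU between buffer frees so a long teardown loop cannot stall it. virtqueue_detach_unused_buf() and cond_resched() are real kernel APIs; the loop shape and the kfree() are illustrative:

    #include <linux/sched.h>
    #include <linux/virtio.h>

    /* Sketch: drain leftover buffers at device teardown, yielding the
     * CPU on each iteration so a large backlog cannot trigger CPU
     * stall warnings. */
    static void free_unused_bufs_sketch(struct virtqueue *vq)
    {
    	void *buf;

    	while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {
    		kfree(buf);		/* however the driver frees buffers */
    		cond_resched();		/* the one-line fix */
    	}
    }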
2023-07-03 | virtio-console: call scheduler when we free unused buffs | Xianting Tian | 1 | -0/+1

For virtio-net we were getting CPU stall warnings, and fixed it by calling the scheduler: see f8bb51043945 ("virtio_net: suppress cpu stall when free_unused_bufs"). This driver is similar so theoretically the same logic applies.

Signed-off-by: Xianting Tian <xianting.tian@linux.alibaba.com>
Message-Id: <20230609131817.712867-3-xianting.tian@linux.alibaba.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | virtio-crypto: call scheduler when we free unused buffs | Xianting Tian | 1 | -0/+1

For virtio-net we were getting CPU stall warnings, and fixed it by calling the scheduler: see f8bb51043945 ("virtio_net: suppress cpu stall when free_unused_bufs"). This driver is similar so theoretically the same logic applies.

Signed-off-by: Xianting Tian <xianting.tian@linux.alibaba.com>
Message-Id: <20230609131817.712867-2-xianting.tian@linux.alibaba.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | vDPA/ifcvf: implement new accessors for vq_state | Zhu Lingshan | 3 | -33/+17

This commit implements a better layout of the live migration bar; therefore, the accessors for virtqueue state have been refactored.

This commit also adds a comment to the probing-ids list, indicating that this driver drives F2000X-PL virtio-net.

Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
Message-Id: <20230612151420.1019504-4-lingshan.zhu@intel.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | vDPA/ifcvf: detect and report max allowed vq size | Zhu Lingshan | 3 | -2/+35

Rather than using a hardcoded value, this commit detects and reports the maximum allowed size of the virtqueues.

Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
Message-Id: <20230612151420.1019504-3-lingshan.zhu@intel.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | vDPA/ifcvf: dynamic allocate vq data stores | Zhu Lingshan | 3 | -1/+6

This commit dynamically allocates the data stores for the virtqueues based on virtio_pci_common_cfg.num_queues.

Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
Message-Id: <20230612151420.1019504-2-lingshan.zhu@intel.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-07-03 | kdb: move kdb_send_sig() declaration to a better header file | Daniel Thompson | 2 | -1/+2

kdb_send_sig() is defined in the signal code and called from kdb, but the declaration is part of the kdb internal code. Move the declaration to the shared header to avoid the warning:

    kernel/signal.c:4789:6: error: no previous prototype for 'kdb_send_sig' [-Werror=missing-prototypes]

Reported-by: Arnd Bergmann <arnd@arndb.de>
Closes: https://lore.kernel.org/lkml/20230517125423.930967-1-arnd@kernel.org/
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Link: https://lore.kernel.org/r/20230630201206.2396930-1-daniel.thompson@linaro.org
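What the move amounts to, as an editor's sketch; the exact signature is an assumption based on the function's role (sending a signal to a task from the debugger):

    /* Sketch: the prototype moves out of kdb's private header into the
     * shared <linux/kdb.h>, so kernel/signal.c (which defines the
     * function) sees a previous prototype and -Wmissing-prototypes
     * stays quiet. */
    struct task_struct;
    void kdb_send_sig(struct task_struct *p, int sig);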
2023-07-02 | fs: move cleanup from init_file() into its callers | Amir Goldstein | 1 | -4/+10

The use of file_free_rcu() in init_file() to free the struct that was allocated by the caller was hacky and we got what we deserved. Let init_file() and its callers each take care of cleaning up their own allocated resources on error.

Fixes: 62d53c4a1dfe ("fs: use backing_file container for internal files with "fake" f_path") # mainline only
Reported-and-tested-by: syzbot+ada42aab05cf51b00e98@syzkaller.appspotmail.com
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Message-Id: <20230701171134.239409-1-amir73il@gmail.com>
Signed-off-by: Christian Brauner <brauner@kernel.org>
2023-07-01 | pid: use struct_size_t() helper | Christian Brauner | 2 | -2/+2

Before commit d67790ddf021 ("overflow: Add struct_size_t() helper") only struct_size() existed, which expects a valid pointer instance containing the flexible array. However, when we determine the default struct pid allocation size for the associated kmem cache of a pid namespace, we need to take the nesting depth of the pid namespace into account without a variable instance necessarily being available.

In commit b69f0aeb0689 ("pid: Replace struct pid 1-element array with flex-array") we used to handle this the old-fashioned way and cast NULL to a struct pid pointer type. However, we do apparently have a dedicated struct_size_t() helper for exactly this case. So switch to that.

Suggested-by: Kees Cook <keescook@chromium.org>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
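A hedged before/after sketch of the call site; struct_size() and struct_size_t() are the real helpers from <linux/overflow.h>, while the surrounding expression (the "ns->level + 1" count) is the editor's assumption about the pid code:

    #include <linux/overflow.h>

    /* Before: fake a pointer just to satisfy struct_size()'s signature. */
    size = struct_size((struct pid *)NULL, numbers, ns->level + 1);

    /* After: struct_size_t() takes the type itself, no instance needed. */
    size = struct_size_t(struct pid, numbers, ns->level + 1);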
2023-07-01 | mm: Update do_vmi_align_munmap() return semantics | Liam R. Howlett | 3 | -69/+57

Since do_vmi_align_munmap() will always honor the downgrade request on success, the callers no longer have to deal with confusing return codes. Since all callers that request a downgrade actually want the lock to be dropped, change the downgrade to an unlock request.

Note that the lock still needs to be held in read mode during the page table clean up to avoid races with a map request.

Update do_vmi_align_munmap() to return 0 for success. Clean up the callers and comments to always expect the unlock to be honored on the success path. The error path will always leave the lock untouched.

As part of the cleanup, the wrapper function do_vmi_munmap() and callers to the wrapper are also updated.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/linux-mm/20230629191414.1215929-1-willy@infradead.org/
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2023-07-01 | mm: Always downgrade mmap_lock if requested | Matthew Wilcox (Oracle) | 1 | -13/+2

Now that stack growth must always hold the mmap_lock for write, we can always downgrade the mmap_lock to read and safely unmap pages from the page table, even if we're next to a stack.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2023-07-01 | xtensa: fix lock_mm_and_find_vma in case VMA not found | Max Filippov | 1 | -1/+6

The MMU version of lock_mm_and_find_vma() releases the mm lock before returning when the VMA is not found. Do the same in the noMMU version. This fixes a hang on an attempt to handle a protection fault.

Fixes: d85a143b69ab ("xtensa: fix NOMMU build with lock_mm_and_find_vma() conversion")
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
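An editor's sketch of the contract being restored; the real noMMU body in arch/xtensa differs, only the unlock-on-failure behavior is the point:

    /* Sketch: on lookup failure, drop the mm lock before returning NULL
     * so the fault handler never unlocks a lock it no longer holds and
     * never hangs on a lock that was leaked locked. */
    static struct vm_area_struct *lock_mm_and_find_vma_sketch(
    		struct mm_struct *mm, unsigned long addr)
    {
    	struct vm_area_struct *vma;

    	mmap_read_lock(mm);
    	vma = find_vma(mm, addr);
    	if (!vma)
    		mmap_read_unlock(mm);	/* the missing release */
    	return vma;
    }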
2023-06-30 | xtensa: fix NOMMU build with lock_mm_and_find_vma() conversion | Linus Torvalds | 2 | -2/+14

It turns out that xtensa has a really odd configuration situation: you can do a no-MMU config, but still have the page fault code enabled. Which doesn't sound all that sensible, but it turns out that xtensa can have protection faults even without the MMU, and we have this:

    config PFAULT
            bool "Handle protection faults" if EXPERT && !MMU
            default y
            help
              Handle protection faults. MMU configurations must enable it.
              noMMU configurations may disable it if used memory map never
              generates protection faults or faults are always fatal.

              If unsure, say Y.

which completely violated my expectations of the page fault handling.

End result: Guenter reports that the xtensa no-MMU builds all fail with

    arch/xtensa/mm/fault.c: In function ‘do_page_fault’:
    arch/xtensa/mm/fault.c:133:8: error: implicit declaration of function ‘lock_mm_and_find_vma’

because I never exposed the new lock_mm_and_find_vma() function for the no-MMU case. Doing so is simple enough, and fixes the problem.

Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
Fixes: a050ba1e7422 ("mm/fault: convert remaining simple cases to lock_mm_and_find_vma()")
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2023-06-30 | sysctl: set variable sysctl_mount_point storage-class-specifier to static | Tom Rix | 1 | -1/+1

smatch reports:

    fs/proc/proc_sysctl.c:32:18: warning: symbol 'sysctl_mount_point' was not declared. Should it be static?

This variable is only used in its defining file, so it should be static.

Signed-off-by: Tom Rix <trix@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
2023-06-30 | mailbox: ti-msgmgr: Fill non-message tx data fields with 0x0 | Nishanth Menon | 1 | -3/+9

Sec proxy/message manager data buffer is 60 bytes, with the last of the registers indicating transmission completion. This however poses a bit of a challenge.

The backing memory for sec proxy/message manager is regular memory, and all sec proxy does is trigger a burst of all 60 bytes of data over to the target thread backing ring accelerator. It doesn't do a memory scrub when it moves data out in the burst. When we transmit multiple messages, remnants of the previous message are also transmitted, which results in random data being set in TISCI fields of messages that have since been extended.

The entire concept of backward compatibility hinges on unused message fields remaining 0x0, allowing a 0x0 value to be treated specially when messages are extended in a backward-compatible way.

So, instead of just writing the completion register, we continue to fill the message buffer up with 0x0 (note: for a partial message involving completion, we already do this). This allows us to scale and introduce backward-compatible ABI changes, and also to work with other boot stages that may have left data in the internal memory.

While at this, be consistent and explicit with the data_reg pointer increment.

Fixes: aace66b170ce ("mailbox: Introduce TI message manager driver")
Signed-off-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
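An editor's hedged sketch of the transmit-side change; the register layout and names are assumptions drawn from the description above, not from the driver source:

    #include <linux/io.h>

    /* Sketch: after the message payload, keep writing 0x0 into the
     * remaining data registers so stale words from a previous, longer
     * message are never burst out to the remote thread. */
    static void ti_msgmgr_zero_tail_sketch(u32 __iomem *data_reg,
    				       int words_written,
    				       int total_data_words)
    {
    	int i;

    	/* explicit, consistent pointer increment */
    	for (i = words_written; i < total_data_words; i++, data_reg++)
    		writel(0x0, data_reg);
    }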
2023-06-30 | mailbox: tegra: add support for Tegra264 | Stefan Kristiansson | 1 | -2/+14

Tegra264 has a slightly different doorbell register layout than previous chips.

Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Stefan Kristiansson <stefank@nvidia.com>
Signed-off-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
2023-06-30 | dt-bindings: mailbox: tegra: Document Tegra264 HSP | Peter De Schrijver | 1 | -0/+1

Add the compatible string for the HSP block found on the Tegra264 SoC. The HSP block in Tegra264 is not register compatible with the one in Tegra194 or Tegra234, hence there is no fallback compatibility string.

Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
2023-06-30 | dt-bindings: mailbox: convert bcm2835-mbox bindings to YAML | Stefan Wahren | 2 | -26/+40

Convert the DT binding document for bcm2835-mbox from .txt to YAML.

Signed-off-by: Stefan Wahren <stefan.wahren@i2se.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
2023-06-30 | dt-bindings: mailbox: qcom: Add IPQ5018 APCS compatible | Manikanta Mylavarapu | 1 | -0/+1

Add compatible for the Qualcomm IPQ5018 APCS block.

Signed-off-by: Manikanta Mylavarapu <quic_mmanikan@quicinc.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
2023-06-30 | pid: Replace struct pid 1-element array with flex-array | Kees Cook | 3 | -4/+7

For pid namespaces, struct pid uses a dynamically sized array member, "numbers". This was implemented using the ancient 1-element fake flexible array, which has been deprecated for decades.

Replace it with a C99 flexible array, refactor the array size calculations to use struct_size(), and address elements via indexes. Note that the static initializer (which defines a single element) works as-is, and requires no special handling.

Without this, CONFIG_UBSAN_BOUNDS (and potentially CONFIG_FORTIFY_SOURCE) will trigger bounds checks:
https://lore.kernel.org/lkml/20230517-bushaltestelle-super-e223978c1ba6@brauner

Cc: Christian Brauner <brauner@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Jeff Xu <jeffxu@google.com>
Cc: Andreas Gruenbacher <agruenba@redhat.com>
Cc: Daniel Verkamp <dverkamp@chromium.org>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Jeff Xu <jeffxu@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Cc: Frederic Weisbecker <frederic@kernel.org>
Reported-by: syzbot+ac3b41786a2d0565b6d5@syzkaller.appspotmail.com
[brauner: dropped unrelated changes and remove 0 with NULL cast]
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
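A hedged before/after sketch of the struct change; "numbers" is the real member name, everything else is trimmed or assumed for illustration:

    #include <linux/overflow.h>

    struct upid;	/* one entry per pid-namespace nesting level */

    /* Before: the deprecated 1-element fake flexible array. */
    struct pid_before {
    	/* ... other members elided ... */
    	struct upid numbers[1];
    };

    /* After: a C99 flexible array member, sized with struct_size(),
     * e.g. struct_size(pid, numbers, level + 1) at allocation time. */
    struct pid_after {
    	/* ... other members elided ... */
    	struct upid numbers[];
    };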
2023-06-30 | kdb: Handle LF in the command parser | Douglas Anderson | 1 | -1/+15

The main kdb command parser only handles CR (ASCII 13 AKA '\r') today, but not LF (ASCII 10 AKA '\n'). That means that the kdb command parser can handle terminals that send just CR or that send CR+LF but can't handle terminals that send just LF.

The fact that kdb didn't handle LF in the command parser tripped up a tool I tried to use with it. Specifically, I was trying to send a command to my device to resume it from kdb using a ChromeOS tool like:

    dut-control cpu_uart_cmd:"g"

That tool only terminates lines with LF, not CR+LF.

Arguably the ChromeOS tool should be fixed. After all, officially kdb seems to be designed such that CR+LF is the official line ending transmitted over the wire and that internally a line ending is just '\n' (LF). Some evidence:

* uart_poll_put_char(), which is used by kdb, notices a '\n' and converts it to '\r\n'.
* kdb functions specifically use '\r' to get a carriage return without a newline. You can see this in the pager where kdb will write a '\r' and then write over the pager prompt.

However, all that being said there's no real harm in accepting LF as a command terminator in the kdb parser and doing so seems like it would improve compatibility. After this, I'd expect that things would work OK-ish with a remote terminal that used any of CR, CR+LF, or LF as a line ending. Someone using CR as a line ending might get some ugliness where kdb wasn't able to overwrite the last line, but basic commands would work. Someone using just LF as a line ending would probably also work OK.

A few other notes:
- It can be noted that "bash" running on an "agetty" handles LF as a line termination with no complaints.
- Historically, kdb's "pager" actually handled either CR or LF fine. A very quick inspection would make one think that kdb's pager actually could have paged down two lines instead of one for anyone using CR+LF, but this is generally avoided because of kdb_input_flush().
- Conceivably one could argue that some of this special case logic belongs in uart_poll_get_char() since uart_poll_put_char() handles the '\n' => '\r\n' conversion. I would argue that perhaps we should eventually do the opposite and move the '\n' => '\r\n' out of uart_poll_put_char(). Having that conversion at such a low level could interfere if we ever want to transfer binary data. In addition, if we truly made uart_poll_get_char() the inverse of uart_poll_put_char() it would convert back to '\n' and (ironically) kdb's parser currently only looks for '\r' to find the end of a command.

Signed-off-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20230628125612.1.I5cc6c3d916195f5bcfdf5b75d823f2037707f5dc@changeid
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
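An editor's sketch of what accepting LF in the parser means; kdb_read()'s real control flow differs, only the CR/LF equivalence (and swallowing the LF of a CR+LF pair) is the point:

    #include <linux/types.h>

    /* Sketch: accept CR, LF, or CR+LF as command terminators. An LF
     * arriving right after a CR is swallowed, so a CR+LF pair doesn't
     * look like a command plus an empty second command. */
    static bool is_cmd_terminator(char ch, char prev)
    {
    	if (ch == '\r')
    		return true;
    	if (ch == '\n')
    		return prev != '\r';
    	return false;
    }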
2023-06-30 | parisc: Refresh defconfigs | Helge Deller | 2 | -18/+83

Refresh defconfigs and enable some more graphics cards.

Signed-off-by: Helge Deller <deller@gmx.de>
2023-06-30 | parisc: irq: Add irq-related function declarations | Helge Deller | 2 | -4/+4

Move function declarations for do_cpu_irq_mask(), timer_interrupt() and ipi_interrupt() to a header file.

Signed-off-by: Helge Deller <deller@gmx.de>
2023-06-30 | parisc: Move init function declarations into header file | Helge Deller | 6 | -29/+36

Clean up the code to not have external function declarations inside the C source files. Reduces warnings when compiled with W=1.

Signed-off-by: Helge Deller <deller@gmx.de>
2023-06-30 | parisc: dino: Make dino_init() returning void | Helge Deller | 1 | -2/+1
Signed-off-by: Helge Deller <deller@gmx.de>
2023-06-30 | parisc: lba_pci: Mark two variables __maybe_unused | Helge Deller | 1 | -2/+2
Signed-off-by: Helge Deller <deller@gmx.de>
2023-06-30 | parisc: unaligned: Include header file to avoid missing prototype warnings | Helge Deller | 1 | -0/+1
Signed-off-by: Helge Deller <deller@gmx.de>
2023-06-30 | parisc: signal: Mark do_notify_resume() and sys_rt_sigreturn() asmlinkage | Helge Deller | 1 | -5/+5
Signed-off-by: Helge Deller <deller@gmx.de>
2023-06-30 | parisc: unwind: Mark start and stop variables __maybe_unused | Helge Deller | 1 | -2/+3
Signed-off-by: Helge Deller <deller@gmx.de>
2023-06-30 | parisc: init: Drop unused variable end_paddr | Helge Deller | 1 | -2/+0
Signed-off-by: Helge Deller <deller@gmx.de>