2023-06-25 | tools/testing/cxl: Use named effects for the Command Effect Log | Vishal Verma | 1 file, -9/+23
As more emulated mailbox commands are added to cxl_test, it is a pain point to look up command effect numbers for each effect. Replace the bare numbers in the mock driver with an enum that lists all possible effects. Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Jonathan Cameron <Jonathan.Cameron@Huawei.com> Cc: Russ Weight <russell.h.weight@intel.com> Cc: Alison Schofield <alison.schofield@intel.com> Cc: Ira Weiny <ira.weiny@intel.com> Cc: Dave Jiang <dave.jiang@intel.com> Cc: Ben Widawsky <bwidawsk@kernel.org> Cc: Dan Williams <dan.j.williams@intel.com> Suggested-by: Jonathan Cameron <Jonathan.Cameron@Huawei.com> Reviewed-by: Alison Schofield <alison.schofield@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vishal Verma <vishal.l.verma@intel.com> Link: https://lore.kernel.org/r/20230602-vv-fw_update-v4-3-c6265bd7343b@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
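For reference, a minimal sketch of what such an enum can look like. The names and bit positions below are assumptions drawn from the Command Effects Log description in the CXL 3.0 spec, not the actual cxl_test code:

  #include <stdint.h>

  /* Hypothetical names; bit positions follow the Command Effects Log bit
   * assignments the CXL 3.0 spec describes (a sketch, not the enum the
   * patch adds). */
  enum cxl_command_effect {
          CONFIG_CHANGE_COLD_RESET = 0,
          CONFIG_CHANGE_IMMEDIATE,
          DATA_CHANGE_IMMEDIATE,
          POLICY_CHANGE_IMMEDIATE,
          LOG_CHANGE_IMMEDIATE,
          SECURITY_CHANGE_IMMEDIATE,
          BACKGROUND_OP,
          SECONDARY_MBOX_SUPPORTED,
  };

  #define CXL_CMD_EFFECT(e)       ((uint16_t)1 << (e))

  /* e.g. Inject Poison reads as "Immediate Data Change" instead of a bare 0x4 */
  static const uint16_t inject_poison_effects = CXL_CMD_EFFECT(DATA_CHANGE_IMMEDIATE);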
2023-06-25 | tools/testing/cxl: Fix command effects for inject/clear poison | Vishal Verma | 1 file, -2/+2
The CXL spec (3.0, section 8.2.9.8.4) lists Inject Poison and Clear Poison as having the effects of "Immediate Data Change". Fix this in the mock driver so that the command effect log is populated correctly.

Fixes: 371c16101ee8 ("tools/testing/cxl: Mock the Inject Poison mailbox command")
Cc: Alison Schofield <alison.schofield@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Alison Schofield <alison.schofield@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Link: https://lore.kernel.org/r/20230602-vv-fw_update-v4-2-c6265bd7343b@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-06-25 | cxl: add a firmware update mechanism using the sysfs firmware loader | Vishal Verma | 5 files, -0/+406
The sysfs based firmware loader mechanism was created to easily allow userspace to upload firmware images to FPGA cards. This also happens to be pretty suitable to create a user-initiated but kernel-controlled firmware update mechanism for CXL devices, using the CXL specified mailbox commands. Since firmware update commands can be long-running, and can be processed in the background by the endpoint device, it is desirable to have the ability to chunk the firmware transfer down to smaller pieces, so that one operation does not monopolize the mailbox, locking out any other long running background commands entirely - e.g. security commands like 'sanitize' or poison scanning operations. The firmware loader mechanism allows a natural way to perform this chunking, as after each mailbox command, that is restricted to the maximum mailbox payload size, the cxl memdev driver relinquishes control back to the fw_loader system and awaits the next chunk of data to transfer. This opens opportunities for other background commands to access the mailbox and send their own slices of background commands. Add the necessary helpers and state tracking to be able to perform the 'Get FW Info', 'Transfer FW', and 'Activate FW' mailbox commands as described in the CXL spec. Wire these up to the firmware loader callbacks, and register with that system to create the memX/firmware/ sysfs ABI. Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Jonathan Cameron <Jonathan.Cameron@Huawei.com> Cc: Russ Weight <russell.h.weight@intel.com> Cc: Alison Schofield <alison.schofield@intel.com> Cc: Ira Weiny <ira.weiny@intel.com> Cc: Dave Jiang <dave.jiang@intel.com> Cc: Ben Widawsky <bwidawsk@kernel.org> Cc: Dan Williams <dan.j.williams@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vishal Verma <vishal.l.verma@intel.com> Link: https://lore.kernel.org/r/20230602-vv-fw_update-v4-1-c6265bd7343b@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
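As a rough illustration of the chunking described above, a sketch in plain C; the helper names are hypothetical, and the real driver works in the offsets and units defined by the Transfer FW mailbox command:

  #include <stddef.h>

  /* One Transfer FW mailbox command per chunk, each no larger than the
   * device's maximum mailbox payload, so other background commands can
   * interleave between chunks. */
  static size_t fw_num_chunks(size_t fw_size, size_t max_payload)
  {
          return (fw_size + max_payload - 1) / max_payload;
  }

  static size_t fw_chunk_len(size_t fw_size, size_t max_payload, size_t chunk)
  {
          size_t offset = chunk * max_payload;
          size_t remaining = fw_size - offset;

          return remaining < max_payload ? remaining : max_payload;
  }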
2023-06-25 | cxl/test: Add Secure Erase opcode support | Davidlohr Bueso | 1 file, -0/+27
Add support to emulate the CXL "Secure Erase" operation.

Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
Link: https://lore.kernel.org/r/20230612181038.14421-8-dave@stgolabs.net
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-06-25 | cxl/mem: Support Secure Erase | Davidlohr Bueso | 4 files, -1/+44
Implement support for the non-pmem exclusive secure erase, per the CXL specs. Create a write-only 'security/erase' sysfs file to perform the requested operation. As with sanitization, this requires the device to be offline and thus to have no active HPA-DPA decoding.

The expectation is that userspace can use it such as:

  cxl disable-memdev memX
  echo 1 > /sys/bus/cxl/devices/memX/security/erase
  cxl enable-memdev memX

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fan Ni <fan.ni@samsung.com>
Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
Link: https://lore.kernel.org/r/20230612181038.14421-7-dave@stgolabs.net
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-06-25 | cxl/test: Add Sanitize opcode support | Davidlohr Bueso | 1 file, -0/+25
Add support to emulate the "Sanitize" operation, without incurring an actual background operation.

Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
Link: https://lore.kernel.org/r/20230612181038.14421-6-dave@stgolabs.net
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-06-25 | cxl/mem: Wire up Sanitization support | Davidlohr Bueso | 5 files, -2/+151
Implement support for CXL 3.0 8.2.9.8.5.1 Sanitize. This is done by adding a 'security/sanitize' memdev sysfs file to trigger the operation and by extending the status file to make it poll(2)-capable for completion. Unlike all other background commands, this is the only operation that is special and monopolizes the device for long periods of time.

In addition to the traditional pmem security requirements, all regions must also be offline in order to perform the operation. This permits avoiding explicit global CPU cache management, relying instead on the implicit cache management when a region transitions between CXL_CONFIG_ACTIVE and CXL_CONFIG_COMMIT.

The expectation is that userspace can use it such as:

  cxl disable-memdev memX
  echo 1 > /sys/bus/cxl/devices/memX/security/sanitize
  cxl wait-sanitize memX
  cxl enable-memdev memX

Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
Link: https://lore.kernel.org/r/20230612181038.14421-5-dave@stgolabs.net
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-06-25 | cxl/mbox: Add sanitization handling machinery | Davidlohr Bueso | 3 files, -3/+91
Sanitization is by definition a device-monopolizing operation, and thus the timeslicing rules for other background commands do not apply. As such handle this special case asynchronously and return immediately. Subsequent changes will allow completion to be pollable from userspace via a sysfs file interface. For devices that don't support interrupts for notifying background command completion, self-poll with the caveat that the poller can be out of sync with the ready hardware, and therefore care must be taken to not allow any new commands to go through until the poller sees the hw completion. The poller takes the mbox_mutex to stabilize the flagging, minimizing any runtime overhead in the send path to check for 'sanitize_tmo' for uncommon poll scenarios. The irq case is much simpler as hardware will serialize/error appropriately. Reviewed-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Davidlohr Bueso <dave@stgolabs.net> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20230612181038.14421-4-dave@stgolabs.net Signed-off-by: Dan Williams <dan.j.williams@intel.com>
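A loose sketch of the self-polling idea for the no-interrupt case; every name here (struct my_mbox, sanitize_pending, my_bg_op_complete, and so on) is made up for illustration and does not reflect the actual driver code:

  struct my_mbox {
          struct mutex mbox_mutex;                /* also taken by the send path */
          struct delayed_work poll_dwork;
          bool sanitize_pending;                  /* gates new commands while set */
          struct kernfs_node *sanitize_sd;        /* sysfs node to notify pollers */
  };

  static void my_sanitize_poll(struct work_struct *work)
  {
          struct my_mbox *mb = container_of(to_delayed_work(work),
                                            struct my_mbox, poll_dwork);

          mutex_lock(&mb->mbox_mutex);            /* stabilize the flag vs. senders */
          if (my_bg_op_complete(mb)) {            /* stand-in for the hw status check */
                  mb->sanitize_pending = false;   /* new commands may flow again */
                  sysfs_notify_dirent(mb->sanitize_sd);
          } else {
                  schedule_delayed_work(&mb->poll_dwork, HZ);
          }
          mutex_unlock(&mb->mbox_mutex);
  }

The send path would check sanitize_pending under the same mbox_mutex and refuse new commands while it is set.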
2023-06-25 | cxl/mem: Introduce security state sysfs file | Davidlohr Bueso | 4 files, -0/+56
Add a read-only sysfs file to display the security state of a device (currently only pmem): /sys/bus/cxl/devices/memX/security/state This introduces a cxl_security_state structure that is to be the placeholder for common CXL security features. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Fan Ni <fan.ni@samsung.com> Signed-off-by: Davidlohr Bueso <dave@stgolabs.net> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20230612181038.14421-3-dave@stgolabs.net Signed-off-by: Dan Williams <dan.j.williams@intel.com>
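The general shape of such a read-only attribute, as a sketch with placeholder names (not the actual cxl code):

  /* Placeholder struct and helper; only the DEVICE_ATTR_RO/sysfs_emit
   * pattern itself is the point of this sketch. */
  static ssize_t state_show(struct device *dev,
                            struct device_attribute *attr, char *buf)
  {
          struct my_memdev *md = dev_get_drvdata(dev);

          return sysfs_emit(buf, "%s\n", my_security_state_str(md));
  }
  static DEVICE_ATTR_RO(state);   /* appears as .../security/state */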
2023-06-25 | cxl/mbox: Allow for IRQ_NONE case in the isr | Davidlohr Bueso | 1 file, -2/+4
For cases when the mailbox background operation is not complete, do not "handle" the interrupt, as it was not from this device. Furthermore, there are no racy scenarios such as the hw being out of sync with the driver and starting a new background op behind its back.

Reported-by: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
Fixes: ccadf1310fb ("cxl/mbox: Add background cmd handling machinery")
Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20230612181038.14421-2-dave@stgolabs.net
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
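A minimal sketch of that isr shape, with hypothetical names: when the status register says no background operation has completed, the interrupt cannot have been ours on a shared line, so report IRQ_NONE:

  static irqreturn_t my_mbox_irq(int irq, void *data)
  {
          struct my_mbox *mb = data;

          if (!my_bg_op_complete(mb))     /* hypothetical status-register check */
                  return IRQ_NONE;        /* not from this device */

          complete(&mb->bg_done);         /* wake whoever waits on the background op */
          return IRQ_HANDLED;
  }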
2023-06-25Revert "cxl/port: Enable the HDM decoder capability for switch ports"Dan Williams5-49/+9
commit eb0764b822b9 ("cxl/port: Enable the HDM decoder capability for switch ports") ...was added on the observation of CXL memory not being accessible after setting up a region on a "cold-plugged" device. A "cold-plugged" CXL device is one that was not present at boot, so platform-firmware/BIOS has no chance to set it up. While it is true that the debug found the enable bit clear in the host-bridge's instance of the global control register (CXL 3.0 8.2.4.19.2 CXL HDM Decoder Global Control Register), that bit is described as: "This bit is only applicable to CXL.mem devices and shall return 0 on CXL Host Bridges and Upstream Switch Ports." So it is meant to be zero, and further testing confirmed that this "fix" had no effect on the failure. Revert it, and be more vigilant about proposed fixes in the future. Since the original copied stable@, flag this revert for stable@ as well. Cc: <stable@vger.kernel.org> Fixes: eb0764b822b9 ("cxl/port: Enable the HDM decoder capability for switch ports") Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/168685882012.3475336.16733084892658264991.stgit@dwillia2-xfh.jf.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-06-25 | cxl/memdev: Formalize endpoint port linkage | Dan Williams | 4 files, -5/+8
Move the endpoint port that the cxl_mem driver establishes from drvdata to a first class attribute. This is in preparation for device-memory drivers reusing the CXL core for memory region management. Those drivers need a type-safe method to retrieve their CXL port linkage. Leave drvdata for private usage of the cxl_mem driver not external consumers of a 'struct cxl_memdev' object. Reviewed-by: Fan Ni <fan.ni@samsung.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/168679264292.3436160.3901392135863405807.stgit@dwillia2-xfh.jf.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-06-25 | cxl/pci: Unconditionally unmask 256B Flit errors | Dan Williams | 1 file, -16/+2
The current check for 256B Flit mode is incomplete and unnecessary. It is incomplete because it fails to consider the link speed, or check for CXL link capabilities. It is unnecessary because unconditionally unmasking 256B Flit errors is a nop when 256B Flit operation is not available. Remove this check in preparation for creating a cxl_probe_link() helper to centralize this detection. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/168679263124.3436160.6228910132469454346.stgit@dwillia2-xfh.jf.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-06-25 | cxl/region: Manage decoder target_type at decoder-attach time | Dan Williams | 1 file, -0/+12
Switch-level (mid-level) decoders between the platform root and an endpoint can dynamically switch modes between HDM-H and HDM-D[B] depending on which region they target. Use the region type to fix up each decoder that gets allocated to map the given region.

Note that endpoint decoders are meant to determine the region type, so warn if those ever need to be fixed up, but since it is possible to continue, do so.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/168679262543.3436160.13053831955768440312.stgit@dwillia2-xfh.jf.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-06-25 | cxl/hdm: Default CXL_DEVTYPE_DEVMEM decoders to CXL_DECODER_DEVMEM | Dan Williams | 2 files, -10/+27
In preparation for device-memory region creation, arrange for decoders of CXL_DEVTYPE_DEVMEM memdevs to default to CXL_DECODER_DEVMEM for their target type. Revisit this if a device ever shows up that wants to offer mixed HDM-H (Host-Only Memory) and HDM-DB support, or a CXL_DEVTYPE_DEVMEM device that supports HDM-H.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/168679261945.3436160.11673393474107374595.stgit@dwillia2-xfh.jf.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-06-25 | cxl/port: Rename CXL_DECODER_{EXPANDER, ACCELERATOR} => {HOSTONLYMEM, DEVMEM} | Dan Williams | 6 files, -15/+16
In preparation for support for HDM-D and HDM-DB configuration (device-memory, and device-memory with back-invalidate), rename the current type designators to use HOSTONLYMEM and DEVMEM as a suffix. HDM-DB can be supported by devices that are not accelerators, so DEVMEM is a more generic term for that case.

Fix up one location where this type value was open coded.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/168679261369.3436160.7042443847605280593.stgit@dwillia2-xfh.jf.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-06-25 | cxl/memdev: Make mailbox functionality optional | Dan Williams | 3 files, -1/+28
In support of the Linux CXL core scaling for a wider set of CXL devices, allow for the creation of memdevs with some memory device capabilities disabled. Specifically, allow for CXL devices outside of those claiming to be compliant with the generic CXL memory device class code, like vendor specific Type-2/3 devices that host CXL.mem. This implies, allow for the creation of memdevs that only support component-registers, not necessarily memory-device-registers (like mailbox registers). A memdev derived from a CXL endpoint that does not support generic class code expectations is tagged "CXL_DEVTYPE_DEVMEM", while a memdev derived from a class-code compliant endpoint is tagged "CXL_DEVTYPE_CLASSMEM". The primary assumption of a CXL_DEVTYPE_DEVMEM memdev is that it optionally may not host a mailbox. Disable the command passthrough ioctl for memdevs that are not CXL_DEVTYPE_CLASSMEM, and return empty strings from memdev attributes associated with data retrieved via the class-device-standard IDENTIFY command. Note that empty strings were chosen over attribute visibility to maintain compatibility with shipping versions of cxl-cli that expect those attributes to always be present. Once cxl-cli has dropped that requirement this workaround can be deprecated. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/168679260782.3436160.7587293613945445365.stgit@dwillia2-xfh.jf.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-06-25 | cxl/mbox: Move mailbox related driver state to its own data structure | Dan Williams | 8 files, -290/+336
'struct cxl_dev_state' makes too many assumptions about the capabilities of a CXL device. In particular it assumes a CXL device has a mailbox and all of the infrastructure and state that comes along with that. In preparation for supporting accelerator / Type-2 devices that may not have a mailbox and in general maintain a minimal core context structure, make mailbox functionality a super-set of 'struct cxl_dev_state' with 'struct cxl_memdev_state'. With this reorganization it allows for CXL devices that support HDM decoder mapping, but not other general-expander / Type-3 capabilities, to only enable that subset without the rest of the mailbox infrastructure coming along for the ride. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/168679260240.3436160.15520641540463704524.stgit@dwillia2-xfh.jf.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
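The layering can be pictured as below; field and type names are illustrative, not the actual cxl definitions. The memdev state embeds the core state, so mailbox-aware code can recover its superset with container_of():

  struct core_dev_state {                 /* minimal context every CXL device has */
          struct device *dev;
          /* register mapping / HDM decoder state lives here */
  };

  struct memdev_state {                   /* superset for mailbox-capable devices */
          struct core_dev_state cxlds;
          size_t payload_size;
          struct mutex mbox_mutex;
          /* ... the rest of the mailbox machinery ... */
  };

  static inline struct memdev_state *to_memdev_state(struct core_dev_state *cxlds)
  {
          return container_of(cxlds, struct memdev_state, cxlds);
  }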
2023-06-25 | cxl: Remove leftover attribute documentation in 'struct cxl_dev_state' | Dan Williams | 1 file, -1/+0
commit 14d788740774 ("cxl/mem: Consolidate CXL DVSEC Range enumeration in the core") ...removed @info from 'struct cxl_dev_state', but neglected to remove the corresponding kernel-doc entry. Complete the removal. Reported-by: Jonathan Cameron <Jonathan.Cameron@Huawei.com> Closes: http://lore.kernel.org/r/20230606121054.000069e1@Huawei.com Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/168679259703.3436160.12583306507362357946.stgit@dwillia2-xfh.jf.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-06-25 | cxl: Fix kernel-doc warnings | Dan Williams | 1 file, -3/+3
After Jonathan noticed [1] that 'struct cxl_dev_state' had a kernel-doc entry without a corresponding struct attribute, I ran the kernel-doc script to see what else might be broken. Fix these warnings:

  drivers/cxl/cxlmem.h:199: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst * Event Interrupt Policy
  drivers/cxl/cxlmem.h:224: warning: Function parameter or member 'buf' not described in 'cxl_event_state'
  drivers/cxl/cxlmem.h:224: warning: Function parameter or member 'log_lock' not described in 'cxl_event_state'

Note that scripts/kernel-doc only finds missing kernel-doc entries. It does not warn on too many kernel-doc entries, i.e. it did not catch the fact that @info refers to a not present member.

Link: http://lore.kernel.org/r/20230606121054.000069e1@Huawei.com [1]
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/168679259170.3436160.3686460404739136336.stgit@dwillia2-xfh.jf.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
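For context, the shape of a kernel-doc comment that would satisfy warnings of this kind; the member descriptions below are assumptions for illustration, not the actual cxlmem.h text:

  /**
   * struct cxl_event_state_example - sketch of a well-formed kernel-doc block
   * @buf:      payload buffer used when fetching event records (assumed purpose)
   * @log_lock: serializes access to the event logs (assumed purpose)
   */
  struct cxl_event_state_example {
          void *buf;
          struct mutex log_lock;
  };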
2023-06-25 | tools/testing/cxl: Remove unused @cxlds argument | Dan Williams | 1 file, -47/+39
In preparation for plumbing a 'struct cxl_memdev_state' as a superset of a 'struct cxl_dev_state' cleanup the usage of @cxlds in the unit test infrastructure. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/168679258640.3436160.7641308222525246728.stgit@dwillia2-xfh.jf.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-06-25 | cxl/regs: Clarify when a 'struct cxl_register_map' is input vs output | Dan Williams | 2 files, -6/+6
The @map parameter to cxl_probe_X_registers() is filled in with the mapping parameters of the register block. The @map parameter to cxl_map_X_registers() only reads that information to perform the mapping. Mark @map const for cxl_map_X_registers() to clarify that it is only an input to those helpers. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/168679258103.3436160.4941603739448763855.stgit@dwillia2-xfh.jf.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-06-11 | Linux 6.4-rc6 | Linus Torvalds | 1 file, -1/+1
2023-06-09 | s390/dasd: Use correct lock while counting channel queue length | Jan Höppner | 1 file, -2/+2
The lock around counting the channel queue length in the BIODASDINFO ioctl was incorrectly changed to the dasd_block->queue_lock with commit 583d6535cb9d ("dasd: remove dead code"). This can lead to endless list iterations and a subsequent crash. The queue_lock is supposed to be used only for queue lists belonging to dasd_block. For dasd_device related queue lists the ccwdev lock must be used. Fix the mentioned issues by correctly using the ccwdev lock instead of the queue lock. Fixes: 583d6535cb9d ("dasd: remove dead code") Cc: stable@vger.kernel.org # v5.0+ Signed-off-by: Jan Höppner <hoeppner@linux.ibm.com> Reviewed-by: Stefan Haberland <sth@linux.ibm.com> Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Link: https://lore.kernel.org/r/20230609153750.1258763-2-sth@linux.ibm.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-06-09 | tools/virtio: use canonical ftrace path | Ross Zwisler | 2 files, -5/+9
The canonical location for the tracefs filesystem is at /sys/kernel/tracing. But, from Documentation/trace/ftrace.rst: Before 4.1, all ftrace tracing control files were within the debugfs file system, which is typically located at /sys/kernel/debug/tracing. For backward compatibility, when mounting the debugfs file system, the tracefs file system will be automatically mounted at: /sys/kernel/debug/tracing A few spots in tools/virtio still refer to this older debugfs path, so let's update them to avoid confusion. Signed-off-by: Ross Zwisler <zwisler@google.com> Message-Id: <20230215223350.2658616-6-zwisler@google.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Mukesh Ojha <quic_mojha@quicinc.com>
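A small userspace sketch of the path selection this implies; the helper name is made up, and probing via trace_marker is just one convenient writable file under the tracing directory:

  #include <unistd.h>

  static const char *trace_dir(void)
  {
          /* Prefer the canonical tracefs mount ... */
          if (access("/sys/kernel/tracing/trace_marker", W_OK) == 0)
                  return "/sys/kernel/tracing";
          /* ... fall back to the legacy debugfs location for older setups. */
          return "/sys/kernel/debug/tracing";
  }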
2023-06-09 | vhost_vdpa: support PACKED when setting-getting vring_base | Shannon Nelson | 1 file, -4/+17
Use the right structs for PACKED or split vqs when setting and getting the vring base. Fixes: 4c8cf31885f6 ("vhost: introduce vDPA-based backend") Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Message-Id: <20230424225031.18947-4-shannon.nelson@amd.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com>
2023-06-09 | vhost: support PACKED when setting-getting vring_base | Shannon Nelson | 2 files, -7/+19
Use the right structs for PACKED or split vqs when setting and getting the vring base. Fixes: 4c8cf31885f6 ("vhost: introduce vDPA-based backend") Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Message-Id: <20230424225031.18947-3-shannon.nelson@amd.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com>
2023-06-08 | drm/msm/a6xx: initialize GMU mutex earlier | Dmitry Baryshkov | 2 files, -2/+2
Move GMU mutex initialization earlier to make sure that it is always initialized. a6xx_destroy can be called from ther failure path before GMU initialization. This fixes the following backtrace: ------------[ cut here ]------------ DEBUG_LOCKS_WARN_ON(lock->magic != lock) WARNING: CPU: 0 PID: 58 at kernel/locking/mutex.c:582 __mutex_lock+0x1ec/0x3d0 Modules linked in: CPU: 0 PID: 58 Comm: kworker/u16:1 Not tainted 6.3.0-rc5-00155-g187c06436519 #565 Hardware name: Qualcomm Technologies, Inc. SM8350 HDK (DT) Workqueue: events_unbound deferred_probe_work_func pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--) pc : __mutex_lock+0x1ec/0x3d0 lr : __mutex_lock+0x1ec/0x3d0 sp : ffff800008993620 x29: ffff800008993620 x28: 0000000000000002 x27: ffff47b253c52800 x26: 0000000001000606 x25: ffff47b240bb2810 x24: fffffffffffffff4 x23: 0000000000000000 x22: ffffc38bba15ac14 x21: 0000000000000002 x20: ffff800008993690 x19: ffff47b2430cc668 x18: fffffffffffe98f0 x17: 6f74616c75676572 x16: 20796d6d75642067 x15: 0000000000000038 x14: 0000000000000000 x13: ffffc38bbba050b8 x12: 0000000000000666 x11: 0000000000000222 x10: ffffc38bbba603e8 x9 : ffffc38bbba050b8 x8 : 00000000ffffefff x7 : ffffc38bbba5d0b8 x6 : 0000000000000222 x5 : 000000000000bff4 x4 : 40000000fffff222 x3 : 0000000000000000 x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff47b240cb1880 Call trace: __mutex_lock+0x1ec/0x3d0 mutex_lock_nested+0x2c/0x38 a6xx_destroy+0xa0/0x138 a6xx_gpu_init+0x41c/0x618 adreno_bind+0x188/0x290 component_bind_all+0x118/0x248 msm_drm_bind+0x1c0/0x670 try_to_bring_up_aggregate_device+0x164/0x1d0 __component_add+0xa8/0x16c component_add+0x14/0x20 dsi_dev_attach+0x20/0x2c dsi_host_attach+0x9c/0x144 devm_mipi_dsi_attach+0x34/0xac lt9611uxc_attach_dsi.isra.0+0x84/0xfc lt9611uxc_probe+0x5b8/0x67c i2c_device_probe+0x1ac/0x358 really_probe+0x148/0x2ac __driver_probe_device+0x78/0xe0 driver_probe_device+0x3c/0x160 __device_attach_driver+0xb8/0x138 bus_for_each_drv+0x84/0xe0 __device_attach+0x9c/0x188 device_initial_probe+0x14/0x20 bus_probe_device+0xac/0xb0 deferred_probe_work_func+0x8c/0xc8 process_one_work+0x2bc/0x594 worker_thread+0x228/0x438 kthread+0x108/0x10c ret_from_fork+0x10/0x20 irq event stamp: 299345 hardirqs last enabled at (299345): [<ffffc38bb9ba61e4>] put_cpu_partial+0x1c8/0x22c hardirqs last disabled at (299344): [<ffffc38bb9ba61dc>] put_cpu_partial+0x1c0/0x22c softirqs last enabled at (296752): [<ffffc38bb9890434>] _stext+0x434/0x4e8 softirqs last disabled at (296741): [<ffffc38bb989669c>] ____do_softirq+0x10/0x1c ---[ end trace 0000000000000000 ]--- Fixes: 4cd15a3e8b36 ("drm/msm/a6xx: Make GPU destroy a bit safer") Cc: Douglas Anderson <dianders@chromium.org> Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Reviewed-by: Douglas Anderson <dianders@chromium.org> Patchwork: https://patchwork.freedesktop.org/patch/531540/ Signed-off-by: Rob Clark <robdclark@chromium.org>
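The ordering fix can be sketched as follows, with made-up names; the point is only that the mutex is initialized before any step that can fail and reach the destroy callback:

  static int my_gpu_init(struct my_gpu *gpu)
  {
          mutex_init(&gpu->gmu_lock);     /* init before anything that can fail */

          if (my_gmu_probe(gpu)) {
                  my_gpu_destroy(gpu);    /* may take gpu->gmu_lock; now always valid */
                  return -ENODEV;
          }
          return 0;
  }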
2023-06-08 | drm/msm/dp: enable HDP plugin/unplugged interrupts at hpd_enable/disable | Kuogee Hsieh | 3 files, -54/+35
The internal_hpd flag is set to true by dp_bridge_hpd_enable() and set to false by dp_bridge_hpd_disable() to handle the case where the HPD GPIO is pinmuxed into the DP controller. HPD related interrupts can not be enabled until internal_hpd is set to true. In the current implementation, dp_display_config_hpd() initializes the DP host controller first and then enables HPD related interrupts if internal_hpd was true at that time. Making the interrupt enable depend on internal_hpd status may leave the system with the DP host driver running but without HPD related interrupts enabled, which prevents an external display from being detected. Eliminate this dependency by enabling/disabling HPD related interrupts directly in dp_bridge_hpd_enable/disable(), regardless of internal_hpd status.

Changes in V3:
-- dp_catalog_ctrl_hpd_enable() and dp_catalog_ctrl_hpd_disable()
-- rewording commit text
Changes in V4:
-- replace dp_display_config_hpd() with dp_display_host_start()
-- move enable_irq() at dp_display_host_start();
Changes in V5:
-- replace dp_display_host_start() with dp_display_host_init()
Changes in V6:
-- squash remove enable_irq() and disable_irq()

Fixes: cd198caddea7 ("drm/msm/dp: Rely on hpd_enable/disable callbacks")
Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>
Tested-by: Leonard Lausen <leonard@lausen.nl> # on sc7180 lazor
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Reviewed-by: Bjorn Andersson <andersson@kernel.org>
Tested-by: Bjorn Andersson <andersson@kernel.org>
Reviewed-by: Abhinav Kumar <quic_abhinavk@quicinc.com>
Link: https://lore.kernel.org/r/1684878756-17830-1-git-send-email-quic_khsieh@quicinc.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2023-06-08 | vhost: Fix worker hangs due to missed wake up calls | Mike Christie | 2 files, -7/+11
We can race where we have added work to the work_list, but vhost_task_fn has passed that check but not yet set us into TASK_INTERRUPTIBLE. wake_up_process will see us in TASK_RUNNING and just return.

This bug was introduced in commit f9010dbdce91 ("fork, vhost: Use CLONE_THREAD to fix freezer/ps regression") when I moved the setting of TASK_INTERRUPTIBLE to simplify the code and avoid get_signal from logging warnings about being in the wrong state. This moves the setting of TASK_INTERRUPTIBLE back to before we test if we need to stop the task to avoid a possible race there as well. We then have vhost_worker set TASK_RUNNING if it finds work, similar to before.

Fixes: f9010dbdce91 ("fork, vhost: Use CLONE_THREAD to fix freezer/ps regression")
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230607192338.6041-3-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
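The wake-up-safe ordering reads roughly like the sketch below (not the actual vhost code; names are illustrative). Publishing TASK_INTERRUPTIBLE before re-checking the work list closes the window in which wake_up_process() sees TASK_RUNNING and returns without a wake-up:

  static int my_worker_fn(void *data)
  {
          struct my_worker *worker = data;

          for (;;) {
                  set_current_state(TASK_INTERRUPTIBLE);  /* publish state first */

                  if (!llist_empty(&worker->work_list)) {
                          __set_current_state(TASK_RUNNING);
                          my_run_work(worker);            /* found work, stay awake */
                          continue;
                  }
                  if (my_should_stop(worker)) {
                          __set_current_state(TASK_RUNNING);
                          break;
                  }
                  schedule();                             /* sleep until woken */
          }
          return 0;
  }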
2023-06-08 | vhost: Fix crash during early vhost_transport_send_pkt calls | Mike Christie | 2 files, -34/+18
If userspace does VHOST_VSOCK_SET_GUEST_CID before VHOST_SET_OWNER we can race where:

1. thread0 calls vhost_transport_send_pkt -> vhost_work_queue
2. thread1 does VHOST_SET_OWNER which calls vhost_worker_create.
3. vhost_worker_create will set the dev->worker pointer before setting the worker->vtsk pointer.
4. thread0's vhost_work_queue will see the dev->worker pointer is set and try to call vhost_task_wake using the not yet set worker->vtsk pointer.
5. We then crash since vtsk is NULL.

Before commit 6e890c5d5021 ("vhost: use vhost_tasks for worker threads"), we only had the worker pointer so we could just check it to see if VHOST_SET_OWNER has been done. After that commit we have the vhost_worker and vhost_task pointer, so we can now hit the bug above.

This patch embeds the vhost_worker in the vhost_dev and moves the work list initialization back to vhost_dev_init, so we can just check the worker.vtsk pointer to check if VHOST_SET_OWNER has been done like before.

Fixes: 6e890c5d5021 ("vhost: use vhost_tasks for worker threads")
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230607192338.6041-2-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reported-by: syzbot+d0d442c22fa8db45ff0e@syzkaller.appspotmail.com
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
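Sketch of the embedding, with hypothetical field names: once the worker is embedded in the device and the work list is initialized at vhost_dev_init() time, the vtsk pointer alone signals whether VHOST_SET_OWNER has completed:

  struct my_worker {
          struct vhost_task *vtsk;        /* NULL until VHOST_SET_OWNER */
          struct llist_head work_list;    /* valid from vhost_dev_init() onward */
  };

  struct my_dev {
          struct my_worker worker;        /* embedded, not a pointer */
  };

  static bool my_can_queue_work(struct my_dev *dev)
  {
          /* pairs with the owner-side store that publishes vtsk */
          return READ_ONCE(dev->worker.vtsk) != NULL;
  }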
2023-06-08 | vhost_net: revert upend_idx only on retriable error | Andrey Smetanin | 1 file, -3/+8
Fix a possible virtqueue used-buffer leak and the corresponding hang in the case of a temporary -EIO from sendmsg(), which is produced by the tun driver while the backend device is not up.

In the case of a non-retriable error with zcopy, do not revert upend_idx; pass the packet data on (that is, update used_idx in the corresponding vhost_zerocopy_signal_used()) as if the packet data had been transferred successfully.

v2: set vq->heads[ubuf->desc].len equal to VHOST_DMA_DONE_LEN in case of fake successful transmit.

Signed-off-by: Andrey Smetanin <asmetanin@yandex-team.ru>
Message-Id: <20230424204411.24888-1-asmetanin@yandex-team.ru>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Andrey Smetanin <asmetanin@yandex-team.ru>
Acked-by: Jason Wang <jasowang@redhat.com>
2023-06-08 | vhost_vdpa: tell vqs about the negotiated | Shannon Nelson | 1 file, -0/+13
As is done in the net, iscsi, and vsock vhost support, let the vdpa vqs know about the features that have been negotiated. This allows vhost to more safely make decisions based on the features, such as when using PACKED vs split queues. Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Acked-by: Jason Wang <jasowang@redhat.com> Message-Id: <20230424225031.18947-2-shannon.nelson@amd.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-06-08 | vdpa/mlx5: Fix hang when cvq commands are triggered during device unregister | Dragos Tatulea | 1 file, -1/+1
Currently the vdpa device is unregistered after the workqueue that processes vq commands is disabled. However, the device unregister process can still send commands to the cvq (a vlan delete for example) which leads to a hang because the handing workqueue has been disabled and the command never finishes: [ 2263.095764] rcu: INFO: rcu_sched self-detected stall on CPU [ 2263.096307] rcu: 9-....: (5250 ticks this GP) idle=dac4/1/0x4000000000000000 softirq=111009/111009 fqs=2544 [ 2263.097154] rcu: (t=5251 jiffies g=393549 q=347 ncpus=10) [ 2263.097648] CPU: 9 PID: 94300 Comm: kworker/u20:2 Not tainted 6.3.0-rc6_for_upstream_min_debug_2023_04_14_00_02 #1 [ 2263.098535] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014 [ 2263.099481] Workqueue: mlx5_events mlx5_vhca_state_work_handler [mlx5_core] [ 2263.100143] RIP: 0010:virtnet_send_command+0x109/0x170 [ 2263.100621] Code: 1d df f5 ff 85 c0 78 5c 48 8b 7b 08 e8 d0 c5 f5 ff 84 c0 75 11 eb 22 48 8b 7b 08 e8 01 b7 f5 ff 84 c0 75 15 f3 90 48 8b 7b 08 <48> 8d 74 24 04 e8 8d c5 f5 ff 48 85 c0 74 de 48 8b 83 f8 00 00 00 [ 2263.102148] RSP: 0018:ffff888139cf36e8 EFLAGS: 00000246 [ 2263.102624] RAX: 0000000000000000 RBX: ffff888166bea940 RCX: 0000000000000001 [ 2263.103244] RDX: 0000000000000000 RSI: ffff888139cf36ec RDI: ffff888146763800 [ 2263.103864] RBP: ffff888139cf3710 R08: ffff88810d201000 R09: 0000000000000000 [ 2263.104473] R10: 0000000000000002 R11: 0000000000000003 R12: 0000000000000002 [ 2263.105082] R13: 0000000000000002 R14: ffff888114528400 R15: ffff888166bea000 [ 2263.105689] FS: 0000000000000000(0000) GS:ffff88852cc80000(0000) knlGS:0000000000000000 [ 2263.106404] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 2263.106925] CR2: 00007f31f394b000 CR3: 000000010615b006 CR4: 0000000000370ea0 [ 2263.107542] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 2263.108163] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 2263.108769] Call Trace: [ 2263.109059] <TASK> [ 2263.109320] ? check_preempt_wakeup+0x11f/0x230 [ 2263.109750] virtnet_vlan_rx_kill_vid+0x5a/0xa0 [ 2263.110180] vlan_vid_del+0x9c/0x170 [ 2263.110546] vlan_device_event+0x351/0x760 [8021q] [ 2263.111004] raw_notifier_call_chain+0x41/0x60 [ 2263.111426] dev_close_many+0xcb/0x120 [ 2263.111808] unregister_netdevice_many_notify+0x130/0x770 [ 2263.112297] ? wq_worker_running+0xa/0x30 [ 2263.112688] unregister_netdevice_queue+0x89/0xc0 [ 2263.113128] unregister_netdev+0x18/0x20 [ 2263.113512] virtnet_remove+0x4f/0x230 [ 2263.113885] virtio_dev_remove+0x31/0x70 [ 2263.114273] device_release_driver_internal+0x18f/0x1f0 [ 2263.114746] bus_remove_device+0xc6/0x130 [ 2263.115146] device_del+0x173/0x3c0 [ 2263.115502] ? kernfs_find_ns+0x35/0xd0 [ 2263.115895] device_unregister+0x1a/0x60 [ 2263.116279] unregister_virtio_device+0x11/0x20 [ 2263.116706] device_release_driver_internal+0x18f/0x1f0 [ 2263.117182] bus_remove_device+0xc6/0x130 [ 2263.117576] device_del+0x173/0x3c0 [ 2263.117929] ? 
vdpa_dev_remove+0x20/0x20 [vdpa] [ 2263.118364] device_unregister+0x1a/0x60 [ 2263.118752] mlx5_vdpa_dev_del+0x4c/0x80 [mlx5_vdpa] [ 2263.119232] vdpa_match_remove+0x21/0x30 [vdpa] [ 2263.119663] bus_for_each_dev+0x71/0xc0 [ 2263.120054] vdpa_mgmtdev_unregister+0x57/0x70 [vdpa] [ 2263.120520] mlx5v_remove+0x12/0x20 [mlx5_vdpa] [ 2263.120953] auxiliary_bus_remove+0x18/0x30 [ 2263.121356] device_release_driver_internal+0x18f/0x1f0 [ 2263.121830] bus_remove_device+0xc6/0x130 [ 2263.122223] device_del+0x173/0x3c0 [ 2263.122581] ? devl_param_driverinit_value_get+0x29/0x90 [ 2263.123070] mlx5_rescan_drivers_locked+0xc4/0x2d0 [mlx5_core] [ 2263.123633] mlx5_unregister_device+0x54/0x80 [mlx5_core] [ 2263.124169] mlx5_uninit_one+0x54/0x150 [mlx5_core] [ 2263.124656] mlx5_sf_dev_remove+0x45/0x90 [mlx5_core] [ 2263.125153] auxiliary_bus_remove+0x18/0x30 [ 2263.125560] device_release_driver_internal+0x18f/0x1f0 [ 2263.126052] bus_remove_device+0xc6/0x130 [ 2263.126451] device_del+0x173/0x3c0 [ 2263.126815] mlx5_sf_dev_remove+0x39/0xf0 [mlx5_core] [ 2263.127318] mlx5_sf_dev_state_change_handler+0x178/0x270 [mlx5_core] [ 2263.127920] blocking_notifier_call_chain+0x5a/0x80 [ 2263.128379] mlx5_vhca_state_work_handler+0x151/0x200 [mlx5_core] [ 2263.128951] process_one_work+0x1bb/0x3c0 [ 2263.129355] ? process_one_work+0x3c0/0x3c0 [ 2263.129766] worker_thread+0x4d/0x3c0 [ 2263.130140] ? process_one_work+0x3c0/0x3c0 [ 2263.130548] kthread+0xb9/0xe0 [ 2263.130895] ? kthread_complete_and_exit+0x20/0x20 [ 2263.131349] ret_from_fork+0x1f/0x30 [ 2263.131717] </TASK> The fix is to disable and destroy the workqueue after the device unregister. It is expected that vhost will not trigger kicks after the unregister. But even if it would, the wq is disabled already by setting the pointer to NULL (done so in the referenced commit). Fixes: ad6dc1daaf29 ("vdpa/mlx5: Avoid processing works if workqueue was destroyed") Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Message-Id: <20230516095800.3549932-1-dtatulea@nvidia.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Acked-by: Jason Wang <jasowang@redhat.com>
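The resulting teardown order can be sketched as below, with names invented for illustration: unregister the vdpa device while the control-vq workqueue can still run, and only afterwards disable and destroy it:

  static void my_vdpa_remove(struct my_adapter *a)
  {
          struct workqueue_struct *wq = a->cvq_wq;

          my_vdpa_unregister(a);          /* may still queue cvq work (vlan delete, ...) */

          a->cvq_wq = NULL;               /* late kicks now see a disabled workqueue */
          destroy_workqueue(wq);          /* flushes any remaining work */
  }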