Age | Commit message | Author | Files | Lines
2024-07-12iommu/vt-d: Fix identity map bounds in si_domain_init()Jon Pan-Doh1-1/+1
Intel IOMMU operates on inclusive bounds (both generally as well as iommu_domain_identity_map()). Meanwhile, for_each_mem_pfn_range() uses exclusive bounds for end_pfn. This creates an off-by-one error when switching between the two. Fixes: c5395d5c4a82 ("intel-iommu: Clean up iommu_domain_identity_map()") Signed-off-by: Jon Pan-Doh <pandoh@google.com> Tested-by: Sudheer Dantuluri <dantuluris@google.com> Suggested-by: Gary Zibrat <gzibrat@google.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240709234913.2749386-1-pandoh@google.com Signed-off-by: Will Deacon <will@kernel.org>
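A minimal sketch of the inclusive/exclusive conversion described above, assuming the mm_to_dma_pfn_start()/mm_to_dma_pfn_end() helpers used by si_domain_init() (not the literal diff):

  for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
          /* end_pfn is exclusive; iommu_domain_identity_map() expects an
           * inclusive last pfn, hence the "- 1" when converting.
           */
          ret = iommu_domain_identity_map(si_domain,
                          mm_to_dma_pfn_start(start_pfn),
                          mm_to_dma_pfn_end(end_pfn - 1));
          if (ret)
                  return ret;
  }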
2024-07-12iommu: Move IOMMU_DIRTY_NO_CLEAR defineShameer Kolothum1-3/+3
Fixes the compile issue when CONFIG_IOMMU_API is not set. Fixes: 4fe88fd8b4ae ("iommu/io-pgtable-arm: Add read_and_clear_dirty() support") Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202407121602.HL9ih1it-lkp@intel.com/ Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Reviewed-by: Joao Martins <joao.m.martins@oracle.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/20240712113132.45100-1-shameerali.kolothum.thodi@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-10dt-bindings: iommu: Convert msm,iommu-v0 to yamlDavid Heidelberg2-64/+78
Convert Qualcomm IOMMU v0 implementation to yaml format. iommus part being ommited for the other bindings, as mdp4 one. Signed-off-by: David Heidelberg <david@ixit.cz> Reviewed-by: Rob Herring (Arm) <robh@kernel.org> Link: https://lore.kernel.org/r/20240705221520.109540-1-david@ixit.cz Signed-off-by: Will Deacon <will@kernel.org>
2024-07-10iommu/vt-d: Fix aligned pages in calculate_psi_aligned_address()Lu Baolu1-0/+1
The helper calculate_psi_aligned_address() is used to convert an arbitrary range into a size-aligned one. The aligned_pages variable is calculated from input start and end, but is not adjusted when the start pfn is not aligned and the mask is adjusted, which results in an incorrect number of pages returned. The number of pages is used by qi_flush_piotlb() to flush caches for the first-stage translation. With the wrong number of pages, the cache is not synchronized, leading to inconsistencies in some cases. Fixes: c4d27ffaa8eb ("iommu/vt-d: Add cache tag invalidation helpers") Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240709152643.28109-3-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
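A sketch of the fix inside calculate_psi_aligned_address(), with the surrounding variable names hedged: once the mask is widened to cover an unaligned start pfn, the page count handed back to the flush helpers has to be recomputed from it.

  if (!IS_ALIGNED(pfn, aligned_pages)) {
          unsigned long shared_bits = ~(pfn ^ end_pfn) & ~(aligned_pages - 1);

          /* widen the mask so the aligned region covers the unaligned start */
          mask = shared_bits ? __ffs(shared_bits) : MAX_AGAW_PFN_WIDTH;
          /* the fix: keep the page count in sync with the new mask, otherwise
           * qi_flush_piotlb() flushes fewer pages than the range covers
           */
          aligned_pages = 1UL << mask;
  }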
2024-07-10iommu/vt-d: Limit max address mask to MAX_AGAW_PFN_WIDTHLu Baolu1-1/+1
Address mask specifies the number of low order bits of the address field that must be masked for the invalidation operation. Since address bits masked start from bit 12, the max address mask should be MAX_AGAW_PFN_WIDTH, as defined in Table 19 ("Invalidate Descriptor Address Mask Encodings") of the spec. Limit the max address mask returned from calculate_psi_aligned_address() to MAX_AGAW_PFN_WIDTH to prevent potential integer overflow in the following code: qi_flush_dev_iotlb(): ... addr |= (1ULL << (VTD_PAGE_SHIFT + mask - 1)) - 1; ... Fixes: c4d27ffaa8eb ("iommu/vt-d: Add cache tag invalidation helpers") Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240709152643.28109-2-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
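Roughly why the limit matters, quoting the shift from qi_flush_dev_iotlb() cited above (clamp placement hedged):

  /* mask is now bounded by MAX_AGAW_PFN_WIDTH, so VTD_PAGE_SHIFT + mask - 1
   * stays below 64 and this shift from qi_flush_dev_iotlb() cannot overflow
   */
  addr |= (1ULL << (VTD_PAGE_SHIFT + mask - 1)) - 1;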
2024-07-09docs: iommu: Remove outdated Documentation/userspace-api/iommu.rstMark-PK Tsai3-211/+0
The Documentation/userspace-api/iommu.rst file has become outdated due to the removal of associated structures and APIs. Specifically, structures such as iommu_cache_invalidate_info and the guest-PASID-related uAPI were removed in commit 0c9f17877891 ("iommu: Remove guest pasid related interfaces and definitions"). The corresponding uapi/linux/iommu.h file was removed in commit 00a9bc607043 ("iommu: Move iommu fault data to linux/iommu.h"). Signed-off-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20240702120617.26882-1-mark-pk.tsai@mediatek.com [will: Remove stale reference to 'iommu' from index.rst] Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04arm64: dts: fvp: Enable PCIe ATS for Base RevC FVPJean-Philippe Brucker1-0/+1
Declare that the host controller supports ATS, so the OS can enable it for ATS-capable PCIe endpoints. Acked-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20240607105415.2501934-5-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04iommu/of: Support ats-supported device-tree propertyJean-Philippe Brucker1-0/+9
Device-tree declares whether a PCI root-complex supports ATS by setting the "ats-supported" property. Copy this flag into device fwspec to let IOMMU drivers quickly check if they can enable ATS for a device. Tested-by: Ketan Patil <ketanp@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Liviu Dudau <liviu.dudau@arm.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20240607105415.2501934-4-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
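A minimal sketch of the property-to-fwspec copy, assuming a helper along these lines in the OF code and the existing IOMMU_FWSPEC_PCI_RC_ATS flag already used by the ACPI/IORT path (the real patch may be structured differently):

  static void of_pci_check_device_ats(struct device *dev, struct device_node *np)
  {
          struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);

          /* the root complex advertises ATS handling via "ats-supported" */
          if (fwspec && of_property_read_bool(np, "ats-supported"))
                  fwspec->flags |= IOMMU_FWSPEC_PCI_RC_ATS;
  }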
2024-07-04dt-bindings: PCI: generic: Add ats-supported propertyJean-Philippe Brucker1-0/+6
Add a way for firmware to tell the OS that ATS is supported by the PCI root complex. An endpoint with ATS enabled may send Translation Requests and Translated Memory Requests, which look just like Normal Memory Requests with a non-zero AT field. So a root controller that ignores the AT field may simply forward the request to the IOMMU as a Normal Memory Request, which could end badly. In any case, the endpoint will be unusable. The ats-supported property allows the OS to only enable ATS in endpoints if the root controller can handle ATS requests. Only add the property to pcie-host-ecam-generic for the moment. For non-generic root controllers, availability of ATS can be inferred from the compatible string. Reviewed-by: Rob Herring <robh@kernel.org> Reviewed-by: Liviu Dudau <liviu.dudau@arm.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20240607105415.2501934-3-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04iommu: Remove iommu_fwspec opsRobin Murphy4-12/+10
The ops in iommu_fwspec are only needed for the early configuration and probe process, and by now are easy enough to derive on-demand in those couple of places which need them, so remove the redundant stored copy. Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/55c1410b2cd09531eab4f8e2f18f92a0faa0ea75.1719919669.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04OF: Simplify of_iommu_configure()Robin Murphy2-38/+21
We no longer have a notion of partially-initialised fwspecs existing, and we also no longer need to use an iommu_ops pointer to return status to of_dma_configure(). Clean up the remains of those, which lends itself to clarifying the logic around the dma_range_map allocation as well. Acked-by: Rob Herring (Arm) <robh@kernel.org> Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/61972f88e31a6eda8bf5852f0853951164279a3c.1719919669.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04ACPI: Retire acpi_iommu_fwspec_ops()Robin Murphy1-23/+5
Now that iommu_fwspec_init() can signal for probe deferral directly, acpi_iommu_fwspec_ops() is unneeded and can be cleaned up. Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/011e39e275aba3ad451c5a1965ca8ddf20ed36c2.1719919669.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04iommu: Resolve fwspec ops automaticallyRobin Murphy11-61/+30
There's no real need for callers to resolve ops from a fwnode in order to then pass both to iommu_fwspec_init() - it's simpler and more sensible for that to resolve the ops itself. This in turn means we can centralise the notion of checking for a present driver, and enforce that fwspecs aren't allocated unless and until we know they will be usable. Also use this opportunity to modernise with some "new" helpers that arrived shortly after this code was first written; the generic fwnode_handle_get() clears up that ugly get/put mismatch, while of_fwnode_handle() can now abstract those open-coded dereferences. Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/0e2727adeb8cd73274425322f2f793561bdc927e.1719919669.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04iommu/mediatek-v1: Clean up redundant fwspec checksRobin Murphy1-9/+3
The driver explicitly clears any existing fwspec before calling mtk_iommu_v1_create_mapping(), but even if it didn't, the checks it's doing there duplicate what iommu_fwspec_init() would do anyway. Clean them up. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202407020415.KKnhPTUj-lkp@intel.com/ Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/7d6ebec135483f889af00eb376aa31c012efc3b2.1719919669.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04RDMA/usnic: Use iommu_paging_domain_alloc()Lu Baolu1-3/+3
usnic_uiom_alloc_pd() allocates a paging domain for a given device. In this case, iommu_domain_alloc(dev->bus) is equivalent to iommu_paging_domain_alloc(dev). Replace it, as iommu_domain_alloc() has been deprecated. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Acked-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/20240610085555.88197-15-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04wifi: ath11k: Use iommu_paging_domain_alloc()Lu Baolu1-3/+3
An iommu domain is allocated in ath11k_ahb_fw_resources_init() and is attached to ab_ahb->fw.dev in the same function. Use iommu_paging_domain_alloc() to make it explicit. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Acked-by: Jeff Johnson <quic_jjohnson@quicinc.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/20240610085555.88197-12-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04wifi: ath10k: Use iommu_paging_domain_alloc()Lu Baolu1-3/+3
An iommu domain is allocated in ath10k_fw_init() and is attached to ar_snoc->fw.dev in the same function. Use iommu_paging_domain_alloc() to make it explicit. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Acked-by: Jeff Johnson <quic_jjohnson@quicinc.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/20240610085555.88197-11-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04drm/msm: Use iommu_paging_domain_alloc()Lu Baolu1-2/+5
The domain allocated in msm_iommu_new() is for the @dev. Replace iommu_domain_alloc() with iommu_paging_domain_alloc() to make it explicit. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Acked-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Link: https://lore.kernel.org/r/20240610085555.88197-6-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04vhost-vdpa: Use iommu_paging_domain_alloc()Lu Baolu1-8/+6
Replace iommu_domain_alloc() with iommu_paging_domain_alloc(). Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/20240610085555.88197-5-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04vfio/type1: Use iommu_paging_domain_alloc()Lu Baolu1-3/+4
Replace iommu_domain_alloc() with iommu_paging_domain_alloc(). Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/20240610085555.88197-4-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04iommufd: Use iommu_paging_domain_alloc()Lu Baolu1-3/+4
If the iommu driver doesn't implement its domain_alloc_user callback, iommufd_hwpt_paging_alloc() falls back to allocating an iommu paging domain. Replace iommu_domain_alloc() with iommu_paging_domain_alloc() to pass the device pointer along the path. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/20240610085555.88197-3-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04iommu: Add iommu_paging_domain_alloc() interfaceLu Baolu2-0/+26
Commit <17de3f5fdd35> ("iommu: Retire bus ops") removes iommu ops from bus. The iommu subsystem no longer relies on bus for operations. So the bus parameter in iommu_domain_alloc() is no longer relevant. Add a new interface named iommu_paging_domain_alloc(), which explicitly indicates the allocation of a paging domain for DMA managed by a kernel driver. The new interface takes a device pointer as its parameter, which better aligns with the current iommu subsystem. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Vasant Hegde <vasant.hegde@amd.com> Link: https://lore.kernel.org/r/20240610085555.88197-2-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
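A sketch of the caller-side conversion used by the driver patches above; note the error convention differs from iommu_domain_alloc(), which returned NULL:

  /* was: domain = iommu_domain_alloc(dev->bus); */
  domain = iommu_paging_domain_alloc(dev);
  if (IS_ERR(domain))              /* returns ERR_PTR() on failure, not NULL */
          return PTR_ERR(domain);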
2024-07-04iommu: Extend domain attach group with handle supportLu Baolu2-0/+111
In the SVA case, each PASID of a device has an SVA domain attached to it, and the I/O page faults are handled by the fault handler of that SVA domain. In contrast, the I/O page faults for a user page table might be handled by the domain attached to the RID or by the domain attached to the PASID, depending on whether the PASID table is managed by user space or the kernel. As a result, the domain attach group interfaces need attach handle support. The attach handle will be forwarded to the fault handler of the user domain. Add variants of the domain attaching group interfaces so that they can support the attach handle, and export them for use in IOMMUFD. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240702063444.105814-5-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
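A rough sketch of how a caller such as IOMMUFD might use the new variants; the wrapper struct and field names here are hypothetical, the point is that the handle is embedded so the fault path can recover the owning object via container_of():

  struct my_attach_ctx {                     /* hypothetical caller-side wrapper */
          struct iommu_attach_handle handle; /* handed to the attach call */
          void *owner;                       /* e.g. the iommufd device object */
  };

  ret = iommu_attach_group_handle(domain, group, &ctx->handle);
  if (ret)
          return ret;
  /* later, in the fault handler:
   * ctx = container_of(handle, struct my_attach_ctx, handle);
   */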
2024-07-04iommu: Add attach handle to struct iopf_groupLu Baolu4-78/+42
Previously, the domain that a page fault targets is stored in an iopf_group, which represents a minimal set of page faults. With the introduction of attach handle, replace the domain with the handle so that the fault handler can obtain more information as needed when handling the faults. iommu_report_device_fault() is currently used for SVA page faults, which handles the page fault in an internal cycle. The domain is retrieved with iommu_get_domain_for_dev_pasid() if the pasid in the fault message is valid. This doesn't work in IOMMUFD case, where if the pasid table of a device is wholly managed by user space, there is no domain attached to the PASID of the device, and all page faults are forwarded through a NESTING domain attaching to RID. Add a static flag in iommu ops, which indicates if the IOMMU driver supports user-managed PASID tables. In the iopf deliver path, if no attach handle found for the iopf PASID, roll back to RID domain when the IOMMU driver supports this capability. iommu_get_domain_for_dev_pasid() is no longer used and can be removed. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240702063444.105814-4-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04iommu: Remove sva handle listLu Baolu4-12/+54
The struct iommu_sva represents an association between an SVA domain and a PASID of a device. It's stored in the iommu group's pasid array and also tracked by a list in the per-mm data structure. Remove the duplicate tracking of iommu_sva by eliminating the list. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240702063444.105814-3-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04iommu: Introduce domain attachment handleLu Baolu4-19/+40
Currently, when attaching a domain to a device or its PASID, domain is stored within the iommu group. It could be retrieved for use during the window between attachment and detachment. With new features introduced, there's a need to store more information than just a domain pointer. This information essentially represents the association between a domain and a device. For example, the SVA code already has a custom struct iommu_sva which represents a bond between sva domain and a PASID of a device. Looking forward, the IOMMUFD needs a place to store the iommufd_device pointer in the core, so that the device object ID could be quickly retrieved in the critical fault handling path. Introduce domain attachment handle that explicitly represents the attachment relationship between a domain and a device or its PASID. Co-developed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240702063444.105814-2-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-04iommu/arm-smmu-qcom: Register the TBU driver in qcom_smmu_impl_initGeorgi Djakov3-16/+42
Currently the TBU driver will only probe when CONFIG_ARM_SMMU_QCOM_DEBUG is enabled. The driver not probing would prevent the platform from reaching sync_state, and the system would remain in a sub-optimal power consumption mode while waiting for all consumer drivers to probe. To address this, let's register the TBU driver in qcom_smmu_impl_init(), so that it can probe, but still enable its functionality only when the debug option in Kconfig is enabled. Reported-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Closes: https://lore.kernel.org/r/CAA8EJppcXVu72OSo+OiYEiC1HQjP3qCwKMumOsUhcn6Czj0URg@mail.gmail.com Fixes: 414ecb030870 ("iommu/arm-smmu-qcom-debug: Add support for TBUs") Signed-off-by: Georgi Djakov <quic_c_gdjako@quicinc.com> Link: https://lore.kernel.org/r/20240704010759.507798-1-quic_c_gdjako@quicinc.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-03iommu/vt-d: Refactor PCI PRI enabling/disabling callbacksLu Baolu3-7/+61
Commit 0095bf83554f8 ("iommu: Improve iopf_queue_remove_device()") specified the flow for disabling the PRI on a device. Refactor the PRI callbacks in the intel iommu driver to better manage PRI enabling and disabling and align it with the device queue interfaces in the iommu core. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240701112317.94022-3-baolu.lu@linux.intel.com Link: https://lore.kernel.org/r/20240702130839.108139-8-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-03iommu/vt-d: Add helper to flush caches for context changeLu Baolu3-50/+92
This helper is used to flush the related caches following a change in a context table entry that was previously present. The VT-d specification provides guidance for such invalidations in section 6.5.3.3. This helper replaces the existing open code in the code paths where a present context entry is being torn down. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240701112317.94022-2-baolu.lu@linux.intel.com Link: https://lore.kernel.org/r/20240702130839.108139-7-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-03iommu/vt-d: Add helper to allocate paging domainLu Baolu1-9/+81
The domain_alloc_user operation is currently implemented by allocating a paging domain using iommu_domain_alloc(). This is because it needs to fully initialize the domain before returning. Add a helper to do this to avoid using iommu_domain_alloc(). Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/20240610085555.88197-16-baolu.lu@linux.intel.com Reviewed-by: Yi Liu <yi.l.liu@intel.com> Link: https://lore.kernel.org/r/20240702130839.108139-6-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-03iommu/vt-d: Downgrade warning for pre-enabled IRLu Baolu1-2/+2
Emitting a warning is overkill in intel_setup_irq_remapping() since the interrupt remapping is pre-enabled. For example, there's no guarantee that kexec will explicitly disable interrupt remapping before booting a new kernel. As a result, users are seeing warning messages like below when they kexec boot a kernel, though there is nothing wrong:
  DMAR-IR: IRQ remapping was enabled on dmar18 but we are not in kdump mode
  DMAR-IR: IRQ remapping was enabled on dmar17 but we are not in kdump mode
  DMAR-IR: IRQ remapping was enabled on dmar16 but we are not in kdump mode
  ... ...
Downgrade the severity of this message to avoid user confusion. CC: Paul Menzel <pmenzel@molgen.mpg.de> Link: https://lore.kernel.org/linux-iommu/5517f76a-94ad-452c-bae6-34ecc0ec4831@molgen.mpg.de/ Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240625043912.258036-1-baolu.lu@linux.intel.com Link: https://lore.kernel.org/r/20240702130839.108139-5-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
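A sketch of the kind of change this implies, assuming the message is currently emitted with pr_warn() (the exact call site and chosen level are hedged):

  /* was pr_warn(); pr_info() keeps the diagnostic without alarming users */
  pr_info("IRQ remapping was enabled on %s but we are not in kdump mode\n",
          iommu->name);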
2024-07-03iommu/vt-d: Remove control over Execute-Requested requestsLu Baolu4-17/+4
The VT-d specification has removed architectural support for requests with PASID that have a value of 1 for Execute-Requested (ER). The NXE bit in the PASID table entry and the XD bit in the first-stage paging entries are deprecated accordingly. Remove the programming of these bits to make it consistent with the spec. Suggested-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240624032351.249858-1-baolu.lu@linux.intel.com Link: https://lore.kernel.org/r/20240702130839.108139-4-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-03iommu/vt-d: Remove comment for def_domain_typeLu Baolu1-11/+0
The comment for def_domain_type is outdated. Part of it is irrelevant. Furthermore, it could just be deleted since the iommu_ops::def_domain_type callback is properly documented in iommu.h, so individual implementations shouldn't need to repeat that. Remove it to avoid confusion. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240624024327.234979-1-baolu.lu@linux.intel.com Link: https://lore.kernel.org/r/20240702130839.108139-3-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-03iommu/vt-d: Handle volatile descriptor status readJacob Pan1-1/+1
Queued invalidation wait descriptor status is volatile in that IOMMU hardware writes the data upon completion. Use READ_ONCE() to prevent compiler optimizations and ensure the memory is read every time. As a side effect, READ_ONCE() also enforces strict typing and may add an extra instruction, but this should not have a negative performance impact since we use cpu_relax() anyway, and the extra time (from the added instruction) may make it easier for the IOMMU hardware to gain cacheline ownership. e.g. gcc 12.3
BEFORE:
        81 38 ad de 00 00       cmpl $0x2,(%rax)
AFTER (with READ_ONCE()):
  772f: 8b 00                   mov (%rax),%eax
  7731: 3d ad de 00 00          cmp $0x2,%eax    // status data is 32 bit
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Yi Liu <yi.l.liu@intel.com> Link: https://lore.kernel.org/r/20240607173817.3914600-1-jacob.jun.pan@linux.intel.com Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20240702130839.108139-2-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
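The pattern in question, with the variable names hedged to match the description rather than the exact code:

  /* the status word is DMA-written by the IOMMU; force a fresh load each pass */
  while (READ_ONCE(qi->desc_status[wait_index]) != QI_DONE)
          cpu_relax();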
2024-07-03iommu/arm-smmu-v3: Enable HTTU for stage1 with io-pgtable mappingKunkun Jiang4-1/+23
If the io-pgtable quirk flag indicates support for hardware update of dirty state, enable the HA/HD bits in the SMMU CD and also set the DBM bit in the page descriptor. Now report the dirty page tracking capability of SMMUv3 and select IOMMUFD_DRIVER for ARM_SMMU_V3 if IOMMUFD is enabled. Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com> Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com> Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Link: https://lore.kernel.org/r/20240703101604.2576-6-shameerali.kolothum.thodi@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-03iommu/arm-smmu-v3: Add support for dirty tracking in domain allocJoao Martins3-23/+67
This provides all the infrastructure to enable dirty tracking if the hardware has the capability and the domain alloc request asks for it. Also, add a device_iommu_capable() check in the iommufd core for IOMMU_CAP_DIRTY_TRACKING before we request a user domain with dirty tracking support. Please note, we still report no support for IOMMU_CAP_DIRTY_TRACKING as it will finally be enabled in a subsequent patch. Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Link: https://lore.kernel.org/r/20240703101604.2576-5-shameerali.kolothum.thodi@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-03iommu/io-pgtable-arm: Add read_and_clear_dirty() supportShameer Kolothum1-2/+112
.read_and_clear_dirty() IOMMU domain op takes care of reading the dirty bits (i.e. PTE has DBM set and AP[2] clear) and marshalling into a bitmap of a given page size. While reading the dirty bits we also set the PTE AP[2] bit to mark it as writeable-clean depending on read_and_clear_dirty() flags. PTE states with respect to the DBM bit:
                        DBM bit   AP[2] ("RDONLY" bit)
  1. writable_clean        1             1
  2. writable_dirty        1             0
  3. read-only             0             1
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Link: https://lore.kernel.org/r/20240703101604.2576-4-shameerali.kolothum.thodi@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
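A sketch of the dirty test the table above implies; the macro names are hedged, since the defines introduced by this kind of patch may be spelled differently in io-pgtable-arm.c:

  /* dirty == hardware cleared the AP[2] read-only bit on a DBM-enabled PTE */
  static inline bool iopte_writeable_dirty(arm_lpae_iopte pte)
  {
          return (pte & ARM_LPAE_PTE_DBM) && !(pte & ARM_LPAE_PTE_AP_RDONLY);
  }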
2024-07-03iommu/arm-smmu-v3: Add feature detection for HTTUJean-Philippe Brucker2-0/+37
If the SMMU supports it and the kernel was built with HTTU support, probe for Hardware Translation Table Update (HTTU) support, which essentially enables hardware update of the access and dirty flags. Probe and set the smmu::features bits for Hardware Dirty and Hardware Access. This is in preparation for enabling it in the context descriptors of stage 1 format. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Link: https://lore.kernel.org/r/20240703101604.2576-3-shameerali.kolothum.thodi@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-03iommu/arm-smmu-v3: Add support for domain_alloc_user fnShameer Kolothum1-2/+31
This will be used by iommufd for allocating user-managed domains and is also required when we add support for iommufd-based dirty tracking. Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Link: https://lore.kernel.org/r/20240703101604.2576-2-shameerali.kolothum.thodi@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-02iommu/arm-smmu-qcom: record reason for deferring probeZhenhua Huang1-1/+2
To avoid the SMMU driver silently deferring probe, record the reason for it. It can be checked through ../debugfs/devices_deferred as well:
  /sys/kernel/debug# cat devices_deferred
  15000000.iommu  arm-smmu: qcom_scm not ready
Signed-off-by: Zhenhua Huang <quic_zhenhuah@quicinc.com> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Link: https://lore.kernel.org/r/1719910870-25079-1-git-send-email-quic_zhenhuah@quicinc.com Signed-off-by: Will Deacon <will@kernel.org>
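A sketch of the mechanism, assuming the qcom_scm availability check is the deferral point; dev_err_probe() both returns the error code and records the string shown in devices_deferred:

  if (!qcom_scm_is_available())
          return ERR_PTR(dev_err_probe(smmu->dev, -EPROBE_DEFER,
                                       "qcom_scm not ready\n"));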
2024-07-02iommu/arm-smmu: Pretty-print context fault related regsRob Clark3-51/+92
Parse out the bitfields for easier-to-read fault messages. Signed-off-by: Rob Clark <robdclark@chromium.org> Reviewed-by: Pranjal Shrivastava <praan@google.com> Link: https://lore.kernel.org/r/20240701162025.375134-4-robdclark@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-02iommu/arm-smmu-qcom-debug: Do not print for handled faultsRob Clark1-4/+0
Handled faults can be "normal", so don't spam dmesg about them. Signed-off-by: Rob Clark <robdclark@chromium.org> Reviewed-by: Pranjal Shrivastava <praan@google.com> Link: https://lore.kernel.org/r/20240701162025.375134-3-robdclark@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-02iommu/arm-smmu: Add CB prefix to register bitfieldsRob Clark5-41/+41
For consistency, add the "CB" prefix to the bitfield defines for context registers. Signed-off-by: Rob Clark <robdclark@chromium.org> Reviewed-by: Pranjal Shrivastava <praan@google.com> Link: https://lore.kernel.org/r/20240701162025.375134-2-robdclark@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-02dt-bindings: arm-smmu: Add X1E80100 GPU SMMUAkhil P Oommen1-1/+2
Update the devicetree bindings to support the gpu present in X1E80100 platform. Signed-off-by: Akhil P Oommen <quic_akhilpo@quicinc.com> Link: https://lore.kernel.org/r/20240629015111.264564-5-quic_akhilpo@quicinc.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-02iommu/arm-smmu-v3: add missing MODULE_DESCRIPTION() macroJeff Johnson1-0/+1
With ARCH=arm64, make allmodconfig && make W=1 C=1 reports:
  WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.o
Add the missing invocation of the MODULE_DESCRIPTION() macro. Signed-off-by: Jeff Johnson <quic_jjohnson@quicinc.com> Fixes: da55da5a42d4 ("iommu/arm-smmu-v3: Make the kunit into a module") Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/20240613-md-arm64-drivers-iommu-arm-arm-smmu-v3-v1-1-0e9f7584a5c8@quicinc.com Signed-off-by: Will Deacon <will@kernel.org>
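The fix is the usual one-liner next to the other MODULE_* macros; the description string here is illustrative rather than the exact wording used in the patch:

  MODULE_DESCRIPTION("KUnit tests for the ARM SMMUv3 driver");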
2024-07-02iommu/arm-smmu-v3: Shrink the strtab l1_desc arrayJason Gunthorpe2-10/+6
The top of the 2 level stream table is (at most) 128k entries big, and two high order allocations are required. One of __le64 which is programmed into the HW (1M), and one of struct arm_smmu_strtab_l1_desc which holds the CPU pointer (3M). There is no reason to store the l2ptr_dma as nothing reads it. devm stores a copy of it and the DMA memory will be freed via devm mechanisms. span is a constant of 8+1. Remove both. This removes 16 bytes from each arm_smmu_l1_ctx_desc and saves up to 2M of memory per iommu instance. Tested-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Mostafa Saleh <smostafa@google.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Link: https://lore.kernel.org/r/2-v2-318ed5f6983b+198f-smmuv3_tidy_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-02iommu/arm-smmu-v3: Do not zero the strtab twiceJason Gunthorpe1-20/+6
dmam_alloc_coherent() already returns zero'd memory so cfg->strtab.l1_desc (the list of DMA addresses for the L2 entries) is already zero'd. arm_smmu_init_l1_strtab() goes through and calls arm_smmu_write_strtab_l1_desc() on the newly allocated (and zero'd) struct arm_smmu_strtab_l1_desc, which ends up computing 'val = 0' and zeroing it again. Remove arm_smmu_init_l1_strtab() and just call devm_kcalloc() from arm_smmu_init_strtab_2lvl to allocate the companion struct. Tested-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Mostafa Saleh <smostafa@google.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Link: https://lore.kernel.org/r/1-v2-318ed5f6983b+198f-smmuv3_tidy_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
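A sketch of the simplified companion allocation, with field names hedged: devm_kcalloc() already hands back zeroed memory, so no separate initialisation pass over the L1 descriptors is needed.

  cfg->l1_desc = devm_kcalloc(smmu->dev, cfg->num_l1_ents,
                              sizeof(*cfg->l1_desc), GFP_KERNEL);
  if (!cfg->l1_desc)
          return -ENOMEM;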
2024-07-02iommu/arm-smmu-v3: Allow setting a S1 domain to a PASIDJason Gunthorpe2-2/+41
The SVA cleanup made the SSID logic entirely general, so all we need to do is call it with the correct cd table entry for an S1 domain. This is slightly tricky because of the ASID and how the locking works; the simple fix is to just update the ASID once we get the right locks. Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/14-v9-5cd718286059+79186-smmuv3_newapi_p2b_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-02iommu/arm-smmu-v3: Allow a PASID to be set when RID is IDENTITY/BLOCKEDJason Gunthorpe2-3/+49
If the STE doesn't point to the CD table we can upgrade it by reprogramming the STE with the appropriate S1DSS. We may also need to turn on ATS at the same time. Keep track of whether the installed STE points at the cd_table, and of the ATS state, so that this path can be triggered. Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/13-v9-5cd718286059+79186-smmuv3_newapi_p2b_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-02iommu/arm-smmu-v3: Test the STE S1DSS functionalityJason Gunthorpe1-5/+108
S1DSS brings in quite a few new transition pairs that are interesting. Test to/from S1DSS_BYPASS <-> S1DSS_SSID0, and BYPASS <-> S1DSS_SSID0. Test a contrived non-hitless flow to make sure that the logic works. Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Michael Shavit <mshavit@google.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/12-v9-5cd718286059+79186-smmuv3_newapi_p2b_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>