path: root/include/linux/iommu.h
Age  Commit message  (Author; files, lines changed)
2019-11-12  Merge branches 'iommu/fixes', 'arm/qcom', 'arm/renesas', 'arm/rockchip', 'arm/mediatek', 'arm/tegra', 'arm/smmu', 'x86/amd', 'x86/vt-d', 'virtio' and 'core' into next  (Joerg Roedel; 1 file, -5/+60)
2019-11-07  iommu/io-pgtable-arm: Rename IOMMU_QCOM_SYS_CACHE and improve doc  (Will Deacon; 1 file, -4/+4)
The 'IOMMU_QCOM_SYS_CACHE' IOMMU protection flag is exposed to all users of the IOMMU API. Despite its name, the idea behind it isn't especially tied to Qualcomm implementations and could conceivably be used by other systems. Rename it to 'IOMMU_SYS_CACHE_ONLY' and update the comment to describe a bit better the idea behind it. Cc: Robin Murphy <robin.murphy@arm.com> Cc: "Isaac J. Manjarres" <isaacm@codeaurora.org> Signed-off-by: Will Deacon <will@kernel.org>
2019-10-15  iommu: Introduce guest PASID bind function  (Jacob Pan; 1 file, -0/+22)
Guest shared virtual address (SVA) may require the host to shadow guest PASID tables. Guest PASIDs can also be allocated from the host via enlightened interfaces. In this case, the guest needs to bind the guest mm, i.e. the cr3 in guest physical address, to the actual PASID table in the host IOMMU. Nesting will be turned on such that guest virtual addresses can go through a two-level translation:
 - 1st level translates GVA to GPA
 - 2nd level translates GPA to HPA
This patch introduces APIs to bind guest PASID data to the assigned device entry in the physical IOMMU. See the diagram below for usage explanation.

     .-------------.  .---------------------------.
     |   vIOMMU    |  | Guest process mm, FL only |
     |             |  '---------------------------'
     .----------------/
     | PASID Entry |--- PASID cache flush -
     '-------------'                       |
     |             |                       V
     |             |                      GP
     '-------------'
    Guest
    ------| Shadow |----------------------- GP->HP* ---------
          v        v                          |
    Host                                      v
     .-------------.  .----------------------.
     |   pIOMMU    |  | Bind FL for GVA-GPA  |
     |             |  '----------------------'
     .----------------/  |
     | PASID Entry |     V (Nested xlate)
     '----------------\.---------------------.
     |             |   |Set SL to GPA-HPA    |
     |             |   '---------------------'
     '-------------'

Where:
 - FL = First level/stage one page tables
 - SL = Second level/stage two page tables
 - GP = Guest PASID
 - HP = Host PASID
 * Conversion needed if the non-identity GP-HP mapping option is chosen.
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
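For orientation, a minimal sketch of the bind-side entry point this series describes; the exact signature and the iommu_gpasid_bind_data layout are assumptions to be checked against the merged header:

    /*
     * Sketch only: bind guest PASID table data for @dev into the host
     * IOMMU.  @data carries the guest's first-level page-table pointer
     * (cr3 in GPA) and the guest PASID; the host driver installs it in
     * the device's PASID entry and enables nested translation.
     */
    int iommu_sva_bind_gpasid(struct iommu_domain *domain,
                              struct device *dev,
                              struct iommu_gpasid_bind_data *data);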
2019-10-15  iommu: Introduce cache_invalidate API  (Yi L Liu; 1 file, -0/+14)
In any virtualization use case, when the first translation stage is "owned" by the guest OS, the host IOMMU driver has no knowledge of caching structure updates unless the guest invalidation activities are trapped by the virtualizer and passed down to the host. Since the invalidation data can be obtained from user space and will be written into the physical IOMMU, we must allow security checks at various layers. Therefore, a generic invalidation data format is proposed here; model-specific IOMMU drivers need to convert it into their own format. Signed-off-by: Yi L Liu <yi.l.liu@intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
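A sketch of the entry point this entry describes; treat the name and the iommu_cache_invalidate_info layout as assumptions based on the text above:

    /*
     * Sketch: propagate a guest-issued invalidation to the physical
     * IOMMU after it has been sanity-checked.  inv_info uses the
     * generic (model-agnostic) format described above.
     */
    int iommu_cache_invalidate(struct iommu_domain *domain,
                               struct device *dev,
                               struct iommu_cache_invalidate_info *inv_info);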
2019-10-15  iommu: Add gfp parameter to iommu_ops::map  (Tom Murphy; 1 file, -1/+20)
Add a gfp_t parameter to the iommu_ops::map function and remove the needless locking in the AMD IOMMU driver. The iommu_ops::map function (or the iommu_map function which calls it) was always supposed to be sleepable (according to Joerg's comment in this thread: https://lore.kernel.org/patchwork/patch/977520/ ) and so should probably have had a might_sleep() since it was written. However, the dma-iommu api can currently call iommu_map in an atomic context, which it shouldn't do. This doesn't cause any problems, because any iommu driver which uses the dma-iommu api uses GFP_ATOMIC in its iommu_ops::map function, but doing so wastes the memory allocator's atomic pools. Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
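The resulting callback shape, as a sketch based on the description above:

    struct iommu_ops {
            /* ... */
            /* gfp tells the driver which allocation context is legal
             * for any page-table memory it needs while mapping. */
            int (*map)(struct iommu_domain *domain, unsigned long iova,
                       phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
            /* ... */
    };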
2019-08-23  iommu: Add helpers to set/get default domain type  (Joerg Roedel; 1 file, -0/+16)
Add a couple of functions to allow changing the default domain type from architecture code, and a function for IOMMU drivers to query whether the default domain is passthrough. Signed-off-by: Joerg Roedel <jroedel@suse.de>
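A sketch of the helpers; the function names and signatures here are assumptions inferred from the description:

    void iommu_set_default_passthrough(bool cmd_line); /* arch code: force identity */
    void iommu_set_default_translated(bool cmd_line);  /* arch code: force DMA translation */
    bool iommu_default_passthrough(void);              /* drivers: query current default */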
2019-07-29  iommu: Pass struct iommu_iotlb_gather to ->unmap() and ->iotlb_sync()  (Will Deacon; 1 file, -3/+4)
To allow IOMMU drivers to batch up TLB flushing operations and postpone them until ->iotlb_sync() is called, extend the prototypes for the ->unmap() and ->iotlb_sync() IOMMU ops callbacks to take a pointer to the current iommu_iotlb_gather structure. All affected IOMMU drivers are updated, but there should be no functional change since the extra parameter is ignored for now. Signed-off-by: Will Deacon <will@kernel.org>
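The extended callback prototypes, sketched from the description; the gather pointer carries the accumulated invalidation range between ->unmap() calls and the final ->iotlb_sync():

    size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
                    size_t size, struct iommu_iotlb_gather *iotlb_gather);
    void (*iotlb_sync)(struct iommu_domain *domain,
                       struct iommu_iotlb_gather *iotlb_gather);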
2019-07-24  iommu: Introduce iommu_iotlb_gather_add_page()  (Will Deacon; 1 file, -0/+31)
Introduce a helper function for drivers to use when updating an iommu_iotlb_gather structure in response to an ->unmap() call, rather than having to open-code the logic in every page-table implementation. Signed-off-by: Will Deacon <will@kernel.org>
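A sketch of a driver's unmap path using the helper instead of open-coding range merging; my_pgtable_unmap is a hypothetical driver function:

    static size_t my_pgtable_unmap(struct iommu_domain *domain,
                                   unsigned long iova, size_t size,
                                   struct iommu_iotlb_gather *gather)
    {
            /* ... clear the PTEs for [iova, iova + size) ... */

            /* record the range so ->iotlb_sync() can flush it later */
            iommu_iotlb_gather_add_page(domain, gather, iova, size);
            return size;
    }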
2019-07-24  iommu: Introduce struct iommu_iotlb_gather for batching TLB flushes  (Will Deacon; 1 file, -4/+39)
To permit batching of TLB flushes across multiple calls to the IOMMU driver's ->unmap() implementation, introduce a new structure for tracking the address range to be flushed and the granularity at which the flushing is required. This is hooked into the IOMMU API and its callers are updated to make use of the new structure. Subsequent patches will plumb this into the IOMMU drivers as well, but for now the gathering information is ignored. Signed-off-by: Will Deacon <will@kernel.org>
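A sketch of the tracking structure as described: start/end bound the region to flush, and pgsize records the granularity at which it was mapped.

    struct iommu_iotlb_gather {
            unsigned long   start;
            unsigned long   end;    /* last byte of the range */
            size_t          pgsize; /* page size used for the mapping */
    };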
2019-07-24  iommu: Remove empty iommu_tlb_range_add() callback from iommu_ops  (Will Deacon; 1 file, -15/+0)
Commit add02cfdc9bc ("iommu: Introduce Interface for IOMMU TLB Flushing") added three new TLB flushing operations to the IOMMU API so that the underlying driver operations can be batched when unmapping large regions of IO virtual address space. However, the ->iotlb_range_add() callback has not been implemented by any IOMMU drivers (amd_iommu.c implements it as an empty function, which incurs the overhead of an indirect branch). Instead, drivers either flush the entire IOTLB in the ->iotlb_sync() callback or perform the necessary invalidation during ->unmap(). Attempting to implement ->iotlb_range_add() for arm-smmu-v3.c revealed two major issues:
 1. The page size used to map the region in the page-table is not known, and so it is not generally possible to issue TLB flushes in the most efficient manner.
 2. The only mutable state passed to the callback is a pointer to the iommu_domain, which can be accessed concurrently and therefore requires expensive synchronisation to keep track of the outstanding flushes.
Remove the callback entirely in preparation for extending ->unmap() and ->iotlb_sync() to update a token on the caller's stack.
Signed-off-by: Will Deacon <will@kernel.org>
2019-07-04  Merge branches 'x86/vt-d', 'x86/amd', 'arm/smmu', 'arm/omap', 'generic-dma-ops' and 'core' into next  (Joerg Roedel; 1 file, -13/+94)
2019-07-01  Merge branch 'for-joerg/arm-smmu/updates' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into arm/smmu  (Joerg Roedel; 1 file, -0/+6)
2019-06-18  iommu/io-pgtable-arm: Add support to use system cache  (Vivek Gautam; 1 file, -0/+6)
A few Qualcomm platforms, such as sdm845, have an additional outer cache called the System cache, aka Last level cache (LLC), that allows non-coherent devices to upgrade to using caching. This cache sits right before the DDR and is tightly coupled with the memory controller. The clients using this cache request their slices from the system cache, make them active, and can then start using it. There is a fundamental assumption that non-coherent devices can't access caches; this change adds an exception where they *can* use some level of cache despite still being non-coherent overall. Coherent devices that use cacheable memory, and the CPU, make use of this system cache by default. Looking at memory types, we have the following:
 a) Normal uncached: MAIR 0x44, inner non-cacheable, outer non-cacheable;
 b) Normal cached: MAIR 0xff, inner read write-back non-transient, outer read write-back non-transient; attribute setting for coherent I/O devices.
And for non-coherent I/O devices that can allocate in the system cache, another type gets added:
 c) Normal sys-cached: MAIR 0xf4, inner non-cacheable, outer read write-back non-transient.
Coherent I/O devices use the system cache by marking the memory as normal cached. Non-coherent I/O devices should mark the memory as normal sys-cached in page tables to use the system cache.
Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Vivek Gautam <vivek.gautam@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-06-12  iommu: Introduce IOMMU_RESV_DIRECT_RELAXABLE reserved memory regions  (Eric Auger; 1 file, -0/+6)
Introduce a new type of reserved region. This corresponds to directly mapped regions which are known to be relaxable under some specific conditions, such as the device assignment use case. Well-known examples are those used by USB controllers providing PS/2 keyboard emulation for pre-boot BIOS and early boot, or RMRRs associated with IGD working in legacy mode. Since commit c875d2c1b808 ("iommu/vt-d: Exclude devices using RMRRs from IOMMU API domains") and commit 18436afdc11a ("iommu/vt-d: Allow RMRR on graphics devices too"), those regions are considered "safe" with respect to the device assignment use case, which requires a non-direct mapping at the IOMMU physical level (RAM GPA -> HPA mapping). Those RMRRs currently exist, and sometimes the device attempts to access them, but this has not been considered an issue until now. However, at the moment iommu_get_group_resv_regions() cannot distinguish between directly mapped regions that must be absolutely enforced and those, like the ones above, which are known to be relaxable. This is a blocker for reporting severe conflicts between non-relaxable RMRRs (like MSI doorbells) and guest GPA space. With this new reserved region type we will be able to use iommu_get_group_resv_regions() to enumerate the IOVA space that is usable through the IOMMU API without introducing regressions with respect to existing device assignment use cases (USB and IGD). Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
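A hypothetical consumer, sketched under the assumption that the caller already holds the group; it enumerates the group's reserved regions and treats only non-relaxable direct mappings as hard conflicts:

    struct iommu_resv_region *r;
    LIST_HEAD(resv);

    iommu_get_group_resv_regions(group, &resv);
    list_for_each_entry(r, &resv, list) {
            if (r->type == IOMMU_RESV_DIRECT_RELAXABLE)
                    continue;       /* safe to override for assignment */
            /* r->start .. r->start + r->length - 1 must stay mapped */
    }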
2019-06-12  iommu: Add recoverable fault reporting  (Jean-Philippe Brucker; 1 file, -0/+19)
Some IOMMU hardware features, for example PCI PRI and Arm SMMU Stall, enable recoverable I/O page faults. Allow IOMMU drivers to report PRI Page Requests and Stall events through the new fault reporting API. The consumer of the fault can be either an I/O page fault handler in the host, or a guest OS. Once handled, the fault must be completed by sending a page response back to the IOMMU. Add an iommu_page_response() function to complete a page fault. There are two ways to extend the userspace API:
 * Add a field to iommu_page_response and a flag to iommu_page_response::flags describing the validity of this field.
 * Introduce a new iommu_page_response_X structure with a different version number. The kernel must then support both versions.
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
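A sketch of completing a recoverable fault once the page has been made resident; the field names and response-code constants are assumptions based on the uapi described above:

    struct iommu_page_response resp = {
            .version = IOMMU_PAGE_RESP_VERSION_1,
            .pasid   = fault->prm.pasid,      /* echo the request's PASID */
            .grpid   = fault->prm.grpid,      /* and its page-request group */
            .code    = IOMMU_PAGE_RESP_SUCCESS,
    };

    iommu_page_response(dev, &resp);          /* retry the faulting access */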
2019-06-12  iommu: Introduce device fault report API  (Jacob Pan; 1 file, -0/+29)
Traditionally, device-specific faults are detected and handled within their own device drivers. When an IOMMU is enabled, faults such as those on DMA-related transactions are detected by the IOMMU, but there is no generic mechanism to report faults back to the in-kernel device driver or to the guest OS in the case of assigned devices. This patch introduces a registration API for device-specific fault handlers. It differs from the existing iommu_set_fault_handler/report_iommu_fault infrastructure in several ways:
 - it allows reporting more sophisticated fault events (both unrecoverable faults and page request faults) due to the nature of the iommu_fault struct;
 - it is device-specific rather than domain-specific.
The current iommu_report_device_fault() implementation only handles the "shoot and forget" unrecoverable fault case. Handling of page request faults or stalled faults will come later. Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
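A sketch of a device driver registering for fault notifications; my_fault_handler is hypothetical, the register/unregister names follow this entry's description:

    static int my_fault_handler(struct iommu_fault *fault, void *data)
    {
            /* inspect fault->type and log or forward the event;
             * return 0 to indicate the fault was consumed */
            return 0;
    }

    ret = iommu_register_device_fault_handler(dev, my_fault_handler, dev);
    /* ... on teardown ... */
    iommu_unregister_device_fault_handler(dev);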
2019-06-12  iommu: Introduce device fault data  (Jacob Pan; 1 file, -0/+39)
Device faults detected by the IOMMU can be reported outside the IOMMU subsystem for further processing. This patch introduces a generic device fault data structure. The fault can be either an unrecoverable fault or a page request, also referred to as a recoverable fault. We only care about non-internal faults that are likely to be reported to an external subsystem. Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Liu, Yi L <yi.l.liu@linux.intel.com> Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
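The rough shape of the generic fault record, with the layout assumed from the two fault kinds named above:

    struct iommu_fault {
            __u32   type;           /* unrecoverable event or page request */
            __u32   padding;
            union {
                    struct iommu_fault_unrecoverable event;
                    struct iommu_fault_page_request  prm;
            };
    };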
2019-06-05  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 333  (Thomas Gleixner; 1 file, -13/+1)
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 59 temple place suite 330 boston ma 02111 1307 usa extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 136 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190530000436.384967451@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-27  iommu: Add API to request DMA domain for device  (Lu Baolu; 1 file, -0/+6)
Normally, when the iommu core probes a device, a default domain is allocated and attached to the device. The type of the default domain is statically defined, which results in situations where the allocated default domain isn't suitable for the device due to some limitation. We already have the API iommu_request_dm_for_dev() to replace a DMA domain with an identity one. This adds iommu_request_dma_domain_for_dev() to request a DMA domain if an allocated identity domain isn't suitable for the device in question. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
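A sketch of a caller using the new API; device_needs_translation() is a hypothetical stand-in for whatever check applies:

    if (device_needs_translation(dev)) {    /* hypothetical predicate */
            ret = iommu_request_dma_domain_for_dev(dev);
            if (ret)
                    dev_warn(dev, "could not switch to a DMA domain\n");
    }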
2019-04-26  Merge branch 'api-features' into arm/smmu  (Joerg Roedel; 1 file, -0/+140)
2019-04-23  ACPI/IORT: Check ATS capability in root complex nodes  (Jean-Philippe Brucker; 1 file, -0/+4)
The root complex node in IORT has a bit telling whether it supports ATS or not. Store this bit in the IOMMU fwspec when setting up a device, so that it can be accessed later by an IOMMU driver. In the future we'll probably want to store this bit at the host bridge or SMMU rather than in each endpoint. Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-04-11  iommu: Bind process address spaces to devices  (Jean-Philippe Brucker; 1 file, -0/+70)
Add bind() and unbind() operations to the IOMMU API. iommu_sva_bind_device() binds a device to an mm and returns a handle to the bond, which is released by calling iommu_sva_unbind_device(). Each mm bound to devices gets a PASID (by convention, a 20-bit system-wide ID representing the address space), which can be retrieved with iommu_sva_get_pasid(). When programming DMA addresses, device drivers include this PASID in a device-specific manner, to let the device access the given address space. Since the process memory may be paged out, device and IOMMU must support I/O page faults (e.g. PCI PRI). Using iommu_sva_set_ops(), device drivers provide an mm_exit() callback that is called by the IOMMU driver if the process exits before the device driver called unbind(). In mm_exit(), the device driver should disable DMA from the given context, so that the core IOMMU can reallocate the PASID. Whether the process exited or not, the device driver should always release the handle with unbind(). To use these functions, the device driver must first enable the IOMMU_DEV_FEAT_SVA device feature with iommu_dev_enable_feature(). Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
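A sketch of the bind/unbind flow described above, assuming bind returns an ERR_PTR-style handle on failure:

    iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);

    handle = iommu_sva_bind_device(dev, current->mm, drvdata);
    if (!IS_ERR(handle)) {
            int pasid = iommu_sva_get_pasid(handle);

            /* program pasid into the device's DMA descriptors ... */
            iommu_sva_unbind_device(handle);
    }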
2019-04-11  iommu: Add APIs for multiple domains per device  (Lu Baolu; 1 file, -0/+70)
Sharing a physical PCI device in a finer-granularity way is becoming a consensus in the industry, and IOMMU vendors are engaging efforts to support such sharing as well as possible. Among these efforts, the capability of supporting finer-granularity DMA isolation is a common requirement due to security considerations. With finer-granularity DMA isolation, subsets of a PCI function can be isolated from each other by the IOMMU. As a result, there is a request in software to attach multiple domains to a physical PCI device. One example of such a use model is Intel Scalable IOV [1] [2]. The Intel vt-d 3.0 spec [3] introduces the scalable mode which enables PASID-granularity DMA isolation. This adds the APIs to support multiple domains per device. In order to ease the discussions, we call it 'a domain in auxiliary mode' or simply 'auxiliary domain' when multiple domains are attached to a physical device. The APIs include:
 * iommu_dev_has_feature(dev, IOMMU_DEV_FEAT_AUX) - Detect both IOMMU and PCI endpoint devices supporting the feature (aux-domain here) without the host driver dependency.
 * iommu_dev_feature_enabled(dev, IOMMU_DEV_FEAT_AUX) - Check the enabling status of the feature (aux-domain here). The aux-domain interfaces are available only if this returns true.
 * iommu_dev_enable/disable_feature(dev, IOMMU_DEV_FEAT_AUX) - Enable/disable the device-specific aux-domain feature.
 * iommu_aux_attach_device(domain, dev) - Attaches @domain to @dev in the auxiliary mode. Multiple domains could be attached to a single device in the auxiliary mode, with each domain representing an isolated address space for an assignable subset of the device.
 * iommu_aux_detach_device(domain, dev) - Detach @domain which has been attached to @dev in the auxiliary mode.
 * iommu_aux_get_pasid(domain, dev) - Return the ID used for finer-granularity DMA translation. For the Intel Scalable IOV usage model, this will be a PASID. A device which supports Scalable IOV needs to write this ID to the device register so that DMA requests can be tagged with the right PASID prefix.
This has been updated with the latest proposal from Joerg posted here [5]. Many people were involved in discussions of this design: Kevin Tian <kevin.tian@intel.com>, Liu Yi L <yi.l.liu@intel.com>, Ashok Raj <ashok.raj@intel.com>, Sanjay Kumar <sanjay.k.kumar@intel.com>, Jacob Pan <jacob.jun.pan@linux.intel.com>, Alex Williamson <alex.williamson@redhat.com>, Jean-Philippe Brucker <jean-philippe.brucker@arm.com>, Joerg Roedel <joro@8bytes.org>; some discussions can be found here [4] [5].
[1] https://software.intel.com/en-us/download/intel-scalable-io-virtualization-technical-specification
[2] https://schd.ws/hosted_files/lc32018/00/LC3-SIOV-final.pdf
[3] https://software.intel.com/en-us/download/intel-virtualization-technology-for-directed-io-architecture-specification
[4] https://lkml.org/lkml/2018/7/26/4
[5] https://www.spinics.net/lists/iommu/msg31874.html
Cc: Ashok Raj <ashok.raj@intel.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Cc: Kevin Tian <kevin.tian@intel.com> Cc: Liu Yi L <yi.l.liu@intel.com> Suggested-by: Kevin Tian <kevin.tian@intel.com> Suggested-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Suggested-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
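A sketch stitching the listed APIs together for one assignable subset of a device:

    if (iommu_dev_has_feature(dev, IOMMU_DEV_FEAT_AUX) &&
        !iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_AUX)) {
            struct iommu_domain *domain = iommu_domain_alloc(dev->bus);

            if (domain && !iommu_aux_attach_device(domain, dev)) {
                    int pasid = iommu_aux_get_pasid(domain, dev);

                    /* tag the subset's DMA with pasid ... */
                    iommu_aux_detach_device(domain, dev);
            }
    }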
2019-03-01  Merge branches 'iommu/fixes', 'arm/msm', 'arm/tegra', 'arm/mediatek', 'x86/vt-d', 'x86/amd', 'hyper-v' and 'core' into next  (Joerg Roedel; 1 file, -1/+5)
2019-02-26  iommu: Document iommu_ops.is_attach_deferred()  (Geert Uytterhoeven; 1 file, -0/+2)
Add missing kerneldoc for iommu_ops.is_attach_deferred(). Fixes: e01d1913b0d08171 ("iommu: Add is_attach_deferred call-back to iommu-ops") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-02-26  iommu: Document iommu_ops.iotlb_sync_map()  (Geert Uytterhoeven; 1 file, -0/+1)
Add missing kerneldoc for iommu_ops.iotlb_sync_map(). Fixes: 1d7ae53b152dbc5b ("iommu: Introduce iotlb_sync_map callback") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Dmitry Osipenko <digetx@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-02-26  iommu: Fix flush_tlb_all typo  (Tom Murphy; 1 file, -1/+1)
Fix typo, flush_tlb_all should be flush_iotlb_all. Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-01-16  iommu: Introduce iotlb_sync_map callback  (Dmitry Osipenko; 1 file, -0/+1)
Introduce an iotlb_sync_map() callback that is invoked at the end of iommu_map(). This new callback allows IOMMU drivers to avoid syncing after the mapping of each contiguous chunk and instead sync only when the whole mapping is completed, optimizing the performance of the mapping operation. Signed-off-by: Dmitry Osipenko <digetx@gmail.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
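The new optional callback, sketched from the description:

    /* Called once when iommu_map() has finished mapping all chunks. */
    void (*iotlb_sync_map)(struct iommu_domain *domain);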
2018-12-17  iommu: Consolidate ->add/remove_device() calls  (Joerg Roedel; 1 file, -0/+3)
Put them into separate functions and call those where the plain ops have been called before. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-12-17  iommu: Introduce wrappers around dev->iommu_fwspec  (Joerg Roedel; 1 file, -0/+11)
These wrappers will be used to easily change the location of the field later when all users are converted. Signed-off-by: Joerg Roedel <jroedel@suse.de>
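A sketch of the wrappers' likely shape; trivial accessors so the backing field can move later without touching callers:

    static inline struct iommu_fwspec *dev_iommu_fwspec_get(struct device *dev)
    {
            return dev->iommu_fwspec;
    }

    static inline void dev_iommu_fwspec_set(struct device *dev,
                                            struct iommu_fwspec *fwspec)
    {
            dev->iommu_fwspec = fwspec;
    }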
2018-12-05  iommu: Change tlb_range_add to iotlb_range_add and tlb_sync to iotlb_sync  (Tom Murphy; 1 file, -2/+2)
Someone forgot to update this comment. Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-10-10  Merge branches 'arm/renesas', 'arm/smmu', 'ppc/pamu', 'x86/vt-d', 'x86/amd' and 'core' into next  (Joerg Roedel; 1 file, -6/+4)
2018-10-01  iommu/dma: Add support for non-strict mode  (Zhen Lei; 1 file, -0/+1)
With the flush queue infrastructure already abstracted into IOVA domains, hooking it up in iommu-dma is pretty simple. Since there is a degree of dependency on the IOMMU driver knowing what to do to play along, we key the whole thing off a domain attribute which will be set on default DMA ops domains to request non-strict invalidation. That way, drivers can indicate the appropriate support by acknowledging the attribute, and we can easily fall back to strict invalidation otherwise. The flush queue callback needs a handle on the iommu_domain which owns our cookie, so we have to add a pointer back to that, but neatly, that's also sufficient to indicate whether we're using a flush queue or not, and thus which way to release IOVAs. The only slight subtlety is switching __iommu_dma_unmap() from calling iommu_unmap() to explicit iommu_unmap_fast()/iommu_tlb_sync() so that we can elide the sync entirely in non-strict mode. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> [rm: convert to domain attribute, tweak comments and commit message] Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-09-25  iommu: Remove .domain_{get,set}_windows  (Robin Murphy; 1 file, -6/+0)
Since these are trivially handled by the .domain_{get,set}_attr callbacks when relevant, we can streamline struct iommu_ops for everyone. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-09-25  iommu: Add fast hook for getting DMA domains  (Robin Murphy; 1 file, -0/+1)
While iommu_get_domain_for_dev() is the robust way for arbitrary IOMMU API callers to retrieve the domain pointer, for DMA ops domains it doesn't scale well for large systems and multi-queue devices, since the momentary refcount adjustment will lead to exclusive cacheline contention when multiple CPUs are operating in parallel on different mappings for the same device. In the case of DMA ops domains, however, this refcounting is actually unnecessary, since they already imply that the group exists and is managed by platform code and IOMMU internals (by virtue of iommu_group_get_for_dev()) such that a reference will already be held for the lifetime of the device. Thus we can avoid the bottleneck by providing a fast lookup specifically for the DMA code to retrieve the default domain it already knows it has set up - a simple read-only dereference plays much nicer with cache-coherency protocols. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Tested-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
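A prototype sketch of the fast lookup; the function name here is an assumption based on the description:

    /* Refcount-free lookup reserved for the DMA-ops code, which already
     * knows the default domain outlives the device. */
    struct iommu_domain *iommu_get_dma_domain(struct device *dev);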
2018-09-25  iommu/arm-smmu: Add support for the fsl-mc bus  (Nipun Gupta; 1 file, -0/+2)
Implement bus specific support for the fsl-mc bus including registering arm_smmu_ops and bus specific device add operations. Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-08-08  iommu: Remove the ->map_sg indirection  (Christoph Hellwig; 1 file, -14/+2)
All iommu drivers use the default_iommu_map_sg implementation, and there is no good reason to ever override it. Just expose it as iommu_map_sg directly and remove the indirection, especially in our post-Spectre world where indirect calls are horribly expensive. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
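A prototype sketch of the directly exposed function; the exact parameter list is assumed:

    size_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
                        struct scatterlist *sg, unsigned int nents,
                        int prot);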
2018-07-06  iommu: Enable debugfs exposure of IOMMU driver internals  (Gary R Hook; 1 file, -0/+7)
Provide base enablement for using debugfs to expose internal data of an IOMMU driver. When called, create the /sys/kernel/debug/iommu directory and emit a strong warning at boot time to indicate that this feature is enabled. This function is called from iommu_init and creates the initial debugfs directory. Drivers may then call iommu_debugfs_new_driver_dir() to instantiate a device-specific directory to expose internal data. It will return a pointer to the new dentry structure created in /sys/kernel/debug/iommu, or NULL in the event of a failure. Since the IOMMU driver cannot be removed from the running system, there is no need for an "off" function. Signed-off-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
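A sketch of a driver creating its subdirectory; the directory name and the exposed counter are illustrative assumptions:

    struct dentry *d;

    d = iommu_debugfs_new_driver_dir("my-iommu");   /* name assumed */
    if (d)
            debugfs_create_u32("errors", 0444, d, &err_count);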
2018-02-13  iommu: Do not return error code for APIs with size_t return type  (Suravee Suthikulpanit; 1 file, -7/+7)
Currently, iommu_unmap, iommu_unmap_fast and iommu_map_sg return size_t. However, some of the return values are error codes (< 0), which can be misinterpreted as large sizes. Therefore, return size 0 instead to signify failure to map/unmap. Cc: Joerg Roedel <joro@8bytes.org> Cc: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
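After this change, callers check for zero rather than a negative errno, as in this sketch:

    size_t unmapped;

    unmapped = iommu_unmap(domain, iova, size);
    if (!unmapped)
            pr_err("failed to unmap iova 0x%lx\n", iova);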
2017-09-27  iommu: Fix comment for iommu_ops.map_sg  (Jean-Philippe Brucker; 1 file, -1/+1)
The definition of map_sg was split during a recent addition to iommu_ops. Put it back together. Fixes: add02cfdc9bc ("iommu: Introduce Interface for IOMMU TLB Flushing") Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-09-01  Merge branches 'arm/exynos', 'arm/renesas', 'arm/rockchip', 'arm/omap', 'arm/mediatek', 'arm/tegra', 'arm/qcom', 'arm/smmu', 'ppc/pamu', 'x86/vt-d', 'x86/amd', 's390' and 'core' into next  (Joerg Roedel; 1 file, -3/+52)
2017-08-30  iommu: Introduce Interface for IOMMU TLB Flushing  (Joerg Roedel; 1 file, -1/+49)
With the current IOMMU-API the hardware TLBs have to be flushed in every iommu_ops->unmap() call-back. For unmapping large amounts of address space, as happens when a KVM domain with assigned devices is destroyed, this causes thousands of unnecessary TLB flushes in the IOMMU hardware because the unmap call-back runs for every unmapped physical page. With the TLB Flush Interface and the new iommu_unmap_fast() function introduced here, the need to clean the hardware TLBs is removed from the unmapping code-path. Users of iommu_unmap_fast() have to explicitly call the TLB-Flush functions to sync the page-table changes to the hardware. Three functions for TLB-Flushes are introduced:
 * iommu_flush_tlb_all() - Flushes all TLB entries associated with that domain. TLB entries are flushed when this function returns.
 * iommu_tlb_range_add() - This will add a given range to the flush queue for this domain.
 * iommu_tlb_sync() - Flushes all queued ranges from the hardware TLBs. Returns when the flush is finished.
The semantic of this interface is intentionally similar to the iommu_gather_ops from the io-pgtable code. Cc: Alex Williamson <alex.williamson@redhat.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
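A usage sketch from an unmap path, using the three functions listed above:

    unmapped = iommu_unmap_fast(domain, iova, size);  /* no implicit flush */
    iommu_tlb_range_add(domain, iova, unmapped);      /* queue the range */
    /* ... more unmaps may be batched here ... */
    iommu_tlb_sync(domain);                           /* flush and wait */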
2017-08-15  iommu: Add is_attach_deferred call-back to iommu-ops  (Baoquan He; 1 file, -0/+1)
This new call-back will be used to check if the domain attach needs to be deferred for now. If yes, the domain attach/detach will return directly. Signed-off-by: Baoquan He <bhe@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-08-15  iommu: Fix wrong freeing of iommu_device->dev  (Joerg Roedel; 1 file, -1/+11)
The struct iommu_device has a 'struct device' embedded into it, not as a pointer but as the whole struct. In the conversion of the iommu drivers to use struct iommu_device, it was forgotten that the release function for that struct device simply calls kfree() on the pointer. This frees memory that was never allocated and causes memory corruption. To fix this issue, use a pointer to struct device instead of embedding the whole struct. This needs some updates in the iommu sysfs code as well as the Intel VT-d and AMD IOMMU drivers. Reported-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Fixes: 39ab9555c241 ('iommu: Add sysfs bindings for struct iommu_device') Cc: stable@vger.kernel.org # >= v4.11 Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-07-26  iommu: Correct iommu_map / iommu_unmap prototypes  (Dmitry Osipenko; 1 file, -2/+2)
Commit 7d3002cc8c16 ("iommu/core: split mapping to page sizes as supported by the hardware") replaced 'int gfp_order' with a 'size_t size' of iommu_map / iommu_unmap function arguments, but missed the function prototypes for the disabled CONFIG_IOMMU_API case, let's correct them for consistency. Signed-off-by: Dmitry Osipenko <digetx@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-05-04  Merge branches 'arm/exynos', 'arm/omap', 'arm/rockchip', 'arm/mediatek', 'arm/smmu', 'arm/core', 'x86/vt-d', 'x86/amd' and 'core' into next  (Joerg Roedel; 1 file, -51/+25)
2017-04-29  iommu: Remove trace-events include from iommu.h  (Joerg Roedel; 1 file, -2/+0)
It is not needed there anymore. All places needing it are fixed. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-04-27  iommu: Move report_iommu_fault() to iommu.c  (Joerg Roedel; 1 file, -39/+2)
The function is in no fast-path, there is no need for it to be static inline in a header file. This also removes the need to include iommu trace-points in iommu.h. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-04-27  iommu: Include device.h in iommu.h  (Joerg Roedel; 1 file, -2/+4)
We make use of 'struct device' in iommu.h, so include device.h to make it available explicitly. Re-order the other headers while at it. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-04-06  iommu: Better document the IOMMU_PRIV flag  (Robin Murphy; 1 file, -4/+7)
This is a fairly subtle thing - let's make sure it's described as clearly as possible to avoid potential misunderstandings. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>