path: root/drivers/iommu
2016-02-29  iommu/vt-d: Use BUS_NOTIFY_REMOVED_DEVICE in hotplug path  (Joerg Roedel; 2 files, -4/+5)
In the PCI hotplug path of the Intel IOMMU driver, replace the usage of the BUS_NOTIFY_DEL_DEVICE notifier, which is executed before the driver is unbound from the device, with BUS_NOTIFY_REMOVED_DEVICE, which runs after that. This fixes a kernel BUG being triggered in the VT-d code when the device driver tries to unmap DMA buffers and the VT-d driver already destroyed all mappings. Reported-by: Stefani Seibold <stefani@seibold.net> Cc: stable@vger.kernel.org # v4.3+ Signed-off-by: Joerg Roedel <jroedel@suse.de>
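For illustration, a sketch of such a notifier callback (the function and helper names here are hypothetical, not the driver's actual symbols); the teardown is keyed off BUS_NOTIFY_REMOVED_DEVICE, which fires after the driver has been unbound, instead of BUS_NOTIFY_DEL_DEVICE, which fires before:

  static int iommu_bus_notifier(struct notifier_block *nb,
                                unsigned long action, void *data)
  {
          struct device *dev = data;

          /* Act only once the driver is already unbound, so DMA unmaps
           * issued from the driver's remove path still find their
           * mappings intact. */
          if (action != BUS_NOTIFY_REMOVED_DEVICE)
                  return NOTIFY_DONE;

          teardown_device_domain(dev);    /* hypothetical teardown helper */

          return NOTIFY_OK;
  }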
2016-02-29  iommu/amd: Detach device from domain before removal  (Joerg Roedel; 1 file, -0/+4)
Detach the device that is about to be removed from its domain (if it has one) to clear any related state like DTE entry and device's ATS state. Reported-by: Kelly Zytaruk <Kelly.Zytaruk@amd.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-02-25  iommu/amd: Apply workaround for ATS write permission check  (Jay Cornwall; 1 file, -0/+29)
The AMD Family 15h Models 30h-3Fh (Kaveri) BIOS and Kernel Developer's Guide omitted part of the BIOS IOMMU L2 register setup specification. Without this setup the IOMMU L2 does not fully respect write permissions when handling an ATS translation request. The IOMMU L2 will set the PTE dirty bit when handling an ATS translation request with write permission, even when the PTE RW bit is clear. This may occur by direct translation (which would cause a PPR) or by a prefetch request from the ATC. This is observed in practice when the IOMMU L2 modifies a PTE which maps a pagecache page. The ext4 filesystem driver BUGs when asked to write back these (non-modified) pages. Enable the ATS write permission check in the Kaveri IOMMU L2 if the BIOS has not. Signed-off-by: Jay Cornwall <jay@jcornwall.me> Cc: <stable@vger.kernel.org> # v3.19+ Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-02-25  iommu/amd: Fix boot warning when device 00:00.0 is not iommu covered  (Suravee Suthikulpanit; 1 file, -12/+22)
The setup code for the performance counters in the AMD IOMMU driver tests whether the counters can be written. It tries to set up a counter for device 00:00.0, which fails on systems where this particular device is not covered by the IOMMU. Fix this by not relying on device 00:00.0 but only on the IOMMU being present. Cc: stable@vger.kernel.org Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-02-16  Merge tag 'for-linus-20160216' of git://git.infradead.org/intel-iommu  (Linus Torvalds; 3 files, -8/+33)
Pull IOMMU SVM fixes from David Woodhouse: "Minor register size and interrupt acknowledgement fixes which only showed up in testing on newer hardware, but mostly a fix to the MM refcount handling to prevent a recursive refcount issue when mmap() is used on the file descriptor associated with a bound PASID" * tag 'for-linus-20160216' of git://git.infradead.org/intel-iommu: iommu/vt-d: Clear PPR bit to ensure we get more page request interrupts iommu/vt-d: Fix 64-bit accesses to 32-bit DMAR_GSTS_REG iommu/vt-d: Fix mm refcounting to hold mm_count not mm_users
2016-02-15  iommu/vt-d: Clear PPR bit to ensure we get more page request interrupts  (David Woodhouse; 1 file, -0/+4)
According to the VT-d specification we need to clear the PPR bit in the Page Request Status register when handling page requests, or the hardware won't generate any more interrupts. This wasn't actually necessary on SKL/KBL (which may well be the subject of a hardware erratum, although it's harmless enough). But other implementations do appear to get it right, and we only ever get one interrupt unless we clear the PPR bit. Reported-by: CQ Tang <cq.tang@intel.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Cc: stable@vger.kernel.org
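Roughly, the added acknowledge looks like this; a sketch assuming the register and bit names (DMAR_PRS_REG, DMA_PRS_PPR) match the driver's definitions, with the rest of the page-request handler elided:

  static irqreturn_t prq_event_thread(int irq, void *d)
  {
          struct intel_iommu *iommu = d;

          /* ... dequeue and service entries from the page request queue ... */

          /* Ack the Pending Page Request bit (write-1-to-clear) so the
           * hardware keeps raising page-request interrupts. */
          writel(DMA_PRS_PPR, iommu->reg + DMAR_PRS_REG);

          return IRQ_HANDLED;
  }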
2016-01-29  iommu/amd: Correct the wrong setting of alias DTE in do_attach  (Baoquan He; 1 file, -1/+1)
In the commit below, the alias DTE is set when the DTE of its peripheral device is set up. However, a code bug causes the wrong alias DTE to be set; correct it in this patch. commit e25bfb56ea7f046b71414e02f80f620deb5c6362 Author: Joerg Roedel <jroedel@suse.de> Date: Tue Oct 20 17:33:38 2015 +0200 iommu/amd: Set alias DTE in do_attach/do_detach Signed-off-by: Baoquan He <bhe@redhat.com> Tested-by: Mark Hounschell <markh@compro.net> Cc: stable@vger.kernel.org # v4.4 Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-01-29  iommu/vt-d: Don't skip PCI devices when disabling IOTLB  (Jeremy McNicoll; 1 file, -1/+1)
Fix a simple typo when disabling IOTLB on PCI(e) devices. Fixes: b16d0cb9e2fc ("iommu/vt-d: Always enable PASID/PRI PCI capabilities before ATS") Cc: stable@vger.kernel.org # v4.4 Signed-off-by: Jeremy McNicoll <jmcnicol@redhat.com> Reviewed-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-01-29  iommu/io-pgtable-arm: Fix io-pgtable-arm build failure  (Lada Trimasova; 1 file, -0/+1)
Trying to build a kernel for ARC with both options CONFIG_COMPILE_TEST and CONFIG_IOMMU_IO_PGTABLE_LPAE enabled (e.g. as a result of "make allyesconfig") results in the following build failure: | CC drivers/iommu/io-pgtable-arm.o | linux/drivers/iommu/io-pgtable-arm.c: In | function ‘__arm_lpae_alloc_pages’: | linux/drivers/iommu/io-pgtable-arm.c:221:3: | error: implicit declaration of function ‘dma_map_single’ | [-Werror=implicit-function-declaration] | dma = dma_map_single(dev, pages, size, DMA_TO_DEVICE); | ^ | linux/drivers/iommu/io-pgtable-arm.c:221:42: | error: ‘DMA_TO_DEVICE’ undeclared (first use in this function) | dma = dma_map_single(dev, pages, size, DMA_TO_DEVICE); | ^ Since IOMMU_IO_PGTABLE_LPAE depends on DMA API, io-pgtable-arm.c should include linux/dma-mapping.h. This fixes the reported failure. Cc: Alexey Brodkin <abrodkin@synopsys.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Joerg Roedel <joro@8bytes.org> Signed-off-by: Lada Trimasova <ltrimas@synopsys.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
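The fix itself is just the missing header:

  #include <linux/dma-mapping.h>  /* dma_map_single(), DMA_TO_DEVICE */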
2016-01-19  Merge tag 'iommu-updates-v4.5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu  (Linus Torvalds; 16 files, -1048/+401)
Pull IOMMU updates from Joerg Roedel: "The updates include: - Small code cleanups in the AMD IOMMUv2 driver - Scalability improvements for the DMA-API implementation of the AMD IOMMU driver. This is just a starting point, but already showed some good improvements in my tests. - Removal of the unused Renesas IPMMU/IPMMUI driver - Updates for ARM-SMMU include: * Some fixes to get the driver working nicely on Broadcom hardware * A change to the io-pgtable API to indicate the unit in which to flush (all callers converted, with Ack from Laurent) * Use of devm_* for allocating/freeing the SMMUv3 buffers - Some other small fixes and improvements for other drivers" * tag 'iommu-updates-v4.5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (46 commits) iommu/vt-d: Fix up error handling in alloc_iommu iommu/vt-d: Check the return value of iommu_device_create() iommu/amd: Remove an unneeded condition iommu/amd: Preallocate dma_ops apertures based on dma_mask iommu/amd: Use trylock to aquire bitmap_lock iommu/amd: Make dma_ops_domain->next_index percpu iommu/amd: Relax locking in dma_ops path iommu/amd: Initialize new aperture range before making it visible iommu/amd: Build io page-tables with cmpxchg64 iommu/amd: Allocate new aperture ranges in dma_ops_alloc_addresses iommu/amd: Optimize dma_ops_free_addresses iommu/amd: Remove need_flush from struct dma_ops_domain iommu/amd: Iterate over all aperture ranges in dma_ops_area_alloc iommu/amd: Flush iommu tlb in dma_ops_free_addresses iommu/amd: Rename dma_ops_domain->next_address to next_index iommu/amd: Remove 'start' parameter from dma_ops_area_alloc iommu/amd: Flush iommu tlb in dma_ops_aperture_alloc() iommu/amd: Retry address allocation within one aperture iommu/amd: Move aperture_range.offset to another cache-line iommu/amd: Add dma_ops_aperture_alloc() function ...
2016-01-19  Merge branches 's390', 'arm/renesas', 'arm/msm', 'arm/shmobile', 'arm/smmu', 'x86/amd' and 'x86/vt-d' into next  (Joerg Roedel; 16 files, -1048/+401)
2016-01-13  iommu/vt-d: Fix 64-bit accesses to 32-bit DMAR_GSTS_REG  (CQ Tang; 2 files, -2/+2)
This is a 32-bit register. Apparently harmless on real hardware, but causing justified warnings in simulation. Signed-off-by: CQ Tang <cq.tang@intel.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Cc: stable@vger.kernel.org
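Illustratively (the local variable name is assumed), the fix is a matter of access width:

  /* DMAR_GSTS_REG is a 32-bit register: use readl(), not readq() */
  u32 sts = readl(iommu->reg + DMAR_GSTS_REG);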
2016-01-13  iommu/vt-d: Fix mm refcounting to hold mm_count not mm_users  (David Woodhouse; 1 file, -6/+27)
Holding mm_users works OK for graphics, which was the first user of SVM with VT-d. However, it works less well for other devices, where we actually do a mmap() from the file descriptor to which the SVM PASID state is tied. In this case on process exit we end up with a recursive reference count: - The MM remains alive until the file is closed and the driver's release() call ends up unbinding the PASID. - The VMA corresponding to the mmap() remains intact until the MM is destroyed. - Thus the file isn't closed, even when exit_files() runs, because the VMA is still holding a reference to it. And the MM remains alive… To address this issue, we *stop* holding mm_users while the PASID is bound. We already hold mm_count by virtue of the MMU notifier, and that can be made to be sufficient. It means that for a period during process exit, the fun part of mmput() has happened and exit_mmap() has been called so the MM is basically defunct. But the PGD still exists and the PASID is still bound to it. During this period, we have to be very careful — exit_mmap() doesn't use mm->mmap_sem because it doesn't expect anyone else to be touching the MM (quite reasonably, since mm_users is zero). So we also need to fix the fault handler to just report failure if mm_users is already zero, and to temporarily bump mm_users while handling any faults. Additionally, exit_mmap() calls mmu_notifier_release() *before* it tears down the page tables, which is too early for us to flush the IOTLB for this PASID. And __mmu_notifier_release() removes every notifier from the list, so when exit_mmap() finally *does* tear down the mappings and clear the page tables, we don't get notified. So we work around this by clearing the PASID table entry in our MMU notifier release() callback. That way, the hardware *can't* get any pages back from the page tables before they get cleared. Hardware designers have confirmed that the resulting 'PASID not present' faults should be handled just as gracefully as 'page not present' faults, the important criterion being that they don't perturb the operation for any *other* PASID in the system. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Cc: stable@vger.kernel.org
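A condensed sketch of the fault-handler side of this scheme (struct and field names are approximations): mm_count stays pinned for the lifetime of the binding via the MMU notifier, mm_users is only bumped for the duration of a single fault, and a fault against an exiting process is reported as a failure.

  /* mm_count is already held for as long as the PASID is bound; take
   * only a transient mm_users reference while servicing this fault. */
  if (!atomic_inc_not_zero(&svm->mm->mm_users))
          goto bad_req;                   /* process is exiting */

  down_read(&svm->mm->mmap_sem);
  vma = find_vma(svm->mm, address);
  if (vma && address >= vma->vm_start)
          ret = handle_mm_fault(svm->mm, vma, address,
                                write ? FAULT_FLAG_WRITE : 0);
  up_read(&svm->mm->mmap_sem);
  mmput(svm->mm);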
2016-01-11  Merge branch 'x86-cleanups-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 1 file, -1/+1)
Pull x86 cleanups from Ingo Molnar: "The main changes in this cycle were: - code patching and cpu_has cleanups (Borislav Petkov) - paravirt cleanups (Juergen Gross) - TSC cleanup (Thomas Gleixner) - ptrace cleanup (Chen Gang)" * 'x86-cleanups-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: arch/x86/kernel/ptrace.c: Remove unused arg_offs_table x86/mm: Align macro defines x86/cpu: Provide a config option to disable static_cpu_has x86/cpufeature: Remove unused and seldomly used cpu_has_xx macros x86/cpufeature: Cleanup get_cpu_cap() x86/cpufeature: Move some of the scattered feature bits to x86_capability x86/paravirt: Remove paravirt ops pmd_update[_defer] and pte_update_defer x86/paravirt: Remove unused pv_apic_ops structure x86/tsc: Remove unused tsc_pre_init() hook x86: Remove unused function cpu_has_ht_siblings() x86/paravirt: Kill some unused patching functions
2016-01-07  iommu/vt-d: Fix up error handling in alloc_iommu  (Joerg Roedel; 1 file, -7/+7)
Only check for error when iommu->iommu_dev has been assigned and only assign drhd->iommu when the function can't fail anymore. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-01-07  iommu/vt-d: Check the return value of iommu_device_create()  (Nicholas Krause; 1 file, -0/+6)
This adds the proper check to alloc_iommu to make sure that the call to iommu_device_create() has completed successfully and, if not, to return the error code to the caller after freeing the previously allocated resources. Signed-off-by: Nicholas Krause <xerofoify@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
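The added check has roughly this shape (the error label and the intel_iommu_groups argument are assumptions about the surrounding alloc_iommu() code):

  iommu->iommu_dev = iommu_device_create(NULL, iommu,
                                         intel_iommu_groups,
                                         "%s", iommu->name);
  if (IS_ERR(iommu->iommu_dev)) {
          err = PTR_ERR(iommu->iommu_dev);
          goto err_unmap;         /* free previously allocated resources */
  }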
2016-01-07  iommu/dma: Use correct offset in map_sg  (Robin Murphy; 1 file, -1/+1)
When mapping a non-page-aligned scatterlist entry, we copy the original offset to the output DMA address before aligning it to hand off to iommu_map_sg(), then later adding the IOVA page address portion to get the final mapped address. However, when the IOVA page size is smaller than the CPU page size, it is the offset within the IOVA page we want, not that within the CPU page, which can easily be larger than an IOVA page and thus result in an incorrect final address. Fix the bug by taking only the IOVA-aligned part of the offset as the basis of the DMA address, not the whole thing. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
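In other words (with granule and s_offset standing in for the IOVA page size and the scatterlist entry's original offset; hypothetical locals, not the driver's exact expressions):

  size_t granule_mask = granule - 1;            /* granule: IOVA page size */
  size_t iova_off = s_offset & granule_mask;    /* offset within IOVA page */

  /* Keep only the sub-IOVA-page part in the scatterlist entry; the IOVA
   * page address is added back after iommu_map_sg() has mapped the list.
   * Using the full CPU-page offset here would be wrong whenever the IOVA
   * granule is smaller than PAGE_SIZE. */
  sg_dma_address(s) = iova_off;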
2016-01-07  iommu/amd: Remove an unneeded condition  (Dan Carpenter; 1 file, -5/+3)
get_device_id() returns an unsigned short device id. It never fails and it never returns a negative so we can remove this condition. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Preallocate dma_ops apertures based on dma_mask  (Joerg Roedel; 1 file, -7/+53)
Preallocate between 4 and 8 apertures when a device gets its dma_mask. With more apertures we significantly reduce contention on the domain lock. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Use trylock to aquire bitmap_lock  (Joerg Roedel; 1 file, -3/+17)
First search for a non-contended aperture with trylock before spinning. Signed-off-by: Joerg Roedel <jroedel@suse.de>
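A sketch of the two-pass idea (the bitmap-allocation helper and failure value are invented for illustration): scan the apertures with a trylock first, and fall back to blocking locks only if every aperture was contended.

  /* first pass: only apertures whose bitmap_lock is uncontended */
  for (i = 0; i < APERTURE_MAX_RANGES; i++) {
          struct aperture_range *range = dom->aperture[i];
          unsigned long flags, address;

          if (!range)
                  continue;
          if (!spin_trylock_irqsave(&range->bitmap_lock, flags))
                  continue;               /* contended: try the next one */

          address = alloc_from_bitmap(range, pages);      /* hypothetical */
          spin_unlock_irqrestore(&range->bitmap_lock, flags);

          if (address != -1UL)
                  return address;
  }
  /* second pass (not shown): retry with spin_lock_irqsave() */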
2015-12-28  iommu/amd: Make dma_ops_domain->next_index percpu  (Joerg Roedel; 1 file, -10/+29)
Make this pointer percpu so that we start searching for new addresses in the range where we last stopped, which has a higher probability of still being in the cache. Signed-off-by: Joerg Roedel <jroedel@suse.de>
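Roughly, with percpu accessors (the field type and its use here are assumptions):

  struct dma_ops_domain {
          /* ... */
          u32 __percpu *next_index;       /* per-CPU allocation cursor */
  };

  /* start searching where this CPU last stopped ... */
  i = this_cpu_read(*dom->next_index) % APERTURE_MAX_RANGES;

  /* ... and remember where we ended up for the next allocation */
  this_cpu_write(*dom->next_index, i);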
2015-12-28  iommu/amd: Relax locking in dma_ops path  (Joerg Roedel; 1 file, -59/+11)
Remove the long holding times of the domain->lock and rely on the bitmap_lock instead. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Initialize new aperture range before making it visible  (Joerg Roedel; 1 file, -13/+20)
Make sure the aperture range is fully initialized before it is visible to the address allocator. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Build io page-tables with cmpxchg64  (Joerg Roedel; 1 file, -3/+13)
This allows the page-tables to be built up without holding any locks. As a consequence, it removes the need to pre-populate the dma_ops page-tables. Signed-off-by: Joerg Roedel <jroedel@suse.de>
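The core of the lockless installation of a new table level looks roughly like this (macro names are quoted from memory and should be treated as approximate):

  u64 __pte = *pte;
  u64 *page;

  if (!IOMMU_PTE_PRESENT(__pte)) {
          page = (u64 *)get_zeroed_page(gfp);
          if (!page)
                  return NULL;

          /* Install the new table only if nobody beat us to it; the loser
           * of the race frees its page and walks the table the winner
           * installed instead. */
          if (cmpxchg64(pte, __pte,
                        PM_LEVEL_PDE(level, virt_to_phys(page))) != __pte)
                  free_page((unsigned long)page);
  }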
2015-12-28  iommu/amd: Allocate new aperture ranges in dma_ops_alloc_addresses  (Joerg Roedel; 1 file, -19/+10)
It really belongs there and not in __map_single. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Optimize dma_ops_free_addresses  (Joerg Roedel; 1 file, -2/+3)
Don't flush the iommu tlb when we free something behind the current next_bit pointer. Update the next_bit pointer instead and let the flush happen on the next wraparound in the allocation path. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Remove need_flush from struct dma_ops_domain  (Joerg Roedel; 1 file, -24/+6)
The flushing of iommu tlbs is now done on a per-range basis. So there is no need anymore for domain-wide flush tracking. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Iterate over all aperture ranges in dma_ops_area_alloc  (Joerg Roedel; 1 file, -17/+11)
This way we don't need to care about the next_index wrapping around in dma_ops_alloc_addresses. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Flush iommu tlb in dma_ops_free_addresses  (Joerg Roedel; 1 file, -2/+4)
Instead of setting need_flush, do the flush directly in dma_ops_free_addresses. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Rename dma_ops_domain->next_address to next_index  (Joerg Roedel; 1 file, -13/+13)
It points to the next aperture index to allocate from. We don't need the full address anymore because this is now tracked in struct aperture_range. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Remove 'start' parameter from dma_ops_area_alloc  (Joerg Roedel; 1 file, -6/+4)
Parameter is not needed because the value is part of the already passed in struct dma_ops_domain. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Flush iommu tlb in dma_ops_aperture_alloc()  (Joerg Roedel; 1 file, -5/+16)
Since the allocator wraparound happens in this function now, flush the iommu tlb there too. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Retry address allocation within one aperture  (Joerg Roedel; 1 file, -10/+19)
Instead of skipping to the next aperture, first try again in the current one. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Move aperture_range.offset to another cache-line  (Joerg Roedel; 1 file, -2/+1)
Moving it before the pte_pages array puts it into the same cache-line as the spin-lock and the bitmap array pointer. This should save a cache-miss. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Add dma_ops_aperture_alloc() function  (Joerg Roedel; 1 file, -12/+25)
Make this a wrapper around iommu_ops_area_alloc() for now and add more logic to this function later on. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Pass correct shift to iommu_area_alloc()  (Joerg Roedel; 1 file, -1/+1)
The page-offset of the aperture must be passed instead of 0. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Flush the IOMMU TLB before the addresses are freed  (Joerg Roedel; 1 file, -4/+4)
This allows the bitmap_lock to be held only for a very short period of time. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Flush IOMMU TLB on __map_single error path  (Joerg Roedel; 1 file, -0/+2)
Present PTEs may have existed which in theory could have made it into the IOMMU TLB. Flush the addresses out on the error path to make sure no stale entries remain. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Introduce bitmap_lock in struct aperture_range  (Joerg Roedel; 1 file, -0/+10)
This lock only protects the address allocation bitmap in one aperture. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Move 'struct dma_ops_domain' definition to amd_iommu.c  (Joerg Roedel; 2 files, -40/+40)
It is only used in this file anyway, so keep it there. Same with 'struct aperture_range'. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/amd: Warn only once on unexpected pte value  (Joerg Roedel; 1 file, -2/+2)
This prevents possible flooding of the kernel log. Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-28  iommu/ipmmu-vmsa: Don't truncate ttbr if LPAE is not enabled  (Geert Uytterhoeven; 1 file, -1/+1)
If CONFIG_PHYS_ADDR_T_64BIT=n: drivers/iommu/ipmmu-vmsa.c: In function 'ipmmu_domain_init_context': drivers/iommu/ipmmu-vmsa.c:434:2: warning: right shift count >= width of type ipmmu_ctx_write(domain, IMTTUBR0, ttbr >> 32); ^ As io_pgtable_cfg.arm_lpae_s1_cfg.ttbr[] is an array of u64s, assigning it to a phys_addr_t may truncate it. Make ttbr u64 to fix this. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Joerg Roedel <jroedel@suse.de>
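The fix amounts to keeping the full 64-bit value, along these lines (a sketch based on the names in the warning above):

  u64 ttbr = domain->cfg.arm_lpae_s1_cfg.ttbr[0];   /* was phys_addr_t */

  ipmmu_ctx_write(domain, IMTTLBR0, ttbr);
  ipmmu_ctx_write(domain, IMTTUBR0, ttbr >> 32);    /* shift now well-defined */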
2015-12-28  iommu/dma: Avoid unlikely high-order allocations  (Robin Murphy; 1 file, -2/+4)
Doug reports that the equivalent page allocator on 32-bit ARM exhibits particularly pathological behaviour under memory pressure when fragmentation is high, where allocating a 4MB buffer takes tens of seconds and the number of calls to alloc_pages() is over 9000![1] We can drastically improve that situation without losing the other benefits of high-order allocations when they would succeed, by assuming memory pressure is relatively constant over the course of an allocation, and not retrying allocations at orders we know to have failed before. This way, the best-case behaviour remains unchanged, and in the worst case we should see at most a dozen or so (MAX_ORDER - 1) failed attempts before falling back to single pages for the remainder of the buffer. [1]:http://lists.infradead.org/pipermail/linux-arm-kernel/2015-December/394660.html Reported-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
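The gist, as a simplified chunk allocator (names invented; the real code also handles splitting and accounting): the order only ever moves downward for a given buffer, so an order that has failed once is never retried.

  static struct page *alloc_one_chunk(unsigned int *order, gfp_t gfp)
  {
          /* walk downward from the caller's current order; never go back
           * up, so repeated high-order failures are avoided */
          for (; *order > 0; (*order)--) {
                  struct page *page = alloc_pages(gfp | __GFP_NORETRY |
                                                  __GFP_NOWARN, *order);
                  if (page)
                          return page;
          }
          return alloc_pages(gfp, 0);     /* final fallback: single pages */
  }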
2015-12-28  iommu/dma: Add some missing #includes  (Robin Murphy; 1 file, -0/+3)
dma-iommu.c was naughtily relying on an implicit transitive #include of linux/vmalloc.h, which is apparently not present on some architectures. Add that, plus a couple more headers for other functions which are used similarly. Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2015-12-22  Merge branch 'for-joerg/arm-smmu/updates' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into arm/smmu  (Joerg Roedel; 5 files, -173/+119)
2015-12-19  x86/cpufeature: Remove unused and seldomly used cpu_has_xx macros  (Borislav Petkov; 1 file, -1/+1)
Those are stupid and code should use static_cpu_has_safe() or boot_cpu_has() instead. Kill the least used and unused ones. The remaining ones need more careful inspection before a conversion can happen. On the TODO. Signed-off-by: Borislav Petkov <bp@suse.de> Link: http://lkml.kernel.org/r/1449481182-27541-4-git-send-email-bp@alien8.de Cc: David Sterba <dsterba@suse.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Matt Mackall <mpm@selenic.com> Cc: Chris Mason <clm@fb.com> Cc: Josef Bacik <jbacik@fb.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-12-18  Merge tag 'iommu-fixes-v4.4-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu  (Linus Torvalds; 2 files, -2/+38)
Pull IOMMU fixes from Joerg Roedel: "Two similar fixes for the Intel and AMD IOMMU drivers to add proper access checks before calling handle_mm_fault" * tag 'iommu-fixes-v4.4-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: iommu/vt-d: Do access checks before calling handle_mm_fault() iommu/amd: Do proper access checking before calling handle_mm_fault()
2015-12-17  iommu/io-pgtable-arm: Ensure we free the final level on teardown  (Will Deacon; 1 file, -5/+6)
When tearing down page tables, we return early for the final level since we know that we won't have any table pointers to follow. Unfortunately, this also means that we forget to free the final level, so we end up leaking memory. Fix the issue by always freeing the current level, but just don't bother to iterate over the ptes if we're at the final level. Cc: <stable@vger.kernel.org> Reported-by: Zhang Bo <zhangbo_a@xiaomi.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-12-17  iommu/arm-smmu: Use STE.S1STALLD only when supported  (Prem Mallappa; 1 file, -3/+12)
It is ILLEGAL to set STE.S1STALLD to 1 if stage 1 is enabled and either the stall or terminate models are not supported. This patch fixes the STALLD check and ensures that we don't set STALLD in the STE when it is not supported. Signed-off-by: Prem Mallappa <pmallapp@broadcom.com> [will: consistently use IDR0_STALL_MODEL_* prefix] Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-12-17  iommu/arm-smmu: Fix write to GERRORN register  (Prem Mallappa; 1 file, -12/+12)
When acknowledging global errors, the GERRORN register should be written with the original GERROR value so that active errors are toggled. This patch fixes the driver to write the original GERROR value to GERRORN, instead of an active error mask. Signed-off-by: Prem Mallappa <pmallapp@broadcom.com> [will: reworked use of active bits and fixed commit log] Signed-off-by: Will Deacon <will.deacon@arm.com>
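Schematically (register offset names paraphrased from the SMMUv3 driver):

  u32 gerror  = readl_relaxed(smmu->base + ARM_SMMU_GERROR);
  u32 gerrorn = readl_relaxed(smmu->base + ARM_SMMU_GERRORN);
  u32 active  = gerror ^ gerrorn;         /* errors not yet acknowledged */

  /* ... report and handle each bit set in 'active' ... */

  /* Acknowledge by writing back the original GERROR value so the active
   * bits toggle; writing only the active mask would be wrong. */
  writel_relaxed(gerror, smmu->base + ARM_SMMU_GERRORN);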