path: root/drivers/iommu/io-pgtable-arm.c
Age  Commit message  (Author, files changed, lines -/+)
2020-03-02  iommu/io-pgtable-arm: Fix IOVA validation for 32-bit  (Robin Murphy, 1 file, -2/+2)
Since we only support the TTBR1 quirk for AArch64 contexts, and consequently only for 64-bit builds, the sign-extension aspect of the "are all bits above IAS consistent?" check should implicitly only apply to 64-bit IOVAs. Change the type of the cast to ensure that 32-bit longs don't inadvertently get sign-extended, and thus considered invalid, if they happen to be above 2GB in the TTBR0 region. Reported-by: Stephan Gerhold <stephan@gerhold.net> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Acked-by: Will Deacon <will@kernel.org> Fixes: db6903010aa5 ("iommu/io-pgtable-arm: Prepare for TTBR1 usage") Signed-off-by: Joerg Roedel <jroedel@suse.de>
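For illustration, a small standalone C demo of the failure mode (not the kernel diff itself; the 32-bit cast is modelled with int32_t and the IAS value of 32 is just an example):

/* Demo of the pitfall described above: on a 32-bit build "long" is 32 bits,
 * so casting an IOVA above 2GB through it sign-extends the value and the
 * "bits above IAS must be zero" test spuriously fails. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t iova = 0x80000000u;    /* IOVA above 2GB, still below 1 << 32 */
    unsigned int ias = 32;

    /* Buggy path: a 32-bit signed intermediate (what the old cast
     * effectively was on 32-bit builds) makes the value negative before it
     * is widened, so the arithmetic shift yields all ones. */
    int64_t buggy = (int64_t)(int32_t)iova >> ias;

    /* Fixed path: widening the unsigned IOVA straight to a signed 64-bit
     * type keeps it positive, so every bit above IAS shifts out as zero. */
    int64_t fixed = (int64_t)iova >> ias;

    printf("buggy: %lld (nonzero => rejected), fixed: %lld\n",
           (long long)buggy, (long long)fixed);
    return 0;
}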
2020-01-10  iommu/io-pgtable-arm: Prepare for TTBR1 usage  (Robin Murphy, 1 file, -6/+19)
Now that we can correctly extract top-level indices without relying on the remaining upper bits being zero, the only remaining impediments to using a given table for TTBR1 are the address validation on map/unmap and the awkward TCR translation granule format. Add a quirk so that we can do the right thing at those points. Tested-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-10  iommu/io-pgtable-arm: Rationalise VTCR handling  (Will Deacon, 1 file, -36/+21)
Commit 05a648cd2dd7 ("iommu/io-pgtable-arm: Rationalise TCR handling") reworked the way in which the TCR register value is returned from the io-pgtable code when targeting the Arm long-descriptor format, in preparation for allowing page-tables to target TTBR1. As it turns out, the new interface is a lot nicer to use, so do the same conversion for the VTCR register even though there is only a single base register for stage-2 translation. Cc: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-10  iommu/io-pgtable-arm: Rationalise TCR handling  (Robin Murphy, 1 file, -58/+40)
Although it's conceptually nice for the io_pgtable_cfg to provide a standard VMSA TCR value, the reality is that no VMSA-compliant IOMMU looks exactly like an Arm CPU, and they all have various other TCR controls which io-pgtable can't be expected to understand. Thus since there is an expectation that drivers will have to add to the given TCR value anyway, let's strip it down to just the essentials that are directly relevant to io-pgtable's inner workings - namely the various sizes and the walk attributes. Tested-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> [will: Add missing include of bitfield.h] Signed-off-by: Will Deacon <will@kernel.org>
2020-01-10  iommu/io-pgtable-arm: Ensure ARM_64_LPAE_S2_TCR_RES1 is unsigned  (Will Deacon, 1 file, -1/+1)
ARM_64_LPAE_S2_TCR_RES1 is intended to map to bit 31 of the VTCR register, which is required to be set to 1 by the architecture. Unfortunately, we accidentally treat this as a signed quantity which means we also set the upper 32 bits of the VTCR to one, and they are required to be zero. Treat ARM_64_LPAE_S2_TCR_RES1 as unsigned to avoid the unwanted sign-extension up to 64 bits. Cc: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
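A standalone sketch of the same pitfall, assuming a 32-bit int (the macro names below are illustrative, not the kernel's):

/* With a plain "1 << 31" the constant is a negative int (and shifting into
 * the sign bit is formally undefined for a signed 32-bit int, which is part
 * of the problem), so assigning it into a 64-bit register value sets all of
 * bits 63:31 rather than just bit 31. */
#include <stdint.h>
#include <stdio.h>

#define RES1_SIGNED     (1 << 31)   /* buggy: int, sign-extends */
#define RES1_UNSIGNED   (1U << 31)  /* fixed: unsigned, exactly bit 31 */

int main(void)
{
    uint64_t vtcr_bad = 0, vtcr_good = 0;

    vtcr_bad |= RES1_SIGNED;    /* becomes 0xffffffff80000000 */
    vtcr_good |= RES1_UNSIGNED; /* just 0x0000000080000000 */

    printf("signed: %#llx\nunsigned: %#llx\n",
           (unsigned long long)vtcr_bad, (unsigned long long)vtcr_good);
    return 0;
}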
2020-01-10  iommu/io-pgtable-arm: Improve attribute handling  (Robin Murphy, 1 file, -6/+11)
By VMSA rules, using Normal Non-Cacheable type with a shareability attribute of anything other than Outer Shareable is liable to lead into unpredictable territory:

| Overlaying the shareability attribute (B3-1377, ARM DDI 0406C.c)
|
| A memory region with a resultant memory type attribute of Normal, and
| a resultant cacheability attribute of Inner Non-cacheable, Outer
| Non-cacheable, must have a resultant shareability attribute of Outer
| Shareable, otherwise shareability is UNPREDICTABLE

Although the SMMU architectures seem to give some slightly stronger guarantees of Non-Cacheable output types becoming implicitly Outer Shareable in most cases, we may as well be explicit and not take any chances. It's also weird that LPAE attribute handling is currently split between prot_to_pte() and init_pte() given that it can all be statically determined up-front. Thus, collect *all* the LPAE attributes into prot_to_pte() in order to logically pick the shareability based on the incoming IOMMU API prot value, and tweak the short-descriptor code to stop setting TTBR0.NOS for Non-Cacheable walks. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-10  iommu/io-pgtable-arm: Support non-coherent stage-2 page tables  (Will Deacon, 1 file, -4/+10)
Commit 9e6ea59f3ff3 ("iommu/io-pgtable: Support non-coherent page tables") added support for non-coherent page-table walks to the Arm IOMMU page-table backends. Unfortunately, it left the stage-2 allocator unchanged, so let's hook that up in the same way. Cc: Bjorn Andersson <bjorn.andersson@linaro.org> Cc: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-10  iommu/io-pgtable-arm: Rationalise TTBRn handling  (Robin Murphy, 1 file, -3/+2)
TTBR1 values have so far been redundant since no users implement any support for split address spaces. Crucially, though, one of the main reasons for wanting to do so is to be able to manage each half entirely independently, e.g. context-switching one set of mappings without disturbing the other. Thus it seems unlikely that tying two tables together in a single io_pgtable_cfg would ever be particularly desirable or useful. Streamline the configs to just a single conceptual TTBR value representing the allocated table. This paves the way for future users to support split address spaces by simply allocating a table and dealing with the detailed TTBRn logistics themselves. Tested-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> [will: Drop change to ttbr value] Signed-off-by: Will Deacon <will@kernel.org>
2019-11-07  iommu/io-pgtable-arm: Rename IOMMU_QCOM_SYS_CACHE and improve doc  (Will Deacon, 1 file, -1/+1)
The 'IOMMU_QCOM_SYS_CACHE' IOMMU protection flag is exposed to all users of the IOMMU API. Despite its name, the idea behind it isn't especially tied to Qualcomm implementations and could conceivably be used by other systems. Rename it to 'IOMMU_SYS_CACHE_ONLY' and update the comment to describe a bit better the idea behind it. Cc: Robin Murphy <robin.murphy@arm.com> Cc: "Isaac J. Manjarres" <isaacm@codeaurora.org> Signed-off-by: Will Deacon <will@kernel.org>
2019-11-04  iommu/io-pgtable-arm: Rationalise MAIR handling  (Robin Murphy, 1 file, -2/+1)
Between VMSAv8-64 and the various 32-bit formats, there is either one 64-bit MAIR or a pair of 32-bit MAIR0/MAIR1 or NMRR/PRRR registers. As such, keeping two 64-bit values in io_pgtable_cfg has always been overkill. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-11-04  iommu/io-pgtable-arm: Simplify level indexing  (Robin Murphy, 1 file, -16/+13)
The nature of the LPAE format means that data->pg_shift is always redundant with data->bits_per_level, since they represent the size of a page and the number of PTEs per page respectively, and the size of a PTE is constant. Thus it works out more efficient to only store the latter, and derive the former via a trivial addition where necessary. Signed-off-by: Robin Murphy <robin.murphy@arm.com> [will: Reworked granule check in iopte_to_paddr()] Signed-off-by: Will Deacon <will@kernel.org>
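A small demo of the relationship being exploited, assuming 8-byte LPAE table entries:

/* Page size and PTEs-per-page differ only by a constant shift, so one of
 * data->pg_shift and data->bits_per_level is enough to store. */
#include <stdio.h>

#define PTE_SHIFT 3 /* ilog2(sizeof(u64)) for an LPAE table entry */

int main(void)
{
    unsigned int bits_per_level;

    /* 9 -> 4K granule, 11 -> 16K, 13 -> 64K */
    for (bits_per_level = 9; bits_per_level <= 13; bits_per_level += 2) {
        unsigned int pg_shift = bits_per_level + PTE_SHIFT;

        printf("bits_per_level=%u -> granule=%luK, ptes/page=%lu\n",
               bits_per_level, (1UL << pg_shift) / 1024,
               1UL << bits_per_level);
    }
    return 0;
}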
2019-11-04  iommu/io-pgtable-arm: Simplify PGD size handling  (Robin Murphy, 1 file, -16/+17)
We use data->pgd_size directly for the one-off allocation and freeing of the top-level table, but otherwise it serves for ARM_LPAE_PGD_IDX() to repeatedly re-calculate the effective number of top-level address bits it represents. Flip this around so we store the form we most commonly need, and derive the lesser-used one instead. This cuts a whole bunch of code out of the map/unmap/iova_to_phys fast-paths. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
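An illustrative sketch of that flip (names are made up for the example, not taken from the driver): store the top-level index bit count and derive the byte size only for the one-off allocation and freeing:

#include <stdio.h>

#define PTE_SIZE 8  /* bytes per LPAE table entry */

static unsigned long pgd_size_from_bits(unsigned int pgd_bits)
{
    /* Rarely needed, so derive it. */
    return (unsigned long)PTE_SIZE << pgd_bits;
}

static unsigned long pgd_index(unsigned long long iova,
                               unsigned int start_lvl_shift,
                               unsigned int pgd_bits)
{
    /* The hot-path computation only needs the bit count, not the size. */
    return (unsigned long)(iova >> start_lvl_shift) & ((1UL << pgd_bits) - 1);
}

int main(void)
{
    unsigned int pgd_bits = 9;  /* e.g. a 4-level, 4K-granule table */

    printf("pgd: %lu bytes, index of 0x8000000000: %lu\n",
           pgd_size_from_bits(pgd_bits),
           pgd_index(0x8000000000ULL, 39, pgd_bits));
    return 0;
}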
2019-11-04  iommu/io-pgtable-arm: Simplify start level lookup  (Robin Murphy, 1 file, -25/+20)
Beyond a couple of allocation-time calculations, data->levels is only ever used to derive the start level. Storing the start level directly leads to a small reduction in object code, which should help eke out a little more efficiency, and slightly more readable source to boot. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-11-04  iommu/io-pgtable-arm: Simplify bounds checks  (Robin Murphy, 1 file, -3/+2)
We're merely checking that the relevant upper bits of each address are all zero, so there are cheaper ways to achieve that. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
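A tiny standalone demo of the cheaper form, assuming the bit position is below 64:

#include <stdint.h>
#include <stdio.h>

/* Equivalent to addr < (1ULL << bits), but needs no constructed 64-bit
 * limit and reads as "no set bits above position 'bits'". */
static int addr_fits(uint64_t addr, unsigned int bits)
{
    return (addr >> bits) == 0;
}

int main(void)
{
    printf("%d %d\n", addr_fits(0xffffffffULL, 32),
           addr_fits(0x100000000ULL, 32));
    return 0;
}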
2019-11-04  iommu/io-pgtable-arm: Rationalise size check  (Robin Murphy, 1 file, -1/+9)
It makes little sense to only validate the requested size after we think we've found a matching block size - making the check up-front is simple, and far more logical than waiting to walk off the bottom of the table to infer that we must have been passed a bogus size to start with. We're missing an equivalent check on the unmap path, so add that as well for consistency. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-11-04  iommu/io-pgtable: Make selftest gubbins consistently __init  (Robin Murphy, 1 file, -6/+7)
The selftests run as an initcall, but the annotation of the various callbacks and data seems to be somewhat arbitrary. Add it consistently for everything related to the selftests. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-11-04  Merge branch 'for-joerg/arm-smmu/fixes' into for-joerg/arm-smmu/updates  (Will Deacon, 1 file, -13/+45)
Merge in ARM SMMU fixes to avoid conflicts in the ARM io-pgtable code.

* for-joerg/arm-smmu/fixes:
  iommu/io-pgtable-arm: Support all Mali configurations
  iommu/io-pgtable-arm: Correct Mali attributes
  iommu/arm-smmu: Free context bitmap in the err path of arm_smmu_init_domain_context
2019-10-01  iommu/io-pgtable: Move some initialization data to .init.rodata  (Christophe JAILLET, 1 file, -3/+3)
The memory used by '__init' functions can be freed once the initialization phase has been performed. Mark some 'static const' arrays defined and used within some '__init' functions as '__initconst', so that the corresponding data can also be discarded. Without '__initconst', the data are put in the .rodata section. With the qualifier, they are put in the .init.rodata section. With gcc 8.3.0, the following changes have been measured:

Without '__initconst':
  section         size
  .rodata         00000720
  .init.rodata    00000018

With '__initconst':
  section         size
  .rodata         00000660
  .init.rodata    00000058

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: Will Deacon <will@kernel.org>
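A kernel-style sketch of the pattern being applied (identifiers here are illustrative, not the selftest's own):

/* Data that is only ever read by __init code can be tagged __initconst so
 * it is discarded along with the init text once boot is complete. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/printk.h>
#include <linux/sizes.h>

static const unsigned long demo_granules[] __initconst = {
    SZ_4K, SZ_16K, SZ_64K,
};

static int __init demo_selftest_init(void)
{
    unsigned int i;

    for (i = 0; i < ARRAY_SIZE(demo_granules); i++)
        pr_info("testing granule %lu\n", demo_granules[i]);

    return 0;
}
subsys_initcall(demo_selftest_init);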
2019-10-01  iommu/io-pgtable-arm: Support all Mali configurations  (Robin Murphy, 1 file, -1/+6)
In principle, Midgard GPUs supporting smaller VA sizes should only require 3-level pagetables, since level 0 only resolves bits 48:40 of the address. However, the kbase driver does not appear to have any notion of a variable start level, and empirically T720 and T820 rapidly blow up with translation faults unless given a full 4-level table, despite only supporting a 33-bit VA size. The 'real' IAS value is still valuable in terms of validating addresses on map/unmap, so tweak the allocator to allow smaller values while still forcing the resultant tables to the full 4 levels. As far as I can test, this should make all known Midgard variants happy. Fixes: d08d42de6432 ("iommu: io-pgtable: Add ARM Mali midgard MMU page table format") Tested-by: Neil Armstrong <narmstrong@baylibre.com> Reviewed-by: Steven Price <steven.price@arm.com> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-10-01  iommu/io-pgtable-arm: Correct Mali attributes  (Robin Murphy, 1 file, -13/+40)
Whilst Midgard's MEMATTR follows a similar principle to the VMSA MAIR, the actual attribute values differ, so although it currently appears to work to some degree, we probably shouldn't be using our standard stage 1 MAIR for that. Instead, generate a reasonable MEMATTR with attribute values borrowed from the kbase driver; at this point we'll be overriding or ignoring pretty much all of the LPAE config, so just implement these Mali details in a dedicated allocator instead of pretending to subclass the standard VMSA format. Fixes: d08d42de6432 ("iommu: io-pgtable: Add ARM Mali midgard MMU page table format") Tested-by: Neil Armstrong <narmstrong@baylibre.com> Reviewed-by: Steven Price <steven.price@arm.com> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-07-29  iommu/io-pgtable: Pass struct iommu_iotlb_gather to ->tlb_add_page()  (Will Deacon, 1 file, -8/+14)
With all the pieces in place, we can finally propagate the iommu_iotlb_gather structure from the call to unmap() down to the IOMMU drivers' implementation of ->tlb_add_page(). Currently everybody ignores it, but the machinery is now there to defer invalidation. Signed-off-by: Will Deacon <will@kernel.org>
2019-07-29  iommu/io-pgtable: Pass struct iommu_iotlb_gather to ->unmap()  (Will Deacon, 1 file, -4/+3)
Update the io-pgtable ->unmap() function to take an iommu_iotlb_gather pointer as an argument, and update the callers as appropriate. Signed-off-by: Will Deacon <will@kernel.org>
2019-07-29  iommu/io-pgtable: Remove unused ->tlb_sync() callback  (Will Deacon, 1 file, -6/+0)
The ->tlb_sync() callback is no longer used, so it can be removed. Signed-off-by: Will Deacon <will@kernel.org>
2019-07-29  iommu/io-pgtable: Replace ->tlb_add_flush() with ->tlb_add_page()  (Will Deacon, 1 file, -6/+5)
The ->tlb_add_flush() callback in the io-pgtable API now looks a bit silly:

  - It takes a size and a granule, which are always the same
  - It takes a 'bool leaf', which is always true
  - It only ever flushes a single page

With that in mind, replace it with an optional ->tlb_add_page() callback that drops the useless parameters. Signed-off-by: Will Deacon <will@kernel.org>
2019-07-29  iommu/io-pgtable-arm: Call ->tlb_flush_walk() and ->tlb_flush_leaf()  (Will Deacon, 1 file, -5/+12)
Now that all IOMMU drivers using the io-pgtable API implement the ->tlb_flush_walk() and ->tlb_flush_leaf() callbacks, we can use them in the io-pgtable code instead of ->tlb_add_flush() immediately followed by ->tlb_sync(). Signed-off-by: Will Deacon <will@kernel.org>
2019-07-24  iommu/io-pgtable: Rename iommu_gather_ops to iommu_flush_ops  (Will Deacon, 1 file, -1/+1)
In preparation for TLB flush gathering in the IOMMU API, rename the iommu_gather_ops structure in io-pgtable to iommu_flush_ops, which better describes its purpose and avoids the potential for confusion between different levels of the API. $ find linux/ -type f -name '*.[ch]' | xargs sed -i 's/gather_ops/flush_ops/g' Signed-off-by: Will Deacon <will@kernel.org>
2019-07-24  iommu/io-pgtable-arm: Remove redundant call to io_pgtable_tlb_sync()  (Will Deacon, 1 file, -1/+0)
Commit b6b65ca20bc9 ("iommu/io-pgtable-arm: Add support for non-strict mode") added an unconditional call to io_pgtable_tlb_sync() immediately after the case where we replace a block entry with a table entry during an unmap() call. This is redundant, since the IOMMU API will call iommu_tlb_sync() on this path and the patch in question mentions this:

| To save having to reason about it too much, make sure the invalidation
| in arm_lpae_split_blk_unmap() just performs its own unconditional sync
| to minimise the window in which we're technically violating the break-
| before-make requirement on a live mapping. This might work out redundant
| with an outer-level sync for strict unmaps, but we'll never be splitting
| blocks on a DMA fastpath anyway.

However, this sync gets in the way of deferred TLB invalidation for leaf entries and is at best a questionable, unproven hack. Remove it. Signed-off-by: Will Deacon <will@kernel.org>
2019-07-01  Merge branch 'for-joerg/arm-smmu/updates' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into arm/smmu  (Joerg Roedel, 1 file, -15/+25)
2019-06-25  iommu/io-pgtable: Support non-coherent page tables  (Bjorn Andersson, 1 file, -3/+9)
Describe the memory related to page table walks as non-cacheable for iommu instances that are not DMA coherent. Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> [will: Use cfg->coherent_walk, fix arm-v7s, ensure outer-shareable for NC] Signed-off-by: Will Deacon <will@kernel.org>
2019-06-25  iommu/io-pgtable: Replace IO_PGTABLE_QUIRK_NO_DMA with specific flag  (Will Deacon, 1 file, -11/+8)
IO_PGTABLE_QUIRK_NO_DMA is a bit of a misnomer, since it's really just an indication of whether or not the page-table walker for the IOMMU is coherent with the CPU caches. Since cache coherency is more than just a quirk, replace the flag with its own field in the io_pgtable_cfg structure. Cc: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
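A hypothetical caller-side sketch of the new field (the helper, the surrounding cfg values and the flush-ops type name are illustrative assumptions; only coherent_walk is the point):

#include <linux/io-pgtable.h>
#include <linux/sizes.h>

/* The IOMMU driver states whether its table walker snoops the CPU caches
 * via cfg->coherent_walk instead of a IO_PGTABLE_QUIRK_NO_DMA quirk bit. */
static struct io_pgtable_ops *demo_alloc_pgtable(struct device *dev,
                                                 bool walker_is_coherent,
                                                 const struct iommu_flush_ops *flush_ops,
                                                 void *cookie)
{
    struct io_pgtable_cfg cfg = {
        .pgsize_bitmap  = SZ_4K | SZ_2M | SZ_1G,    /* example sizes */
        .ias            = 48,
        .oas            = 48,
        .coherent_walk  = walker_is_coherent,       /* was: quirk bit */
        .tlb            = flush_ops,
        .iommu_dev      = dev,
    };

    return alloc_io_pgtable_ops(ARM_64_LPAE_S1, &cfg, cookie);
}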
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234  (Thomas Gleixner, 1 file, -12/+1)
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 503 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Enrico Weigelt <info@metux.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-18  iommu/io-pgtable-arm: Add support to use system cache  (Vivek Gautam, 1 file, -1/+8)
A few Qualcomm platforms, such as sdm845, have an additional outer cache called the System cache, aka Last Level Cache (LLC), that allows non-coherent devices to upgrade to using caching. This cache sits right before the DDR and is tightly coupled with the memory controller. The clients using this cache request their slices from this system cache, make it active, and can then start using it. There is a fundamental assumption that non-coherent devices can't access caches. This change adds an exception where they *can* use some level of cache despite still being non-coherent overall. Coherent devices that use cacheable memory, and the CPU, make use of this system cache by default. Looking at memory types, we have the following:

a) Normal uncached: MAIR 0x44, inner non-cacheable, outer non-cacheable;
b) Normal cached: MAIR 0xff, inner read write-back non-transient, outer read write-back non-transient; attribute setting for coherent I/O devices.

And, for non-coherent I/O devices that can allocate in the system cache, another type gets added:

c) Normal sys-cached: MAIR 0xf4, inner non-cacheable, outer read write-back non-transient

Coherent I/O devices use the system cache by marking the memory as normal cached. Non-coherent I/O devices should mark the memory as normal sys-cached in page tables to use the system cache. Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Vivek Gautam <vivek.gautam@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
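A small standalone demo unpacking those three attribute bytes, on the understanding that in each MAIR attribute byte bits [7:4] encode the outer and bits [3:0] the inner cacheability:

#include <stdio.h>

static void show_attr(const char *name, unsigned int attr)
{
    printf("%-18s 0x%02x: outer=0x%x inner=0x%x\n",
           name, attr, (attr >> 4) & 0xf, attr & 0xf);
}

int main(void)
{
    show_attr("normal uncached", 0x44);     /* NC / NC */
    show_attr("normal cached", 0xff);       /* WB RW-alloc / WB RW-alloc */
    show_attr("normal sys-cached", 0xf4);   /* outer WB RW-alloc / inner NC */
    return 0;
}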
2019-04-12  iommu: io-pgtable: Add ARM Mali midgard MMU page table format  (Rob Herring, 1 file, -22/+69)
The ARM Mali Midgard GPU is similar to standard 64-bit stage 1 page tables, but has a few differences. Add a new format type to represent the format. The input address size is 48 bits and the output address size is 40 bits (and possibly less?). Note that the later Bifrost GPUs follow the standard 64-bit stage 1 format.

The differences compared to the 64-bit stage 1 format are:
  - The 3rd level page entry bits are 0x1 instead of 0x3 for page entries.
  - The access flags are not read-only and unprivileged, but read and write. This is similar to stage 2 entries, but the memory attributes field matches stage 1, being an index.
  - The nG bit is not set by the vendor driver. This one didn't seem to matter, but we'll keep it aligned to the vendor driver.

Cc: Will Deacon <will.deacon@arm.com> Acked-by: Robin Murphy <robin.murphy@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: iommu@lists.linux-foundation.org Acked-by: Alyssa Rosenzweig <alyssa@rosenzweig.io> Acked-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Rob Herring <robh@kernel.org> Link: https://patchwork.freedesktop.org/patch/msgid/20190409205427.6943-2-robh@kernel.org
2019-02-11  iommu: Allow io-pgtable to be used outside of drivers/iommu/  (Rob Herring, 1 file, -2/+1)
Move io-pgtable.h to include/linux/ and export alloc_io_pgtable_ops and free_io_pgtable_ops. This enables drivers outside drivers/iommu/ to use the page table library. Specifically, some ARM Mali GPUs use the ARM page table formats. Cc: Will Deacon <will.deacon@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Matthias Brugger <matthias.bgg@gmail.com> Cc: Rob Clark <robdclark@gmail.com> Cc: linux-arm-kernel@lists.infradead.org Cc: iommu@lists.linux-foundation.org Cc: linux-mediatek@lists.infradead.org Cc: linux-arm-msm@vger.kernel.org Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-10-01  iommu/io-pgtable-arm: Add support for non-strict mode  (Zhen Lei, 1 file, -2/+12)
Non-strict mode is simply a case of skipping 'regular' leaf TLBIs, since the sync is already factored out into ops->iotlb_sync at the core API level. Non-leaf invalidations where we change the page table structure itself still have to be issued synchronously in order to maintain walk caches correctly. To save having to reason about it too much, make sure the invalidation in arm_lpae_split_blk_unmap() just performs its own unconditional sync to minimise the window in which we're technically violating the break- before-make requirement on a live mapping. This might work out redundant with an outer-level sync for strict unmaps, but we'll never be splitting blocks on a DMA fastpath anyway. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> [rm: tweak comment, commit message, split_blk_unmap logic and barriers] Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-10-01  iommu/io-pgtable-arm: Fix race handling in split_blk_unmap()  (Robin Murphy, 1 file, -5/+4)
In removing the pagetable-wide lock, we gained the possibility of the vanishingly unlikely case where we have a race between two concurrent unmappers splitting the same block entry. The logic to handle this is fairly straightforward - whoever loses the race frees their partial next-level table and instead dereferences the winner's newly-installed entry in order to fall back to a regular unmap, which intentionally echoes the pre-existing case of recursively splitting a 1GB block down to 4KB pages by installing a full table of 2MB blocks first. Unfortunately, the chump who implemented that logic failed to update the condition check for that fallback, meaning that if said race occurs at the last level (where the loser's unmap_idx is valid) then the unmap won't actually happen. Fix that to properly account for both the race and recursive cases. Fixes: 2c3d273eabe8 ("iommu/io-pgtable-arm: Support lockless operation") Signed-off-by: Robin Murphy <robin.murphy@arm.com> [will: re-jig control flow to avoid duplicate cmpxchg test] Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-07-26  iommu/io-pgtable-arm: Fix pgtable allocation in selftest  (Jean-Philippe Brucker, 1 file, -1/+2)
Commit 4b123757eeaa ("iommu/io-pgtable-arm: Make allocations NUMA-aware") added a NUMA hint to page table allocation, but the pgtable selftest doesn't provide an SMMU device parameter. Since dev_to_node doesn't accept a NULL argument, add a special case for selftest. Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
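A kernel-style sketch of that special case (illustrative helper, not the exact hunk): with no SMMU device, as in the selftest, fall back to NUMA_NO_NODE rather than passing NULL to dev_to_node():

#include <linux/device.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/numa.h>

static void *demo_alloc_table(struct device *dev, size_t size, gfp_t gfp)
{
    /* The selftest passes no device, so don't dereference NULL. */
    int node = dev ? dev_to_node(dev) : NUMA_NO_NODE;
    struct page *p = alloc_pages_node(node, gfp | __GFP_ZERO,
                                      get_order(size));

    return p ? page_address(p) : NULL;
}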
2018-05-29  iommu/io-pgtable-arm: Make allocations NUMA-aware  (Robin Murphy, 1 file, -4/+9)
We would generally expect pagetables to be read by the IOMMU more than written by the CPU, so in NUMA systems it makes sense to locate them close to the former and avoid cross-node pagetable walks if at all possible. As it turns out, we already have a handle on the IOMMU device for the sake of coherency management, so it's trivial to grab the appropriate NUMA node when allocating new pagetable pages. Note that we drop the semantics of alloc_pages_exact(), but that's fine since they have never been necessary: the only time we're allocating more than one page is for stage 2 top-level concatenation, but since that is based on the number of IPA bits, the size is always some exact power of two anyway. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-05-03  iommu/io-pgtable-arm: Use for_each_set_bit to simplify code  (YueHaibing, 1 file, -4/+1)
We can use for_each_set_bit() to simplify code slightly in the ARM io-pgtable self tests while unmapping. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
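A minimal kernel-style example of the idiom, with an invented helper and bitmap just for illustration:

#include <linux/bitops.h>
#include <linux/printk.h>

/* Iterate only over the set bits rather than open-coding a find-lowest-bit
 * and clear-bit loop. */
static void demo_walk_pgsizes(unsigned long pgsize_bitmap)
{
    unsigned int i;

    for_each_set_bit(i, &pgsize_bitmap, BITS_PER_LONG)
        pr_info("supported page size: %lu\n", 1UL << i);
}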
2018-03-29  Merge branches 'x86/amd', 'x86/vt-d', 'arm/rockchip', 'arm/omap', 'arm/mediatek', 'arm/exynos', 'arm/renesas', 'arm/smmu' and 'core' into next  (Joerg Roedel, 1 file, -30/+61)
2018-03-29  iommu/io-pgtable-arm: Avoid warning with 32-bit phys_addr_t  (Robin Murphy, 1 file, -1/+1)
It's not entirely unreasonable for io-pgtable-arm to be built for configurations with 32-bit phys_addr_t, where the compiler rightly raises a warning about the 36-bit shift. That particular code path should never actually *run* on those systems, but we still want it to compile cleanly, which is easily done by using an unambiguous u64 as the intermediate type instead. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-03-27  iommu/io-pgtable-arm: Support 52-bit physical address  (Robin Murphy, 1 file, -18/+49)
Bring io-pgtable-arm in line with the ARMv8.2-LPA feature allowing 52-bit physical addresses when using the 64KB translation granule. This will be supported by SMMUv3.1. Tested-by: Nate Watterson <nwatters@codeaurora.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-02-13  iommu/io-pgtable: Use size_t return type for all foo_unmap  (Vivek Gautam, 1 file, -12/+12)
Unmap returns a size_t all throughout the IOMMU framework. Make io-pgtable match this convention. Moreover, there isn't a need to have a signed int return type as we return 0 in case of failures. Signed-off-by: Vivek Gautam <vivek.gautam@codeaurora.org> Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-10-02  iommu/io-pgtable-arm: Convert to IOMMU API TLB sync  (Robin Murphy, 1 file, -6/+1)
Now that the core API issues its own post-unmap TLB sync call, push that operation out from the io-pgtable-arm internals into the users. For now, we leave the invalidation implicit in the unmap operation, since none of the current users would benefit much from any change to that. CC: Magnus Damm <damm+renesas@opensource.se> CC: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-07-20  iommu/io-pgtable: Sanitise map/unmap addresses  (Robin Murphy, 1 file, -0/+7)
It may be an egregious error to attempt to use addresses outside the range of the pagetable format, but that still doesn't mean we should merrily wreak havoc by silently mapping/unmapping whatever truncated portions of them might happen to correspond to real addresses. Add some up-front checks to sanitise our inputs so that buggy callers don't invite potential memory corruption. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
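A sketch of the kind of up-front guard described (illustrative names; assumes ias/oas are below 64, as they are for the LPAE formats): refuse addresses that don't fit the configured sizes instead of silently truncating them.

#include <linux/bug.h>
#include <linux/errno.h>
#include <linux/types.h>

static int demo_check_range(unsigned long iova, phys_addr_t paddr,
                            unsigned int ias, unsigned int oas)
{
    /* Reject anything with bits set above the input/output address sizes
     * before it can be truncated further down the map/unmap path. */
    if (WARN_ON(iova >= (1ULL << ias) || paddr >= (1ULL << oas)))
        return -ERANGE;

    return 0;
}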
2017-06-23  iommu/io-pgtable-arm: Use dma_wmb() instead of wmb() when publishing table  (Will Deacon, 1 file, -2/+6)
When writing a new table entry, we must ensure that the contents of the table is made visible to the SMMU page table walker before the updated table entry itself. This is currently achieved using wmb(), which expands to an expensive and unnecessary DSB instruction. Ideally, we'd just use cmpxchg64_release when writing the table entry, but this doesn't have memory ordering semantics on !SMP systems. Instead, use dma_wmb(), which emits DMB OSHST. Strictly speaking, this does more than we require (since it targets the outer-shareable domain), but it's likely to be significantly faster than the DSB approach. Reported-by: Linu Cherian <linu.cherian@cavium.com> Suggested-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
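A kernel-style sketch of the publishing order being discussed (helper names are illustrative, and the coherent-walker case is assumed; a non-coherent walker would also need a DMA cache sync):

#include <asm/barrier.h>
#include <linux/compiler.h>
#include <linux/string.h>
#include <linux/types.h>

static void demo_install_table(u64 *parent_entry, u64 *new_table,
                               size_t table_size, u64 table_desc)
{
    /* 1. Fully initialise the next-level table first. */
    memset(new_table, 0, table_size);

    /* 2. DMB OSHST: make the table contents observable by the walker
     *    before the entry that points at it, without a full DSB. */
    dma_wmb();

    /* 3. Publish the table entry. */
    WRITE_ONCE(*parent_entry, table_desc);
}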
2017-06-23  iommu/io-pgtable-arm: Support lockless operation  (Robin Murphy, 1 file, -15/+57)
For parallel I/O with multiple concurrent threads servicing the same device (or devices, if several share a domain), serialising page table updates becomes a massive bottleneck. On reflection, though, we don't strictly need to do that - for valid IOMMU API usage, there are in fact only two races that we need to guard against: multiple map requests for different blocks within the same region, when the intermediate-level table for that region does not yet exist; and multiple unmaps of different parts of the same block entry. Both of those are fairly easily solved by using a cmpxchg to install the new table, such that if we then find that someone else's table got there first, we can simply free ours and continue. Make the requisite changes such that we can withstand being called without the caller maintaining a lock. In theory, this opens up a few corners in which wildly misbehaving callers making nonsensical overlapping requests might lead to crashes instead of just unpredictable results, but correct code really does not deserve to pay a significant performance cost for the sake of masking bugs in theoretical broken code. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
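An illustrative sketch of the lockless install pattern (helper names, the zero "empty" encoding and the free callback are assumptions for the example): whoever loses the race frees their table and carries on with the winner's entry.

#include <linux/atomic.h>
#include <linux/types.h>

static u64 demo_install_table(u64 *entryp, u64 new_desc, u64 *new_table,
                              void (*free_table)(u64 *table))
{
    /* Try to swap our new table descriptor into an empty entry. */
    u64 old = cmpxchg64_relaxed(entryp, 0ULL, new_desc);

    if (old) {
        /* Lost the race: another thread's table is already installed;
         * free ours and let the caller walk the existing entry instead. */
        free_table(new_table);
        return old;
    }

    return new_desc;
}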
2017-06-23  iommu/io-pgtable: Introduce explicit coherency  (Robin Murphy, 1 file, -5/+6)
Once we remove the serialising spinlock, a potential race opens up for non-coherent IOMMUs whereby a caller of .map() can be sure that cache maintenance has been performed on their new PTE, but will have no guarantee that such maintenance for table entries above it has actually completed (e.g. if another CPU took an interrupt immediately after writing the table entry, but before initiating the DMA sync). Handling this race safely will add some potentially non-trivial overhead to installing a table entry, which we would much rather avoid on coherent systems where it will be unnecessary, and where we are striving to minimise latency by removing the locking in the first place. To that end, let's introduce an explicit notion of cache-coherency to io-pgtable, such that we will be able to avoid penalising IOMMUs which know enough to know when they are coherent. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-06-23  iommu/io-pgtable-arm: Improve split_blk_unmap  (Robin Murphy, 1 file, -47/+67)
The current split_blk_unmap implementation suffers from some inscrutable pointer trickery for creating the tables to replace the block entry, but more than that it also suffers from hideous inefficiency. For example, the most pathological case of unmapping a level 3 page from a level 1 block will allocate 513 lower-level tables to remap the entire block at page granularity, when only 2 are actually needed (the rest can be covered by level 2 block entries). Also, we would like to be able to relax the spinlock requirement in future, for which the roll-back-and-try-again logic for race resolution would be pretty hideous under the current paradigm. Both issues can be resolved most neatly by turning things sideways: instead of repeatedly recursing into __arm_lpae_map() to build up an entire new sub-table depth-first, we can directly replace the block entry with a next-level table of block/page entries, then repeat by unmapping at the next level if necessary. With a little refactoring of some helper functions, the code ends up not much bigger than before, but considerably easier to follow and to adapt in future. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-04-06  iommu/io-pgtable-arm: Avoid shift overflow in block size  (Robin Murphy, 1 file, -1/+1)
The recursive nature of __arm_lpae_{map,unmap}() means that ARM_LPAE_BLOCK_SIZE() is evaluated for every level, including those where block mappings aren't possible. This in itself is harmless enough, as we will only ever be called with valid sizes from the pgsize_bitmap, and thus always recurse down past any imaginary block sizes. The only problem is that most of those imaginary sizes overflow the type used for the calculation, and thus trigger warnings under UBsan:

[ 63.020939] ================================================================================
[ 63.021284] UBSAN: Undefined behaviour in drivers/iommu/io-pgtable-arm.c:312:22
[ 63.021602] shift exponent 39 is too large for 32-bit type 'int'
[ 63.021909] CPU: 0 PID: 1119 Comm: lkvm Not tainted 4.7.0-rc3+ #819
[ 63.022163] Hardware name: FVP Base (DT)
[ 63.022345] Call trace:
[ 63.022629] [<ffffff900808f258>] dump_backtrace+0x0/0x3a8
[ 63.022975] [<ffffff900808f614>] show_stack+0x14/0x20
[ 63.023294] [<ffffff90086bc9dc>] dump_stack+0x104/0x148
[ 63.023609] [<ffffff9008713ce8>] ubsan_epilogue+0x18/0x68
[ 63.023956] [<ffffff9008714410>] __ubsan_handle_shift_out_of_bounds+0x18c/0x1bc
[ 63.024365] [<ffffff900890fcb0>] __arm_lpae_map+0x720/0xae0
[ 63.024732] [<ffffff9008910170>] arm_lpae_map+0x100/0x190
[ 63.025049] [<ffffff90089183d8>] arm_smmu_map+0x78/0xc8
[ 63.025390] [<ffffff9008906c18>] iommu_map+0x130/0x230
[ 63.025763] [<ffffff9008bf7564>] vfio_iommu_type1_attach_group+0x4bc/0xa00
[ 63.026156] [<ffffff9008bf3c78>] vfio_fops_unl_ioctl+0x320/0x580
[ 63.026515] [<ffffff9008377420>] do_vfs_ioctl+0x140/0xd28
[ 63.026858] [<ffffff9008378094>] SyS_ioctl+0x8c/0xa0
[ 63.027179] [<ffffff9008086e70>] el0_svc_naked+0x24/0x28
[ 63.027412] ================================================================================

Perform the shift in a 64-bit type to prevent the theoretical overflow and keep the peace. As it turns out, this generates identical code for 32-bit ARM, and marginally shorter AArch64 code, so it's good all round. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
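A standalone demo of the overflow being avoided, with an example shift amount:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    unsigned int shift = 39;    /* e.g. an imaginary level-0 "block" size */

    /* (1 << shift) here would be undefined behaviour for a 32-bit int,
     * exactly what UBSAN reports above; doing it in a 64-bit type is
     * always well defined. */
    uint64_t block_size = 1ULL << shift;

    printf("block size: %llu bytes\n", (unsigned long long)block_size);
    return 0;
}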