2019-08-22  iommu/arm-smmu-v3: Avoid locking on invalidation path when not using ATS  [Will Deacon, 1 file, -5/+32]

When ATS is not in use, we can avoid taking the 'devices_lock' for the domain on the invalidation path by simply caching the number of ATS masters currently attached. The fiddly part is handling a concurrent ->attach() of an ATS-enabled master to a domain that is being invalidated, but we can handle this using an 'smp_mb()' to ensure that our check of the count is ordered after completion of our prior TLB invalidation. This also makes our ->attach() and ->detach() flows symmetric wrt ATS interactions.

Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
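The counting scheme described above can be sketched as follows; this is a minimal illustration rather than the driver's code, and the issue_*() helpers are hypothetical stand-ins:

    /* Count of ATS-enabled masters attached to the domain */
    struct arm_smmu_domain {
            atomic_t nr_ats_masters;
            /* ... */
    };

    static void inv_domain(struct arm_smmu_domain *smmu_domain)
    {
            issue_tlb_inv_and_sync(smmu_domain);    /* hypothetical helper */

            /*
             * Order the read of nr_ats_masters after completion of the
             * TLB invalidation above: a concurrent ->attach() of an ATS
             * master either observes that invalidation or is observed
             * by this check.
             */
            smp_mb();
            if (atomic_read(&smmu_domain->nr_ats_masters))
                    issue_atc_inv(smmu_domain);     /* hypothetical helper */
    }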

2019-08-21  iommu/arm-smmu-v3: Fix ATC invalidation ordering wrt main TLBs  [Will Deacon, 1 file, -7/+9]

When invalidating the ATC for a PCIe endpoint using ATS, we must take care to complete invalidation of the main SMMU TLBs beforehand, otherwise the device could immediately repopulate its ATC with stale translations. Hooking the ATC invalidation into ->unmap() as we currently do does the exact opposite: it ensures that the ATC is invalidated *before* the main TLBs, which is bogus. Move ATC invalidation into the actual (leaf) invalidation routines so that it is always called after completing main TLB invalidation.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-21  iommu/arm-smmu-v3: Rework enabling/disabling of ATS for PCI masters  [Will Deacon, 1 file, -19/+28]

To prevent any potential issues arising from speculative Address Translation Requests from an ATS-enabled PCIe endpoint, rework our ATS enabling/disabling logic so that we enable ATS at the SMMU before we enable it at the endpoint, and disable things in the opposite order.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
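As a rough sketch of that ordering (struct smmu_device and the smmu_ste_*() helpers are hypothetical stand-ins for the driver's stream table manipulation; pci_enable_ats()/pci_disable_ats() are the standard PCI core calls):

    #include <linux/pci.h>

    static int attach_ats(struct smmu_device *smmu, struct pci_dev *pdev, int stu)
    {
            smmu_ste_enable_ats(smmu, pdev);        /* SMMU side first... */
            return pci_enable_ats(pdev, stu);       /* ...then the endpoint */
    }

    static void detach_ats(struct smmu_device *smmu, struct pci_dev *pdev)
    {
            pci_disable_ats(pdev);                  /* endpoint first... */
            smmu_ste_disable_ats(smmu, pdev);       /* ...then the SMMU */
    }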

2019-08-21  iommu/arm-smmu-v3: Don't issue CMD_SYNC for zero-length invalidations  [Will Deacon, 1 file, -0/+3]

Calling arm_smmu_tlb_inv_range() with a size of zero, perhaps due to an empty 'iommu_iotlb_gather' structure, should be a NOP. Elide the CMD_SYNC when there is no invalidation to be performed.

Signed-off-by: Will Deacon <will@kernel.org>
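The change amounts to an early return before any commands are queued, along these lines (a sketch, not the exact diff):

    static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
                                       size_t granule, bool leaf, void *cookie)
    {
            if (!size)
                    return; /* nothing to invalidate, so no CMD_SYNC either */

            /* ... build and submit TLBI commands, then CMD_SYNC ... */
    }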

2019-08-21  iommu/arm-smmu-v3: Remove boolean bitfield for 'ats_enabled' flag  [Will Deacon, 1 file, -1/+1]

There's really no need for this to be a bitfield, particularly as we don't have bitwise addressing on arm64.

Signed-off-by: Will Deacon <will@kernel.org>

2019-08-21  iommu/arm-smmu-v3: Disable detection of ATS and PRI  [Will Deacon, 1 file, -0/+2]

Detecting the ATS capability of the SMMU at probe time introduces a spinlock into the ->unmap() fast path, even when ATS is not actually in use. Furthermore, the ATC invalidation that exists is broken, as it occurs before invalidation of the main SMMU TLB which leaves a window where the ATC can be repopulated with stale entries. Given that ATS is both a new feature and a specialist sport, disable it for now whilst we fix it properly in subsequent patches. Since PRI requires ATS, disable that too.

Cc: <stable@vger.kernel.org>
Fixes: 9ce27afc0830 ("iommu/arm-smmu-v3: Add support for PCI ATS")
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-21  iommu/arm-smmu-v3: Document ordering guarantees of command insertion  [Will Deacon, 1 file, -0/+16]

It turns out that we've always relied on some subtle ordering guarantees when inserting commands into the SMMUv3 command queue. With the recent changes to elide locking when possible, these guarantees become more subtle and even more important. Add a comment documenting the barrier semantics of command insertion so that we don't have to derive the behaviour from scratch each time it comes up on the list.

Signed-off-by: Will Deacon <will@kernel.org>

2019-08-20  iommu/arm-smmu: Ensure 64-bit I/O accessors are available on 32-bit CPU  [Robin Murphy, 2 files, -1/+1]

As part of the grand SMMU driver refactoring effort, the I/O register accessors were moved into 'arm-smmu.h' in commit 6d7dff62afb0 ("iommu/arm-smmu: Move Secure access quirk to implementation"). On 32-bit architectures (such as ARM), the 64-bit accessors are defined in 'linux/io-64-nonatomic-hi-lo.h', so include this header to fix the build.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-20  iommu/arm-smmu: Make private implementation details static  [Will Deacon, 1 file, -5/+5]

Many of the device-specific implementation details in 'arm-smmu-impl.c' are exposed to other compilation units. Whilst we may require this in the future, let's make it all 'static' for now so that we can expose things on a case-by-case basis.

Signed-off-by: Will Deacon <will@kernel.org>

2019-08-19  iommu/arm-smmu: Add context init implementation hook  [Robin Murphy, 3 files, -48/+87]

Allocating and initialising a context for a domain is another point where certain implementations are known to want special behaviour. Currently the other half of the Cavium workaround comes into play here, so let's finish the job to get the whole thing right out of the way.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-19  iommu/arm-smmu: Add reset implementation hook  [Robin Murphy, 3 files, -35/+54]

Reset is an activity rife with implementation-defined poking. Add a corresponding hook, and use it to encapsulate the existing MMU-500 details.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-19  iommu/arm-smmu: Add configuration implementation hook  [Robin Murphy, 3 files, -14/+38]

Probing the ID registers and setting up the SMMU configuration is an area where overrides and workarounds may well be needed. Indeed, the Cavium workaround detection lives there at the moment, so let's break that out.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-19  iommu/arm-smmu: Move Secure access quirk to implementation  [Robin Murphy, 3 files, -99/+114]

Move detection of the Secure access quirk to its new home, trimming it down in the process - time has proven that boolean DT flags are neither ideal nor necessarily sufficient, so it's highly unlikely we'll ever add more, let alone enough to justify the frankly overengineered parsing machinery.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-19  iommu/arm-smmu: Add implementation infrastructure  [Robin Murphy, 5 files, -81/+108]

Add some nascent infrastructure for handling implementation-specific details outside the flow of the architectural code. This will allow us to keep mutually-incompatible vendor-specific hooks in their own files where the respective interested parties can maintain them with minimal chance of conflicts. As somewhat of a template, we'll start with a general place to collect the relatively trivial existing quirks.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-19  iommu/arm-smmu: Rename arm-smmu-regs.h  [Robin Murphy, 3 files, -5/+5]

We're about to start using it for more than just register definitions, so generalise the name.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-19  iommu/arm-smmu: Abstract GR0 accesses  [Robin Murphy, 1 file, -48/+58]

Clean up the remaining accesses to GR0 registers, so that everything is now neatly abstracted. This folds up the Non-Secure alias quirk as the first step towards moving it out of the way entirely. Although GR0 does technically contain some 64-bit registers (sGFAR and the weird SMMUv2 HYPC and MONC stuff), they're not ones we have any need to access.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-19  iommu/arm-smmu: Abstract context bank accesses  [Robin Murphy, 1 file, -65/+73]

Context bank accesses are fiddly enough to deserve a number of extra helpers to keep the callsites looking sane, even though there are only one or two of each.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-19  iommu/arm-smmu: Abstract GR1 accesses  [Robin Murphy, 1 file, -7/+27]

Introduce some register access abstractions which we will later use to encapsulate various quirks. GR1 is the easiest page to start with.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-19  iommu/arm-smmu: Get rid of weird "atomic" write  [Robin Murphy, 1 file, -16/+7]

The smmu_write_atomic_lq oddity made some sense when the context format was effectively tied to CONFIG_64BIT, but these days it's simpler to just pick an explicit access size based on the format for the one-and-a-half times we actually care.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-19  iommu/arm-smmu: Split arm_smmu_tlb_inv_range_nosync()  [Robin Murphy, 1 file, -28/+36]

Since we now use separate iommu_gather_ops for stage 1 and stage 2 contexts, we may as well divide up the monolithic callback into its respective stage 1 and stage 2 parts.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-19  iommu/arm-smmu: Rework cb_base handling  [Robin Murphy, 1 file, -10/+15]

To keep register-access quirks manageable, we want to structure things to avoid needing too many individual overrides. It seems fairly clean to have a single interface which handles both global and context registers in terms of the architectural pages, so the first preparatory step is to rework cb_base into a page number rather than an absolute address.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-19  iommu/arm-smmu: Convert context bank registers to bitfields  [Robin Murphy, 3 files, -56/+59]

Finish the final part of the job, once again updating some names to match the current spec.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-19  iommu/arm-smmu: Convert GR1 registers to bitfields  [Robin Murphy, 2 files, -28/+23]

As for GR0, use the bitfield helpers to make GR1 usage a little cleaner, and use it as an opportunity to audit and tidy the definitions. This tweaks the handling of CBAR types to match what we did for S2CR a while back, and fixes a couple of names which didn't quite match the latest architecture spec (IHI0062D.c).

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-19  iommu/arm-smmu: Convert GR0 registers to bitfields  [Robin Murphy, 2 files, -93/+84]

FIELD_PREP remains a terrible name, but the overall simplification will make further work on this stuff that much more manageable. This also serves as an audit of the header, wherein we can impose a consistent grouping and ordering of the offset and field definitions.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
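For readers unfamiliar with the idiom, the conversion moves register packing onto the <linux/bitfield.h> helpers, roughly like this (the register and field names below are illustrative, not the driver's):

    #include <linux/bitfield.h>
    #include <linux/bits.h>

    #define EXAMPLE_CBAR_TYPE       GENMASK(17, 16)
    #define EXAMPLE_CBAR_VMID       GENMASK(7, 0)

    static u32 pack_cbar(u32 type, u32 vmid)
    {
            /* FIELD_PREP() shifts each value into its field's position */
            return FIELD_PREP(EXAMPLE_CBAR_TYPE, type) |
                   FIELD_PREP(EXAMPLE_CBAR_VMID, vmid);
    }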

2019-08-19  iommu/qcom: Mask TLBI addresses correctly  [Robin Murphy, 1 file, -1/+1]

As with arm-smmu from whence this code was borrowed, the IOVAs passed in here happen to be at least page-aligned anyway, but still; oh dear.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-08-19  iommu/arm-smmu: Mask TLBI address correctly  [Robin Murphy, 1 file, -1/+1]

The less said about "~12UL" the better. Oh dear. We get away with it due to calling constraints that mean IOVAs are implicitly at least page-aligned to begin with, but still; oh dear.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
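To see why "~12UL" is wrong: it clears only bits 2 and 3 of the address, whereas the intent is to clear the low 12 bits. A quick illustration (plain C, values hypothetical):

    unsigned long iova = 0x12345678UL;

    unsigned long bad  = iova & ~12UL;     /* 0x12345670: low bits survive */
    unsigned long good = iova & ~0xfffUL;  /* 0x12345000: page-aligned */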

2019-08-08  iommu/arm-smmu-v3: Defer TLB invalidation until ->iotlb_sync()  [Will Deacon, 1 file, -29/+42]

Update the iommu_iotlb_gather structure passed to ->tlb_add_page() and use this information to defer all TLB invalidation until ->iotlb_sync(). This drastically reduces contention on the command queue, since we can insert our commands in batches rather than one-by-one.

Tested-by: Ganapatrao Kulkarni <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
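In outline, ->tlb_add_page() now only records the page in the gather structure, and the batched commands are emitted from ->iotlb_sync(); a sketch (the emit_*() helpers are hypothetical):

    static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
                                             unsigned long iova, size_t granule,
                                             void *cookie)
    {
            struct arm_smmu_domain *smmu_domain = cookie;

            /* Just remember the page; no commands are issued here */
            iommu_iotlb_gather_add_page(&smmu_domain->domain, gather,
                                        iova, granule);
    }

    static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
                                    struct iommu_iotlb_gather *gather)
    {
            /* One batched range invalidation and a single CMD_SYNC */
            emit_tlb_inv_range(domain, gather->start, gather->end,
                               gather->pgsize);
            emit_cmd_sync(domain);
    }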

2019-08-08  iommu/arm-smmu-v3: Reduce contention during command-queue insertion  [Will Deacon, 1 file, -144/+533]

The SMMU command queue is a bottleneck in large systems, thanks to the spin_lock which serialises accesses from all CPUs to the single queue supported by the hardware. Attempt to improve this situation by moving to a new algorithm for inserting commands into the queue, which is lock-free on the fast-path.

Tested-by: Ganapatrao Kulkarni <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
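The full algorithm is subtle (it tracks wrap bits, elects an owner to update the hardware prod register, and handles a full queue), but the heart of the fast path is lock-free reservation of queue space via compare-and-swap on the producer index; a deliberately simplified sketch:

    /* Reserve one slot; the caller then writes its command into that
     * slot without holding any lock. Simplified: no wrap handling. */
    static int queue_reserve_slot(atomic_t *prod)
    {
            int old, cur = atomic_read(prod);

            do {
                    old = cur;
                    cur = atomic_cmpxchg(prod, old, old + 1);
            } while (cur != old);

            return old;     /* index of the slot we now own */
    }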

2019-07-29  iommu/arm-smmu-v3: Operate directly on low-level queue where possible  [Will Deacon, 1 file, -27/+31]

In preparation for rewriting the command queue insertion code to use a new algorithm, rework many of our queue macro accessors and manipulation functions so that they operate on the arm_smmu_ll_queue structure where possible. This will allow us to call these helpers on local variables without having to construct a full-blown arm_smmu_queue on the stack. No functional change.

Tested-by: Ganapatrao Kulkarni <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-07-29  iommu/arm-smmu-v3: Move low-level queue fields out of arm_smmu_queue  [Will Deacon, 1 file, -41/+47]

In preparation for rewriting the command queue insertion code to use a new algorithm, introduce a new arm_smmu_ll_queue structure which contains only the information necessary to perform queue arithmetic for a queue and will later be extended so that we can perform complex atomic manipulation on some of the fields. No functional change.

Tested-by: Ganapatrao Kulkarni <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
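Conceptually, the split looks like this (a sketch; the real structure packs prod and cons into a union so they can also be manipulated as a single 64-bit value):

    struct arm_smmu_ll_queue {
            u32     prod;           /* software producer index */
            u32     cons;           /* software consumer index */
            u32     max_n_shift;    /* log2 of the number of entries */
    };

    struct arm_smmu_queue {
            struct arm_smmu_ll_queue llq;   /* all queue arithmetic uses this */
            /* DMA buffer, prod/cons MMIO registers, entry size, ... */
    };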

2019-07-29  iommu/arm-smmu-v3: Drop unused 'q' argument from Q_OVF macro  [Will Deacon, 1 file, -6/+6]

The Q_OVF macro doesn't need to access the arm_smmu_queue structure, so drop the unused macro argument. No functional change.

Tested-by: Ganapatrao Kulkarni <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>

2019-07-29  iommu/arm-smmu-v3: Separate s/w and h/w views of prod and cons indexes  [Will Deacon, 1 file, -14/+22]

In preparation for rewriting the command queue insertion code to use a new algorithm, separate the software and hardware views of the prod and cons indexes so that manipulating the software state doesn't automatically update the hardware state at the same time. No functional change.

Tested-by: Ganapatrao Kulkarni <gkulkarni@marvell.com>
Signed-off-by: Will Deacon <will@kernel.org>
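In other words, helpers bump the software copy only, and publishing to the hardware register becomes an explicit, separate step; a sketch (names echo the driver but are not verbatim):

    static void queue_inc_cons(struct arm_smmu_ll_queue *llq)
    {
            llq->cons++;    /* software state only; hardware unchanged */
    }

    static void queue_sync_cons_out(struct arm_smmu_queue *q)
    {
            /* Ensure our reads of queue entries complete before the
             * hardware is told it may overwrite them */
            mb();
            writel_relaxed(q->llq.cons, q->cons_reg);
    }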

2019-07-29  iommu/io-pgtable: Pass struct iommu_iotlb_gather to ->tlb_add_page()  [Will Deacon, 8 files, -29/+47]

With all the pieces in place, we can finally propagate the iommu_iotlb_gather structure from the call to unmap() down to the IOMMU drivers' implementation of ->tlb_add_page(). Currently everybody ignores it, but the machinery is now there to defer invalidation.

Signed-off-by: Will Deacon <will@kernel.org>

2019-07-29  iommu/io-pgtable: Pass struct iommu_iotlb_gather to ->unmap()  [Will Deacon, 10 files, -15/+16]

Update the io-pgtable ->unmap() function to take an iommu_iotlb_gather pointer as an argument, and update the callers as appropriate.

Signed-off-by: Will Deacon <will@kernel.org>

2019-07-29  iommu/io-pgtable: Remove unused ->tlb_sync() callback  [Will Deacon, 10 files, -54/+16]

The ->tlb_sync() callback is no longer used, so it can be removed.

Signed-off-by: Will Deacon <will@kernel.org>

2019-07-29  iommu/io-pgtable: Replace ->tlb_add_flush() with ->tlb_add_page()  [Will Deacon, 10 files, -71/+105]

The ->tlb_add_flush() callback in the io-pgtable API now looks a bit silly:

  - It takes a size and a granule, which are always the same
  - It takes a 'bool leaf', which is always true
  - It only ever flushes a single page

With that in mind, replace it with an optional ->tlb_add_page() callback that drops the useless parameters.

Signed-off-by: Will Deacon <will@kernel.org>
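The shape of the change, approximately (the signatures paraphrase the io-pgtable API as of this series; both members are shown side by side purely for contrast):

    struct iommu_flush_ops {
            /* Old: size == granule and leaf == true at every call site */
            void (*tlb_add_flush)(unsigned long iova, size_t size,
                                  size_t granule, bool leaf, void *cookie);

            /* New: optional, and takes the gather token instead */
            void (*tlb_add_page)(struct iommu_iotlb_gather *gather,
                                 unsigned long iova, size_t granule,
                                 void *cookie);
            /* ... */
    };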

2019-07-29  iommu/io-pgtable-arm: Call ->tlb_flush_walk() and ->tlb_flush_leaf()  [Will Deacon, 3 files, -15/+41]

Now that all IOMMU drivers using the io-pgtable API implement the ->tlb_flush_walk() and ->tlb_flush_leaf() callbacks, we can use them in the io-pgtable code instead of ->tlb_add_flush() immediately followed by ->tlb_sync().

Signed-off-by: Will Deacon <will@kernel.org>

2019-07-29  iommu/io-pgtable: Hook up ->tlb_flush_walk() and ->tlb_flush_leaf() in drivers  [Will Deacon, 7 files, -0/+116]

Hook up ->tlb_flush_walk() and ->tlb_flush_leaf() in drivers using the io-pgtable API so that we can start making use of them in the page-table code. For now, they can just wrap the implementations of ->tlb_add_flush and ->tlb_sync pending future optimisation in each driver.

Signed-off-by: Will Deacon <will@kernel.org>
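The interim wrappers are mechanical; for a driver with existing add_flush/sync callbacks, they amount to something like this (a driver-agnostic sketch; the my_*() helpers are hypothetical):

    static void my_tlb_flush_walk(unsigned long iova, size_t size,
                                  size_t granule, void *cookie)
    {
            my_tlb_add_flush(iova, size, granule, false, cookie);
            my_tlb_sync(cookie);
    }

    static void my_tlb_flush_leaf(unsigned long iova, size_t size,
                                  size_t granule, void *cookie)
    {
            my_tlb_add_flush(iova, size, granule, true, cookie);
            my_tlb_sync(cookie);
    }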

2019-07-29  iommu/io-pgtable: Introduce tlb_flush_walk() and tlb_flush_leaf()  [Will Deacon, 1 file, -5/+19]

In preparation for deferring TLB flushes to iommu_tlb_sync(), introduce two new synchronous invalidation helpers to the io-pgtable API, which allow the unmap() code to force invalidation in cases where it cannot be deferred (e.g. when replacing a table with a block or when TLBI_ON_MAP is set).

Signed-off-by: Will Deacon <will@kernel.org>

2019-07-29  iommu: Pass struct iommu_iotlb_gather to ->unmap() and ->iotlb_sync()  [Will Deacon, 18 files, -33/+73]

To allow IOMMU drivers to batch up TLB flushing operations and postpone them until ->iotlb_sync() is called, extend the prototypes for the ->unmap() and ->iotlb_sync() IOMMU ops callbacks to take a pointer to the current iommu_iotlb_gather structure. All affected IOMMU drivers are updated, but there should be no functional change since the extra parameter is ignored for now.

Signed-off-by: Will Deacon <will@kernel.org>

2019-07-24  iommu: Introduce iommu_iotlb_gather_add_page()  [Will Deacon, 1 file, -0/+31]

Introduce a helper function for drivers to use when updating an iommu_iotlb_gather structure in response to an ->unmap() call, rather than having to open-code the logic in every page-table implementation.

Signed-off-by: Will Deacon <will@kernel.org>

2019-07-24  iommu: Introduce struct iommu_iotlb_gather for batching TLB flushes  [Will Deacon, 4 files, -22/+75]

To permit batching of TLB flushes across multiple calls to the IOMMU driver's ->unmap() implementation, introduce a new structure for tracking the address range to be flushed and the granularity at which the flushing is required. This is hooked into the IOMMU API and its callers are updated to make use of the new structure. Subsequent patches will plumb this into the IOMMU drivers as well, but for now the gathering information is ignored.

Signed-off-by: Will Deacon <will@kernel.org>
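Together with the iommu_iotlb_gather_add_page() helper from the entry above, the scheme looks roughly like this (a simplified rendering; consult include/linux/iommu.h for the exact field semantics and boundary handling):

    struct iommu_iotlb_gather {
            unsigned long   start;          /* lowest address to flush */
            unsigned long   end;            /* highest address to flush */
            size_t          pgsize;         /* page size of the mappings */
    };

    static inline void
    iommu_iotlb_gather_add_page(struct iommu_domain *domain,
                                struct iommu_iotlb_gather *gather,
                                unsigned long iova, size_t size)
    {
            unsigned long start = iova, end = start + size;

            /*
             * If the new page is disjoint from what we have gathered so
             * far, or uses a different page size, flush the pending range
             * and start a new one.
             */
            if (gather->pgsize != size ||
                end < gather->start || start > gather->end) {
                    if (gather->pgsize)
                            iommu_tlb_sync(domain, gather);
                    gather->pgsize = size;
            }

            if (gather->start > start)
                    gather->start = start;
            if (gather->end < end)
                    gather->end = end;
    }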

2019-07-24  iommu/io-pgtable: Rename iommu_gather_ops to iommu_flush_ops  [Will Deacon, 10 files, -20/+20]

In preparation for TLB flush gathering in the IOMMU API, rename the iommu_gather_ops structure in io-pgtable to iommu_flush_ops, which better describes its purpose and avoids the potential for confusion between different levels of the API.

    $ find linux/ -type f -name '*.[ch]' | xargs sed -i 's/gather_ops/flush_ops/g'

Signed-off-by: Will Deacon <will@kernel.org>

2019-07-24  iommu/io-pgtable-arm: Remove redundant call to io_pgtable_tlb_sync()  [Will Deacon, 2 files, -2/+0]

Commit b6b65ca20bc9 ("iommu/io-pgtable-arm: Add support for non-strict mode") added an unconditional call to io_pgtable_tlb_sync() immediately after the case where we replace a block entry with a table entry during an unmap() call. This is redundant, since the IOMMU API will call iommu_tlb_sync() on this path and the patch in question mentions this:

| To save having to reason about it too much, make sure the invalidation
| in arm_lpae_split_blk_unmap() just performs its own unconditional sync
| to minimise the window in which we're technically violating the break-
| before-make requirement on a live mapping. This might work out redundant
| with an outer-level sync for strict unmaps, but we'll never be splitting
| blocks on a DMA fastpath anyway.

However, this sync gets in the way of deferred TLB invalidation for leaf entries and is at best a questionable, unproven hack. Remove it.

Signed-off-by: Will Deacon <will@kernel.org>

2019-07-24  iommu: Remove empty iommu_tlb_range_add() callback from iommu_ops  [Will Deacon, 4 files, -25/+0]

Commit add02cfdc9bc ("iommu: Introduce Interface for IOMMU TLB Flushing") added three new TLB flushing operations to the IOMMU API so that the underlying driver operations can be batched when unmapping large regions of IO virtual address space. However, the ->iotlb_range_add() callback has not been implemented by any IOMMU drivers (amd_iommu.c implements it as an empty function, which incurs the overhead of an indirect branch). Instead, drivers either flush the entire IOTLB in the ->iotlb_sync() callback or perform the necessary invalidation during ->unmap(). Attempting to implement ->iotlb_range_add() for arm-smmu-v3.c revealed two major issues:

  1. The page size used to map the region in the page-table is not known, and so it is not generally possible to issue TLB flushes in the most efficient manner.

  2. The only mutable state passed to the callback is a pointer to the iommu_domain, which can be accessed concurrently and therefore requires expensive synchronisation to keep track of the outstanding flushes.

Remove the callback entirely in preparation for extending ->unmap() and ->iotlb_sync() to update a token on the caller's stack.

Signed-off-by: Will Deacon <will@kernel.org>

2019-07-21  Linux 5.3-rc1  [Linus Torvalds, 1 file, -2/+2]

2019-07-21  iommu/amd: fix a crash in iova_magazine_free_pfns  [Qian Cai, 1 file, -1/+1]

Commit b3aa14f02254 ("iommu: remove the mapping_error dma_map_ops method") incorrectly changed the error checking of dma_ops_alloc_iova() in map_sg(), causing a crash under memory pressure: dma_ops_alloc_iova() never returns DMA_MAPPING_ERROR on failure but 0, so the error handling is all wrong.

    kernel BUG at drivers/iommu/iova.c:801!
    Workqueue: kblockd blk_mq_run_work_fn
    RIP: 0010:iova_magazine_free_pfns+0x7d/0xc0
    Call Trace:
     free_cpu_cached_iovas+0xbd/0x150
     alloc_iova_fast+0x8c/0xba
     dma_ops_alloc_iova.isra.6+0x65/0xa0
     map_sg+0x8c/0x2a0
     scsi_dma_map+0xc6/0x160
     pqi_aio_submit_io+0x1f6/0x440 [smartpqi]
     pqi_scsi_queue_command+0x90c/0xdd0 [smartpqi]
     scsi_queue_rq+0x79c/0x1200
     blk_mq_dispatch_rq_list+0x4dc/0xb70
     blk_mq_sched_dispatch_requests+0x249/0x310
     __blk_mq_run_hw_queue+0x128/0x200
     blk_mq_run_work_fn+0x27/0x30
     process_one_work+0x522/0xa10
     worker_thread+0x63/0x5b0
     kthread+0x1d2/0x1f0
     ret_from_fork+0x22/0x40

Fixes: b3aa14f02254 ("iommu: remove the mapping_error dma_map_ops method")
Signed-off-by: Qian Cai <cai@lca.pw>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
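The fix boils down to testing the value dma_ops_alloc_iova() actually returns on failure; a simplified sketch of the broken and corrected checks (condensed from the shape of map_sg(), not a verbatim diff):

    address = dma_ops_alloc_iova(dev, dma_dom, npages, dma_mask);

    /* Broken: the allocator returns 0 on failure, never
     * DMA_MAPPING_ERROR, so this check never fires and a bogus 0 IOVA
     * leaks through to be unmapped/freed later, hitting the BUG. */
    if (address == DMA_MAPPING_ERROR)
            goto out_err;

    /* Fixed: */
    if (!address)
            goto out_err;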

2019-07-21  hexagon: switch to generic version of pte allocation  [Mike Rapoport, 1 file, -32/+2]

The hexagon implementations of pte_alloc_one(), pte_alloc_one_kernel(), pte_free_kernel() and pte_free() are identical to the generic ones, except for the lack of __GFP_ACCOUNT for the user PTE allocations. Switch hexagon to use the generic versions of these functions.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2019-07-20  typo fix: it's d_make_root, not d_make_inode...  [Al Viro, 1 file, -1/+1]

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2019-07-20  dt-bindings: pinctrl: stm32: Fix missing 'clocks' property in examples  [Rob Herring, 1 file, -0/+4]

Now that examples are validated against the DT schema, an error with required 'clocks' property missing is exposed:

    Documentation/devicetree/bindings/pinctrl/st,stm32-pinctrl.example.dt.yaml: \
        pinctrl@40020000: gpio@0: 'clocks' is a required property
    Documentation/devicetree/bindings/pinctrl/st,stm32-pinctrl.example.dt.yaml: \
        pinctrl@50020000: gpio@1000: 'clocks' is a required property
    Documentation/devicetree/bindings/pinctrl/st,stm32-pinctrl.example.dt.yaml: \
        pinctrl@50020000: gpio@2000: 'clocks' is a required property

Add the missing 'clocks' properties to the examples to fix the errors.

Fixes: 2c9239c125f0 ("dt-bindings: pinctrl: Convert stm32 pinctrl bindings to json-schema")
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com>
Cc: linux-gpio@vger.kernel.org
Cc: linux-stm32@st-md-mailman.stormreply.com
Acked-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Rob Herring <robh@kernel.org>