path: root/tools/perf/scripts/python/exported-sql-viewer.py
Age | Commit message | Author | Files | Lines
2022-05-20 | nvme: enable uring-passthrough for admin commands | Kanchan Joshi | 4 | -0/+27
Add two new opcodes that userspace can use for admin commands: NVME_URING_CMD_ADMIN (non-vectored) and NVME_URING_CMD_ADMIN_VEC (vectored variant). Wire up support when these are issued on the controller node (/dev/nvmeX). Signed-off-by: Kanchan Joshi <joshi.k@samsung.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20220520090630.70394-3-joshi.k@samsung.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-20 | nvme: helper for uring-passthrough checks | Kanchan Joshi | 1 | -8/+16
Factor out a helper consolidating the error checks, and fix a typo in a comment too. This is in preparation to support admin commands on this path. Signed-off-by: Kanchan Joshi <joshi.k@samsung.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20220520090630.70394-2-joshi.k@samsung.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-12 | blk-mq: fix passthrough plugging | Ming Lei | 1 | -51/+63
First, we can't add a request to the plug list in blk_mq_request_bypass_insert(), which may be called when flushing the plug list, as that would cause a nested plug. Second, if a polled passthrough request is inserted via blk_execute_rq(), it can't be added to the plug list either, since io polling needs the request to be issued to the driver. Fix both by moving plugging into blk_execute_rq_nowait(). Cc: Christoph Hellwig <hch@lst.de> Fixes: 1c2d2fff6dc0 ("block: wire-up support for passthrough plugging") Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20220512140010.1458645-1-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
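A simplified sketch of the resulting flow (argument list and helper names trimmed; not the verbatim blk-mq code): plugging now happens at submit time, so a request inserted while the plug is being flushed can no longer land back on a plug.

    static void passthrough_execute_nowait(struct request *rq, bool at_head)
    {
            if (current->plug) {
                    /* Batch onto the caller's plug; flushed later via ->queue_rqs. */
                    blk_add_rq_to_plug(current->plug, rq);
            } else {
                    /* No plug: insert and run the hardware queue directly. */
                    blk_mq_sched_insert_request(rq, at_head, true, false);
            }
    }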
2022-05-11 | nvme: add vectored-io support for uring-cmd | Anuj Gupta | 2 | -3/+7
Wire up support for async passthru that takes an array of buffers (using iovec), exposed via a new op NVME_URING_CMD_IO_VEC. The same 'struct nvme_uring_cmd' is used, with (1) cmd.addr as the base address of the user iovec array and (2) cmd.data_len as the count of iovec array elements. Signed-off-by: Kanchan Joshi <joshi.k@samsung.com> Signed-off-by: Anuj Gupta <anuj20.g@samsung.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20220511054750.20432-6-joshi.k@samsung.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
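As an illustration of the interface described above (only the cmd.addr/cmd.data_len convention is taken from this commit; the NVMe opcode and namespace ID below are assumptions), filling the command for the vectored variant looks roughly like this from userspace:

    #include <linux/nvme_ioctl.h>    /* struct nvme_uring_cmd */
    #include <sys/uio.h>
    #include <stdint.h>

    /* cmd.addr = base of the user iovec array, cmd.data_len = iovec count. */
    static void fill_vectored_cmd(struct nvme_uring_cmd *cmd,
                                  struct iovec *iov, unsigned int nr_iov)
    {
            cmd->opcode   = 0x02;            /* NVMe Read, purely illustrative */
            cmd->nsid     = 1;               /* assumed namespace */
            cmd->addr     = (uintptr_t)iov;  /* iovec array base address */
            cmd->data_len = nr_iov;          /* count of iovec elements */
    }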
2022-05-11 | nvme: wire-up uring-cmd support for io-passthru on char-device. | Kanchan Joshi | 5 | -3/+220
Introduce a handler for fops->uring_cmd(), implementing async passthru on the char device (/dev/ngX). The handler supports the newly introduced operation NVME_URING_CMD_IO. This operates on a new structure, nvme_uring_cmd, which is similar to struct nvme_passthru_cmd64 but without the embedded 8-byte result field. That field is not needed since uring-cmd allows returning an additional result via the big CQE. Signed-off-by: Kanchan Joshi <joshi.k@samsung.com> Signed-off-by: Anuj Gupta <anuj20.g@samsung.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20220511054750.20432-5-joshi.k@samsung.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
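A hedged end-to-end userspace sketch of the flow this commit enables (it assumes liburing and uapi headers new enough to expose IORING_SETUP_SQE128/IORING_SETUP_CQE32, IORING_OP_URING_CMD and the sqe cmd fields; the device path, namespace and NVMe opcode are illustrative):

    #include <liburing.h>
    #include <linux/nvme_ioctl.h>    /* struct nvme_uring_cmd, NVME_URING_CMD_IO */
    #include <fcntl.h>
    #include <string.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Read one LBA from an NVMe namespace char device via async passthru. */
    static int nvme_uring_read_one(const char *path, void *buf, __u32 buf_len)
    {
            struct io_uring_params p = {
                    .flags = IORING_SETUP_SQE128 | IORING_SETUP_CQE32,
            };
            struct nvme_uring_cmd cmd = {
                    .opcode   = 0x02,            /* NVMe Read (illustrative) */
                    .nsid     = 1,               /* assumed namespace */
                    .addr     = (uintptr_t)buf,
                    .data_len = buf_len,         /* must cover one LBA here */
                    .cdw10    = 0,               /* starting LBA = 0 */
                    .cdw12    = 0,               /* 0 => one LBA */
            };
            struct io_uring ring;
            struct io_uring_sqe *sqe;
            struct io_uring_cqe *cqe;
            int fd, ret;

            fd = open(path, O_RDONLY);           /* e.g. "/dev/ng0n1" */
            if (fd < 0)
                    return -1;
            if (io_uring_queue_init_params(8, &ring, &p) < 0) {
                    close(fd);
                    return -1;
            }

            sqe = io_uring_get_sqe(&ring);
            memset(sqe, 0, 2 * sizeof(*sqe));    /* big (128-byte) SQE */
            sqe->opcode = IORING_OP_URING_CMD;
            sqe->fd     = fd;
            sqe->cmd_op = NVME_URING_CMD_IO;
            memcpy(sqe->cmd, &cmd, sizeof(cmd)); /* payload lives in the big SQE */

            io_uring_submit(&ring);
            io_uring_wait_cqe(&ring, &cqe);
            ret = cqe->res;                      /* status; extra result sits in cqe->big_cqe[] */
            io_uring_cqe_seen(&ring, cqe);
            io_uring_queue_exit(&ring);
            close(fd);
            return ret;
    }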
2022-05-11 | nvme: refactor nvme_submit_user_cmd() | Christoph Hellwig | 1 | -11/+45
Divide the work into two helpers, namely nvme_alloc_user_request and nvme_execute_user_rq. This is a prep patch to help wire up uring-cmd support in nvme. Signed-off-by: Christoph Hellwig <hch@lst.de> [axboe: fold in fix for assuming bio is non-NULL] Link: https://lore.kernel.org/r/20220511054750.20432-4-joshi.k@samsung.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-11 | block: wire-up support for passthrough plugging | Jens Axboe | 1 | -34/+39
Add support for plugging in the passthrough path. When plugging is enabled, requests are added to a plug instead of getting dispatched to the driver, and when the plug is finished the whole batch gets dispatched via ->queue_rqs, which turns out to be more efficient than the previous behavior of dispatching via ->queue_rq one request at a time. Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20220511054750.20432-3-joshi.k@samsung.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-11 | fs,io_uring: add infrastructure for uring-cmd | Jens Axboe | 4 | -26/+165
file_operations->uring_cmd is a file-private handler. This is somewhat similar to ioctl, but hopefully a lot more sane and useful, as it can be used to enable many io_uring capabilities for the underlying operation. IORING_OP_URING_CMD is a file-private kind of request: io_uring doesn't know what is in this command type, so it's for the provider of ->uring_cmd() to deal with. Co-developed-by: Kanchan Joshi <joshi.k@samsung.com> Signed-off-by: Kanchan Joshi <joshi.k@samsung.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20220511054750.20432-2-joshi.k@samsung.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
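For a driver author, the provider side looks roughly like the sketch below (the ->uring_cmd() hook and io_uring_cmd_done() follow the API added here as best understood; everything prefixed MYDRV_/mydrv_ is a hypothetical example, not a real driver):

    #include <linux/fs.h>
    #include <linux/io_uring.h>

    #define MYDRV_URING_PING    1    /* hypothetical private command opcode */

    /* Hypothetical: starts the device work and invokes @done when it finishes. */
    static void mydrv_queue_ping(struct io_uring_cmd *ioucmd,
                                 void (*done)(struct io_uring_cmd *, int));

    /* Called from the driver's own completion path once the work is done. */
    static void mydrv_work_done(struct io_uring_cmd *ioucmd, int status)
    {
            /* First value lands in cqe->res; the second can fill the big CQE. */
            io_uring_cmd_done(ioucmd, status, 0);
    }

    static int mydrv_uring_cmd(struct io_uring_cmd *ioucmd, unsigned int issue_flags)
    {
            switch (ioucmd->cmd_op) {
            case MYDRV_URING_PING:
                    /* Kick off async work; ioucmd->cmd points at the SQE payload. */
                    mydrv_queue_ping(ioucmd, mydrv_work_done);
                    return -EIOCBQUEUED;    /* completed later via io_uring_cmd_done() */
            default:
                    return -ENOTTY;
            }
    }

    static const struct file_operations mydrv_fops = {
            .owner     = THIS_MODULE,
            .uring_cmd = mydrv_uring_cmd,
    };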
2022-05-09 | io_uring: support CQE32 for nop operation | Stefan Roesch | 1 | -2/+26
This adds support for filling the extra1 and extra2 fields for large CQEs. Co-developed-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Stefan Roesch <shr@fb.com> Reviewed-by: Kanchan Joshi <joshi.k@samsung.com> Link: https://lore.kernel.org/r/20220426182134.136504-13-shr@fb.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: enable CQE32 | Stefan Roesch | 1 | -1/+1
This enables large CQEs in the uring setup. Co-developed-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Stefan Roesch <shr@fb.com> Reviewed-by: Kanchan Joshi <joshi.k@samsung.com> Link: https://lore.kernel.org/r/20220426182134.136504-12-shr@fb.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: support CQE32 in /proc info | Stefan Roesch | 1 | -2/+14
This exposes the extra1 and extra2 fields in the /proc output. Signed-off-by: Stefan Roesch <shr@fb.com> Reviewed-by: Kanchan Joshi <joshi.k@samsung.com> Link: https://lore.kernel.org/r/20220426182134.136504-11-shr@fb.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: add tracing for additional CQE32 fields | Stefan Roesch | 2 | -9/+20
This adds tracing for the extra1 and extra2 fields. Co-developed-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Stefan Roesch <shr@fb.com> Reviewed-by: Kanchan Joshi <joshi.k@samsung.com> Link: https://lore.kernel.org/r/20220426182134.136504-10-shr@fb.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: overflow processing for CQE32 | Stefan Roesch | 1 | -9/+23
This adds the overflow processing for large CQEs. It adds two parameters to the io_cqring_event_overflow function and uses these fields to initialize the large CQE fields. Allocate enough space for large CQEs in the overflow structure. If no large CQEs are used, the size of the allocation is unchanged. The cqe field can have a different size depending on whether it is a large CQE or not. To be able to allocate different sizes, the two fields in the structure are re-ordered. Co-developed-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Stefan Roesch <shr@fb.com> Reviewed-by: Kanchan Joshi <joshi.k@samsung.com> Link: https://lore.kernel.org/r/20220426182134.136504-9-shr@fb.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
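The sizing change boils down to something like the helper below (an illustrative wrapper; the struct and field names are assumed from the surrounding io_uring code, not quoted from the patch):

    static struct io_overflow_cqe *alloc_overflow_cqe(struct io_ring_ctx *ctx)
    {
            size_t ocq_size = sizeof(struct io_overflow_cqe);

            /* Only CQE32 rings pay for the two extra 64-bit fields. */
            if (ctx->flags & IORING_SETUP_CQE32)
                    ocq_size += sizeof(struct io_uring_cqe);

            return kmalloc(ocq_size, GFP_ATOMIC | __GFP_ACCOUNT);
    }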
2022-05-09 | io_uring: flush completions for CQE32 | Stefan Roesch | 1 | -2/+6
This flushes the completions according to their CQE type: the same processing is done for the default CQE size, but for large CQEs the extra1 and extra2 fields are filled in. Signed-off-by: Stefan Roesch <shr@fb.com> Reviewed-by: Kanchan Joshi <joshi.k@samsung.com> Link: https://lore.kernel.org/r/20220426182134.136504-8-shr@fb.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: modify io_get_cqe for CQE32 | Stefan Roesch | 1 | -2/+17
Modify accesses to the CQE array to take large CQEs into account: the index needs to be shifted by one for large CQEs. Signed-off-by: Stefan Roesch <shr@fb.com> Reviewed-by: Kanchan Joshi <joshi.k@samsung.com> Link: https://lore.kernel.org/r/20220426182134.136504-7-shr@fb.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
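The index shift amounts to the following (an illustrative helper, not the kernel's io_get_cqe() itself):

    /* Each 32-byte CQE occupies two default-sized slots in the CQ array. */
    static struct io_uring_cqe *cqe_slot(struct io_rings *rings, unsigned int tail,
                                         unsigned int mask, bool cqe32)
    {
            unsigned int off = tail & mask;

            if (cqe32)
                    off <<= 1;    /* shift the index by one for big CQEs */

            return &rings->cqes[off];
    }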
2022-05-09 | io_uring: add CQE32 completion processing | Stefan Roesch | 1 | -8/+45
This adds the completion processing for large CQEs and makes sure that the extra1 and extra2 fields are passed through. Co-developed-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Stefan Roesch <shr@fb.com> Reviewed-by: Kanchan Joshi <joshi.k@samsung.com> Link: https://lore.kernel.org/r/20220426182134.136504-6-shr@fb.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: add CQE32 setup processing | Stefan Roesch | 1 | -0/+58
This adds two new functions to set up and fill the CQE32 result structure. Signed-off-by: Stefan Roesch <shr@fb.com> Reviewed-by: Kanchan Joshi <joshi.k@samsung.com> Link: https://lore.kernel.org/r/20220426182134.136504-5-shr@fb.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: change ring size calculation for CQE32 | Stefan Roesch | 1 | -3/+7
This changes the function rings_size to take large CQEs into account. Co-developed-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Stefan Roesch <shr@fb.com> Reviewed-by: Kanchan Joshi <joshi.k@samsung.com> Link: https://lore.kernel.org/r/20220426182134.136504-4-shr@fb.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: store add. return values for CQE32 | Stefan Roesch | 1 | -1/+7
This reuses the hash list node for the storage we need to hold the two 64-bit values that must be passed back. Co-developed-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Stefan Roesch <shr@fb.com> Reviewed-by: Kanchan Joshi <joshi.k@samsung.com> Link: https://lore.kernel.org/r/20220426182134.136504-3-shr@fb.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: support CQE32 in io_uring_cqe | Stefan Roesch | 1 | -0/+7
This adds the big_cqe array to the struct io_uring_cqe to support large CQEs. Co-developed-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Stefan Roesch <shr@fb.com> Reviewed-by: Kanchan Joshi <joshi.k@samsung.com> Link: https://lore.kernel.org/r/20220426182134.136504-2-shr@fb.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
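The resulting uapi layout is roughly:

    struct io_uring_cqe {
            __u64   user_data;      /* sqe->user_data value, passed back */
            __s32   res;            /* result code for this event */
            __u32   flags;

            /*
             * If the ring is initialized with IORING_SETUP_CQE32, this doubles
             * the size of the CQE and carries the two extra 64-bit values.
             */
            __u64   big_cqe[];
    };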
2022-05-09 | io_uring: add support for 128-byte SQEs | Jens Axboe | 2 | -3/+19
Normal SQEs are 64 bytes in length, which is fine for all the commands we support. However, in preparation for supporting passthrough IO, provide an option for setting up a ring with 128-byte SQEs. We continue to use the same io_uring_sqe type; the extra space is marked and commented as a zero-sized array pad at the end. This provides up to 80 bytes of data for a passthrough command: 64 bytes of newly added space, plus 16 bytes available at the end of the existing SQE. Signed-off-by: Jens Axboe <axboe@kernel.dk>
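The 80-byte figure follows from the layout: 16 spare bytes at the end of the base 64-byte SQE plus the 64 appended bytes. A quick userspace compile-time check (assuming uapi headers from this series, which expose the zero-sized cmd[] member):

    #include <linux/io_uring.h>
    #include <stddef.h>
    #include <assert.h>

    /* Command payload starts at sqe->cmd[]; with 128-byte SQEs that leaves
     * 128 - offsetof(struct io_uring_sqe, cmd) = 80 bytes for the command. */
    static_assert(128 - offsetof(struct io_uring_sqe, cmd) == 80,
                  "big-SQE payload should be 80 bytes");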
2022-05-09 | io_uring: don't clear req->kbuf when buffer selection is done | Jens Axboe | 1 | -2/+0
It's not needed as the REQ_F_BUFFER_SELECTED flag tracks the state of whether or not kbuf is valid, so just drop it. Suggested-by: Dylan Yudaken <dylany@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: eliminate the need to track provided buffer ID separately | Jens Axboe | 1 | -6/+10
We have io_kiocb->buf_index which is used for either fixed buffers, or for provided buffers. For the latter, it's used to hold the buffer group ID for buffer selection. Post selection, req->kbuf->bid is used to get the buffer ID. Store the buffer ID, when selected, in req->buf_index. If we do end up recycling the buffer, reset it back to the buffer group ID. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: move provided buffer state closer to submit state | Jens Axboe | 1 | -3/+5
The timeout and other items that follow are less hot, so let's move the provided buffer state above that. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: move provided and fixed buffers into the same io_kiocb area | Jens Axboe | 1 | -4/+8
These are mutually exclusive - if you use provided buffers, then you cannot use fixed buffers and vice versa. Move them into the same spot in the io_kiocb, which is also advantageous for provided buffers as they get near the submit side hot cacheline. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: abstract out provided buffer list selection | Jens Axboe | 1 | -11/+23
In preparation for providing another way to select a buffer, move the existing logic into a helper. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: never call io_buffer_select() for a buffer re-select | Jens Axboe | 1 | -12/+17
Callers already have room to store the addr and length information; clean it up by having the caller just assign the previously provided data. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: get rid of hashed provided buffer groups | Jens Axboe | 1 | -39/+58
Use a plain array for any group ID that's less than 64, and punt anything beyond that to an xarray. 64 fits in a page even for 4KB page sizes and with the planned additions. This makes the expected group usage faster by avoiding a hash and lookup to find our list, and it uses less memory upfront by not allocating any memory for provided buffers unless it's actually being used. Suggested-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
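The lookup this describes looks roughly like the sketch below (BGID_ARRAY and the io_bl/io_bl_xa field names are assumptions based on the description, not quoted from the patch):

    #define BGID_ARRAY      64      /* group IDs below this use the flat array */

    static struct io_buffer_list *buffer_group_lookup(struct io_ring_ctx *ctx,
                                                      unsigned int bgid)
    {
            /* Common, small group IDs: direct index, no hashing. */
            if (ctx->io_bl && bgid < BGID_ARRAY)
                    return &ctx->io_bl[bgid];

            /* Anything larger is punted to the xarray. */
            return xa_load(&ctx->io_bl_xa, bgid);
    }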
2022-05-09 | io_uring: always use req->buf_index for the provided buffer group | Jens Axboe | 1 | -13/+13
The read/write opcodes use it already, but the recv/recvmsg do not. If we switch them over and read and validate this at init time while we're checking if the opcode supports it anyway, then we can do it in one spot and we don't have to pass in a separate group ID for io_buffer_select(). Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: ignore ->buf_index if REQ_F_BUFFER_SELECT isn't set | Jens Axboe | 1 | -4/+0
There's no point in validity checking buf_index if the request doesn't have REQ_F_BUFFER_SELECT set, as we will never use it for that case. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: kill io_rw_buffer_select() wrapper | Jens Axboe | 1 | -10/+5
After the recent changes, this is a direct call to io_buffer_select() anyway. With this change, there are no wrappers left for provided buffer selection. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-09 | io_uring: make io_buffer_select() return the user address directly | Jens Axboe | 1 | -26/+20
There's no point in having callers provide a kbuf, we're just returning the address anyway. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-08 | Linux 5.18-rc6 | Linus Torvalds | 1 | -1/+1
2022-05-08 | Revert "parisc: Increase parisc_cache_flush_threshold setting" | Helge Deller | 1 | -15/+3
This reverts commit a58e9d0984e8dad53f17ec73ae3c1cc7f8d88151. Triggers segfaults with 32-bit kernels on PA8500 machines. Signed-off-by: Helge Deller <deller@gmx.de>
2022-05-08 | parisc: Mark cr16 clock unstable on all SMP machines | Helge Deller | 1 | -23/+4
The cr16 interval timers are not synchronized across CPUs, even with just one dual-core CPU. This becomes visible if the machines have a longer uptime. Signed-off-by: Helge Deller <deller@gmx.de>
2022-05-08 | parisc: Fix typos in comments | Julia Lawall | 6 | -6/+6
Various spelling mistakes in comments. Detected with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr> Signed-off-by: Helge Deller <deller@gmx.de>
2022-05-08 | parisc: Change MAX_ADDRESS to become unsigned long long | Helge Deller | 1 | -1/+1
Dave noticed that for the 32-bit kernel MAX_ADDRESS should be a ULL, otherwise this define would become 0: MAX_ADDRESS (1UL << MAX_ADDRBITS). It has no real effect on the kernel. Signed-off-by: Helge Deller <deller@gmx.de> Noticed-by: John David Anglin <dave.anglin@bell.net>
2022-05-08 | parisc: Merge model and model name into one line in /proc/cpuinfo | Helge Deller | 1 | -2/+1
The Linux tool "lscpu" shows the double amount of CPUs if we have "model" and "model name" in two different lines in /proc/cpuinfo. This change combines the model and the model name into one line. Signed-off-by: Helge Deller <deller@gmx.de> Cc: stable@vger.kernel.org
2022-05-08 | parisc: Re-enable GENERIC_CPU_DEVICES for !SMP | Helge Deller | 1 | -0/+1
In commit 62773112acc5 ("parisc: Switch from GENERIC_CPU_DEVICES to GENERIC_ARCH_TOPOLOGY") GENERIC_CPU_DEVICES was unconditionally turned off, but this triggers a warning in topology_add_dev(). Turning it back on for the !SMP case avoids this warning. Reported-by: Guenter Roeck <linux@roeck-us.net> Tested-by: Guenter Roeck <linux@roeck-us.net> Fixes: 62773112acc5 ("parisc: Switch from GENERIC_CPU_DEVICES to GENERIC_ARCH_TOPOLOGY") Signed-off-by: Helge Deller <deller@gmx.de>
2022-05-08 | parisc: Update 32- and 64-bit defconfigs | Helge Deller | 2 | -2/+5
Enable CONFIG_CGROUPS=y on 32-bit defconfig for systemd-support, and enable CONFIG_NAMESPACES and CONFIG_USER_NS. Signed-off-by: Helge Deller <deller@gmx.de>
2022-05-08 | parisc: Only list existing CPUs in cpu_possible_mask | Helge Deller | 1 | -0/+8
The inventory knows which CPUs are in the system, so cpu_possible_mask should be based on that instead of on the bitmask sized by CONFIG_NR_CPUS. Reset cpu_possible_mask before scanning the system for CPUs, and mark each existing CPU as possible during initialization of that CPU. This also avoids warnings like this later on: register_cpu_capacity_sysctl: too early to get CPU4 device! Signed-off-by: Helge Deller <deller@gmx.de> Noticed-by: John David Anglin <dave.anglin@bell.net>
2022-05-08 | Revert "parisc: Fix patch code locking and flushing" | Helge Deller | 1 | -11/+14
This reverts commit a9fe7fa7d874a536e0540469f314772c054a0323. It leads to segfaults on 32-bit kernels. Signed-off-by: Helge Deller <deller@gmx.de>
2022-05-08 | Revert "parisc: Mark sched_clock unstable only if clocks are not syncronized" | Helge Deller | 2 | -3/+6
This reverts commit d97180ad68bdb7ee10f327205a649bc2f558741d. It triggers RCU stalls at boot with a 32-bit kernel. Signed-off-by: Helge Deller <deller@gmx.de> Noticed-by: John David Anglin <dave.anglin@bell.net> Cc: stable@vger.kernel.org # v5.15+
2022-05-08 | Revert "parisc: Mark cr16 CPU clocksource unstable on all SMP machines" | Helge Deller | 1 | -8/+22
This reverts commit afdb4a5b1d340e4afffc65daa21cc71890d7d589. It triggers RCU stalls at boot with a 32-bit kernel. Signed-off-by: Helge Deller <deller@gmx.de> Noticed-by: John David Anglin <dave.anglin@bell.net> Cc: stable@vger.kernel.org # v5.16+
2022-05-08 | blk-mq: remove the error_count from struct request | Willy Tarreau | 1 | -1/+0
The last two users were floppy.c and ataflop.c respectively; it was verified that no other driver makes use of this, so let's remove it. Suggested-by: Linus Torvalds <torvalds@linuxfoundation.org> Cc: Minh Yuan <yuanmingbuaa@gmail.com> Cc: Denis Efremov <efremov@linux.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Christoph Hellwig <hch@lst.de> Signed-off-by: Willy Tarreau <w@1wt.eu> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-05-08 | ataflop: use a statically allocated error counter | Willy Tarreau | 1 | -4/+6
This is the last driver making use of fd_request->error_count, which is easy to get wrong, as was shown in floppy.c. We don't need to keep it there; it can be moved to the atari_floppy_struct instead, so let's do this. Suggested-by: Linus Torvalds <torvalds@linuxfoundation.org> Cc: Minh Yuan <yuanmingbuaa@gmail.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Willy Tarreau <w@1wt.eu> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-05-08 | floppy: use a statically allocated error counter | Willy Tarreau | 1 | -10/+8
The interrupt handler bad_flp_intr() may cause a use-after-free (UAF) on the recently freed request just to increment the error count. There's no point keeping that count in the request anyway, and since the interrupt handler uses a static pointer to the error which cannot be kept in sync with the pending request, it is better to make it use a static error counter that is reset for each new request. This reset now happens when entering redo_fd_request() for a new request via set_next_request(). One initial concern about a single error counter was that errors on one floppy drive could be reported on another one, but this problem is not real given that the driver uses a single drive at a time, as PC-compatible controllers also have this limitation due to their use of shared signals. As such the error count is always for the "current" drive. Reported-by: Minh Yuan <yuanmingbuaa@gmail.com> Suggested-by: Linus Torvalds <torvalds@linuxfoundation.org> Tested-by: Denis Efremov <efremov@linux.com> Signed-off-by: Willy Tarreau <w@1wt.eu> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
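Schematically, the fix amounts to the following (function and variable names here are illustrative stand-ins, not the exact floppy.c symbols):

    static int floppy_error_count;          /* errors while servicing the current request */

    static void pick_next_request(void)     /* stand-in for set_next_request() */
    {
            floppy_error_count = 0;         /* fresh request, fresh error count */
            /* ... select the next pending request as before ... */
    }

    static void count_bad_interrupt(void)   /* stand-in for bad_flp_intr() */
    {
            /* Only bump the static counter; never touch a possibly freed request. */
            floppy_error_count++;
    }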
2022-05-06 | KVM: VMX: Exit to userspace if vCPU has injected exception and invalid state | Sean Christopherson | 1 | -1/+1
Exit to userspace with an emulation error if KVM encounters an injected exception with invalid guest state, in addition to the existing check of bailing if there's a pending exception (KVM doesn't support emulating exceptions except when emulating real mode via vm86). In theory, KVM should never get to such a situation as KVM is supposed to exit to userspace before injecting an exception with invalid guest state. But in practice, userspace can intervene and manually inject an exception and/or stuff registers to force invalid guest state while a previously injected exception is awaiting reinjection. Fixes: fc4fad79fc3d ("KVM: VMX: Reject KVM_RUN if emulation is required with pending exception") Reported-by: syzbot+cfafed3bb76d3e37581b@syzkaller.appspotmail.com Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220502221850.131873-1-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-06 | KVM: SEV: Mark nested locking of vcpu->lock | Peter Gonda | 1 | -4/+38
svm_vm_migrate_from() uses sev_lock_vcpus_for_migration() to lock all source and target vcpu->locks. Unfortunately there is an 8-subclass limit in lockdep, so a new subclass cannot be used for each vCPU. Instead, maintain ownership of the first vcpu's mutex.dep_map using a role-specific subclass: source vs target. Release the other vcpus' mutex.dep_maps. Fixes: b56639318bb2b ("KVM: SEV: Add support for SEV intra host migration") Reported-by: John Sperbeck <jsperbeck@google.com> Suggested-by: David Rientjes <rientjes@google.com> Suggested-by: Sean Christopherson <seanjc@google.com> Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Cc: Hillf Danton <hdanton@sina.com> Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Peter Gonda <pgonda@google.com> Message-Id: <20220502165807.529624-1-pgonda@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-06 | gpio: pca953x: fix irq_stat not updated when irq is disabled (irq_mask not set) | Puyou Lu | 1 | -2/+2
When one port's input state gets inverted (e.g. from low to high) after pca953x_irq_setup() but before the irq_mask is set (by some other driver such as "gpio-keys"), the next inversion of this port (e.g. from high to low) will not be triggered any more, because irq_stat is not updated the first time. The issue should be fixed by this commit. Fixes: 89ea8bbe9c3e ("gpio: pca953x.c: add interrupt handling capability") Signed-off-by: Puyou Lu <puyou.lu@gmail.com> Signed-off-by: Bartosz Golaszewski <brgl@bgdev.pl>