path: root/drivers/gnss
Age | Commit message | Author | Files | Lines
2018-12-17 | xen/pciback: Check dev_data before using it | Ross Lagerwall | 1 | -1/+2
If pcistub_init_device fails, the release function will be called with dev_data set to NULL. Check it before using it to avoid a NULL pointer dereference. Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
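A minimal sketch of such a guard in the release path (names condensed from the pciback stub code; not the verbatim patch):

	struct xen_pcibk_dev_data *dev_data = pci_get_drvdata(psdev->dev);

	/* pcistub_init_device() may have failed, leaving dev_data NULL */
	if (dev_data)
		pci_load_and_free_saved_state(psdev->dev,
					      &dev_data->pci_saved_state);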
2018-12-17 | kprobes/x86/xen: blacklist non-attachable xen interrupt functions | Andrea Righi | 1 | -0/+2
Blacklist symbols in Xen probe-prohibited areas, so that users can see these prohibited symbols in debugfs. See also: a50480cb6d61. Signed-off-by: Andrea Righi <righi.andrea@gmail.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2018-12-13 | KVM: x86: Allow Qemu/KVM to use PVH entry point | Maran Wilson | 3 | -14/+34
For certain applications it is desirable to rapidly boot a KVM virtual machine. In cases where legacy hardware and software support within the guest is not needed, Qemu should be able to boot directly into the uncompressed Linux kernel binary without the need to run firmware. There already exists an ABI to allow this for Xen PVH guests and the ABI is supported by Linux and FreeBSD: https://xenbits.xen.org/docs/unstable/misc/pvh.html This patch enables Qemu to use that same entry point for booting KVM guests. Signed-off-by: Maran Wilson <maran.wilson@oracle.com> Suggested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Suggested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2018-12-13 | xen/pvh: Add memory map pointer to hvm_start_info struct | Maran Wilson | 1 | -1/+62
The start info structure that is defined as part of the x86/HVM direct boot ABI and used for starting Xen PVH guests would be more versatile if it also included a way to pass information about the memory map to the guest. This would allow KVM guests to share the same entry point. Signed-off-by: Maran Wilson <maran.wilson@oracle.com> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2018-12-13 | xen/pvh: Move Xen code for getting mem map via hcall out of common file | Maran Wilson | 2 | -15/+34
We need to refactor PVH entry code so that support for other hypervisors like Qemu/KVM can be added more easily. The original design for PVH entry in Xen guests relies on being able to obtain the memory map from the hypervisor using a hypercall. When we extend the PVH entry ABI to support other hypervisors like Qemu/KVM, a new mechanism will be added that allows the guest to get the memory map without needing to use hypercalls. For Xen guests, the hypercall approach will still be supported. In preparation for adding support for other hypervisors, we can move the code that uses hypercalls into the Xen specific file. This will allow us to compile kernels in the future without CONFIG_XEN that are still capable of being booted as a Qemu/KVM guest via the PVH entry point. Signed-off-by: Maran Wilson <maran.wilson@oracle.com> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2018-12-13 | xen/pvh: Move Xen specific PVH VM initialization out of common file | Maran Wilson | 3 | -10/+44
We need to refactor PVH entry code so that support for other hypervisors like Qemu/KVM can be added more easily. This patch moves the small block of code used for initializing Xen PVH virtual machines into the Xen specific file. This initialization is not going to be needed for Qemu/KVM guests. Moving it out of the common file is going to allow us to compile kernels in the future without CONFIG_XEN that are still capable of being booted as a Qemu/KVM guest via the PVH entry point. Signed-off-by: Maran Wilson <maran.wilson@oracle.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2018-12-13 | xen/pvh: Create a new file for Xen specific PVH code | Maran Wilson | 3 | -3/+14
We need to refactor PVH entry code so that support for other hypervisors like Qemu/KVM can be added more easily. The first step in that direction is to create a new file that will eventually hold the Xen specific routines. Signed-off-by: Maran Wilson <maran.wilson@oracle.com> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2018-12-13 | xen/pvh: Move PVH entry code out of Xen specific tree | Maran Wilson | 6 | -4/+8
Once hypervisors other than Xen start using the PVH entry point for starting VMs, we would like the option of being able to compile PVH entry capable kernels without enabling CONFIG_XEN and all the code that comes along with that. To allow that, we are moving the PVH code out of Xen and into files sitting at a higher level in the tree. This patch is not introducing any code or functional changes, just moving files from one location to another. Signed-off-by: Maran Wilson <maran.wilson@oracle.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2018-12-13 | xen/pvh: Split CONFIG_XEN_PVH into CONFIG_PVH and CONFIG_XEN_PVH | Maran Wilson | 3 | -2/+9
In order to pave the way for hypervisors other than Xen to use the PVH entry point for VMs, we need to factor the PVH entry code into Xen specific and hypervisor agnostic components. The first step in doing that, is to create a new config option for PVH entry that can be enabled independently from CONFIG_XEN. Signed-off-by: Maran Wilson <maran.wilson@oracle.com> Reviewed-by: Juergen Gross <jgross@suse.com> Acked-by: Borislav Petkov <bp@suse.de> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2018-12-09 | Linux 4.20-rc6 | Linus Torvalds | 1 | -1/+1
2018-12-09 | net/sched: cls_flower: Reject duplicated rules also under skip_sw | Or Gerlitz | 1 | -13/+10
Currently, duplicated rules are rejected only for skip_hw or "none", hence allowing users to push duplicates into HW for no reason. Use the flower tables to protect against that. Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Paul Blakey <paulb@mellanox.com> Reported-by: Chris Mi <chrism@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-09 | bnxt_en: Fix _bnxt_get_max_rings() for 57500 chips. | Michael Chan | 1 | -4/+12
The CP rings are accounted differently on the new 57500 chips. There must be enough CP rings for the sum of RX and TX rings on the new chips. The current logic may be over-estimating the RX and TX rings. On the 57500 chips, the output parameter max_cp should be the maximum number of NQs capped by the MSIX vectors available for networking. The existing code, which uses CMPL rings capped by the MSIX vectors, works most of the time but is not always correct. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-09 | bnxt_en: Fix NQ/CP rings accounting on the new 57500 chips. | Michael Chan | 1 | -6/+23
The new 57500 chips have introduced the NQ structure in addition to the existing CP rings in all chips. We need to introduce a new bnxt_nq_rings_in_use(). On legacy chips, the 2 functions are the same and one will just call the other. On the new chips, they refer to the 2 separate ring structures. The new function is now called to determine the resource (NQ or CP rings) associated with MSIX that are in use. On 57500 chips, the RDMA driver does not use the CP rings so we don't need to do the subtraction adjustment. Fixes: 41e8d7983752 ("bnxt_en: Modify the ring reservation functions for 57500 series chips.") Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-09 | bnxt_en: Keep track of reserved IRQs. | Michael Chan | 3 | -3/+8
The new 57500 chips use 1 NQ per MSIX vector, whereas legacy chips use 1 CP ring per MSIX vector. To better unify this, add a resv_irqs field to struct bnxt_hw_resc. On legacy chips, we initialize resv_irqs with resv_cp_rings. On new chips, we initialize it with the allocated MSIX resources. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-09 | bnxt_en: Fix CNP CoS queue regression. | Michael Chan | 1 | -0/+7
Recent changes to support the 57500 devices have created this regression. The bnxt_hwrm_queue_qportcfg() call was moved to be called earlier before the RDMA support was determined, causing the CoS queues configuration to be set before knowing whether RDMA was supported or not. Fix it by moving it to the right place right after RDMA support is determined. Fixes: 98f04cf0f1fc ("bnxt_en: Check context memory requirements from firmware.") Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-08 | net/mlx4_core: Correctly set PFC param if global pause is turned off. | Tarick Bedeir | 1 | -2/+2
rx_ppp and tx_ppp can be set between 0 and 255, so don't clamp to 1. Fixes: 6e8814ceb7e8 ("net/mlx4_en: Fix mixed PFC and Global pause user control requests") Signed-off-by: Tarick Bedeir <tarick@google.com> Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-08 | Revert "mm, thp: consolidate THP gfp handling into alloc_hugepage_direct_gfpmask" | David Rientjes | 4 | -22/+51
This reverts commit 89c83fb539f95491be80cdd5158e6f0ce329e317. This should have been done as part of 2f0799a0ffc0 ("mm, thp: restore node-local hugepage allocations"). The movement of the thp allocation policy from alloc_pages_vma() to alloc_hugepage_direct_gfpmask() was intended to only set __GFP_THISNODE for mempolicies that are not MPOL_BIND whereas the revert could set this regardless of mempolicy. While the check for MPOL_BIND between alloc_hugepage_direct_gfpmask() and alloc_pages_vma() was racy, that has since been removed since the revert. What is left is the possibility to use __GFP_THISNODE in policy_node() when it is unexpected because the special handling for hugepages in alloc_pages_vma() was removed as part of the consolidation. Secondly, prior to 89c83fb539f9, alloc_pages_vma() implemented a somewhat different policy for hugepage allocations, which were allocated through alloc_hugepage_vma(). For hugepage allocations, if the allocating process's node is in the set of allowed nodes, allocate with __GFP_THISNODE for that node (for MPOL_PREFERRED, use that node with __GFP_THISNODE instead). This was changed for shmem_alloc_hugepage() to allow fallback to other nodes in 89c83fb539f9 as it did for new_page() in mm/mempolicy.c which is functionally different behavior and removes the requirement to only allocate hugepages locally. So this commit does a full revert of 89c83fb539f9 instead of the partial revert that was done in 2f0799a0ffc0. The result is the same thp allocation policy for 4.20 that was in 4.19. Fixes: 89c83fb539f9 ("mm, thp: consolidate THP gfp handling into alloc_hugepage_direct_gfpmask") Fixes: 2f0799a0ffc0 ("mm, thp: restore node-local hugepage allocations") Signed-off-by: David Rientjes <rientjes@google.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Michal Hocko <mhocko@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-12-07 | Revert "net/ibm/emac: wrong bit is used for STA control" | Benjamin Herrenschmidt | 1 | -1/+1
This reverts commit 624ca9c33c8a853a4a589836e310d776620f4ab9. This commit is completely bogus. The STACR register has two formats, old and new, depending on the version of the IP block used. There's a pair of device-tree properties that can be used to specify the format used: has-inverted-stacr-oc has-new-stacr-staopc What this commit did was to change the bit definition used with the old parts to match the new parts. This of course breaks the driver on all the old ones. Instead, the author should have set the appropriate properties in the device-tree for the variant used on his board. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 | neighbour: Avoid writing before skb->head in neigh_hh_output() | Stefano Brivio | 1 | -5/+23
While skb_push() makes the kernel panic if the skb headroom is less than the unaligned hardware header size, it will proceed normally in case we copy more than that because of alignment, and we'll silently corrupt adjacent slabs. In the case fixed by the previous patch, "ipv6: Check available headroom in ip6_xmit() even without options", we end up in neigh_hh_output() with 14 bytes headroom, 14 bytes hardware header and write 16 bytes, starting 2 bytes before the allocated buffer. Always check we're not writing before skb->head and, if the headroom is not enough, warn and drop the packet.

v2:
- instead of panicking with BUG_ON(), WARN_ON_ONCE() and drop the packet (Eric Dumazet)
- if we avoid the panic, though, we need to explicitly check the headroom before the memcpy(), otherwise we'll have corrupted slabs on a running kernel, after we warn
- use __skb_push() instead of skb_push(), as the headroom check is already implemented here explicitly (Eric Dumazet)

Signed-off-by: Stefano Brivio <sbrivio@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
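A condensed sketch of the resulting logic in neigh_hh_output() (assuming the usual HH_DATA_ALIGN rounding; the actual patch differs in detail):

	unsigned int hh_alen, hh_len = hh->hh_len;

	hh_alen = HH_DATA_ALIGN(hh_len);	/* may round past hh_len */

	/* never memcpy() before skb->head: warn and drop instead */
	if (WARN_ON_ONCE(skb_headroom(skb) < hh_alen)) {
		kfree_skb(skb);
		return NET_XMIT_DROP;
	}
	memcpy(skb->data - hh_alen, hh->hh_data, hh_alen);
	__skb_push(skb, hh_len);	/* headroom already verified above */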
2018-12-07 | ipv6: Check available headroom in ip6_xmit() even without options | Stefano Brivio | 1 | -21/+21
Even if we send an IPv6 packet without options, MAX_HEADER might not be enough to account for the additional headroom required by alignment of hardware headers. On a configuration without HYPERV_NET, WLAN, AX25, and with IPV6_TUNNEL, sending short SCTP packets over IPv4 over L2TP over IPv6, we start with 100 bytes of allocated headroom in sctp_packet_transmit(), end up with 54 bytes after l2tp_xmit_skb(), and 14 bytes in ip6_finish_output2(). Those would be enough to append our 14 bytes header, but we're going to align that to 16 bytes, and write 2 bytes out of the allocated slab in neigh_hh_output(). KASan says:

[ 264.967848] ==================================================================
[ 264.967861] BUG: KASAN: slab-out-of-bounds in ip6_finish_output2+0x1aec/0x1c70
[ 264.967866] Write of size 16 at addr 000000006af1c7fe by task netperf/6201
[ 264.967870]
[ 264.967876] CPU: 0 PID: 6201 Comm: netperf Not tainted 4.20.0-rc4+ #1
[ 264.967881] Hardware name: IBM 2827 H43 400 (z/VM 6.4.0)
[ 264.967887] Call Trace:
[ 264.967896] ([<00000000001347d6>] show_stack+0x56/0xa0)
[ 264.967903] [<00000000017e379c>] dump_stack+0x23c/0x290
[ 264.967912] [<00000000007bc594>] print_address_description+0xf4/0x290
[ 264.967919] [<00000000007bc8fc>] kasan_report+0x13c/0x240
[ 264.967927] [<000000000162f5e4>] ip6_finish_output2+0x1aec/0x1c70
[ 264.967935] [<000000000163f890>] ip6_finish_output+0x430/0x7f0
[ 264.967943] [<000000000163fe44>] ip6_output+0x1f4/0x580
[ 264.967953] [<000000000163882a>] ip6_xmit+0xfea/0x1ce8
[ 264.967963] [<00000000017396e2>] inet6_csk_xmit+0x282/0x3f8
[ 264.968033] [<000003ff805fb0ba>] l2tp_xmit_skb+0xe02/0x13e0 [l2tp_core]
[ 264.968037] [<000003ff80631192>] l2tp_eth_dev_xmit+0xda/0x150 [l2tp_eth]
[ 264.968041] [<0000000001220020>] dev_hard_start_xmit+0x268/0x928
[ 264.968069] [<0000000001330e8e>] sch_direct_xmit+0x7ae/0x1350
[ 264.968071] [<000000000122359c>] __dev_queue_xmit+0x2b7c/0x3478
[ 264.968075] [<00000000013d2862>] ip_finish_output2+0xce2/0x11a0
[ 264.968078] [<00000000013d9b14>] ip_finish_output+0x56c/0x8c8
[ 264.968081] [<00000000013ddd1e>] ip_output+0x226/0x4c0
[ 264.968083] [<00000000013dbd6c>] __ip_queue_xmit+0x894/0x1938
[ 264.968100] [<000003ff80bc3a5c>] sctp_packet_transmit+0x29d4/0x3648 [sctp]
[ 264.968116] [<000003ff80b7bf68>] sctp_outq_flush_ctrl.constprop.5+0x8d0/0xe50 [sctp]
[ 264.968131] [<000003ff80b7c716>] sctp_outq_flush+0x22e/0x7d8 [sctp]
[ 264.968146] [<000003ff80b35c68>] sctp_cmd_interpreter.isra.16+0x530/0x6800 [sctp]
[ 264.968161] [<000003ff80b3410a>] sctp_do_sm+0x222/0x648 [sctp]
[ 264.968177] [<000003ff80bbddac>] sctp_primitive_ASSOCIATE+0xbc/0xf8 [sctp]
[ 264.968192] [<000003ff80b93328>] __sctp_connect+0x830/0xc20 [sctp]
[ 264.968208] [<000003ff80bb11ce>] sctp_inet_connect+0x2e6/0x378 [sctp]
[ 264.968212] [<0000000001197942>] __sys_connect+0x21a/0x450
[ 264.968215] [<000000000119aff8>] sys_socketcall+0x3d0/0xb08
[ 264.968218] [<000000000184ea7a>] system_call+0x2a2/0x2c0
[...]

Just like ip_finish_output2() does for IPv4, check that we have enough headroom in ip6_xmit(), and reallocate it if we don't. This issue is older than git history. Reported-by: Jianlin Shi <jishi@redhat.com> Signed-off-by: Stefano Brivio <sbrivio@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
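A sketch of the added check, modelled on the IPv4 counterpart (head_room computation condensed; error handling in the actual patch may differ):

	head_room = sizeof(struct ipv6hdr) + LL_RESERVED_SPACE(dst->dev);
	if (opt)
		head_room += opt->opt_nflen + opt->opt_flen;

	if (unlikely(skb_headroom(skb) < head_room)) {
		struct sk_buff *skb2 = skb_realloc_headroom(skb, head_room);

		if (!skb2) {
			kfree_skb(skb);
			return -ENOBUFS;
		}
		if (skb->sk)
			skb_set_owner_w(skb2, skb->sk);
		consume_skb(skb);
		skb = skb2;
	}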
2018-12-07 | tcp: lack of available data can also cause TSO defer | Eric Dumazet | 1 | -11/+24
tcp_tso_should_defer() can return true in three different cases:

1) We are cwnd-limited
2) We are rwnd-limited
3) We are application-limited.

Neal pointed out that my recent fix went too far, since it assumed that if we were not in case 1), we must be rwnd-limited. Fix this by properly populating the is_cwnd_limited and is_rwnd_limited booleans. After this change, we can finally move the silly check for the FIN flag so it applies only in the application-limited case. The same move for the EOR bit will be handled in net-next, since commit 1c09f7d073b1 ("tcp: do not try to defer skbs with eor mark (MSG_EOR)") is scheduled for linux-4.21. Tested by running 200 concurrent "netperf -t TCP_RR -- -r 60000,100" and checking that none of them was rwnd_limited in the chrono_stat output from the "ss -ti" command. Fixes: 41727549de3e ("tcp: Do not underestimate rwnd_limited") Signed-off-by: Eric Dumazet <edumazet@google.com> Suggested-by: Neal Cardwell <ncardwell@google.com> Reviewed-by: Neal Cardwell <ncardwell@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Reviewed-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 | ipv6: sr: properly initialize flowi6 prior passing to ip6_route_output | Shmulik Ladkani | 1 | -0/+1
In 'seg6_output', stack variable 'struct flowi6 fl6' was missing initialization. Fixes: 6c8702c60b88 ("ipv6: sr: add support for SRH encapsulation and injection with lwtunnels") Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
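The fix amounts to a single line; a sketch of the context (variable names follow seg6_output loosely):

	struct flowi6 fl6;

	memset(&fl6, 0, sizeof(fl6));	/* the missing initialization */
	fl6.daddr = hdr->daddr;		/* destination from the new outer header */
	dst = ip6_route_output(net, NULL, &fl6);

Without the memset, whatever stack garbage happened to be in the unset flowi6 fields leaked into the route lookup.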
2018-12-07 | x86/vdso: Drop implicit common-page-size linker flag | Nick Desaulniers | 1 | -2/+2
GNU linker's -z common-page-size's default value is based on the target architecture. arch/x86/entry/vdso/Makefile sets it to the architecture default, which is implicit and redundant. Drop it. Fixes: 2aae950b21e4 ("x86_64: Add vDSO for x86-64 with gettimeofday/clock_gettime/getcpu") Reported-by: Dmitry Golovin <dima@golovin.in> Reported-by: Bill Wendling <morbo@google.com> Suggested-by: Dmitry Golovin <dima@golovin.in> Suggested-by: Rui Ueyama <ruiu@google.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Andy Lutomirski <luto@kernel.org> Cc: Andi Kleen <andi@firstfloor.org> Cc: Fangrui Song <maskray@google.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20181206191231.192355-1-ndesaulniers@google.com Link: https://bugs.llvm.org/show_bug.cgi?id=38774 Link: https://github.com/ClangBuiltLinux/linux/issues/31
2018-12-07 | arm64: hibernate: Avoid sending cross-calling with interrupts disabled | Will Deacon | 1 | -1/+1
Since commit 3b8c9f1cdfc50 ("arm64: IPI each CPU after invalidating the I-cache for kernel mappings"), a call to flush_icache_range() will use an IPI to cross-call other online CPUs so that any stale instructions are flushed from their pipelines. This triggers a WARN during the hibernation resume path, where flush_icache_range() is called with interrupts disabled and is therefore prone to deadlock:

| Disabling non-boot CPUs ...
| CPU1: shutdown
| psci: CPU1 killed.
| CPU2: shutdown
| psci: CPU2 killed.
| CPU3: shutdown
| psci: CPU3 killed.
| WARNING: CPU: 0 PID: 1 at ../kernel/smp.c:416 smp_call_function_many+0xd4/0x350
| Modules linked in:
| CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.20.0-rc4 #1

Since all secondary CPUs have been taken offline prior to invalidating the I-cache, there's actually no need for an IPI and we can simply call __flush_icache_range() instead. Cc: <stable@vger.kernel.org> Fixes: 3b8c9f1cdfc50 ("arm64: IPI each CPU after invalidating the I-cache for kernel mappings") Reported-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com> Tested-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com> Tested-by: James Morse <james.morse@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-12-07 | blk-mq: punt failed direct issue to dispatch list | Jens Axboe | 1 | -28/+5
After the direct dispatch corruption fix, we permanently disallow direct dispatch of non read/write requests. This works fine off the normal IO path, as they will be retried like any other failed direct dispatch request. But for the blk_insert_cloned_request() that only DM uses to bypass the bottom level scheduler, we always first attempt direct dispatch. For some types of requests, that's now a permanent failure, and no amount of retrying will make that succeed. This results in a livelock. Instead of making special cases for what we can direct issue, and now having to deal with DM solving the livelock while still retaining a BUSY condition feedback loop, always just add a request that has been through ->queue_rq() to the hardware queue dispatch list. These are safe to use as no merging can take place there. Additionally, if requests do have prepped data from drivers, we aren't dependent on them not sharing space in the request structure to safely add them to the IO scheduler lists. This basically reverts ffe81d45322c and is based on a patch from Ming, but with the list insert case covered as well. Fixes: ffe81d45322c ("blk-mq: fix corruption with direct issue") Cc: stable@vger.kernel.org Suggested-by: Ming Lei <ming.lei@redhat.com> Reported-by: Bart Van Assche <bvanassche@acm.org> Tested-by: Ming Lei <ming.lei@redhat.com> Acked-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07 | nvmet-rdma: fix response use after free | Israel Rukshin | 1 | -1/+2
nvmet_rdma_release_rsp() may free the response before using it in the error flow. Fixes: 8407879 ("nvmet-rdma: fix possible bogus dereference under heavy load") Signed-off-by: Israel Rukshin <israelr@mellanox.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Reviewed-by: Max Gurtovoy <maxg@mellanox.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2018-12-07 | nvme: validate controller state before rescheduling keep alive | James Smart | 1 | -1/+9
Delete operations are seeing NULL pointer references in call_timer_fn. Tracking these back, the timer appears to be the keep alive timer. nvme_keep_alive_work(), which is tied to the timer that is cancelled by nvme_stop_keep_alive(), simply starts the keep alive io but doesn't wait for its completion. So nvme_stop_keep_alive() only stops a timer when it's pending. When a keep alive is in flight, there is no timer running and the nvme_stop_keep_alive() will have no effect on the keep alive io. Thus, if the io completes successfully, the keep alive timer will be rescheduled. In the failure case, delete is called, the controller state is changed, the nvme_stop_keep_alive() is called while the io is outstanding, and the delete path continues on. The keep alive happens to successfully complete before the delete paths mark it as aborted as part of the queue termination, so the timer is restarted. The delete paths then tear down the controller, and later on the timer code fires and the timer entry is now corrupt. Fix by validating the controller state before rescheduling the keep alive. Testing with the fix has confirmed the condition above was hit. Signed-off-by: James Smart <jsmart2021@gmail.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Christoph Hellwig <hch@lst.de>
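A condensed sketch of the guard in the keep-alive completion path (state names follow the nvme core; the exact condition in the patch may differ):

static void nvme_keep_alive_end_io(struct request *rq, blk_status_t status)
{
	struct nvme_ctrl *ctrl = rq->end_io_data;

	blk_mq_free_request(rq);

	if (status) {
		dev_err(ctrl->device, "keep-alive failed\n");
		return;
	}

	/* only a live (or still connecting) controller re-arms the timer */
	if (ctrl->state == NVME_CTRL_LIVE ||
	    ctrl->state == NVME_CTRL_CONNECTING)
		schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ);
}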
2018-12-07 | block, bfq: fix decrement of num_active_groups | Paolo Valente | 3 | -25/+107
Since commit '2d29c9f89fcd ("block, bfq: improve asymmetric scenarios detection")', if there are process groups with I/O requests waiting for completion, then BFQ tags the scenario as 'asymmetric'. This detection is needed for preserving service guarantees (for details, see comments on the computation of the variable asymmetric_scenario in the function bfq_better_to_idle). Unfortunately, commit '2d29c9f89fcd ("block, bfq: improve asymmetric scenarios detection")' contains an error exactly in the updating of the number of groups with I/O requests waiting for completion: if a group has more than one descendant process, then the above number of groups, which is renamed from num_active_groups to a more appropriate num_groups_with_pending_reqs by this commit, may happen to be wrongly decremented multiple times, namely every time one of the descendant processes gets all its pending I/O requests completed. A correct, complete solution should work as follows. Consider a group that is inactive, i.e., that has no descendant process with pending I/O inside BFQ queues. Then suppose that num_groups_with_pending_reqs is still accounting for this group, because the group still has some descendant process with some I/O request still in flight. num_groups_with_pending_reqs should be decremented when the in-flight request of the last descendant process is finally completed (assuming that nothing else has changed for the group in the meantime, in terms of composition of the group and active/inactive state of child groups and processes). To accomplish this, an additional pending-request counter must be added to entities, and must be updated correctly. To avoid this additional field and operations, this commit resorts to the following tradeoff between simplicity and accuracy: for an inactive group that is still counted in num_groups_with_pending_reqs, this commit decrements num_groups_with_pending_reqs when the first descendant process of the group remains with no request waiting for completion. This simplified scheme provides a fix to the unbalanced decrements introduced by 2d29c9f89fcd. Since this error was also caused by lack of comments on this non-trivial issue, this commit also adds related comments. Fixes: 2d29c9f89fcd ("block, bfq: improve asymmetric scenarios detection") Reported-by: Steven Barrett <steven@liquorix.net> Tested-by: Steven Barrett <steven@liquorix.net> Tested-by: Lucjan Lucjanov <lucjan.lucjanov@gmail.com> Reviewed-by: Federico Motta <federico@willer.it> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07 | CIFS: Avoid returning EBUSY to upper layer VFS | Long Li | 1 | -25/+6
EBUSY is not handled by VFS, and will be passed to user-mode. This is not correct as we need to wait for more credits. This patch also fixes a bug where rsize or wsize is used uninitialized when the call to server->ops->wait_mtu_credits() fails. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Long Li <longli@microsoft.com> Signed-off-by: Steve French <stfrench@microsoft.com> Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
2018-12-07 | crypto: user - Disable statistics interface | Herbert Xu | 1 | -1/+1
Since this user-space API is still undergoing significant changes, this patch disables it for the current merge window. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-12-06 | i2c: uniphier-f: fix violation of tLOW requirement for Fast-mode | Masahiro Yamada | 1 | -1/+18
Currently, the clock duty is set as tLOW/tHIGH = 1/1. For Fast-mode, tLOW is set to 1.25 us while the I2C spec requires tLOW >= 1.3 us. tLOW/tHIGH = 5/4 would meet both Standard-mode and Fast-mode: Standard-mode: tLOW = 5.56 us, tHIGH = 4.44 us Fast-mode: tLOW = 1.39 us, tHIGH = 1.11 us Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
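A quick arithmetic check of the figures quoted above (a comment-style worked example, not driver code):

/* SCL period = 1 / bus rate; split 5:4 between low and high:
 *   Standard-mode: 1 / 100 kHz = 10.0 us -> tLOW = 10.0 * 5/9 = 5.56 us,
 *                                           tHIGH = 10.0 * 4/9 = 4.44 us
 *   Fast-mode:     1 / 400 kHz =  2.5 us -> tLOW =  2.5 * 5/9 = 1.39 us,
 *                                           tHIGH =  2.5 * 4/9 = 1.11 us
 * so tLOW clears the 1.3 us Fast-mode minimum with margin.
 */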
2018-12-06 | i2c: uniphier: fix violation of tLOW requirement for Fast-mode | Masahiro Yamada | 1 | -1/+7
Currently, the clock duty is set as tLOW/tHIGH = 1/1. For Fast-mode, tLOW is set to 1.25 us while the I2C spec requires tLOW >= 1.3 us. tLOW/tHIGH = 5/4 would meet both Standard-mode and Fast-mode: Standard-mode: tLOW = 5.56 us, tHIGH = 4.44 us Fast-mode: tLOW = 1.39 us, tHIGH = 1.11 us Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2018-12-06 | i2c: uniphier-f: fill TX-FIFO only in IRQ handler for repeated START | Masahiro Yamada | 1 | -4/+9
- For a repeated START condition, this controller starts data transfer immediately after the slave address is written to the TX-FIFO.
- Once the TX-FIFO empty interrupt is asserted, the controller makes a pause even if additional data are written to the TX-FIFO.

Given those circumstances, the data after a repeated START may not be transferred if the interrupt is asserted while the TX-FIFO is being filled up. A more reliable way is to append TX data only in the interrupt handler. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2018-12-06 | i2c: uniphier-f: fix timeout error after reading 8 bytes | Masahiro Yamada | 1 | -3/+14
I was totally screwed up in commit eaba68785c2d ("i2c: uniphier-f: fix race condition when IRQ is cleared"). Since that commit, if the number of read bytes is a multiple of the FIFO size (8, 16, 24... bytes), the STOP condition could be issued twice, depending on the timing. If this happens, the controller will go wrong, resulting in the timeout error. It was more than 3 years ago when I wrote this driver, so my memory about this hardware was vague. Please let me correct the description in the commit log of eaba68785c2d. Clearing the IRQ status on exiting the IRQ handler is absolutely fine. This controller makes a pause while any IRQ status is asserted. If the IRQ status is cleared first, the hardware may start the next transaction before the IRQ handler finishes what it is supposed to do. This partially reverts the bad commit with clear comments so that I will never repeat this mistake. I also investigated what is happening at the last moment of the read mode. The UNIPHIER_FI2C_INT_RF interrupt is asserted a bit earlier (by half a period of the clock cycle) than UNIPHIER_FI2C_INT_RB. I consulted a hardware engineer, and I got the following information:

UNIPHIER_FI2C_INT_RF: asserted at the falling edge of SCL at the 8th bit.
UNIPHIER_FI2C_INT_RB: asserted at the rising edge of SCL at the 9th (ACK) bit.

In order to avoid calling uniphier_fi2c_stop() twice, check the latter interrupt. I also commented this because it is an obscure hardware internal. Fixes: eaba68785c2d ("i2c: uniphier-f: fix race condition when IRQ is cleared") Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2018-12-06 | i2c: scmi: Fix probe error on devices with an empty SMB0001 ACPI device node | Hans de Goede | 1 | -3/+7
Some AMD based HP laptops have a SMB0001 ACPI device node which does not define any methods. This leads to the following error in dmesg:

[ 5.222731] cmi: probe of SMB0001:00 failed with error -5

This commit makes acpi_smbus_cmi_add() return -ENODEV in this case instead, silencing the error. In case of a failure of the i2c_add_adapter() call, this commit now propagates the error from that call instead of -EIO. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
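A condensed sketch of the reworked probe (structure simplified; the setup step is folded into a hypothetical helper):

static int acpi_smbus_cmi_add(struct acpi_device *device)
{
	struct acpi_smbus_cmi *smbus_cmi = acpi_smbus_cmi_setup(device); /* hypothetical helper */

	if (!smbus_cmi->cap_info)	/* SMB0001 node defines no methods */
		return -ENODEV;		/* decline quietly instead of logging -EIO */

	/* propagate the real error rather than flattening it to -EIO */
	return i2c_add_adapter(&smbus_cmi->adapter);
}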
2018-12-06 | i2c: axxia: properly handle master timeout | Adamski, Krzysztof (Nokia - PL/Wroclaw) | 1 | -11/+29
According to the Intel (R) Axxia TM Lionfish Communication Processor Peripheral Subsystem Hardware Reference Manual, the AXXIA I2C module has a programmable Master Wait Timer, which, among others, checks the time between commands sent in manual mode. When a timeout (25ms) passes, the TSS bit is set in the Master Interrupt Status register and a Stop command is issued by the hardware. The axxia_i2c_xfer(), however, does not properly handle this situation. For each message a separate axxia_i2c_xfer_msg() is called and this function incorrectly assumes that any interrupt might happen only when waiting for completion. This is mostly correct but there is one exception - a master timeout can trigger if enough time has passed between individual transfers. It will, by definition, happen between transfers when the interrupts are disabled by the code. If that happens, the hardware issues a Stop command. The interrupt indicating timeout will not be triggered as soon as we enable them since the Master Interrupt Status is cleared when master mode is entered again (which happens before enabling irqs), meaning this error is lost and the transfer is continued even though the Stop was issued on the bus. The subsequent operations complete without error but a bogus value (0xFF in case of read) is read as the client device is confused because of the aborted transfer. No error is returned from master_xfer(), making the caller believe that a valid value was read. To fix the problem, the TSS bit (indicating timeout) in the Master Interrupt Status register is checked before each transfer. If it is set, there was a timeout before this transfer and (as described above) the hardware already issued a Stop command, so the transaction should be aborted and -ETIMEDOUT returned from the master_xfer() callback. In order to be sure no timeout was issued, we can't just read the status right before starting a new transaction, as there will always be a small window of time (a few CPU cycles at best) where this might still happen. For this reason we have to temporarily disable the timer before checking for the TSS bit. Disabling it will, however, clear the TSS bit, so in order to preserve that information, we have to read it in the ISR, and we have to ensure that the TSS interrupt is not masked between transfers of one transaction. There is no need to call bus recovery or controller reinitialization if that happens, so it's skipped. Signed-off-by: Krzysztof Adamski <krzysztof.adamski@nokia.com> Reviewed-by: Alexander Sverdlin <alexander.sverdlin@nokia.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
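In outline, the pre-transfer check looks like this (a sketch; MST_STATUS_TSS follows the driver's register-bit naming, and the latched status is assumed to have been saved by the ISR):

	/* The Master Wait Timer expired between transfers: the hardware
	 * already issued Stop, so abort instead of reading bogus data. */
	if (idev->mst_status & MST_STATUS_TSS)
		return -ETIMEDOUT;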
2018-12-06 | mlxsw: spectrum_switchdev: Fix VLAN device deletion via ioctl | Ido Schimmel | 1 | -2/+8
When deleting a VLAN device using an ioctl the netdev is unregistered before the VLAN filter is updated via ndo_vlan_rx_kill_vid(). It can lead to a use-after-free in mlxsw in case the VLAN device is deleted while being enslaved to a bridge. The reason for the above is that when mlxsw receives the CHANGEUPPER event, it wrongly assumes that the VLAN device is no longer its upper and thus destroys the internal representation of the bridge port despite the reference count being non-zero. Fix this by checking if the VLAN device is our upper using its real device. In net-next I'm going to remove this trick and instead make mlxsw completely agnostic to the order of the events. Fixes: c57529e1d5d8 ("mlxsw: spectrum: Replace vPorts with Port-VLAN") Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-06 | mlxsw: spectrum_router: Relax GRE decap matching check | Nir Dotan | 1 | -4/+1
GRE decap offload is configured when a local route's prefix corresponds to the local address of one of the offloaded GRE tunnels. The matching check was found to be too strict, such that for a flat GRE configuration, in which the overlay and underlay traffic share the same non-default VRF, the decap flow was not offloaded. Relax the check for decap flow offloading: a match occurs if the local address of the tunnel matches the local route address while both share the same VRF table. Fixes: 4607f6d26950 ("mlxsw: spectrum_router: Support IPv4 underlay decap") Signed-off-by: Nir Dotan <nird@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-06 | mlxsw: spectrum_switchdev: Avoid leaking FID's reference count | Ido Schimmel | 1 | -2/+5
It should never be possible for a user to set a VNI on a FID in case one is already set. The driver therefore returns an error, but fails to drop the reference count taken earlier when calling mlxsw_sp_fid_8021d_lookup(). Drop the reference when this unlikely error is hit. Fixes: 1c30d1836aeb ("mlxsw: spectrum: Enable VxLAN enslavement to bridges") Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-06 | mlxsw: spectrum_nve: Remove easily triggerable warnings | Ido Schimmel | 1 | -2/+2
It is possible to trigger a warning in mlxsw in case a flood entry which mlxsw is not aware of is deleted from the VxLAN device. This is because mlxsw expects to find the flood entry in a singly linked list. Fix by removing these warnings for now. They will be re-added in the next release, after we teach mlxsw to ask for a dump of FDB entries from the VxLAN device once it is enslaved to a bridge mlxsw cares about. Fixes: 6e6030bd5412 ("mlxsw: spectrum_nve: Implement common NVE core") Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-06 | vhost/vsock: fix use-after-free in network stack callers | Stefan Hajnoczi | 1 | -24/+33
If the network stack calls .send_pkt()/.cancel_pkt() during .release(), a struct vhost_vsock use-after-free is possible. This occurs because .release() does not wait for other CPUs to stop using struct vhost_vsock. Switch to an RCU-enabled hashtable (indexed by guest CID) so that .release() can wait for other CPUs by calling synchronize_rcu(). This also eliminates vhost_vsock_lock acquisition in the data path so it could have a positive effect on performance. This is CVE-2018-14625 "kernel: use-after-free Read in vhost_transport_send_pkt". Cc: stable@vger.kernel.org Reported-and-tested-by: syzbot+bd391451452fb0b93039@syzkaller.appspotmail.com Reported-by: syzbot+e3e074963495f92a89ed@syzkaller.appspotmail.com Reported-by: syzbot+d5a0a170c5069658b141@syzkaller.appspotmail.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com>
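A sketch of the RCU-side lookup (close to the pattern described above; the writer side, which hash_del_rcu()s the entry and calls synchronize_rcu() in .release() before freeing, is omitted, and the hash member name is assumed):

static DEFINE_READ_MOSTLY_HASHTABLE(vhost_vsock_hash, 8);

/* Callers must hold rcu_read_lock() across use of the returned vsock. */
static struct vhost_vsock *vhost_vsock_get(u32 guest_cid)
{
	struct vhost_vsock *vsock;

	hash_for_each_possible_rcu(vhost_vsock_hash, vsock, hash, guest_cid) {
		if (vsock->guest_cid == guest_cid)
			return vsock;
	}
	return NULL;
}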
2018-12-06 | virtio/s390: fix race in ccw_io_helper() | Halil Pasic | 1 | -1/+6
While ccw_io_helper() seems to be intended to be exclusive, in the sense that it is supposed to facilitate I/O for at most one thread at any given time, there is actually nothing ensuring that threads won't pile up at vcdev->wait_q. If they do, all threads get woken up and see the status that belongs to some other request than their own. This can lead to bugs. For an example see: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1788432 This race normally does not cause any problems. The operations provided by struct virtio_config_ops are usually invoked in a well defined sequence, normally don't fail, and are normally used quite infrequently too. Yet, if some of these operations are directly triggered via sysfs attributes, like in the case described by the referenced bug, userspace is given an opportunity to force races by increasing the frequency of the given operations. Let us fix the problem by ensuring that, for each device, we finish processing the previous request before starting with a new one. Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reported-by: Colin Ian King <colin.king@canonical.com> Cc: stable@vger.kernel.org Message-Id: <20180925121309.58524-3-pasic@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2018-12-06 | virtio/s390: avoid race on vcdev->config | Halil Pasic | 1 | -2/+8
Currently we have a race on vcdev->config in virtio_ccw_get_config() and in virtio_ccw_set_config(). This normally does not cause problems, as these are usually infrequent operations. However, for some devices writing to/reading from the config space can be triggered through sysfs attributes. For these, userspace can force the race by increasing the frequency. Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Cc: stable@vger.kernel.org Message-Id: <20180925121309.58524-2-pasic@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2018-12-06 | vhost/vsock: fix reset orphans race with close timeout | Stefan Hajnoczi | 1 | -7/+15
If a local process has closed a connected socket and hasn't received a RST packet yet, then the socket remains in the table until a timeout expires. When a vhost_vsock instance is released with the timeout still pending, the socket is never freed because vhost_vsock has already set the SOCK_DONE flag. Check if the close timer is pending and let it close the socket. This prevents the race which can leak sockets. Reported-by: Maximilian Riemensberger <riemensberger@cadami.net> Cc: Graham Whaley <graham.whaley@gmail.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
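A condensed sketch of the reset-orphans path with the new check (field names follow the vsock core; simplified):

static void vhost_vsock_reset_orphans(struct sock *sk)
{
	struct vsock_sock *vsk = vsock_sk(sk);

	if (vhost_vsock_get(vsk->remote_addr.svm_cid))
		return;		/* the vhost_vsock instance still exists */

	/* a pending close timer owns the final cleanup: let it run
	 * instead of setting SOCK_DONE here and leaking the socket */
	if (vsk->close_work_scheduled)
		return;

	sock_set_flag(sk, SOCK_DONE);
	vsk->peer_shutdown = SHUTDOWN_MASK;
	sk->sk_state = SS_UNCONNECTED;
	sk->sk_err = ECONNRESET;
	sk->sk_error_report(sk);
}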
2018-12-06 | dmaengine: dw: Fix FIFO size for Intel Merrifield | Andy Shevchenko | 1 | -3/+3
Intel Merrifield has a reduced size of FIFO used in iDMA 32-bit controller, i.e. 512 bytes instead of 1024. Fix this by partitioning it as 64 bytes per channel. Note, in the future we might switch to 'fifo-size' property instead of hard coded value. Fixes: 199244d69458 ("dmaengine: dw: add support of iDMA 32-bit hardware") Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: stable@vger.kernel.org Signed-off-by: Vinod Koul <vkoul@kernel.org>
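The partitioning arithmetic as a sketch (macro names are hypothetical; the driver currently hard-codes the per-channel value):

#define IDMA32_TOTAL_FIFO_SIZE	512	/* Merrifield iDMA 32-bit, not 1024 */
#define IDMA32_NR_CHANNELS	8
/* 512 / 8 = 64 bytes of FIFO per channel */
#define IDMA32_FIFO_PER_CHANNEL	(IDMA32_TOTAL_FIFO_SIZE / IDMA32_NR_CHANNELS)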
2018-12-06 | stackleak: Register the 'stackleak_cleanup' pass before the '*free_cfg' pass | Alexander Popov | 1 | -3/+5
Currently the 'stackleak_cleanup' pass deleting a CALL insn is executed after the 'reload' pass. That allows gcc to do some weird optimization in function prologues and epilogues, which are generated later [1]. Let's avoid that by registering the 'stackleak_cleanup' pass before the '*free_cfg' pass. It's the moment when the stack frame size is already final, function prologues and epilogues are generated, and the machine-dependent code transformations are not done. [1] https://www.openwall.com/lists/kernel-hardening/2018/11/23/2 Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Alexander Popov <alex.popov@linux.com> Signed-off-by: Kees Cook <keescook@chromium.org>
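Roughly, in terms of the kernel's gcc-plugin helpers, the registration now looks like this (a hedged sketch; PASS_INFO and register_callback are the stock plugin API, and the pass was previously anchored after "reload"):

PASS_INFO(stackleak_cleanup, "*free_cfg", 1, PASS_POS_INSERT_BEFORE);

__visible int plugin_init(struct plugin_name_args *plugin_info,
			  struct plugin_gcc_version *version)
{
	/* version checks and argument parsing elided */
	register_callback(plugin_info->base_name, PLUGIN_PASS_MANAGER_SETUP,
			  NULL, &stackleak_cleanup_pass_info);
	return 0;
}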
2018-12-06 | ARM: ensure that processor vtables is not lost after boot | Russell King | 1 | -0/+10
Marek Szyprowski reported problems with CPU hotplug in current kernels. This was tracked down to the processor vtables being located in an init section, and therefore discarded after kernel boot, despite being required after boot to properly initialise the non-boot CPUs. Arrange for these tables to end up in .rodata when required. Reported-by: Marek Szyprowski <m.szyprowski@samsung.com> Tested-by: Krzysztof Kozlowski <krzk@kernel.org> Fixes: 383fb3ee8024 ("ARM: spectre-v2: per-CPU vtables to work around big.Little systems") Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-12-06 | MAINTAINERS: exclude gnss from SIRFPRIMA2 regex matching | Johan Hovold | 1 | -0/+1
Exclude the gnss subsystem from SIRFPRIMA2 regex matching, which would otherwise match the unrelated gnss sirf driver. Cc: Barry Song <baohua@kernel.org> Signed-off-by: Johan Hovold <johan@kernel.org>
2018-12-06 | MAINTAINERS: add gnss scm tree | Johan Hovold | 1 | -0/+1
Add SCM tree for the gnss subsystem. Signed-off-by: Johan Hovold <johan@kernel.org>
2018-12-06 | gnss: sirf: fix activation retry handling | Johan Hovold | 1 | -3/+3
Fix activation helper which would return -ETIMEDOUT even if the last retry attempt was successful. Also change the semantics of the retries variable so that it actually holds the number of retries (rather than tries). Fixes: d2efbbd18b1e ("gnss: add driver for sirfstar-based receivers") Cc: stable <stable@vger.kernel.org> # 4.19 Signed-off-by: Johan Hovold <johan@kernel.org>
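A sketch of the corrected loop (helper names borrowed loosely from the driver and partly hypothetical; the point is that success on the final attempt returns 0 instead of falling through to -ETIMEDOUT):

static int sirf_set_active(struct sirf_data *data, bool active)
{
	int retries = SIRF_ACTIVATE_RETRIES;	/* now genuinely retries, not tries */
	int ret;

	do {
		sirf_pulse_on_off(data);	/* hypothetical toggle helper */
		ret = sirf_wait_for_init(data, active);
	} while (ret == -ETIMEDOUT && retries-- > 0);

	return ret;	/* 0 even when the last retry succeeded */
}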