path: root/mm
Age | Commit message | Author | Files, Lines
2018-11-30 | userfaultfd: shmem: UFFDIO_COPY: set the page dirty if VM_WRITE is not set | Andrea Arcangeli | 1 file, -0/+11
Set the page dirty if VM_WRITE is not set, because in such a case the pte won't be marked dirty and the page would be reclaimed without writepage (i.e. swapout in the shmem case). This was found by source review. Most apps (certainly including QEMU) only use UFFDIO_COPY on PROT_READ|PROT_WRITE mappings or the app can't modify the memory in the first place. This is for correctness, and it could help the non-cooperative use case to avoid unexpected data loss. Link: http://lkml.kernel.org/r/20181126173452.26955-6-aarcange@redhat.com Reviewed-by: Hugh Dickins <hughd@google.com> Cc: stable@vger.kernel.org Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support") Reported-by: Hugh Dickins <hughd@google.com> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com> Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com> Cc: Jann Horn <jannh@google.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Mike Rapoport <rppt@linux.ibm.com> Cc: Peter Xu <peterx@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
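A minimal sketch of the shape of the fix, assuming the usual pte helpers (a simplification of shmem_mcopy_atomic_pte(), not the verbatim patch):

    _dst_pte = mk_pte(page, dst_vma->vm_page_prot);
    if (dst_vma->vm_flags & VM_WRITE)
            _dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
    else
            /*
             * A write-protected pte is never hardware-dirtied, so
             * dirty the page explicitly or reclaim may drop it
             * without ever writing it out.
             */
            set_page_dirty(page);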
2018-11-30 | userfaultfd: shmem: add i_size checks | Andrea Arcangeli | 2 files, -4/+40
With MAP_SHARED: recheck the i_size after taking the PT lock, to serialize against truncate with the PT lock. Delete the page from the pagecache if the i_size_read check fails. With MAP_PRIVATE: check the i_size after the PT lock, before mapping anonymous memory or zeropages into the MAP_PRIVATE shmem mapping. A mostly irrelevant cleanup: just as the delete_from_page_cache() pagecache removal is done after dropping the PT lock, drop the PT lock (a spinlock) before taking the sleepable page lock. Link: http://lkml.kernel.org/r/20181126173452.26955-5-aarcange@redhat.com Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support") Signed-off-by: Andrea Arcangeli <aarcange@redhat.com> Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> Reviewed-by: Hugh Dickins <hughd@google.com> Reported-by: Jann Horn <jannh@google.com> Cc: <stable@vger.kernel.org> Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Peter Xu <peterx@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
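A hedged sketch of the recheck under the PT lock (the label and variable names are assumptions, not the verbatim patch):

    pgoff_t offset, max_off;

    offset = linear_page_index(dst_vma, dst_addr);
    max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
    ret = -EFAULT;
    if (unlikely(offset >= max_off))
            goto out_release_uncharge_unlock;   /* raced with truncate */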
2018-11-30 | userfaultfd: shmem/hugetlbfs: only allow to register VM_MAYWRITE vmas | Andrea Arcangeli | 2 files, -9/+21
After the VMA to register the uffd onto is found, check that it has VM_MAYWRITE set before allowing registration. This way we inherit all common code checks before allowing to fill file holes in shmem and hugetlbfs with UFFDIO_COPY. The userfaultfd memory model is not applicable for readonly files unless it's a MAP_PRIVATE. Link: http://lkml.kernel.org/r/20181126173452.26955-4-aarcange@redhat.com Fixes: ff62a3421044 ("hugetlb: implement memfd sealing") Signed-off-by: Andrea Arcangeli <aarcange@redhat.com> Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> Reviewed-by: Hugh Dickins <hughd@google.com> Reported-by: Jann Horn <jannh@google.com> Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support") Cc: <stable@vger.kernel.org> Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Peter Xu <peterx@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
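The check itself is small; a sketch of what the registration path plausibly gained (surrounding error handling is an assumption):

    /* in the vma validation loop of userfaultfd_register() */
    if (!(cur->vm_flags & VM_MAYWRITE))
            goto out_unlock;    /* ret already holds an error here */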
2018-11-30 | userfaultfd: shmem: allocate anonymous memory for MAP_PRIVATE shmem | Andrea Arcangeli | 1 file, -2/+13
Userfaultfd did not create private memory when UFFDIO_COPY was invoked on a MAP_PRIVATE shmem mapping. Instead it wrote to the shmem file, even when that had not been opened for writing. Though, fortunately, that could only happen where there was a hole in the file. Fix the shmem-backed implementation of UFFDIO_COPY to create private memory for MAP_PRIVATE mappings. The hugetlbfs-backed implementation was already correct. This change is visible to userland, if userfaultfd has been used in unintended ways: so it introduces a small risk of incompatibility, but is necessary in order to respect file permissions. An app that uses UFFDIO_COPY for anything like postcopy live migration won't notice the difference, and in fact it'll run faster because there will be no copy-on-write and memory waste in the tmpfs pagecache anymore. Userfaults on MAP_PRIVATE shmem keep triggering only on file holes like before. The real zeropage can also be built on a MAP_PRIVATE shmem mapping through UFFDIO_ZEROPAGE and that's safe because the zeropage pte is never dirty, in turn even an mprotect upgrading the vma permission from PROT_READ to PROT_READ|PROT_WRITE won't make the zeropage pte writable. Link: http://lkml.kernel.org/r/20181126173452.26955-3-aarcange@redhat.com Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support") Signed-off-by: Andrea Arcangeli <aarcange@redhat.com> Reported-by: Mike Rapoport <rppt@linux.ibm.com> Reviewed-by: Hugh Dickins <hughd@google.com> Cc: <stable@vger.kernel.org> Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com> Cc: Jann Horn <jannh@google.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Peter Xu <peterx@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-11-30 | userfaultfd: use ENOENT instead of EFAULT if the atomic copy user fails | Andrea Arcangeli | 3 files, -5/+5
Patch series "userfaultfd shmem updates". Jann found two bugs in the userfaultfd shmem MAP_SHARED backend: the lack of the VM_MAYWRITE check and the lack of i_size checks. Then looking into the above we also fixed the MAP_PRIVATE case. Hugh by source review also found a data loss source if UFFDIO_COPY is used on shmem MAP_SHARED PROT_READ mappings (the production usages incidentally run with PROT_READ|PROT_WRITE, so the data loss couldn't happen in those production usages like with QEMU). The whole patchset is marked for stable. We verified QEMU postcopy live migration with guest running on shmem MAP_PRIVATE run as well as before after the fix of shmem MAP_PRIVATE. Regardless if it's shmem or hugetlbfs or MAP_PRIVATE or MAP_SHARED, QEMU unconditionally invokes a punch hole if the guest mapping is filebacked and a MADV_DONTNEED too (needed to get rid of the MAP_PRIVATE COWs and for the anon backend). This patch (of 5): We internally used EFAULT to communicate with the caller, switch to ENOENT, so EFAULT can be used as a non internal retval. Link: http://lkml.kernel.org/r/20181126173452.26955-2-aarcange@redhat.com Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support") Signed-off-by: Andrea Arcangeli <aarcange@redhat.com> Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> Reviewed-by: Hugh Dickins <hughd@google.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Jann Horn <jannh@google.com> Cc: Peter Xu <peterx@redhat.com> Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com> Cc: <stable@vger.kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-11-30 | lib/test_kmod.c: fix rmmod double free | Luis Chamberlain | 1 file, -1/+0
We free the misc device string twice on rmmod; fix this. Without this we cannot remove the module without crashing. Link: http://lkml.kernel.org/r/20181124050500.5257-1-mcgrof@kernel.org Signed-off-by: Luis Chamberlain <mcgrof@kernel.org> Reported-by: Randy Dunlap <rdunlap@infradead.org> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Cc: <stable@vger.kernel.org> [4.12+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-11-30 | hfsplus: do not free node before using | Pan Bian | 1 file, -1/+2
hfs_bmap_free() frees the node via hfs_bnode_put(node). However, it then reads node->this when dumping an error message on an error path, which may result in a use-after-free bug. This patch frees the node only after its last use. Link: http://lkml.kernel.org/r/1543053441-66942-1-git-send-email-bianpan2016@163.com Signed-off-by: Pan Bian <bianpan2016@163.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Cc: Ernesto A. Fernandez <ernesto.mnd.fernandez@gmail.com> Cc: Joe Perches <joe@perches.com> Cc: Viacheslav Dubeyko <slava@dubeyko.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-11-30 | hfs: do not free node before using | Pan Bian | 1 file, -1/+2
hfs_bmap_free() frees the node via hfs_bnode_put(node). However, it then reads node->this when dumping error message on an error path, which may result in a use-after-free bug. This patch frees the node only when it is never again used. Link: http://lkml.kernel.org/r/1542963889-128825-1-git-send-email-bianpan2016@163.com Fixes: a1185ffa2fc ("HFS rewrite") Signed-off-by: Pan Bian <bianpan2016@163.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Cc: Joe Perches <joe@perches.com> Cc: Ernesto A. Fernandez <ernesto.mnd.fernandez@gmail.com> Cc: Viacheslav Dubeyko <slava@dubeyko.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
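Both the hfs and hfsplus fixes have the same shape: do the last read of node->this before the reference is dropped. A hedged sketch of the error path (message text and condition simplified):

    if (!m) {
            /* last use of node->this happens before the put */
            pr_crit("unable to free bnode %u. bmap not found!\n",
                    node->this);
            hfs_bnode_put(node);    /* may free node, so it comes last */
            return;
    }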
2018-11-30 | proc: update MAINTAINERS with proc.txt | Alexey Dobriyan | 1 file, -0/+1
Turns out that /proc has official documentation and people are even trying to keep it up to date. Link: http://lkml.kernel.org/r/20181116134630.GA8004@avx2 Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-11-30 | mm/page_alloc.c: fix calculation of pgdat->nr_zones | Wei Yang | 1 file, -1/+3
init_currently_empty_zone() will adjust pgdat->nr_zones and set it to 'zone_idx(zone) + 1' unconditionally. This is correct in the normal case, but not exact in the hot-plug situation. This function is used in two places:

  * free_area_init_core()
  * move_pfn_range_to_zone()

In the first case, we are sure the zone index increases monotonically. In the second one, it is under the user's control. One way to reproduce this is:

1. create a virtual machine with an empty node1

     -m 4G,slots=32,maxmem=32G \
     -smp 4,maxcpus=8 \
     -numa node,nodeid=0,mem=4G,cpus=0-3 \
     -numa node,nodeid=1,mem=0G,cpus=4-7

2. hot-add cpus 3-7

     cpu-add [3-7]

3. hot-add memory to node1

     object_add memory-backend-ram,id=ram0,size=1G
     device_add pc-dimm,id=dimm0,memdev=ram0,node=1

4. online the memory in the following order

     echo online_movable > memory47/state
     echo online > memory40/state

After this, node1 will have its nr_zones equal to (ZONE_NORMAL + 1) instead of (ZONE_MOVABLE + 1).

Michal said: "Having an incorrect nr_zones might result in all sorts of problems which would be quite hard to debug (e.g. reclaim not considering the movable zone). I do not expect many users would suffer from this, but still this is trivial and obviously the right thing to do, so backporting to the stable tree shouldn't be harmful (last famous words)"

Link: http://lkml.kernel.org/r/20181117022022.9956-1-richard.weiyang@gmail.com
Fixes: f1dd2cd13c4b ("mm, memory_hotplug: do not associate hotadded memory to zones until online")
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
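A sketch of the guarded, grow-only update in init_currently_empty_zone() (assuming this mirrors the fix):

    if (zone_idx(zone) + 1 > pgdat->nr_zones)
            pgdat->nr_zones = zone_idx(zone) + 1;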
2018-11-30 | mm: use swp_offset as key in shmem_replace_page() | Yu Zhao | 1 file, -2/+4
We changed the key of swap cache tree from swp_entry_t.val to swp_offset. We need to do so in shmem_replace_page() as well. Hugh said: "shmem_replace_page() has been wrong since the day I wrote it: good enough to work on swap "type" 0, which is all most people ever use (especially those few who need shmem_replace_page() at all), but broken once there are any non-0 swp_type bits set in the higher order bits" Link: http://lkml.kernel.org/r/20181121215442.138545-1-yuzhao@google.com Fixes: f6ab1f7f6b2d ("mm, swap: use offset of swap entry as key of swap cache") Signed-off-by: Yu Zhao <yuzhao@google.com> Reviewed-by: Matthew Wilcox <willy@infradead.org> Acked-by: Hugh Dickins <hughd@google.com> Cc: <stable@vger.kernel.org> [4.9+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
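The change is essentially one of key derivation; a hedged sketch of the lookup key in shmem_replace_page() (simplified):

    swp_entry_t entry;
    pgoff_t swap_index;

    entry.val = page_private(oldpage);
    swap_index = swp_offset(entry);  /* was: page_private(oldpage) used as the key directly */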
2018-11-30 | mm: cleancache: fix corruption on missed inode invalidation | Pavel Tikhomirov | 1 file, -2/+6
If all pages are deleted from the mapping by memory reclaim and also moved to the cleancache:

  __delete_from_page_cache (no shadow case)
    unaccount_page_cache_page
      cleancache_put_page
    page_cache_delete
      mapping->nrpages -= nr   (nrpages becomes 0)

We don't clean the cleancache for an inode after final file truncation (removal):

  truncate_inode_pages_final
    check (nrpages || nrexceptional) is false
    no truncate_inode_pages
    no cleancache_invalidate_inode(mapping)

This way, when reading a new file created with the same inode, we may get these leftover trash pages from the cleancache and see wrong data instead of the contents of the new file.

Fix it by always doing truncate_inode_pages, which is already prepared for the nrpages == 0 && nrexceptional == 0 case and just invalidates the inode.

[akpm@linux-foundation.org: add comment, per Jan]
Link: http://lkml.kernel.org/r/20181112095734.17979-1-ptikhomirov@virtuozzo.com
Fixes: 91b0abe36a7b ("mm + fs: store shadow entries in page cache")
Signed-off-by: Pavel Tikhomirov <ptikhomirov@virtuozzo.com>
Reviewed-by: Vasily Averin <vvs@virtuozzo.com>
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
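A hedged sketch of the fixed final-truncation path (simplified; the real function also handles ordering against page reclaim):

    void truncate_inode_pages_final(struct address_space *mapping)
    {
            ...
            /*
             * Run the truncation unconditionally so that
             * cleancache_invalidate_inode() is reached even when
             * nrpages == 0 && nrexceptional == 0.
             */
            truncate_inode_pages(mapping, 0);
    }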
2018-11-30 | ocfs2: fix deadlock caused by ocfs2_defrag_extent() | Larry Chen | 1 file, -21/+26
ocfs2_defrag_extent() may fall into deadlock:

  ocfs2_ioctl_move_extents
    ocfs2_move_extents
      ocfs2_defrag_extent
        ocfs2_lock_allocators_move_extents
          ocfs2_reserve_clusters
            inode_lock GLOBAL_BITMAP_SYSTEM_INODE
        __ocfs2_flush_truncate_log
          inode_lock GLOBAL_BITMAP_SYSTEM_INODE

As the call chain above shows, ocfs2_reserve_clusters() will call inode_lock against the global bitmap if the local allocator has insufficient clusters. Once the global bitmap can meet the demand, ocfs2_reserve_clusters() returns success with the global bitmap locked.

After ocfs2_reserve_clusters(), if the truncate log is full, __ocfs2_flush_truncate_log() will definitely fall into deadlock, because it needs to inode_lock the global bitmap, which has already been locked.

To fix this bug, we remove from ocfs2_lock_allocators_move_extents() the code which intends to lock the global allocator, and put the removed code after __ocfs2_flush_truncate_log(). ocfs2_lock_allocators_move_extents() is referred to from 2 places; one is here, the other does not need the data allocator context, which means this patch does not affect the other caller.

Link: http://lkml.kernel.org/r/20181101071422.14470-1-lchen@suse.com
Signed-off-by: Larry Chen <lchen@suse.com>
Reviewed-by: Changwei Ge <ge.changwei@h3c.com>
Cc: Mark Fasheh <mark@fasheh.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Joseph Qi <jiangqi903@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-11-30 | mm/gup: finish consolidating error handling | John Hubbard | 1 file, -2/+1
Commit df06b37ffe5a ("mm/gup: cache dev_pagemap while pinning pages") attempted to operate on each page that get_user_pages had retrieved. In order to do that, it created a common exit point from the routine. However, one case was missed, which this patch fixes up.

Also, there was still an unnecessary shadow declaration (with a different type) of the "ret" variable, which this patch removes.

Keith's description of the situation is:

  This also fixes a potentially leaked dev_pagemap reference count if a failure occurs when an iteration crosses a vma boundary. I don't think it's normal to have different vma's on a users mapped zone device memory, but good to fix anyway.

I actually thought that this code:

    /* first iteration or cross vma bound */
    if (!vma || start >= vma->vm_end) {
            vma = find_extend_vma(mm, start);
            if (!vma && in_gate_area(mm, start)) {
                    ret = get_gate_page(mm, start & PAGE_MASK,
                                        gup_flags, &vma,
                                        pages ? &pages[i] : NULL);
                    if (ret)
                            goto out;

dealt with the "you're trying to pin the gate page, as part of this call" case, rather than the generic case of crossing a vma boundary. (I think there's a fine point that I must be overlooking.) But it's still a valid case, either way.

Link: http://lkml.kernel.org/r/20181121081402.29641-2-jhubbard@nvidia.com
Fixes: df06b37ffe5a4 ("mm/gup: cache dev_pagemap while pinning pages")
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-11-30 | MAINTAINERS: name change for Luis | Luis Chamberlain | 1 file, -6/+6
My name has changed, works better than Global Entry I tell ya. Link: http://lkml.kernel.org/r/20181122003138.7752-1-mcgrof@kernel.org Signed-off-by: Luis Chamberlain <mcgrof@kernel.org> Cc: Kees Cook <keescook@chromium.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-11-30 | test_hexdump: use memcpy instead of strncpy | Linus Torvalds | 1 file, -1/+1
New versions of gcc reasonably warn about the odd pattern of strncpy(p, q, strlen(q)); which really doesn't make sense: the strncpy() ends up being just a slow and odd way to write memcpy() in this case. Apparently there was a patch for this floating around earlier, but it got lost. Acked-again-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
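For reference, the two forms side by side:

    strncpy(p, q, strlen(q));   /* copies strlen(q) bytes, never NUL-terminates */
    memcpy(p, q, strlen(q));    /* identical effect, and says what it means */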
2018-11-29 | tracing/fgraph: Fix set_graph_function from showing interrupts | Steven Rostedt (VMware) | 4 files, -3/+62
The tracefs file set_graph_function is used to have the function graph tracer only trace functions that are listed in that file (or all functions if the file is empty). The way this is implemented is that the function graph tracer looks at every function, and if the current depth is zero and the function matches something in the file then it will trace that function. When other functions are called, the depth will be greater than zero (because the original function will be at depth zero), and all functions will be traced where the depth is greater than zero. The issue is that when a function is first entered, and the handler that checks this logic is called, the depth is set to zero. If an interrupt comes in and a function in the interrupt handler is traced, its depth will be greater than zero and it will automatically be traced, even if the original function was not. But because the logic only looks at depth, it may trace interrupts when it should not. The recent design change of the function graph tracer to fix other bugs caused the depth to be zero while the function graph callback handler is being called for a longer time, widening the race window for this to happen. This bug was actually there for a longer time, but because the race window was so small it seldom happened. The Fixes tag below is for the commit that widened the race window, because that commit belongs to a series that will also help fix the original bug. Cc: stable@kernel.org Fixes: 39eb456dacb5 ("function_graph: Use new curr_ret_depth to manage depth instead of curr_ret_stack") Reported-by: Joe Lawrence <joe.lawrence@redhat.com> Tested-by: Joe Lawrence <joe.lawrence@redhat.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2018-11-29 | tracepoint: Use __idx instead of idx in DO_TRACE macro to make it unique | Zenghui Yu | 1 file, -3/+3
After enabling KVM event tracing, almost all of trace_kvm_exit()'s printk shows "kvm_exit: IRQ: ..." even if the actual exception_type is NOT IRQ. More specifically, trace_kvm_exit() is defined in virt/kvm/arm/trace.h by TRACE_EVENT. This slight problem may have existed after commit e6753f23d961 ("tracepoint: Make rcuidle tracepoint callers use SRCU"). There are two variables in trace_kvm_exit() and __DO_TRACE() which have the same name, *idx*. Thus the actual value of *idx* will be overwritten when tracing. Fix it by adding a simple prefix. Cc: Joel Fernandes <joel@joelfernandes.org> Cc: Wang Haibin <wanghaibin.wang@huawei.com> Cc: linux-trace-devel@vger.kernel.org Cc: stable@vger.kernel.org Fixes: e6753f23d961 ("tracepoint: Make rcuidle tracepoint callers use SRCU") Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org> Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
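A stand-alone illustration of the shadowing bug class (a hypothetical reduction, not the kernel macro itself):

    #include <stdio.h>

    #define DO_TRACE(expr)                                          \
            do {                                                    \
                    int idx = 42;   /* macro-internal bookkeeping */\
                    (void)idx;                                      \
                    expr;   /* the caller's 'idx' is shadowed here */\
            } while (0)

    int main(void)
    {
            int idx = 7;    /* caller's variable, e.g. the exception type */

            DO_TRACE(printf("idx = %d\n", idx));    /* prints 42, not 7 */
            return 0;
    }

Renaming the macro's local to something reserved like __idx removes the collision, which is exactly what the patch does.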
2018-11-29 | afs: Use d_instantiate() rather than d_add() and don't d_drop() | David Howells | 1 file, -3/+1
Use d_instantiate() rather than d_add() and don't d_drop() in afs_vnode_new_inode(). The dentry shouldn't be removed as it's not changing its name. Reported-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2018-11-29 | afs: Fix missing net error handling | David Howells | 6 files, -113/+135
kAFS can be given certain network errors (EADDRNOTAVAIL, EHOSTDOWN and ERFKILL) that it doesn't handle in its server/address rotation algorithms. They cause the probing and rotation to abort immediately rather than rotating. Fix this by: (1) Abstracting out the error prioritisation from the VL and FS rotation algorithms into a common function and expanding its usage into the server probing code. When multiple errors are available, this code selects the one we'd prefer to return. (2) Adding handling for EADDRNOTAVAIL, EHOSTDOWN and ERFKILL. Fixes: 0fafdc9f888b ("afs: Fix file locking") Fixes: 0338747d8454 ("afs: Probe multiple fileservers simultaneously") Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2018-11-29 | afs: Fix validation/callback interaction | David Howells | 1 file, -6/+12
When afs_validate() is called to validate a vnode (inode), there are two unhandled cases in the fastpath at the top of the function:

 (1) If the vnode is promised (AFS_VNODE_CB_PROMISED is set), the break counters match and the data has expired, then there's an implicit case in which the vnode needs revalidating. This has no consequences since the default "valid = false" set at the top of the function happens to do the right thing.

 (2) If the vnode is not promised and it hasn't been deleted (AFS_VNODE_DELETED is not set) then there's a default case we're not handling in which the vnode is invalid.

If the vnode is invalid, we need to bring cb_s_break and cb_v_break up to date before we refetch the status.

As a consequence, once the server loses track of the client (ie. sufficient time has passed since we last sent it an operation), it will send us a CB.InitCallBackState* operation when we next try to talk to it. This calls afs_init_callback_state() which increments afs_server::cb_s_break, but this then doesn't propagate to the afs_vnode record. The result being that every afs_validate() call thereafter sends a status fetch operation to the server.

Clarify and fix this by:

 (A) Setting valid in all the branches rather than initialising it at the top so that the compiler catches where we've missed.

 (B) Restructuring the logic in the 'promised' branch so that we set valid to false if the callback is due to expire (or has expired) and so that the final case is that the vnode is still valid.

 (C) Adding an else-statement that ups cb_s_break and cb_v_break if the promised and deleted cases don't match.

Fixes: c435ee34551e ("afs: Overhaul the callback handling")
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2018-11-29 | RDMA/mlx5: Initialize return variable in case pagefault was skipped | Leon Romanovsky | 1 file, -0/+1
Pagefaults that occur in a non-ODP MR are completely valid events, so initialize the return variable to 0. Fixes: 4d5422a309de ("IB/mlx5: Skip non-ODP MR when handling a page fault") Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2018-11-29 | pstore/ram: Correctly calculate usable PRZ bytes | Kees Cook | 2 files, -10/+10
The actual number of bytes stored in a PRZ is smaller than the bytes requested by platform data, since there is a header on each PRZ. Additionally, if ECC is enabled, there are trailing bytes used as well. Normally this mismatch doesn't matter since PRZs are circular buffers and the leading "overflow" bytes are just thrown away. However, in the case of a compressed record, this rather badly corrupts the results. This corruption was visible with "ramoops.mem_size=204800 ramoops.ecc=1". Any stored crashes could not be decompressed (producing a pstorefs "dmesg-*.enc.z" file), and triggered errors at boot: [ 2.790759] pstore: crypto_comp_decompress failed, ret = -22! Backporting this depends on commit 70ad35db3321 ("pstore: Convert console write to use ->write_buf") Reported-by: Joel Fernandes <joel@joelfernandes.org> Fixes: b0aad7a99c1d ("pstore: Add compression support to pstore") Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
2018-11-29 | selinux: add support for RTM_NEWCHAIN, RTM_DELCHAIN, and RTM_GETCHAIN | Paul Moore | 1 file, -1/+12
Commit 32a4f5ecd738 ("net: sched: introduce chain object to uapi") added new RTM_* definitions without properly updating SELinux, this patch adds the necessary SELinux support. While there was a BUILD_BUG_ON() in the SELinux code to protect from exactly this case, it was bypassed in the broken commit. In order to hopefully prevent this from happening in the future, add additional comments which provide some instructions on how to resolve the BUILD_BUG_ON() failures. Fixes: 32a4f5ecd738 ("net: sched: introduce chain object to uapi") Cc: <stable@vger.kernel.org> # 4.19 Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Paul Moore <paul@paul-moore.com>
2018-11-29 | drm/ast: fixed reading monitor EDID not stable issue | Y.C. Chen | 1 file, -6/+30
v1: over-sample data to increase the stability with some specific monitors
v2: refine to avoid infinite loop
v3: remove un-necessary "volatile" declaration

[airlied: fix two checkpatch warnings]
Signed-off-by: Y.C. Chen <yc_chen@aspeedtech.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1542858988-1127-1-git-send-email-yc_chen@aspeedtech.com
2018-11-29 | drm/ast: Fix incorrect free on ioregs | Sam Bobroff | 1 file, -1/+2
If the platform has no IO space, ioregs is placed next to the already allocated regs. In this case, it should not be separately freed. This prevents a kernel warning from __vunmap "Trying to vfree() nonexistent vm area" when unloading the driver. Fixes: 0dd68309b9c5 ("drm/ast: Try to use MMIO registers when PIO isn't supported") Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Dave Airlie <airlied@redhat.com>
2018-11-28 | Revert "drm/dp_mst: Skip validating ports during destruction, just ref" | Lyude Paul | 1 file, -13/+2
This reverts commit: c54c7374ff44 ("drm/dp_mst: Skip validating ports during destruction, just ref") ugh. In drm_dp_destroy_connector_work(), we have a pretty good chance of freeing the actual struct drm_dp_mst_port. However, after destroying things we send a hotplug through (*mgr->cbs->hotplug)(mgr) which is where the problems start. For i915, this calls all the way down to the fbcon probing helpers, which start trying to access the port in a modeset. [ 45.062001] ================================================================== [ 45.062112] BUG: KASAN: use-after-free in ex_handler_refcount+0x146/0x180 [ 45.062196] Write of size 4 at addr ffff8882b4b70968 by task kworker/3:1/53 [ 45.062325] CPU: 3 PID: 53 Comm: kworker/3:1 Kdump: loaded Tainted: G O 4.20.0-rc4Lyude-Test+ #3 [ 45.062442] Hardware name: LENOVO 20BWS1KY00/20BWS1KY00, BIOS JBET71WW (1.35 ) 09/14/2018 [ 45.062554] Workqueue: events drm_dp_destroy_connector_work [drm_kms_helper] [ 45.062641] Call Trace: [ 45.062685] dump_stack+0xbd/0x15a [ 45.062735] ? dump_stack_print_info.cold.0+0x1b/0x1b [ 45.062801] ? printk+0x9f/0xc5 [ 45.062847] ? kmsg_dump_rewind_nolock+0xe4/0xe4 [ 45.062909] ? ex_handler_refcount+0x146/0x180 [ 45.062970] print_address_description+0x71/0x239 [ 45.063036] ? ex_handler_refcount+0x146/0x180 [ 45.063095] kasan_report.cold.5+0x242/0x30b [ 45.063155] __asan_report_store4_noabort+0x1c/0x20 [ 45.063313] ex_handler_refcount+0x146/0x180 [ 45.063371] ? ex_handler_clear_fs+0xb0/0xb0 [ 45.063428] fixup_exception+0x98/0xd7 [ 45.063484] ? raw_notifier_call_chain+0x20/0x20 [ 45.063548] do_trap+0x6d/0x210 [ 45.063605] ? _GLOBAL__sub_I_65535_1_drm_dp_aux_unregister_devnode+0x2f/0x1c6 [drm_kms_helper] [ 45.063732] do_error_trap+0xc0/0x170 [ 45.063802] ? _GLOBAL__sub_I_65535_1_drm_dp_aux_unregister_devnode+0x2f/0x1c6 [drm_kms_helper] [ 45.063929] do_invalid_op+0x3b/0x50 [ 45.063997] ? _GLOBAL__sub_I_65535_1_drm_dp_aux_unregister_devnode+0x2f/0x1c6 [drm_kms_helper] [ 45.064103] invalid_op+0x14/0x20 [ 45.064162] RIP: 0010:_GLOBAL__sub_I_65535_1_drm_dp_aux_unregister_devnode+0x2f/0x1c6 [drm_kms_helper] [ 45.064274] Code: 00 48 c7 c7 80 fe 53 a0 48 89 e5 e8 5b 6f 26 e1 5d c3 48 8d 0e 0f 0b 48 8d 0b 0f 0b 48 8d 0f 0f 0b 48 8d 0f 0f 0b 49 8d 4d 00 <0f> 0b 49 8d 0e 0f 0b 48 8d 08 0f 0b 49 8d 4d 00 0f 0b 48 8d 0b 0f [ 45.064569] RSP: 0018:ffff8882b789ee10 EFLAGS: 00010282 [ 45.064637] RAX: ffff8882af47ae70 RBX: ffff8882af47aa60 RCX: ffff8882b4b70968 [ 45.064723] RDX: ffff8882af47ae70 RSI: 0000000000000008 RDI: ffff8882b788bdb8 [ 45.064808] RBP: ffff8882b789ee28 R08: ffffed1056f13db4 R09: ffffed1056f13db3 [ 45.064894] R10: ffffed1056f13db3 R11: ffff8882b789ed9f R12: ffff8882af47ad28 [ 45.064980] R13: ffff8882b4b70968 R14: ffff8882acd86728 R15: ffff8882b4b75dc8 [ 45.065084] drm_dp_mst_reset_vcpi_slots+0x12/0x80 [drm_kms_helper] [ 45.065225] intel_mst_disable_dp+0xda/0x180 [i915] [ 45.065361] intel_encoders_disable.isra.107+0x197/0x310 [i915] [ 45.065498] haswell_crtc_disable+0xbe/0x400 [i915] [ 45.065622] ? i9xx_disable_plane+0x1c0/0x3e0 [i915] [ 45.065750] intel_atomic_commit_tail+0x74e/0x3e60 [i915] [ 45.065884] ? intel_pre_plane_update+0xbc0/0xbc0 [i915] [ 45.065968] ? drm_atomic_helper_swap_state+0x88b/0x1d90 [drm_kms_helper] [ 45.066054] ? kasan_check_write+0x14/0x20 [ 45.066165] ? i915_gem_track_fb+0x13a/0x330 [i915] [ 45.066277] ? i915_sw_fence_complete+0xe9/0x140 [i915] [ 45.066406] ? __i915_sw_fence_complete+0xc50/0xc50 [i915] [ 45.066540] intel_atomic_commit+0x72e/0xef0 [i915] [ 45.066635] ? 
drm_dev_dbg+0x200/0x200 [drm] [ 45.066764] ? intel_atomic_commit_tail+0x3e60/0x3e60 [i915] [ 45.066898] ? intel_atomic_commit_tail+0x3e60/0x3e60 [i915] [ 45.067001] drm_atomic_commit+0xc4/0xf0 [drm] [ 45.067074] restore_fbdev_mode_atomic+0x562/0x780 [drm_kms_helper] [ 45.067166] ? drm_fb_helper_debug_leave+0x690/0x690 [drm_kms_helper] [ 45.067249] ? kasan_check_read+0x11/0x20 [ 45.067324] restore_fbdev_mode+0x127/0x4b0 [drm_kms_helper] [ 45.067364] ? kasan_check_read+0x11/0x20 [ 45.067406] drm_fb_helper_restore_fbdev_mode_unlocked+0x164/0x200 [drm_kms_helper] [ 45.067462] ? drm_fb_helper_hotplug_event+0x30/0x30 [drm_kms_helper] [ 45.067508] ? kasan_check_write+0x14/0x20 [ 45.070360] ? mutex_unlock+0x22/0x40 [ 45.073748] drm_fb_helper_set_par+0xb2/0xf0 [drm_kms_helper] [ 45.075846] drm_fb_helper_hotplug_event.part.33+0x1cd/0x290 [drm_kms_helper] [ 45.078088] drm_fb_helper_hotplug_event+0x1c/0x30 [drm_kms_helper] [ 45.082614] intel_fbdev_output_poll_changed+0x9f/0x140 [i915] [ 45.087069] drm_kms_helper_hotplug_event+0x67/0x90 [drm_kms_helper] [ 45.089319] intel_dp_mst_hotplug+0x37/0x50 [i915] [ 45.091496] drm_dp_destroy_connector_work+0x510/0x6f0 [drm_kms_helper] [ 45.093675] ? drm_dp_update_payload_part1+0x1220/0x1220 [drm_kms_helper] [ 45.095851] ? kasan_check_write+0x14/0x20 [ 45.098473] ? kasan_check_read+0x11/0x20 [ 45.101155] ? strscpy+0x17c/0x530 [ 45.103808] ? __switch_to_asm+0x34/0x70 [ 45.106456] ? syscall_return_via_sysret+0xf/0x7f [ 45.109711] ? read_word_at_a_time+0x20/0x20 [ 45.113138] ? __switch_to_asm+0x40/0x70 [ 45.116529] ? __switch_to_asm+0x34/0x70 [ 45.119891] ? __switch_to_asm+0x40/0x70 [ 45.123224] ? __switch_to_asm+0x34/0x70 [ 45.126540] ? __switch_to_asm+0x34/0x70 [ 45.129824] process_one_work+0x88d/0x15d0 [ 45.133172] ? pool_mayday_timeout+0x850/0x850 [ 45.136459] ? pci_mmcfg_check_reserved+0x110/0x128 [ 45.139739] ? wake_q_add+0xb0/0xb0 [ 45.143010] ? check_preempt_wakeup+0x652/0x1050 [ 45.146304] ? worker_enter_idle+0x29e/0x740 [ 45.149589] ? __schedule+0x1ec0/0x1ec0 [ 45.152937] ? kasan_check_read+0x11/0x20 [ 45.156179] ? _raw_spin_lock_irq+0xa3/0x130 [ 45.159382] ? _raw_read_unlock_irqrestore+0x30/0x30 [ 45.162542] ? kasan_check_write+0x14/0x20 [ 45.165657] worker_thread+0x1a5/0x1470 [ 45.168725] ? set_load_weight+0x2e0/0x2e0 [ 45.171755] ? process_one_work+0x15d0/0x15d0 [ 45.174806] ? __switch_to_asm+0x34/0x70 [ 45.177645] ? __switch_to_asm+0x40/0x70 [ 45.180323] ? __switch_to_asm+0x34/0x70 [ 45.182936] ? __switch_to_asm+0x40/0x70 [ 45.185539] ? __switch_to_asm+0x34/0x70 [ 45.188100] ? __switch_to_asm+0x40/0x70 [ 45.190628] ? __schedule+0x7d4/0x1ec0 [ 45.193143] ? save_stack+0xa9/0xd0 [ 45.195632] ? kasan_check_write+0x10/0x20 [ 45.198162] ? kasan_kmalloc+0xc4/0xe0 [ 45.200609] ? kmem_cache_alloc_trace+0xdd/0x190 [ 45.203046] ? kthread+0x9f/0x3b0 [ 45.205470] ? ret_from_fork+0x35/0x40 [ 45.207876] ? unwind_next_frame+0x43/0x50 [ 45.210273] ? __save_stack_trace+0x82/0x100 [ 45.212658] ? deactivate_slab.isra.67+0x3d4/0x580 [ 45.215026] ? default_wake_function+0x35/0x50 [ 45.217399] ? kasan_check_read+0x11/0x20 [ 45.219825] ? _raw_spin_lock_irqsave+0xae/0x140 [ 45.222174] ? __lock_text_start+0x8/0x8 [ 45.224521] ? replenish_dl_entity.cold.62+0x4f/0x4f [ 45.226868] ? __kthread_parkme+0x87/0xf0 [ 45.229200] kthread+0x2f7/0x3b0 [ 45.231557] ? process_one_work+0x15d0/0x15d0 [ 45.233923] ? 
kthread_park+0x120/0x120 [ 45.236249] ret_from_fork+0x35/0x40 [ 45.240875] Allocated by task 242: [ 45.243136] save_stack+0x43/0xd0 [ 45.245385] kasan_kmalloc+0xc4/0xe0 [ 45.247597] kmem_cache_alloc_trace+0xdd/0x190 [ 45.249793] drm_dp_add_port+0x1e0/0x2170 [drm_kms_helper] [ 45.252000] drm_dp_send_link_address+0x4a7/0x740 [drm_kms_helper] [ 45.254389] drm_dp_check_and_send_link_address+0x1a7/0x210 [drm_kms_helper] [ 45.256803] drm_dp_mst_link_probe_work+0x6f/0xb0 [drm_kms_helper] [ 45.259200] process_one_work+0x88d/0x15d0 [ 45.261597] worker_thread+0x1a5/0x1470 [ 45.264038] kthread+0x2f7/0x3b0 [ 45.266371] ret_from_fork+0x35/0x40 [ 45.270937] Freed by task 53: [ 45.273170] save_stack+0x43/0xd0 [ 45.275382] __kasan_slab_free+0x139/0x190 [ 45.277604] kasan_slab_free+0xe/0x10 [ 45.279826] kfree+0x99/0x1b0 [ 45.282044] drm_dp_free_mst_port+0x4a/0x60 [drm_kms_helper] [ 45.284330] drm_dp_destroy_connector_work+0x43e/0x6f0 [drm_kms_helper] [ 45.286660] process_one_work+0x88d/0x15d0 [ 45.288934] worker_thread+0x1a5/0x1470 [ 45.291231] kthread+0x2f7/0x3b0 [ 45.293547] ret_from_fork+0x35/0x40 [ 45.298206] The buggy address belongs to the object at ffff8882b4b70968 which belongs to the cache kmalloc-2k of size 2048 [ 45.303047] The buggy address is located 0 bytes inside of 2048-byte region [ffff8882b4b70968, ffff8882b4b71168) [ 45.308010] The buggy address belongs to the page: [ 45.310477] page:ffffea000ad2dc00 count:1 mapcount:0 mapping:ffff8882c080cf40 index:0x0 compound_mapcount: 0 [ 45.313051] flags: 0x8000000000010200(slab|head) [ 45.315635] raw: 8000000000010200 ffffea000aac2808 ffffea000abe8608 ffff8882c080cf40 [ 45.318300] raw: 0000000000000000 00000000000d000d 00000001ffffffff 0000000000000000 [ 45.320966] page dumped because: kasan: bad access detected [ 45.326312] Memory state around the buggy address: [ 45.329085] ffff8882b4b70800: fb fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc [ 45.331845] ffff8882b4b70880: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc [ 45.334584] >ffff8882b4b70900: fc fc fc fc fc fc fc fc fc fc fc fc fc fb fb fb [ 45.337302] ^ [ 45.340061] ffff8882b4b70980: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb [ 45.342910] ffff8882b4b70a00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb [ 45.345748] ================================================================== So, this definitely isn't a fix that we want. This being said; there's no real easy fix for this problem because of some of the catch-22's of the MST helpers current design. For starters; we always need to validate a port with drm_dp_get_validated_port_ref(), but validation relies on the lifetime of the port in the actual topology. So once the port is gone, it can't be validated again. If we were to try to make the payload helpers not use port validation, then we'd cause another problem: if the port isn't validated, it could be freed and we'd just start causing more KASAN issues. There are already hacks that attempt to workaround this in drm_dp_mst_destroy_connector_work() by re-initializing the kref so that it can be used again and it's memory can be freed once the VCPI helpers finish removing the port's respective payloads. But none of these really do anything helpful since the port still can't be validated since it's gone from the topology. Also, that workaround is immensely confusing to read through. What really needs to be done in order to fix this is to teach DRM how to track the lifetime of the structs for MST ports and branch devices separately from their lifetime in the actual topology. 
Simply put, this means having two different krefs: one that removes the port/branch device from the topology, and one that finally calls kfree(). This would let us simplify things, since we'd now be able to keep ports around without having to keep them in the topology at the same time, which is exactly what we need in order to teach our VCPI helpers to only validate ports when it's actually necessary without running the risk of trying to use unallocated memory. Such a fix is on its way, but for now let's play it safe and just revert this. If this bug has been around for well over a year, we can wait a little while to get an actual proper fix here. Signed-off-by: Lyude Paul <lyude@redhat.com> Fixes: c54c7374ff44 ("drm/dp_mst: Skip validating ports during destruction, just ref") Cc: Daniel Vetter <daniel@ffwll.ch> Cc: Sean Paul <sean@poorly.run> Cc: Jerry Zuo <Jerry.Zuo@amd.com> Cc: Harry Wentland <Harry.Wentland@amd.com> Cc: stable@vger.kernel.org # v4.6+ Acked-by: Sean Paul <sean@poorly.run> Link: https://patchwork.freedesktop.org/patch/msgid/20181128210005.24434-1-lyude@redhat.com
2018-11-28 | drm/amdgpu: Add delay after enable RLC ucode | shaoyunl | 1 file, -3/+4
The driver shouldn't try to access any GFX registers until the RLC is idle. During testing, it took 12 seconds for the RLC to clear the BUSY bit in the RLC_GPM_STAT register, which is unacceptable for the driver. As per the RLC engineers, it would take the RLC ucode less than 10,000 GFXCLK cycles to finish its critical section. At the lowest 300 MHz engine clock setting (the default from vbios), a 50 us delay is enough. This commit fixes the hang seen when the RLC introduces the workaround for XGMI, which requires more cycles to set up more registers than normal. Signed-off-by: shaoyunl <shaoyun.liu@amd.com> Acked-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
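As a sanity check on the 50 us figure: 10,000 GFXCLK cycles at the 300 MHz floor is 10,000 / 300,000,000 s, roughly 33.3 us, so a 50 us delay covers the worst case with margin.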
2018-11-28 | drm/amdgpu: Avoid endless loop in GPUVM fragment processing | Felix Kuehling | 1 file, -2/+4
Don't bounce back to the root level for fragment processing, because huge pages are not supported at that level. This is unlikely to happen with the default VM size on Vega, but can be exposed by limiting the VM size with the amdgpu.vm_size module parameter. Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-11-28 | drm/amdgpu: Cast to uint64_t before left shift | Felix Kuehling | 1 file, -4/+4
Avoid potential integer overflows with left shift in the huge-page mapping code by casting the operand to uint64_t first. Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
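A minimal stand-alone illustration of the overflow class being fixed (the values are made up, not taken from the driver):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            unsigned int pfn = 0x100000;    /* 32-bit operand */
            unsigned int shift = 21;        /* e.g. a huge-page shift */

            /* shift is evaluated in 32 bits, high bits are lost: prints 0 */
            printf("%llx\n", (unsigned long long)(pfn << shift));
            /* widen first, then shift: prints 20000000000 */
            printf("%llx\n", (unsigned long long)((uint64_t)pfn << shift));
            return 0;
    }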
2018-11-28 | s390/qeth: fix length check in SNMP processing | Julian Wiedmann | 1 file, -15/+12
The response for a SNMP request can consist of multiple parts, which the cmd callback stages into a kernel buffer until all parts have been received. If the callback detects that the staging buffer provides insufficient space, it bails out with error. This processing is buggy for the first part of the response - while it initially checks for a length of 'data_len', it later copies an additional amount of 'offsetof(struct qeth_snmp_cmd, data)' bytes. Fix the calculation of 'data_len' for the first part of the response. This also nicely cleans up the memcpy code. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-28 | net: hisilicon: remove unexpected free_netdev | Pan Bian | 1 file, -3/+1
The net device ndev is freed via free_netdev when failing to register the device. The control flow then jumps to the error handling code block, where ndev is used and freed again, resulting in a use-after-free bug. Signed-off-by: Pan Bian <bianpan2016@163.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-28 | rapidio/rionet: do not free skb before reading its length | Pan Bian | 1 file, -1/+1
The skb is freed via dev_kfree_skb_any(); however, skb->len is read afterwards. This may result in a use-after-free bug. Fixes: e6161d64263 ("rapidio/rionet: rework driver initialization and removal") Signed-off-by: Pan Bian <bianpan2016@163.com> Signed-off-by: David S. Miller <davem@davemloft.net>
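The usual shape of this kind of fix is to capture the length before the free; a hedged sketch (the stats field names are illustrative):

    int len = skb->len;     /* read while the skb is still valid */

    dev_kfree_skb_any(skb);
    ndev->stats.tx_packets++;
    ndev->stats.tx_bytes += len;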
2018-11-28 | i40e: fix kerneldoc for xsk methods | Jan Sokolowski | 1 file, -7/+7
One method, xsk_umem_setup, had an incorrect kernel doc description, which has been corrected. Also fixes small typos found in the comments. Signed-off-by: Jan Sokolowski <jan.sokolowski@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-11-28 | ixgbe: recognize 1000BaseLX SFP modules as 1Gbps | Josh Elsasser | 1 file, -1/+3
Add the two 1000BaseLX enum values to the X550's check for 1Gbps modules, allowing the core driver code to establish a link over this SFP type. This is done by the out-of-tree driver but the fix wasn't in mainline. Fixes: e23f33367882 ("ixgbe: Fix 1G and 10G link stability for X550EM_x SFP+") Fixes: 6a14ee0cfb19 ("ixgbe: Add X550 support function pointers") Signed-off-by: Josh Elsasser <jelsasser@appneta.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-11-28 | i40e: Fix deletion of MAC filters | Lihong Yang | 1 file, -1/+1
In __i40e_del_filter function, the flag __I40E_MACVLAN_SYNC_PENDING for the PF state is wrongly set for the VSI. Deleting any of the MAC filters has caused the incorrect syncing for the PF. Fix it by setting this state flag to the intended PF. CC: stable <stable@vger.kernel.org> Signed-off-by: Lihong Yang <lihong.yang@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-11-28 | igb: fix uninitialized variables | Yunjian Wang | 1 file, -0/+1
This patch fixes a case where the variable 'phy_word' may be used uninitialized. Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-11-28 | mtd: nand: Fix memory allocation in nanddev_bbt_init() | Frieder Schrempf | 1 file, -1/+2
Fix the size of the buffer allocated to store the in-memory BBT. This bug was previously hidden by a different bug, that was fixed in commit d098093ba06e ("mtd: nand: Fix nanddev_neraseblocks()"). Fixes: 9c3736a3de21 ("mtd: nand: Add core infrastructure to deal with NAND devices") Cc: <stable@vger.kernel.org> Signed-off-by: Frieder Schrempf <frieder.schrempf@kontron.de> Acked-by: Miquel Raynal <miquel.raynal@bootlin.com> Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
2018-11-28 | ALSA: usb-audio: Add vendor and product name for Dell WD19 Dock | Hui Wang | 1 file, -0/+10
Like the Dell WD15 Dock, the WD19 Dock (0bda:402e) doesn't provide useful strings for the vendor and product names either. In order to share the UCM with the WD15, we keep the profile_name the same as the WD15's. Signed-off-by: Hui Wang <hui.wang@canonical.com> Signed-off-by: Takashi Iwai <tiwai@suse.de>
2018-11-28 | netfilter: nf_tables: deactivate expressions in rule replacement routine | Taehee Yoo | 1 file, -11/+4
There is no expression deactivation call from the rule replacement path; hence, the chain counter is not decremented. A few steps to reproduce the problem:

  %nft add table ip filter
  %nft add chain ip filter c1
  %nft add chain ip filter c2
  %nft add rule ip filter c1 jump c2
  %nft replace rule ip filter c1 handle 3 accept
  %nft flush ruleset

The <jump c2> expression means an immediate NFT_JUMP to chain c2. The reference count of chain c2 is increased when the rule is added. When the rule is deleted or replaced, the reference counter of c2 should be decreased via nft_rule_expr_deactivate(), which calls nft_immediate_deactivate().

Splat looks like:

[ 214.396453] WARNING: CPU: 1 PID: 21 at net/netfilter/nf_tables_api.c:1432 nf_tables_chain_destroy.isra.38+0x2f9/0x3a0 [nf_tables]
[ 214.398983] Modules linked in: nf_tables nfnetlink
[ 214.398983] CPU: 1 PID: 21 Comm: kworker/1:1 Not tainted 4.20.0-rc2+ #44
[ 214.398983] Workqueue: events nf_tables_trans_destroy_work [nf_tables]
[ 214.398983] RIP: 0010:nf_tables_chain_destroy.isra.38+0x2f9/0x3a0 [nf_tables]
[ 214.398983] Code: 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 80 3c 02 00 0f 85 8e 00 00 00 48 8b 7b 58 e8 e1 2c 4e c6 48 89 df e8 d9 2c 4e c6 eb 9a <0f> 0b eb 96 0f 0b e9 7e fe ff ff e8 a7 7e 4e c6 e9 a4 fe ff ff e8
[ 214.398983] RSP: 0018:ffff8881152874e8 EFLAGS: 00010202
[ 214.398983] RAX: 0000000000000001 RBX: ffff88810ef9fc28 RCX: ffff8881152876f0
[ 214.398983] RDX: dffffc0000000000 RSI: 1ffff11022a50ede RDI: ffff88810ef9fc78
[ 214.398983] RBP: 1ffff11022a50e9d R08: 0000000080000000 R09: 0000000000000000
[ 214.398983] R10: 0000000000000000 R11: 0000000000000000 R12: 1ffff11022a50eba
[ 214.398983] R13: ffff888114446e08 R14: ffff8881152876f0 R15: ffffed1022a50ed6
[ 214.398983] FS: 0000000000000000(0000) GS:ffff888116400000(0000) knlGS:0000000000000000
[ 214.398983] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 214.398983] CR2: 00007fab9bb5f868 CR3: 000000012aa16000 CR4: 00000000001006e0
[ 214.398983] Call Trace:
[ 214.398983] ? nf_tables_table_destroy.isra.37+0x100/0x100 [nf_tables]
[ 214.398983] ? __kasan_slab_free+0x145/0x180
[ 214.398983] ? nf_tables_trans_destroy_work+0x439/0x830 [nf_tables]
[ 214.398983] ? kfree+0xdb/0x280
[ 214.398983] nf_tables_trans_destroy_work+0x5f5/0x830 [nf_tables]
[ ... ]

Fixes: bb7b40aecbf7 ("netfilter: nf_tables: bogus EBUSY in chain deletions")
Reported-by: Christoph Anton Mitterer <calestyo@scientia.net>
Link: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=914505
Link: https://bugzilla.kernel.org/show_bug.cgi?id=201791
Signed-off-by: Taehee Yoo <ap420073@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2018-11-28 | x86/MCE/AMD: Fix the thresholding machinery initialization order | Borislav Petkov | 1 file, -13/+6
Currently, the code sets up the thresholding interrupt vector and only then goes about initializing the thresholding banks. Which is wrong, because an early thresholding interrupt would cause a NULL pointer dereference when accessing those banks and prevent the machine from booting. Therefore, set the thresholding interrupt vector only *after* having initialized the banks successfully. Fixes: 18807ddb7f88 ("x86/mce/AMD: Reset Threshold Limit after logging error") Reported-by: Rafał Miłecki <rafal@milecki.pl> Reported-by: John Clemens <clemej@gmail.com> Signed-off-by: Borislav Petkov <bp@suse.de> Tested-by: Rafał Miłecki <rafal@milecki.pl> Tested-by: John Clemens <john@deater.net> Cc: Aravind Gopalakrishnan <aravindksg.lkml@gmail.com> Cc: linux-edac@vger.kernel.org Cc: stable@vger.kernel.org Cc: Tony Luck <tony.luck@intel.com> Cc: x86@kernel.org Cc: Yazen Ghannam <Yazen.Ghannam@amd.com> Link: https://lkml.kernel.org/r/20181127101700.2964-1-zajec5@gmail.com Link: https://bugzilla.kernel.org/show_bug.cgi?id=201291
2018-11-28 | mtd: spi-nor: fix erase_type array to indicate current map conf | Tudor Ambarus | 1 file, -2/+29
BFPT advertises all the erase types supported by all the possible map configurations. Mask out the erase types that are not supported by the current map configuration. Backward compatibility test done on sst26vf064b. Fixes: b038e8e3be72 ("mtd: spi-nor: parse SFDP Sector Map Parameter Table") Reported-by: Alexander Sverdlin <alexander.sverdlin@nokia.com> Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Tested-by: Alexander Sverdlin <alexander.sverdlin@nokia.com> Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
2018-11-28 | usb: gadget: u_ether: fix unsafe list iteration | Marek Szyprowski | 1 file, -5/+6
list_for_each_entry_safe() is not safe for deleting entries from the list if the spin lock, which protects it, is released and reacquired during the list iteration. Fix this issue by replacing this construction with a simple check if list is empty and removing the first entry in each iteration. This is almost equivalent to a revert of the commit mentioned in the Fixes: tag.

This patch fixes following issue:
--->8---
Unable to handle kernel NULL pointer dereference at virtual address 00000104
pgd = (ptrval)
[00000104] *pgd=00000000
Internal error: Oops: 817 [#1] PREEMPT SMP ARM
Modules linked in:
CPU: 1 PID: 84 Comm: kworker/1:1 Not tainted 4.20.0-rc2-next-20181114-00009-g8266b35ec404 #1061
Hardware name: SAMSUNG EXYNOS (Flattened Device Tree)
Workqueue: events eth_work
PC is at rx_fill+0x60/0xac
LR is at _raw_spin_lock_irqsave+0x50/0x5c
pc : [<c065fee0>]    lr : [<c0a056b8>]    psr: 80000093
sp : ee7fbee8  ip : 00000100  fp : 00000000
r10: 006000c0  r9 : c10b0ab0  r8 : ee7eb5c0
r7 : ee7eb614  r6 : ee7eb5ec  r5 : 000000dc  r4 : ee12ac00
r3 : ee12ac24  r2 : 00000200  r1 : 60000013  r0 : ee7eb5ec
Flags: Nzcv  IRQs off  FIQs on  Mode SVC_32  ISA ARM  Segment none
Control: 10c5387d  Table: 6d5dc04a  DAC: 00000051
Process kworker/1:1 (pid: 84, stack limit = 0x(ptrval))
Stack: (0xee7fbee8 to 0xee7fc000)
...
[<c065fee0>] (rx_fill) from [<c0143b7c>] (process_one_work+0x200/0x738)
[<c0143b7c>] (process_one_work) from [<c0144118>] (worker_thread+0x2c/0x4c8)
[<c0144118>] (worker_thread) from [<c014a8a4>] (kthread+0x128/0x164)
[<c014a8a4>] (kthread) from [<c01010b4>] (ret_from_fork+0x14/0x20)
Exception stack(0xee7fbfb0 to 0xee7fbff8)
...
---[ end trace 64480bc835eba7d6 ]---

Fixes: fea14e68ff5e ("usb: gadget: u_ether: use better list accessors")
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
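The safe idiom the fix moves to, as a hedged sketch (names follow u_ether's rx_fill(), details simplified): each pass unlinks the first entry before the lock is dropped, so nothing cached across the unlock can go stale.

    spin_lock_irqsave(&dev->req_lock, flags);
    while (!list_empty(&dev->rx_reqs)) {
            req = list_first_entry(&dev->rx_reqs,
                                   struct usb_request, list);
            list_del_init(&req->list);

            /* safe to drop the lock: req is already unlinked */
            spin_unlock_irqrestore(&dev->req_lock, flags);
            rx_submit(dev, req, gfp_flags);
            spin_lock_irqsave(&dev->req_lock, flags);
    }
    spin_unlock_irqrestore(&dev->req_lock, flags);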
2018-11-27 | sched, trace: Fix prev_state output in sched_switch tracepoint | Pavankumar Kondeti | 1 file, -1/+11
commit 3f5fe9fef5b2 ("sched/debug: Fix task state recording/printout") tried to fix the problem introduced by a previous commit efb40f588b43 ("sched/tracing: Fix trace_sched_switch task-state printing"). However the prev_state output in sched_switch is still broken. task_state_index() uses fls() which considers the LSB as 1. Left shifting 1 by this value gives an incorrect mapping to the task state. Fix this by decrementing the value returned by __get_task_state() before shifting. Link: http://lkml.kernel.org/r/1540882473-1103-1-git-send-email-pkondeti@codeaurora.org Cc: stable@vger.kernel.org Fixes: 3f5fe9fef5b2 ("sched/debug: Fix task state recording/printout") Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
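A concrete instance of the off-by-one: TASK_UNINTERRUPTIBLE is 0x2, and fls(0x2) returns 2 (fls counts bit positions from 1), so 1 << 2 yields 0x4 (the stopped-state bit) and the trace reports the wrong state; shifting by the decremented value, 1 << (2 - 1), recovers 0x2.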
2018-11-27 | function_graph: Have profiler use curr_ret_stack and not depth | Steven Rostedt (VMware) | 1 file, -2/+2
The profiler uses trace->depth to find its entry on the ret_stack, but the depth may not match the actual location of where its entry is (if an interrupt were to preempt the processing of the profiler for another function, the depth and the curr_ret_stack will be different). Have it use the curr_ret_stack as the index to find its ret_stack entry instead of using the depth variable, as that is no longer guaranteed to be the same. Cc: stable@kernel.org Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2018-11-27 | function_graph: Reverse the order of pushing the ret_stack and the callback | Steven Rostedt (VMware) | 1 file, -4/+6
The function graph profiler uses the ret_stack to store the "subtime" and reuse it by nested functions and also on the return. But the current logic has the profiler callback called before the ret_stack is updated, and it is just modifying the ret_stack that will later be allocated (it's just lucky that the "subtime" is not touched when it is allocated). This could also cause a crash if we are at the end of the ret_stack when this happens. By reversing the order of the allocating the ret_stack and then calling the callbacks attached to a function being traced, the ret_stack entry is no longer used before it is allocated. Cc: stable@kernel.org Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2018-11-27 | function_graph: Move return callback before update of curr_ret_stack | Steven Rostedt (VMware) | 1 file, -9/+13
In the past, curr_ret_stack had two functions. One was to denote the depth of the call graph, the other is to keep track of where on the ret_stack the data is used. Although they may be slightly related, there are two cases where they need to be used differently. The one case is that it keeps the ret_stack data from being corrupted by an interrupt coming in and overwriting the data still in use. The other is just to know where the depth of the stack currently is. The function profiler uses the ret_stack to save a "subtime" variable that is part of the data on the ret_stack. If curr_ret_stack is modified too early, then this variable can be corrupted. The "max_depth" option, when set to 1, will record the first functions going into the kernel. To see all top functions (when dealing with timings), the depth variable needs to be lowered before calling the return hook. But by lowering the curr_ret_stack, it makes the data on the ret_stack still being used by the return hook susceptible to being overwritten. Now that there's two variables to handle both cases (curr_ret_depth), we can move them to the locations where they can handle both cases. Cc: stable@kernel.org Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2018-11-27 | function_graph: Use new curr_ret_depth to manage depth instead of curr_ret_stack | Steven Rostedt (VMware) | 3 files, -8/+17
Currently, the depth of the ret_stack is determined by curr_ret_stack index. The issue is that there's a race between setting of the curr_ret_stack and calling of the callback attached to the return of the function. Commit 03274a3ffb44 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") moved the calling of the callback to after the setting of the curr_ret_stack, even stating that it was safe to do so, when in fact, it was the reason there was a barrier() there (yes, I should have commented that barrier()). Not only does the curr_ret_stack keep track of the current call graph depth, it also keeps the ret_stack content from being overwritten by new data. The function profiler, uses the "subtime" variable of ret_stack structure and by moving the curr_ret_stack, it allows for interrupts to use the same structure it was using, corrupting the data, and breaking the profiler. To fix this, there needs to be two variables to handle the call stack depth and the pointer to where the ret_stack is being used, as they need to change at two different locations. Cc: stable@kernel.org Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2018-11-27 | function_graph: Make ftrace_push_return_trace() static | Steven Rostedt (VMware) | 2 files, -4/+1
As all architectures now call function_graph_enter() to do the entry work, no architecture should ever call ftrace_push_return_trace(). Make it static. This is needed to prepare for a fix of a design bug on how the curr_ret_stack is used. Cc: stable@kernel.org Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2018-11-27 | sparc/function_graph: Simplify with function_graph_enter() | Steven Rostedt (VMware) | 1 file, -10/+1
The function_graph_enter() function does the work of calling the function graph hook function and the management of the shadow stack, simplifying the work done in the architecture dependent prepare_ftrace_return(). Have sparc use the new code, and remove the shadow stack management as well as having to set up the trace structure. This is needed to prepare for a fix of a design bug on how the curr_ret_stack is used. Cc: "David S. Miller" <davem@davemloft.net> Cc: sparclinux@vger.kernel.org Cc: stable@kernel.org Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback") Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>