path: root/tools/perf/scripts/python/export-to-postgresql.py
Age | Commit message | Author | Files | Lines
2016-02-22 | vfio/pci: Intel IGD OpRegion support | Alex Williamson | 6 | -0/+136
This is the first consumer of vfio device specific resource support, providing read-only access to the OpRegion for Intel graphics devices. Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2016-02-22 | vfio/pci: Enable virtual register in PCI config space | Alex Williamson | 2 | -4/+34
Typically config space for a device is mapped out into capability specific handlers and unassigned space. The latter allows direct read/write access to config space. Sometimes we know about registers living in this void space and would like an easy way to virtualize them, similar to how BAR registers are managed. To do this, create one more pseudo (fake) PCI capability to be handled as purely virtual space. Reads and writes are serviced entirely from virtual config space. Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2016-02-22 | vfio/pci: Add infrastructure for additional device specific regions | Alex Williamson | 2 | -5/+103
Add support for additional regions with indexes started after the already defined fixed regions. Device specific code can register these regions with the new vfio_pci_register_dev_region() function. The ops structure per region currently only includes read/write access and a release function, allowing automatic cleanup when the device is closed. mmap support is only missing here because it's not needed by the first user queued for this support. Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
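As an illustration of the interface described above, a device-specific sub-driver might register such a region roughly as follows. The vfio_pci_regops layout and the exact signature of vfio_pci_register_dev_region() are assumptions inferred from this description, not copied from the merged code:

    /* Sketch only: structure layout and registration signature are assumed. */
    static size_t my_region_rw(struct vfio_pci_device *vdev, char __user *buf,
                               size_t count, loff_t *ppos, bool iswrite)
    {
            /* service reads/writes of the device specific region */
            return count;
    }

    static void my_region_release(struct vfio_pci_device *vdev,
                                  struct vfio_pci_region *region)
    {
            /* automatic cleanup when the device is closed */
    }

    static const struct vfio_pci_regops my_regops = {
            .rw      = my_region_rw,
            .release = my_region_release,
    };

    /* called from device specific setup code */
    ret = vfio_pci_register_dev_region(vdev, type, subtype, &my_regops,
                                       size, VFIO_REGION_INFO_FLAG_READ, data);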
2016-02-22 | vfio: Define device specific region type capability | Alex Williamson | 1 | -1/+30
To this point vfio has only provided an interface to the user that allows them to determine the number of regions and specifics about each region. What the region represents is left to the vfio bus driver. vfio-pci chooses to use fixed indexes for fixed resources: index 0 is BAR0, 1 is BAR1,... 7 is config space, etc. This works pretty well since all PCI devices have these regions, even if they don't necessarily populate all of them. Then we start to add things like VGA, which only certain devices even support. We added this the same way, but now we've wasted a region index, and due to our offset implementation the corresponding address space, for all devices. Rather than continuing that process, let's try to make regions self-describing by including a capability that defines their type. For vfio-pci we'll make the current VFIO_PCI_NUM_REGIONS fixed, defining the end of the static indexes and the beginning of self-describing regions. Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2016-02-22 | vfio/pci: Include sparse mmap capability for MSI-X table regions | Alex Williamson | 1 | -1/+72
vfio-pci has never allowed the user to directly mmap the MSI-X vector table, but we've always relied on implicit knowledge of the user that they cannot do this. Now that we have capability chains that we can expose in the region info ioctl and a sparse mmap capability that represents the sub-areas within the region that can be mmap'd, we can make the mmap constraints more explicit. Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2016-02-22 | vfio: Define sparse mmap capability for regions | Alex Williamson | 1 | -1/+25
We can't always support mmap across an entire device region, for example we deny mmaps covering the MSI-X table of PCI devices, but we don't really have a way to report it. We expect the user to implicitly know this restriction. We also can't split the region because vfio-pci defines an API with fixed region index to BAR number mapping. We therefore define a new capability which lists areas within the region that may be mmap'd. In addition to the MSI-X case, this potentially enables in-kernel emulation and extensions to devices. Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
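A sketch of what such a capability might look like to userspace, following the description above (a capability header, a count, and an array of offset/size areas). The field names are assumptions for illustration, not a copy of the final uapi header:

    /* Assumed layout of the sparse mmap capability described above. */
    struct vfio_region_sparse_mmap_area {
            __u64   offset; /* offset of mmap'able area within the region */
            __u64   size;   /* size of the mmap'able area */
    };

    struct vfio_region_info_cap_sparse_mmap {
            struct vfio_info_cap_header     header;
            __u32   nr_areas;
            __u32   reserved;
            struct vfio_region_sparse_mmap_area areas[];
    };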
2016-02-22 | vfio: Add capability chain helpers | Alex Williamson | 2 | -0/+65
Allow sub-modules to easily reallocate a buffer for managing capability chains for info ioctls. Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2016-02-22 | vfio: Define capability chains | Alex Williamson | 1 | -0/+27
We have a few cases where we need to extend the data returned from the INFO ioctls in VFIO. For instance we already have devices exposed through vfio-pci where VFIO_DEVICE_GET_REGION_INFO reports the region as mmap-capable, but really only supports sparse mmaps, avoiding the MSI-X table. If we wanted to provide in-kernel emulation or extended functionality for devices, we'd also want the ability to tell the user not to mmap various regions, rather than forcing them to figure it out on their own. Another example is VFIO_IOMMU_GET_INFO. We'd really like to expose the actual IOVA capabilities of the IOMMU rather than letting the user assume the address space they have available to them. We could add IOVA base and size fields to struct vfio_iommu_type1_info, but what if we have multiple IOVA ranges? For instance x86 uses a range of addresses at 0xfee00000 for MSI vectors. These typically are not available for standard DMA IOVA mappings and split our available IOVA space into two ranges. POWER systems have both an IOVA window below 4G as well as a dynamic data window which they can use to remap all of guest memory. Representing variable sized arrays within a fixed structure makes it very difficult to parse; we'd therefore like to put this data beyond the fixed fields within the data structures. One way to do this is to emulate capabilities in PCI configuration space. A new flag indicates whether capabilities are supported and a new fixed field reports the offset of the first entry. Users can then walk the chain to find capabilities, adding capabilities does not require additional fields in the fixed structure, and parsing variable sized data becomes trivial. This patch outlines the theory and base header structure, which should be shared by all future users. Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
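A minimal userspace sketch of walking such a chain, based on the description above (id/version/next header, offsets relative to the start of the info buffer). The header layout shown here is an assumption, not the verified uapi definition:

    #include <stdio.h>
    #include <linux/types.h>

    /* Assumed layout of the chain header described above. */
    struct vfio_info_cap_header {
            __u16   id;      /* identifies the capability */
            __u16   version; /* version specific to the capability ID */
            __u32   next;    /* offset of next capability from start of info buffer, 0 ends the chain */
    };

    /* 'buf' is the info structure returned by the ioctl; 'cap_offset' is the
     * offset of the first capability reported in its fixed fields. */
    static void walk_caps(void *buf, __u32 cap_offset)
    {
            __u32 off = cap_offset;

            while (off) {
                    struct vfio_info_cap_header *hdr =
                            (struct vfio_info_cap_header *)((char *)buf + off);

                    printf("capability id %u, version %u\n",
                           (unsigned)hdr->id, (unsigned)hdr->version);
                    off = hdr->next;
            }
    }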
2016-02-22 | vfio: If an IOMMU backend fails, keep looking | Alex Williamson | 1 | -10/+6
Consider an IOMMU to be an API rather than an implementation, we might have multiple implementations supporting the same API, so try another if one fails. The expectation here is that we'll really only have one implementation per device type. For instance the existing type1 driver works with any PCI device where the IOMMU API is available. A vGPU vendor may have a virtual PCI device which provides DMA isolation and mapping through other mechanisms, but can re-use userspaces that make use of the type1 VFIO IOMMU API. This allows that to work. Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2016-02-22 | vfio/pci: Fix unsigned comparison overflow | Alex Williamson | 1 | -8/+9
Signed versus unsigned comparisons are implicitly cast to unsigned, which results in a couple of possible overflows. For instance (start + count) might overflow and wrap, getting through our validation test. Also when unwinding setup, -1 being compared as unsigned doesn't produce the intended stop condition. Fix both of these and also fix vfio_msi_set_vector_signal() to validate parameters before using the vector index, though none of the callers should pass bad indexes anymore. Reported-by: Eric Auger <eric.auger@linaro.org> Reviewed-by: Eric Auger <eric.auger@linaro.org> Tested-by: Eric Auger <eric.auger@linaro.org> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
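The overflow pattern described here is generic. A hedged illustration (not the actual vfio code) of why start + count can wrap past a bounds check, and one way to write the check so no addition can overflow:

    #include <stdbool.h>

    #define MAX_VECTORS 32u   /* illustrative limit */

    /* Broken: if start + count wraps past UINT_MAX, the sum compares small
     * and an invalid range slips through the check. */
    static bool range_ok_broken(unsigned int start, unsigned int count)
    {
            return start + count <= MAX_VECTORS;
    }

    /* Safer: rearranged so no addition can overflow. */
    static bool range_ok(unsigned int start, unsigned int count)
    {
            return start < MAX_VECTORS && count <= MAX_VECTORS - start;
    }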
2016-02-20 | Linux 4.5-rc5 | Linus Torvalds | 1 | -1/+1
2016-02-20 | kernel/resource.c: fix muxed resource handling in __request_region() | Simon Guinot | 1 | -2/+3
In __request_region, if a conflict with a BUSY and MUXED resource is detected, then the caller goes to sleep and waits for the resource to be released. A pointer to the conflicting resource is kept. At wake-up this pointer is used as a parent to retry requesting the region. A first problem is that this pointer might well be invalid (if for example the conflicting resource has already been freed). Another problem is that the next call to __request_region() fails to detect a remaining conflict. The previously conflicting resource is passed as a parameter and __request_region() will look for a conflict among the children of this resource and not at the resource itself. It is likely to succeed anyway, even if there is still a conflict. Instead, the parent of the conflicting resource should be passed to __request_region(). As a fix, this patch doesn't update the parent resource pointer in the case where we have to wait for a muxed region right after. Reported-and-tested-by: Vincent Pelletier <plr.vincent@gmail.com> Signed-off-by: Simon Guinot <simon.guinot@sequanux.org> Tested-by: Vincent Donnefort <vdonnefort@gmail.com> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-02-19 | arm64: mm: allow the kernel to handle alignment faults on user accesses | EunTaik Lee | 1 | -1/+8
Although we don't expect to take alignment faults on access to normal memory, misbehaving (i.e. buggy) user code can pass MMIO pointers into system calls, leading to things like get_user accessing device memory. Rather than OOPS the kernel, allow any exception fixups to run and return something like -EFAULT back to userspace. This makes the behaviour more consistent with userspace, even though applications with access to device mappings can easily cause other issues if they try hard enough. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Eun Taik Lee <eun.taik.lee@samsung.com> [will: dropped __kprobes annotation and rewrote commit message] Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-02-19 | arm64: kbuild: make "make install" not depend on vmlinux | Masahiro Yamada | 3 | -3/+17
For the same reason as commit 19514fc665ff ("arm, kbuild: make "make install" not depend on vmlinux"), the install targets should never trigger the rebuild of the kernel. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-02-19 | ext4: fix crashes in dioread_nolock mode | Jan Kara | 1 | -20/+20
Competing overwrite DIO in dioread_nolock mode will just overwrite pointer to io_end in the inode. This may result in data corruption or extent conversion happening from IO completion interrupt because we don't properly set buffer_defer_completion() when unlocked DIO races with locked DIO to unwritten extent. Since unlocked DIO doesn't need io_end for anything, just avoid allocating it and corrupting pointer from inode for locked DIO. A cleaner fix would be to avoid these games with io_end pointer from the inode but that requires more intrusive changes so we leave that for later. Cc: stable@vger.kernel.org Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2016-02-19 | ext4: fix bh->b_state corruption | Jan Kara | 1 | -2/+30
ext4 can update bh->b_state non-atomically in _ext4_get_block() and ext4_da_get_block_prep(). Usually this is fine since bh is just a temporary storage for mapping information on stack but in some cases it can be fully living bh attached to a page. In such case non-atomic update of bh->b_state can race with an atomic update which then gets lost. Usually when we are mapping bh and thus updating bh->b_state non-atomically, nobody else touches the bh and so things work out fine but there is one case to especially worry about: ext4_finish_bio() uses BH_Uptodate_Lock on the first bh in the page to synchronize handling of PageWriteback state. So when blocksize < pagesize, we can be atomically modifying bh->b_state of a buffer that actually isn't under IO and thus can race e.g. with delalloc trying to map that buffer. The result is that we can mistakenly set / clear BH_Uptodate_Lock bit resulting in the corruption of PageWriteback state or missed unlock of BH_Uptodate_Lock. Fix the problem by always updating bh->b_state bits atomically. CC: stable@vger.kernel.org Reported-by: Nikolay Borisov <kernel@kyup.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
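The race described boils down to mixing plain read-modify-write updates of a flags word with atomic bit operations on the same word. A kernel-style sketch of the problem class (illustrative only, not the ext4 patch itself):

    #include <linux/bitops.h>

    static unsigned long state;

    /* Plain read-modify-write: a concurrent atomic set_bit()/clear_bit() on
     * another bit of 'state' (e.g. a bit lock like BH_Uptodate_Lock) can be
     * lost, because the whole word is rewritten non-atomically. */
    static void set_flag_nonatomic(void)
    {
            state |= (1UL << 5);
    }

    /* Atomic update: safe against concurrent atomic bit operations. */
    static void set_flag_atomic(void)
    {
            set_bit(5, &state);
    }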
2016-02-19 | drm/nouveau: use post-decrement in error handling | Rasmus Villemoes | 1 | -1/+1
We need to use post-decrement to get the dma_map_page undone also for i==0, and to avoid some very unpleasant behaviour if dma_map_page failed already at i==0. Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Reviewed-by: Ben Skeggs <bskeggs@redhat.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
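The distinction is the classic unwind-loop idiom. A small generic sketch of the pattern (assumed shape, not the actual nouveau hunk):

    #include <linux/dma-mapping.h>

    static int map_pages(struct device *dev, struct page **pages,
                         dma_addr_t *addrs, int n)
    {
            int i;

            for (i = 0; i < n; i++) {
                    addrs[i] = dma_map_page(dev, pages[i], 0, PAGE_SIZE,
                                            DMA_BIDIRECTIONAL);
                    if (dma_mapping_error(dev, addrs[i]))
                            goto unwind;
            }
            return 0;

    unwind:
            /* Post-decrement: unmaps indexes i-1 .. 0 and does nothing when
             * the very first mapping (i == 0) failed.  Pre-decrement (--i)
             * would skip index 0 and underflow when i == 0. */
            while (i--)
                    dma_unmap_page(dev, addrs[i], PAGE_SIZE, DMA_BIDIRECTIONAL);
            return -ENOMEM;
    }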
2016-02-19 | drm/atomic: Allow for holes in connector state, v2. | Maarten Lankhorst | 4 | -55/+48
Because we record connector_mask using 1 << drm_connector_index now the connector_mask should stay the same even when other connectors are removed. This was not the case with MST, in that case when removing a connector all other connectors may change their index. This is fixed by waiting until the first get_connector_state to allocate connector_state, and force reallocation when state is too small. As a side effect connector arrays no longer have to be preallocated, and can be allocated on first use which means a less allocations in the page flip only path. Changes since v1: - Whitespace. (Ville) - Call ida_remove when destroying the connector. (Ville) - u32 alloc -> int. (Ville) Fixes: 14de6c44d149 ("drm/atomic: Remove drm_atomic_connectors_for_crtc.") Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Lyude <cpaul@redhat.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
2016-02-18 | clk: gpio: Really allow an optional clock= DT property | Stephen Boyd | 1 | -1/+1
We mis-merged the original patch from Russell here and so the patch went almost all the way, except that we still failed to probe when there wasn't a clocks property in the DT node. Allow that case by making a negative value from of_clk_get_parent_count() into "no parents", like the original patch did. Fixes: 7ed88aa2efa5 ("clk: fix clk-gpio.c with optional clock= DT property") Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Michael Turquette <mturquette@baylibre.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2016-02-18 | mm: slab: free kmem_cache_node after destroy sysfs file | Dmitry Safonov | 5 | -27/+29
When slub_debug alloc_calls_show is enabled we will try to track the location and user of each slab object on each online node; the kmem_cache_node structure and cpu_cache/cpu_slub shouldn't be freed until the last reference to the sysfs file is dropped. This fixes the following panic:

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000020
    IP: list_locations+0x169/0x4e0
    PGD 257304067 PUD 438456067 PMD 0
    Oops: 0000 [#1] SMP
    CPU: 3 PID: 973074 Comm: cat ve: 0 Not tainted 3.10.0-229.7.2.ovz.9.30-00007-japdoll-dirty #2 9.30
    Hardware name: DEPO Computers To Be Filled By O.E.M./H67DE3, BIOS L1.60c 07/14/2011
    task: ffff88042a5dc5b0 ti: ffff88037f8d8000 task.ti: ffff88037f8d8000
    RIP: list_locations+0x169/0x4e0
    Call Trace:
      alloc_calls_show+0x1d/0x30
      slab_attr_show+0x1b/0x30
      sysfs_read_file+0x9a/0x1a0
      vfs_read+0x9c/0x170
      SyS_read+0x58/0xb0
      system_call_fastpath+0x16/0x1b
    Code: 5e 07 12 00 b9 00 04 00 00 3d 00 04 00 00 0f 4f c1 3d 00 04 00 00 89 45 b0 0f 84 c3 00 00 00 48 63 45 b0 49 8b 9c c4 f8 00 00 00 <48> 8b 43 20 48 85 c0 74 b6 48 89 df e8 46 37 44 00 48 8b 53 10
    CR2: 0000000000000020

Separated __kmem_cache_release from __kmem_cache_shutdown; __kmem_cache_release is now called from slab_kmem_cache_release (after the last reference to the sysfs file object has been dropped). Reintroduced locking in free_partial as the sysfs file might access the cache's partial list after shutdown - a partial revert of commit 69cb8e6b7c29 ("slub: free slabs without holding locks"). Zapped __remove_partial and used remove_partial (w/o underscores) as free_partial now takes list_lock, which is a partial revert of commit 1e4dd9461fab ("slub: do not assert not having lock in removing freed partial"). Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com> Suggested-by: Vladimir Davydov <vdavydov@virtuozzo.com> Acked-by: Vladimir Davydov <vdavydov@virtuozzo.com> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-02-18 | ipc/shm: handle removed segments gracefully in shm_mmap() | Kirill A. Shutemov | 1 | -10/+43
remap_file_pages(2) emulation can reach a file which represents a removed IPC ID as long as a memory segment is mapped. It breaks the expectations of the IPC subsystem. Test case (rewritten to be more human readable, originally autogenerated by syzkaller[1]):

    #define _GNU_SOURCE
    #include <stdlib.h>
    #include <sys/ipc.h>
    #include <sys/mman.h>
    #include <sys/shm.h>

    #define PAGE_SIZE 4096

    int main()
    {
            int id;
            void *p;

            id = shmget(IPC_PRIVATE, 3 * PAGE_SIZE, 0);
            p = shmat(id, NULL, 0);
            shmctl(id, IPC_RMID, NULL);
            remap_file_pages(p, 3 * PAGE_SIZE, 0, 7, 0);

            return 0;
    }

The patch changes shm_mmap() and code around shm_lock() to propagate the locking error back to the caller of shm_mmap(). [1] http://github.com/google/syzkaller Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Reported-by: Dmitry Vyukov <dvyukov@google.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Manfred Spraul <manfred@colorfullife.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-02-18 | MAINTAINERS: update Kselftest Framework mailing list | Shuah Khan | 1 | -1/+1
Kselftest Framework now has a dedicated mailing list linux-kselftest. Update the entry in MAINTAINERS file. Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-02-18 | devm_memremap_release(): fix memremap'd addr handling | Toshi Kani | 1 | -1/+1
The pmem driver calls devm_memremap() to map a persistent memory range. When the pmem driver is unloaded, this memremap'd range is not released so the kernel will leak a vma. Fix devm_memremap_release() to handle a given memremap'd address properly. Signed-off-by: Toshi Kani <toshi.kani@hpe.com> Acked-by: Dan Williams <dan.j.williams@intel.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Ross Zwisler <ross.zwisler@linux.intel.com> Cc: Matthew Wilcox <willy@linux.intel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-02-18 | mm/hugetlb.c: fix incorrect proc nr_hugepages value | Vaishali Thakkar | 1 | -2/+4
Currently an incorrect default hugepage pool size is reported by proc nr_hugepages when the number of pages for the default huge page size is specified twice. When multiple huge page sizes are supported, /proc/sys/vm/nr_hugepages indicates the current number of pre-allocated huge pages of the default size. Basically /proc/sys/vm/nr_hugepages displays default_hstate->max_huge_pages and after boot time pre-allocation, max_huge_pages should equal the number of pre-allocated pages (nr_hugepages). Test case: Note that this is specific to the x86 architecture. Boot the kernel with command line option 'default_hugepagesz=1G hugepages=X hugepagesz=2M hugepages=Y hugepagesz=1G hugepages=Z'. After boot, 'cat /proc/sys/vm/nr_hugepages' and 'sysctl -a | grep hugepages' returns the value X. However, dmesg output shows that Z huge pages were pre-allocated. So, the root cause of the problem here is that the global variable default_hstate_max_huge_pages is set if a default huge page size is specified (directly or indirectly) on the command line. After the command line processing in hugetlb_init, if default_hstate_max_huge_pages is set, the value is assigned to default_hstate.max_huge_pages. However, default_hstate.max_huge_pages may have already been set based on the number of pre-allocated huge pages of default_hstate size. The solution to this problem is that if hstate->max_huge_pages is already set then it should not be set from the global max_huge_pages value. Basically if the value of the variable hugepages is set multiple times on the command line for a specific supported hugepagesize then the proc layer should consider the last specified value. Signed-off-by: Vaishali Thakkar <vaishali.thakkar@oracle.com> Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Hillf Danton <hillf.zj@alibaba-inc.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-02-18 | mm, x86: fix pte_page() crash in gup_pte_range() | Hugh Dickins | 1 | -1/+1
Commit 3565fce3a659 ("mm, x86: get_user_pages() for dax mappings") has moved up the pte_page(pte) in x86's fast gup_pte_range(), for no discernible reason: put it back where it belongs, after the pte_flags check and the pfn_valid cross-check. That may be the cause of the NULL pointer dereference in gup_pte_range(), seen when vfio called vaddr_get_pfn() when starting a qemu-kvm based VM. Signed-off-by: Hugh Dickins <hughd@google.com> Reported-by: Michael Long <Harn-Solo@gmx.de> Tested-by: Michael Long <Harn-Solo@gmx.de> Acked-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-02-18 | fsnotify: turn fsnotify reaper thread into a workqueue job | Jeff Layton | 1 | -31/+18
We don't require a dedicated thread for fsnotify cleanup. Switch it over to a workqueue job instead that runs on the system_unbound_wq. In the interest of not thrashing the queued job too often when there are a lot of marks being removed, we delay the reaper job slightly when queueing it, to allow several to gather on the list. Signed-off-by: Jeff Layton <jeff.layton@primarydata.com> Tested-by: Eryu Guan <guaneryu@gmail.com> Reviewed-by: Jan Kara <jack@suse.cz> Cc: Eric Paris <eparis@parisplace.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
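The mechanism described (a delayed job on the unbound system workqueue instead of a dedicated kthread) looks roughly like the sketch below. Only the workqueue APIs are real; the fsnotify specifics (names, the exact delay) are illustrative assumptions:

    #include <linux/workqueue.h>

    #define REAPER_DELAY    msecs_to_jiffies(100)   /* assumed delay value */

    static void reap_marks(struct work_struct *work);
    static DECLARE_DELAYED_WORK(reaper_work, reap_marks);

    static void reap_marks(struct work_struct *work)
    {
            /* free everything queued on the destroy list */
    }

    /* called when a mark is queued for destruction */
    static void queue_reaper(void)
    {
            /* the small delay lets several marks gather before the job runs,
             * so the queued job isn't thrashed on every removal */
            queue_delayed_work(system_unbound_wq, &reaper_work, REAPER_DELAY);
    }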
2016-02-18Revert "fsnotify: destroy marks with call_srcu instead of dedicated thread"Jeff Layton2-18/+53
This reverts commit c510eff6beba ("fsnotify: destroy marks with call_srcu instead of dedicated thread"). Eryu reported that he was seeing some OOM kills kick in when running a testcase that adds and removes inotify marks on a file in a tight loop. The above commit changed the code to use call_srcu to clean up the marks. While that does (in principle) work, the srcu callback job is limited to cleaning up entries in small batches and only once per jiffy. It's easily possible to overwhelm that machinery with too many call_srcu callbacks, and Eryu's reproducer did just that. There's also another potential problem with using call_srcu here. While you can obviously sleep while holding the srcu_read_lock, the callbacks run under local_bh_disable, so you can't sleep there. It's possible when putting the last reference to the fsnotify_mark that we'll end up putting a chain of references including the fsnotify_group, uid, and associated keys. While I don't see any obvious ways that that could occur, it's probably still best to avoid using call_srcu here after all. This patch reverts the above patch. A later patch will take a different approach to eliminate the dedicated thread here. Signed-off-by: Jeff Layton <jeff.layton@primarydata.com> Reported-by: Eryu Guan <guaneryu@gmail.com> Tested-by: Eryu Guan <guaneryu@gmail.com> Cc: Jan Kara <jack@suse.com> Cc: Eric Paris <eparis@parisplace.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-02-18 | mm: fix regression in remap_file_pages() emulation | Kirill A. Shutemov | 1 | -5/+29
Grazvydas Ignotas has reported a regression in remap_file_pages() emulation. Testcase:

    #define _GNU_SOURCE
    #include <assert.h>
    #include <stdlib.h>
    #include <stdio.h>
    #include <sys/mman.h>

    #define SIZE (4096 * 3)

    int main(int argc, char **argv)
    {
            unsigned long *p;
            long i;

            p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) {
                    perror("mmap");
                    return -1;
            }

            for (i = 0; i < SIZE / 4096; i++)
                    p[i * 4096 / sizeof(*p)] = i;

            if (remap_file_pages(p, 4096, 0, 1, 0)) {
                    perror("remap_file_pages");
                    return -1;
            }

            if (remap_file_pages(p, 4096 * 2, 0, 1, 0)) {
                    perror("remap_file_pages");
                    return -1;
            }

            assert(p[0] == 1);

            munmap(p, SIZE);
            return 0;
    }

The second remap_file_pages() fails with -EINVAL. The reason is that remap_file_pages() emulation assumes that the target vma covers the whole area we want to over-map. That assumption is broken by the first remap_file_pages() call: it splits the area into two vmas. The solution is to check the next adjacent vmas and see if they map the same file with the same flags. Fixes: c8d78c1823f4 ("mm: replace remap_file_pages() syscall with emulation") Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Reported-by: Grazvydas Ignotas <notasas@gmail.com> Tested-by: Grazvydas Ignotas <notasas@gmail.com> Cc: <stable@vger.kernel.org> [4.0+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-02-18 | thp, dax: do not try to withdraw pgtable from non-anon VMA | Kirill A. Shutemov | 1 | -1/+2
DAX doesn't deposit pgtables when it maps huge pages: there is nothing to withdraw. This can lead to a crash. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Matthew Wilcox <willy@linux.intel.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Ross Zwisler <ross.zwisler@linux.intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-02-18 | ALSA: pcm: Fix rwsem deadlock for non-atomic PCM stream | Takashi Iwai | 1 | -2/+14
A non-atomic PCM stream may take the snd_pcm_link_rwsem rw semaphore twice in the same code path, e.g. once in snd_pcm_action_nonatomic() and again in snd_pcm_stream_lock(). Usually this is OK, but when a write lock is issued between these two read locks, the problem happens: the write lock is blocked due to the first read lock, and the second read lock is also blocked by the write lock. This eventually deadlocks. The reason is the way rwsem manages waiters; they are queued FIFO, so even if the writer itself doesn't hold the lock yet, it blocks all the waiters (including readers) queued after it. As a workaround, in this patch, we replace the standard down_write() with a spinning loop. This is far from optimal, but it's good enough, as the spinning time is supposed to be relatively short for normal PCM operations, and the code paths requiring the write lock aren't called so often. Reported-by: Vinod Koul <vinod.koul@intel.com> Tested-by: Ramesh Babu <ramesh.babu@intel.com> Cc: <stable@vger.kernel.org> # v3.18+ Signed-off-by: Takashi Iwai <tiwai@suse.de>
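The workaround described above, spinning on the write lock instead of queueing behind pending readers, can be pictured roughly as follows; the helper name and the 1 ms back-off are assumptions, not the exact patch:

    #include <linux/rwsem.h>
    #include <linux/delay.h>

    /* Sketch of the spinning write-lock workaround described above. */
    static void pcm_down_write_spin(struct rw_semaphore *sem)
    {
            /* Unlike down_write(), this never queues behind pending readers,
             * so it cannot take part in the read/write/read deadlock. */
            while (!down_write_trylock(sem))
                    msleep(1);
    }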
2016-02-18 | x86/mm: Fix vmalloc_fault() to handle large pages properly | Toshi Kani | 1 | -4/+11
A kernel page fault oops with the callstack below was observed when a read syscall was made to a pmem device after a huge amount (>512GB) of vmalloc ranges was allocated by ioremap() on an x86_64 system:

    BUG: unable to handle kernel paging request at ffff880840000ff8
    IP: vmalloc_fault+0x1be/0x300
    PGD c7f03a067 PUD 0
    Oops: 0000 [#1] SM
    Call Trace:
      __do_page_fault+0x285/0x3e0
      do_page_fault+0x2f/0x80
      ? put_prev_entity+0x35/0x7a0
      page_fault+0x28/0x30
      ? memcpy_erms+0x6/0x10
      ? schedule+0x35/0x80
      ? pmem_rw_bytes+0x6a/0x190 [nd_pmem]
      ? schedule_timeout+0x183/0x240
      btt_log_read+0x63/0x140 [nd_btt]
      :
      ? __symbol_put+0x60/0x60
      ? kernel_read+0x50/0x80
      SyS_finit_module+0xb9/0xf0
      entry_SYSCALL_64_fastpath+0x1a/0xa4

Since v4.1, ioremap() supports large page (pud/pmd) mappings in x86_64 and PAE. vmalloc_fault() however assumes that the vmalloc range is limited to pte mappings. vmalloc faults do not normally happen in ioremap'd ranges since ioremap() sets up the kernel page tables, which are shared by user processes. pgd_ctor() sets the kernel's PGD entries to user's during fork(). When allocation of the vmalloc ranges crosses a 512GB boundary, ioremap() allocates a new pud table and updates the kernel PGD entry to point to it. If a user process's PGD entry does not have this update yet, a read/write syscall to the range will cause a vmalloc fault, which hits the Oops above as it does not handle a large page properly.

The following changes are made to vmalloc_fault().

64-bit:
 - No change for the PGD sync operation as it handles large pages already.
 - Add pud_huge() and pmd_huge() to the validation code to handle large pages.
 - Change pud_page_vaddr() to pud_pfn() since an ioremap range is not directly mapped (while the if-statement still works with a bogus addr).
 - Change pmd_page() to pmd_pfn() since an ioremap range is not backed by struct page (while the if-statement still works with a bogus addr).

32-bit:
 - No change for the sync operation since the index3 PGD entry covers the entire vmalloc range, which is always valid. (A separate change to sync the PGD entry is necessary if this memory layout is changed regardless of the page size.)
 - Add pmd_huge() to the validation code to handle large pages. This is for completeness since vmalloc_fault() won't happen in ioremap'd ranges as its PGD entry is always valid.

Reported-by: Henning Schild <henning.schild@siemens.com> Signed-off-by: Toshi Kani <toshi.kani@hpe.com> Acked-by: Borislav Petkov <bp@alien8.de> Cc: <stable@vger.kernel.org> # 4.1+ Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Luis R. Rodriguez <mcgrof@suse.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Toshi Kani <toshi.kani@hp.com> Cc: linux-mm@kvack.org Cc: linux-nvdimm@lists.01.org Link: http://lkml.kernel.org/r/1455758214-24623-1-git-send-email-toshi.kani@hpe.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-02-17 | ftrace/module: remove ftrace module notifier | Jessica Yu | 3 | -37/+9
Remove the ftrace module notifier in favor of directly calling ftrace_module_enable() and ftrace_release_mod() in the module loader. Hard-coding the function calls directly in the module loader removes dependence on the module notifier call chain and provides better visibility and control over what gets called when, which is important to kernel utilities such as livepatch. This fixes a notifier ordering issue in which the ftrace module notifier (and hence ftrace_module_enable()) for coming modules was being called after klp_module_notify(), which caused livepatch modules to initialize incorrectly. This patch removes dependence on the module notifier call chain in favor of hard coding the corresponding function calls in the module loader. This ensures that ftrace and livepatch code get called in the correct order on patch module load and unload. Fixes: 5156dca34a3e ("ftrace: Fix the race between ftrace and insmod") Signed-off-by: Jessica Yu <jeyu@redhat.com> Reviewed-by: Steven Rostedt <rostedt@goodmis.org> Reviewed-by: Petr Mladek <pmladek@suse.cz> Acked-by: Rusty Russell <rusty@rustcorp.com.au> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2016-02-17 | blk: fix overflow in queue_discard_max_hw_show | Alan | 1 | -3/+2
We get this right for queue_discard_max_show but not max_hw_show. Follow the same pattern as queue_discard_max_show instead so that we don't truncate. Signed-off-by: Alan Cox <alan@linux.intel.com> Signed-off-by: Jens Axboe <axboe@fb.com>
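Based on the description (follow the same non-truncating pattern as queue_discard_max_show), the fixed show routine presumably looks like the sketch below; the exact limit field name is an assumption:

    /* Sketch of the non-truncating pattern described above; the field name
     * max_hw_discard_sectors is assumed. */
    static ssize_t queue_discard_max_hw_show(struct request_queue *q, char *page)
    {
            /* Promote to 64 bits before shifting sectors into bytes so large
             * discard limits are not truncated on the way to sysfs. */
            return sprintf(page, "%llu\n",
                    (unsigned long long)q->limits.max_hw_discard_sectors << 9);
    }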
2016-02-17 | powerpc/ioda: Set "read" permission when "write" is set | Alexey Kardashevskiy | 1 | -0/+6
Quite often drivers set only "write" permission assuming that this includes "read" permission as well and this works on plenty of platforms. However IODA2 is strict about this and produces an EEH when "read" permission is not set and reading happens. This adds a workaround in the IODA code to always add the "read" bit when the "write" bit is set. Fixes: 10b35b2b7485 ("powerpc/powernv: Do not set "read" flag if direction==DMA_NONE") Cc: stable@vger.kernel.org # 4.2+ Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Tested-by: Douglas Miller <dougmill@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-02-17 | arm64: dma-mapping: fix handling of devices registered before arch_initcall | Marek Szyprowski | 1 | -0/+4
This patch ensures that devices, which got registered before arch_initcall will be handled correctly by IOMMU-based DMA-mapping code. Cc: <stable@vger.kernel.org> Fixes: 13b8629f6511 ("arm64: Add IOMMU dma_ops") Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-02-17 | drm/i915: Fix hpd live status bits for g4x | Ville Syrjälä | 2 | -14/+15
Looks like g4x hpd live status bits actually agree with the spec. At least they do on the machine I have, and apparently on Nick Bowler's g4x as well. So gm45 may be the only platform where they don't agree. At least that seems to be the case based on the (somewhat incomplete) logs/dumps in [1], and Daniel has also tested this on his gm45 sometime in the past. So let's change the bits to match the spec on g4x. That actually makes the g4x bits identical to vlv/chv so we can just share the code between those platforms, leaving gm45 as the special case. [1] https://bugzilla.kernel.org/show_bug.cgi?id=52361 Cc: Shashank Sharma <shashank.sharma@intel.com> Cc: Sonika Jindal <sonika.jindal@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Nick Bowler <nbowler@draconx.ca> References: https://lists.freedesktop.org/archives/dri-devel/2016-February/100382.html Reported-by: Nick Bowler <nbowler@draconx.ca> Cc: stable@vger.kernel.org Fixes: 237ed86c693d ("drm/i915: Check live status before reading edid") Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1455127145-20087-1-git-send-email-ville.syrjala@linux.intel.com Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> (cherry picked from commit 0780cd36c7af70c55981ee624084f0f48cae9b95) Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2016-02-17 | perf/core: Plug potential memory leak in CPU_UP_PREPARE | Thomas Gleixner | 1 | -1/+1
If CPU_UP_PREPARE is called it is not guaranteed that a previously allocated and assigned hash has been freed already, but perf_event_init_cpu() unconditionally allocates and assigns a new hash if the swhash is referenced. By overwriting the pointer the existing hash is no longer accessible. Verify that there is no hash assigned on this cpu before allocating and assigning a new one. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Vince Weaver <vincent.weaver@maine.edu> Link: http://lkml.kernel.org/r/20160209201007.843269966@linutronix.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
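Conceptually the fix is a guard before the allocation. A heavily hedged sketch with illustrative names (not the actual perf internals):

    /* Illustrative sketch: allocate only when no hash is already assigned
     * for this CPU, so a leftover hash from a previous hotplug cycle is
     * neither leaked nor made unreachable. */
    static int cpu_hash_prepare(struct cpu_hash *h)
    {
            if (h->table)   /* already allocated on a previous UP_PREPARE */
                    return 0;

            h->table = kzalloc(sizeof(*h->table), GFP_KERNEL);
            return h->table ? 0 : -ENOMEM;
    }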
2016-02-17 | perf/core: Remove the bogus and dangerous CPU_DOWN_FAILED hotplug state | Thomas Gleixner | 1 | -1/+0
If CPU_DOWN_PREPARE fails the perf hotplug notifier is called for CPU_DOWN_FAILED and calls perf_event_init_cpu(), which checks whether the swhash is referenced. If yes it allocates a new hash and stores the pointer in the per cpu data structure. But at this point the cpu is still online, so there must be a valid hash already. By overwriting the pointer the existing hash is no longer accessible. Remove the CPU_DOWN_FAILED state, as there is nothing to (re)allocate. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Vince Weaver <vincent.weaver@maine.edu> Link: http://lkml.kernel.org/r/20160209201007.763417379@linutronix.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-02-17 | perf/core: Remove bogus UP_CANCELED hotplug state | Thomas Gleixner | 1 | -1/+0
If CPU_UP_PREPARE fails the perf hotplug code calls perf_event_exit_cpu(), which is a pointless exercise. The cpu is not online, so the smp function calls return -ENXIO. So the result is a list walk to call noops. Remove it. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Vince Weaver <vincent.weaver@maine.edu> Link: http://lkml.kernel.org/r/20160209201007.682184765@linutronix.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-02-17 | hpet: Drop stale URLs | Michael S. Tsirkin | 3 | -6/+4
Looks like the HPET spec at intel.com got moved. It isn't hard to find so drop the link, just mention the revision assumed. Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Clemens Ladisch <clemens@ladisch.de> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-doc@vger.kernel.org Link: http://lkml.kernel.org/r/1455145462-3877-1-git-send-email-mst@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-02-17 | s390/dasd: fix performance drop | Stefan Haberland | 1 | -0/+1
Commit ca369d51b ("sd: Fix device-imposed transfer length limits") introduced a new queue limit max_dev_sectors which limits the maximum sectors for requests. The default value leads to small dasd requests and therefore to a performance drop. Set the max_dev_sectors value to the same value as the max_hw_sectors to use the maximum available request size for DASD devices. Signed-off-by: Stefan Haberland <sth@linux.vnet.ibm.com> Cc: stable@vger.kernel.org # 4.4+ Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-02-17 | x86/uaccess/64: Handle the caching of 4-byte nocache copies properly in __copy_user_nocache() | Toshi Kani | 1 | -4/+32
Data corruption issues were observed in tests which initiated a system crash/reset while accessing BTT devices. This problem is reproducible. The BTT driver calls pmem_rw_bytes() to update data in pmem devices. This interface calls __copy_user_nocache(), which uses non-temporal stores so that the stores to pmem are persistent. __copy_user_nocache() uses non-temporal stores when a request size is 8 bytes or larger (and is aligned by 8 bytes). The BTT driver updates the BTT map table, which entry size is 4 bytes. Therefore, updates to the map table entries remain cached, and are not written to pmem after a crash. Change __copy_user_nocache() to use non-temporal store when a request size is 4 bytes. The change extends the current byte-copy path for a less-than-8-bytes request, and does not add any overhead to the regular path. Reported-and-tested-by: Micah Parrish <micah.parrish@hpe.com> Reported-and-tested-by: Brian Boylston <brian.boylston@hpe.com> Signed-off-by: Toshi Kani <toshi.kani@hpe.com> Cc: <stable@vger.kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Borislav Petkov <bp@suse.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Luis R. Rodriguez <mcgrof@suse.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ross Zwisler <ross.zwisler@linux.intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Toshi Kani <toshi.kani@hp.com> Cc: Vishal Verma <vishal.l.verma@intel.com> Cc: linux-nvdimm@lists.01.org Link: http://lkml.kernel.org/r/1455225857-12039-3-git-send-email-toshi.kani@hpe.com [ Small readability edits. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-02-17 | x86/uaccess/64: Make the __copy_user_nocache() assembly code more readable | Toshi Kani | 1 | -41/+73
Add comments to __copy_user_nocache() to clarify its procedures and alignment requirements. Also change numeric branch target labels to named local labels. No code changed:

    arch/x86/lib/copy_user_64.o:

       text    data     bss     dec     hex filename
       1239       0       0    1239     4d7 copy_user_64.o.before
       1239       0       0    1239     4d7 copy_user_64.o.after

    md5:
       58bed94c2db98c1ca9a2d46d0680aaae  copy_user_64.o.before.asm
       58bed94c2db98c1ca9a2d46d0680aaae  copy_user_64.o.after.asm

Signed-off-by: Toshi Kani <toshi.kani@hpe.com> Cc: <stable@vger.kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Borislav Petkov <bp@suse.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Luis R. Rodriguez <mcgrof@suse.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Toshi Kani <toshi.kani@hp.com> Cc: brian.boylston@hpe.com Cc: dan.j.williams@intel.com Cc: linux-nvdimm@lists.01.org Cc: micah.parrish@hpe.com Cc: ross.zwisler@linux.intel.com Cc: vishal.l.verma@intel.com Link: http://lkml.kernel.org/r/1455225857-12039-2-git-send-email-toshi.kani@hpe.com [ Small readability edits and added object file comparison. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-02-17 | s390/maccess: reduce stnsm instructions | Heiko Carstens | 1 | -4/+6
When fixing the DAT off bug ("s390: fix DAT off memory access, e.g. on kdump") both Christian and I missed that we can save an additional stnsm instruction. This saves us a couple of cycles which could improve the speed of memcpy_real. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-02-17 | perf/x86/amd/uncore: Plug reference leak | Thomas Gleixner | 1 | -0/+2
In the error path of amd_uncore_cpu_up_prepare() the newly allocated uncore struct is freed, but the percpu pointer still references it. Set it to NULL. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Vince Weaver <vincent.weaver@maine.edu> Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1602162302170.19512@nanos Signed-off-by: Ingo Molnar <mingo@kernel.org>
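The pattern in question, sketched generically with illustrative names (not the amd_uncore code itself): after freeing the object in the error path, clear the per-CPU pointer so nothing dereferences or double-frees it later.

    /* Sketch of the error-path pattern described above; names illustrative. */
    static int uncore_cpu_up_prepare(unsigned int cpu)
    {
            struct uncore *u = kzalloc_node(sizeof(*u), GFP_KERNEL,
                                            cpu_to_node(cpu));

            if (!u)
                    return -ENOMEM;

            if (init_uncore(u)) {                   /* a later step fails */
                    kfree(u);
                    *per_cpu_ptr(uncore_ptr, cpu) = NULL; /* drop the dangling reference */
                    return -ENOMEM;
            }

            *per_cpu_ptr(uncore_ptr, cpu) = u;
            return 0;
    }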
2016-02-17 | drm/qxl: fix erroneous return value | Anton Protopopov | 1 | -1/+1
The qxl_gem_prime_mmap() function returns ENOSYS instead of -ENOSYS Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
2016-02-17 | drm/nouveau/display: Enable vblank irqs after display engine is on again. | Mario Kleiner | 1 | -4/+4
In the display resume path, move the calls to drm_vblank_on() after the point when the display engine is running again. Since changes were made to drm_update_vblank_count() in Linux 4.4+ to emulate hw vblank counters via vblank timestamping, the function drm_vblank_on() now needs working high precision vblank timestamping and therefore working scanout position queries at time of call. These don't work before the display engine gets restarted, causing miscalculation of vblank counter increments and thereby large forward jumps in vblank count at display resume. These jumps can cause client hangs on resume, or desktop hangs in the case of composited desktops. Fix this Linux 4.4 regression by reordering calls accordingly. Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com> Cc: <stable@vger.kernel.org> # 4.4+ Cc: Ben Skeggs <bskeggs@redhat.com> Cc: ville.syrjala@linux.intel.com Cc: daniel.vetter@ffwll.ch Cc: dri-devel@lists.freedesktop.org Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com>
2016-02-17 | drm/radeon/pm: Handle failure of drm_vblank_get. | Mario Kleiner | 1 | -2/+6
Make sure that drm_vblank_get/put() stay balanced in case drm_vblank_get fails, by skipping the corresponding put. Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: michel@daenzer.net Cc: dri-devel@lists.freedesktop.org Cc: alexander.deucher@amd.com Cc: christian.koenig@amd.com Signed-off-by: Dave Airlie <airlied@redhat.com>
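The balanced pattern described above, as a hedged sketch rather than the exact radeon hunk:

    #include <drm/drmP.h>

    /* Sketch of keeping drm_vblank_get()/put() balanced. */
    static void wait_for_vblank_if_possible(struct drm_device *dev, unsigned int crtc)
    {
            if (drm_vblank_get(dev, crtc) != 0)
                    return;         /* get failed: skip the matching put */

            /* ... wait on the vblank counter / event here ... */

            drm_vblank_put(dev, crtc);
    }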
2016-02-17 | drm: Fix treatment of drm_vblank_offdelay in drm_vblank_on() (v2) | Mario Kleiner | 1 | -2/+1
drm_vblank_offdelay can have three different types of values:

    < 0 is to be always treated the same as dev->vblank_disable_immediate
    = 0 is to be treated as "never disable vblanks"
    > 0 is to be treated as disable immediate if kms driver wants it that way via dev->vblank_disable_immediate. Otherwise it is a disable timeout in msecs.

This got broken in Linux 3.18+ for the implementation of drm_vblank_on. If the user specified a value of zero which should always reenable vblank irqs in this function, a kms driver could override the users choice by setting vblank_disable_immediate to true. This patch fixes the regression and keeps the user in control. v2: Only reenable vblank if there are clients left or the user requested to "never disable vblanks" via offdelay 0. Enabling vblanks even in the "delayed disable" case (offdelay > 0) was specifically added by Ville in commit cd19e52aee922 ("drm: Kick start vblank interrupts at drm_vblank_on()"), but after discussion it turns out that this was done by accident. Citing Ville: "I think it just ended up as a mess due to changing some of the semantics of offdelay<0 vs. offdelay==0 vs. disable_immediate during the review of the series. So yeah, given how drm_vblank_put() works now, I'd just make this check for offdelay==0." Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: <stable@vger.kernel.org> # 3.18+ Cc: michel@daenzer.net Cc: vbabka@suse.cz Cc: ville.syrjala@linux.intel.com Cc: daniel.vetter@ffwll.ch Cc: dri-devel@lists.freedesktop.org Cc: alexander.deucher@amd.com Cc: christian.koenig@amd.com Signed-off-by: Dave Airlie <airlied@redhat.com>
2016-02-17 | drm: Fix drm_vblank_pre/post_modeset regression from Linux 4.4 | Mario Kleiner | 1 | -0/+16
Changes to drm_update_vblank_count() in Linux 4.4 broke the behaviour of the pre/post modeset functions as the new update code doesn't deal with hw vblank counter resets in between calls to drm_vblank_pre_modeset and drm_vblank_post_modeset, as it should. This causes mistreatment of such hw counter resets as counter wraparound, and thereby large forward jumps of the software vblank counter which in turn cause vblank event dispatching and vblank waits to fail/hang --> userspace clients hang. This symptom was reported on radeon-kms to cause an infinite hang of KDE Plasma 5 shell's login procedure, preventing users from logging in. Fix this by detecting when drm_update_vblank_count() is called inside a pre->post modeset interval. If so, clamp valid vblank increments to the safe values 0 and 1, pretty much restoring the update behavior of the old update code of Linux 4.3 and earlier. Also reset the last recorded hw vblank count at the call to drm_vblank_post_modeset() to be safe against hw that after modesetting, dpms on etc. only fires its first vblank irq after drm_vblank_post_modeset() was already called. Reported-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Tested-by: Vlastimil Babka <vbabka@suse.cz> Cc: <stable@vger.kernel.org> # 4.4+ Cc: michel@daenzer.net Cc: vbabka@suse.cz Cc: ville.syrjala@linux.intel.com Cc: daniel.vetter@ffwll.ch Cc: dri-devel@lists.freedesktop.org Cc: alexander.deucher@amd.com Cc: christian.koenig@amd.com Signed-off-by: Dave Airlie <airlied@redhat.com>