path: root/tools/perf/scripts/python/export-to-sqlite.py
2019-10-18  ACPI: CPPC: Set pcc_data[pcc_ss_id] to NULL in acpi_cppc_processor_exit()  (John Garry, 1 file, -1/+1)
When enabling KASAN and DEBUG_TEST_DRIVER_REMOVE, I find this KASAN warning (trimmed):

  BUG: KASAN: use-after-free in pcc_data_alloc+0x40/0xb8
  Read of size 4 at addr ffff00236cdeb684 by task swapper/0/1
  CPU: 19 PID: 1 Comm: swapper/0 Not tainted 5.4.0-rc1-00009-ge7f7df3db5bf-dirty #289
  Hardware name: Huawei D06 /D06, BIOS Hisilicon D06 UEFI RC0 - V1.16.01 03/15/2019
  Call trace:
    ...
    pcc_data_alloc+0x40/0xb8
    acpi_cppc_processor_probe+0x4e8/0xb08
    __acpi_processor_start+0x48/0xb0
    acpi_processor_start+0x40/0x60
    really_probe+0x118/0x548
    ...

  Allocated by task 1:
    ...
    pcc_data_alloc+0x94/0xb8
    acpi_cppc_processor_probe+0x4e8/0xb08
    ...

  Freed by task 1:
    ...
    kfree+0x80/0x268
    acpi_cppc_processor_exit+0x1a8/0x1b8
    acpi_processor_stop+0x70/0x80
    really_probe+0x174/0x548
    ...

  The buggy address belongs to the object at ffff00236cdeb600
   which belongs to the cache kmalloc-256 of size 256
  The buggy address is located 132 bytes inside of
   256-byte region [ffff00236cdeb600, ffff00236cdeb700)

It seems that the global pcc_data[pcc_ss_id] can be freed in acpi_cppc_processor_exit(), but we may later reference this value, so NULLify it when freed. Also remove the useless setting of "pcc_channel_acquired" in the data we're about to free.

Fixes: 85b1407bf6d2 ("ACPI / CPPC: Make CPPC ACPI driver aware of PCC subspace IDs")
Signed-off-by: John Garry <john.garry@huawei.com>
Cc: 4.15+ <stable@vger.kernel.org> # 4.15+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
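A minimal userspace sketch of the pattern just fixed (the table and function names are stand-ins for the driver's pcc_data handling, not the actual kernel code):

```c
#include <stdio.h>
#include <stdlib.h>

struct cppc_pcc_data { int pcc_channel_acquired; };

/* Stand-in for the driver's global per-subspace table. */
static struct cppc_pcc_data *pcc_data[8];

static struct cppc_pcc_data *pcc_data_alloc(int pcc_ss_id)
{
	/* Reuses an existing entry; only safe if stale entries are NULLed on free. */
	if (!pcc_data[pcc_ss_id])
		pcc_data[pcc_ss_id] = calloc(1, sizeof(*pcc_data[pcc_ss_id]));
	return pcc_data[pcc_ss_id];
}

static void processor_exit(int pcc_ss_id)
{
	free(pcc_data[pcc_ss_id]);
	pcc_data[pcc_ss_id] = NULL;	/* the fix: don't leave a dangling global pointer */
}

int main(void)
{
	pcc_data_alloc(0);
	processor_exit(0);
	/* Without the NULL assignment above, this would read freed memory (the KASAN report). */
	printf("reallocated: %p\n", (void *)pcc_data_alloc(0));
	return 0;
}
```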
2019-10-17  coccinelle: api/devm_platform_ioremap_resource: remove useless script  (Alexandre Belloni, 1 file, -60/+0)
While it is useful for new drivers to use devm_platform_ioremap_resource, this script is currently used to spam maintainers, often updating very old drivers. The net benefit is the removal of 2 lines of code in the driver but the review load for the maintainers is huge. As of now, more than 560 patches have been sent, some of them obviously broken, as in: https://lore.kernel.org/lkml/9bbcce19c777583815c92ce3c2ff2586@www.loen.fr/ Remove the script to reduce the spam. Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com> Acked-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-17  ALSA: hda - Force runtime PM on Nvidia HDMI codecs  (Lukas Wunner, 1 file, -0/+2)
Przemysław Kopa reports that since commit b516ea586d71 ("PCI: Enable NVIDIA HDA controllers"), the discrete GPU Nvidia GeForce GT 540M on his 2011 Samsung laptop refuses to runtime suspend, resulting in a power regression and excessive heat. Rivera Valdez witnesses the same issue with a GeForce GT 525M (GF108M) of the same era, as does another Arch Linux user named "R0AR" with a more recent GeForce GTX 1050 Ti (GP107M). The commit exposes the discrete GPU's HDA controller and all four codecs on the controller do not set the CLKSTOP and EPSS bits in the Supported Power States Response. They also do not set the PS-ClkStopOk bit in the Get Power State Response. hda_codec_runtime_suspend() therefore does not call snd_hdac_codec_link_down(), which prevents each codec and the PCI device from runtime suspending. The same issue is present on some AMD discrete GPUs and we addressed it by forcing runtime PM despite the bits not being set, see commit 57cb54e53bdd ("ALSA: hda - Force to link down at runtime suspend on ATI/AMD HDMI"). Do the same for Nvidia HDMI codecs. Fixes: b516ea586d71 ("PCI: Enable NVIDIA HDA controllers") Link: https://bbs.archlinux.org/viewtopic.php?pid=1865512 Link: https://bugs.freedesktop.org/show_bug.cgi?id=75985#c81 Reported-by: Przemysław Kopa <prymoo@gmail.com> Reported-by: Rivera Valdez <riveravaldez@ysinembargo.com> Signed-off-by: Lukas Wunner <lukas@wunner.de> Cc: Daniel Drake <dan@reactivated.net> Cc: stable@vger.kernel.org # v5.3+ Link: https://lore.kernel.org/r/3086bc75135c1e3567c5bc4f3cc4ff5cbf7a56c2.1571324194.git.lukas@wunner.de Signed-off-by: Takashi Iwai <tiwai@suse.de>
2019-10-17  dm cache: fix bugs when a GFP_NOWAIT allocation fails  (Mikulas Patocka, 1 file, -26/+2)
GFP_NOWAIT allocation can fail anytime - it doesn't wait for memory being available and it fails if the mempool is exhausted and there is not enough memory. If we go down this path: map_bio -> mg_start -> alloc_migration -> mempool_alloc(GFP_NOWAIT) we can see that map_bio() doesn't check the return value of mg_start(), and the bio is leaked. If we go down this path: map_bio -> mg_start -> mg_lock_writes -> alloc_prison_cell -> dm_bio_prison_alloc_cell_v2 -> mempool_alloc(GFP_NOWAIT) -> mg_lock_writes -> mg_complete the bio is ended with an error - it is unacceptable because it could cause filesystem corruption if the machine ran out of memory temporarily. Change GFP_NOWAIT to GFP_NOIO, so that the mempool code will properly wait until memory becomes available. mempool_alloc with GFP_NOIO can't fail, so remove the code paths that deal with allocation failure. Cc: stable@vger.kernel.org Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2019-10-17  ALSA: hda/realtek - Enable headset mic on Asus MJ401TA  (Daniel Drake, 1 file, -0/+11)
On Asus MJ401TA (with Realtek ALC256), the headset mic is connected to pin 0x19, with default configuration value 0x411111f0 (indicating no physical connection). Enable this by quirking the pin. Mic jack detection was also tested and found to be working. This enables use of the headset mic on this product. Signed-off-by: Daniel Drake <drake@endlessm.com> Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20191017081501.17135-1-drake@endlessm.com Signed-off-by: Takashi Iwai <tiwai@suse.de>
2019-10-17  ALSA: usb-audio: Disable quirks for BOSS Katana amplifiers  (Szabolcs Szőke, 1 file, -0/+3)
BOSS Katana amplifiers cannot be used for recording or playback if quirks are applied BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=195223 Signed-off-by: Szabolcs Szőke <szszoke.code@gmail.com> Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20191011171937.8013-1-szszoke.code@gmail.com Signed-off-by: Takashi Iwai <tiwai@suse.de>
2019-10-16  drm/i915: Fixup preempt-to-busy vs resubmission of a virtual request  (Chris Wilson, 1 file, -6/+26)
As preempt-to-busy leaves the request on the HW as the resubmission is processed, that request may complete in the background and even cause a second virtual request to enter queue. This second virtual request breaks our "single request in the virtual pipeline" assumptions. Furthermore, as the virtual request may be completed and retired, we lose the reference the virtual engine assumes is held. Normally, just removing the request from the scheduler queue removes it from the engine, but the virtual engine keeps track of its singleton request via its ve->request. This pointer needs protecting with a reference. v2: Drop unnecessary motion of rq->engine = owner Fixes: 22b7a426bbe1 ("drm/i915/execlists: Preempt-to-busy") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190923152844.8914-1-chris@chris-wilson.co.uk (cherry picked from commit b647c7df01b75761b4c0b1cb6f4841088c0b1121) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2019-10-16  drm/i915/userptr: Never allow userptr into the mappable GGTT  (Chris Wilson, 5 files, -1/+19)
Daniel Vetter uncovered a nasty cycle in using the mmu-notifiers to invalidate userptr objects which also happen to be pulled into GGTT mmaps. That is, when we unbind the userptr object (on mmu invalidation), we revoke all CPU mmaps, which may then recurse into mmu invalidation.

We looked for ways of breaking the cycle, but the revocation on invalidation is required and cannot be avoided. The only solution we could see was to not allow such GGTT bindings of userptr objects in the first place. In practice, no one really wants to use a GGTT mmapping of a CPU pointer...

Just before Daniel's explosive lockdep patches land in v5.4-rc1, we got a genuine blip from CI (lockdep splat trimmed):

  WARNING: possible circular locking dependency detected
  5.3.0-gbd6c56f50d15-drmtip_372+ #1 Tainted: G U
  kswapd0/145 is trying to acquire lock:
    (&dev->struct_mutex/1){+.+.}, at: userptr_mn_invalidate_range_start+0x18f/0x220 [i915]
  but task is already holding lock:
    (&anon_vma->rwsem){++++}, at: page_lock_anon_vma_read+0xe6/0x2a0

  Chain exists of:
    &dev->struct_mutex/1 --> &mapping->i_mmap_rwsem --> &anon_vma->rwsem

  Possible unsafe locking scenario:
        CPU0                    CPU1
        ----                    ----
   lock(&anon_vma->rwsem);
                                lock(&mapping->i_mmap_rwsem);
                                lock(&anon_vma->rwsem);
   lock(&dev->struct_mutex/1);

   *** DEADLOCK ***

v2: Say no to mmap_ioctl

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111744
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111870
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190928082546.3473-1-chris@chris-wilson.co.uk
(cherry picked from commit a4311745bba9763e3c965643d4531bd5765b0513)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2019-10-16  drm/i915: Favor last VBT child device with conflicting AUX ch/DDC pin  (Ville Syrjälä, 1 file, -6/+16)
The first come, first served approach to handling the VBT child device AUX ch conflicts has backfired. We have machines in the wild where the VBT specifies both port A eDP and port E DP (in that order) with port E being the real one. So let's try to flip the preference around and let the last child device win once again. Cc: stable@vger.kernel.org Cc: Jani Nikula <jani.nikula@intel.com> Tested-by: Masami Ichikawa <masami256@gmail.com> Tested-by: Torsten <freedesktop201910@liggy.de> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111966 Fixes: 36a0f92020dc ("drm/i915/bios: make child device order the priority order") Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191011202030.8829-1-ville.syrjala@linux.intel.com Acked-by: Jani Nikula <jani.nikula@intel.com> (cherry picked from commit 41e35ffb380bde1379e4030bb5b2ac824d5139cf) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2019-10-16  drm/i915/execlists: Refactor -EIO markup of hung requests  (Chris Wilson, 1 file, -14/+17)
Pull setting -EIO on the hung requests into its own utility function. Having allowed ourselves to short-circuit submission of completed requests, we can now do the mark_eio() prior to submission and avoid some redundant operations. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190923110056.15176-4-chris@chris-wilson.co.uk (cherry picked from commit 0d7cf7bc15e75bf79f2f65d61d19f896609f816a) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2019-10-16  arm64: tags: Preserve tags for addresses translated via TTBR1  (Will Deacon, 3 files, -8/+13)
Sign-extending TTBR1 addresses when converting to an untagged address breaks the documented POSIX semantics for mlock() in some obscure error cases where we end up returning -EINVAL instead of -ENOMEM as a direct result of rewriting the upper address bits. Rework the untagged_addr() macro to preserve the upper address bits for TTBR1 addresses and only clear the tag bits for user addresses. This matches the behaviour of the 'clear_address_tag' assembly macro, so rename that and align the implementations at the same time so that they use the same instruction sequences for the tag manipulation. Link: https://lore.kernel.org/stable/20191014162651.GF19200@arrakis.emea.arm.com/ Reported-by: Jan Stancek <jstancek@redhat.com> Tested-by: Jan Stancek <jstancek@redhat.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Will Deacon <will@kernel.org>
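A userspace sketch of the behaviour described above: tag bits are cleared only for user (TTBR0) addresses, while TTBR1 addresses keep their upper bits (illustrative only, not the kernel's untagged_addr() macro):

```c
#include <inttypes.h>
#include <stdio.h>

/* Bit 55 selects the translation regime: 0 => TTBR0 (user), 1 => TTBR1 (kernel). */
static uint64_t untag(uint64_t addr)
{
	if (addr & (1ULL << 55))
		return addr;			/* TTBR1 address: keep the upper bits intact */
	return addr & ~(0xffULL << 56);		/* user address: clear only the top-byte tag */
}

int main(void)
{
	uint64_t user   = 0x3f00ffff12345678ULL;	/* tagged user pointer (tag 0x3f) */
	uint64_t kernel = 0xffff000012345678ULL;	/* kernel (TTBR1) address */

	printf("user   0x%016" PRIx64 " -> 0x%016" PRIx64 "\n", user, untag(user));
	printf("kernel 0x%016" PRIx64 " -> 0x%016" PRIx64 "\n", kernel, untag(kernel));
	return 0;
}
```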
2019-10-16  arm64: mm: fix inverted PAR_EL1.F check  (Mark Rutland, 1 file, -1/+5)
When detecting a spurious EL1 translation fault, we have the CPU retry the translation using an AT S1E1R instruction, and inspect PAR_EL1 to determine if the fault was spurious. When PAR_EL1.F == 0, the AT instruction successfully translated the address without a fault, which implies the original fault was spurious. However, in this case we return false and treat the original fault as if it was not spurious. Invert the return value so that we treat such a case as spurious. Cc: Catalin Marinas <catalin.marinas@arm.com> Fixes: 42f91093b043 ("arm64: mm: Ignore spurious translation faults taken from the kernel") Tested-by: James Morse <james.morse@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
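The corrected logic as a tiny userspace sketch (PAR_EL1.F lives in bit 0, see the sysreg fix in the next entry; a clear F bit after the AT retry means the original fault was spurious):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SYS_PAR_EL1_F	(1ULL << 0)	/* fault bit is bit 0, not bit 1 */

/* Sketch: PAR_EL1 value captured after retrying the walk with AT S1E1R. */
static bool fault_is_spurious(uint64_t par)
{
	/* F == 0 => the retry translated fine, so the original fault was spurious. */
	return !(par & SYS_PAR_EL1_F);
}

int main(void)
{
	printf("PAR=0x0 -> spurious=%d\n", fault_is_spurious(0x0));
	printf("PAR=0x1 -> spurious=%d\n", fault_is_spurious(0x1));
	return 0;
}
```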
2019-10-16  arm64: sysreg: fix incorrect definition of SYS_PAR_EL1_F  (Yang Yingliang, 1 file, -1/+1)
The 'F' field of the PAR_EL1 register lives in bit 0, not bit 1. Fix the broken definition in 'sysreg.h'. Fixes: e8620cff9994 ("arm64: sysreg: Add some field definitions for PAR_EL1") Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-10-16  arm64: entry.S: Do not preempt from IRQ before all cpufeatures are enabled  (Julien Thierry, 3 files, -1/+20)
Preempting from IRQ-return means that the task has its PSTATE saved on the stack, which will get restored when the task is resumed and does the actual IRQ return. However, enabling some CPU features requires modifying the PSTATE. This means that, if a task was scheduled out during an IRQ-return before all CPU features are enabled, the task might restore a PSTATE that does not include the feature enablement changes once scheduled back in.

[Timeline diagram from the original commit message: Task 1 runs with PAN == 0, takes an IRQ and is preempted; init then runs enable_cpu_features and sets PSTATE.PAN on all CPUs; when Task 1 is rescheduled and returns from the IRQ, it restores the saved PSTATE.PAN = 0.]

Worse than this, since PSTATE is untouched when task switching is done, a task missing the new bits in PSTATE might affect another task, if both do direct calls to schedule() (outside of IRQ/exception contexts).

Fix this by preventing preemption on IRQ-return until features are enabled on all CPUs. This way the only PSTATE values that are saved on the stack are from synchronous exceptions. These are expected to be fatal this early; the exception is BRK for WARN_ON(), but as this uses do_debug_exception(), which keeps IRQs masked, it shouldn't call schedule().

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
[james: Replaced a really cool hack with an even simpler static key in C. Expanded commit message with Julien's cover-letter ascii art]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-10-16  kthread: make __kthread_queue_delayed_work static  (Ben Dooks, 1 file, -3/+3)
The __kthread_queue_delayed_work is not exported so make it static, to avoid the following sparse warning: kernel/kthread.c:869:6: warning: symbol '__kthread_queue_delayed_work' was not declared. Should it be static? Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-16  usercopy: Avoid soft lockups in test_check_nonzero_user()  (Michael Ellerman, 1 file, -3/+19)
On a machine with a 64K PAGE_SIZE, the nested for loops in test_check_nonzero_user() can lead to soft lockups, eg: watchdog: BUG: soft lockup - CPU#4 stuck for 22s! [modprobe:611] Modules linked in: test_user_copy(+) vmx_crypto gf128mul crc32c_vpmsum virtio_balloon ip_tables x_tables autofs4 CPU: 4 PID: 611 Comm: modprobe Tainted: G L 5.4.0-rc1-gcc-8.2.0-00001-gf5a1a536fa14-dirty #1151 ... NIP __might_sleep+0x20/0xc0 LR __might_fault+0x40/0x60 Call Trace: check_zeroed_user+0x12c/0x200 test_user_copy_init+0x67c/0x1210 [test_user_copy] do_one_initcall+0x60/0x340 do_init_module+0x7c/0x2f0 load_module+0x2d94/0x30e0 __do_sys_finit_module+0xc8/0x150 system_call+0x5c/0x68 Even with a 4K PAGE_SIZE the test takes multiple seconds. Instead tweak it to only scan a 1024 byte region, but make it cross the page boundary. Fixes: f5a1a536fa14 ("lib: introduce copy_struct_from_user() helper") Suggested-by: Aleksa Sarai <cyphar@cyphar.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Aleksa Sarai <cyphar@cyphar.com> Acked-by: Christian Brauner <christian.brauner@ubuntu.com> Link: https://lore.kernel.org/r/20191016122732.13467-1-mpe@ellerman.id.au Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
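A sketch of the tweak: instead of nesting loops over the whole page, check a fixed 1024-byte window placed so it straddles the page boundary (userspace illustration; sizes and names are made up, and the report came from a 64K PAGE_SIZE machine):

```c
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096
#define TEST_SPAN 1024

int main(void)
{
	static unsigned char buf[2 * PAGE_SIZE];
	/* Start half the span before the boundary so the scan crosses pages. */
	size_t start = PAGE_SIZE - TEST_SPAN / 2;
	size_t nonzero = 0;

	memset(buf, 0, sizeof(buf));
	buf[PAGE_SIZE + 3] = 0xff;	/* plant one non-zero byte just past the boundary */

	for (size_t i = start; i < start + TEST_SPAN; i++)
		nonzero += (buf[i] != 0);

	printf("scanned [%zu, %zu): %zu non-zero byte(s)\n", start, start + TEST_SPAN, nonzero);
	return 0;
}
```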
2019-10-16  ACPI: processor: Avoid NULL pointer dereferences at init time  (Rafael J. Wysocki, 2 files, -8/+12)
If there are neither processor objects nor processor device objects in the ACPI tables, the per-CPU processors table will not be initialized and attempting to dereference pointers from there will cause the kernel to crash. This happens in acpi_processor_ppc_init() and acpi_thermal_cpufreq_init() after commit d15ce412737a ("ACPI: cpufreq: Switch to QoS requests instead of cpufreq notifier") which didn't add the requisite NULL pointer checks in there. Add the NULL pointer checks to acpi_processor_ppc_init() and acpi_thermal_cpufreq_init(), and to the corresponding "exit" routines. While at it, drop redundant return instructions from acpi_processor_ppc_init() and acpi_thermal_cpufreq_init(). Fixes: d15ce412737a ("ACPI: cpufreq: Switch to QoS requests instead of cpufreq notifier") Reported-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
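A minimal userspace stand-in for the added guard (the table and function names are illustrative; the real fix checks the per-CPU processors entries in acpi_processor_ppc_init() and acpi_thermal_cpufreq_init()):

```c
#include <stdio.h>

struct acpi_processor { int acpi_id; };

/* Stand-in for the per-CPU processors table, which may stay empty when the
 * ACPI tables contain no processor (device) objects. */
static struct acpi_processor *processors[4];

static void ppc_init_one(unsigned int cpu)
{
	struct acpi_processor *pr = processors[cpu];

	if (!pr)		/* the added NULL check: nothing to do for this CPU */
		return;
	printf("cpu%u: acpi_id %d\n", cpu, pr->acpi_id);
}

int main(void)
{
	for (unsigned int cpu = 0; cpu < 4; cpu++)
		ppc_init_one(cpu);	/* no dereference of NULL entries */
	return 0;
}
```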
2019-10-16  xtensa: fix change_bit in exclusive access option  (Max Filippov, 1 file, -1/+1)
The change_bit implementation for the XCHAL_HAVE_EXCLUSIVE case changes all bits except the one required, due to a copy-paste error from clear_bit. Cc: stable@vger.kernel.org # v5.2+ Fixes: f7c34874f04a ("xtensa: add exclusive atomics support") Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
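The effect of the copy-paste bug, shown with plain C mask arithmetic (the real code is xtensa exclusive-access assembly, so this is only an illustration):

```c
#include <inttypes.h>
#include <stdio.h>

/* Buggy variant: the clear_bit-style ~mask was pasted in, so the XOR flips
 * every bit except the requested one. */
static uint32_t change_bit_buggy(uint32_t word, unsigned int bit)
{
	return word ^ ~(UINT32_C(1) << bit);
}

/* Fixed variant: toggle only the requested bit. */
static uint32_t change_bit_fixed(uint32_t word, unsigned int bit)
{
	return word ^ (UINT32_C(1) << bit);
}

int main(void)
{
	printf("buggy: 0x%08" PRIx32 "\n", change_bit_buggy(0, 3));	/* 0xfffffff7 */
	printf("fixed: 0x%08" PRIx32 "\n", change_bit_fixed(0, 3));	/* 0x00000008 */
	return 0;
}
```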
2019-10-15  Revert "Input: elantech - enable SMBus on new (2018+) systems"  (Kai-Heng Feng, 1 file, -26/+29)
This reverts commit 883a2a80f79ca5c0c105605fafabd1f3df99b34c. Apparently using dmi_get_bios_year() as the manufacturing date isn't accurate, and this breaks older laptops with new BIOS updates. So let's revert this patch. There are still new HP laptops that need to use SMBus to support all features, but that will be enabled via a whitelist. Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com> Acked-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20191001070845.9720-1-kai.heng.feng@canonical.com Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
2019-10-15  PCI: PM: Fix pci_power_up()  (Rafael J. Wysocki, 1 file, -13/+11)
There is an arbitrary difference between the system resume and runtime resume code paths for PCI devices regarding the delay to apply when switching the devices from D3cold to D0. Namely, pci_restore_standard_config() used in the runtime resume code path calls pci_set_power_state() which in turn invokes __pci_start_power_transition() to power up the device through the platform firmware and that function applies the transition delay (as per PCI Express Base Specification Revision 2.0, Section 6.6.1). However, pci_pm_default_resume_early() used in the system resume code path calls pci_power_up() which doesn't apply the delay at all and that causes issues to occur during resume from suspend-to-idle on some systems where the delay is required. Since there is no reason for that difference to exist, modify pci_power_up() to follow pci_set_power_state() more closely and invoke __pci_start_power_transition() from there to call the platform firmware to power up the device (in case that's necessary). Fixes: db288c9c5f9d ("PCI / PM: restore the original behavior of pci_set_power_state()") Reported-by: Daniel Drake <drake@endlessm.com> Tested-by: Daniel Drake <drake@endlessm.com> Link: https://lore.kernel.org/linux-pm/CAD8Lp44TYxrMgPLkHCqF9hv6smEurMXvmmvmtyFhZ6Q4SE+dig@mail.gmail.com/T/#m21be74af263c6a34f36e0fc5c77c5449d9406925 Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Cc: 3.10+ <stable@vger.kernel.org> # 3.10+
2019-10-15  xtensa: virt: fix PCI IO ports mapping  (Max Filippov, 1 file, -1/+1)
virt device tree incorrectly uses 0xf0000000 on both sides of PCI IO ports address space mapping. This results in incorrect port address assignment in PCI IO BARs and subsequent crash on attempt to access them. Use 0 as base address in PCI IO ports address space. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2019-10-15  sparc64: disable fast-GUP due to unexplained oopses  (Linus Torvalds, 1 file, -1/+0)
HAVE_FAST_GUP enables the lockless quick page table walker for simple cases, and is a nice optimization for some random loads that can then use get_user_pages_fast() rather than the more careful page walker. However, for some unexplained reason, it seems to be subtly broken on sparc64. The breakage is only with some compiler versions and some hardware, and nobody seems to have figured out what triggers it, although there's a simple reproducer for the problem when it does trigger. The problem was introduced with the conversion to the generic GUP code in commit 7b9afb86b632 ("sparc64: use the generic get_user_pages_fast code"), but nothing looks obviously wrong in that conversion. It may be a compiler bug that just hits us with the code reorganization. Or it may be something very specific to sparc64. This disables HAVE_FAST_GUP entirely. That makes things like futexes a bit slower, but at least they work. If we can figure out the trigger, that would be lovely, but it's been three months already.. Link: https://lore.kernel.org/lkml/20190717215956.GA30369@altlinux.org/ Fixes: 7b9afb86b632 ("sparc64: use the generic get_user_pages_fast code") Reported-by: Dmitry V Levin <ldv@altlinux.org> Reported-by: Anatoly Pugachev <matorola@gmail.com> Requested-by: Meelis Roos <mroos@linux.ee> Suggested-by: Christoph Hellwig <hch@infradead.org> Cc: David Miller <davem@davemloft.net> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-15  drm/panfrost: Handle resetting on timeout better  (Steven Price, 1 file, -5/+11)
Panfrost uses multiple schedulers (one for each slot, so 2 in reality), and on a timeout has to stop all the schedulers to safely perform a reset. However more than one scheduler can trigger a timeout at the same time. This race condition results in jobs being freed while they are still in use. When stopping other slots use cancel_delayed_work_sync() to ensure that any timeout started for that slot has completed. Also use mutex_trylock() to obtain reset_lock. This means that only one thread attempts the reset, the other threads will simply complete without doing anything (the first thread will wait for this in the call to cancel_delayed_work_sync()). While we're here and since the function is already dependent on sched_job not being NULL, let's remove the unnecessary checks. Fixes: aa20236784ab ("drm/panfrost: Prevent concurrent resets") Tested-by: Neil Armstrong <narmstrong@baylibre.com> Signed-off-by: Steven Price <steven.price@arm.com> Cc: stable@vger.kernel.org Signed-off-by: Rob Herring <robh@kernel.org> Link: https://patchwork.freedesktop.org/patch/msgid/20191009094456.9704-1-steven.price@arm.com
2019-10-15  xfs: change the seconds fields in xfs_bulkstat to signed  (Darrick J. Wong, 1 file, -4/+4)
64-bit time is a signed quantity in the kernel, so the bulkstat structure should reflect that. Note that the structure size stays the same and that we have not yet published userspace headers for this new ioctl so there are no users to break. Fixes: 7035f9724f84 ("xfs: introduce new v5 bulkstat structure") Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
2019-10-15  iommu/amd: Fix incorrect PASID decoding from event log  (Suravee Suthikulpanit, 2 files, -4/+5)
The IOMMU Event Log encodes the 20-bit PASID for the events ILLEGAL_DEV_TABLE_ENTRY, IO_PAGE_FAULT, PAGE_TAB_HARDWARE_ERROR and INVALID_DEVICE_REQUEST as:

  PASID[15:0]  = bits 47:32
  PASID[19:16] = bits 19:16

Note that the INVALID_PPR_REQUEST event has a different encoding from the rest of the events, as follows:

  PASID[15:0]  = bits 31:16
  PASID[19:16] = bits 45:42

So, fix the decoding logic.

Fixes: d64c0486ed50 ("iommu/amd: Update the PASID information printed to the system log")
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
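The two layouts above, decoded in a small standalone C sketch (field positions taken from the text; this is not the driver code):

```c
#include <inttypes.h>
#include <stdio.h>

/* Default layout: PASID[15:0] = bits 47:32, PASID[19:16] = bits 19:16. */
static uint32_t decode_pasid_default(uint64_t ev)
{
	return (uint32_t)((ev >> 32) & 0xffff) | (uint32_t)(((ev >> 16) & 0xf) << 16);
}

/* INVALID_PPR_REQUEST layout: PASID[15:0] = bits 31:16, PASID[19:16] = bits 45:42. */
static uint32_t decode_pasid_invalid_ppr(uint64_t ev)
{
	return (uint32_t)((ev >> 16) & 0xffff) | (uint32_t)(((ev >> 42) & 0xf) << 16);
}

int main(void)
{
	/* Encode PASID 0x51234 in each layout and decode it back. */
	uint64_t ev_default = (0x1234ULL << 32) | (0x5ULL << 16);
	uint64_t ev_ppr     = (0x1234ULL << 16) | (0x5ULL << 42);

	printf("default layout : 0x%05" PRIx32 "\n", decode_pasid_default(ev_default));
	printf("invalid-ppr ev : 0x%05" PRIx32 "\n", decode_pasid_invalid_ppr(ev_ppr));
	return 0;
}
```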
2019-10-15  iommu/ipmmu-vmsa: Only call platform_get_irq() when interrupt is mandatory  (Geert Uytterhoeven, 1 file, -2/+1)
As platform_get_irq() now prints an error when the interrupt does not exist, calling it gratuitously causes scary messages like: ipmmu-vmsa e6740000.mmu: IRQ index 0 not found Fix this by moving the call to platform_get_irq() down, where the existence of the interrupt is mandatory. Fixes: 7723f4c5ecdb8d83 ("driver core: platform: Add an error message to platform_get_irq*()") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Tested-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Reviewed-by: Stephen Boyd <swboyd@chromium.org> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15  iommu/rockchip: Don't use platform_get_irq to implicitly count irqs  (Heiko Stuebner, 1 file, -5/+14)
Till now the Rockchip iommu driver walked through the irq list via platform_get_irq() until it encountered an ENXIO error. With the recent change to add a central error message, this always results in such an error for each iommu on probe and shutdown. To not confuse people, switch to platform_irq_count() to get the actual number of interrupts before walking through them. Fixes: 7723f4c5ecdb ("driver core: platform: Add an error message to platform_get_irq*()") Signed-off-by: Heiko Stuebner <heiko@sntech.de> Tested-by: Enric Balletbo i Serra <enric.balletbo@collabora.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-14  arm64: hibernate: check pgd table allocation  (Pavel Tatashin, 1 file, -1/+8)
There is a bug in create_safe_exec_page(): when the page table is allocated, it is not checked whether the allocation succeeded, yet the result is dereferenced in pgd_none(READ_ONCE(*pgdp)). Check that the allocation was successful. Fixes: 82869ac57b5d ("arm64: kernel: Add support for hibernate/suspend-to-disk") Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-10-14  arm64: cpufeature: Treat ID_AA64ZFR0_EL1 as RAZ when SVE is not enabled  (Julien Grall, 1 file, -5/+10)
If CONFIG_ARM64_SVE=n then we fail to report ID_AA64ZFR0_EL1 as 0 when read by userspace, despite being required by the architecture. Although this is theoretically a change in ABI, userspace will first check for the presence of SVE via the HWCAP or the ID_AA64PFR0_EL1.SVE field before probing the ID_AA64ZFR0_EL1 register. Given that these are reported correctly for this configuration, we can safely tighten up the current behaviour. Ensure ID_AA64ZFR0_EL1 is treated as RAZ when CONFIG_ARM64_SVE=n. Signed-off-by: Julien Grall <julien.grall@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Dave Martin <dave.martin@arm.com> Fixes: 06a916feca2b ("arm64: Expose SVE2 features for userspace") Signed-off-by: Will Deacon <will@kernel.org>
2019-10-15  gpio: lynxpoint: set default handler to be handle_bad_irq()  (Andy Shevchenko, 1 file, -1/+1)
We switch the default handler to be handle_bad_irq() instead of handle_simple_irq() (which was not correct anyway). Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2019-10-15  gpio: merrifield: Move hardware initialization to callback  (Andy Shevchenko, 1 file, -3/+5)
The driver needs to initialize the related registers before the IRQ chip is added, so move that initialization to the corresponding callback. This also fixes the NULL pointer dereference. Fixes: 8f86a5b4ad67 ("gpio: merrifield: Pass irqchip when adding gpiochip") Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2019-10-15  gpio: lynxpoint: Move hardware initialization to callback  (Andy Shevchenko, 1 file, -3/+5)
The driver needs to initialize the related registers before the IRQ chip is added, so move that initialization to the corresponding callback. This also fixes the NULL pointer dereference. Fixes: 7b1e889436a1 ("gpio: lynxpoint: Pass irqchip when adding gpiochip") Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2019-10-15  gpio: intel-mid: Move hardware initialization to callback  (Andy Shevchenko, 1 file, -3/+6)
The driver needs to initialize the related registers before the IRQ chip is added, so move that initialization to the corresponding callback. This also fixes the NULL pointer dereference. Fixes: 8069e69a9792 ("gpio: intel-mid: Pass irqchip when adding gpiochip") Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2019-10-15  gpiolib: Initialize the hardware with a callback  (Andy Shevchenko, 2 files, -1/+29)
After changing the drivers to use the GPIO core to add an IRQ chip, it appears that some of them require hardware initialization before adding the IRQ chip. Add an optional callback ->init_hw() to allow such drivers to initialize their hardware if needed. This change is part of the fix for the NULL pointer dereferences recently introduced in several drivers. Cc: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2019-10-15  gpio: merrifield: Restore use of irq_base  (Andy Shevchenko, 1 file, -0/+1)
During the conversion to internal IRQ chip initialization, commit 8f86a5b4ad67 ("gpio: merrifield: Pass irqchip when adding gpiochip") lost the irq_base assignment: drivers/gpio/gpio-merrifield.c: In function ‘mrfld_gpio_probe’: drivers/gpio/gpio-merrifield.c:405:17: warning: variable ‘irq_base’ set but not used [-Wunused-but-set-variable] Assign girq->first to it. Fixes: 8f86a5b4ad67 ("gpio: merrifield: Pass irqchip when adding gpiochip") Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2019-10-14  xtensa: drop EXPORT_SYMBOL for outs*/ins*  (Max Filippov, 1 file, -7/+0)
Custom outs*/ins* implementations are long gone from the xtensa port, remove matching EXPORT_SYMBOLs. This fixes the following build warnings issued by modpost since commit 15bfc2348d54 ("modpost: check for static EXPORT_SYMBOL* functions"): WARNING: "insb" [vmlinux] is a static EXPORT_SYMBOL WARNING: "insw" [vmlinux] is a static EXPORT_SYMBOL WARNING: "insl" [vmlinux] is a static EXPORT_SYMBOL WARNING: "outsb" [vmlinux] is a static EXPORT_SYMBOL WARNING: "outsw" [vmlinux] is a static EXPORT_SYMBOL WARNING: "outsl" [vmlinux] is a static EXPORT_SYMBOL Cc: stable@vger.kernel.org Fixes: d38efc1f150f ("xtensa: adopt generic io routines") Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2019-10-14  mm/memory-failure: poison read receives SIGKILL instead of SIGBUS if mmaped more than once  (Jane Chu, 1 file, -9/+13)
Mmap /dev/dax more than once, then read the poison location using the address from one of the mappings. The other mappings, due to not having the page mapped in, will cause SIGKILLs delivered to the process. SIGKILL succeeds over SIGBUS, so the user process loses the opportunity to handle the UE.

Although one may add MAP_POPULATE to mmap(2) to work around the issue, MAP_POPULATE makes mapping 128GB of pmem several magnitudes slower, so it isn't always an option.

Details:

  ndctl inject-error --block=10 --count=1 namespace6.0
  ./read_poison -x dax6.0 -o 5120 -m 2
  mmaped address 0x7f5bb6600000
  mmaped address 0x7f3cf3600000
  doing local read at address 0x7f3cf3601400
  Killed

Console messages in instrumented kernel:

  mce: Uncorrected hardware memory error in user-access at edbe201400
  Memory failure: tk->addr = 7f5bb6601000
  Memory failure: address edbe201: call dev_pagemap_mapping_shift
  dev_pagemap_mapping_shift: page edbe201: no PUD
  Memory failure: tk->size_shift == 0
  Memory failure: Unable to find user space address edbe201 in read_poison
  Memory failure: tk->addr = 7f3cf3601000
  Memory failure: address edbe201: call dev_pagemap_mapping_shift
  Memory failure: tk->size_shift = 21
  Memory failure: 0xedbe201: forcibly killing read_poison:22434 because of failure to unmap corrupted page
    => to deliver SIGKILL
  Memory failure: 0xedbe201: Killing read_poison:22434 due to hardware memory corruption
    => to deliver SIGBUS

Link: http://lkml.kernel.org/r/1565112345-28754-3-git-send-email-jane.chu@oracle.com
Signed-off-by: Jane Chu <jane.chu@oracle.com>
Suggested-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-14  mm/slab.c: fix kernel-doc warning for __ksize()  (Randy Dunlap, 1 file, -0/+3)
Fix kernel-doc warning in mm/slab.c: mm/slab.c:4215: warning: Function parameter or member 'objp' not described in '__ksize' Also add Return: documentation section for this function. Link: http://lkml.kernel.org/r/68c9fd7d-f09e-d376-e292-c7b2bdf1774d@infradead.org Fixes: 10d1f8cb3965 ("mm/slab: refactor common ksize KASAN logic into slab_common.c") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Acked-by: Marco Elver <elver@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-14  xarray.h: fix kernel-doc warning  (Randy Dunlap, 1 file, -2/+2)
Fix (Sphinx) kernel-doc warning in <linux/xarray.h>: include/linux/xarray.h:232: WARNING: Unexpected indentation. Link: http://lkml.kernel.org/r/89ba2134-ce23-7c10-5ee1-ef83b35aa984@infradead.org Fixes: a3e4d3f97ec8 ("XArray: Redesign xa_alloc API") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Matthew Wilcox <willy@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-14  bitmap.h: fix kernel-doc warning and typo  (Randy Dunlap, 1 file, -1/+2)
Fix kernel-doc warning in <linux/bitmap.h>: include/linux/bitmap.h:341: warning: Function parameter or member 'nbits' not described in 'bitmap_or_equal' Also fix small typo (bitnaps). Link: http://lkml.kernel.org/r/0729ea7a-2c0d-b2c5-7dd3-3629ee0803e2@infradead.org Fixes: b9fa6442f704 ("cpumask: Implement cpumask_or_equal()") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-14  fs/fs-writeback.c: fix kernel-doc warning  (Randy Dunlap, 1 file, -1/+1)
Fix kernel-doc warning in fs/fs-writeback.c: fs/fs-writeback.c:913: warning: Excess function parameter 'nr_pages' description in 'cgroup_writeback_by_id' Link: http://lkml.kernel.org/r/756645ac-0ce8-d47e-d30a-04d9e4923a4f@infradead.org Fixes: d62241c7a406 ("writeback, memcg: Implement cgroup_writeback_by_id()") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Tejun Heo <tj@kernel.org> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-14  fs/libfs.c: fix kernel-doc warning  (Randy Dunlap, 1 file, -2/+1)
Fix kernel-doc warning in fs/libfs.c: fs/libfs.c:496: warning: Excess function parameter 'available' description in 'simple_write_end' Link: http://lkml.kernel.org/r/5fc9d70b-e377-0ec9-066a-970d49579041@infradead.org Fixes: ad2a722f196d ("libfs: Open code simple_commit_write into only user") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Boaz Harrosh <boazh@netapp.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-14  fs/direct-io.c: fix kernel-doc warning  (Randy Dunlap, 1 file, -2/+1)
Fix kernel-doc warning in fs/direct-io.c: fs/direct-io.c:258: warning: Excess function parameter 'offset' description in 'dio_complete' Also, don't mark this function as having kernel-doc notation since it is not exported. Link: http://lkml.kernel.org/r/97908511-4328-4a56-17fe-f43a1d7aa470@infradead.org Fixes: 6d544bb4d901 ("dio: centralize completion in dio_complete()") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Zach Brown <zab@zabbo.net> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-14  mm, compaction: fix wrong pfn handling in __reset_isolation_pfn()  (Vlastimil Babka, 1 file, -3/+4)
Florian and Dave reported [1] a NULL pointer dereference in __reset_isolation_pfn(). While the exact cause is unclear, staring at the code revealed two bugs, which might be related. One bug is that if the zone starts in the middle of a pageblock, block_page might correspond to a different pfn than block_pfn, and then the pfn_valid_within() checks will check different pfns than those accessed via struct page. This might result in accessing an uninitialized page in CONFIG_HOLES_IN_ZONE configs. The other bug is that end_page refers to the first page of the next pageblock and not the last page of the current pageblock. The online and valid check is then wrong and, with sections, the while (page < end_page) loop might wander off the actual struct page arrays. [1] https://lore.kernel.org/linux-xfs/87o8z1fvqu.fsf@mid.deneb.enyo.de/ Link: http://lkml.kernel.org/r/20191008152915.24704-1-vbabka@suse.cz Fixes: 6b0868c820ff ("mm/compaction.c: correct zone boundary handling when resetting pageblock skip hints") Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Reported-by: Florian Weimer <fw@deneb.enyo.de> Reported-by: Dave Chinner <david@fromorbit.com> Acked-by: Mel Gorman <mgorman@techsingularity.net> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
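A sketch of the two corrections described above, using plain pfn arithmetic (pageblock order and names are illustrative, not the kernel's __reset_isolation_pfn()):

```c
#include <stdio.h>

#define PAGEBLOCK_ORDER		9
#define PAGEBLOCK_NR_PAGES	(1UL << PAGEBLOCK_ORDER)

int main(void)
{
	unsigned long zone_start_pfn = 0x1234;	/* zone starting mid-pageblock */

	/* Bug 1 fix: align the pfn to its pageblock so pfn and struct page agree. */
	unsigned long block_pfn = zone_start_pfn & ~(PAGEBLOCK_NR_PAGES - 1);

	/* Bug 2 fix: end at the last page of this pageblock, not the first page
	 * of the next one, so the scan cannot wander off the section. */
	unsigned long block_end_pfn = block_pfn + PAGEBLOCK_NR_PAGES - 1;

	printf("pfn %#lx -> pageblock [%#lx, %#lx]\n",
	       zone_start_pfn, block_pfn, block_end_pfn);
	return 0;
}
```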
2019-10-14  mm, hugetlb: allow hugepage allocations to reclaim as needed  (David Rientjes, 1 file, -2/+4)
Commit b39d0ee2632d ("mm, page_alloc: avoid expensive reclaim when compaction may not succeed") has changed the allocator to bail out early in order to prevent potentially excessive memory reclaim. __GFP_RETRY_MAYFAIL is designed to retry the allocation, reclaim and compaction loop as long as there is a reasonable chance to make forward progress. Neither COMPACT_SKIPPED nor COMPACT_DEFERRED at the INIT_COMPACT_PRIORITY compaction attempt gives this feedback.

The most obvious affected subsystem is hugetlbfs, which allocates huge pages based on an admin request (or via admin-configured overcommit). I have done a simple test which tries to allocate half of the memory for hugetlb pages while the memory is full of a clean page cache. This is not an unusual situation because we try to cache as much of the memory as possible, and the sysctl/sysfs interface to allocate huge pages is there for flexibility to allocate hugetlb pages at any time.

The system has 1GB of RAM and we are requesting 515MB worth of hugetlb pages after the memory is prefilled by a clean page cache:

  root@test1:~# cat hugetlb_test.sh
  set -x
  echo 0 > /proc/sys/vm/nr_hugepages
  echo 3 > /proc/sys/vm/drop_caches
  echo 1 > /proc/sys/vm/compact_memory
  dd if=/mnt/data/file-1G of=/dev/null bs=$((4<<10))
  TS=$(date +%s)
  echo 256 > /proc/sys/vm/nr_hugepages
  cat /proc/sys/vm/nr_hugepages

The results for 2 consecutive runs on clean 5.3 (output trimmed):

  root@test1:~# sh hugetlb_test.sh
  1073741824 bytes (1.1 GB) copied, 21.0694 s, 51.0 MB/s
  + cat /proc/sys/vm/nr_hugepages
  256
  root@test1:~# sh hugetlb_test.sh
  1073741824 bytes (1.1 GB) copied, 21.7548 s, 49.4 MB/s
  + cat /proc/sys/vm/nr_hugepages
  256

Now with b39d0ee2632d applied:

  root@test1:~# sh hugetlb_test.sh
  1073741824 bytes (1.1 GB) copied, 20.1815 s, 53.2 MB/s
  + cat /proc/sys/vm/nr_hugepages
  11
  root@test1:~# sh hugetlb_test.sh
  1073741824 bytes (1.1 GB) copied, 21.9485 s, 48.9 MB/s
  + cat /proc/sys/vm/nr_hugepages
  12

The success rate went down by a factor of 20! Although hugetlb allocation requests might fail, and it is reasonable to expect them to under extremely fragmented memory or when the memory is under heavy pressure, the above situation is not that case.

Fix the regression by reverting back to the previous behavior for __GFP_RETRY_MAYFAIL requests and disable the bail-out heuristic for those requests.

Mike said:

: hugetlbfs allocations are commonly done via sysctl/sysfs shortly after
: boot where this may not be as much of an issue. However, I am aware of at
: least three use cases where allocations are made after the system has been
: up and running for quite some time:
:
: - DB reconfiguration. If sysctl/sysfs fails to get required number of
:   huge pages, system is rebooted to perform allocation after boot.
:
: - VM provisioning. If unable get required number of huge pages, fall
:   back to base pages.
:
: - An application that does not preallocate pool, but rather allocates
:   pages at fault time for optimal NUMA locality.
:
: In all cases, I would expect b39d0ee2632d to cause regressions and
: noticable behavior changes.
:
: My quick/limited testing in
: https://lkml.kernel.org/r/3468b605-a3a9-6978-9699-57c52a90bd7e@oracle.com
: was insufficient. It was also mentioned that if something like
: b39d0ee2632d went forward, I would like exemptions for __GFP_RETRY_MAYFAIL
: requests as in this patch.

[mhocko@suse.com: reworded changelog]
Link: http://lkml.kernel.org/r/20191007075548.12456-1-mhocko@kernel.org
Fixes: b39d0ee2632d ("mm, page_alloc: avoid expensive reclaim when compaction may not succeed")
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-14  lib/test_meminit: add a kmem_cache_alloc_bulk() test  (Alexander Potapenko, 1 file, -0/+27)
Make sure allocations from kmem_cache_alloc_bulk() and kmem_cache_free_bulk() are properly initialized. Link: http://lkml.kernel.org/r/20191007091605.30530-2-glider@google.com Signed-off-by: Alexander Potapenko <glider@google.com> Cc: Kees Cook <keescook@chromium.org> Cc: Christoph Lameter <cl@linux.com> Cc: Laura Abbott <labbott@redhat.com> Cc: Thibaut Sautereau <thibaut@sautereau.fr> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-14  mm/slub.c: init_on_free=1 should wipe freelist ptr for bulk allocations  (Alexander Potapenko, 1 file, -6/+16)
slab_alloc_node() already zeroed out the freelist pointer if init_on_free was on. Thibaut Sautereau noticed that the same needs to be done for kmem_cache_alloc_bulk(), which performs the allocations separately. kmem_cache_alloc_bulk() is currently used in two places in the kernel, so this change is unlikely to have a major performance impact. SLAB doesn't require a similar change, as auto-initialization makes the allocator store the freelist pointers off-slab. Link: http://lkml.kernel.org/r/20191007091605.30530-1-glider@google.com Fixes: 6471384af2a6 ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options") Signed-off-by: Alexander Potapenko <glider@google.com> Reported-by: Thibaut Sautereau <thibaut@sautereau.fr> Reported-by: Kees Cook <keescook@chromium.org> Cc: Christoph Lameter <cl@linux.com> Cc: Laura Abbott <labbott@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-14  lib/generic-radix-tree.c: add kmemleak annotations  (Eric Biggers, 1 file, -6/+26)
Kmemleak is falsely reporting a leak of the slab allocation in sctp_stream_init_ext(): BUG: memory leak unreferenced object 0xffff8881114f5d80 (size 96): comm "syz-executor934", pid 7160, jiffies 4294993058 (age 31.950s) hex dump (first 32 bytes): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ backtrace: [<00000000ce7a1326>] kmemleak_alloc_recursive include/linux/kmemleak.h:55 [inline] [<00000000ce7a1326>] slab_post_alloc_hook mm/slab.h:439 [inline] [<00000000ce7a1326>] slab_alloc mm/slab.c:3326 [inline] [<00000000ce7a1326>] kmem_cache_alloc_trace+0x13d/0x280 mm/slab.c:3553 [<000000007abb7ac9>] kmalloc include/linux/slab.h:547 [inline] [<000000007abb7ac9>] kzalloc include/linux/slab.h:742 [inline] [<000000007abb7ac9>] sctp_stream_init_ext+0x2b/0xa0 net/sctp/stream.c:157 [<0000000048ecb9c1>] sctp_sendmsg_to_asoc+0x946/0xa00 net/sctp/socket.c:1882 [<000000004483ca2b>] sctp_sendmsg+0x2a8/0x990 net/sctp/socket.c:2102 [...] But it's freed later. Kmemleak misses the allocation because its pointer is stored in the generic radix tree sctp_stream::out, and the generic radix tree uses raw pages which aren't tracked by kmemleak. Fix this by adding the kmemleak hooks to the generic radix tree code. Link: http://lkml.kernel.org/r/20191004065039.727564-1-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Reported-by: <syzbot+7f3b6b106be8dcdcdeec@syzkaller.appspotmail.com> Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Kent Overstreet <kent.overstreet@gmail.com> Cc: Vlad Yasevich <vyasevich@gmail.com> Cc: Xin Long <lucien.xin@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
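A rough sketch of the kind of hooks being added (kernel context, so it needs <linux/kmemleak.h> and friends and is not a standalone program; the function names here are illustrative, but kmemleak_alloc()/kmemleak_free() are the real kmemleak API):

```c
/* Illustrative only: tell kmemleak about the raw pages the radix tree
 * allocates, so pointers stored in them are scanned rather than missed. */
static void *genradix_alloc_node(gfp_t gfp_mask)
{
	void *node = (void *)__get_free_page(gfp_mask | __GFP_ZERO);

	if (node)
		kmemleak_alloc(node, PAGE_SIZE, 1, gfp_mask);	/* track and scan this page */
	return node;
}

static void genradix_free_node(void *node)
{
	kmemleak_free(node);			/* stop tracking before the page goes away */
	free_page((unsigned long)node);
}
```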
2019-10-14  mm/slub: fix a deadlock in show_slab_objects()  (Qian Cai, 1 file, -2/+11)
A long time ago we fixed a similar deadlock in show_slab_objects() [1]. However, apparently due to commits like 01fb58bcba63 ("slab: remove synchronous synchronize_sched() from memcg cache deactivation path") and 03afc0e25f7f ("slab: get_online_mems for kmem_cache_{create,destroy,shrink}"), this kind of deadlock is back, triggered by just reading files in /sys/kernel/slab, which generates the lockdep splat below.

Since the "mem_hotplug_lock" here is only used to obtain a stable online node mask while racing with NUMA node hotplug, in the worst case the results may be miscalculated while doing NUMA node hotplug, but they shall be corrected by later reads of the same files.

  WARNING: possible circular locking dependency detected
  cat/5224 is trying to acquire lock:
    mem_hotplug_lock.rw_sem, at: show_slab_objects+0x94/0x3a8
  but task is already holding lock:
    kn->count#45, at: kernfs_seq_start+0x44/0xf0

  Chain exists of:
    mem_hotplug_lock.rw_sem --> slab_mutex --> kn->count#45

  Possible unsafe locking scenario:
        CPU0                    CPU1
        ----                    ----
   lock(kn->count#45);
                                lock(slab_mutex);
                                lock(kn->count#45);
   lock(mem_hotplug_lock.rw_sem);

   *** DEADLOCK ***

  (full dependency chain and backtraces trimmed)

I think it is important to mention that this doesn't expose show_slab_objects() to a use-after-free. There is only a single path that might really race here and that is the slab hotplug notifier callback __kmem_cache_shrink (via slab_mem_going_offline_callback), but that path doesn't really destroy kmem_cache_node data structures.

[1] http://lkml.iu.edu/hypermail/linux/kernel/1101.0/02850.html

[akpm@linux-foundation.org: add comment explaining why we don't need mem_hotplug_lock]
Link: http://lkml.kernel.org/r/1570192309-10132-1-git-send-email-cai@lca.pw
Fixes: 01fb58bcba63 ("slab: remove synchronous synchronize_sched() from memcg cache deactivation path")
Fixes: 03afc0e25f7f ("slab: get_online_mems for kmem_cache_{create,destroy,shrink}")
Signed-off-by: Qian Cai <cai@lca.pw>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-10-14  mm, page_owner: rename flag indicating that page is allocated  (Vlastimil Babka, 2 files, -7/+7)
Commit 37389167a281 ("mm, page_owner: keep owner info when freeing the page") has introduced a flag PAGE_EXT_OWNER_ACTIVE to indicate that the page is tracked as being allocated. Kirill suggested naming it PAGE_EXT_OWNER_ALLOCATED to make it more clear, as "active is somewhat loaded term for a page". Link: http://lkml.kernel.org/r/20190930122916.14969-4-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Suggested-by: Kirill A. Shutemov <kirill@shutemov.name> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Walter Wu <walter-zh.wu@mediatek.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>