|
The increase_address_space() function has to check the PM_LEVEL_SIZE()
condition again under the domain->lock to avoid a false trigger of the
WARN_ON_ONCE() and to avoid increasing the address space more often
than necessary.
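A minimal sketch of the double-checked pattern described above (the
allocation and installation of the new top level are elided; exact
signatures may differ from the driver):

static bool increase_address_space(struct protection_domain *domain,
                                   unsigned long address, gfp_t gfp)
{
        unsigned long flags;
        bool ret = false;

        spin_lock_irqsave(&domain->lock, flags);

        /*
         * Re-check under domain->lock: another CPU may already have
         * grown the page table while this one was waiting for the lock.
         */
        if (address <= PM_LEVEL_SIZE(domain->mode) ||
            WARN_ON_ONCE(domain->mode == PAGE_MODE_6_LEVEL))
                goto out;

        /* ... allocate and install the new top-level page table ... */
        ret = true;
out:
        spin_unlock_irqrestore(&domain->lock, flags);
        return ret;
}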
Reported-by: Qian Cai <cai@lca.pw>
Fixes: 754265bcab78 ("iommu/amd: Fix race in increase_address_space()")
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
In the current code, the nvme driver uses a fixed 4k PRP entry size,
but if the kernel uses a page size larger than 4k, we have to consider
the situation that the bv_offset may be larger than
dev->ctrl.page_size. Otherwise we may miss setting prp2 and the
command cannot be executed correctly.
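A conceptual sketch of the idea (variable names follow the commit text;
the exact helper in the driver may differ): the offset within the
controller page, not bv_offset itself, decides whether a second PRP
entry is needed.

        unsigned int offset = bv->bv_offset & (dev->ctrl.page_size - 1);
        unsigned int first_prp_len = dev->ctrl.page_size - offset;

        cmnd->dptr.prp1 = cpu_to_le64(dma_addr);
        if (bv->bv_len > first_prp_len)
                cmnd->dptr.prp2 = cpu_to_le64(dma_addr + first_prp_len);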
Fixes: dff824b2aadb ("nvme-pci: optimize mapping of small single segment requests")
Cc: stable@vger.kernel.org
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
|
|
When enabling KASAN and DEBUG_TEST_DRIVER_REMOVE, I find this KASAN
warning:
[ 20.872057] BUG: KASAN: use-after-free in pcc_data_alloc+0x40/0xb8
[ 20.878226] Read of size 4 at addr ffff00236cdeb684 by task swapper/0/1
[ 20.884826]
[ 20.886309] CPU: 19 PID: 1 Comm: swapper/0 Not tainted 5.4.0-rc1-00009-ge7f7df3db5bf-dirty #289
[ 20.894994] Hardware name: Huawei D06 /D06, BIOS Hisilicon D06 UEFI RC0 - V1.16.01 03/15/2019
[ 20.903505] Call trace:
[ 20.905942] dump_backtrace+0x0/0x200
[ 20.909593] show_stack+0x14/0x20
[ 20.912899] dump_stack+0xd4/0x130
[ 20.916291] print_address_description.isra.9+0x6c/0x3b8
[ 20.921592] __kasan_report+0x12c/0x23c
[ 20.925417] kasan_report+0xc/0x18
[ 20.928808] __asan_load4+0x94/0xb8
[ 20.932286] pcc_data_alloc+0x40/0xb8
[ 20.935938] acpi_cppc_processor_probe+0x4e8/0xb08
[ 20.940717] __acpi_processor_start+0x48/0xb0
[ 20.945062] acpi_processor_start+0x40/0x60
[ 20.949235] really_probe+0x118/0x548
[ 20.952887] driver_probe_device+0x7c/0x148
[ 20.957059] device_driver_attach+0x94/0xa0
[ 20.961231] __driver_attach+0xa4/0x110
[ 20.965055] bus_for_each_dev+0xe8/0x158
[ 20.968966] driver_attach+0x30/0x40
[ 20.972531] bus_add_driver+0x234/0x2f0
[ 20.976356] driver_register+0xbc/0x1d0
[ 20.980182] acpi_processor_driver_init+0x40/0xe4
[ 20.984875] do_one_initcall+0xb4/0x254
[ 20.988700] kernel_init_freeable+0x24c/0x2f8
[ 20.993047] kernel_init+0x10/0x118
[ 20.996524] ret_from_fork+0x10/0x18
[ 21.000087]
[ 21.001567] Allocated by task 1:
[ 21.004785] save_stack+0x28/0xc8
[ 21.008089] __kasan_kmalloc.isra.9+0xbc/0xd8
[ 21.012435] kasan_kmalloc+0xc/0x18
[ 21.015913] pcc_data_alloc+0x94/0xb8
[ 21.019564] acpi_cppc_processor_probe+0x4e8/0xb08
[ 21.024343] __acpi_processor_start+0x48/0xb0
[ 21.028689] acpi_processor_start+0x40/0x60
[ 21.032860] really_probe+0x118/0x548
[ 21.036512] driver_probe_device+0x7c/0x148
[ 21.040684] device_driver_attach+0x94/0xa0
[ 21.044855] __driver_attach+0xa4/0x110
[ 21.048680] bus_for_each_dev+0xe8/0x158
[ 21.052591] driver_attach+0x30/0x40
[ 21.056155] bus_add_driver+0x234/0x2f0
[ 21.059980] driver_register+0xbc/0x1d0
[ 21.063805] acpi_processor_driver_init+0x40/0xe4
[ 21.068497] do_one_initcall+0xb4/0x254
[ 21.072322] kernel_init_freeable+0x24c/0x2f8
[ 21.076667] kernel_init+0x10/0x118
[ 21.080144] ret_from_fork+0x10/0x18
[ 21.083707]
[ 21.085186] Freed by task 1:
[ 21.088056] save_stack+0x28/0xc8
[ 21.091360] __kasan_slab_free+0x118/0x180
[ 21.095445] kasan_slab_free+0x10/0x18
[ 21.099183] kfree+0x80/0x268
[ 21.102139] acpi_cppc_processor_exit+0x1a8/0x1b8
[ 21.106832] acpi_processor_stop+0x70/0x80
[ 21.110917] really_probe+0x174/0x548
[ 21.114568] driver_probe_device+0x7c/0x148
[ 21.118740] device_driver_attach+0x94/0xa0
[ 21.122912] __driver_attach+0xa4/0x110
[ 21.126736] bus_for_each_dev+0xe8/0x158
[ 21.130648] driver_attach+0x30/0x40
[ 21.134212] bus_add_driver+0x234/0x2f0
...
[ 21.161764]
[ 21.163244] The buggy address belongs to the object at ffff00236cdeb600
[ 21.163244] which belongs to the cache kmalloc-256 of size 256
[ 21.175750] The buggy address is located 132 bytes inside of
[ 21.175750] 256-byte region [ffff00236cdeb600, ffff00236cdeb700)
[ 21.187473] The buggy address belongs to the page:
[ 21.192254] page:fffffe008d937a00 refcount:1 mapcount:0 mapping:ffff002370c0fa00 index:0x0 compound_mapcount: 0
[ 21.202331] flags: 0x1ffff00000010200(slab|head)
[ 21.206940] raw: 1ffff00000010200 dead000000000100 dead000000000122 ffff002370c0fa00
[ 21.214671] raw: 0000000000000000 00000000802a002a 00000001ffffffff 0000000000000000
[ 21.222400] page dumped because: kasan: bad access detected
[ 21.227959]
[ 21.229438] Memory state around the buggy address:
[ 21.234218] ffff00236cdeb580: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 21.241427] ffff00236cdeb600: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 21.248637] >ffff00236cdeb680: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 21.255845] ^
[ 21.259062] ffff00236cdeb700: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 21.266272] ffff00236cdeb780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 21.273480] ==================================================================
It seems that the global pcc_data[pcc_ss_id] can be freed in
acpi_cppc_processor_exit(), but we may later reference this value, so
NULLify it when freed.
Also remove the useless setting of "pcc_channel_acquired" in the data
we're about to free.
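A minimal sketch of the fix described above (the surrounding reference
counting in acpi_cppc_processor_exit() is elided):

        /* when the last CPU using this PCC subspace goes away */
        kfree(pcc_data[pcc_ss_id]);
        pcc_data[pcc_ss_id] = NULL;     /* so pcc_data_alloc() can't read freed memory */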
Fixes: 85b1407bf6d2 ("ACPI / CPPC: Make CPPC ACPI driver aware of PCC subspace IDs")
Signed-off-by: John Garry <john.garry@huawei.com>
Cc: 4.15+ <stable@vger.kernel.org> # 4.15+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
If ctx->cached_sq_head < nxt_sq_head, we should add UINT_MAX to tmp, not
tmp_nxt.
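A sketch of the corrected overflow handling (variable names follow the
commit text; the surrounding timeout_list walk is elided):

        /*
         * cached_sq_head may have wrapped around relative to the already
         * queued request; bias the current request's sequence (tmp), not
         * the queued one (tmp_nxt), so both are compared in the same epoch.
         */
        if (ctx->cached_sq_head < nxt_sq_head)
                tmp += UINT_MAX;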
Fixes: 5da0fb1ab34c ("io_uring: consider the overflow of sequence for timeout req")
Signed-off-by: yangerkun <yangerkun@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
We've got two issues with the non-regular file handling for non-blocking
IO:
1) We don't want to re-do a short read in full for a non-regular file,
as we can't just read the data again.
2) For non-regular files that don't support non-blocking IO attempts,
we need to punt to async context even if the file is opened as
non-blocking. Otherwise the caller always gets -EAGAIN.
Add two new request flags to handle these cases. One is just a cache
of the inode S_ISREG() status, the other tells io_uring that we always
need to punt this request to async context, even if REQ_F_NOWAIT is set.
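A sketch of how the two flags could be derived when the file is first
set up (the flag names here are illustrative placeholders for the two
new flags):

        if (S_ISREG(file_inode(file)->i_mode))
                req->flags |= REQ_F_ISREG;      /* cached S_ISREG() status */

        /*
         * Files that can't honour non-blocking attempts must always be
         * punted to async context, even if the file was opened non-blocking.
         */
        if (!(file->f_mode & FMODE_NOWAIT))
                req->flags |= REQ_F_MUST_PUNT;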
Cc: stable@vger.kernel.org
Reported-by: Hrvoje Zeba <zeba.hrvoje@gmail.com>
Tested-by: Hrvoje Zeba <zeba.hrvoje@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
While it is useful for new drivers to use devm_platform_ioremap_resource,
this script is currently used to spam maintainers, often updating very
old drivers. The net benefit is the removal of 2 lines of code in the
driver but the review load for the maintainers is huge. As of now, more
than 560 patches have been sent, some of them obviously broken, as in:
https://lore.kernel.org/lkml/9bbcce19c777583815c92ce3c2ff2586@www.loen.fr/
Remove the script to reduce the spam.
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Acked-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Przemysław Kopa reports that since commit b516ea586d71 ("PCI: Enable
NVIDIA HDA controllers"), the discrete GPU Nvidia GeForce GT 540M on his
2011 Samsung laptop refuses to runtime suspend, resulting in a power
regression and excessive heat.
Rivera Valdez witnesses the same issue with a GeForce GT 525M (GF108M)
of the same era, as does another Arch Linux user named "R0AR" with a
more recent GeForce GTX 1050 Ti (GP107M).
The commit exposes the discrete GPU's HDA controller, and all four
codecs on the controller do not set the CLKSTOP and EPSS bits in the Supported
Power States Response. They also do not set the PS-ClkStopOk bit in the
Get Power State Response. hda_codec_runtime_suspend() therefore does
not call snd_hdac_codec_link_down(), which prevents each codec and the
PCI device from runtime suspending.
The same issue is present on some AMD discrete GPUs and we addressed it
by forcing runtime PM despite the bits not being set, see commit
57cb54e53bdd ("ALSA: hda - Force to link down at runtime suspend on
ATI/AMD HDMI").
Do the same for Nvidia HDMI codecs.
Fixes: b516ea586d71 ("PCI: Enable NVIDIA HDA controllers")
Link: https://bbs.archlinux.org/viewtopic.php?pid=1865512
Link: https://bugs.freedesktop.org/show_bug.cgi?id=75985#c81
Reported-by: Przemysław Kopa <prymoo@gmail.com>
Reported-by: Rivera Valdez <riveravaldez@ysinembargo.com>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: Daniel Drake <dan@reactivated.net>
Cc: stable@vger.kernel.org # v5.3+
Link: https://lore.kernel.org/r/3086bc75135c1e3567c5bc4f3cc4ff5cbf7a56c2.1571324194.git.lukas@wunner.de
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
GFP_NOWAIT allocation can fail anytime - it doesn't wait for memory being
available and it fails if the mempool is exhausted and there is not enough
memory.
If we go down this path:
map_bio -> mg_start -> alloc_migration -> mempool_alloc(GFP_NOWAIT)
we can see that map_bio() doesn't check the return value of mg_start(),
and the bio is leaked.
If we go down this path:
map_bio -> mg_start -> mg_lock_writes -> alloc_prison_cell ->
dm_bio_prison_alloc_cell_v2 -> mempool_alloc(GFP_NOWAIT) ->
mg_lock_writes -> mg_complete
the bio is ended with an error - it is unacceptable because it could
cause filesystem corruption if the machine ran out of memory
temporarily.
Change GFP_NOWAIT to GFP_NOIO, so that the mempool code will properly
wait until memory becomes available. mempool_alloc with GFP_NOIO can't
fail, so remove the code paths that deal with allocation failure.
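A sketch of the pattern after the change (cell allocation shown as an
example; the real call sites are the ones listed above):

static struct dm_bio_prison_cell_v2 *alloc_prison_cell(struct cache *cache)
{
        /*
         * GFP_NOIO waits for memory instead of failing, so callers no
         * longer need an allocation-failure path.
         */
        return dm_bio_prison_alloc_cell_v2(cache->prison, GFP_NOIO);
}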
Cc: stable@vger.kernel.org
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
|
|
On Asus MJ401TA (with Realtek ALC256), the headset mic is connected to
pin 0x19, with default configuration value 0x411111f0 (indicating no
physical connection).
Enable this by quirking the pin. Mic jack detection was also tested and
found to be working.
This enables use of the headset mic on this product.
Signed-off-by: Daniel Drake <drake@endlessm.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20191017081501.17135-1-drake@endlessm.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
BOSS Katana amplifiers cannot be used for recording or playback if
quirks are applied.
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=195223
Signed-off-by: Szabolcs Szőke <szszoke.code@gmail.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20191011171937.8013-1-szszoke.code@gmail.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
As preempt-to-busy leaves the request on the HW as the resubmission is
processed, that request may complete in the background and even cause a
second virtual request to enter the queue. This second virtual request
breaks our "single request in the virtual pipeline" assumptions.
Furthermore, as the virtual request may be completed and retired, we
lose the reference the virtual engine assumes is held. Normally, just
removing the request from the scheduler queue removes it from the
engine, but the virtual engine keeps track of its singleton request via
its ve->request. This pointer needs protecting with a reference.
v2: Drop unnecessary motion of rq->engine = owner
Fixes: 22b7a426bbe1 ("drm/i915/execlists: Preempt-to-busy")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190923152844.8914-1-chris@chris-wilson.co.uk
(cherry picked from commit b647c7df01b75761b4c0b1cb6f4841088c0b1121)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
|
|
Daniel Vetter uncovered a nasty cycle in using the mmu-notifiers to
invalidate userptr objects which also happen to be pulled into GGTT
mmaps. That is when we unbind the userptr object (on mmu invalidation),
we revoke all CPU mmaps, which may then recurse into mmu invalidation.
We looked for ways of breaking the cycle, but the revocation on
invalidation is required and cannot be avoided. The only solution we
could see was to not allow such GGTT bindings of userptr objects in the
first place. In practice, no one really wants to use a GGTT mmapping of
a CPU pointer...
Just before Daniel's explosive lockdep patches land in v5.4-rc1, we got
a genuine blip from CI:
<4>[ 246.793958] ======================================================
<4>[ 246.793972] WARNING: possible circular locking dependency detected
<4>[ 246.793989] 5.3.0-gbd6c56f50d15-drmtip_372+ #1 Tainted: G U
<4>[ 246.794003] ------------------------------------------------------
<4>[ 246.794017] kswapd0/145 is trying to acquire lock:
<4>[ 246.794030] 000000003f565be6 (&dev->struct_mutex/1){+.+.}, at: userptr_mn_invalidate_range_start+0x18f/0x220 [i915]
<4>[ 246.794250]
but task is already holding lock:
<4>[ 246.794263] 000000001799cef9 (&anon_vma->rwsem){++++}, at: page_lock_anon_vma_read+0xe6/0x2a0
<4>[ 246.794291]
which lock already depends on the new lock.
<4>[ 246.794307]
the existing dependency chain (in reverse order) is:
<4>[ 246.794322]
-> #3 (&anon_vma->rwsem){++++}:
<4>[ 246.794344] down_write+0x33/0x70
<4>[ 246.794357] __vma_adjust+0x3d9/0x7b0
<4>[ 246.794370] __split_vma+0x16a/0x180
<4>[ 246.794385] mprotect_fixup+0x2a5/0x320
<4>[ 246.794399] do_mprotect_pkey+0x208/0x2e0
<4>[ 246.794413] __x64_sys_mprotect+0x16/0x20
<4>[ 246.794429] do_syscall_64+0x55/0x1c0
<4>[ 246.794443] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4>[ 246.794456]
-> #2 (&mapping->i_mmap_rwsem){++++}:
<4>[ 246.794478] down_write+0x33/0x70
<4>[ 246.794493] unmap_mapping_pages+0x48/0x130
<4>[ 246.794519] i915_vma_revoke_mmap+0x81/0x1b0 [i915]
<4>[ 246.794519] i915_vma_unbind+0x11d/0x4a0 [i915]
<4>[ 246.794519] i915_vma_destroy+0x31/0x300 [i915]
<4>[ 246.794519] __i915_gem_free_objects+0xb8/0x4b0 [i915]
<4>[ 246.794519] drm_file_free.part.0+0x1e6/0x290
<4>[ 246.794519] drm_release+0xa6/0xe0
<4>[ 246.794519] __fput+0xc2/0x250
<4>[ 246.794519] task_work_run+0x82/0xb0
<4>[ 246.794519] do_exit+0x35b/0xdb0
<4>[ 246.794519] do_group_exit+0x34/0xb0
<4>[ 246.794519] __x64_sys_exit_group+0xf/0x10
<4>[ 246.794519] do_syscall_64+0x55/0x1c0
<4>[ 246.794519] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4>[ 246.794519]
-> #1 (&vm->mutex){+.+.}:
<4>[ 246.794519] i915_gem_shrinker_taints_mutex+0x6d/0xe0 [i915]
<4>[ 246.794519] i915_address_space_init+0x9f/0x160 [i915]
<4>[ 246.794519] i915_ggtt_init_hw+0x55/0x170 [i915]
<4>[ 246.794519] i915_driver_probe+0xc9f/0x1620 [i915]
<4>[ 246.794519] i915_pci_probe+0x43/0x1b0 [i915]
<4>[ 246.794519] pci_device_probe+0x9e/0x120
<4>[ 246.794519] really_probe+0xea/0x3d0
<4>[ 246.794519] driver_probe_device+0x10b/0x120
<4>[ 246.794519] device_driver_attach+0x4a/0x50
<4>[ 246.794519] __driver_attach+0x97/0x130
<4>[ 246.794519] bus_for_each_dev+0x74/0xc0
<4>[ 246.794519] bus_add_driver+0x13f/0x210
<4>[ 246.794519] driver_register+0x56/0xe0
<4>[ 246.794519] do_one_initcall+0x58/0x300
<4>[ 246.794519] do_init_module+0x56/0x1f6
<4>[ 246.794519] load_module+0x25bd/0x2a40
<4>[ 246.794519] __se_sys_finit_module+0xd3/0xf0
<4>[ 246.794519] do_syscall_64+0x55/0x1c0
<4>[ 246.794519] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4>[ 246.794519]
-> #0 (&dev->struct_mutex/1){+.+.}:
<4>[ 246.794519] __lock_acquire+0x15d8/0x1e90
<4>[ 246.794519] lock_acquire+0xa6/0x1c0
<4>[ 246.794519] __mutex_lock+0x9d/0x9b0
<4>[ 246.794519] userptr_mn_invalidate_range_start+0x18f/0x220 [i915]
<4>[ 246.794519] __mmu_notifier_invalidate_range_start+0x85/0x110
<4>[ 246.794519] try_to_unmap_one+0x76b/0x860
<4>[ 246.794519] rmap_walk_anon+0x104/0x280
<4>[ 246.794519] try_to_unmap+0xc0/0xf0
<4>[ 246.794519] shrink_page_list+0x561/0xc10
<4>[ 246.794519] shrink_inactive_list+0x220/0x440
<4>[ 246.794519] shrink_node_memcg+0x36e/0x740
<4>[ 246.794519] shrink_node+0xcb/0x490
<4>[ 246.794519] balance_pgdat+0x241/0x580
<4>[ 246.794519] kswapd+0x16c/0x530
<4>[ 246.794519] kthread+0x119/0x130
<4>[ 246.794519] ret_from_fork+0x24/0x50
<4>[ 246.794519]
other info that might help us debug this:
<4>[ 246.794519] Chain exists of:
&dev->struct_mutex/1 --> &mapping->i_mmap_rwsem --> &anon_vma->rwsem
<4>[ 246.794519] Possible unsafe locking scenario:
<4>[ 246.794519]        CPU0                    CPU1
<4>[ 246.794519]        ----                    ----
<4>[ 246.794519]   lock(&anon_vma->rwsem);
<4>[ 246.794519]                                lock(&mapping->i_mmap_rwsem);
<4>[ 246.794519]                                lock(&anon_vma->rwsem);
<4>[ 246.794519]   lock(&dev->struct_mutex/1);
<4>[ 246.794519]
*** DEADLOCK ***
v2: Say no to mmap_ioctl
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111744
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111870
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190928082546.3473-1-chris@chris-wilson.co.uk
(cherry picked from commit a4311745bba9763e3c965643d4531bd5765b0513)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
|
|
The first come first served apporoach to handling the VBT
child device AUX ch conflicts has backfired. We have machines
in the wild where the VBT specifies both port A eDP and
port E DP (in that order) with port E being the real one.
So let's try to flip the preference around and let the last
child device win once again.
Cc: stable@vger.kernel.org
Cc: Jani Nikula <jani.nikula@intel.com>
Tested-by: Masami Ichikawa <masami256@gmail.com>
Tested-by: Torsten <freedesktop201910@liggy.de>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111966
Fixes: 36a0f92020dc ("drm/i915/bios: make child device order the priority order")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191011202030.8829-1-ville.syrjala@linux.intel.com
Acked-by: Jani Nikula <jani.nikula@intel.com>
(cherry picked from commit 41e35ffb380bde1379e4030bb5b2ac824d5139cf)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
|
|
Pull setting -EIO on the hung requests into its own utility function.
Having allowed ourselves to short-circuit submission of completed
requests, we can now do the mark_eio() prior to submission and avoid
some redundant operations.
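A sketch of what such a helper might look like (assuming the usual i915
request helpers; the in-tree version may differ in detail):

static void mark_eio(struct i915_request *rq)
{
        if (i915_request_signaled(rq))
                return;

        dma_fence_set_error(&rq->fence, -EIO);
        i915_request_mark_complete(rq);
}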
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190923110056.15176-4-chris@chris-wilson.co.uk
(cherry picked from commit 0d7cf7bc15e75bf79f2f65d61d19f896609f816a)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
|
|
Sign-extending TTBR1 addresses when converting to an untagged address
breaks the documented POSIX semantics for mlock() in some obscure error
cases where we end up returning -EINVAL instead of -ENOMEM as a direct
result of rewriting the upper address bits.
Rework the untagged_addr() macro to preserve the upper address bits for
TTBR1 addresses and only clear the tag bits for user addresses. This
matches the behaviour of the 'clear_address_tag' assembly macro, so
rename that and align the implementations at the same time so that they
use the same instruction sequences for the tag manipulation.
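A simplified sketch of the reworked macro (the in-tree version carries
the usual __force casts and a separate __untagged_addr() helper):

/*
 * sign_extend64(addr, 55) clears bits 63:56 when bit 55 is 0 (a
 * TTBR0/user address) and sets them when bit 55 is 1 (a TTBR1 address).
 * ANDing the result back into the original address therefore strips the
 * tag from user pointers while leaving kernel pointers untouched.
 */
#define untagged_addr(addr)     ({                              \
        u64 __addr = (u64)(addr);                               \
        __addr &= sign_extend64(__addr, 55);                    \
        (__typeof__(addr))__addr;                               \
})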
Link: https://lore.kernel.org/stable/20191014162651.GF19200@arrakis.emea.arm.com/
Reported-by: Jan Stancek <jstancek@redhat.com>
Tested-by: Jan Stancek <jstancek@redhat.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
|
|
When detecting a spurious EL1 translation fault, we have the CPU retry
the translation using an AT S1E1R instruction, and inspect PAR_EL1 to
determine if the fault was spurious.
When PAR_EL1.F == 0, the AT instruction successfully translated the
address without a fault, which implies the original fault was spurious.
However, in this case we return false and treat the original fault as if
it was not spurious.
Invert the return value so that we treat such a case as spurious.
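A sketch of the corrected check (assuming the PAR_EL1 field definitions
from sysreg.h; see also the PAR_EL1.F fix below):

        par = read_sysreg(par_el1);

        /*
         * If PAR_EL1.F is clear, the AT S1E1R walk succeeded, so the
         * original translation fault must have been spurious.
         */
        if (!(par & SYS_PAR_EL1_F))
                return true;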
Cc: Catalin Marinas <catalin.marinas@arm.com>
Fixes: 42f91093b043 ("arm64: mm: Ignore spurious translation faults taken from the kernel")
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
|
|
The 'F' field of the PAR_EL1 register lives in bit 0, not bit 1.
Fix the broken definition in 'sysreg.h'.
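The corrected definition amounts to (sketch):

        #define SYS_PAR_EL1_F   BIT(0)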
Fixes: e8620cff9994 ("arm64: sysreg: Add some field definitions for PAR_EL1")
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Preempting from IRQ-return means that the task has its PSTATE saved
on the stack, which will get restored when the task is resumed and does
the actual IRQ return.
However, enabling some CPU features requires modifying the PSTATE. This
means that, if a task was scheduled out during an IRQ-return before all
CPU features are enabled, the task might restore a PSTATE that does not
include the feature enablement changes once scheduled back in.
* Task 1:
  PAN == 0 ---|                          |---------------
              |                          |<- return from IRQ, PSTATE.PAN = 0
              |   <- IRQ                 |
              +--------+ <- preempt()  +--
                       ^
                       |
                       reschedule Task 1, PSTATE.PAN == 1
* Init:
  --------------------+------------------------
                      ^
                      |
                      enable_cpu_features
                      set PSTATE.PAN on all CPUs
Worse than this, since PSTATE is untouched when task switching is done,
a task missing the new bits in PSTATE might affect another task, if both
do direct calls to schedule() (outside of IRQ/exception contexts).
Fix this by preventing preemption on IRQ-return until features are
enabled on all CPUs.
This way the only PSTATE values that are saved on the stack are from
synchronous exceptions. These are expected to be fatal this early; the
exception is BRK for WARN_ON(), but as this uses do_debug_exception(),
which keeps IRQs masked, it shouldn't call schedule().
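A minimal sketch of the idea in C (the helper that reports whether all
CPU features have been enabled is named illustratively here):

asmlinkage void arm64_preempt_schedule_irq(void)
{
        /*
         * Don't preempt from IRQ return until features are enabled on
         * all CPUs, so no task can save a stale PSTATE across a
         * reschedule.
         */
        if (!all_cpu_features_enabled())
                return;

        preempt_schedule_irq();
}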
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
[james: Replaced a really cool hack, with an even simpler static key in C.
expanded commit message with Julien's cover-letter ascii art]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
|
|
The message should match the parameter, i.e. raid0.default_layout.
Fixes: c84a1372df92 ("md/raid0: avoid RAID0 data corruption due to layout confusion.")
Cc: NeilBrown <neilb@suse.de>
Reported-by: Ivan Topolsky <doktor.yak@gmail.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
|
|
The __kthread_queue_delayed_work() function is not exported, so
make it static to avoid the following sparse warning:
kernel/kthread.c:869:6: warning: symbol '__kthread_queue_delayed_work' was not declared. Should it be static?
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
On a machine with a 64K PAGE_SIZE, the nested for loops in
test_check_nonzero_user() can lead to soft lockups, eg:
watchdog: BUG: soft lockup - CPU#4 stuck for 22s! [modprobe:611]
Modules linked in: test_user_copy(+) vmx_crypto gf128mul crc32c_vpmsum virtio_balloon ip_tables x_tables autofs4
CPU: 4 PID: 611 Comm: modprobe Tainted: G L 5.4.0-rc1-gcc-8.2.0-00001-gf5a1a536fa14-dirty #1151
...
NIP __might_sleep+0x20/0xc0
LR __might_fault+0x40/0x60
Call Trace:
check_zeroed_user+0x12c/0x200
test_user_copy_init+0x67c/0x1210 [test_user_copy]
do_one_initcall+0x60/0x340
do_init_module+0x7c/0x2f0
load_module+0x2d94/0x30e0
__do_sys_finit_module+0xc8/0x150
system_call+0x5c/0x68
Even with a 4K PAGE_SIZE the test takes multiple seconds. Instead
tweak it to only scan a 1024 byte region, but make it cross the
page boundary.
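A sketch of how the scanned region could be narrowed while still
crossing a page boundary (kmem/umem are the test's kernel and user
buffers; the offsets are illustrative):

        /* scan a 1024-byte window that straddles the page boundary */
        size  = 1024;
        start = PAGE_SIZE - (size / 2);

        kmem += start;
        umem += start;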
Fixes: f5a1a536fa14 ("lib: introduce copy_struct_from_user() helper")
Suggested-by: Aleksa Sarai <cyphar@cyphar.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Aleksa Sarai <cyphar@cyphar.com>
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Link: https://lore.kernel.org/r/20191016122732.13467-1-mpe@ellerman.id.au
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
|
|
If there are neither processor objects nor processor device objects
in the ACPI tables, the per-CPU processors table will not be
initialized and attempting to dereference pointers from there will
cause the kernel to crash. This happens in acpi_processor_ppc_init()
and acpi_thermal_cpufreq_init() after commit d15ce412737a ("ACPI:
cpufreq: Switch to QoS requests instead of cpufreq notifier")
which didn't add the requisite NULL pointer checks in there.
Add the NULL pointer checks to acpi_processor_ppc_init() and
acpi_thermal_cpufreq_init(), and to the corresponding "exit"
routines.
While at it, drop redundant return instructions from
acpi_processor_ppc_init() and acpi_thermal_cpufreq_init().
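A sketch of the kind of check described (iterating the per-CPU
processors table referenced above; the policy-based loop is assumed
from the QoS conversion):

        for_each_cpu(cpu, policy->related_cpus) {
                struct acpi_processor *pr = per_cpu(processors, cpu);

                if (!pr)
                        continue;       /* no processor object for this CPU */

                /* ... add the frequency QoS request for this CPU ... */
        }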
Fixes: d15ce412737a ("ACPI: cpufreq: Switch to QoS requests instead of cpufreq notifier")
Reported-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
The change_bit() implementation for the XCHAL_HAVE_EXCLUSIVE case
changes all bits except the one requested, due to a copy-paste error
from clear_bit().
Cc: stable@vger.kernel.org # v5.2+
Fixes: f7c34874f04a ("xtensa: add exclusive atomics support")
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
|
|
This patch fixes the virtual address layout in pgtable.h. The virtual
address ranges at FIXADDR_START and VMEMMAP_START should not overlap.
Fixes: d95f1a542c3d ("RISC-V: Implement sparsemem")
Signed-off-by: Greentime Hu <greentime.hu@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
[paul.walmsley@sifive.com: fixed patch description]
Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
|
|
This reverts commit 883a2a80f79ca5c0c105605fafabd1f3df99b34c.
Apparently using dmi_get_bios_year() as the manufacturing date isn't
accurate, and this breaks older laptops with newer BIOS updates.
So let's revert this patch.
There are still new HP laptops that need to use SMBus to support all
features, but that will be enabled via a whitelist.
Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Acked-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20191001070845.9720-1-kai.heng.feng@canonical.com
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
|
|
There is an arbitrary difference between the system resume and
runtime resume code paths for PCI devices regarding the delay to
apply when switching the devices from D3cold to D0.
Namely, pci_restore_standard_config() used in the runtime resume
code path calls pci_set_power_state() which in turn invokes
__pci_start_power_transition() to power up the device through the
platform firmware and that function applies the transition delay
(as per PCI Express Base Specification Revision 2.0, Section 6.6.1).
However, pci_pm_default_resume_early() used in the system resume
code path calls pci_power_up() which doesn't apply the delay at
all and that causes issues to occur during resume from
suspend-to-idle on some systems where the delay is required.
Since there is no reason for that difference to exist, modify
pci_power_up() to follow pci_set_power_state() more closely and
invoke __pci_start_power_transition() from there to call the
platform firmware to power up the device (in case that's necessary).
Fixes: db288c9c5f9d ("PCI / PM: restore the original behavior of pci_set_power_state()")
Reported-by: Daniel Drake <drake@endlessm.com>
Tested-by: Daniel Drake <drake@endlessm.com>
Link: https://lore.kernel.org/linux-pm/CAD8Lp44TYxrMgPLkHCqF9hv6smEurMXvmmvmtyFhZ6Q4SE+dig@mail.gmail.com/T/#m21be74af263c6a34f36e0fc5c77c5449d9406925
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: 3.10+ <stable@vger.kernel.org> # 3.10+
|
|
The virt device tree incorrectly uses 0xf0000000 on both sides of the
PCI IO ports address space mapping. This results in incorrect port
address assignment in PCI IO BARs and a subsequent crash on attempts to
access them. Use 0 as the base address in the PCI IO ports address space.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
|
|
Commit c312ef176399 "libata/ahci: Drop PCS quirk for Denverton and
beyond" got the polarity wrong on the check for which board-ids should
have the quirk applied. The board type board_ahci_pcs7 is defined at the
end of the list such that "pcs7" boards can be special cased in the
future if they need the quirk. All prior Intel board ids "<
board_ahci_pcs7" should proceed with applying the quirk.
Reported-by: Andreas Friedrich <afrie@gmx.net>
Reported-by: Stephen Douthit <stephend@silicom-usa.com>
Fixes: c312ef176399 ("libata/ahci: Drop PCS quirk for Denverton and beyond")
Cc: <stable@vger.kernel.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
HAVE_FAST_GUP enables the lockless quick page table walker for simple
cases, and is a nice optimization for some random loads that can then
use get_user_pages_fast() rather than the more careful page walker.
However, for some unexplained reason, it seems to be subtly broken on
sparc64. The breakage is only with some compiler versions and some
hardware, and nobody seems to have figured out what triggers it,
although there's a simple reproducer for the problem when it does
trigger.
The problem was introduced with the conversion to the generic GUP code
in commit 7b9afb86b632 ("sparc64: use the generic get_user_pages_fast
code"), but nothing looks obviously wrong in that conversion. It may be
a compiler bug that just hits us with the code reorganization. Or it
may be something very specific to sparc64.
This disables HAVE_FAST_GUP entirely. That makes things like futexes a
bit slower, but at least they work. If we can figure out the trigger,
that would be lovely, but it's been three months already..
Link: https://lore.kernel.org/lkml/20190717215956.GA30369@altlinux.org/
Fixes: 7b9afb86b632 ("sparc64: use the generic get_user_pages_fast code")
Reported-by: Dmitry V Levin <ldv@altlinux.org>
Reported-by: Anatoly Pugachev <matorola@gmail.com>
Requested-by: Meelis Roos <mroos@linux.ee>
Suggested-by: Christoph Hellwig <hch@infradead.org>
Cc: David Miller <davem@davemloft.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Panfrost uses multiple schedulers (one for each slot, so 2 in reality),
and on a timeout has to stop all the schedulers to safely perform a
reset. However more than one scheduler can trigger a timeout at the same
time. This race condition results in jobs being freed while they are
still in use.
When stopping other slots use cancel_delayed_work_sync() to ensure that
any timeout started for that slot has completed. Also use
mutex_trylock() to obtain reset_lock. This means that only one thread
attempts the reset, the other threads will simply complete without doing
anything (the first thread will wait for this in the call to
cancel_delayed_work_sync()).
While we're here and since the function is already dependent on
sched_job not being NULL, let's remove the unnecessary checks.
Fixes: aa20236784ab ("drm/panfrost: Prevent concurrent resets")
Tested-by: Neil Armstrong <narmstrong@baylibre.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20191009094456.9704-1-steven.price@arm.com
|
|
rq_qos_del() incorrectly assigns the node being deleted to the head if
it was the first on the list in the !prev path. Fix it by iterating
with ** instead.
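A sketch of the double-pointer walk that removes the special case for
the list head (the classic unlink idiom; field names follow the rq_qos
list described above):

static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
{
        struct rq_qos **cur;

        /* walk via the address of each link, so the head needs no special case */
        for (cur = &q->rq_qos; *cur; cur = &(*cur)->next) {
                if (*cur == rqos) {
                        *cur = rqos->next;
                        break;
                }
        }
}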
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Fixes: a79050434b45 ("blk-rq-qos: refactor out common elements of blk-wbt")
Cc: stable@vger.kernel.org # v4.19+
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
blkcg_activate_policy() has the following bugs.
* cf09a8ee19ad ("blkcg: pass @q and @blkcg into
blkcg_pol_alloc_pd_fn()") added @blkcg to ->pd_alloc_fn(); however,
blkcg_activate_policy() ends up using pd's allocated for the root
blkcg for all preallocations, so ->pd_init_fn() for non-root blkcgs
can be passed in pd's which are allocated for the root blkcg.
For blk-iocost, this means that ->pd_init_fn() can write beyond the
end of the allocated object as it determines the length of the flex
array at the end based on the blkcg's nesting level.
* Each pd is initialized as they get allocated. If alloc fails, the
policy will get freed with pd's initialized on it.
* After the above partial failure, the partial pds are not freed.
This patch fixes all the above issues by
* Restructuring blkcg_activate_policy() so that alloc and init passes
are separate. Init takes place only after all allocs succeeded and
on failure all allocated pds are freed.
* Unifying and fixing the cleanup of the remaining pd_prealloc.
Signed-off-by: Tejun Heo <tj@kernel.org>
Fixes: cf09a8ee19ad ("blkcg: pass @q and @blkcg into blkcg_pol_alloc_pd_fn()")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
64-bit time is a signed quantity in the kernel, so the bulkstat
structure should reflect that. Note that the structure size stays
the same and that we have not yet published userspace headers for this
new ioctl so there are no users to break.
Fixes: 7035f9724f84 ("xfs: introduce new v5 bulkstat structure")
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
|
|
There is a warning message in my test with below steps:
# rbd bench --io-type write --io-size 4K --io-threads 1 --io-pattern rand test &
# sleep 5
# pkill -9 rbd
# rbd map test &
# sleep 5
# pkill rbd
The reason is that rbd_add_acquire_lock() is interruptible, which
means that when we kill the wait on ->acquire_wait, the lock_dwork
could still be running.
1. do_rbd_add()                                  2. lock_dwork
   rbd_add_acquire_lock()
     - queue_delayed_work()
                                                    lock_dwork queued
     - wait_for_completion_killable_timeout()  <-- kill happens
   rbd_dev_image_unlock()  <-- UNLOCKED now, nothing to do
   rbd_dev_device_release()
     rbd_dev_image_release()
       - ...
                                                    lock succeeds here
       - cancel_delayed_work_sync(&rbd_dev->lock_dwork)
Then when we reach rbd_dev_free(), the WARN_ON is triggered because
lock_state is not RBD_LOCK_STATE_UNLOCKED.
To fix it, this commit makes sure that lock_dwork has finished before
calling rbd_dev_image_unlock().
On the other hand, this cannot happen in do_rbd_remove(), because after
the rbd device is mapped, lock_dwork is only queued for I/O requests,
and a request does not continue until lock_dwork has finished. When we
call rbd_dev_image_unlock() in do_rbd_remove(), all requests are done.
That means lock_state cannot become locked again after
rbd_dev_image_unlock().
[ Cancel lock_dwork in rbd_add_acquire_lock(), only if the wait is
interrupted. ]
Fixes: 637cd060537d ("rbd: new exclusive lock wait/wake code")
Signed-off-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
Reviewed-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
|
|
In the future, we're going to want to extend the ceph_reply_info_extra
for create replies. Currently though, the kernel code doesn't accept an
extra blob that is larger than the expected data.
Change the code to skip over any unrecognized fields at the end of the
extra blob, rather than returning -EIO.
Cc: stable@vger.kernel.org
Signed-off-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
|
|
Now we recalculate the sequence of a timeout with 'req->sequence =
ctx->cached_sq_head + count - 1', and judge the right place to insert
it into timeout_list by comparing the number of requests we still
expect to complete. But we have not considered the possibility of
overflow:
1. ctx->cached_sq_head + count - 1 may overflow, so a bigger count for
the new timeout req can yield a smaller req->sequence.
2. The current cached_sq_head may have overflowed compared with that of
an earlier request, which also leaves the timeout req with a small
req->sequence.
This overflow leads to a misordered timeout_list, which can result in
timeouts completing in the wrong order. Fix it by reusing
req->submit.sequence to store the count, and change the insertion-sort
logic in io_timeout().
Signed-off-by: yangerkun <yangerkun@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
During nvme_tcp_setup_cmd_pdu error flow, one must call nvme_cleanup_cmd
since it's symmetric to nvme_setup_cmd.
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
|
|
During nvme_loop_queue_rq error flow, one must call nvme_cleanup_cmd since
it's symmetric to nvme_setup_cmd.
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
|
|
IOMMU Event Log encodes 20-bit PASID for events:
ILLEGAL_DEV_TABLE_ENTRY
IO_PAGE_FAULT
PAGE_TAB_HARDWARE_ERROR
INVALID_DEVICE_REQUEST
as:
PASID[15:0] = bit 47:32
PASID[19:16] = bit 19:16
Note that the INVALID_PPR_REQUEST event uses a different encoding
from the rest of the events, as follows:
PASID[15:0] = bit 31:16
PASID[19:16] = bit 45:42
So, fix the decoding logic accordingly.
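An illustrative decode of the layout described above, assuming the
event log entry is read as an array of 64-bit words (the variable names
are placeholders, not the driver's actual code):

        /* event[0] holds bits 0..63 of the event log entry */
        u32 pasid = ((event[0] >> 32) & 0xffff)          /* PASID[15:0]  <- bits 47:32 */
                  | (event[0] & 0xf0000);                /* PASID[19:16] <- bits 19:16 */

        /* INVALID_PPR_REQUEST uses the alternative layout */
        u32 ppr_pasid = ((event[0] >> 16) & 0xffff)          /* PASID[15:0]  <- bits 31:16 */
                      | (((event[0] >> 42) & 0xf) << 16);    /* PASID[19:16] <- bits 45:42 */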
Fixes: d64c0486ed50 ("iommu/amd: Update the PASID information printed to the system log")
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
As platform_get_irq() now prints an error when the interrupt does not
exist, calling it gratuitously causes scary messages like:
ipmmu-vmsa e6740000.mmu: IRQ index 0 not found
Fix this by moving the call to platform_get_irq() down, where the
existence of the interrupt is mandatory.
Fixes: 7723f4c5ecdb8d83 ("driver core: platform: Add an error message to platform_get_irq*()")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Tested-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Till now the Rockchip iommu driver walked through the irq list via
platform_get_irq() until it encountered an ENXIO error. With the
recent change to add a central error message, this always results
in such an error for each iommu on probe and shutdown.
To not confuse people, switch to platform_irq_count() to get the
actual number of interrupts before walking through them.
Fixes: 7723f4c5ecdb ("driver core: platform: Add an error message to platform_get_irq*()")
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Enric Balletbo i Serra <enric.balletbo@collabora.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
There is a bug in create_safe_exec_page(): when the page table is
allocated, it is not checked whether the allocation succeeded, yet the
result is dereferenced in pgd_none(READ_ONCE(*pgdp)). Check that the
allocation was successful.
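A sketch of the missing check (get_safe_page() is the allocator used by
the hibernate code; treat the surrounding names as illustrative):

        pgd_t *trans_pgd = (void *)get_safe_page(GFP_ATOMIC);

        if (!trans_pgd)
                return -ENOMEM;         /* previously dereferenced unconditionally */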
Fixes: 82869ac57b5d ("arm64: kernel: Add support for hibernate/suspend-to-disk")
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: Will Deacon <will@kernel.org>
|
|
If CONFIG_ARM64_SVE=n then we fail to report ID_AA64ZFR0_EL1 as 0 when
read by userspace, despite being required by the architecture. Although
this is theoretically a change in ABI, userspace will first check for
the presence of SVE via the HWCAP or the ID_AA64PFR0_EL1.SVE field
before probing the ID_AA64ZFR0_EL1 register. Given that these are
reported correctly for this configuration, we can safely tighten up the
current behaviour.
Ensure ID_AA64ZFR0_EL1 is treated as RAZ when CONFIG_ARM64_SVE=n.
Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Fixes: 06a916feca2b ("arm64: Expose SVE2 features for userspace")
Signed-off-by: Will Deacon <will@kernel.org>
|
|
We switch the default handler to be handle_bad_irq() instead of
handle_simple_irq() (which was not correct anyway).
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
The driver wants to initialize related registers before the IRQ chip
is added, so move that initialization to the corresponding callback.
This also fixes a NULL pointer dereference.
Fixes: 8f86a5b4ad67 ("gpio: merrifield: Pass irqchip when adding gpiochip")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
The driver wants to initialize related registers before the IRQ chip
is added, so move that initialization to the corresponding callback.
This also fixes a NULL pointer dereference.
Fixes: 7b1e889436a1 ("gpio: lynxpoint: Pass irqchip when adding gpiochip")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
The driver wants to initialize related registers before the IRQ chip
is added, so move that initialization to the corresponding callback.
This also fixes a NULL pointer dereference.
Fixes: 8069e69a9792 ("gpio: intel-mid: Pass irqchip when adding gpiochip")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
After changing the drivers to use the GPIO core to add an IRQ chip,
it appears that some of them require hardware initialization
before the IRQ chip is added.
Add an optional callback ->init_hw() to allow those drivers
to initialize their hardware if needed.
This change is part of the fix for the NULL pointer dereferences
brought to several drivers recently.
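A sketch of the shape of such a callback and its use in the GPIO core
(details may differ from the actual patch):

struct gpio_irq_chip {
        ...
        /* optional hook, run before the IRQ chip is added */
        int (*init_hw)(struct gpio_chip *gc);
};

/* in the core, while setting up the irqchip */
if (girq->init_hw) {
        ret = girq->init_hw(gc);
        if (ret)
                return ret;
}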
Cc: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
During conversion to internal IRQ chip initialization the commit
8f86a5b4ad67 ("gpio: merrifield: Pass irqchip when adding gpiochip")
lost the irq_base assignment.
drivers/gpio/gpio-merrifield.c: In function ‘mrfld_gpio_probe’:
drivers/gpio/gpio-merrifield.c:405:17: warning: variable ‘irq_base’ set but not used [-Wunused-but-set-variable]
Assign girq->first to it.
Fixes: 8f86a5b4ad67 ("gpio: merrifield: Pass irqchip when adding gpiochip")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
Custom outs*/ins* implementations are long gone from the xtensa port,
remove matching EXPORT_SYMBOLs.
This fixes the following build warnings issued by modpost since commit
15bfc2348d54 ("modpost: check for static EXPORT_SYMBOL* functions"):
WARNING: "insb" [vmlinux] is a static EXPORT_SYMBOL
WARNING: "insw" [vmlinux] is a static EXPORT_SYMBOL
WARNING: "insl" [vmlinux] is a static EXPORT_SYMBOL
WARNING: "outsb" [vmlinux] is a static EXPORT_SYMBOL
WARNING: "outsw" [vmlinux] is a static EXPORT_SYMBOL
WARNING: "outsl" [vmlinux] is a static EXPORT_SYMBOL
Cc: stable@vger.kernel.org
Fixes: d38efc1f150f ("xtensa: adopt generic io routines")
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
|