path: root/tools/perf/scripts/python/export-to-postgresql.py
Age | Commit message | Author | Files | Lines
2021-06-11x86, lto: Pass -stack-alignment only on LLD < 13.0.0Tor Vic1-2/+3
Since LLVM commit 3787ee4, the '-stack-alignment' flag has been dropped [1], leading to the following error message when building an LTO kernel with Clang-13 and LLD-13:

  ld.lld: error: -plugin-opt=-: ld.lld: Unknown command line argument '-stack-alignment=8'. Try 'ld.lld --help'
  ld.lld: Did you mean '--stackrealign=8'?

It also appears that the '-code-model' flag is not necessary anymore starting with LLVM-9 [2]. Drop '-code-model' and make '-stack-alignment' conditional on LLD < 13.0.0. These flags were necessary because they were not encoded in the IR properly, so the link would restart optimizations without them. Now they are properly encoded in the IR, and these flags exposing implementation details are no longer necessary.

[1] https://reviews.llvm.org/D103048
[2] https://reviews.llvm.org/D52322

Cc: stable@vger.kernel.org
Link: https://github.com/ClangBuiltLinux/linux/issues/1377
Signed-off-by: Tor Vic <torvic9@mailbox.org>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/f2c018ee-5999-741e-58d4-e482d5246067@mailbox.org
2021-06-11objtool: Only rewrite unconditional retpoline thunk callsPeter Zijlstra1-0/+4
It turns out that the compilers generate conditional branches to the retpoline thunks like:

  5d5:  0f 85 00 00 00 00       jne    5db <cpuidle_reflect+0x22>
                       5d7: R_X86_64_PLT32    __x86_indirect_thunk_r11-0x4

while the rewrite can only handle JMP/CALL to the thunks. The result is the alternative wrecking the code. Make sure to skip writing the alternatives for conditional branches.

Fixes: 9bc0bb50727c ("objtool/x86: Rewrite retpoline thunk calls")
Reported-by: Lukasz Majczak <lma@semihalf.com>
Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
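A minimal sketch of the kind of guard described above; the helper name is assumed for illustration and is not the literal objtool code:

  /*
   * Illustrative only: only unconditional JMP/CALL to a retpoline
   * thunk may be rewritten; conditional branches must be left alone.
   */
  static bool should_rewrite_thunk_call(struct instruction *insn)
  {
          return insn->type == INSN_CALL ||
                 insn->type == INSN_JUMP_UNCONDITIONAL;
  }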
2021-06-10async_xor: check src_offs is not NULL before updating itXiao Ni1-1/+2
When PAGE_SIZE is greater than 4kB, multiple stripes may share the same page. Thus, src_offs was added to async_xor_offs() as an array of per-source offsets. However, async_xor() passes a NULL src_offs to async_xor_offs(). In that case, src_offs should not be updated, so add a check before the update. Fixes: ceaf2966ab08 ("async_xor: increase src_offs when dropping destination page") Cc: stable@vger.kernel.org # v5.10+ Reported-by: Oleksandr Shchirskyi <oleksandr.shchirskyi@linux.intel.com> Tested-by: Oleksandr Shchirskyi <oleksandr.shchirskyi@intel.com> Signed-off-by: Xiao Ni <xni@redhat.com> Signed-off-by: Song Liu <song@kernel.org>
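A minimal sketch of the guard described above (surrounding context and variable names assumed, not copied from the patch):

  /* only advance the per-source offsets if the caller provided them */
  if (submit->flags & ASYNC_TX_XOR_DROP_DST) {
          src_cnt--;
          src_list++;
          if (src_offs)
                  src_offs++;
  }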
2021-06-10io_uring: add feature flag for rsrc tagsPavel Begunkov2-1/+3
Add IORING_FEAT_RSRC_TAGS indicating that io_uring supports a bunch of new IORING_REGISTER operations, in particular IORING_REGISTER_[FILES[,UPDATE]2,BUFFERS[2,UPDATE]] that support rsrc tagging, and also indicating implemented dynamic fixed buffer updates. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/9b995d4045b6c6b4ab7510ca124fd25ac2203af7.1623339162.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-06-10io_uring: change registration/upd/rsrc tagging ABIPavel Begunkov2-21/+36
There are ABI moments about recently added rsrc registration/update and tagging that might become a nuisance in the future. First, IORING_REGISTER_RSRC[_UPD] hide different types of resources under it, so breaks fine control over them by restrictions. It works for now, but once those are wanted under restrictions it would require a rework. It was also inconvenient trying to fit a new resource not supporting all the features (e.g. dynamic update) into the interface, so better to return to IORING_REGISTER_* top level dispatching. Second, register/update were considered to accept a type of resource, however that's not a good idea because there might be several ways of registration of a single resource type, e.g. we may want to add non-contig buffers or anything more exquisite as dma mapped memory. So, remove IORING_RSRC_[FILE,BUFFER] out of the ABI, and place them internally for now to limit changes. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/9b554897a7c17ad6e3becc48dfed2f7af9f423d5.1623339162.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-06-10coredump: Limit what can interrupt coredumpsEric W. Biederman1-1/+1
Olivier Langlois has been struggling with coredumps being incompletely written in processes using io_uring.

Olivier Langlois <olivier@trillion01.com> writes:
> io_uring is a big user of task_work and any event that io_uring made a
> task waiting for that occurs during the core dump generation will
> generate a TIF_NOTIFY_SIGNAL.
>
> Here are the detailed steps of the problem:
> 1. io_uring calls vfs_poll() to install a task to a file wait queue
>    with io_async_wake() as the wakeup function cb from io_arm_poll_handler()
> 2. wakeup function ends up calling task_work_add() with TWA_SIGNAL
> 3. task_work_add() sets the TIF_NOTIFY_SIGNAL bit by calling
>    set_notify_signal()

The coredump code deliberately supports being interrupted by SIGKILL, and depends upon prepare_signal to filter out all other signals. Now that signal_pending includes wake ups for TIF_NOTIFY_SIGNAL this hack in dump_emitted by the coredump code no longer works.

Make the coredump code more robust by explicitly testing for all of the wakeup conditions the coredump code supports. This prevents new wakeup conditions from breaking the coredump code, as well as fixing the current issue.

The filesystem code that the coredump code uses already limits itself to only aborting on fatal_signal_pending. So it should not develop surprising wake-up reasons either.

v2: Don't remove the now unnecessary code in prepare_signal.

Cc: stable@vger.kernel.org
Fixes: 12db8b690010 ("entry: Add support for TIF_NOTIFY_SIGNAL")
Reported-by: Olivier Langlois <olivier@trillion01.com>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
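A minimal sketch of the explicit wakeup test, assuming it lives in the dump_interrupted() helper in fs/coredump.c (an approximation of the change, not the verbatim patch):

  static bool dump_interrupted(void)
  {
          /*
           * Only a fatal signal or freezing may abort a coredump;
           * TIF_NOTIFY_SIGNAL wakeups (e.g. from io_uring task_work)
           * must not truncate the dump.
           */
          return fatal_signal_pending(current) || freezing(current);
  }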
2021-06-10hwmon: (tps23861) correct shunt LSB valuesRobert Marko1-2/+2
The current shunt LSB values got reversed in the original driver commit, which caused slightly skewed current readings. Correct the current shunt LSB values according to the datasheet. Fixes: fff7b8ab2255 ("hwmon: add Texas Instruments TPS23861 driver") Signed-off-by: Robert Marko <robert.marko@sartura.hr> Link: https://lore.kernel.org/r/20210609220728.499879-3-robert.marko@sartura.hr Signed-off-by: Guenter Roeck <linux@roeck-us.net>
2021-06-10hwmon: (tps23861) set current shunt valueRobert Marko1-0/+12
TPS23861 has a configuration bit for setting the current shunt value used on the board. It is bit 0 of the General Mask 1 register; according to the datasheet the values are 0 for 255 mOhm (default) and 1 for 250 mOhm. So, configure the bit before registering the hwmon device, according to the value passed in the DTS or the default one if none is passed. Without this, readings could be slightly skewed, because the maximum current value is 1.02 A when a 250 mOhm shunt is used instead of 1.0 A with 255 mOhm. Fixes: fff7b8ab2255 ("hwmon: add Texas Instruments TPS23861 driver") Signed-off-by: Robert Marko <robert.marko@sartura.hr> Link: https://lore.kernel.org/r/20210609220728.499879-2-robert.marko@sartura.hr Signed-off-by: Guenter Roeck <linux@roeck-us.net>
2021-06-10hwmon: (tps23861) define regmap max registerRobert Marko1-0/+1
Define the max register address the device supports. This allows reading the whole register space via regmap debugfs, without it only register 0x0 is visible. This was forgotten in the original driver commit. Fixes: fff7b8ab2255 ("hwmon: add Texas Instruments TPS23861 driver") Signed-off-by: Robert Marko <robert.marko@sartura.hr> Link: https://lore.kernel.org/r/20210609220728.499879-1-robert.marko@sartura.hr Signed-off-by: Guenter Roeck <linux@roeck-us.net>
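For illustration, a regmap config with an explicit ceiling might look like the sketch below; the register/value widths and the 0x6f ceiling are assumptions for illustration, not values taken from the TPS23861 datasheet:

  static const struct regmap_config tps23861_regmap_config = {
          .reg_bits = 8,
          .val_bits = 16,
          .max_register = 0x6f,   /* assumed last documented register */
  };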
2021-06-10ALSA: seq: Fix race of snd_seq_timer_open()Takashi Iwai1-1/+9
The timer instance per queue is exclusive, and snd_seq_timer_open() should have managed the concurrent accesses. It looks as if it's checking the already existing timer instance at the beginning, but it's not right, because there is no protection, hence any later concurrent call of snd_seq_timer_open() may override the timer instance easily. This may result in UAF, as the leftover timer instance can keep running while the queue itself gets closed, as spotted by syzkaller recently. For avoiding the race, add a proper check at the assignment of tmr->timeri again, and return -EBUSY if it's been already registered. Reported-by: syzbot+ddc1260a83ed1cbf6fb5@syzkaller.appspotmail.com Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/000000000000dce34f05c42f110c@google.com Link: https://lore.kernel.org/r/20210610152059.24633-1-tiwai@suse.de Signed-off-by: Takashi Iwai <tiwai@suse.de>
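A minimal sketch of the guarded assignment described above, assuming the queue timer structure exposes a spinlock (not the literal patch):

  spin_lock_irq(&tmr->lock);
  if (tmr->timeri) {
          err = -EBUSY;           /* another caller already registered a timer */
  } else {
          tmr->timeri = t;        /* publish the new instance under the lock */
          err = 0;
  }
  spin_unlock_irq(&tmr->lock);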
2021-06-10drm/msm/dsi: Stash away calculated vco frequency on recalcStephen Boyd2-0/+2
A problem was reported on CoachZ devices where the display wouldn't come up, or it would be distorted. It turns out that the PLL code here wasn't getting called once dsi_pll_10nm_vco_recalc_rate() started returning the same exact frequency, down to the Hz, that the bootloader was setting instead of 0 when the clk was registered with the clk framework. After commit 001d8dc33875 ("drm/msm/dsi: remove temp data from global pll structure") we use a hardcoded value for the parent clk frequency, i.e. VCO_REF_CLK_RATE, and we also hardcode the value for FRAC_BITS, instead of getting it from the config structure. This combination of changes to the recalc function allows us to properly calculate the frequency of the PLL regardless of whether or not the PLL has been clk_prepare()d or clk_set_rate()d. That's a good improvement. Unfortunately, this means that now we won't call down into the PLL clk driver when we call clk_set_rate() because the frequency calculated in the framework matches the frequency that is set in hardware. If the rate is the same as what we want it should be OK to not call the set_rate PLL op. The real problem is that the prepare op in this driver uses a private struct member to stash away the vco frequency so that it can call the set_rate op directly during prepare. Once the set_rate op is never called because recalc_rate told us the rate is the same, we don't set this private struct member before the prepare op runs, so we try to call the set_rate function directly with a frequency of 0. This effectively kills the PLL and configures it for a rate that won't work. Calling set_rate from prepare is really quite bad and will confuse any downstream clks about what the rate actually is of their parent. Fixing that will be a rather large change though so we leave that to later. For now, let's stash away the rate we calculate during recalc so that the prepare op knows what frequency to set, instead of 0. This way things keep working and the display can enable the PLL properly. In the future, we should remove that code from the prepare op so that it doesn't even try to call the set rate function. Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Cc: Abhinav Kumar <abhinavk@codeaurora.org> Fixes: 001d8dc33875 ("drm/msm/dsi: remove temp data from global pll structure") Signed-off-by: Stephen Boyd <swboyd@chromium.org> Link: https://lore.kernel.org/r/20210608195519.125561-1-swboyd@chromium.org Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-06-10cgroup1: don't allow '\n' in renamingAlexander Kuznetsov1-0/+4
cgroup_mkdir() has a restriction on newline usage in names:

  $ mkdir $'/sys/fs/cgroup/cpu/test\ntest2'
  mkdir: cannot create directory '/sys/fs/cgroup/cpu/test\ntest2': Invalid argument

But in cgroup1_rename() such a check is missing. This allows us to make /proc/<pid>/cgroup unparsable:

  $ mkdir /sys/fs/cgroup/cpu/test
  $ mv /sys/fs/cgroup/cpu/test $'/sys/fs/cgroup/cpu/test\ntest2'
  $ echo $$ > $'/sys/fs/cgroup/cpu/test\ntest2'
  $ cat /proc/self/cgroup
  11:pids:/
  10:freezer:/
  9:hugetlb:/
  8:cpuset:/
  7:blkio:/user.slice
  6:memory:/user.slice
  5:net_cls,net_prio:/
  4:perf_event:/
  3:devices:/user.slice
  2:cpu,cpuacct:/test
  test2
  1:name=systemd:/
  0::/

Signed-off-by: Alexander Kuznetsov <wwfq@yandex-team.ru>
Reported-by: Andrey Krasichkov <buglloc@yandex-team.ru>
Acked-by: Dmitry Yakunin <zeil@yandex-team.ru>
Cc: stable@vger.kernel.org
Signed-off-by: Tejun Heo <tj@kernel.org>
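The rename-side check mirrors what cgroup_mkdir() already enforces; a minimal sketch of the guard added to cgroup1_rename() (surrounding code assumed):

  /* reject names that would make /proc/<pid>/cgroup unparsable */
  if (strchr(new_name_str, '\n'))
          return -EINVAL;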
2021-06-10IB/mlx5: Fix initializing CQ fragments bufferAlaa Hleihel1-5/+4
The function init_cq_frag_buf() can be called to initialize the current CQ fragments buffer cq->buf, or the temporary cq->resize_buf that is filled during a CQ resize operation. However, the offending commit started to use function get_cqe() for getting the CQEs. The issue with this change is that get_cqe() always returns CQEs from cq->buf, which leads us to initialize the wrong buffer, and in case of enlarging the CQ we try to access elements beyond the size of the current cq->buf and eventually hit a kernel panic.

  [exception RIP: init_cq_frag_buf+103]
  [ffff9f799ddcbcd8] mlx5_ib_resize_cq at ffffffffc0835d60 [mlx5_ib]
  [ffff9f799ddcbdb0] ib_resize_cq at ffffffffc05270df [ib_core]
  [ffff9f799ddcbdc0] llt_rdma_setup_qp at ffffffffc0a6a712 [llt]
  [ffff9f799ddcbe10] llt_rdma_cc_event_action at ffffffffc0a6b411 [llt]
  [ffff9f799ddcbe98] llt_rdma_client_conn_thread at ffffffffc0a6bb75 [llt]
  [ffff9f799ddcbec8] kthread at ffffffffa66c5da1
  [ffff9f799ddcbf50] ret_from_fork_nospec_begin at ffffffffa6d95ddd

Fix it by getting the needed CQE by calling mlx5_frag_buf_get_wqe(), which takes the correct source buffer as a parameter.

Fixes: 388ca8be0037 ("IB/mlx5: Implement fragmented completion queue (CQ)")
Link: https://lore.kernel.org/r/90a0e8c924093cfa50a482880ad7e7edb73dc19a.1623309971.git.leonro@nvidia.com
Signed-off-by: Alaa Hleihel <alaa@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
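A minimal sketch of initializing CQEs from the buffer actually being set up (field names assumed from the description, not copied from the driver):

  static void init_cq_frag_buf(struct mlx5_ib_cq_buf *buf)
  {
          struct mlx5_cqe64 *cqe64;
          void *cqe;
          int i;

          for (i = 0; i < buf->nent; i++) {
                  /* take the CQE from the buffer being initialized, not cq->buf */
                  cqe = mlx5_frag_buf_get_wqe(&buf->fbc, i);
                  cqe64 = buf->cqe_size == 64 ? cqe : cqe + 64;
                  cqe64->op_own = MLX5_CQE_INVALID << 4;
          }
  }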
2021-06-10RDMA/mlx5: Delete right entry from MR signature databaseAharon Landau1-2/+2
The value mr->sig is stored in the entry upon mr allocation, however, ibmr is wrongly entered here as "old", therefore, xa_cmpxchg() does not replace the entry with NULL, which leads to the following trace: WARNING: CPU: 28 PID: 2078 at drivers/infiniband/hw/mlx5/main.c:3643 mlx5_ib_stage_init_cleanup+0x4d/0x60 [mlx5_ib] Modules linked in: nvme_rdma nvme_fabrics nvme_core 8021q garp mrp bonding bridge stp llc rfkill rpcrdma sunrpc rdma_ucm ib_srpt ib_isert iscsi_tad CPU: 28 PID: 2078 Comm: reboot Tainted: G X --------- --- 5.13.0-0.rc2.19.el9.x86_64 #1 Hardware name: Dell Inc. PowerEdge R430/03XKDV, BIOS 2.9.1 12/07/2018 RIP: 0010:mlx5_ib_stage_init_cleanup+0x4d/0x60 [mlx5_ib] Code: 8d bb 70 1f 00 00 be 00 01 00 00 e8 9d 94 ce da 48 3d 00 01 00 00 75 02 5b c3 0f 0b 5b c3 0f 0b 48 83 bb b0 20 00 00 00 74 d5 <0f> 0b eb d1 4 RSP: 0018:ffffa8db06d33c90 EFLAGS: 00010282 RAX: 0000000000000000 RBX: ffff97f890a44000 RCX: ffff97f900ec0160 RDX: 0000000000000000 RSI: 0000000080080001 RDI: ffff97f890a44000 RBP: ffffffffc0c189b8 R08: 0000000000000001 R09: 0000000000000000 R10: 0000000000000001 R11: 0000000000000300 R12: ffff97f890a44000 R13: ffffffffc0c36030 R14: 00000000fee1dead R15: 0000000000000000 FS: 00007f0d5a8a3b40(0000) GS:ffff98077fb80000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000555acbf4f450 CR3: 00000002a6f56002 CR4: 00000000001706e0 Call Trace: mlx5r_remove+0x39/0x60 [mlx5_ib] auxiliary_bus_remove+0x1b/0x30 __device_release_driver+0x17a/0x230 device_release_driver+0x24/0x30 bus_remove_device+0xdb/0x140 device_del+0x18b/0x3e0 mlx5_detach_device+0x59/0x90 [mlx5_core] mlx5_unload_one+0x22/0x60 [mlx5_core] shutdown+0x31/0x3a [mlx5_core] pci_device_shutdown+0x34/0x60 device_shutdown+0x15b/0x1c0 __do_sys_reboot.cold+0x2f/0x5b ? vfs_writev+0xc7/0x140 ? handle_mm_fault+0xc5/0x290 ? do_writev+0x6b/0x110 do_syscall_64+0x40/0x80 entry_SYSCALL_64_after_hwframe+0x44/0xae Fixes: e6fb246ccafb ("RDMA/mlx5: Consolidate MR destruction to mlx5_ib_dereg_mr()") Link: https://lore.kernel.org/r/f3f585ea0db59c2a78f94f65eedeafc5a2374993.1623309971.git.leonro@nvidia.com Signed-off-by: Aharon Landau <aharonl@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-06-10RDMA: Verify port when creating flow ruleMaor Gottlieb3-6/+7
Validate the port value provided by the user and with that remove the no longer needed validation by the driver. The missing check in the mlx5_ib driver could cause the below oops.

  Call trace:
   _create_flow_rule+0x2d4/0xf28 [mlx5_ib]
   mlx5_ib_create_flow+0x2d0/0x5b0 [mlx5_ib]
   ib_uverbs_ex_create_flow+0x4cc/0x624 [ib_uverbs]
   ib_uverbs_handler_UVERBS_METHOD_INVOKE_WRITE+0xd4/0x150 [ib_uverbs]
   ib_uverbs_cmd_verbs.isra.7+0xb28/0xc50 [ib_uverbs]
   ib_uverbs_ioctl+0x158/0x1d0 [ib_uverbs]
   do_vfs_ioctl+0xd0/0xaf0
   ksys_ioctl+0x84/0xb4
   __arm64_sys_ioctl+0x28/0xc4
   el0_svc_common.constprop.3+0xa4/0x254
   el0_svc_handler+0x84/0xa0
   el0_svc+0x10/0x26c
  Code: b9401260 f9615681 51000400 8b001c20 (f9403c1a)

Fixes: 436f2ad05a0b ("IB/core: Export ib_create/destroy_flow through uverbs")
Link: https://lore.kernel.org/r/faad30dc5219a01727f47db3dc2f029d07c82c00.1623309971.git.leonro@nvidia.com
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-06-10drm: Lock pointer access in drm_master_release()Desmond Cheong Zhi Xi1-1/+2
This patch eliminates the following smatch warning: drivers/gpu/drm/drm_auth.c:320 drm_master_release() warn: unlocked access 'master' (line 318) expected lock '&dev->master_mutex' The 'file_priv->master' field should be protected by the mutex lock to '&dev->master_mutex'. This is because other processes can concurrently modify this field and free the current 'file_priv->master' pointer. This could result in a use-after-free error when 'master' is dereferenced in subsequent function calls to 'drm_legacy_lock_master_cleanup()' or to 'drm_lease_revoke()'. An example of a scenario that would produce this error can be seen from a similar bug in 'drm_getunique()' that was reported by Syzbot: https://syzkaller.appspot.com/bug?id=148d2f1dfac64af52ffd27b661981a540724f803 In the Syzbot report, another process concurrently acquired the device's master mutex in 'drm_setmaster_ioctl()', then overwrote 'fpriv->master' in 'drm_new_set_master()'. The old value of 'fpriv->master' was subsequently freed before the mutex was unlocked. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Desmond Cheong Zhi Xi <desmondcheongzx@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210609092119.173590-1-desmondcheongzx@gmail.com
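A minimal sketch of moving the dereference under the lock (surrounding drm_master_release() code assumed):

  /*
   * Sketch: fetch file_priv->master only while holding dev->master_mutex,
   * so a concurrent SET_MASTER cannot swap and free it underneath us.
   */
  mutex_lock(&dev->master_mutex);
  master = file_priv->master;
  /* ... use master for legacy lock cleanup / lease revocation ... */
  mutex_unlock(&dev->master_mutex);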
2021-06-10objtool: Fix .symtab_shndx handling for elf_create_undef_symbol()Peter Zijlstra1-1/+24
When an ELF object uses extended symbol section indexes (IOW it has a .symtab_shndx section), these must be kept in sync with the regular symbol table (.symtab). So for every new symbol we emit, make sure to also emit a .symtab_shndx value to keep the arrays of equal size. Note: since we're writing an UNDEF symbol, most GElf_Sym fields will be 0 and we can repurpose one (st_size) to host the 0 for the xshndx value. Fixes: 2f2f7e47f052 ("objtool: Add elf_create_undef_symbol()") Reported-by: Nick Desaulniers <ndesaulniers@google.com> Suggested-by: Fangrui Song <maskray@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Link: https://lkml.kernel.org/r/YL3q1qFO9QIRL/BA@hirez.programming.kicks-ass.net
2021-06-10x86/nmi_watchdog: Fix old-style NMI watchdog regression on old Intel CPUsCodyYao-oc1-2/+2
The following commit: 3a4ac121c2ca ("x86/perf: Add hardware performance events support for Zhaoxin CPU.") Got the old-style NMI watchdog logic wrong and broke it for basically every Intel CPU where it was active. Which is only truly old CPUs, so few people noticed. On CPUs with perf events support we turn off the old-style NMI watchdog, so it was pretty pointless to add the logic for X86_VENDOR_ZHAOXIN to begin with ... :-/ Anyway, the fix is to restore the old logic and add a 'break'. [ mingo: Wrote a new changelog. ] Fixes: 3a4ac121c2ca ("x86/perf: Add hardware performance events support for Zhaoxin CPU.") Signed-off-by: CodyYao-oc <CodyYao-oc@zhaoxin.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210607025335.9643-1-CodyYao-oc@zhaoxin.com
2021-06-10irq_work: Make irq_work_queue() NMI-safe againPeter Zijlstra1-3/+0
Someone carelessly put NMI unsafe code in irq_work_queue(), breaking just about every single user. Also, someone has a terrible comment style. Fixes: e2b5bcf9f5ba ("irq_work: record irq_work_queue() call stack") Reported-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/YL+uBq8LzXXZsYVf@hirez.programming.kicks-ass.net
2021-06-09hwmon: (scpi-hwmon) shows the negative temperature properlyRiwen Lu1-0/+9
The scpi hwmon shows sub-zero temperatures as an unsigned integer, which would confuse users when the machine works in a low-temperature environment. Show the sub-zero temperature as a signed value instead, so users can read it properly from sensors. Signed-off-by: Riwen Lu <luriwen@kylinos.cn> Tested-by: Xin Chen <chenxin@kylinos.cn> Link: https://lore.kernel.org/r/20210604030959.736379-1-luriwen@kylinos.cn Signed-off-by: Guenter Roeck <linux@roeck-us.net>
2021-06-09hwmon: (corsair-psu) fix suspend behaviorWilken Gottwalt1-0/+14
During standby some PSUs turn off the microcontroller. A re-init is required during resume or the microcontroller stays unresponsive. Fixes: d115b51e0e56 ("hwmon: add Corsair PSU HID controller driver") Signed-off-by: Wilken Gottwalt <wilken.gottwalt@posteo.net> Link: https://lore.kernel.org/r/YLjCJiVtu5zgTabI@monster.powergraphx.local Signed-off-by: Guenter Roeck <linux@roeck-us.net>
2021-06-09dt-bindings: hwmon: Fix typo in TI ADS7828 bindingsNobuhiro Iwamatsu1-1/+1
Fix typo in example for DT binding, changed from 'comatible' to 'compatible'. Signed-off-by: Nobuhiro Iwamatsu <iwamatsu@nigauri.org> Link: https://lore.kernel.org/r/20210531134655.720462-1-iwamatsu@nigauri.org Signed-off-by: Guenter Roeck <linux@roeck-us.net>
2021-06-09ACPI: Pass the same capabilities to the _OSC regardless of the query flagMika Westerberg1-19/+8
Commit 719e1f561afb ("ACPI: Execute platform _OSC also with query bit clear") makes acpi_bus_osc_negotiate_platform_control() not only query the platform's capabilities but also commit the result back to the firmware, to report which capabilities are supported by the OS.

On certain systems the BIOS loads SSDT tables dynamically based on the capabilities the OS claims to support. However, on these systems the _OSC actually clears some of the bits (under certain conditions), so what happens is that now, when we call the _OSC twice, the second time we pass the cleared values, and that results in errors like the ones below appearing in the system log:

  ACPI BIOS Error (bug): Could not resolve symbol [\_PR.PR00._CPC], AE_NOT_FOUND (20210105/psargs-330)
  ACPI Error: Aborting method \_PR.PR01._CPC due to previous error (AE_NOT_FOUND) (20210105/psparse-529)

In addition, the ACPI 6.4 spec says the following [1]:

  If the OS declares support of a feature in the Support Field in one
  call to _OSC, then it must preserve the set state of that bit
  (declaring support for that feature) in all subsequent calls.

Based on the above, we can fix the issue by passing the same set of capabilities to the platform-wide _OSC in both calls regardless of the query flag. While there, drop the context.ret.length checks, which were wrong to begin with (as the length is a number of bytes, not elements). This is already checked in acpi_run_osc(), which also returns an error in that case.

Includes fixes by Hans de Goede.

[1] https://uefi.org/specs/ACPI/6.4/06_Device_Configuration/Device_Configuration.html#sequence-of-osc-calls
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=213023
BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=1963717
Fixes: 719e1f561afb ("ACPI: Execute platform _OSC also with query bit clear")
Cc: 5.12+ <stable@vger.kernel.org> # 5.12+
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2021-06-09drm/mcde: Fix off by 10^3 in calculationLinus Walleij1-1/+1
The calculation of how many bytes we stuff into the DSI pipeline for video mode panels is off by three orders of magnitude, because we did not account for the fact that the DRM mode clock is in kilohertz rather than hertz. This used to be:

  drm_mode_vrefresh(mode) * mode->htotal * mode->vtotal

which would become, for example for s6e63m0: 60 x 514 x 831 = 25628040 Hz, but mode->clock is 25628 as it is in kHz. This affects only the Samsung GT-I8190 "Golden" phone right now, since it is the only MCDE device with a video mode display. Curiously some specimens work with this code and wild settings in the EOL and empty packets at the end of the display, but I have noticed an eerie flicker until now. Others were not so lucky and got black screens. Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reported-by: Stephan Gerhold <stephan@gerhold.net> Fixes: 920dd1b1425b ("drm/mcde: Use mode->clock instead of reverse calculating it from the vrefresh") Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Tested-by: Stephan Gerhold <stephan@gerhold.net> Reviewed-by: Stephan Gerhold <stephan@gerhold.net> Link: https://patchwork.freedesktop.org/patch/msgid/20210608213318.3897858-1-linus.walleij@linaro.org
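A hedged sketch of the unit fix implied above (the surrounding driver code is assumed; only the scaling is the point):

  /*
   * mode->clock is in kHz, so scale it to Hz before treating it as a
   * pixels-per-second rate; this matches
   * drm_mode_vrefresh(mode) * mode->htotal * mode->vtotal.
   */
  unsigned long long pixel_rate_hz = (unsigned long long)mode->clock * 1000;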
2021-06-09pinctrl: qcom: Make it possible to select SC8180x TLMMBjorn Andersson1-1/+1
It's currently not possible to select the SC8180x TLMM driver, due to it selecting PINCTRL_MSM, rather than depending on the same. Fix this. Fixes: 97423113ec4b ("pinctrl: qcom: Add sc8180x TLMM driver") Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Link: https://lore.kernel.org/r/20210608180702.2064253-1-bjorn.andersson@linaro.org Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2021-06-09kvm: fix previous commit for 32-bit buildsPaolo Bonzini1-2/+2
array_index_nospec does not work for uint64_t on 32-bit builds. However, the size of a memory slot must be less than 20 bits wide on those systems, since the memory slot must fit in the user address space. So just store it in an unsigned long. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-06-08media: dt-bindings: media: renesas,drif: Fix fck definitionFabrizio Castro1-3/+1
dt_binding_check reports the below error with the latest schema: Documentation/devicetree/bindings/media/renesas,drif.yaml: properties:clock-names:maxItems: False schema does not allow 1 Documentation/devicetree/bindings/media/renesas,drif.yaml: ignoring, error in schema: properties: clock-names: maxItems This patch fixes the problem. Signed-off-by: Fabrizio Castro <fabrizio.castro.jz@renesas.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20210408202436.3706-1-fabrizio.castro.jz@renesas.com
2021-06-08kvm: avoid speculation-based attacks from out-of-range memslot accessesPaolo Bonzini1-1/+9
KVM's mechanism for accessing guest memory translates a guest physical address (gpa) to a host virtual address using the right-shifted gpa (also known as gfn) and a struct kvm_memory_slot. The translation is performed in __gfn_to_hva_memslot using the following formula:

  hva = slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE

It is expected that gfn falls within the boundaries of the guest's physical memory. However, a guest can access invalid physical addresses in such a way that the gfn is invalid.

__gfn_to_hva_memslot is called from kvm_vcpu_gfn_to_hva_prot, which first retrieves a memslot through __gfn_to_memslot. While __gfn_to_memslot does check whether the gfn falls within the boundaries of the guest's physical memory, a CPU can speculate the result of the check and continue execution speculatively using an illegal gfn. The speculation can result in calculating an out-of-bounds hva. If the resulting host virtual address is used to load another guest physical address, this is effectively a Spectre gadget consisting of two consecutive reads, the second of which is data dependent on the first.

Right now it's not clear if there are any cases in which this is exploitable. One interesting case was reported by the original author of this patch, and involves visiting guest page tables on x86. Right now these are not vulnerable because the hva read goes through get_user(), which contains an LFENCE speculation barrier. However, there are patches in progress for x86 uaccess.h to mask kernel addresses instead of using LFENCE; once these land, a guest could use speculation to read from the VMM's ring 3 address space. Other architectures such as ARM already use the address masking method, and would be susceptible to this same kind of data-dependent access gadgets.

Therefore, this patch proactively protects from these attacks by masking out-of-bounds gfns in __gfn_to_hva_memslot, which blocks speculation of invalid hvas.

Sean Christopherson noted that this patch does not cover kvm_read_guest_offset_cached. This however is limited to a few bytes past the end of the cache, and therefore it is unlikely to be useful in the context of building a chain of data dependent accesses.

Reported-by: Artemiy Margaritov <artemiy.margaritov@gmail.com>
Co-developed-by: Artemiy Margaritov <artemiy.margaritov@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
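The hardened helper ends up looking roughly like this (a sketch based on the description above; the exact kernel code may differ slightly):

  static inline unsigned long
  __gfn_to_hva_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
  {
          /*
           * Clamp the offset speculatively so a mis-speculated,
           * out-of-range gfn cannot produce an out-of-bounds hva.
           */
          unsigned long offset = gfn - slot->base_gfn;

          offset = array_index_nospec(offset, slot->npages);
          return slot->userspace_addr + offset * PAGE_SIZE;
  }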
2021-06-08KVM: x86: Unload MMU on guest TLB flush if TDP disabled to force MMU syncLai Jiangshan1-0/+13
When using shadow paging, unload the guest MMU when emulating a guest TLB flush to ensure all roots are synchronized. From the guest's perspective, flushing the TLB ensures any and all modifications to its PTEs will be recognized by the CPU. Note, unloading the MMU is overkill, but is done to mirror KVM's existing handling of INVPCID(all) and ensure the bug is squashed. Future cleanup can be done to more precisely synchronize roots when servicing a guest TLB flush. If TDP is enabled, synchronizing the MMU is unnecessary even if nested TDP is in play, as a "legacy" TLB flush from L1 does not invalidate L1's TDP mappings. For EPT, an explicit INVEPT is required to invalidate guest-physical mappings; for NPT, guest mappings are always tagged with an ASID and thus can only be invalidated via the VMCB's ASID control. This bug has existed since the introduction of KVM_VCPU_FLUSH_TLB. It was only recently exposed after Linux guests stopped flushing the local CPU's TLB prior to flushing remote TLBs (see commit 4ce94eabac16, "x86/mm/tlb: Flush remote and local TLBs concurrently"), but is also visible in Windows 10 guests. Tested-by: Maxim Levitsky <mlevitsk@redhat.com> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Fixes: f38a7b75267f ("KVM: X86: support paravirtualized help for TLB shootdowns") Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com> [sean: massaged comment and changelog] Message-Id: <20210531172256.2908-1-jiangshanlai@gmail.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-06-08bcache: avoid oversized read request in cache missing code pathColy Li1-2/+7
In the cache missing code path of a cached device, if a proper location from the internal B+ tree is matched for a cache miss range, function cached_dev_cache_miss() will be called in cache_lookup_fn() in the following code block:

  [code block 1]
  526 unsigned int sectors = KEY_INODE(k) == s->iop.inode
  527         ? min_t(uint64_t, INT_MAX,
  528                 KEY_START(k) - bio->bi_iter.bi_sector)
  529         : INT_MAX;
  530 int ret = s->d->cache_miss(b, s, bio, sectors);

Here s->d->cache_miss() is the callback function pointer initialized as cached_dev_cache_miss(); the last parameter 'sectors' is an important hint used to calculate the size of the read request to the backing device for the missing cache data.

The current calculation in the above code block may generate an oversized value of 'sectors', which consequently may trigger 2 different potential kernel panics by BUG() or BUG_ON() as listed below:

1) BUG_ON() inside bch_btree_insert_key(),

  [code block 2]
  886         BUG_ON(b->ops->is_extents && !KEY_SIZE(k));

2) BUG() inside biovec_slab(),

  [code block 3]
  51  default:
  52          BUG();
  53          return NULL;

All of the above panics originate from cached_dev_cache_miss() being handed the oversized parameter 'sectors'.

Inside cached_dev_cache_miss(), parameter 'sectors' is used to calculate the size of the data read from the backing device for the cache miss. This size is stored in s->insert_bio_sectors by the following line of code:

  [code block 4]
  909 s->insert_bio_sectors = min(sectors, bio_sectors(bio) + reada);

Then the actual key to insert into the internal B+ tree is generated and stored in s->iop.replace_key by the following lines of code:

  [code block 5]
  911 s->iop.replace_key = KEY(s->iop.inode,
  912                          bio->bi_iter.bi_sector + s->insert_bio_sectors,
  913                          s->insert_bio_sectors);

The oversized parameter 'sectors' may trigger panic 1) by the BUG_ON() from the above code block.

And the bio sent to the backing device for the missing data is allocated with a hint from s->insert_bio_sectors by the following lines of code:

  [code block 6]
  926 cache_bio = bio_alloc_bioset(GFP_NOWAIT,
  927                 DIV_ROUND_UP(s->insert_bio_sectors, PAGE_SECTORS),
  928                 &dc->disk.bio_split);

The oversized parameter 'sectors' may trigger panic 2) by the BUG() from the above code block.

Now let me explain how the panics happen with the oversized 'sectors'. In code block 5, replace_key is generated by macro KEY(). From the definition of macro KEY():

  [code block 7]
  71 #define KEY(inode, offset, size)                                  \
  72 ((struct bkey) {                                                  \
  73         .high = (1ULL << 63) | ((__u64) (size) << 20) | (inode),  \
  74         .low = (offset)                                           \
  75 })

Here 'size' is a 16-bit-wide field embedded in the 64-bit member 'high' of struct bkey. But in code block 1, "KEY_START(k) - bio->bi_iter.bi_sector" can very probably be larger than (1<<16) - 1, which makes the bkey size calculation in code block 5 overflow. In one bug report the value of parameter 'sectors' is 131072 (= 1 << 17); the overflowed 'sectors' results in an overflowed s->insert_bio_sectors in code block 4, which then makes the size field of s->iop.replace_key 0 in code block 5. Then the 0-sized s->iop.replace_key is inserted into the internal B+ tree as the cache missing check key (a special key to detect and avoid a race between a normal write request and a cache missing read request) as,

  [code block 8]
  915 ret = bch_btree_insert_check_key(b, &s->op, &s->iop.replace_key);

Then the 0-sized s->iop.replace_key as the 3rd parameter triggers the bkey size check BUG_ON() in code block 2, and causes kernel panic 1).

Another kernel panic comes from code block 6, and is caused by an oversized bvecs number derived from the value s->insert_bio_sectors from code block 4,

  min(sectors, bio_sectors(bio) + reada)

There are two possibilities for an oversized result:
- bio_sectors(bio) is valid, but bio_sectors(bio) + reada is oversized.
- sectors < bio_sectors(bio) + reada, but sectors is oversized.

From a bug report, the result of "DIV_ROUND_UP(s->insert_bio_sectors, PAGE_SECTORS)" from code block 6 can be 344, 282, 946, 342 and many other values which are larger than BIO_MAX_VECS (a.k.a 256). When calling bio_alloc_bioset() with such a larger-than-256 value as the 2nd parameter, this value will eventually be sent to biovec_slab() as parameter 'nr_vecs' in the following code path:

  bio_alloc_bioset() ==> bvec_alloc() ==> biovec_slab()

Because parameter 'nr_vecs' is larger than 256, the panic by BUG() in code block 3 is triggered inside biovec_slab().

From the above analysis, we know that the 4th parameter 'sectors' sent into cached_dev_cache_miss() may cause an overflow in code blocks 5 and 6, and finally cause a kernel panic in code blocks 2 and 3. And if the result of bio_sectors(bio) + reada exceeds the valid bvecs number, it may also trigger the kernel panic in code block 3 from code block 6.

Now that the almost-useless readahead size for cache missing requests to the backing device has been removed, this patch can fix the oversized issue with a simpler method (see the sketch after this entry):
- add a local variable size_limit, set it to the minimum of the max bkey size and the max bio bvecs number.
- set s->insert_bio_sectors to the minimum of size_limit, sectors, and the sectors size of the bio.
- use s->insert_bio_sectors instead of sectors for bio_next_split.

With the above method and size_limit, s->insert_bio_sectors will never result in an oversized replace_key size or bio bvecs number. And the split bio 'miss' from bio_next_split() will always match the size of 'cache_bio', that is, the current maximum bio size we can send to the backing device for fetching the cache missing data.

The problematic code can be partially found since Linux v3.13-rc1, therefore all maintained stable kernels should try to apply this fix.

Reported-by: Alexander Ullrich <ealex1979@gmail.com>
Reported-by: Diego Ercolani <diego.ercolani@gmail.com>
Reported-by: Jan Szubiak <jan.szubiak@linuxpolska.pl>
Reported-by: Marco Rebhan <me@dblsaiko.net>
Reported-by: Matthias Ferdinand <bcache@mfedv.net>
Reported-by: Victor Westerhuis <victor@westerhu.is>
Reported-by: Vojtech Pavlik <vojtech@suse.cz>
Reported-and-tested-by: Rolf Fokkens <rolf@rolffokkens.nl>
Reported-and-tested-by: Thorsten Knabe <linux@thorsten-knabe.de>
Signed-off-by: Coly Li <colyli@suse.de>
Cc: stable@vger.kernel.org
Cc: Christoph Hellwig <hch@lst.de>
Cc: Kent Overstreet <kent.overstreet@gmail.com>
Cc: Nix <nix@esperi.org.uk>
Cc: Takashi Iwai <tiwai@suse.com>
Link: https://lore.kernel.org/r/20210607125052.21277-3-colyli@suse.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
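A hedged sketch of the clamping described in the bullet points above (constant and field names are taken from the message and common bcache/block definitions, not copied from the patch):

  unsigned int size_limit;

  /* never exceed what a bkey size field or a single bio can represent */
  size_limit = min_t(unsigned int, BIO_MAX_VECS * PAGE_SECTORS,
                     (1 << KEY_SIZE_BITS) - 1);
  s->insert_bio_sectors = min3(size_limit, sectors, bio_sectors(bio));

  /* later, split with the clamped value instead of the raw 'sectors' */
  miss = bio_next_split(bio, s->insert_bio_sectors, GFP_NOIO,
                        &s->d->bio_split);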
2021-06-08bcache: remove bcache device self-defined readaheadColy Li5-32/+1
For a read cache miss, bcache defines a readahead size for the read I/O request to the backing device for the missing data. This readahead size is initialized to 0, and almost no one uses it, to avoid unnecessary read amplification onto the backing device and write amplification onto the cache device. Considering that upper-layer file system code has readahead logic already and works fine with the readahead_cache_policy sysfs interface, we don't have to keep the bcache self-defined readahead anymore. This patch removes the bcache self-defined readahead for cache missing requests to the backing device, and the readahead sysfs file interfaces are removed as well. This is the preparation for the next patch, which fixes a potential kernel panic due to oversized requests in a simpler method. Reported-by: Alexander Ullrich <ealex1979@gmail.com> Reported-by: Diego Ercolani <diego.ercolani@gmail.com> Reported-by: Jan Szubiak <jan.szubiak@linuxpolska.pl> Reported-by: Marco Rebhan <me@dblsaiko.net> Reported-by: Matthias Ferdinand <bcache@mfedv.net> Reported-by: Victor Westerhuis <victor@westerhu.is> Reported-by: Vojtech Pavlik <vojtech@suse.cz> Reported-and-tested-by: Rolf Fokkens <rolf@rolffokkens.nl> Reported-and-tested-by: Thorsten Knabe <linux@thorsten-knabe.de> Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: stable@vger.kernel.org Cc: Kent Overstreet <kent.overstreet@gmail.com> Cc: Nix <nix@esperi.org.uk> Cc: Takashi Iwai <tiwai@suse.com> Link: https://lore.kernel.org/r/20210607125052.21277-2-colyli@suse.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-06-08tracing: Correct the length check which causes memory corruptionLiangyan1-1/+1
We've suffered from severe kernel crashes due to memory corruption in our production environment, like:

  Call Trace:
  [1640542.554277] general protection fault: 0000 [#1] SMP PTI
  [1640542.554856] CPU: 17 PID: 26996 Comm: python Kdump: loaded Tainted:G
  [1640542.556629] RIP: 0010:kmem_cache_alloc+0x90/0x190
  [1640542.559074] RSP: 0018:ffffb16faa597df8 EFLAGS: 00010286
  [1640542.559587] RAX: 0000000000000000 RBX: 0000000000400200 RCX: 0000000006e931bf
  [1640542.560323] RDX: 0000000006e931be RSI: 0000000000400200 RDI: ffff9a45ff004300
  [1640542.560996] RBP: 0000000000400200 R08: 0000000000023420 R09: 0000000000000000
  [1640542.561670] R10: 0000000000000000 R11: 0000000000000000 R12: ffffffff9a20608d
  [1640542.562366] R13: ffff9a45ff004300 R14: ffff9a45ff004300 R15: 696c662f65636976
  [1640542.563128] FS: 00007f45d7c6f740(0000) GS:ffff9a45ff840000(0000) knlGS:0000000000000000
  [1640542.563937] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  [1640542.564557] CR2: 00007f45d71311a0 CR3: 000000189d63e004 CR4: 00000000003606e0
  [1640542.565279] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
  [1640542.566069] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
  [1640542.566742] Call Trace:
  [1640542.567009]  anon_vma_clone+0x5d/0x170
  [1640542.567417]  __split_vma+0x91/0x1a0
  [1640542.567777]  do_munmap+0x2c6/0x320
  [1640542.568128]  vm_munmap+0x54/0x70
  [1640542.569990]  __x64_sys_munmap+0x22/0x30
  [1640542.572005]  do_syscall_64+0x5b/0x1b0
  [1640542.573724]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
  [1640542.575642] RIP: 0033:0x7f45d6e61e27

James Wang has reproduced it stably on the latest 4.19 LTS. After some debugging, we finally proved that it's due to an ftrace buffer out-of-bounds access, using a debug tool as follows:

  [   86.775200] BUG: Out-of-bounds write at addr 0xffff88aefe8b7000
  [   86.780806]  no_context+0xdf/0x3c0
  [   86.784327]  __do_page_fault+0x252/0x470
  [   86.788367]  do_page_fault+0x32/0x140
  [   86.792145]  page_fault+0x1e/0x30
  [   86.795576]  strncpy_from_unsafe+0x66/0xb0
  [   86.799789]  fetch_memory_string+0x25/0x40
  [   86.804002]  fetch_deref_string+0x51/0x60
  [   86.808134]  kprobe_trace_func+0x32d/0x3a0
  [   86.812347]  kprobe_dispatcher+0x45/0x50
  [   86.816385]  kprobe_ftrace_handler+0x90/0xf0
  [   86.820779]  ftrace_ops_assist_func+0xa1/0x140
  [   86.825340]  0xffffffffc00750bf
  [   86.828603]  do_sys_open+0x5/0x1f0
  [   86.832124]  do_syscall_64+0x5b/0x1b0
  [   86.835900]  entry_SYSCALL_64_after_hwframe+0x44/0xa9

Commit b220c049d519 ("tracing: Check length before giving out the filter buffer") added a length check to protect against the trace data overflow introduced in 0fc1b09ff1ff, but it seems that this fix can't prevent the overflow entirely: the length check should also take the sizeof(entry->array[0]) into account, since this array[0] is filled with the length of the trace data, occupies additional space, and risks overflow.

Link: https://lkml.kernel.org/r/20210607125734.1770447-1-liangyan.peng@linux.alibaba.com
Cc: stable@vger.kernel.org
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Xunlei Pang <xlpang@linux.alibaba.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Fixes: b220c049d519 ("tracing: Check length before giving out the filter buffer")
Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
Reviewed-by: yinbinbin <yinbinbin@alibabacloud.com>
Reviewed-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
Tested-by: James Wang <jnwang@linux.alibaba.com>
Signed-off-by: Liangyan <liangyan.peng@linux.alibaba.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2021-06-08ftrace: Do not blindly read the ip address in ftrace_bug()Steven Rostedt (VMware)1-1/+7
It was reported that a bug on arm64 caused a bad ip address to be used for updating into a nop in ftrace_init(), but the error path (rightfully) returned -EINVAL and not -EFAULT, as the bug caused more than one error to occur. But because -EINVAL was returned, the ftrace_bug() tried to report what was at the location of the ip address, and read it directly. This caused the machine to panic, as the ip was not pointing to a valid memory address. Instead, read the ip address with copy_from_kernel_nofault() to safely access the memory, and if it faults, report that the address faulted, otherwise report what was in that location. Link: https://lore.kernel.org/lkml/20210607032329.28671-1-mark-pk.tsai@mediatek.com/ Cc: stable@vger.kernel.org Fixes: 05736a427f7e1 ("ftrace: warn on failure to disable mcount callers") Reported-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com> Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
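A minimal sketch of the safer read described above (the surrounding ftrace_bug() context is assumed, not quoted from the patch):

  unsigned char ins[MCOUNT_INSN_SIZE];

  /* the ip may itself be bogus, so never dereference it directly */
  if (copy_from_kernel_nofault(ins, (void *)ip, MCOUNT_INSN_SIZE))
          pr_cont("ftrace faulted on reading ip %lx\n", ip);
  else
          print_ip_ins(" actual:   ", ins);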
2021-06-08tools/bootconfig: Fix a build error according to undefined fallthroughMasami Hiramatsu1-0/+4
Since the "fallthrough" is defined only in the kernel, building lib/bootconfig.c as a part of user-space tools causes a build error. Add a dummy fallthrough to avoid the build error. Link: https://lkml.kernel.org/r/162087519356.442660.11385099982318160180.stgit@devnote2 Cc: Ingo Molnar <mingo@kernel.org> Cc: stable@vger.kernel.org Fixes: 4c1ca831adb1 ("Revert "lib: Revert use of fallthrough pseudo-keyword in lib/"") Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2021-06-08tools/bootconfig: Fix error return code in apply_xbc()Zhen Lei1-0/+1
Fix to return a negative error code from the error handling case instead of 0, as done elsewhere in this function. Link: https://lkml.kernel.org/r/20210508034216.2277-1-thunder.leizhen@huawei.com Fixes: a995e6bc0524 ("tools/bootconfig: Fix to check the write failure correctly") Reported-by: Hulk Robot <hulkci@huawei.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2021-06-08RDMA/mlx5: Block FDB rules when not in switchdev modeMark Bloch1-0/+6
Allow creating FDB steering rules only when in switchdev mode. The only software model where a userspace application can manipulate FDB entries is when it manages the eswitch. This is only possible in switchdev mode where we expose a single RDMA device with representors for all the vports that are connected to the eswitch. Fixes: 52438be44112 ("RDMA/mlx5: Allow inserting a steering rule to the FDB") Link: https://lore.kernel.org/r/e928ae7c58d07f104716a2a8d730963d1bd01204.1623052923.git.leonro@nvidia.com Reviewed-by: Maor Gottlieb <maorg@nvidia.com> Signed-off-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-06-08drm/msm/a6xx: avoid shadow NULL reference in failure pathJonathan Marek1-1/+1
If a6xx_hw_init() fails before creating the shadow_bo, the a6xx_pm_suspend code referencing it will crash. Change the condition to one that avoids this problem (note: creation of shadow_bo is behind this same condition) Fixes: e8b0b994c3a5 ("drm/msm/a6xx: Clear shadow on suspend") Signed-off-by: Jonathan Marek <jonathan@marek.ca> Reviewed-by: Akhil P Oommen <akhilpo@codeaurora.org> Link: https://lore.kernel.org/r/20210513171431.18632-6-jonathan@marek.ca Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-06-08drm/msm/a6xx: fix incorrectly set uavflagprd_inv field for A650Jonathan Marek1-1/+1
Value was shifted in the wrong direction, resulting in the field always being zero, which is incorrect for A650. Fixes: d0bac4e9cd66 ("drm/msm/a6xx: set ubwc config for A640 and A650") Signed-off-by: Jonathan Marek <jonathan@marek.ca> Reviewed-by: Akhil P Oommen <akhilpo@codeaurora.org> Link: https://lore.kernel.org/r/20210513171431.18632-4-jonathan@marek.ca Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-06-08drm/msm/a6xx: update/fix CP_PROTECT initializationJonathan Marek2-40/+113
Update CP_PROTECT register programming based on downstream. A6XX_PROTECT_RW is renamed to A6XX_PROTECT_NORDWR to make things aligned and also be more clear about what it does. Note that this required switching to use the CP_ALWAYS_ON_COUNTER as the GMU counter is not accessible from the cmdstream. Which also means using the CPU counter for the msm_gpu_submit_flush() tracepoint (as catapult depends on being able to compare this to the start/end values captured in cmdstream). This may need to be revisited when IFPC is enabled. Also, compared to downstream, this opens up CP_PERFCTR_CP_SEL as the userspace performance tooling (fdperf and pps-producer) expect to be able to configure the CP counters. Fixes: 4b565ca5a2cb ("drm/msm: Add A6XX device support") Signed-off-by: Jonathan Marek <jonathan@marek.ca> Reviewed-by: Akhil P Oommen <akhilpo@codeaurora.org> Link: https://lore.kernel.org/r/20210513171431.18632-5-jonathan@marek.ca [switch to CP_ALWAYS_ON_COUNTER, open up CP_PERFCNTR_CP_SEL, and spiff up commit msg] Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-06-08radeon: use memcpy_to/fromio for UVD fw uploadChen Li1-2/+2
I met a gpu addr bug recently, and the kernel log tells me the pc is memcpy/memset and the link register is radeon_uvd_resume. As we know, on some architectures, optimized memcpy/memset may not work well on device memory. Trivial memcpy_toio/memset_io can fix this problem. BTW, amdgpu has already done it in commit ba0b2275a678 ("drm/amdgpu: use memcpy_to/fromio for UVD fw upload"), which is why it does not have this issue on the same GPU and platform. Signed-off-by: Chen Li <chenli@uniontech.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2021-06-08drm/amd/pm: Fix fall-through warning for ClangGustavo A. R. Silva1-0/+1
In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning by explicitly adding a break statement instead of letting the code fall through to the next case. Link: https://github.com/KSPP/linux/issues/115 Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2021-06-08drm/amdgpu: Fix incorrect register offsets for Sienna CichlidRohit Khaire1-5/+21
RLC_CP_SCHEDULERS and RLC_SPARE_INT0 have different offsets for Sienna Cichlid Signed-off-by: Rohit Khaire <rohit.khaire@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2021-06-08drm/amdgpu: Use drm_dbg_kms for reporting failure to get a GEM FBMichel Dänzer1-2/+2
drm_err meant broken user space could spam dmesg. Fixes: f258907fdd835e "drm/amdgpu: Verify bo size can fit framebuffer size on init." Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Michel Dänzer <mdaenzer@redhat.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2021-06-08drm/amdgpu: switch kzalloc to kvzalloc in amdgpu_bo_createChangfeng1-2/+2
Allocating memory larger than 128KB with kzalloc in amdgpu_bo_create will cause an error, so switch kzalloc to kvzalloc.

  Call Trace:
   alloc_pages_current+0x6a/0xe0
   kmalloc_order+0x32/0xb0
   kmalloc_order_trace+0x1e/0x80
   __kmalloc+0x249/0x2d0
   amdgpu_bo_create+0x102/0x500 [amdgpu]
   ? xas_create+0x264/0x3e0
   amdgpu_bo_create_vm+0x32/0x60 [amdgpu]
   amdgpu_vm_pt_create+0xf5/0x260 [amdgpu]
   amdgpu_vm_init+0x1fd/0x4d0 [amdgpu]

Signed-off-by: Changfeng <Changfeng.Zhu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2021-06-08KVM: x86: Ensure liveliness of nested VM-Enter fail tracepoint messageSean Christopherson1-3/+3
Use the __string() machinery provided by the tracing subsystem to make a copy of the string literals consumed by the "nested VM-Enter failed" tracepoint. A complete copy is necessary to ensure that the tracepoint can't outlive the data/memory it consumes and dereference stale memory.

Because the tracepoint itself is defined by kvm, if kvm-intel and/or kvm-amd are built as modules, the memory holding the string literals defined by the vendor modules will be freed when the module is unloaded, whereas the tracepoint and its data in the ring buffer will live until kvm is unloaded (or "indefinitely" if kvm is built-in).

This bug has existed since the tracepoint was added, but was recently exposed by a new check in tracing to detect exactly this type of bug.

  fmt: '%s%s '
  current_buffer: ' vmx_dirty_log_t-140127 [003] .... kvm_nested_vmenter_failed: '
  WARNING: CPU: 3 PID: 140134 at kernel/trace/trace.c:3759 trace_check_vprintf+0x3be/0x3e0
  CPU: 3 PID: 140134 Comm: less Not tainted 5.13.0-rc1-ce2e73ce600a-req #184
  Hardware name: ASUS Q87M-E/Q87M-E, BIOS 1102 03/03/2014
  RIP: 0010:trace_check_vprintf+0x3be/0x3e0
  Code: <0f> 0b 44 8b 4c 24 1c e9 a9 fe ff ff c6 44 02 ff 00 49 8b 97 b0 20
  RSP: 0018:ffffa895cc37bcb0 EFLAGS: 00010282
  RAX: 0000000000000000 RBX: ffffa895cc37bd08 RCX: 0000000000000027
  RDX: 0000000000000027 RSI: 00000000ffffdfff RDI: ffff9766cfad74f8
  RBP: ffffffffc0a041d4 R08: ffff9766cfad74f0 R09: ffffa895cc37bad8
  R10: 0000000000000001 R11: 0000000000000001 R12: ffffffffc0a041d4
  R13: ffffffffc0f4dba8 R14: 0000000000000000 R15: ffff976409f2c000
  FS:  00007f92fa200740(0000) GS:ffff9766cfac0000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 0000559bd11b0000 CR3: 000000019fbaa002 CR4: 00000000001726e0
  Call Trace:
   trace_event_printf+0x5e/0x80
   trace_raw_output_kvm_nested_vmenter_failed+0x3a/0x60 [kvm]
   print_trace_line+0x1dd/0x4e0
   s_show+0x45/0x150
   seq_read_iter+0x2d5/0x4c0
   seq_read+0x106/0x150
   vfs_read+0x98/0x180
   ksys_read+0x5f/0xe0
   do_syscall_64+0x40/0xb0
   entry_SYSCALL_64_after_hwframe+0x44/0xae

Cc: Steven Rostedt <rostedt@goodmis.org>
Fixes: 380e0055bc7e ("KVM: nVMX: trace nested VM-Enter failures detected by H/W")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Message-Id: <20210607175748.674002-1-seanjc@google.com>
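For reference, the __string()/__assign_str() pattern the message refers to looks roughly like this inside a TRACE_EVENT definition (a generic sketch, not the kvm tracepoint verbatim):

  TP_STRUCT__entry(
          __string(msg, msg)              /* reserves space in the ring buffer */
  ),

  TP_fast_assign(
          __assign_str(msg, msg);         /* copies the literal at trace time */
  ),

  TP_printk("%s", __get_str(msg))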
2021-06-08selftests: kvm: Add support for customized slot0 memory sizeZhenzhong Duan5-15/+45
Until commit 39fe2fc96694 ("selftests: kvm: make allocation of extra memory take effect", 2021-05-27), parameter extra_mem_pages was used only to calculate the page table size for all the memory chunks, because real memory allocation happened with calls of vm_userspace_mem_region_add() after vm_create_default(). Commit 39fe2fc96694 however changed the meaning of extra_mem_pages to the size of memory slot 0. This makes the memory allocation more flexible, but makes it harder to account for the number of pages needed for the page tables. For example, memslot_perf_test has a small amount of memory in slot 0 but a lot in other slots, and adding that memory twice (both in slot 0 and with later calls to vm_userspace_mem_region_add()) causes an error that was fixed in commit 000ac4295339 ("selftests: kvm: fix overlapping addresses in memslot_perf_test", 2021-05-29) Since both uses are sensible, add a new parameter slot0_mem_pages to vm_create_with_vcpus() and some comments to clarify the meaning of slot0_mem_pages and extra_mem_pages. With this change, memslot_perf_test can go back to passing the number of memory pages as extra_mem_pages. Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com> Message-Id: <20210608233816.423958-4-zhenzhong.duan@intel.com> [Squashed in a single patch and rewrote the commit message. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-06-08proc: Track /proc/$pid/attr/ opener mm_structKees Cook1-1/+8
Commit bfb819ea20ce ("proc: Check /proc/$pid/attr/ writes against file opener") tried to make sure that there could not be a confusion between the opener of a /proc/$pid/attr/ file and the writer. It used struct cred to make sure the privileges didn't change. However, there were existing cases where a more privileged thread was passing the opened fd to a differently privileged thread (during container setup). Instead, use mm_struct to track whether the opener and writer are still the same process. (This is what several other proc files already do, though for different reasons.) Reported-by: Christian Brauner <christian.brauner@ubuntu.com> Reported-by: Andrea Righi <andrea.righi@canonical.com> Tested-by: Andrea Righi <andrea.righi@canonical.com> Fixes: bfb819ea20ce ("proc: Check /proc/$pid/attr/ writes against file opener") Cc: stable@vger.kernel.org Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-06-08KVM: selftests: introduce P47V64 for s390xChristian Borntraeger2-1/+7
s390x guests can have up to 47 bits of physical and 64 bits of virtual address space. Add a new address mode to avoid errors from testcases going beyond 47 bits. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Message-Id: <20210608123954.10991-1-borntraeger@de.ibm.com> Fixes: ef4c9f4f6546 ("KVM: selftests: Fix 32-bit truncation of vm_get_max_gfn()") Cc: stable@vger.kernel.org Reviewed-by: David Matlack <dmatlack@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-06-08KVM: x86: Ensure PV TLB flush tracepoint reflects KVM behaviorLai Jiangshan1-2/+4
In record_steal_time(), st->preempted is read twice, and trace_kvm_pv_tlb_flush() might output an inconsistent result if kvm_vcpu_flush_tlb_guest() sees a different st->preempted later. It is a very trivial problem that hardly causes any actual harm, and it can be avoided by resetting and reading st->preempted in an atomic way via xchg(). Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com> Message-Id: <20210531174628.10265-1-jiangshanlai@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-06-08drm/msm: Init mm_list before accessing it for use_vram pathAlexey Minnekhanov1-0/+7
Fix a NULL pointer dereference caused by update_inactive() trying to list_del() an uninitialized mm_list whose prev/next pointers are NULL. Fixes: 64fcbde772c7 ("drm/msm: Track potentially evictable objects") Signed-off-by: Alexey Minnekhanov <alexeymin@postmarketos.org> Link: https://lore.kernel.org/r/20210518102624.1193955-1-alexeymin@postmarketos.org Signed-off-by: Rob Clark <robdclark@chromium.org>