path: root/tools/perf/scripts/python/export-to-postgresql.py
Age | Commit message | Author | Files | Lines
2021-08-25 | ARC: mm: hack to allow 2 level build with 4 level code | Vineet Gupta | 1 | -0/+8
PMD_SHIFT is mapped to PUD_SHIFT or PGD_SHIFT by the asm-generic/pgtable-* headers, but only for !__ASSEMBLY__. The tlbex.S asm code uses PTRS_PER_PTE, which is derived from PMD_SHIFT, and hence fails to build for CONFIG_PGTABLE_LEVELS={2,3} while working for 4. So add a workaround local to tlbex.S; the proper fix is to change the asm-generic/pgtable-* headers to expose the defines for __ASSEMBLY__ too. Signed-off-by: Vineet Gupta <vgupta@kernel.org>
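For illustration, the local workaround would take roughly this shape (a sketch, not the exact hunk; the #if conditions and the PGDIR/PUD fallbacks are assumptions modeled on how the asm-generic folding works for C code):

    /* hypothetical fallback local to tlbex.S: provide the folded
     * PMD_SHIFT that asm-generic only defines for !__ASSEMBLY__ */
    #ifndef PMD_SHIFT
    #if CONFIG_PGTABLE_LEVELS == 2
    #define PMD_SHIFT	PGDIR_SHIFT	/* PMD folds into PGD */
    #elif CONFIG_PGTABLE_LEVELS == 3
    #define PMD_SHIFT	PUD_SHIFT	/* PMD folds into PUD */
    #endif
    #endif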
2021-08-25 | ARC: mm: disintegrate pgtable.h into levels and flags | Vineet Gupta | 3 | -273/+248
Split pgtable.h into:
- pgtable-bits-arcv2.h (MMU-specific page table flags)
- pgtable-levels.h (paging levels)
No functional changes, but this paves the way for easy addition of new MMU code with different bits and levels etc. Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25 | ARC: mm: disintegrate mmu.h (arcv2 bits out) | Vineet Gupta | 3 | -84/+105
Non-functional change. Tested-by: kernel test robot <lkp@intel.com> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: mm: move MMU specific bits out of entry code ... | Vineet Gupta | 3 | -5/+11
... to avoid polluting shared entry code (across three ISA variants) with ISA/MMU-specific code. Cc: Jose Abreu <joabreu@synopsys.com> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: mm: move MMU specific bits out of ASID allocator | Vineet Gupta | 3 | -22/+30
And while at it, rewrite commentary on ASID allocator Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: mm: non-functional code movement/cleanup | Vineet Gupta | 1 | -14/+16
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: mm: pmd_populate* to use the canonical set_pmd (and drop pmd_set) | Vineet Gupta | 2 | -10/+10
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: ioremap: use more commonly used PAGE_KERNEL based uncached flag | Vineet Gupta | 2 | -4/+2
and remove the one-off uncached definition for ARC. Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: mm: Enable STRICT_MM_TYPECHECKS | Vineet Gupta | 2 | -27/+1
In the past I've refrained from doing this (at least 2 times) due to the slight code bloat from the ABI implications of pte_t etc. becoming a struct. Per the ARC ABI, functions return a struct via memory and not through register r0, even if the struct would fit in register(s):
- the caller allocates space on the stack and passes the address as the first arg (r0), shifting the rest of the args by one
- the callee creates the return struct in memory (referenced via r0)
This time around the code actually shrunk slightly (due to subtle inlining heuristic effects), but it is still slightly inefficient due to return values passed through memory. That however seems like a small cost compared to the maintenance burden, given the impending new MMU support for page walk etc. Signed-off-by: Vineet Gupta <vgupta@kernel.org>
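For context, STRICT_MM_TYPECHECKS wraps the page-table values in single-member structs so the compiler rejects accidental mixing with plain integers; a minimal sketch of the long-standing kernel idiom (not the ARC hunk itself):

    /* strict: pte_t is a distinct type; assigning a bare long is a
     * compile error, catching subtle page-table bugs */
    typedef struct { unsigned long pte; } pte_t;
    #define pte_val(x)	((x).pte)
    #define __pte(x)	((pte_t) { (x) })

    /* non-strict variant it replaces: no type safety at all */
    /* typedef unsigned long pte_t; */

It is the struct return described in the ABI note above that causes the residual inefficiency: pte-returning helpers now go through a caller-allocated stack slot instead of r0.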
2021-08-24 | ARC: mm: Fixes to allow STRICT_MM_TYPECHECKS | Vineet Gupta | 1 | -5/+8
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: mm: move mmu/cache externs out to setup.h | Vineet Gupta | 3 | -10/+10
Don't pollute mmu.h and cache.h with ARC internal bootlog/setup related functions. Move them aside to setup.h Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: mm: remove tlb paranoid code | Vineet Gupta | 4 | -99/+0
This was used back in arc700 days when ASID allocator was fragile. Not needed in last 5 years Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: mm: use SCRATCH_DATA0 register for caching pgdir in ARCv2 only | Vineet Gupta | 7 | -40/+5
The MMU SCRATCH_DATA0 register is intended to cache the task pgd. However, in the ARC700 SMP port it has to be repurposed for re-entrant interrupt handling, while the UP port doesn't need that. We currently handle these use-cases using a fabricated #define which has the usual issues of dependency nesting and obvious ugliness. So clean this up: for ARC700 don't use SCRATCH_DATA0 to cache the pgd (even in UP), and do the opposite for ARCv2. And while here, switch to the canonical pgd_offset(). Acked-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: retire MMUv1 and MMUv2 support | Vineet Gupta | 7 | -412/+42
There's no known/active customer using them with latest kernels anyway. Removal helps clean up the code and removes the hack for the MMU_VER to MMU_V[3-4] conversion. Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: retire ARC750 support | Vineet Gupta | 1 | -11/+1
There's no known/active customer using them with latest kernels anyway. Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: atomic_cmpxchg/atomic_xchg: implement relaxed variants | Vineet Gupta | 2 | -23/+27
And move them out of cmpxchg.h to canonical atomic.h Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: cmpxchg/xchg: implement relaxed variants (LLSC config only) | Vineet Gupta | 1 | -9/+2
It only makes sense to do this for the LLSC config Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: cmpxchg/xchg: rewrite as macros to make type safe | Vineet Gupta | 1 | -96/+117
Existing code forces/assumes args to type "long", which won't work in an LP64 regime, so prepare the code for that. Interestingly, this should be a non-functional change, but I do see some codegen changes:

| bloat-o-meter vmlinux-cmpxchg-A vmlinux-cmpxchg-B
| add/remove: 0/0 grow/shrink: 17/12 up/down: 218/-150 (68)
|
| Function                      old   new  delta
| rwsem_optimistic_spin         518   550   +32
| rwsem_down_write_slowpath    1244  1274   +30
| __do_sys_perf_event_open     2576  2600   +24
| down_read                     192   200    +8
| __down_read                   192   200    +8
| ...
| task_work_run                 168   148   -20
| dma_fence_chain_walk.part     760   736   -24
| __genradix_ptr_alloc          674   646   -28

Total: Before=6187409, After=6187477, chg +0.00% Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
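The idiom being adopted can be sketched generically (this is not the ARC code; _cmpxchg_u32/_cmpxchg_u64 are hypothetical size-specific helpers standing in for the arch's ll/sc loops):

    /* hypothetical size-specific helpers, stand-ins for ll/sc loops */
    u32 _cmpxchg_u32(volatile void *p, u32 expected, u32 desired);
    u64 _cmpxchg_u64(volatile void *p, u64 expected, u64 desired);

    /* dispatch on operand size so callers keep their own type:
     * a 32-bit value stays 32-bit even on LP64 */
    #define my_cmpxchg(ptr, oldv, newv)					\
    ({									\
    	__typeof__(*(ptr)) _o = (oldv);					\
    	__typeof__(*(ptr)) _n = (newv);					\
    	__typeof__(*(ptr)) _ret;					\
    	if (sizeof(*(ptr)) == 4)					\
    		_ret = (__typeof__(*(ptr)))_cmpxchg_u32(		\
    			(volatile void *)(ptr), (u32)_o, (u32)_n);	\
    	else								\
    		_ret = (__typeof__(*(ptr)))_cmpxchg_u64(		\
    			(volatile void *)(ptr), (u64)_o, (u64)_n);	\
    	_ret;								\
    })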
2021-08-24 | ARC: xchg: !LLSC: remove UP micro-optimization/hack | Vineet Gupta | 1 | -7/+1
It gets in the way of cleaning things up and is a maintenance pain in the neck! Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: bitops: fls/ffs to take int (vs long) per asm-generic defines | Vineet Gupta | 1 | -2/+2
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: switch to generic bitops | Vineet Gupta | 3 | -198/+2
- !LLSC now only needs a single spinlock for atomics and bitops
- Some codegen changes (slight bloat) with generic bitops:
  1. Code increase due to the LD-check-atomic paradigm vs. an unconditional atomic (which dirties the cache line even if the bit is set already). So despite the increase, generic is the right thing to do.
  2. Code decrease, but use of costlier instructions such as DIV vs. shift-based math, due to signed arithmetic (see the sketch below). This needs to be revisited separately.

arc:
    static inline int test_bit(unsigned int nr, const volatile unsigned long *addr)
                               ^^^^^^^^^^^^
generic:
    static inline int test_bit(int nr, const volatile unsigned long *addr)
                               ^^^
Link: https://lore.kernel.org/r/20180830135749.GA13005@arm.com Signed-off-by: Will Deacon <will@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> [vgupta: wrote patch based on Will's PoC, analysed codegen diffs] Signed-off-by: Vineet Gupta <vgupta@kernel.org>
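The signedness point is visible in isolation: with a signed bit number the compiler cannot reduce division by a power of two to a bare shift, because C division truncates toward zero for negative values, so it emits rounding fixups or a real divide. A standalone illustration:

    /* signed nr: needs rounding fixups (or DIV) in case nr < 0 */
    int word_of_bit_signed(int nr) { return nr / 32; }

    /* unsigned nr: compiles to a plain right shift, nr >> 5 */
    unsigned int word_of_bit_unsigned(unsigned int nr) { return nr / 32; }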
2021-08-24 | ARC: atomics: implement relaxed variants | Vineet Gupta | 2 | -30/+26
The current ARC fetch/return atomics provide only fully-ordered semantics, with 2 full barriers around the operation. Instead, implement them as relaxed variants without any barriers, and rely on generic code to generate the fully-ordered, acquire and release variants by adding the appropriate barriers. This helps elide some extra barriers in case of acquire/release/relaxed calls.

bloat-o-meter for hsdk defconfig shows codegen improvements, although the numbers below are inflated due to unrelated inlining heuristic changes:

| bloat-o-meter vmlinux-643babe34fd7-non-relaxed vmlinux-45aa05cb44d7-relaxed
| add/remove: 2/5 grow/shrink: 42/1222 up/down: 4158/-14312 (-10154)
| Function                        old   new  delta
| ..
| sys_renameat                    462   476   +14
| ip_mc_inc_group                 424   436   +12
| do_read_cache_page             1882  1894   +12
| ..
| refcount_dec_and_mutex_lock     254   250    -4
| refcount_dec_and_lock_irqsave   258   254    -4
| refcount_dec_and_lock           254   250    -4
| ..
| tcp_v6_route_req                246   238    -8
| tcp_v4_destroy_sock             286   278    -8
| tcp_twsk_unique                 352   344    -8

Link: https://lore.kernel.org/r/20180830144344.GW24142@hirez.programming.kicks-ass.net Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
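Conceptually, the generic layer synthesizes the ordered forms from the relaxed one; a simplified sketch of what the atomic fallback machinery generates (names and the plain smp_mb() are illustrative; the real code uses dedicated pre/post fences):

    /* the arch now provides only the barrier-free op ... */
    int arch_atomic_fetch_add_relaxed(int i, atomic_t *v);

    /* ... and generic code derives the fully-ordered variant */
    static inline int atomic_fetch_add(int i, atomic_t *v)
    {
    	int ret;

    	smp_mb();				/* full barrier before */
    	ret = arch_atomic_fetch_add_relaxed(i, v);
    	smp_mb();				/* full barrier after */
    	return ret;
    }

Callers that only need the relaxed form thus skip both barriers, which is where the codegen improvements above come from.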
2021-08-24 | ARC: atomic64: LLSC: elide unused atomic_{and,or,xor,andnot}_return | Vineet Gupta | 1 | -0/+6
This is a non-functional change since those wrappers are not used in kernel sources at all. Link: http://lists.infradead.org/pipermail/linux-snps-arc/2018-August/004246.html Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: atomic: !LLSC: use int data type consistently | Vineet Gupta | 1 | -2/+2
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: atomic: !LLSC: remove hack in atomic_set() for UP | Vineet Gupta | 1 | -13/+4
!LLSC atomics use a spinlock (SMP) or irq-disable (UP) to implement critical regions. UP atomic_set() however was "cheating" by not doing any of that, yet still being functional. Remove this anomaly (primarily as cleanup for future code improvements), given that this config is not worth the hassle of special-case code. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
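The post-cleanup shape, roughly (a sketch; the real implementation uses the arch's shared atomics lock, not a private one):

    static DEFINE_SPINLOCK(atomic_lock);	/* stand-in for the shared lock */

    static inline void my_atomic_set(atomic_t *v, int i)
    {
    	unsigned long flags;

    	/* like every other !LLSC atomic op: spinlock on SMP,
    	 * IRQ-disable on UP (spin_lock_irqsave degenerates to that) */
    	spin_lock_irqsave(&atomic_lock, flags);
    	WRITE_ONCE(v->counter, i);
    	spin_unlock_irqrestore(&atomic_lock, flags);
    }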
2021-08-24 | ARC: atomics: disintegrate header | Vineet Gupta | 4 | -424/+461
Non-functional change, to ease future addition/removal. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: export clear_user_page() for modules | Randy Dunlap | 1 | -1/+1
0day bot reports a build error: ERROR: modpost: "clear_user_page" [drivers/media/v4l2-core/videobuf-dma-sg.ko] undefined! so export it in arch/arc/ to fix the build error. In most ARCHes, clear_user_page() is a macro. OTOH, in a few ARCHes it is a function and needs to be exported. PowerPC exported it in 2004. It looks like nds32 and nios2 still need to have it exported. Fixes: 4102b53392d63 ("ARC: [mm] Aliasing VIPT dcache support 2/4") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Reported-by: kernel test robot <lkp@intel.com> Cc: Guenter Roeck <linux@roeck-us.net> Cc: linux-snps-arc@lists.infradead.org Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | arch/arc/kernel/: fix misspellings using codespell tool | Changcheng Deng | 3 | -3/+3
Some typos found by the codespell tool:
./intc-compact.c:145: prioity ==> priority
./smp.c:286: recevier ==> receiver
./stacktrace.c:152: prelogue ==> prologue
Fix the typos found by codespell. Reported-by: Zeal Robot <zealci@zte.com.cn> Signed-off-by: Changcheng Deng <deng.changcheng@zte.com.cn> Signed-off-by: Yi Wang <wang.yi59@zte.com.cn> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-22 | Linux 5.14-rc7 | Linus Torvalds | 1 | -1/+1
2021-08-21 | fs: warn about impending deprecation of mandatory locks | Jeff Layton | 1 | -1/+5
We've had CONFIG_MANDATORY_FILE_LOCKING since 2015 and a lot of distros have disabled it. Warn the stragglers that still use "-o mand" that we'll be dropping support for that mount option. Cc: stable@vger.kernel.org Signed-off-by: Jeff Layton <jlayton@kernel.org>
2021-08-20 | io_uring: fix xa_alloc_cycle() error return value check | Jens Axboe | 1 | -4/+5
We currently check for ret != 0 to indicate error, but '1' is a valid return and just indicates that the allocation succeeded with a wrap. Correct the check to be for < 0, like it was before the xarray conversion. Cc: stable@vger.kernel.org Fixes: 61cf93700fe6 ("io_uring: Convert personality_idr to XArray") Signed-off-by: Jens Axboe <axboe@kernel.dk>
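For reference, xa_alloc_cyclic() returns 0 on success, 1 when the allocation succeeded but the cyclic counter wrapped, and a negative errno on failure, so only ret < 0 is an error. The corrected pattern, roughly (field names follow the io_uring context described above; treat them as illustrative):

    u32 id;
    int ret;

    ret = xa_alloc_cyclic(&ctx->personalities, &id, creds,
    		      XA_LIMIT(0, USHRT_MAX), &ctx->pers_next,
    		      GFP_KERNEL);
    if (ret < 0)	/* was "if (ret)", which treated 1 (wrapped) as failure */
    	return ret;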
2021-08-20 | hugetlb: don't pass page cache pages to restore_reserve_on_error | Mike Kravetz | 1 | -5/+14
syzbot hit kernel BUG at fs/hugetlbfs/inode.c:532 as described in [1]. This BUG triggers if the HPageRestoreReserve flag is set on a page in the page cache. It should never be set, as the routine huge_add_to_page_cache explicitly clears the flag after adding a page to the cache.

The only code other than huge page allocation which sets the flag is restore_reserve_on_error. It will potentially set the flag in rare out of memory conditions. syzbot was injecting errors to cause memory allocation errors which exercised this specific path.

The code in restore_reserve_on_error is doing the right thing. However, there are instances where pages in the page cache were being passed to restore_reserve_on_error. This is incorrect, as once a page goes into the cache, reservation information will not be modified for the page until it is removed from the cache. Error paths do not remove pages from the cache, so even in the case of error, the page will remain in the cache and no reservation adjustment is needed.

Modify routines that potentially call restore_reserve_on_error with a page cache page to no longer do so.

Note on fixes tag: prior to commit 846be08578ed ("mm/hugetlb: expand restore_reserve_on_error functionality") the routine would not process page cache pages because the HPageRestoreReserve flag is not set on such pages. Therefore, this issue could not be triggered. The code added by commit 846be08578ed ("mm/hugetlb: expand restore_reserve_on_error functionality") is needed and correct. It exposed incorrect calls to restore_reserve_on_error, which is the root cause addressed by this commit.

[1] https://lore.kernel.org/linux-mm/00000000000050776d05c9b7c7f0@google.com/ Link: https://lkml.kernel.org/r/20210818213304.37038-1-mike.kravetz@oracle.com Fixes: 846be08578ed ("mm/hugetlb: expand restore_reserve_on_error functionality") Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Reported-by: <syzbot+67654e51e54455f1c585@syzkaller.appspotmail.com> Cc: Mina Almasry <almasrymina@google.com> Cc: Axel Rasmussen <axelrasmussen@google.com> Cc: Peter Xu <peterx@redhat.com> Cc: Muchun Song <songmuchun@bytedance.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Naoya Horiguchi <naoya.horiguchi@linux.dev> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-08-20 | kfence: fix is_kfence_address() for addresses below KFENCE_POOL_SIZE | Marco Elver | 1 | -3/+4
Originally the addr != NULL check was meant to take care of the case where __kfence_pool == NULL (KFENCE is disabled). However, this does not work for addresses where addr > 0 && addr < KFENCE_POOL_SIZE. This can be the case on NULL-deref where addr > 0 && addr < PAGE_SIZE or any other faulting access with addr < KFENCE_POOL_SIZE. While the kernel would likely crash, the stack traces and report might be confusing due to double faults upon KFENCE's attempt to unprotect such an address. Fix it by just checking that __kfence_pool != NULL instead. Link: https://lkml.kernel.org/r/20210818130300.2482437-1-elver@google.com Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure") Signed-off-by: Marco Elver <elver@google.com> Reported-by: Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com> Acked-by: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: <stable@vger.kernel.org> [5.12+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
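The resulting check, roughly (a sketch of the described fix; the unsigned subtraction wraps to a huge value when addr is below __kfence_pool, so a single comparison covers both bounds):

    static bool is_kfence_address_sketch(const void *addr)
    {
    	/* testing __kfence_pool (not addr) also handles the
    	 * KFENCE-disabled case for every small faulting address */
    	return (unsigned long)((const char *)addr - __kfence_pool)
    			< KFENCE_POOL_SIZE && __kfence_pool;
    }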
2021-08-20 | mm: vmscan: fix missing psi annotation for node_reclaim() | Johannes Weiner | 1 | -0/+3
In a debugging session the other day, Rik noticed that node_reclaim() was missing memstall annotations. This means we'll miss pressure and lost productivity resulting from reclaim on an overloaded local NUMA node when vm.zone_reclaim_mode is enabled. There haven't been any reports, but that's likely because vm.zone_reclaim_mode hasn't been a commonly used feature recently, and the intersection between such setups and psi users is probably nil. But secondary memory such as CXL-connected DIMMs, persistent memory etc, and the page demotion patches that handle them (https://lore.kernel.org/lkml/20210401183216.443C4443@viggo.jf.intel.com/) could soon make this a more common codepath again. Link: https://lkml.kernel.org/r/20210818152457.35846-1-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reported-by: Rik van Riel <riel@surriel.com> Reviewed-by: Shakeel Butt <shakeelb@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
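The fix pattern is the standard psi stall annotation pair around the reclaim work; roughly (placement inside the node-reclaim path is assumed from the description):

    unsigned long pflags;

    psi_memstall_enter(&pflags);	/* charge this reclaim as a memory stall */
    /* ... shrink/reclaim work for the local node ... */
    psi_memstall_leave(&pflags);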
2021-08-20 | mm/hwpoison: retry with shake_page() for unhandlable pages | Naoya Horiguchi | 1 | -3/+9
HWPoisonHandlable() sometimes returns false for typical user pages due to races with average memory events like transfers over LRU lists. This causes failures in hwpoison handling. There's retry code for such a case, but it does not work because the retry loop reaches the retry limit too quickly, before the page settles down to a handlable state. Let get_any_page() call shake_page() to fix it. [naoya.horiguchi@nec.com: get_any_page(): return -EIO when retry limit reached] Link: https://lkml.kernel.org/r/20210819001958.2365157-1-naoya.horiguchi@linux.dev Link: https://lkml.kernel.org/r/20210817053703.2267588-1-naoya.horiguchi@linux.dev Fixes: 25182f05ffed ("mm,hwpoison: fix race with hugetlb page allocation") Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com> Reported-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Yang Shi <shy828301@gmail.com> Cc: Oscar Salvador <osalvador@suse.de> Cc: Muchun Song <songmuchun@bytedance.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Michal Hocko <mhocko@suse.com> Cc: <stable@vger.kernel.org> [5.13+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-08-20 | mm: memcontrol: fix occasional OOMs due to proportional memory.low reclaim | Johannes Weiner | 2 | -22/+34
We've noticed occasional OOM killing when memory.low settings are in effect for cgroups. This is unexpected and undesirable as memory.low is supposed to express non-OOMing memory priorities between cgroups.

The reason for this is proportional memory.low reclaim. When cgroups are below their memory.low threshold, reclaim passes them over in the first round, and then retries if it couldn't find pages anywhere else. But when cgroups are slightly above their memory.low setting, page scan force is scaled down and diminished in proportion to the overage, to the point where it can cause reclaim to fail as well - only in that case we currently don't retry, and instead trigger OOM.

To fix this, hook proportional reclaim into the same retry logic we have in place for when cgroups are skipped entirely. This way, if reclaim fails and some cgroups were scanned with diminished pressure, we'll try another full-force cycle before giving up and OOMing.

[akpm@linux-foundation.org: coding-style fixes] Link: https://lkml.kernel.org/r/20210817180506.220056-1-hannes@cmpxchg.org Fixes: 9783aa9917f8 ("mm, memcg: proportional memory.{low,min} reclaim") Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reported-by: Leon Yang <lnyng@fb.com> Reviewed-by: Rik van Riel <riel@surriel.com> Reviewed-by: Shakeel Butt <shakeelb@google.com> Acked-by: Roman Gushchin <guro@fb.com> Acked-by: Chris Down <chris@chrisdown.name> Acked-by: Michal Hocko <mhocko@suse.com> Cc: <stable@vger.kernel.org> [5.4+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-08-20 | MAINTAINERS: update ClangBuiltLinux IRC chat | Nathan Chancellor | 1 | -1/+1
Everyone has moved from Freenode to Libera, so update the channel entry in MAINTAINERS. Link: https://github.com/ClangBuiltLinux/linux/issues/1402 Link: https://lkml.kernel.org/r/20210818022339.3863058-1-nathan@kernel.org Signed-off-by: Nathan Chancellor <nathan@kernel.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Kees Cook <keescook@chromium.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-08-20 | mmflags.h: add missing __GFP_ZEROTAGS and __GFP_SKIP_KASAN_POISON names | Mike Rapoport | 1 | -1/+3
printk("%pGg") outputs these two flags as hexadecimal number, rather than as a string, e.g: GFP_KERNEL|0x1800000 Fix this by adding missing names of __GFP_ZEROTAGS and __GFP_SKIP_KASAN_POISON flags to __def_gfpflag_names. Link: https://lkml.kernel.org/r/20210816133502.590-1-rppt@kernel.org Fixes: 013bb59dbb7c ("arm64: mte: handle tags zeroing at page allocation time") Fixes: c275c5c6d50a ("kasan: disable freed user page poisoning with HW tags") Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Cc: Peter Collingbourne <pcc@google.com> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-08-20 | mm/page_alloc: don't corrupt pcppage_migratetype | Doug Berger | 1 | -13/+12
When placing pages on a pcp list, migratetype values over MIGRATE_PCPTYPES get added to the MIGRATE_MOVABLE pcp list. However, the actual migratetype is preserved in the page and should not be changed to MIGRATE_MOVABLE or the page may end up on the wrong free_list. The impact is that HIGHATOMIC or CMA pages getting bulk freed from the PCP lists could potentially end up on the wrong buddy list. There are various consequences but minimally NR_FREE_CMA_PAGES accounting could get screwed up. [mgorman@techsingularity.net: changelog update] Link: https://lkml.kernel.org/r/20210811182917.2607994-1-opendmb@gmail.com Fixes: df1acc856923 ("mm/page_alloc: avoid conflating IRQs disabled with zone->lock") Signed-off-by: Doug Berger <opendmb@gmail.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Mel Gorman <mgorman@techsingularity.net> Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-08-20Revert "mm: swap: check if swap backing device is congested or not"Yang Shi1-7/+0
Due to the change in how the block layer detects congestion, the justification of commit 8fd2e0b505d1 ("mm: swap: check if swap backing device is congested or not") doesn't stand anymore, so the commit could be just reverted in order to solve the race reported by commit 2efa33fc7f6e ("mm/shmem: fix shmem_swapin() race with swapoff"). The fix was reverted by the previous patch. Link: https://lkml.kernel.org/r/20210810202936.2672-3-shy828301@gmail.com Signed-off-by: Yang Shi <shy828301@gmail.com> Suggested-by: Hugh Dickins <hughd@google.com> Acked-by: Hugh Dickins <hughd@google.com> Cc: "Huang, Ying" <ying.huang@intel.com> Cc: Miaohe Lin <linmiaohe@huawei.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Minchan Kim <minchan@kernel.org> Cc: David Hildenbrand <david@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-08-20Revert "mm/shmem: fix shmem_swapin() race with swapoff"Yang Shi1-13/+1
Due to the change in how the block layer detects congestion, the justification of commit 8fd2e0b505d1 ("mm: swap: check if swap backing device is congested or not") doesn't stand anymore, so that commit could be just reverted in order to solve the race reported by commit 2efa33fc7f6e ("mm/shmem: fix shmem_swapin() race with swapoff"); hence the fix commit could be just reverted as well. And that fix is also kind of buggy, as discussed in [1] and [2]. [1] https://lore.kernel.org/linux-mm/24187e5e-069-9f3f-cefe-39ac70783753@google.com/ [2] https://lore.kernel.org/linux-mm/e82380b9-3ad4-4a52-be50-6d45c7f2b5da@google.com/ Link: https://lkml.kernel.org/r/20210810202936.2672-2-shy828301@gmail.com Signed-off-by: Yang Shi <shy828301@gmail.com> Suggested-by: Hugh Dickins <hughd@google.com> Acked-by: Hugh Dickins <hughd@google.com> Cc: "Huang, Ying" <ying.huang@intel.com> Cc: Miaohe Lin <linmiaohe@huawei.com> Cc: David Hildenbrand <david@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Minchan Kim <minchan@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-08-20 | riscv: Fix a number of free'd resources in init_resources() | Petr Pavlu | 1 | -2/+2
Function init_resources() allocates a boot memory block to hold an array of resources which it adds to iomem_resource. The array is filled in from its end, and the function then attempts to free any unused memory at the beginning. The problem is that the size of the unused memory is incorrectly calculated, and this can result in releasing memory which is in use by active resources. Their data then gets corrupted later when the memory is reused by a different part of the system. Fix the size of the released memory to correctly match the number of unused resource entries. Fixes: ffe0e5261268 ("RISC-V: Improve init_resources()") Signed-off-by: Petr Pavlu <petr.pavlu@suse.com> Reviewed-by: Sunil V L <sunilvl@ventanamicro.com> Acked-by: Nick Kossifidis <mick@ics.forth.gr> Tested-by: Sunil V L <sunilvl@ventanamicro.com> Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-08-19 | dt-bindings: sifive-l2-cache: Fix 'select' matching | Rob Herring | 1 | -4/+4
When the schema fixups are applied to 'select', the result is that a single entry is required for a match, but that will never match as there should be 2 entries. Also, a 'select' schema should have the widest possible match, so use 'contains', which matches the compatible string(s) in any position and not just the first position. Fixes: 993dcfac64eb ("dt-bindings: riscv: sifive-l2-cache: convert bindings to json-schema") Signed-off-by: Rob Herring <robh@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-08-19 | net: dpaa2-switch: disable the control interface on error path | Vladimir Oltean | 1 | -18/+18
Currently dpaa2_switch_takedown has a funny name and does not do the opposite of dpaa2_switch_init, which makes probing fail when we need to handle an -EPROBE_DEFER.

A sketch of what dpaa2_switch_init does:

    dpsw_open
    dpaa2_switch_detect_features
    dpsw_reset
    for (i = 0; i < ethsw->sw_attr.num_ifs; i++) {
    	dpsw_if_disable
    	dpsw_if_set_stp
    	dpsw_vlan_remove_if_untagged
    	dpsw_if_set_tci
    	dpsw_vlan_remove_if
    }
    dpsw_vlan_remove
    alloc_ordered_workqueue
    dpsw_fdb_remove
    dpaa2_switch_ctrl_if_setup

When dpaa2_switch_takedown is called from the error path of dpaa2_switch_probe(), the control interface, enabled by dpaa2_switch_ctrl_if_setup from dpaa2_switch_init, remains enabled, because dpaa2_switch_takedown does not call dpaa2_switch_ctrl_if_teardown. Since dpaa2_switch_probe might fail due to EPROBE_DEFER of a PHY, this means that a second probe of the driver will happen with the control interface directly enabled. This will trigger a second error:

[ 93.273528] fsl_dpaa2_switch dpsw.0: dpsw_ctrl_if_set_pools() failed
[ 93.281966] fsl_dpaa2_switch dpsw.0: fsl_mc_driver_probe failed: -13
[ 93.288323] fsl_dpaa2_switch: probe of dpsw.0 failed with error -13

Which, if we investigate the /dev/dpaa2_mc_console log, we find out is caused by:

[E, ctrl_if_set_pools:2211, DPMNG] ctrl_if must be disabled

So make dpaa2_switch_takedown do the opposite of dpaa2_switch_init (in reasonable limits, no reason to change STP state, re-add VLANs etc), and rename it to something more conventional, like dpaa2_switch_teardown. Fixes: 613c0a5810b7 ("staging: dpaa2-switch: enable the control interface") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Ioana Ciornei <ioana.ciornei@nxp.com> Link: https://lore.kernel.org/r/20210819141755.1931423-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-19Revert "flow_offload: action should not be NULL when it is referenced"Ido Schimmel1-7/+5
This reverts commit 9ea3e52c5bc8bb4a084938dc1e3160643438927a. Cited commit added a check to make sure 'action' is not NULL, but 'action' is already dereferenced before the check, when calling flow_offload_has_one_action(). Therefore, the check does not make any sense and results in a smatch warning: include/net/flow_offload.h:322 flow_action_mixed_hw_stats_check() warn: variable dereferenced before check 'action' (see line 319) Fix by reverting this commit. Cc: gushengxian <gushengxian@yulong.com> Fixes: 9ea3e52c5bc8 ("flow_offload: action should not be NULL when it is referenced") Signed-off-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Link: https://lore.kernel.org/r/20210819105842.1315705-1-idosch@idosch.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
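The bug class, reduced to a minimal form (a hypothetical function mirroring the smatch report; flow_offload_has_one_action() reads action->num_entries):

    static bool mixed_hw_stats_check_sketch(const struct flow_action *action)
    {
    	if (flow_offload_has_one_action(action))  /* dereferences action */
    		return true;

    	if (!action)	/* dead check: a NULL action already crashed above */
    		return false;

    	return true;
    }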
2021-08-19 | iavf: Fix ping is lost after untrusted VF had tried to change MAC | Sylwester Dziedziuch | 3 | -2/+47
Make changes to the MAC address dependent on the response of the PF. Disallow changes to the HW MAC address and MAC filter from an untrusted VF; thanks to that, ping is not lost if the VF tries to change its MAC. Add a new field in iavf_mac_filter to indicate whether there was a response from the PF for a given filter; based on this field, pass or discard the filter. Previously, if an untrusted VF tried to change its address, the address itself was not changed, but the filter was, and because of that ping couldn't go through. Fixes: c5c922b3e09b ("iavf: fix MAC address setting for VFs when filter is rejected") Signed-off-by: Przemyslaw Patynowski <przemyslawx.patynowski@intel.com> Signed-off-by: Sylwester Dziedziuch <sylwesterx.dziedziuch@intel.com> Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com> Tested-by: Gurucharan G <Gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-19 | i40e: Fix ATR queue selection | Arkadiusz Kubalewski | 1 | -2/+1
Without this patch, ATR does not work. Receive/transmit uses queue selection based on the SW DCB hashing method. If traffic classes are not configured for the PF, then use the netdev_pick_tx function for selecting the queue for packet transmission. Instead of calling i40e_swdcb_skb_tx_hash, call netdev_pick_tx, which ensures that the packet is transmitted/received on the CPU that is running the application.

Reproduction steps:
1. Load the i40e driver
2. Map each MSI interrupt of the i40e port to each CPU
3. Disable ntuple, enable ATR, i.e.:
   ethtool -K $interface ntuple off
   ethtool --set-priv-flags $interface flow-director-atr
4. Run an application that generates traffic and is bound to a single CPU, i.e.:
   taskset -c 9 netperf -H 1.1.1.1 -t TCP_RR -l 10
5. Observe behavior: the application's traffic should be restricted to the CPU provided in taskset.

Fixes: 89ec1f0886c1 ("i40e: Fix queue-to-TC mapping on Tx") Signed-off-by: Przemyslaw Patynowski <przemyslawx.patynowski@intel.com> Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com> Tested-by: Dave Switzer <david.switzer@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
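The gist of the fix as described, in sketch form (not the exact hunk; the ndo_select_queue context and function name are assumptions):

    static u16 i40e_lan_select_queue_sketch(struct net_device *netdev,
    					    struct sk_buff *skb,
    					    struct net_device *sb_dev)
    {
    	/* no traffic classes configured for the PF: let the stack
    	 * pick, keeping the flow on the CPU running the application
    	 * so ATR's flow director entries line up with reality */
    	return netdev_pick_tx(netdev, skb, sb_dev);
    }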
2021-08-19 | ASoC: intel: atom: Fix breakage for PCM buffer address setup | Takashi Iwai | 1 | -1/+1
The commit 2e6b836312a4 ("ASoC: intel: atom: Fix reference to PCM buffer address") changed the reference of PCM buffer address to substream->runtime->dma_addr as the buffer address may change dynamically. However, I forgot that the dma_addr field is still not set up for the CONTINUOUS buffer type (that this driver uses) yet in 5.14 and earlier kernels, and it resulted in garbage I/O. The problem will be fixed in 5.15, but we need to address it quickly for now. The fix is to deduce the address again from the DMA pointer with virt_to_phys(), but from the right one, substream->runtime->dma_area. Fixes: 2e6b836312a4 ("ASoC: intel: atom: Fix reference to PCM buffer address") Reported-and-tested-by: Hans de Goede <hdegoede@redhat.com> Cc: <stable@vger.kernel.org> Acked-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/2048c6aa-2187-46bd-6772-36a4fb3c5aeb@redhat.com Link: https://lore.kernel.org/r/20210819152945.8510-1-tiwai@suse.de Signed-off-by: Takashi Iwai <tiwai@suse.de>
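The shape of the stopgap (a sketch; "buf_phys" is a placeholder, not the driver's actual field name):

    /* runtime->dma_addr is not yet populated for CONTINUOUS buffers
     * on 5.14, so recompute the address from the CPU-visible buffer */
    dma_addr_t buf_phys = virt_to_phys(substream->runtime->dma_area);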
2021-08-19 | ALSA: hda/realtek: Limit mic boost on HP ProBook 445 G8 | Kai-Heng Feng | 1 | -2/+9
The mic has lots of noises if mic boost is enabled. So disable mic boost to get crystal clear audio capture. Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com> Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20210818144119.121738-1-kai.heng.feng@canonical.com Signed-off-by: Takashi Iwai <tiwai@suse.de>
2021-08-19 | PCI/sysfs: Use correct variable for the legacy_mem sysfs object | Krzysztof Wilczyński | 1 | -1/+1
Two legacy PCI sysfs objects, "legacy_io" and "legacy_mem", were updated to use a unified address space in commit 636b21b50152 ("PCI: Revoke mappings like devmem"). This allows revocations to be managed from a single place when drivers want to take over and mmap() a /dev/mem range. Following the update, both sysfs objects should leverage the iomem_get_mapping() function to get an appropriate address range, but only "legacy_io" was correctly updated; the second attribute was wired to iomem_get_mapping() through the wrong variable. Thus, correct the variable name used so that the "legacy_mem" sysfs object also correctly calls iomem_get_mapping(). Fixes: 636b21b50152 ("PCI: Revoke mappings like devmem") Link: https://lore.kernel.org/r/20210812132144.791268-1-kw@linux.com Signed-off-by: Krzysztof Wilczyński <kw@linux.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
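From the description, the fix is a one-variable swap when wiring up the second attribute; roughly (a sketch of the shape, not the verbatim hunk):

    /* both legacy ranges must be revocable via iomem_get_mapping();
     * the second assignment previously reused legacy_io by mistake */
    b->legacy_io->mapping  = iomem_get_mapping();
    b->legacy_mem->mapping = iomem_get_mapping();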