path: root/lib
Age | Commit message | Author | Files/Lines
2025-05-11 | lib/test_vmalloc.c: replace RWSEM to SRCU for setup | Uladzislau Rezki (Sony) | 1 file, -10/+7

The test has an initialization step during which threads are created. To prevent the workers from starting prematurely, a write lock was previously held by the main setup thread while each worker blocked on a read lock. Replace this RWSEM-based synchronization with a simpler SRCU-based approach, which takes two basic steps:

- The main thread wraps the setup phase in an SRCU read-side critical section, i.e. a srcu_read_lock()/srcu_read_unlock() pair.
- Each worker calls synchronize_srcu() on entry, ensuring it waits for the initialization phase to complete.

This patch eliminates the need for the down_read()/up_read() and down_write()/up_write() pairs, thus simplifying the logic and improving clarity.

[urezki@gmail.com: fix compile error with CONFIG_TINY_RCU]
Link: https://lkml.kernel.org/r/20250420142029.103169-1-urezki@gmail.com
Link: https://lkml.kernel.org/r/20250417161216.88318-1-urezki@gmail.com
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Adrian Huang <ahuang12@lenovo.com>
Tested-by: Adrian Huang <ahuang12@lenovo.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
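A minimal sketch of the described pattern, with illustrative names rather than the test's actual symbols:

    /* Setup gate: workers wait for the main thread's SRCU read-side
     * critical section to end before touching shared test state. */
    DEFINE_STATIC_SRCU(prepare_srcu);

    static void setup_and_launch(void)
    {
            int idx = srcu_read_lock(&prepare_srcu);

            create_worker_threads();        /* hypothetical helper */
            initialize_test_state();        /* hypothetical helper */
            srcu_read_unlock(&prepare_srcu, idx);   /* releases the workers */
    }

    static int worker_fn(void *arg)
    {
            /* Blocks until the setup read-side critical section ends. */
            synchronize_srcu(&prepare_srcu);
            return run_test(arg);           /* hypothetical helper */
    }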
2025-05-11 | maple_tree: reorder mas->store_type case statements | Sidhartha Kumar | 1 file, -26/+25

Move the unlikely case that mas->store_type is invalid to be the last evaluated case, and put likelier cases higher up.

Link: https://lkml.kernel.org/r/20250410191446.2474640-7-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Suggested-by: Liam R. Howlett <liam.howlett@oracle.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-05-11 | maple_tree: add sufficient height | Sidhartha Kumar | 1 file, -3/+16

In order to support rebalancing and spanning stores using less than the worst case number of nodes, we need to track more than just the vacant height. Using only the vacant height to reduce the worst case maple node allocation count can lead to a shortage of nodes in the following scenarios.

For rebalancing writes, when a leaf node becomes insufficient, it may be combined with a sibling into a single node. This means that the parent node which has entries for these children will lose one entry. If this parent node was just meeting the minimum number of entries, losing one entry will now make it insufficient. This leads to a cascade of rebalancing at different levels and can require more node allocations than the vacant height alone would suggest.

For spanning writes, a similar situation occurs. At the location at which a spanning write is detected, the ancestor nodes may likewise need to be rebalanced into a smaller number of nodes, and the same cascading situation can occur.

To use less than the full height of the tree for the number of allocations, we also need to track the height at which a non-leaf node cannot become insufficient. This means that even if a rebalance occurs in one of its children, the node currently has enough entries that it can lose one without any further action. This field is stored in the maple write state as the sufficient height. In mas_prealloc_calc(), when figuring out how many nodes to allocate, we check whether the vacant node is lower in the tree than a sufficient node (i.e. has a larger height value). If it is, we cannot use the vacant height and must instead use the difference between the tree height and the sufficient height as the basis for the number of nodes needed.

An off-by-one bug was also discovered in mast_overflow(), where >= was used rather than >. This caused extra iterations of the mas_spanning_rebalance() loop and led to unneeded allocations. A test is also added to check that the number of allocations is correct.

Link: https://lkml.kernel.org/r/20250410191446.2474640-6-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-05-11 | maple_tree: break on convergence in mas_spanning_rebalance() | Sidhartha Kumar | 1 file, -3/+13

This allows support for using the vacant height to calculate the worst case number of nodes needed for a wr_rebalance operation. mas_spanning_rebalance() was seen to perform unnecessary node allocations. We can reduce allocations by breaking out of the rebalancing loop early once we realize that we have ascended to a common ancestor.

Link: https://lkml.kernel.org/r/20250410191446.2474640-5-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Suggested-by: Liam Howlett <liam.howlett@oracle.com>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-05-11 | maple_tree: use vacant nodes to reduce worst case allocations | Sidhartha Kumar | 1 file, -4/+9

In order to determine the store type for a maple tree operation, a walk of the tree is done through mas_wr_walk(). This function descends the tree until a spanning write is detected or we reach a leaf node. While descending, keep track of the height at which we encounter a node with available space; this is done by checking whether mas->end is less than the number of slots the given node type can hold.

Now that the height of the vacant node is tracked, we can use the difference between the height of the tree and the height of the vacant node to know how many levels node creation will have to propagate, as sketched below. Update mas_prealloc_calc() to consider the vacant height and reduce the number of worst-case allocations. Rebalancing and spanning stores are not yet supported and fall back to using the full height of the tree for allocations.

Update the preallocation testing assertions to take the vacant height into account.

Link: https://lkml.kernel.org/r/20250410191446.2474640-4-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
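A rough sketch of the idea; the field and helper names here are hypothetical, not the maple tree's actual symbols:

    /* During mas_wr_walk(): remember the lowest level with spare room. */
    if (mas->end < slots_for_node_type(type))       /* hypothetical helper */
            wr_mas->vacant_height = height;         /* node has a free slot */

    /* In mas_prealloc_calc(): a store can only force new-node creation
     * up to the first vacant ancestor, not all the way to the root. */
    levels = tree_height - wr_mas->vacant_height;
    nodes_needed = levels + 1;                      /* illustrative count */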
2025-05-11 | maple_tree: use height and depth consistently | Sidhartha Kumar | 1 file, -40/+44

For the maple tree, the root node is defined to have a depth of 0 and a height of 1. For each level down from the root, these values are incremented by 1. Various code paths define a root with depth 1, which is inconsistent with this definition; modify the code to be consistent with it.

In mas_spanning_rebalance(), l_mas.depth was being used to track the height based on the number of iterations done in the main loop. This information was then used in mas_put_in_tree() to set the height. Rather than overloading the l_mas.depth field to track height, simply keep track of the height in the local variable new_height and pass it directly to mas_wmb_replace(), which passes it on to mas_put_in_tree(). This allows us to remove the writes to l_mas.depth.

Link: https://lkml.kernel.org/r/20250410191446.2474640-3-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-05-11 | maple_tree: convert mas_prealloc_calc() to take in a maple write state | Sidhartha Kumar | 1 file, -8/+8

Patch series "Track node vacancy to reduce worst case allocation counts", v5.

================ overview ========================

Currently, the maple tree preallocates the worst case number of nodes for a given store type by taking into account the whole height of the tree. This comes from a worst case scenario of every node in the tree being full, forcing node allocation to propagate upwards until we reach the root of the tree. This can be optimized if there are vacancies in nodes at a lower depth than the root node.

This series implements tracking the level at which there is a vacant node, so we only need to allocate until this level is reached, rather than always using the full height of the tree. The ma_wr_state struct is modified to add a field which keeps track of the vacant height and is updated during walks of the tree. This value is then read in mas_prealloc_calc() when we decide how many nodes to allocate.

For rebalancing and spanning stores, we also need to track the lowest height at which a node has one more entry than the minimum sufficient number of entries. This is because rebalancing can cause a parent node to become insufficient, which results in further node allocations. In this case, we need to use the sufficient height as the worst case rather than the vacant height.

patch 1-2: preparatory patches
patch 3: implement vacant height tracking + update the tests
patch 4: support vacant height tracking for rebalancing writes
patch 5: implement sufficient height tracking
patch 6: reorder switch case statements

================ results =========================

Bpftrace was used to profile the allocation path for requesting new maple nodes while running stress-ng mmap for 120s. The histograms below represent requests to kmem_cache_alloc_bulk() and show the count argument, i.e. how many maple nodes the caller is requesting from kmem_cache_alloc_bulk().

command: stress-ng --mmap 4 --timeout 120

mm-unstable

@bulk_alloc_req:
[3, 4)         4 |                                                    |
[4, 5)     54170 |@                                                   |
[5, 6)         0 |                                                    |
[6, 7)    893057 |@@@@@@@@@@@@@@@@@@@@                                |
[7, 8)         4 |                                                    |
[8, 9)   2230287 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[9, 10)    55811 |@                                                   |
[10, 11)   77834 |@                                                   |
[11, 12)       0 |                                                    |
[12, 13) 1368684 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                     |
[13, 14)       0 |                                                    |
[14, 15)       0 |                                                    |
[15, 16)  367197 |@@@@@@@@                                            |

@maple_node_total: 46,630,160
@total_vmas: 46184591

mm-unstable + this series

@bulk_alloc_req:
[2, 3)       198 |                                                    |
[3, 4)         4 |                                                    |
[4, 5)        43 |                                                    |
[5, 6)         0 |                                                    |
[6, 7)   1069503 |@@@@@@@@@@@@@@@@@@@@@                               |
[7, 8)         4 |                                                    |
[8, 9)   2597268 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[9, 10)   472191 |@@@@@@@@@                                           |
[10, 11)  191904 |@@@                                                 |
[11, 12)       0 |                                                    |
[12, 13)  247316 |@@@@                                                |
[13, 14)       0 |                                                    |
[14, 15)       0 |                                                    |
[15, 16)   98769 |@                                                   |

@maple_node_total: 37,813,856
@total_vmas: 43493287

This represents a ~19% reduction in the number of bulk maple nodes allocated.

For more reproducible results, a histogram of the return value of mas_prealloc_calc() is displayed while running the maple_tree_tests, which have a deterministic store pattern.

mas_prealloc_calc() return value, mm-unstable
1  :                                                    (12068)
3  :                                                    (11836)
5  : *****                                              (271192)
7  : ************************************************** (2329329)
9  : ***********                                        (534186)
10 :                                                    (435)
11 : ***************                                    (704306)
13 : ********                                           (409781)

mas_prealloc_calc() return value, mm-unstable + this series
1  :                                                    (12070)
3  : ************************************************** (3548777)
5  : ********                                           (633458)
7  :                                                    (65081)
9  :                                                    (11224)
10 :                                                    (341)
11 :                                                    (2973)
13 :                                                    (68)

do_mmap latency was also measured for regressions:
command: stress-ng --mmap 4 --timeout 120

mm-unstable:               avg = 7162 nsecs, total: 16101821292 nsecs, count: 2248034
mm-unstable + this series: avg = 6689 nsecs, total: 15135391764 nsecs, count: 2262726

stress-ng --mmap 4 --timeout 120

with vacant_height:
stress-ng: info: [257] 21526312 Maple Tree Read 0.176 M/sec
stress-ng: info: [257] 339979348 Maple Tree Write 2.774 M/sec

without vacant_height:
stress-ng: info: [8228] 20968900 Maple Tree Read 0.171 M/sec
stress-ng: info: [8228] 312214648 Maple Tree Write 2.547 M/sec

This represents an increase of ~3% in read throughput and ~9% in write throughput.

This patch (of 6):

In a subsequent patch, mas_prealloc_calc() will need to access fields only available in the ma_wr_state. Convert the function to take in a ma_wr_state and modify all callers. There is no functional change.

Link: https://lkml.kernel.org/r/20250410191446.2474640-1-sidhartha.kumar@oracle.com
Link: https://lkml.kernel.org/r/20250410191446.2474640-2-sidhartha.kumar@oracle.com
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-05-11 | xarray: make xa_alloc_cyclic() return 0 on all success cases | Przemek Kitszel | 1 file, -2/+15

Change xa_alloc_cyclic() to return 0 even on wrap-around. Do the same for xa_alloc_cyclic_irq() and xa_alloc_cyclic_bh(). This will prevent any future bug of treating a return of 1 as an error:

    int ret = xa_alloc_cyclic(...);
    if (ret)        /* currently mishandles ret == 1 */
            goto failure;

If anyone turns out to be interested in when wrap-around occurs, there is still __xa_alloc_cyclic(), which behaves as before. For now there is no such user.

Link: https://lkml.kernel.org/r/20250320102219.8101-1-przemyslaw.kitszel@intel.com
Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Suggested-by: Matthew Wilcox <willy@infradead.org>
Link: https://lore.kernel.org/netdev/Z9gUd-5t8b5NX2wE@casper.infradead.org
Cc: Andriy Shevchenko <andriy.shevchenko@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Cc: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
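With the new semantics, a caller can treat any non-zero return as an error. A sketch of typical usage (the xarray, ID names, and helper are illustrative):

    #include <linux/xarray.h>

    static DEFINE_XARRAY_ALLOC(sessions);
    static u32 next_session;

    /* Returns the allocated ID (>= 0), or a negative errno. */
    static int session_create(void *session)
    {
            u32 id;
            int ret;

            /* Wrap-around now also returns 0, so this plain error
             * check no longer mistakes the old ret == 1 for failure. */
            ret = xa_alloc_cyclic(&sessions, &id, session, xa_limit_31b,
                                  &next_session, GFP_KERNEL);
            if (ret)
                    return ret;
            return id;
    }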
2025-05-11 | iov_iter: convert iov_iter_extract_xarray_pages() to use folios | Matthew Wilcox (Oracle) | 1 file, -8/+8

ITER_XARRAY is exclusively used with xarrays that contain folios, not pages, so extract folio pointers from it, not page pointers. Removes a use of find_subpage().

Link: https://lkml.kernel.org/r/20250402210612.2444135-5-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-05-11 | iov_iter: convert iter_xarray_populate_pages() to use folios | Matthew Wilcox (Oracle) | 1 file, -7/+7

ITER_XARRAY is exclusively used with xarrays that contain folios, not pages, so extract folio pointers from it, not page pointers. Removes a hidden call to compound_head() and a use of find_subpage().

Link: https://lkml.kernel.org/r/20250402210612.2444135-4-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-05-08 | ratelimit: Drop redundant accesses to burst | Paul E. McKenney | 1 file, -2/+2

Now that there is the "burst <= 0" fastpath, for all later code burst must be strictly greater than zero. Therefore, drop the redundant checks of this local variable.

Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/
Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08 | ratelimit: Use nolock_ret restructuring to collapse common case code | Paul E. McKenney | 1 file, -10/+3

Now that unlock_ret releases the lock and then falls into nolock_ret, which handles ->missed based on the value of ret, the common-case lock-held code can be collapsed into a single "if" statement with a single-statement "then" clause. Yes, we could go further and just assign the "if" condition to ret, but in the immortal words of MSDOS, "Are you sure?".

Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/
Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08 | ratelimit: Use nolock_ret label to collapse lock-failure code | Paul E. McKenney | 1 file, -14/+4

Now that we have a nolock_ret label that handles ->missed correctly based on the value of ret, we can eliminate a local variable and collapse several "if" statements on the lock-acquisition-failure code path.

Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/
Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08 | ratelimit: Use nolock_ret label to save a couple of lines of code | Paul E. McKenney | 1 file, -5/+3

Create a nolock_ret label in order to start consolidating the unlocked return paths that conditionally invoke ratelimit_state_inc_miss().

Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/
Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08 | ratelimit: Simplify common-case exit path | Paul E. McKenney | 1 file, -9/+5

By making "ret" always be initialized, and moving the final call to ratelimit_state_inc_miss() out from under the lock, we save a goto and a couple of lines of code. This also saves a couple of lines of code from the unconditional enable/disable slowpath.

Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/
Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08 | ratelimit: Warn if ->interval or ->burst are negative | Petr Mladek | 1 file, -0/+1

Currently, ___ratelimit() treats a negative ->interval or ->burst as if it was zero, but this is an accident of the current implementation. Therefore, splat in this case, which might have the benefit of detecting use of uninitialized ratelimit_state structures on the one hand, or easing addition of new features on the other.

Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/
Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/
Signed-off-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08 | ratelimit: Avoid atomic decrement under lock if already rate-limited | Paul E. McKenney | 1 file, -5/+8

Currently, if the lock is acquired, the code unconditionally does an atomic decrement on ->rs_n_left, even if that atomic operation is guaranteed to return a limit-rate verdict. A limit-rate verdict will in fact be the common case when something is spewing into a rate limit. This unconditional atomic operation incurs needless overhead and also raises the spectre of counter wrap. Therefore, do the atomic decrement only if there is some chance that rates won't be limited.

Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/
Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
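The shape of this optimization can be sketched as follows; this is simplified, and the choice of atomic_dec_if_positive() here is illustrative rather than taken from the patch:

    /* Sketch: consume a token only when one might still be available. */
    static bool ratelimit_try_consume(struct ratelimit_state *rs)
    {
            if (atomic_read(&rs->rs_n_left) <= 0)
                    return false;   /* already rate-limited: skip the RMW */

            /* atomic_dec_if_positive() returns the decremented value,
             * negative if the decrement did not happen. */
            return atomic_dec_if_positive(&rs->rs_n_left) >= 0;
    }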
2025-05-08 | ratelimit: Avoid atomic decrement if already rate-limited | Paul E. McKenney | 1 file, -1/+3

Currently, if the lock could not be acquired, the code unconditionally does an atomic decrement on ->rs_n_left, even if that atomic operation is guaranteed to return a limit-rate verdict. This incurs needless overhead and also raises the spectre of counter wrap. Therefore, do the atomic decrement only if there is some chance that rates won't be limited.

Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/
Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08 | ratelimit: Don't flush misses counter if RATELIMIT_MSG_ON_RELEASE | Paul E. McKenney | 1 file, -3/+3

Restore the previous semantics, where the misses counter is unchanged if the RATELIMIT_MSG_ON_RELEASE flag is set.

Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/
Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08 | ratelimit: Force re-initialization when rate-limiting re-enabled | Paul E. McKenney | 1 file, -1/+14

Currently, if rate limiting is disabled, ___ratelimit() does an immediate early return with no state changes. This can result in false-positive drops when re-enabling rate limiting. Therefore, mark the ratelimit_state structure "uninitialized" when rate limiting is disabled.

[ paulmck: Apply Petr Mladek feedback. ]

Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/
Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08 | ratelimit: Allow zero ->burst to disable ratelimiting | Paul E. McKenney | 1 file, -2/+6

If ->interval is zero, then rate-limiting will be disabled. Alternatively, if ->interval is greater than zero and ->burst is zero, then rate-limiting will be applied unconditionally. The point of this distinction is to handle current users that pass zero-initialized ratelimit_state structures to ___ratelimit(), in which case the ->lock field will be uninitialized. Acquiring ->lock in this case is clearly not a strategy to win. Therefore, make this classification be lockless, as sketched below.

Note that although negative ->interval and ->burst happen to be treated as if they were zero, this is an accident of the current implementation. The semantics of negative values for these fields are subject to change without notice, especially given that Bert Karwatzki determined that no current calls to ___ratelimit() ever pass negative values for these fields.

This commit replaces an earlier buggy version.

Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/
Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/
Reported-by: Bert Karwatzki <spasswolf@web.de>
Reported-by: "Aithal, Srikanth" <sraithal@amd.com>
Closes: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/
Reported-by: Mark Brown <broonie@kernel.org>
Closes: https://lore.kernel.org/all/257c3b91-e30f-48be-9788-d27a4445a416@sirena.org.uk/
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Tested-by: "Aithal, Srikanth" <sraithal@amd.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
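The lockless classification can be sketched like this (simplified; the real ___ratelimit() manages considerably more state, and the slow-path helper named here is hypothetical):

    int ___ratelimit(struct ratelimit_state *rs, const char *func)
    {
            /* Lockless fast paths: never touch ->lock, so zero-initialized
             * (and thus lock-uninitialized) structures stay safe. */
            if (rs->interval <= 0)
                    return 1;       /* rate limiting disabled: always proceed */
            if (rs->burst <= 0)
                    return 0;       /* unconditional rate limiting */

            /* Locked slow path does the real accounting. */
            return ___ratelimit_locked(rs, func);   /* hypothetical helper */
    }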
2025-05-08 | ratelimit: Reduce ___ratelimit() false-positive rate limiting | Petr Mladek | 1 file, -13/+38

Retain the locked design, but check rate-limiting even when the lock could not be acquired.

Link: https://lore.kernel.org/all/Z_VRo63o2UsVoxLG@pathway.suse.cz/
Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/
Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/
Signed-off-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08 | ratelimit: Avoid jiffies=0 special case | Paul E. McKenney | 1 file, -1/+3

The ___ratelimit() function special-cases the jiffies-counter value of zero as "uninitialized". This works well on 64-bit systems, where the jiffies counter is not going to return to zero for more than half a billion years on systems with HZ=1000, but similar 32-bit systems take less than 50 days to wrap the jiffies counter. And although the consequences of wrapping the jiffies counter seem to be limited to minor confusion on the duration of the rate-limiting interval that happens to end at time zero, it is almost no work to avoid this confusion. Therefore, introduce a RATELIMIT_INITIALIZED bit in the ratelimit_state structure's ->flags field so that a ->begin value of zero is no longer special.

Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/
Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
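A sketch of how an explicit flag replaces the "->begin == 0" convention (illustrative, not the exact patch hunk):

    /* First use: stamp the window start and mark the state live. */
    if (!(rs->flags & RATELIMIT_INITIALIZED)) {
            rs->begin = jiffies;
            rs->flags |= RATELIMIT_INITIALIZED;
    }
    /* A later ->begin of 0 is now just an ordinary jiffies wrap. */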
2025-05-08 | ratelimit: Count misses due to lock contention | Paul E. McKenney | 1 file, -1/+3

The ___ratelimit() function simply returns zero ("do ratelimiting") if the trylock fails, but does not adjust the ->missed field. This means that the resulting dropped printk()s are dropped silently, which could seriously confuse people trying to do console-log-based debugging. Therefore, increment the ->missed field upon trylock failure.

Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/
Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08 | ratelimit: Convert the ->missed field to atomic_t | Paul E. McKenney | 1 file, -1/+1

The ratelimit_state structure's ->missed field is sometimes incremented locklessly, and it would be good to avoid lost counts. This is also needed to count the number of misses due to trylock failure. Therefore, convert the ratelimit_state structure's ->missed field to atomic_t.

Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/
Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
2025-05-08 | ratelimit: Create functions to handle ratelimit_state internals | Paul E. McKenney | 1 file, -4/+4

A number of ratelimit use cases do open-coded access to the ratelimit_state structure's ->missed field. This works, but is a bit messy and makes it more annoying to make changes to this field. Therefore, provide a ratelimit_state_inc_miss() function that increments the ->missed field, a ratelimit_state_get_miss() function that reads out the ->missed field, and a ratelimit_state_reset_miss() function that reads out that field but also resets its value to zero. These functions will replace client-code open-coded uses of ->missed. In addition, a new ratelimit_state_reset_interval() function encapsulates what was previously open-coded lock acquisition and direct field updates.

[ paulmck: Apply kernel test robot feedback. ]

Link: https://lore.kernel.org/all/fbe93a52-365e-47fe-93a4-44a44547d601@paulmck-laptop/
Link: https://lore.kernel.org/all/20250423115409.3425-1-spasswolf@web.de/
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
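Once ->missed is an atomic_t, the first three helpers reduce to thin wrappers; a sketch (the actual definitions in include/linux/ratelimit.h may differ in detail):

    static inline void ratelimit_state_inc_miss(struct ratelimit_state *rs)
    {
            atomic_inc(&rs->missed);
    }

    static inline int ratelimit_state_get_miss(struct ratelimit_state *rs)
    {
            return atomic_read(&rs->missed);
    }

    static inline int ratelimit_state_reset_miss(struct ratelimit_state *rs)
    {
            /* Read and clear in one atomic step so no miss is lost. */
            return atomic_xchg(&rs->missed, 0);
    }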
2025-05-08 | lib/tests: randstruct: Add deep function pointer layout test | Kees Cook | 1 file, -14/+65

The recent fix in commit c2ea09b193d2 ("randstruct: gcc-plugin: Remove bogus void member") has fixed another issue: the plugin was not always detecting composite structures made only of function pointers and structures of function pointers. Add a test for this case, and break out the layout tests, since this issue is actually a problem for Clang as well [1].

Link: https://github.com/llvm/llvm-project/issues/138355 [1]
Link: https://lore.kernel.org/r/20250502224116.work.591-kees@kernel.org
Signed-off-by: Kees Cook <kees@kernel.org>
2025-05-08 | lib/tests: Add randstruct KUnit test | Kees Cook | 3 files, -0/+292

Perform basic validation of layout randomization and initialization tracking when using CONFIG_RANDSTRUCT=y. Tested using:

    $ ./tools/testing/kunit/kunit.py run \
            --kconfig_add CONFIG_RANDSTRUCT_FULL=y \
            randstruct
    [17:22:30] ================= randstruct (2 subtests) ==================
    [17:22:30] [PASSED] randstruct_layout
    [17:22:30] [PASSED] randstruct_initializers
    [17:22:30] =================== [PASSED] randstruct ====================
    [17:22:30] ============================================================
    [17:22:30] Testing complete. Ran 2 tests: passed: 2
    [17:22:30] Elapsed time: 5.091s total, 0.001s configuring, 4.974s building, 0.086s running

Adding "--make_option LLVM=1" can be used to test Clang, which also passes.

Acked-by: David Gow <davidgow@google.com>
Signed-off-by: Kees Cook <kees@kernel.org>
2025-05-08 | kbuild: Switch from -Wvla to -Wvla-larger-than=1 | Kees Cook | 1 file, -1/+0

Variable Length Arrays (VLAs) on the stack must not be used in the kernel. Function parameter VLAs [1] should remain usable, but -Wvla will warn for those too. For example, this will produce a warning even though it is not using a stack VLA:

    int something(size_t n, int array[n]) { ...

Clang has no way yet to distinguish between the VLA types [2], so depend on GCC for now to keep stack VLAs out of the tree by using GCC's -Wvla-larger-than=N option (though GCC may split -Wvla similarly [3] to how Clang is planning to). While GCC 8+ supports -Wvla-larger-than, only GCC 9+ supports ...=0 [4], so use -Wvla-larger-than=1, as sketched below. Adjust mm/kasan/Makefile to remove it from CFLAGS (GCC <9 appears unable to disable the warning correctly [5]).

The VLA usage in lib/test_ubsan.c was removed in commit 9d7ca61b1366 ("lib/test_ubsan.c: VLA no longer used in kernel"), so the lib/Makefile disabling of VLA checking can be removed entirely.

Link: https://en.cppreference.com/w/c/language/array [1]
Link: https://github.com/llvm/llvm-project/issues/57098 [2]
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98217 [3]
Link: https://lore.kernel.org/lkml/7780883c-0ac8-4aaa-b850-469e33b50672@linux.ibm.com/ [4]
Link: https://lore.kernel.org/r/202505071331.4iOzqmuE-lkp@intel.com/ [5]
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com>
Link: https://lore.kernel.org/r/20250418213235.work.532-kees@kernel.org
Signed-off-by: Kees Cook <kees@kernel.org>
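In Makefile terms the switch looks roughly like this (a sketch only; the actual hunk and its location in the kbuild files may differ):

    # Old: warn on every VLA, including legitimate parameter VLAs.
    #KBUILD_CFLAGS += -Wvla
    # New: warn only on stack VLAs. GCC 9+ would accept =0;
    # =1 keeps GCC 8 working and is equivalent in practice.
    KBUILD_CFLAGS += $(call cc-option, -Wvla-larger-than=1)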
2025-05-08 | kunit/overflow: Add tests for STACK_FLEX_ARRAY_SIZE() helper | Gustavo A. R. Silva | 1 file, -0/+4

Add a couple of tests for the new STACK_FLEX_ARRAY_SIZE() helper.

Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/c127631a03cdd7f59bfa091b9666a93bf69d0322.1745355442.git.gustavoars@kernel.org
Signed-off-by: Kees Cook <kees@kernel.org>
2025-05-07 | KVM: arm64: Introduce CONFIG_UBSAN_KVM_EL2 | Mostafa Saleh | 2 files, -2/+13

Add a new Kconfig option, CONFIG_UBSAN_KVM_EL2, which enables UBSAN for KVM's EL2 code (in the protected/nVHE/hVHE modes). This re-uses the same checks enabled for the kernel for the hypervisor. The only difference is that for EL2 it always emits a "brk" instead of implementing hooks, as the hypervisor can't print reports. The KVM code re-uses the kernel's report_ubsan_failure(), so the #ifdefs are changed to also include this code for CONFIG_UBSAN_KVM_EL2.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
Reviewed-by: Kees Cook <kees@kernel.org>
Link: https://lore.kernel.org/r/20250430162713.1997569-4-smostafa@google.com
Signed-off-by: Marc Zyngier <maz@kernel.org>
2025-05-07 | ubsan: Remove regs from report_ubsan_failure() | Mostafa Saleh | 1 file, -1/+1

report_ubsan_failure() doesn't use its regs argument, and soon it will be called from the hypervisor context, where regs are not available. So, remove the unused argument, as sketched below.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
Acked-by: Kees Cook <kees@kernel.org>
Link: https://lore.kernel.org/r/20250430162713.1997569-3-smostafa@google.com
Signed-off-by: Marc Zyngier <maz@kernel.org>
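The change amounts to dropping the first parameter (a sketch of the prototypes; see the patch itself for the exact hunks):

    /* Before */
    void report_ubsan_failure(struct pt_regs *regs, u32 check_type);

    /* After: callable from contexts, such as the EL2 hypervisor,
     * that have no struct pt_regs to hand. */
    void report_ubsan_failure(u32 check_type);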
2025-05-06 | crypto: lib/poly1305 - Build main library on LIB_POLY1305 and split generic code out | Herbert Xu | 3 files, -11/+30

Split the lib poly1305 code just as was done with sha256. Make the main library code conditional on LIB_POLY1305 instead of LIB_POLY1305_GENERIC.

Reported-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Fixes: 10a6d72ea355 ("crypto: lib/poly1305 - Use block-only interface")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-06 | Merge drm/drm-next into drm-misc-next | Thomas Zimmermann | 10 files, -16/+39

Backmerging drm-next to get fixes from v6.15-rc5.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
2025-05-06 | BackMerge tag 'v6.15-rc5' into drm-next | Dave Airlie | 10 files, -16/+39

Linux 6.15-rc5, requested by tzimmerman for fixes required in drm-next.

Signed-off-by: Dave Airlie <airlied@redhat.com>
2025-05-05 | devres: Export devm_ioremap_resource_wc() | Rob Herring (Arm) | 1 file, -0/+1

devm_ioremap_resource_wc() is not exported, so add an export for it.

Link: https://lore.kernel.org/r/20250423-dt-memory-region-v2-v2-3-2fbd6ebd3c88@kernel.org
Acked-by: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
Signed-off-by: Rob Herring (Arm) <robh@kernel.org>
2025-05-05 | crypto: lib/sha256 - Use generic block helper | Herbert Xu | 1 file, -61/+10

Use the BLOCK_HASH_UPDATE_BLOCKS helper instead of duplicating partial block handling. Also remove the unused lib/sha256 force-generic interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05 | crypto: lib/sha256 - Add helpers for block-based shash | Herbert Xu | 2 files, -23/+17

Add an internal sha256_finup helper and move the finalisation code from __sha256_final into it. Also add sha256_choose_blocks and CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD so that the Crypto API can use the SIMD block function unconditionally. The Crypto API must not be used in hard IRQs, and there is no reason to have a fallback path for hardirqs.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05 | crypto: lib/sha256 - improve function prototypes | Eric Biggers | 1 file, -4/+4

Follow best practices by changing the length parameters to size_t and explicitly specifying the length of the output digest arrays.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
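The resulting prototypes look roughly like this (sketched from the commit description, not copied from the patch):

    /* Before: int-width length, unbounded output pointer. */
    void sha256(const u8 *data, unsigned int len, u8 *out);

    /* After: size_t length, explicitly sized digest array. */
    void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]);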
2025-05-05 | crypto: sparc/sha256 - implement library instead of shash | Eric Biggers | 1 file, -0/+3

Instead of providing crypto_shash algorithms for the arch-optimized SHA-256 code, implement the SHA-256 library. This is much simpler, it makes the SHA-256 library functions be arch-optimized, and it fixes the longstanding issue where the arch-optimized SHA-256 was disabled by default. SHA-256 still remains available through crypto_shash, but individual architectures no longer need to handle it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05 | crypto: sha256 - support arch-optimized lib and expose through shash | Eric Biggers | 4 files, -109/+246

As has been done for various other algorithms, rework the design of the SHA-256 library to support arch-optimized implementations, and make crypto/sha256.c expose both generic and arch-optimized shash algorithms that wrap the library functions. This allows users of the SHA-256 library functions to take advantage of the arch-optimized code, and it makes it much simpler to integrate SHA-256 for each architecture.

Note that sha256_base.h is not used in the new design. It will be removed once all the architecture-specific code has been updated.

Move the generic block function into its own module to avoid a circular dependency from libsha256.ko => sha256-$ARCH.ko => libsha256.ko.

Signed-off-by: Eric Biggers <ebiggers@google.com>

Add export and import functions to maintain the existing export format.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05 | crypto: lib/poly1305 - Use block-only interface | Herbert Xu | 1 file, -13/+26

Now that every architecture provides a block function, use that to implement lib/poly1305 and remove the old per-arch code.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05 | crypto: lib/poly1305 - Add block-only interface | Herbert Xu | 1 file, -31/+23

Add a block-only interface for poly1305, implementing the generic code first. Also use the generic partial block helper.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux v6.15-rc5 | Herbert Xu | 11 files, -47/+65

Merge mainline to pick up the bcachefs poly1305 patch 4bf4b5046de0 ("bcachefs: use library APIs for ChaCha20 and Poly1305"). This is a prerequisite for removing the poly1305 shash algorithm.
2025-05-01 | ASoC: codec: twl4030: Convert to GPIO descriptors | Mark Brown | 10 files, -16/+39

Merge series from "Peng Fan (OSS)" <peng.fan@oss.nxp.com>:

This is separated from [1], with an update that sorts the headers in a separate patch. No other changes, so I still keep Linus' R-b for patch 2.

[1] https://lore.kernel.org/all/20250408-asoc-gpio-v1-3-c0db9d3fd6e9@nxp.com/
2025-04-30 | gcc-plugins: remove SANCOV gcc plugin | Arnd Bergmann | 1 file, -6/+0

With the minimum gcc version raised to 8.1, all supported compilers now understand the -fsanitize-coverage=trace-pc option, and there is no longer a need for the separate compiler plugin. Since, for several years now, only gcc-5 was able to use the plugin, it was already likely unused.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2025-04-30 | Kbuild: remove structleak gcc plugin | Arnd Bergmann | 2 files, -12/+2

gcc-12 and higher support the -ftrivial-auto-var-init= flag. With gcc-8 as the minimum version, this covers half of the supported compilers, and the vast majority of the versions that users are actually likely to have, so it seems like a good time to stop carrying the fallback plugin implementation.

Older toolchains are still able to build kernels normally without this plugin, but won't be able to use variable initialization.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2025-04-30 | raid6: skip avx512 checks | Arnd Bergmann | 4 files, -19/+0

It is no longer necessary to check for CONFIG_AS_AVX512, since the minimum assembler version is now from binutils-2.30, which always supports it.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2025-04-30 | kbuild: require gcc-8 and binutils-2.30 | Arnd Bergmann | 1 file, -4/+1

Commit a3e8fe814ad1 ("x86/build: Raise the minimum GCC version to 8.1") raised the minimum compiler version, as enforced by Kbuild, to gcc-8.1 and clang-15 for x86. This is actually the same gcc version that has been discussed as the minimum for all architectures several times in the past, with little objection. A previous concern was the kernel for SLE15-SP7 needing to be built with gcc-7. As this ended up still using linux-6.4, and there is no plan for an SP8, this is no longer a problem.

Change it for all architectures and adjust the documentation accordingly. A few version checks can be removed in the process. Binutils version 2.30 is the lowest version used in combination with gcc-8 on common distros, so use that as the corresponding minimum.

Link: https://lore.kernel.org/lkml/20240925150059.3955569-32-ardb+git@google.com/
Link: https://lore.kernel.org/lkml/871q7yxrgv.wl-tiwai@suse.de/
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2025-04-29 | test_bits: add tests for BIT_U*() | Vincent Mailhol | 1 file, -0/+10

Add some additional tests in lib/tests/test_bits.c to cover the expected results of the fixed-type BIT_U*() macros, along the lines sketched below.

Signed-off-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
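A sketch of the kind of checks such tests perform (illustrative; the actual assertions in lib/tests/test_bits.c may differ):

    static void bit_u_example(struct kunit *test)
    {
            /* Fixed-type variants return values of the named width. */
            KUNIT_EXPECT_EQ(test, BIT_U8(7), (u8)0x80);
            KUNIT_EXPECT_EQ(test, BIT_U16(15), (u16)0x8000);
            KUNIT_EXPECT_EQ(test, BIT_U32(31), (u32)0x80000000);
            KUNIT_EXPECT_EQ(test, BIT_U64(63), (u64)0x8000000000000000);
    }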