path: root/tools/perf/scripts/python/export-to-postgresql.py
Age | Commit message | Author | Files | Lines
2024-11-05 | gfs2: gfs2_evict_inode clarification | Andreas Gruenbacher | 1 | -3/+3
When function evict_should_delete() returns SHOULD_DEFER_EVICTION, gh is never initialized, but that isn't obvious; if it did initialize gh and then return SHOULD_DEFER_EVICTION, gfs2_evict_inode() would fail to release it. To clarify the code, change gfs2_evict_inode() to always check if gh needs to be released, no matter what evict_should_delete() returns. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
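A minimal sketch of the release pattern described above, assuming the current shapes of evict_should_delete() and the gfs2 holder helpers; this is not the exact upstream hunk:
```c
/* gfs2_evict_inode(), sketched: release the holder whenever it was
 * initialized, independent of what evict_should_delete() returned. */
enum evict_behavior behavior = evict_should_delete(inode, &gh);

/* ... delete / skip / defer handling ... */

if (gfs2_holder_initialized(&gh))
	gfs2_glock_dq_uninit(&gh);
```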
2024-11-05 | gfs2: Make gfs2_inode_refresh static | Andreas Gruenbacher | 2 | -3/+1
Function gfs2_inode_refresh() is only used in fs/gfs2/glops.c. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2024-11-05 | gfs2: Use get_random_u32 in gfs2_orlov_skip | Andreas Gruenbacher | 1 | -3/+1
Use get_random_u32() instead of get_random_bytes() to remove the last remaining call to get_random_bytes(). Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
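For illustration, a hedged before/after of the API swap described above (the surrounding gfs2_orlov_skip() code is assumed, not quoted from the diff):
```c
#include <linux/random.h>

/* Before: fill a 32-bit value through the generic byte interface. */
static u32 orlov_skip_before(void)
{
	u32 skip;

	get_random_bytes(&skip, sizeof(skip));
	return skip;
}

/* After: request a 32-bit random value directly. */
static u32 orlov_skip_after(void)
{
	return get_random_u32();
}
```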
2024-11-05 | gfs2: Randomize GLF_VERIFY_DELETE work delay | Andreas Gruenbacher | 1 | -1/+2
Randomize the delay of GLF_VERIFY_DELETE work. This avoids thundering herd problems when multiple nodes schedule that kind of work in response to an inode being unlinked remotely. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2024-11-05 | gfs2: Use mod_delayed_work in gfs2_queue_try_to_evict | Andreas Gruenbacher | 1 | -2/+1
In the unlikely case that we're trying to queue GLF_TRY_TO_EVICT work for an inode that already has GLF_VERIFY_DELETE work queued, we want to make sure that the GLF_TRY_TO_EVICT work gets scheduled immediately instead of waiting for the delayed work timer to expire. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
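A sketch of why mod_delayed_work() helps here, with a placeholder workqueue name rather than the exact gfs2 code:
```c
/* queue_delayed_work() does nothing if the work item is already pending,
 * so previously queued GLF_VERIFY_DELETE work would keep its old delay. */
queue_delayed_work(wq, &gl->gl_delete, 0);

/* mod_delayed_work() resets the timer of an already-pending item, so the
 * GLF_TRY_TO_EVICT work runs immediately instead of after the old delay. */
mod_delayed_work(wq, &gl->gl_delete, 0);
```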
2024-11-05 | gfs2: Update to the evict / remote delete documentation | Andreas Gruenbacher | 2 | -26/+11
Try to be a bit clearer and remove some duplication. We cannot actually get rid of the verification step eventually, so remove the comment claiming that we eventually can. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2024-11-05 | gfs2: Call gfs2_queue_verify_delete from gfs2_evict_inode | Andreas Gruenbacher | 2 | -12/+10
Move calls to gfs2_queue_verify_delete() into gfs2_evict_inode(). Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2024-11-05 | gfs2: Clean up delete work processing | Andreas Gruenbacher | 1 | -7/+7
Function delete_work_func() was previously assuming that the GLF_TRY_TO_EVICT and GLF_VERIFY_DELETE flags won't both be set at the same time, but there probably are races in which that can happen, so handle that case correctly. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2024-11-05 | gfs2: Minor delete_work_func cleanup | Andreas Gruenbacher | 1 | -2/+3
Move those definitions into the scope in which they are used. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2024-11-05 | gfs2: Return enum evict_behavior from gfs2_upgrade_iopen_glock | Andreas Gruenbacher | 1 | -6/+14
In case an iopen glock cannot be upgraded, function gfs2_upgrade_iopen_glock() needs to communicate to gfs2_evict_inode() whether deleting the inode should be deferred or skipped altogether. Change the function to return the appropriate enum evict_behavior value to indicate that. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2024-11-05 | gfs2: Rename dinode_demise to evict_behavior | Andreas Gruenbacher | 1 | -18/+19
Rename enum dinode_demise to evict_behavior and its items SHOULD_DELETE_DINODE to EVICT_SHOULD_DELETE, SHOULD_NOT_DELETE_DINODE to EVICT_SHOULD_SKIP_DELETE, and SHOULD_DEFER_EVICTION to EVICT_SHOULD_DEFER_DELETE. In gfs2_evict_inode(), add a separate variable of type enum evict_behavior instead of implicitly casting to int. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
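Reconstructed from the names in the commit text (not copied from the diff), the renamed enum looks like:
```c
enum evict_behavior {
	EVICT_SHOULD_DELETE,
	EVICT_SHOULD_SKIP_DELETE,
	EVICT_SHOULD_DEFER_DELETE,
};
```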
2024-11-05 | gfs2: Rename GIF_{DEFERRED -> DEFER}_DELETE | Andreas Gruenbacher | 3 | -4/+4
The GIF_DEFERRED_DELETE flag indicates an action that gfs2_evict_inode() should take, so rename the flag to GIF_DEFER_DELETE to clarify. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2024-11-05 | gfs2: Faster gfs2_upgrade_iopen_glock wakeups | Andreas Gruenbacher | 3 | -18/+15
Move function needs_demote() to glock.h and rename it to glock_needs_demote(). In handle_callback(), wake up the glock when setting the GLF_PENDING_DEMOTE flag as well. (Setting the GLF_DEMOTE flag already triggered a wake-up.) With that, check for glock_needs_demote() in gfs2_upgrade_iopen_glock() to wake up when either of those flags is set for the inode glock: the faster we can react to contention, the better. The GLF_PENDING_DEMOTE flag is only used for inode glocks (see gfs2_glock_cb()) so it's okay to only check for the GLF_DEMOTE flag in gfs2_drop_inode(). Still, using glock_needs_demote() there as well makes the code a little easier to read. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
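A plausible shape for the moved helper, assuming it simply tests the two demote flags named above; the real definition lives in glock.h:
```c
static inline bool glock_needs_demote(struct gfs2_glock *gl)
{
	return test_bit(GLF_DEMOTE, &gl->gl_flags) ||
	       test_bit(GLF_PENDING_DEMOTE, &gl->gl_flags);
}
```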
2024-10-22 | KMSAN: uninit-value in inode_go_dump (5) | Qianqiang Liu | 1 | -0/+2
When mounting of a corrupted disk image fails, the error message printed can reference uninitialized inode fields. To prevent that from happening, always initialize those fields. Reported-by: syzbot+aa0730b0a42646eb1359@syzkaller.appspotmail.com Signed-off-by: Qianqiang Liu <qianqiang.liu@163.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2024-09-25 | gfs2: Fix unlinked inode cleanup | Andreas Gruenbacher | 4 | -3/+4
Before commit f0e56edc2ec7 ("gfs2: Split the two kinds of glock "delete" work"), function delete_work_func() was used to trigger the eviction of in-memory inodes from remote as well as deleting unlinked inodes at a later point. That commit then split this into two separate kinds of work, and the two places in the code where deferred deletion of inodes is required accidentally ended up queuing the wrong kind of work. This caused unlinked inodes to be left behind, which could in the worst case fill up filesystems and require a filesystem check to recover. Fix that by queuing the right kind of work in try_rgrp_unlink() and gfs2_drop_inode(). Fixes: f0e56edc2ec7 ("gfs2: Split the two kinds of glock "delete" work") Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2024-09-25 | gfs2: Allow immediate GLF_VERIFY_DELETE work | Andreas Gruenbacher | 1 | -5/+6
Add an argument to gfs2_queue_verify_delete() that allows it to queue GLF_VERIFY_DELETE work for immediate execution. This is used in the next patch. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2024-09-24 | gfs2: Initialize gl_no_formal_ino earlier | Andreas Gruenbacher | 3 | -2/+9
Set gl_no_formal_ino of the iopen glock to the generation of the associated inode (ip->i_no_formal_ino) as soon as that value is known. This saves us from setting it later, possibly repeatedly, when queuing GLF_VERIFY_DELETE work. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2024-09-24 | gfs2: Rename GLF_VERIFY_EVICT to GLF_VERIFY_DELETE | Andreas Gruenbacher | 2 | -8/+8
Rename the GLF_VERIFY_EVICT flag to GLF_VERIFY_DELETE: that flag indicates that we want to delete an inode / verify that it has been deleted. To match, rename gfs2_queue_verify_evict() to gfs2_queue_verify_delete(). Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2024-09-23 | mm: fix build on 32-bit targets without MAX_PHYSMEM_BITS | Linus Torvalds | 1 | -1/+1
The merge resolution to deal with the conflict between commits ea72ce5da228 ("x86/kaslr: Expose and use the end of the physical memory address space") and 99185c10d5d9 ("resource, kunit: add test case for region_intersects()") ended up being broken in configurations that didn't define a MAX_PHYSMEM_BITS and that had a 32-bit 'phys_addr_t'. The fallback to using all bits set (i.e. "(-1ULL)") ended up causing a build error: kernel/resource.c: In function ‘gfr_start’: include/linux/minmax.h:93:30: error: conversion from ‘long long unsigned int’ to ‘resource_size_t’ {aka ‘unsigned int’} changes value from ‘18446744073709551615’ to ‘4294967295’ [-Werror=overflow] This was reported by Geert for m68k, but he points out that it happens on other 32-bit architectures too, e.g. mips, xtensa, parisc, and powerpc. Limiting 'PHYSMEM_END' to a 'phys_addr_t' (which is the same as 'resource_size_t') fixes the build, but Geert points out that it will then cause a silent overflow in mm/sparse.c: unsigned long max_sparsemem_pfn = (PHYSMEM_END + 1) >> PAGE_SHIFT; so we actually do want PHYSMEM_END to be defined as a 64-bit type - just not all ones, and not larger than 'phys_addr_t'. The proper fix is probably to not have some kind of default fallback at all, but just make sure every architecture has a valid MAX_PHYSMEM_BITS. But in the meantime, this just applies the rule that PHYSMEM_END is the largest value that fits in a 'phys_addr_t', but does not have the high bit set in 64 bits. Ugly, ugly. Reported-by: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Huang Ying <ying.huang@intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
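A rough reconstruction of the interim rule described above (the largest phys_addr_t value without the 64-bit high bit), not necessarily the exact upstream macro:
```c
#ifndef MAX_PHYSMEM_BITS
/* fallback: all bits of phys_addr_t set, except the topmost of 64 bits */
# define PHYSMEM_END	(((phys_addr_t)-1) & ~(1ULL << 63))
#endif
```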
2024-09-23 | hexagon: vdso: Fix build failure | Guenter Roeck | 1 | -1/+1
Hexagon images fail to build with the following error. arch/hexagon/kernel/vdso.c:57:3: error: use of undeclared identifier 'name' name = "[vdso]", ^ Add the missing '.' to fix the problem. Fixes: 497258dfafcc ("mm: remove legacy install_special_mapping() code") Cc: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Reviewed-by: Brian Cain <bcain@quicinc.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
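An illustration of the one-character fix implied by the error above; the variable name is assumed, but the missing designator dot is the point:
```c
static struct vm_special_mapping vdso_mapping = {
	/* was: name = "[vdso]",  -- 'name' is an undeclared identifier there */
	.name = "[vdso]",
};
```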
2024-09-22 | seqcount: replace smp_rmb() in read_seqcount() with load acquire | Christoph Lameter (Ampere) | 1 | -20/+5
Many architectures support load acquire which can replace a memory barrier and save some cycles. A typical sequence do { seq = read_seqcount_begin(&s); <something> } while (read_seqcount_retry(&s, seq)); requires 13 cycles on an N1 Neoverse arm64 core (Ampere Altra, to be specific) for an empty loop. Two read memory barriers are needed. One for each of the seqcount_* functions. We can replace the first read barrier with a load acquire of the seqcount which saves us one barrier. On the Altra doing so reduces the cycle count from 13 to 8. According to ARM, this is a general improvement for the ARM64 architecture and not specific to a certain processor. See https://developer.arm.com/documentation/102336/0100/Load-Acquire-and-Store-Release-instructions "Weaker ordering requirements that are imposed by Load-Acquire and Store-Release instructions allow for micro-architectural optimizations, which could reduce some of the performance impacts that are otherwise imposed by an explicit memory barrier. If the ordering requirement is satisfied using either a Load-Acquire or Store-Release, then it would be preferable to use these instructions instead of a DMB" [ NOTE! This is my original minimal patch that unconditionally switches over to using smp_load_acquire(), instead of the much more involved and subtle patch that Christoph Lameter wrote that made it conditional. But Christoph gets authorship credit because I had initially thought that we needed the more complex model, and Christoph ran with it and did the work. Only after looking at code generation for all the relevant architectures, did I come to the conclusion that nobody actually really needs the old "smp_rmb()" model. Even architectures without load-acquire support generally do as well or better with smp_load_acquire(). So credit to Christoph, but if this then causes issues on other architectures, put the blame solidly on me. Also note as part of the ruthless simplification, this gets rid of the overly subtle optimization where some code uses a non-barrier version of the sequence count (see the __read_seqcount_begin() users in fs/namei.c). They then play games with their own barriers and/or with nested sequence counts. Those optimizations are literally meaningless on x86, and questionable elsewhere. If somebody can show that they matter, we need to re-do them more cleanly than "use an internal helper". - Linus ] Signed-off-by: Christoph Lameter (Ampere) <cl@gentwo.org> Link: https://lore.kernel.org/all/20240912-seq_optimize-v3-1-8ee25e04dffa@gentwo.org/ Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
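A simplified sketch of the substitution, not the actual seqlock.h code; the first barrier pairs with the counter load, so an acquire load can replace it:
```c
/* before (simplified): plain load of the counter plus a read barrier */
seq = READ_ONCE(s->sequence);
smp_rmb();

/* after (simplified): the acquire load orders the counter read before the
 * subsequent reads of the protected data, with no separate barrier */
seq = smp_load_acquire(&s->sequence);
```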
2024-09-22 | x86: make the masked_user_access_begin() macro use its argument only once | Linus Torvalds | 1 | -2/+3
This doesn't actually matter for any of the current users, but before merging it mainline, make sure we don't have any surprising semantics. We don't actually want to use an inline function here, because we want to allow - but not require - const pointer arguments, and return them as such. But we already had a local auto-type variable, so let's just use it to avoid any possible double evaluation. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
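A generic illustration of the single-evaluation pattern being described; the macro name here is made up and this is not the x86 uaccess code:
```c
#define capture_once(ptr) ({						\
	__auto_type __p = (ptr);	/* evaluate the argument once */ \
	/* ... __p may now be used repeatedly; const-ness is kept ... */ \
	__p;								\
})
```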
2024-09-22 | perf: Fix topology_sibling_cpumask check warning on ARM | Kan Liang | 1 | -5/+5
The below warning is triggered when building with arm multi_v7_defconfig. kernel/events/core.c: In function 'perf_event_setup_cpumask': kernel/events/core.c:14012:13: warning: the comparison will always evaluate as 'true' for the address of 'thread_sibling' will never be NULL [-Waddress] 14012 | if (!topology_sibling_cpumask(cpu)) { perf_event_init_cpu() may be invoked at the early boot stage, while the topology_*_cpumask hasn't been initialized yet. The check is to specially handle the case, and initialize the perf_online_<domain>_masks on the boot CPU. X86 uses a per-cpu cpumask pointer, which could be NULL at the early boot stage. However, ARM uses a global variable, which will never be NULL. Use perf_online_mask as an indicator instead. Only initialize the perf_online_<domain>_masks when perf_online_mask is empty. Fix a typo as well. Fixes: 4ba4f1afb6a9 ("perf: Generic hotplug support for a PMU with a scope") Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Closes: https://lore.kernel.org/lkml/20240911153854.240bbc1f@canb.auug.org.au/ Reported-by: Steven Price <steven.price@arm.com> Closes: https://lore.kernel.org/lkml/1835eb6d-3e05-47f3-9eae-507ce165c3bf@arm.com/ Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Tested-by: Steven Price <steven.price@arm.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
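A hedged sketch of the indicator change (simplified, not the exact core.c hunk):
```c
/* Instead of NULL-checking topology_sibling_cpumask(cpu) -- a never-NULL
 * global on ARM -- key off whether perf_online_mask has been populated. */
if (cpumask_empty(&perf_online_mask)) {
	/* early boot on the boot CPU: set up the perf_online_<domain>_masks */
}
```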
2024-09-21 | bcachefs: return err ptr instead of null in read sb clean | Diogo Jahchan Koike | 1 | -1/+1
syzbot reported a null-ptr-deref in bch2_fs_start [0]. When a sb is marked clean but doesn't have a clean section, bch2_read_superblock_clean returns NULL, which PTR_ERR_OR_ZERO lets through, eventually leading to a null ptr dereference down the line. Adjust read sb clean to return an ERR_PTR indicating the invalid clean section. [0] https://syzkaller.appspot.com/bug?extid=1cecc37d87c4286e5543 Reported-by: syzbot+1cecc37d87c4286e5543@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=1cecc37d87c4286e5543 Signed-off-by: Diogo Jahchan Koike <djahchankoike@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
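A sketch of the pattern, with a placeholder condition and errno rather than the real bcachefs error code:
```c
/* In bch2_read_superblock_clean(): report the missing clean section
 * instead of returning NULL ("clean_section_present" is a placeholder). */
if (!clean_section_present)
	return ERR_PTR(-EINVAL);	/* placeholder error code */

/* Caller side: PTR_ERR_OR_ZERO() now actually sees the failure. */
struct bch_sb_field_clean *clean = bch2_read_superblock_clean(c);
int ret = PTR_ERR_OR_ZERO(clean);
if (ret)
	goto err;
```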
2024-09-21 | bcachefs: Remove duplicated include in backpointers.c | Yang Li | 1 | -1/+0
The header file bbpos.h is included twice in backpointers.c, so one of the two inclusions can be removed. Reported-by: Abaci Robot <abaci@linux.alibaba.com> Closes: https://bugzilla.openanolis.cn/show_bug.cgi?id=10783 Signed-off-by: Yang Li <yang.lee@linux.alibaba.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: Don't drop devices with stripe pointers | Kent Overstreet | 4 | -9/+32
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: bch2_ec_stripe_head_get() now checks for change in rw devices | Kent Overstreet | 2 | -27/+60
This factors out ec_stripe_head_devs_update(), which initializes the bitmap of devices we're allocating from, and runs it every time c->rw_devs_change_count changes. We also cancel pending, not-yet-allocated stripes, since they may refer to devices that are no longer available. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: bch_fs.rw_devs_change_count | Kent Overstreet | 2 | -4/+9
Add a counter that's incremented whenever rw devices change; this will be used for erasure coding so that it can keep ec_stripe_head in sync and not deadlock on a new stripe when a device it wants goes away. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: bch2_dev_remove_stripes() | Kent Overstreet | 4 | -3/+74
We can now correctly force-remove a device that has stripes on it; this uses the new BCH_SB_MEMBER_INVALID sentinel value. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: bch2_trigger_ptr() calculates sectors even when no device | Kent Overstreet | 2 | -10/+21
This is necessary for erasure coded pointers to devices that have been removed. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: improve error messages in bch2_ec_read_extent() | Kent Overstreet | 3 | -19/+23
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: improve error message on too few devices for ec | Kent Overstreet | 1 | -3/+16
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: improve bch2_new_stripe_to_text() | Kent Overstreet | 1 | -0/+2
also print out the new stripe key Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: ec_stripe_head.nr_created | Kent Overstreet | 2 | -2/+6
additional debug stat Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: bch_stripe.disk_label | Kent Overstreet | 4 | -16/+43
When reshaping existing stripes, we should keep them on the same target that they were allocated on; to do this, we need to add a field to the btree stripe type. This is a tad awkward, because we only have 8 bits left, and targets are 16 bits - but we only need to store a label, not a full target. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: stripe_to_mem() | Kent Overstreet | 1 | -18/+15
factor out a common helper Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: EIO errcode cleanup | Kent Overstreet | 5 | -27/+33
We want to be using private errcodes whenever possible, for better error messages. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: Rework btree node pinning | Kent Overstreet | 7 | -75/+150
In backpointers fsck, we do a sequential scan of one btree, and check references to another: extents <-> backpointers. Checking references generates random lookups, so we want to pin that btree in memory (or only a range, if it doesn't fit in ram). Previously, this was done with a simple check in the shrinker - "if btree node is in range being pinned, don't free it" - but this generated OOMs, as our shrinker wasn't well behaved if there was less memory available than expected. Instead, we now have two different shrinkers and lru lists; the second shrinker being for pinned nodes, with seeks set much higher than normal - so they can still be freed if necessary, but we'll prefer not to. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: split up btree cache counters for live, freeable | Kent Overstreet | 6 | -32/+47
this is prep for introducing a second live list and shrinker for pinned nodes Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: btree cache counters should be size_t | Kent Overstreet | 6 | -36/+37
32 bits won't overflow any time soon, but size_t is the correct type for counting objects in memory. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: Don't count "skipped access bit" as touched in btree cache scan | Kent Overstreet | 1 | -0/+1
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: Failed devices no longer require mounting in degraded mode | Kent Overstreet | 1 | -1/+1
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: bch2_dev_rcu_noerror() | Kent Overstreet | 6 | -13/+22
bch2_dev_rcu() now properly errors if the device is invalid Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: Progress indicator for extents_to_backpointers | Kent Overstreet | 1 | -6/+82
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: bch2_opts_to_text() | Kent Overstreet | 3 | -21/+35
Factor out bch2_show_options() into a generic helper, for debugging option passing issues. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: improve "no device to read from" message | Kent Overstreet | 1 | -1/+7
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: Fix compilation error for bch2_sb_member_alloc | Hongbo Li | 1 | -6/+10
Fix the following compilation error:
```
fs/bcachefs/sb-members.c: In function ‘bch2_sb_member_alloc’:
fs/bcachefs/sb-members.c:508:2: error: a label can only be part of a statement and a declaration is not a statement
  508 | unsigned nr_devices = max_t(unsigned, dev_idx + 1, c->sb.nr_devices);
```
Fixes: a7d364a133c7 ("bcachefs: bch2_sb_member_alloc()") Signed-off-by: Hongbo Li <lihongbo22@huawei.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
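For context, a minimal standalone reproduction of this error class (illustrative, not the bcachefs code): before C23, a label must be followed by a statement, and a declaration is not one.
```c
void broken(int dev_idx)
{
	goto have_slot;
have_slot:
	unsigned nr_devices = dev_idx + 1;	/* error on older compilers */
	(void)nr_devices;
}

/* Common fix: an empty statement (or a braced block) after the label. */
void fixed(int dev_idx)
{
	goto have_slot;
have_slot:;
	unsigned nr_devices = dev_idx + 1;	/* OK */
	(void)nr_devices;
}
```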
2024-09-21 | bcachefs: bch2_sb_member_alloc() | Kent Overstreet | 3 | -46/+53
refactoring Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: bch2_dev_remove_alloc() -> alloc_background.c | Kent Overstreet | 3 | -27/+30
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-09-21 | bcachefs: Move tabstop setup to bch2_dev_usage_to_text() | Kent Overstreet | 2 | -7/+9
No reason for it not to be where it's needed. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>