2020-03-13  do_last(): rejoin the common path even earlier in FMODE_{OPENED,CREATED} case  [Al Viro, 1 file, -10/+4]
... getting may_create_in_sticky() checks in FMODE_OPENED case as well. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  do_last(): simplify the liveness analysis past finish_open_created  [Al Viro, 1 file, -17/+11]
Don't mess with got_write there - it is guaranteed to be false on entry and it will be set true if and only if we decide to go for truncation and manage to get write access for that. Don't carry acc_mode through the entire thing - it's only used in that part. And don't bother with gotos in there - compiler is quite capable of optimizing that. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  do_last(): rejoin the common path earlier in FMODE_{OPENED,CREATED} case  [Al Viro, 1 file, -13/+8]
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  do_last(): don't bother with keeping got_write in FMODE_OPENED case  [Al Viro, 1 file, -20/+11]
it's easier to drop it right after lookup_open() and regain if needed (i.e. if we will need to truncate). On the non-FMODE_OPENED path we do that anyway. In case of FMODE_CREATED we won't be needing it. And it's easier to prove correctness that way, especially since the initial failure to get write access is not always fatal; proving that we'll never end up truncating in that case is rather convoluted. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  do_last(): merge the may_open() calls  [Al Viro, 1 file, -7/+3]
have FMODE_OPENED case rejoin the main path at an earlier point Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  atomic_open(): lift the call of may_open() into do_last()  [Al Viro, 1 file, -15/+11]
there we'll be able to merge it with its counterparts in other cases, and there's no reason to do it before the parent has been unlocked Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  atomic_open(): return the right dentry in FMODE_OPENED case  [Al Viro, 1 file, -1/+5]
->atomic_open() might have used a different alias than the one we'd passed to it; in "not opened" case we take care of that, in "opened" one we don't. Currently we don't care downstream of "opened" case which alias to return; however, that will change shortly when we get to unifying may_open() calls. It's not hard to get right in all cases, anyway. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  new helper: traverse_mounts()  [Al Viro, 1 file, -105/+72]
common guts of follow_down() and follow_managed() taken to a new helper - traverse_mounts(). The remnants of follow_managed() are folded into its sole remaining caller (handle_mounts()). Calling conventions of handle_mounts() slightly sanitized - instead of the weird "1 for success, -E... for failure" that used to be imposed by the calling conventions of walk_component() et.al. we can use the normal "0 for success, -E... for failure". Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  massage __follow_mount_rcu() a bit  [Al Viro, 1 file, -35/+35]
make the loop more similar to that in follow_managed(), with explicit tracking of flags, etc. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  namei: have link_path_walk() maintain LOOKUP_PARENT  [Al Viro, 1 file, -11/+6]
set on entry, clear when we get to the last component. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  link_path_walk(): simplify stack handling  [Al Viro, 1 file, -9/+5]
We use nd->stack to store two things: pinning down the symlinks we are resolving and resuming the name traversal when a nested symlink is finished.

Currently, nd->depth is used to keep track of both. It's 0 when we call link_path_walk() for the first time (for the pathname itself) and 1 on all subsequent calls (for trailing symlinks, if any). That's fine, as far as pinning symlinks goes - when handling a trailing symlink, the string we are interpreting is the body of the symlink pinned down in nd->stack[0]. It's rather inconvenient with respect to handling nested symlinks, though - when we run out of a string we are currently interpreting, we need to decide whether it's a nested symlink (in which case we need to pick the string saved back when we started to interpret that nested symlink and resume its traversal) or not (in which case we are done with link_path_walk()).

The current solution is a bit of a kludge - in handling of trailing symlinks (in lookup_last() and open_last_lookups()) we clear nd->stack[0].name. That allows link_path_walk() to use the following rules when running out of a string to interpret:
  * if nd->depth is zero, we are at the end of the pathname itself.
  * if nd->depth is positive, check the saved string; for a nested symlink it will be non-NULL, for a trailing symlink - NULL.
It works, but it's rather non-obvious.

Note that we have two sets: the set of symlinks currently being traversed and the set of postponed pathname tails. The former is stored in nd->stack[0..nd->depth-1].link and it's valid throughout the pathname resolution; the latter is valid only during an individual call of link_path_walk() and it occupies nd->stack[0..nd->depth-1].name for the first call of link_path_walk() and nd->stack[1..nd->depth-1].name for subsequent ones. The kludge is basically a way to recognize the second set becoming empty.

Things get simpler if we keep track of the second set's size explicitly and always store it in nd->stack[0..depth-1].name. We access the second set only inside link_path_walk(), so its size can live in a local variable; that way the check becomes trivial without the need of that kludge. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
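For readers who want to see the "size in a local variable" idea in isolation, here is a small standalone C sketch (purely illustrative; the names and the toy link table are invented, and this is not the kernel's link_path_walk()): a walker keeps the postponed tails on an explicit stack and tracks how many are currently saved in a local counter instead of relying on a NULL sentinel.

    #include <stdio.h>
    #include <string.h>

    /* A made-up "filesystem": any component found here acts like a symlink. */
    static const char *lookup_link(const char *comp, size_t len)
    {
        static const struct { const char *name, *body; } links[] = {
            { "a", "x/y" }, { "y", "z" },
        };
        for (size_t i = 0; i < sizeof(links) / sizeof(links[0]); i++)
            if (strlen(links[i].name) == len && !strncmp(links[i].name, comp, len))
                return links[i].body;
        return NULL;
    }

    /* Resolve nested "links": saved tails live on an explicit stack, and the
     * number of saved tails lives in a local variable (depth) rather than
     * being inferred from a cleared entry. */
    static void resolve(const char *name)
    {
        const char *stack[8];   /* postponed tails of outer strings */
        int depth = 0;          /* how many tails are currently saved */

        for (;;) {
            while (*name == '/')
                name++;
            if (!*name) {                   /* ran out of the current string */
                if (!depth)                 /* no saved tails left: done */
                    break;
                name = stack[--depth];      /* resume the outer string */
                continue;
            }
            size_t len = strcspn(name, "/");
            const char *body = lookup_link(name, len);
            if (body && depth < 8) {
                stack[depth++] = name + len;    /* postpone the tail */
                name = body;                    /* walk the link body first */
            } else {
                printf("component: %.*s\n", (int)len, name);
                name += len;
            }
        }
    }

    int main(void)
    {
        resolve("a/b");     /* prints: x, z, b */
        return 0;
    }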
2020-03-13  pick_link(): check for WALK_TRAILING, not LOOKUP_PARENT  [Al Viro, 1 file, -5/+5]
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  namei: invert the meaning of WALK_FOLLOW  [Al Viro, 1 file, -6/+6]
    old  flags & WALK_FOLLOW   <=>   new  !(flags & WALK_TRAILING)

That's what that flag had really been used for. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  sanitize handling of nd->last_type, kill LAST_BIND  [Al Viro, 3 files, -8/+4]
->last_type values are set in 3 places: path_init() (sets to LAST_ROOT), link_path_walk (LAST_NORM/DOT/DOTDOT) and pick_link (LAST_BIND). They are checked in walk_component(), lookup_last() and do_last(). They also get copied to the caller by filename_parentat(). In the last 3 cases the value is what we had at the return from link_path_walk(). In case of walk_component() it's either directly downstream from assignment in link_path_walk() or, when called by lookup_last(), the value we have at the return from link_path_walk().

The value at the entry into link_path_walk() can survive to return only if the pathname contains nothing but slashes. Note that pick_link() never returns such - pure jumps are handled directly. So for the calls of link_path_walk() for trailing symlinks it does not matter what value had been there at the entry; the value at the return won't depend upon it.

There are 3 call chains that might have pick_link() storing LAST_BIND:
  1) pick_link() from step_into() from walk_component() from link_path_walk(). In that case we will either be parsing the next component immediately after return into link_path_walk(), which will overwrite the ->last_type before anyone has a chance to look at it, or we'll fail, in which case nobody will be looking at ->last_type at all.
  2) pick_link() from step_into() from walk_component() from lookup_last(). The value is never looked at due to the above; it won't affect the value seen at return from any link_path_walk().
  3) pick_link() from step_into() from do_last(). Ditto.

In other words, the assignment in pick_link() is pointless, and so is LAST_BIND itself; nothing ever looks at that value. Kill it off. And make link_path_walk() _always_ assign ->last_type - in the only case when the value at the entry might survive to the return that value is always LAST_ROOT, inherited from path_init(). Move that assignment from path_init() into the beginning of link_path_walk(), to consolidate the things.

Historical note: LAST_BIND used to be used for the kludge with trailing pure jump symlinks (extra iteration through the top-level loop). No point keeping it anymore... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  finally fold get_link() into pick_link()  [Al Viro, 1 file, -74/+61]
kill nd->link_inode, while we are at it Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  merging pick_link() with get_link(), part 6  [Al Viro, 1 file, -8/+5]
move the only remaining call of get_link() into pick_link() Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  merging pick_link() with get_link(), part 5  [Al Viro, 1 file, -25/+18]
move get_link() call into step_into(). Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  merging pick_link() with get_link(), part 4  [Al Viro, 1 file, -33/+26]
Move the call of get_link() into walk_component(). Change the calling conventions for walk_component() to returning the link body to follow (if any). Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  merging pick_link() with get_link(), part 3  [Al Viro, 1 file, -9/+9]
After a pure jump ("/" or procfs-style symlink) we don't need to hold the link anymore. link_path_walk() dropped it if such case had been detected, lookup_last/do_last() (i.e. old trailing_symlink()) left it on the stack - it ended up calling terminate_walk() shortly anyway, which would've purged the entire stack. Do it in get_link() itself instead. Simpler logic that way... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  merging pick_link() with get_link(), part 2  [Al Viro, 1 file, -28/+40]
Fold trailing_symlink() into lookup_last() and do_last(), change the calling conventions of those two. Rules change:
  * success, we are done => NULL instead of 0
  * error => ERR_PTR(-E...) instead of -E...
  * got a symlink to follow => return the path to be followed instead of 1
The loops calling those (in path_lookupat() and path_openat()) adjusted.

A subtle change of control flow here: originally a pure-jump trailing symlink ("/" or procfs one) would've passed through the upper level loop once more, with "" for path to traverse. That would've brought us back to the lookup_last/do_last entry and we would've hit LAST_BIND case (LAST_BIND left from get_link() called by trailing_symlink()) and pretty much skip to the point right after where we'd left the sucker back when we picked that trailing symlink. Now we don't bother with that extra pass through the upper level loop - if get_link() says "I've just done a pure jump, nothing else to do", we just treat that as non-symlink case.

Boilerplate added on that step will go away shortly - it'll migrate into walk_component() and then to step_into(), collapsing into the change of calling conventions for those. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  merging pick_link() with get_link(), part 1  [Al Viro, 1 file, -5/+7]
Move restoring LOOKUP_PARENT and zeroing nd->stack.name[0] past the call of get_link() (nothing _currently_ uses them in there). That allows moving the call of may_follow_link() into get_link() as well, since now the presence of LOOKUP_PARENT distinguishes the callers from each other (link_path_walk() has it, trailing_symlink() doesn't). Preparations for folding trailing_symlink() into callers (lookup_last() and do_last()) and changing the calling conventions of those. Next stage after that will have the get_link() call migrate into walk_component(), then - into step_into(). It's tricky enough to warrant doing that in stages, unfortunately... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  expand the only remaining call of path_lookup_conditional()  [Al Viro, 1 file, -9/+5]
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  LOOKUP_MOUNTPOINT: fold path_mountpointat() into path_lookupat()  [Al Viro, 5 files, -90/+12]
New LOOKUP flag, telling path_lookupat() to act as path_mountpointat(). IOW, traverse mounts at the final point and skip revalidation of the location where it ends up. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  fold handle_mounts() into step_into()  [Al Viro, 1 file, -26/+15]
The following is true:

  * calls of handle_mounts() and step_into() are always paired in sequences like

        err = handle_mounts(nd, dentry, &path, &inode, &seq);
        if (unlikely(err < 0))
            return err;
        err = step_into(nd, &path, flags, inode, seq);

  * in all such sequences path is uninitialized before and unused after this pair of calls
  * in all such sequences inode and seq are unused afterwards.

So the call of handle_mounts() can be shifted inside step_into(), turning 'path' into a local variable in the combined function. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
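As a rough illustration of this kind of fold (simplified stand-in types and functions, not the actual fs/namei.c code), the helper call moves inside the combined function and the formerly shared out-parameter becomes a local:

    #include <errno.h>
    #include <stdio.h>

    /* Simplified stand-ins, not the kernel's types or functions. */
    struct path { int dentry; int mnt; };

    static int handle_mounts(int dentry, struct path *path)
    {
        if (dentry < 0)
            return -ENOENT;     /* pretend mount traversal failed */
        path->dentry = dentry;
        path->mnt = 1;
        return 0;
    }

    /* Before the fold: the caller owns 'path' purely as a courier between
     * handle_mounts() and the rest of the step. */
    static int step_into_before(struct path *path, int flags)
    {
        printf("stepping into dentry %d (flags %d)\n", path->dentry, flags);
        return 0;
    }

    static int caller_before(int dentry, int flags)
    {
        struct path path;
        int err = handle_mounts(dentry, &path);
        if (err < 0)
            return err;
        return step_into_before(&path, flags);
    }

    /* After the fold: handle_mounts() is called inside step_into(), 'path'
     * becomes a local, and every caller shrinks to one call. */
    static int step_into_after(int dentry, int flags)
    {
        struct path path;
        int err = handle_mounts(dentry, &path);
        if (err < 0)
            return err;
        printf("stepping into dentry %d (flags %d)\n", path.dentry, flags);
        return 0;
    }

    int main(void)
    {
        caller_before(2, 0);
        step_into_after(3, 0);
        return 0;
    }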
2020-03-13  new step_into() flag: WALK_NOFOLLOW  [Al Viro, 1 file, -6/+4]
Tells step_into() not to follow symlinks, regardless of LOOKUP_FOLLOW. Allows switching handle_lookup_down() to the use of step_into(), getting all follow_managed() and step_into() calls paired. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  step_into() callers: dismiss the symlink earlier  [Al Viro, 1 file, -3/+7]
We need to dismiss a symlink when we are done traversing it; currently that's done when we call step_into() for its last component. For the cases when we do not call step_into() for that component (i.e. when it's . or ..) we do the same symlink dismissal after the call of handle_dots(). What we need to guarantee is that the symlink won't be dismissed while we are still using nd->last.name - it's pointing into the body of said symlink. step_into() is sufficiently late - by the time it's called we'd already obtained the dentry, so the name we'd been looking up is no longer needed. However, it turns out to be cleaner to have that ("we are done with that component now, can dismiss the link") done explicitly - in the callers of step_into(). In handle_dots() case we won't be using the component string at all, so for . and .. the corresponding point is actually _before_ the call of handle_dots(), not after it. Fix a minor irregularity in do_last(), while we are at it - if trailing symlink ended with . or .. we forgot to dismiss it. Not a problem, since nameidata is about to be done with (neither . nor .. can be a trailing symlink, so this is the last iteration through the loop) and terminate_walk() will clean the stack anyway, but let's keep it more regular. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  lookup_fast(): take mount traversal into callers  [Al Viro, 1 file, -26/+24]
Current calling conventions: -E... on error, 0 on cache miss, result of handle_mounts(nd, dentry, path, inode, seqp) on success. Turn that into returning ERR_PTR(-E...), NULL and dentry resp.; deal with handle_mounts() in the callers. The thing is, they already do that in cache miss handling case, so we just need to supply dentry to them and unify the mount traversal in those cases. Fewer arguments that way, and we get closer to merging handle_mounts() and step_into(). Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  teach handle_mounts() to handle RCU mode  [Al Viro, 1 file, -29/+17]
... and make the callers of __follow_mount_rcu() use handle_mounts(). Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  lookup_fast(): consolidate the RCU success case  [Al Viro, 1 file, -3/+4]
1) in case of __follow_mount_rcu() failure, lookup_fast() proceeds to call unlazy_child() and, should it succeed, handle_mounts(). Note that we have status > 0 (or we wouldn't be calling __follow_mount_rcu() at all), so all stuff conditional upon non-positive status won't be even touched. Consolidate just that sequence after the call of __follow_mount_rcu(). 2) calling d_is_negative() and keeping its result is pointless - we either don't get past checking ->d_seq (and don't use the results of d_is_negative() at all), or we are guaranteed that ->d_inode and type bits of ->d_flags had been consistent at the time of d_is_negative() call. IOW, we could only get to the use of its result if it's equal to !inode. The same ->d_seq check guarantees that after that point this CPU won't observe ->d_flags values older than ->d_inode update. So 'negative' variable is completely pointless these days. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-12  handle_mounts(): pass dentry in, turn path into a pure out argument  [Al Viro, 1 file, -19/+18]
All callers are equivalent to

    path->dentry = dentry;
    path->mnt = nd->path.mnt;
    err = handle_mounts(path, ...)

Pass dentry as an explicit argument, fill *path in handle_mounts() itself. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-12  do_last(): collapse the call of path_to_nameidata()  [Al Viro, 1 file, -3/+4]
... and shift filling struct path to just before the call of handle_mounts(). All callers of handle_mounts() are immediately preceded by path->mnt = nd->path.mnt now. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-12  lookup_open(): saner calling conventions (return dentry on success)  [Al Viro, 1 file, -27/+19]
same story as for atomic_open() in the previous commit. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-02-27  atomic_open(): saner calling conventions (return dentry on success)  [Al Viro, 1 file, -17/+24]
Currently it either returns -E... or puts (nd->path.mnt,dentry) into *path and returns 0. Make it return ERR_PTR(-E...) or dentry; adjust the caller. Fewer arguments and it's easier to keep track of *path contents that way. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
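The convention being adopted here is the kernel's usual ERR_PTR()/IS_ERR()/PTR_ERR() pattern from include/linux/err.h, which encodes a negative errno in the returned pointer so a single value can carry either an object or an error. A minimal userspace re-implementation of the idea (illustrative only; lookup_name() is a made-up example, not a kernel function):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_ERRNO 4095

    /* Errno values encoded as pointers sit in the top 4095 bytes of the
     * address space, which no valid object can occupy. */
    static inline void *ERR_PTR(long error) { return (void *)error; }
    static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
    static inline int IS_ERR(const void *ptr)
    {
        return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
    }

    /* A lookup that returns either the found object or ERR_PTR(-E...),
     * instead of "0 on success plus an out parameter". */
    static char *lookup_name(const char *name)
    {
        if (!name[0])
            return ERR_PTR(-ENOENT);
        return strdup(name);
    }

    int main(void)
    {
        char *d = lookup_name("");
        if (IS_ERR(d))
            printf("error: %ld\n", PTR_ERR(d));     /* prints -2 (ENOENT) */

        d = lookup_name("foo");
        if (!IS_ERR(d)) {
            printf("got %s\n", d);
            free(d);
        }
        return 0;
    }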
2020-02-27  handle_mounts(): start building a sane wrapper for follow_managed()  [Al Viro, 1 file, -16/+16]
All callers of follow_managed() follow it on success with the same steps - d_backing_inode(path->dentry) is calculated and stored into some struct inode * variable and, in all but one case, an unsigned variable (nd->seq to be) is zeroed. The single exception is lookup_fast() and there zeroing is the correct thing to do - not doing it is a pointless microoptimization. Add a wrapper for follow_managed() that would do that combination. It's mostly a vehicle for code massage - it will be changing quite a bit, and the current calling conventions are by no means final. Right now it takes path, nameidata and (as out params) inode and seq, similar to __follow_mount_rcu(). Which will soon get folded into it... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-02-27  make build_open_flags() treat O_CREAT | O_EXCL as implying O_NOFOLLOW  [Al Viro, 2 files, -11/+8]
O_CREAT | O_EXCL means "-EEXIST if we run into a trailing symlink". As it is, we might or might not have LOOKUP_FOLLOW in op->intent in that case - that depends upon having O_NOFOLLOW in open flags. It doesn't matter, since we won't be checking it in that case - do_last() bails out earlier. However, making sure it's not set (i.e. acting as if we had an explicit O_NOFOLLOW) makes the behaviour more explicit and allows reordering the check for O_CREAT | O_EXCL in do_last() with the call of step_into() immediately following it. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
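The user-visible rule being leaned on is documented in open(2): with O_CREAT | O_EXCL, a trailing symbolic link makes open() fail with EEXIST regardless of where the link points. A small userspace demonstration (the file names are arbitrary):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Dangling symlink: the target does not exist. */
        unlink("demo-link");
        if (symlink("no-such-target", "demo-link") < 0) {
            perror("symlink");
            return 1;
        }

        /* O_CREAT|O_EXCL refuses to follow the trailing symlink... */
        int fd = open("demo-link", O_CREAT | O_EXCL | O_WRONLY, 0644);
        if (fd < 0)
            printf("O_CREAT|O_EXCL: %s\n", strerror(errno));    /* EEXIST */

        /* ...whereas plain O_CREAT follows it and creates the target. */
        fd = open("demo-link", O_CREAT | O_WRONLY, 0644);
        if (fd >= 0) {
            printf("O_CREAT alone created no-such-target\n");
            close(fd);
            unlink("no-such-target");
        }
        unlink("demo-link");
        return 0;
    }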
2020-02-27  follow_automount() doesn't need the entire nameidata  [Al Viro, 1 file, -5/+5]
Only the address of ->total_link_count and the flags. And fix an off-by-one in ELOOP detection - make it consistent with symlink following, where we check if the pre-increment value has reached 40, rather than check the post-increment one. [kudos to Christian Brauner for spotting the braino] Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-02-27  follow_automount(): get rid of dead^Wstillborn code  [Al Viro, 2 files, -26/+11]
1) no instances of ->d_automount() have ever made use of the "return ERR_PTR(-EISDIR) if you don't feel like mounting anything" - that's a rudiment of plans that got superseded before the thing went into the tree. Despite the comment in follow_automount(), autofs has never done that. 2) if there's no ->d_automount() in dentry_operations, filesystems should not set DCACHE_NEED_AUTOMOUNT in the first place. None have ever done so... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-02-27  fix automount/automount race properly  [Al Viro, 2 files, -32/+37]
Protection against automount/automount races (two threads hitting the same referral point at the same time) is based upon do_add_mount() prevention of identical overmounts - trying to overmount the root of mounted tree with the same tree fails with -EBUSY. It's unreliable (the other thread might've mounted something on top of the automount it has triggered) *and* causes no end of headache for follow_automount() and its caller, since finish_automount() behaves like do_new_mount() - if the mountpoint to be is overmounted, it mounts on top of what's overmounting it. It's not only wrong (we want to go into what's overmounting the automount point and quietly discard what we planned to mount there), it introduces the possibility of the original parent mount getting dropped.

That's what 8aef18845266 (VFS: Fix vfsmount overput on simultaneous automount) deals with, but it can't do anything about the reliability of conflict detection - if something had been mounted on top of the other thread's automount (e.g. that other thread having stepped into the automount in mount(2)), we don't get that -EBUSY and the result is a referral point under automounted NFS under an explicit overmount under another copy of automounted NFS.

What we need is finish_automount() *NOT* digging into overmounts - if it finds one, it should just quietly discard the thing it was asked to mount. And don't bother with actually crossing into the results of finish_automount() - the same loop that calls follow_automount() will do that just fine on the next iteration. IOW, instead of calling lock_mount() have finish_automount() do it manually, _without_ the "move into overmount and retry" part. And leave crossing into the results to the caller of follow_automount(), which simplifies it a lot.

Moral: if you end up with a lot of glue working around the calling conventions of something, perhaps these calling conventions are simply wrong... Fixes: 8aef18845266 (VFS: Fix vfsmount overput on simultaneous automount) Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-02-10  do_add_mount(): lift lock_mount/unlock_mount into callers  [Al Viro, 1 file, -23/+24]
preparation to finish_automount() fix (next commit) Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-02-09  Linux 5.6-rc1  [Linus Torvalds, 1 file, -2/+2]
2020-02-09  irqchip/gic-v4.1: Avoid 64bit division for the sake of 32bit ARM  [Marc Zyngier, 1 file, -2/+2]
In order to allow the GICv4 code to link properly on 32bit ARM, make sure we don't use 64bit divisions when it isn't strictly necessary. Fixes: 4e6437f12d6e ("irqchip/gic-v4.1: Ensure L2 vPE table is allocated at RD level") Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
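Background for why such patches exist: on 32-bit ARM the compiler lowers a plain 64-bit division to a call into a libgcc helper (__aeabi_uldivmod) that the kernel does not provide, so the link fails; where a 64-by-32 division really is needed, kernel code goes through helpers such as div_u64() from <linux/math64.h>. A userspace stand-in sketch of that pattern (illustrative only, with made-up numbers; not the code from this particular patch):

    #include <stdint.h>
    #include <stdio.h>

    /* Userspace stand-in for the kernel's div_u64(): divide a 64-bit value
     * by a 32-bit divisor.  In the kernel this routes through do_div() so
     * 32-bit architectures never need libgcc's 64-bit division helpers. */
    static inline uint64_t div_u64(uint64_t dividend, uint32_t divisor)
    {
        return dividend / divisor;
    }

    int main(void)
    {
        uint64_t table_bytes = 5ULL * 1024 * 1024 * 1024;   /* made-up size */
        uint32_t entry_size = 72;                           /* made-up size */

        /* In kernel code, prefer this ... */
        printf("%llu entries\n",
               (unsigned long long)div_u64(table_bytes, entry_size));

        /* ... over a plain 'table_bytes / entry_size', which on 32-bit ARM
         * would pull in __aeabi_uldivmod from libgcc. */
        return 0;
    }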
2020-02-08  fs: Add VirtualBox guest shared folder (vboxsf) support  [Hans de Goede, 12 files, -0/+3280]
VirtualBox hosts can share folders with guests, this commit adds a VFS driver implementing the Linux-guest side of this, allowing folders exported by the host to be mounted under Linux. This driver depends on the guest <-> host IPC functions exported by the vboxguest driver. Acked-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
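From the guest side, such a share is mounted through the normal mount(2) interface with filesystem type "vboxsf". A minimal sketch (the share name "myshare" and the mount point are assumptions for illustration; the call needs root and the vboxguest/vboxsf drivers loaded):

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* Mount the host share "myshare" (configured on the VirtualBox host)
         * at /mnt/share, with no extra mount options. */
        if (mount("myshare", "/mnt/share", "vboxsf", 0, NULL) < 0) {
            perror("mount vboxsf");
            return 1;
        }
        printf("mounted\n");
        return 0;
    }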
2020-02-08  Fix up remaining devm_ioremap_nocache() in SGI IOC3 8250 UART driver  [Linus Torvalds, 1 file, -1/+1]
This is a merge error on my part - the driver was merged into mainline by commit c5951e7c8ee5 ("Merge tag 'mips_5.6' of git://../mips/linux") over a week ago, but nobody apparently noticed that it didn't actually build due to still having a reference to the devm_ioremap_nocache() function, removed a few days earlier through commit 6a1000bd2703 ("Merge tag 'ioremap-5.6' of git://../ioremap"). Apparently this didn't get any build testing anywhere. Not perhaps all that surprising: it's restricted to 64-bit MIPS only, and only with the new SGI_MFD_IOC3 support enabled. I only noticed because the ioremap conflicts in the ARM SoC driver update made me check there weren't any others hiding, and I found this one. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-02-08  pipe: use exclusive waits when reading or writing  [Linus Torvalds, 4 files, -30/+51]
This makes the pipe code use separate wait-queues and exclusive waiting for readers and writers, avoiding a nasty thundering herd problem when there are lots of readers waiting for data on a pipe (or, less commonly, lots of writers waiting for a pipe to have space).

While this isn't a common occurrence in the traditional "use a pipe as a data transport" case, where you typically only have a single reader and a single writer process, there is one common special case: using a pipe as a source of "locking tokens" rather than for data communication.

In particular, the GNU make jobserver code ends up using a pipe as a way to limit parallelism, where each job consumes a token by reading a byte from the jobserver pipe, and releases the token by writing a byte back to the pipe.

This pattern is fairly traditional on Unix, and works very well, but will waste a lot of time waking up a lot of processes when only a single reader needs to be woken up when a writer releases a new token.

A simplified test-case of just this pipe interaction is to create 64 processes, and then pass a single token around between them (this test-case also intentionally passes another token that gets ignored to test the "wake up next" logic too, in case anybody wonders about it):

    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int fd[2], counters[2];

        pipe(fd);
        counters[0] = 0;
        counters[1] = -1;
        write(fd[1], counters, sizeof(counters));

        /* 64 processes */
        fork(); fork(); fork(); fork(); fork(); fork();

        do {
            int i;
            read(fd[0], &i, sizeof(i));
            if (i < 0)
                continue;
            counters[0] = i+1;
            write(fd[1], counters, (1+(i & 1)) * sizeof(int));
        } while (counters[0] < 1000000);
        return 0;
    }

and in a perfect world, passing that token around should only cause one context switch per transfer, when the writer of a token causes a directed wakeup of just a single reader.

But with the "writer wakes all readers" model we traditionally had, on my test box the above case causes more than an order of magnitude more scheduling: instead of the expected ~1M context switches, "perf stat" shows

          231,852.37 msec task-clock             #   15.857 CPUs utilized
          11,250,961      context-switches       #    0.049 M/sec
             616,304      cpu-migrations         #    0.003 M/sec
               1,648      page-faults            #    0.007 K/sec
   1,097,903,998,514      cycles                 #    4.735 GHz
     120,781,778,352      instructions           #    0.11  insn per cycle
      27,997,056,043      branches               #  120.754 M/sec
         283,581,233      branch-misses          #    1.01% of all branches

        14.621273891 seconds time elapsed
         0.018243000 seconds user
         3.611468000 seconds sys

before this commit. After this commit, I get

            5,229.55 msec task-clock             #    3.072 CPUs utilized
           1,212,233      context-switches       #    0.232 M/sec
             103,951      cpu-migrations         #    0.020 M/sec
               1,328      page-faults            #    0.254 K/sec
      21,307,456,166      cycles                 #    4.074 GHz
      12,947,819,999      instructions           #    0.61  insn per cycle
       2,881,985,678      branches               #  551.096 M/sec
          64,267,015      branch-misses          #    2.23% of all branches

         1.702148350 seconds time elapsed
         0.004868000 seconds user
         0.110786000 seconds sys

instead. Much better.

[ Note! This kernel improvement seems to be very good at triggering a race condition in the make jobserver (in GNU make 4.2.1) for me. It's a long known bug that was fixed back in June 2017 by GNU make commit b552b0525198 ("[SV 51159] Use a non-blocking read with pselect to avoid hangs."). But there wasn't a new release of GNU make until 4.3 on Jan 19 2020, so a number of distributions may still have the buggy version. Some have backported the fix to their 4.2.1 release, though, and even without the fix it's quite timing-dependent whether the bug actually is hit. ]

Josh Triplett says: "I've been hammering on your pipe fix patch (switching to exclusive wait queues) for a month or so, on several different systems, and I've run into no issues with it. The patch *substantially* improves parallel build times on large (~100 CPU) systems, both with parallel make and with other things that use make's pipe-based jobserver. All current distributions (including stable and long-term stable distributions) have versions of GNU make that no longer have the jobserver bug"

Tested-by: Josh Triplett <josh@joshtriplett.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-02-08  compat_ioctl: fix FIONREAD on devices  [Arnd Bergmann, 1 file, -4/+7]
My final cleanup patch for sys_compat_ioctl() introduced a regression on the FIONREAD ioctl command, which is used for both regular and special files, but only works on regular files after my patch, as I had missed the warning that Al Viro put into a comment right above it. Change it back so it can work on any file again by moving the implementation to do_vfs_ioctl() instead. Fixes: 77b9040195de ("compat_ioctl: simplify the implementation") Reported-and-tested-by: Christian Zigotzky <chzigotzky@xenosoft.de> Reported-and-tested-by: youling257 <youling257@gmail.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
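FIONREAD reports how many bytes are immediately available to read, and the point of the fix is that it must keep working on pipes, sockets and other special files, not just regular files. A quick demonstration on a pipe (build it as a 32-bit binary on a 64-bit kernel to go through the compat path this patch touches):

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2], avail = 0;

        if (pipe(fd) < 0)
            return 1;
        write(fd[1], "hello", 5);

        /* FIONREAD on a non-regular file (a pipe here): should report 5. */
        if (ioctl(fd[0], FIONREAD, &avail) < 0) {
            perror("FIONREAD");
            return 1;
        }
        printf("%d bytes ready\n", avail);
        return 0;
    }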
2020-02-08  net: thunderx: use proper interface type for RGMII  [Tim Harvey, 1 file, -1/+1]
The configuration of the OCTEONTX XCV_DLL_CTL register via xcv_init_hw() is such that the RGMII RX delay is bypassed, leaving the RGMII TX delay enabled in the MAC:

    /* Configure DLL - enable or bypass
     * TX no bypass, RX bypass
     */
    cfg = readq_relaxed(xcv->reg_base + XCV_DLL_CTL);
    cfg &= ~0xFF03;
    cfg |= CLKRX_BYP;
    writeq_relaxed(cfg, xcv->reg_base + XCV_DLL_CTL);

This would correspond to an interface type of PHY_INTERFACE_MODE_RGMII_RXID and not PHY_INTERFACE_MODE_RGMII. Fixing this allows RGMII PHY drivers to do the right thing (enable RX delay in the PHY) instead of erroneously enabling both delays in the PHY. Signed-off-by: Tim Harvey <tharvey@gateworks.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-02-08  powerpc: Fix CONFIG_TRACE_IRQFLAGS with CONFIG_VMAP_STACK  [Christophe Leroy, 1 file, -1/+1]
When CONFIG_PROVE_LOCKING is selected together with (now default) CONFIG_VMAP_STACK, the kernel enters a deadlock during boot. At the point of checking whether interrupts are enabled or not, the value of the MSR saved on the stack is read using the physical address of the stack. But at this point, when using a VMAP stack, the DATA MMU translation has already been re-enabled, leading to deadlock. Don't use the physical address of the stack when CONFIG_VMAP_STACK is set. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Reported-by: Guenter Roeck <linux@roeck-us.net> Fixes: 028474876f47 ("powerpc/32: prepare for CONFIG_VMAP_STACK") Tested-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/daeacdc0dec0416d1c587cc9f9e7191ad3068dc0.1581095957.git.christophe.leroy@c-s.fr
2020-02-08  powerpc/futex: Fix incorrect user access blocking  [Michael Ellerman, 1 file, -4/+6]
The early versions of our kernel user access prevention (KUAP) were written by Russell and Christophe, and didn't have separate read/write access.

At some point I picked up the series and added the read/write access, but I failed to update the usages in futex.h to correctly allow read and write. However we didn't notice because of another bug which was causing the low-level code to always enable read and write. That bug was fixed recently in commit 1d8f739b07bd ("powerpc/kuap: Fix set direction in allow/prevent_user_access()").

futex_atomic_cmpxchg_inatomic() is passed the user address as %3 and does:

    1:  lwarx   %1, 0, %3
        cmpw    0, %1, %4
        bne-    3f
    2:  stwcx.  %5, 0, %3

Which clearly loads and stores from/to %3. The logic in arch_futex_atomic_op_inuser() is similar, so fix both of them to use allow_read_write_user().

Without this fix, and with PPC_KUAP_DEBUG=y, we see eg:

    Bug: Read fault blocked by AMR!
    WARNING: CPU: 94 PID: 149215 at arch/powerpc/include/asm/book3s/64/kup-radix.h:126 __do_page_fault+0x600/0xf30
    CPU: 94 PID: 149215 Comm: futex_requeue_p Tainted: G W 5.5.0-rc7-gcc9x-g4c25df5640ae #1
    ...
    NIP [c000000000070680] __do_page_fault+0x600/0xf30
    LR [c00000000007067c] __do_page_fault+0x5fc/0xf30
    Call Trace:
    [c00020138e5637e0] [c00000000007067c] __do_page_fault+0x5fc/0xf30 (unreliable)
    [c00020138e5638c0] [c00000000000ada8] handle_page_fault+0x10/0x30
    --- interrupt: 301 at cmpxchg_futex_value_locked+0x68/0xd0
        LR = futex_lock_pi_atomic+0xe0/0x1f0
    [c00020138e563bc0] [c000000000217b50] futex_lock_pi_atomic+0x80/0x1f0 (unreliable)
    [c00020138e563c30] [c00000000021b668] futex_requeue+0x438/0xb60
    [c00020138e563d60] [c00000000021c6cc] do_futex+0x1ec/0x2b0
    [c00020138e563d90] [c00000000021c8b8] sys_futex+0x128/0x200
    [c00020138e563e20] [c00000000000b7ac] system_call+0x5c/0x68

Fixes: de78a9c42a79 ("powerpc: Add a framework for Kernel Userspace Access Protection") Cc: stable@vger.kernel.org # v5.2+ Reported-by: syzbot+e808452bad7c375cbee6@syzkaller-ppc64.appspotmail.com Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Link: https://lore.kernel.org/r/20200207122145.11928-1-mpe@ellerman.id.au
2020-02-08  irqchip/gic-v3-its: Rename VPENDBASER/VPROPBASER accessors  [Zenghui Yu, 3 files, -24/+24]
V{PEND,PROP}BASER registers are actually located in VLPI_base frame of the *redistributor*. Rename their accessors to reflect this fact. No functional changes. Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200206075711.1275-7-yuzenghui@huawei.com
2020-02-08  irqchip/gic-v3-its: Remove superfluous WARN_ON  [Zenghui Yu, 1 file, -1/+0]
"ITS virtual pending table not cleaning" is already complained inside its_clear_vpend_valid(), there's no need to trigger a WARN_ON again. Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200206075711.1275-6-yuzenghui@huawei.com