2021-02-22  ice: Set trusted VF as default VSI when setting allmulti on  (Brett Creeley; 1 file, -1/+1)
Currently the PF will only set a trusted VF as the default VSI when it requests FLAG_VF_UNICAST_PROMISC over VIRTCHNL. However, when FLAG_VF_MULTICAST_PROMISC is set, it's expected that the trusted VF will see multicast packets that don't have a matching destination MAC in the device's internal switch. Fix this by setting the trusted VF as the default VSI if either FLAG_VF_UNICAST_PROMISC or FLAG_VF_MULTICAST_PROMISC is set. Fixes: 01b5e89aab49 ("ice: Add VF promiscuous support") Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Tony Brelinski <tonyx.brelinski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
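A minimal sketch of the described change, assuming illustrative names (alluni/allmulti stand for the requested FLAG_VF_UNICAST_PROMISC / FLAG_VF_MULTICAST_PROMISC bits, and ice_set_dflt_vsi() is a hypothetical helper name, not the exact driver code):

    /* Sketch: promote the trusted VF to default VSI for either flag. */
    if (alluni || allmulti) {            /* was: if (alluni) */
            ret = ice_set_dflt_vsi(vf);  /* hypothetical helper name */
            if (ret)
                    goto error_param;
    }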
2021-02-22  ice: report correct max number of TCs  (Dave Ertman; 1 file, -1/+1)
The driver currently reports the max number of TCs to the DCBNL callback as a kernel define hard-coded to 8. This prevents userspace applications performing DCBx from correctly down-mapping the TCs from requested to actual values. Report the actual max TC value to userspace from the capability struct. Fixes: b94b013eb626 ("ice: Implement DCBNL support") Signed-off-by: Dave Ertman <david.m.ertman@intel.com> Tested-by: Tony Brelinski <tonyx.brelinski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-02-21  octeontx2-af: Fix an off by one in rvu_dbg_qsize_write()  (Dan Carpenter; 1 file, -1/+1)
This code does not allocate enough memory for the NUL terminator so it ends up putting it one character beyond the end of the buffer. Fixes: 8756828a8148 ("octeontx2-af: Add NPA aura and pool contexts to debugfs") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
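The underlying pattern, as a hedged sketch (buffer and size names are illustrative, not the actual rvu_dbg code):

    /* before (bug): the NUL terminator lands at cmd_buf[count], one
     * byte past the end of a count-byte allocation */
    cmd_buf = kmalloc(count, GFP_KERNEL);

    /* after (fix): allocate room for the terminator */
    cmd_buf = kmalloc(count + 1, GFP_KERNEL);
    cmd_buf[count] = '\0';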
2021-02-20  tty: protect tty_write from odd low-level tty disciplines  (Linus Torvalds; 1 file, -1/+4)
Al root-caused a new warning from syzbot to the ttyprintk tty driver returning a write count larger than the data the tty layer actually gave it, which confused the tty write code mightily and, with the new iov_iter based code, caused a WARNING in iov_iter_revert(). syzbot correctly bisected the source of the new warning to commit 9bb48c82aced ("tty: implement write_iter"), but the oddity goes back much further; it just didn't get caught by anything before. Reported-by: syzbot+3d2c27c2b7dc2a94814d@syzkaller.appspotmail.com Fixes: 9bb48c82aced ("tty: implement write_iter") Debugged-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-02-20  fix handling of nd->depth on LOOKUP_CACHED failures in try_to_unlazy*  (Al Viro; 1 file, -4/+5)
After switching to non-RCU mode, we want nd->depth to match the number of entries in nd->stack[] that need eventual path_put(). legitimize_links() takes care of that on failures; unfortunately, the failure exits added for LOOKUP_CACHED do not. We could add the logic for that into those failure exits, both in try_to_unlazy() and in try_to_unlazy_next(), but since both checks are immediately followed by legitimize_links() and there are no calls of legitimize_links() other than those two, it's easier to move the check (and the required handling of nd->depth on failure) into legitimize_links() itself. [caught by Jens: ... and since we are zeroing ->depth here, we need to do drop_links() first] Fixes: 6c6ec2b0a3e0 ("fs: add support for LOOKUP_CACHED") Tested-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2021-02-18  pstore: Fix typo in compression option name  (Jiri Bohac; 1 file, -2/+2)
Both pstore_compress() and decompress_record() use a mistyped config option name ("PSTORE_COMPRESSION" instead of "PSTORE_COMPRESS"). As a result, compression and decompression of pstore records were always disabled. Use the correct config option name. Signed-off-by: Jiri Bohac <jbohac@suse.cz> Fixes: fd49e03280e5 ("pstore: Fix linking when crypto API disabled") Acked-by: Matteo Croce <mcroce@microsoft.com> Signed-off-by: Kees Cook <keescook@chromium.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20210218111547.johvp5klpv3xrpnn@dwarf.suse.cz
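The shape of the fix, sketched against the guards described above (representative of the pstore_compress()/decompress_record() checks, not copied verbatim):

    /* before: CONFIG_PSTORE_COMPRESSION is not a real Kconfig symbol,
     * so IS_ENABLED() is always false and compression stays disabled */
    if (!IS_ENABLED(CONFIG_PSTORE_COMPRESSION))
            return -EINVAL;

    /* after: test the option that actually exists */
    if (!IS_ENABLED(CONFIG_PSTORE_COMPRESS))
            return -EINVAL;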
2021-02-17  octeontx2-pf: Fix otx2_get_fecparam()  (Dan Carpenter; 1 file, -1/+1)
Static checkers complained about an off-by-one read overflow in otx2_get_fecparam() and we applied two conflicting fixes for it:

  Correct: b0aae0bde26f ("octeontx2: Fix condition.")
  Wrong:   93efb0c65683 ("octeontx2-pf: Fix out-of-bounds read in otx2_get_fecparam()")

Revert the incorrect fix. Fixes: 93efb0c65683 ("octeontx2-pf: Fix out-of-bounds read in otx2_get_fecparam()") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-17  octeontx2-pf: cn10k: Prevent harmless double shift bugs  (Dan Carpenter; 1 file, -3/+3)
These defines are used with set_bit() and test_bit(), which take a bit number. In other words, the code was effectively doing:

  if (BIT(BIT(1)) & pf->hw.cap_flag) {

This was done consistently so it did not cause a problem at runtime, but it's still worth fixing. Fixes: facede8209ef ("octeontx2-pf: cn10k: Add mbox support for CN10K") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
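A hedged sketch of the double-shift pattern (the flag name is made up for illustration; the real cn10k defines differ):

    /* bug: the define is already a mask, so set_bit() shifts it again;
     * set_bit(BIT(1), ...) actually sets bit 2, not bit 1 */
    #define CN10K_MBOX      BIT(1)
    set_bit(CN10K_MBOX, &pf->hw.cap_flag);

    /* fix: set_bit()/test_bit() take plain bit numbers */
    #define CN10K_MBOX      1
    set_bit(CN10K_MBOX, &pf->hw.cap_flag);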
2021-02-17  net: stmmac: Add PCI bus info to ethtool driver query output  (Wong Vee Khee; 3 files, -0/+6)
This patch populates the PCI bus info in the ethtool driver query data. Users will be able to view PCI bus info using 'ethtool -i <interface>'. Signed-off-by: Wong Vee Khee <vee.khee.wong@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
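A plausible sketch of such a change (the access path to the PCI device is an assumption; strscpy() plus pci_name() is the usual idiom for filling ethtool_drvinfo::bus_info):

    static void stmmac_get_drvinfo(struct net_device *dev,
                                   struct ethtool_drvinfo *info)
    {
            /* assumed: the netdev's parent device is the PCI device */
            struct pci_dev *pdev = to_pci_dev(dev->dev.parent);

            strscpy(info->driver, "stmmac", sizeof(info->driver));
            /* 'ethtool -i <interface>' reports this as "bus-info" */
            strscpy(info->bus_info, pci_name(pdev), sizeof(info->bus_info));
    }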
2021-02-17  ptp: ptp_clockmatrix: clean-up - parentheses around a == b are unnecessary  (Vincent Cheng; 1 file, -10/+8)
Code clean-up. Signed-off-by: Vincent Cheng <vincent.cheng.xh@renesas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-17  ptp: ptp_clockmatrix: Simplify code - remove unnecessary `err` variable.  (Vincent Cheng; 1 file, -4/+1)
Code clean-up. Signed-off-by: Vincent Cheng <vincent.cheng.xh@renesas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-17  ptp: ptp_clockmatrix: Coding style - tighten vertical spacing.  (Vincent Cheng; 1 file, -79/+11)
Code clean-up.

* Remove blank lines between variable declarations.
* Remove the blank line between:

    err = blah(...)
    if (err)
            ...

* Remove unnecessary blank lines before/after loop constructs.

Signed-off-by: Vincent Cheng <vincent.cheng.xh@renesas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-17  ptp: ptp_clockmatrix: Clean-up dev_*() messages.  (Vincent Cheng; 1 file, -79/+43)
Code clean-up.

* Remove unnecessary \n termination from dev_*() messages.
* Remove the 'char *fmt' variables used to define strings to stay within the 80-column limit. These are not needed since the coding guidelines increased the limit to 100 columns. Keeping the format in place allows static code checkers to validate the arguments.
* Tighten up vertical spacing.

Signed-off-by: Vincent Cheng <vincent.cheng.xh@renesas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-17  ptp: ptp_clockmatrix: Remove unused header declarations.  (Vincent Cheng; 1 file, -2/+0)
Removed unused header declarations. Signed-off-by: Vincent Cheng <vincent.cheng.xh@renesas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-17  ptp: ptp_clockmatrix: Add alignment of 1 PPS to idtcm_perout_enable.  (Vincent Cheng; 1 file, -3/+13)
When enabling output using PTP_CLK_REQ_PEROUT, the output clock needs to be aligned to the internal 1 PPS clock. Signed-off-by: Vincent Cheng <vincent.cheng.xh@renesas.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-17  ptp: ptp_clockmatrix: Add wait_for_sys_apll_dpll_lock.  (Vincent Cheng; 3 files, -2/+87)
Part of the device initialization aligns the rising edge of the output clock to the internal 1 PPS clock. If the system APLL and DPLL are not locked, the alignment will fail and there will be a fixed offset between the internal 1 PPS clock and the output clock. After loading the device firmware, poll the system APLL and DPLL for locked state prior to initialization, timing out after 2 seconds. Signed-off-by: Vincent Cheng <vincent.cheng.xh@renesas.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
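A minimal sketch of such a poll-with-timeout loop; the lock-check helpers and constants here are assumptions, not the actual ptp_clockmatrix code:

    #define LOCK_TIMEOUT_MS 2000    /* give up after 2 seconds */
    #define LOCK_POLL_MS    100

    /* Hypothetical helpers sys_apll_locked()/sys_dpll_locked() would
     * read the status registers and report a locked state. */
    static int wait_for_sys_apll_dpll_lock(struct idtcm *idtcm)
    {
            unsigned long timeout = jiffies + msecs_to_jiffies(LOCK_TIMEOUT_MS);

            do {
                    if (sys_apll_locked(idtcm) && sys_dpll_locked(idtcm))
                            return 0;
                    msleep(LOCK_POLL_MS);
            } while (time_before(jiffies, timeout));

            return -ETIMEDOUT;
    }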
2021-02-17  net: stmmac: dwmac-sun8i: Add a shutdown callback  (Samuel Holland; 1 file, -0/+10)
The Ethernet MAC and PHY are usually major consumers of power on boards which may not be able to fully power off (those with no PMIC). Powering down the MAC and internal PHY saves power while these boards are "off". Reviewed-by: Chen-Yu Tsai <wens@csie.org> Signed-off-by: Samuel Holland <samuel@sholland.org> Signed-off-by: David S. Miller <davem@davemloft.net>
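A sketch of wiring a shutdown hook into the platform driver; the callback body is an assumption (reusing the driver's existing power-down path is one plausible shape):

    /* Sketch: .shutdown runs on reboot/poweroff, letting the driver cut
     * power to the MAC and internal PHY on boards with no PMIC. */
    static void sun8i_dwmac_shutdown(struct platform_device *pdev)
    {
            struct net_device *ndev = platform_get_drvdata(pdev);
            struct stmmac_priv *priv = netdev_priv(ndev);

            /* hypothetical: reuse the driver's existing exit helper */
            sun8i_dwmac_exit(pdev, priv->plat->bsp_priv);
    }

    static struct platform_driver sun8i_dwmac_driver = {
            .probe    = sun8i_dwmac_probe,
            .remove   = stmmac_pltfr_remove,
            .shutdown = sun8i_dwmac_shutdown,
            /* ... .driver / of_match_table unchanged ... */
    };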
2021-02-17  net: stmmac: dwmac-sun8i: Minor probe function cleanup  (Samuel Holland; 1 file, -1/+3)
Adjust the spacing and use an explicit "return 0" in the success path to make the function easier to parse. Reviewed-by: Chen-Yu Tsai <wens@csie.org> Signed-off-by: Samuel Holland <samuel@sholland.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-17  net: stmmac: dwmac-sun8i: Use reset_control_reset  (Samuel Holland; 1 file, -4/+4)
Use the appropriate function instead of reimplementing it, and update the error message to match the code. Reviewed-by: Chen-Yu Tsai <wens@csie.org> Signed-off-by: Samuel Holland <samuel@sholland.org> Signed-off-by: David S. Miller <davem@davemloft.net>
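Presumably the change boils down to replacing an open-coded assert/delay/deassert sequence with the reset framework helper; a hedged sketch (the field name rst_ephy is an assumption):

    /* before (open-coded reset cycle, sketched): */
    reset_control_assert(gmac->rst_ephy);
    /* ... delay ... */
    reset_control_deassert(gmac->rst_ephy);

    /* after: let the reset framework perform the full cycle */
    ret = reset_control_reset(gmac->rst_ephy);
    if (ret) {
            dev_err(priv->device, "Could not reset internal PHY\n");
            return ret;
    }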
2021-02-17  net: stmmac: dwmac-sun8i: Remove unnecessary PHY power check  (Samuel Holland; 1 file, -4/+2)
sun8i_dwmac_unpower_internal_phy already checks if the PHY is powered, so there is no need to do it again here. Reviewed-by: Chen-Yu Tsai <wens@csie.org> Signed-off-by: Samuel Holland <samuel@sholland.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-17  net: stmmac: dwmac-sun8i: Return void from PHY unpower  (Samuel Holland; 1 file, -3/+2)
This is a deinitialization function that always returned zero, and that return value was always ignored. Have it return void instead. Reviewed-by: Chen-Yu Tsai <wens@csie.org> Signed-off-by: Samuel Holland <samuel@sholland.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-17  r8169: use macro pm_ptr  (Heiner Kallweit; 1 file, -3/+1)
Use the pm_ptr() macro; this helps avoid some ifdeffery. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
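pm_ptr() evaluates to its argument when CONFIG_PM is enabled and to NULL otherwise, so the #ifdef CONFIG_PM around the pm ops reference can go away. A sketch (the ops and callback names follow the r8169 driver's naming, but treat the details as illustrative):

    static SIMPLE_DEV_PM_OPS(rtl8169_pm_ops, rtl8169_suspend, rtl8169_resume);

    static struct pci_driver rtl8169_pci_driver = {
            .name      = "r8169",
            .probe     = rtl_init_one,
            .remove    = rtl_remove_one,
            /* was: #ifdef CONFIG_PM ... .driver.pm = &rtl8169_pm_ops ... #endif */
            .driver.pm = pm_ptr(&rtl8169_pm_ops),
    };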
2021-02-17  net: mdio: Remove of_phy_attach()  (Florian Fainelli; 3 files, -41/+1)
There are no in-tree users; also update the sfp-phylink.rst documentation to indicate that phy_attach_direct() is used instead of of_phy_attach(). Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-17  net: mscc: ocelot: select PACKING in the Kconfig  (Vladimir Oltean; 1 file, -0/+1)
Ocelot now uses include/linux/dsa/ocelot.h which makes use of CONFIG_PACKING to pack/unpack bits into the Injection/Extraction Frame Headers. So it needs to explicitly select it, otherwise there might be build errors due to the missing dependency. Fixes: 40d3f295b5fe ("net: mscc: ocelot: use common tag parsing code with DSA") Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-02-17  sched,x86: Allow !PREEMPT_DYNAMIC  (Peter Zijlstra; 1 file, -6/+18)
Allow building x86 with PREEMPT_DYNAMIC=n; this is needed for PREEMPT_RT, as it makes no sense not to have full preemption on PREEMPT_RT. Fixes: 8c98e8cf723c ("preempt/dynamic: Provide preempt_schedule[_notrace]() static calls") Reported-by: Mike Galbraith <efault@gmx.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Mike Galbraith <efault@gmx.de> Link: https://lkml.kernel.org/r/YCK1+JyFNxQnWeXK@hirez.programming.kicks-ass.net
2021-02-17  entry/kvm: Explicitly flush pending rcuog wakeup before last rescheduling point  (Frederic Weisbecker; 4 files, -10/+50)
Following the idle loop model, cleanly check for pending rcuog wakeup before the last rescheduling point upon resuming to guest mode. This way we can avoid doing it from rcu_user_enter() with the last-resort self-IPI hack that enforces rescheduling. Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20210131230548.32970-6-frederic@kernel.org
2021-02-17  entry: Explicitly flush pending rcuog wakeup before last rescheduling point  (Frederic Weisbecker; 2 files, -5/+14)
Following the idle loop model, cleanly check for pending rcuog wakeup before the last rescheduling point on resuming to user mode. This way we can avoid doing it from rcu_user_enter() with the last-resort self-IPI hack that enforces rescheduling. Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20210131230548.32970-5-frederic@kernel.org
2021-02-17  rcu/nocb: Trigger self-IPI on late deferred wake up before user resume  (Frederic Weisbecker; 3 files, -11/+37)
Entering RCU idle mode may cause a deferred wake up of an RCU NOCB_GP kthread (rcuog) to be serviced. Unfortunately the call to rcu_user_enter() is already past the last rescheduling opportunity before we resume to userspace or to guest mode. We may escape there with the woken task ignored. The ultimate resort to fix every callsite is to trigger a self-IPI (nohz_full depends on the arch to implement arch_irq_work_raise()) that will trigger a reschedule on IRQ tail or guest exit. Eventually every site that wants a saner treatment will need to carefully place a call to rcu_nocb_flush_deferred_wakeup() before the last explicit need_resched() check upon resume. Fixes: 96d3fd0d315a ("rcu: Break call_rcu() deadlock involving scheduler and perf") Reported-by: Paul E. McKenney <paulmck@kernel.org> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20210131230548.32970-4-frederic@kernel.org
2021-02-17  rcu/nocb: Perform deferred wake up before last idle's need_resched() check  (Frederic Weisbecker; 4 files, -3/+8)
Entering RCU idle mode may cause a deferred wake up of an RCU NOCB_GP kthread (rcuog) to be serviced. Usually a local wake up happening while running the idle task is handled in one of the need_resched() checks carefully placed within the idle loop that can break to the scheduler. Unfortunately the call to rcu_idle_enter() is already beyond the last generic need_resched() check and we may halt the CPU with a resched request unhandled, leaving the task hanging. Fix this by splitting the rcuog wakeup handling out of rcu_idle_enter() and placing it before the last generic need_resched() check in the idle loop. It is then assumed that no call to call_rcu() will be performed after that in the idle loop until the CPU is put in low power mode. Fixes: 96d3fd0d315a ("rcu: Break call_rcu() deadlock involving scheduler and perf") Reported-by: Paul E. McKenney <paulmck@kernel.org> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20210131230548.32970-3-frederic@kernel.org
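A sketch of the ordering described above, in simplified do_idle()-style shape (rcu_nocb_flush_deferred_wakeup() is the helper this series introduces; the surrounding loop is heavily abridged):

    while (!need_resched()) {
            /* was buried in rcu_idle_enter(), past the last generic
             * need_resched() check; flushed here the resulting resched
             * request is still visible to the loop condition */
            rcu_nocb_flush_deferred_wakeup();

            /* enter low power; rcu_idle_enter() happens inside */
            cpuidle_idle_call();
    }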
2021-02-17  rcu: Pull deferred rcuog wake up to rcu_eqs_enter() callers  (Frederic Weisbecker; 1 file, -1/+10)
Deferred wakeup of rcuog kthreads upon RCU idle mode entry is going to be handled differently depending on whether it is initiated by idle, user or guest. Prepare by pulling that control up to rcu_eqs_enter() callers. Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20210131230548.32970-2-frederic@kernel.org
2021-02-17  sched/features: Distinguish between NORMAL and DEADLINE hrtick  (Juri Lelli; 5 files, -7/+30)
The HRTICK feature has traditionally been servicing configurations that need precise preemption points for NORMAL tasks. More recently, the feature has been extended to also service DEADLINE tasks with stringent runtime enforcement needs (e.g., runtime < 1ms with HZ=1000). Enabling the HRTICK sched feature currently enables the additional timer and task tick for both classes, which might introduce undesired overhead for no additional benefit if one needs it only for one of the cases. Separate the HRTICK sched feature in two (leaving the traditional case name unmodified) so that it can be selectively enabled when needed.

With:

  $ echo HRTICK > /sys/kernel/debug/sched_features

the NORMAL/fair hrtick gets enabled. With:

  $ echo HRTICK_DL > /sys/kernel/debug/sched_features

the DEADLINE hrtick gets enabled.

Signed-off-by: Juri Lelli <juri.lelli@redhat.com> Signed-off-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210208073554.14629-3-juri.lelli@redhat.com
2021-02-17  sched/features: Fix hrtick reprogramming  (Juri Lelli; 2 files, -5/+4)
Hung tasks and RCU stall cases were reported on systems which were not 100% busy. Investigation of such unexpected cases (no sign of potential starvation caused by tasks hogging the system) pointed out that the periodic sched tick timer wasn't serviced anymore after a certain point and that caused all machinery that depends on it (timers, RCU, etc.) to stop working as well. This issue was, however, only reproducible if HRTICK was enabled. Looking at core dumps it was found that the rbtree of the hrtimer base also used for the hrtick was corrupted (i.e. next as seen from the base root and the actual leftmost obtained by traversing the tree were different). The same base is also used for the periodic tick hrtimer, which might get "lost" if the rbtree gets corrupted. Much like what is described in commit 1f71addd34f4c ("tick/sched: Do not mess with an enqueued hrtimer"), there is a race window between hrtimer_set_expires() in hrtick_start() and hrtimer_start_expires() in __hrtick_restart() in which the former might be operating on an already queued hrtick hrtimer, which might lead to corruption of the base. Use hrtick_start() (which removes the timer before enqueuing it back) to ensure hrtick hrtimer reprogramming is entirely guarded by the base lock, so that no race conditions can occur. Signed-off-by: Juri Lelli <juri.lelli@redhat.com> Signed-off-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210208073554.14629-2-juri.lelli@redhat.com
2021-02-17  sched/deadline: Reduce rq lock contention in dl_add_task_root_domain()  (Dietmar Eggemann; 1 file, -4/+7)
dl_add_task_root_domain() is called during sched domain rebuild:

  rebuild_sched_domains_locked()
    partition_and_rebuild_sched_domains()
      rebuild_root_domains()
        for all top_cpuset descendants:
          update_tasks_root_domain()
            for all tasks of cpuset:
              dl_add_task_root_domain()

Change it so that only the task pi lock is taken to check if the task has a SCHED_DEADLINE (DL) policy. In case p is a DL task, take the rq lock as well to be able to safely de-reference the root domain's DL bandwidth structure. Most of the tasks will have another policy (namely SCHED_NORMAL) and can now bail without taking the rq lock. One thing to note here: even in case there aren't any DL user tasks, a slow frequency switching system with the cpufreq schedutil governor has a DL task (sugov) per frequency domain running which participates in DL bandwidth management. Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Quentin Perret <qperret@google.com> Reviewed-by: Valentin Schneider <valentin.schneider@arm.com> Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com> Acked-by: Juri Lelli <juri.lelli@redhat.com> Link: https://lkml.kernel.org/r/20210119083542.19856-1-dietmar.eggemann@arm.com
2021-02-17  uprobes: (Re)add missing get_uprobe() in __find_uprobe()  (Sven Schnelle; 1 file, -1/+1)
commit c6bc9bd06dff ("rbtree, uprobes: Use rbtree helpers") accidentally removed the refcount increase. Add it again. Fixes: c6bc9bd06dff ("rbtree, uprobes: Use rbtree helpers") Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210209150711.36778-1-svens@linux.ibm.com
2021-02-17  smp: Process pending softirqs in flush_smp_call_function_from_idle()  (Sebastian Andrzej Siewior; 1 file, -0/+4)
send_call_function_single_ipi() may wake an idle CPU without sending an IPI. The woken up CPU will process the SMP-functions in flush_smp_call_function_from_idle(). Any raised softirq from within the SMP-function call will not be processed. Should the CPU have no tasks assigned, then it will go back to idle with pending softirqs and the NOHZ will rightfully complain. Process pending softirqs on return from flush_smp_call_function_queue(). Fixes: b2a02fc43a1f4 ("smp: Optimize send_call_function_single_ipi()") Reported-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210123201027.3262800-2-bigeasy@linutronix.de
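A hedged sketch of the fix (the exact placement and IRQ handling in the real function may differ):

    void flush_smp_call_function_from_idle(void)
    {
            unsigned long flags;

            /* run the SMP function calls queued by the remote CPU */
            local_irq_save(flags);
            flush_smp_call_function_queue(true);

            /* sketch of the fix: drain softirqs the calls may have
             * raised, since no IRQ tail will do it on this wakeup path */
            if (local_softirq_pending())
                    do_softirq();
            local_irq_restore(flags);
    }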
2021-02-17  sched: Harden PREEMPT_DYNAMIC  (Peter Zijlstra; 4 files, -8/+8)
Use the new EXPORT_STATIC_CALL_TRAMP() / static_call_mod() to unexport the static_call_key for the PREEMPT_DYNAMIC calls such that modules can no longer update these calls. Having modules change/hijack the preemption calls would be horrible. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2021-02-17  static_call: Allow module use without exposing static_call_key  (Josh Poimboeuf; 7 files, -11/+149)
When exporting static_call_key with EXPORT_STATIC_CALL*(), the module can use static_call_update() to change the function called. This is not desirable in general. Not exporting static_call_key however also disallows usage of static_call(), since objtool needs the key to construct the static_call_site. Solve this by allowing objtool to create the static_call_site using the trampoline address when it builds a module and cannot find the static_call_key symbol. The module loader will then try to map the trampoline back to a key before it constructs the normal sites list. Doing this requires a trampoline -> key association, so add another magic section that keeps those. Originally-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210127231837.ifddpn7rhwdaepiu@treble
2021-02-17  sched: Add /debug/sched_preempt  (Peter Zijlstra; 1 file, -9/+126)
Add a debugfs file to muck about with the preempt mode at runtime. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/YAsGiUYf6NyaTplX@hirez.programming.kicks-ass.net
2021-02-17  preempt/dynamic: Support dynamic preempt with preempt= boot option  (Peter Zijlstra (Intel); 1 file, -1/+67)
Support the preempt= boot option and patch the static call sites accordingly. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210118141223.123667-9-frederic@kernel.org
2021-02-17  preempt/dynamic: Provide irqentry_exit_cond_resched() static call  (Peter Zijlstra (Intel); 2 files, -1/+13)
Provide a static call to control IRQ preemption (called in CONFIG_PREEMPT) so that we can override its behaviour when preempt= is overridden. Since the default behaviour is full preemption, its call is initialized to provide IRQ preemption when preempt= isn't passed. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210118141223.123667-8-frederic@kernel.org
2021-02-17  preempt/dynamic: Provide preempt_schedule[_notrace]() static calls  (Peter Zijlstra (Intel); 2 files, -8/+38)
Provide static calls to control preempt_schedule[_notrace]() (called in CONFIG_PREEMPT) so that we can override their behaviour when preempt= is overridden. Since the default behaviour is full preemption, both their calls are initialized to the arch-provided wrapper, if any. [fweisbec: only define static calls when PREEMPT_DYNAMIC, make it less dependent on x86 with __preempt_schedule_func] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210118141223.123667-7-frederic@kernel.org
2021-02-17  preempt/dynamic: Provide cond_resched() and might_resched() static calls  (Peter Zijlstra (Intel); 3 files, -10/+56)
Provide static calls to control cond_resched() (called in !CONFIG_PREEMPT) and might_resched() (called in CONFIG_PREEMPT_VOLUNTARY) so that we can override their behaviour when preempt= is overridden. Since the default behaviour is full preemption, both their calls are ignored when preempt= isn't passed. [fweisbec: branch might_resched() directly to __cond_resched(), only define static calls when PREEMPT_DYNAMIC] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210118141223.123667-6-frederic@kernel.org
2021-02-17  preempt: Introduce CONFIG_PREEMPT_DYNAMIC  (Michal Hocko; 4 files, -0/+36)
Preemption mode selection is currently hardcoded in Kconfig choices. Introduce a dedicated option to tune the preemption flavour at boot time. This will only be available on architectures that efficiently support static calls, in order not to tempt users with a feature whose additional overhead might be prohibitive or undesirable. CONFIG_PREEMPT_DYNAMIC is automatically selected by CONFIG_PREEMPT if the architecture provides the necessary support (CONFIG_STATIC_CALL_INLINE, CONFIG_GENERIC_ENTRY, and __preempt_schedule_function() / __preempt_schedule_notrace_function()). Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Michal Hocko <mhocko@suse.com> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> [peterz: relax requirement to HAVE_STATIC_CALL] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210118141223.123667-5-frederic@kernel.org
2021-02-17  static_call: Provide DEFINE_STATIC_CALL_RET0()  (Frederic Weisbecker; 1 file, -8/+14)
DECLARE_STATIC_CALL() must pass the original function targeted for a given static call. But DEFINE_STATIC_CALL() may want to initialize it as off. In this case we can't pass NULL (for functions without return value) or __static_call_return0 (for functions returning a value) directly to DEFINE_STATIC_CALL(), as that may trigger a static call redeclaration with a different function prototype. Nor can type casts work around that, as they don't get along with typeof(). The proper way to do that for functions that don't return a value is to use DEFINE_STATIC_CALL_NULL(). But functions returning an actual value don't have an equivalent yet. Provide DEFINE_STATIC_CALL_RET0() to solve this situation. Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210118141223.123667-3-frederic@kernel.org
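A sketch of the intended usage; the might_resched name mirrors how the scheduler series pairs this with __cond_resched(), though treat the exact call sites as illustrative:

    /* Declared against the real target so the prototype is right... */
    DECLARE_STATIC_CALL(might_resched, __cond_resched);

    /* ...but defined "off": calls return 0 without invoking anything. */
    DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);

    /* hypothetical call site */
    static_call(might_resched)();   /* no-op until updated */

    /* later, e.g. when preempt=voluntary is selected at boot:
     * static_call_update(might_resched, __cond_resched); */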
2021-02-17  static_call/x86: Add __static_call_return0()  (Peter Zijlstra; 3 files, -2/+32)
Provide a stub function that returns 0 and wire up the static call site patching to replace the CALL with a single 5-byte instruction that clears %RAX, the return-value register. The function can be cast to any function pointer type that has a single %RAX return (including pointers). Also provide a version that returns an int for convenience. We are clearing the entire %RAX register in any case, whether the return value is 32 or 64 bits, since %RAX is always a scratch register anyway. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210118141223.123667-2-frederic@kernel.org
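The generic stub itself is presumably as simple as it sounds; a sketch:

    /* Generic fallback: any static call pointed here just returns 0.
     * On x86, per the text above, the call-site patching replaces the
     * CALL with a clear of %RAX padded to fill the 5-byte call slot. */
    long __static_call_return0(void)
    {
            return 0;
    }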
2021-02-17  static_call: Pull some static_call declarations to the type headers  (Peter Zijlstra; 3 files, -21/+54)
Some static call declarations are going to be needed on low level header files. Move the necessary material to the dedicated static call types header to avoid inclusion dependency hell. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210118141223.123667-4-frederic@kernel.org
2021-02-17  sched/core: Update task_prio() function header  (Dietmar Eggemann; 1 file, -2/+6)
The description of the RT offset and the values for 'normal' tasks need updating. Moreover, there are DL tasks now. task_prio() has to stay as it is to guarantee compatibility with the /proc/<pid>/stat priority field: # cat /proc/<pid>/stat | awk '{ print $18; }' Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210128131040.296856-4-dietmar.eggemann@arm.com
2021-02-17  sched: Remove USER_PRIO, TASK_USER_PRIO and MAX_USER_PRIO  (Dietmar Eggemann; 3 files, -11/+2)
The only remaining use of MAX_USER_PRIO (and USER_PRIO) is the SCALE_PRIO() definition in the PowerPC Cell architecture's Synergistic Processor Unit (SPU) scheduler. TASK_USER_PRIO isn't used anymore. Commit fe443ef2ac42 ("[POWERPC] spusched: Dynamic timeslicing for SCHED_OTHER") copied SCALE_PRIO() from the task scheduler in v2.6.23. Commit a4ec24b48dde ("sched: tidy up SCHED_RR") removed it from the task scheduler in v2.6.24. Commit 3ee237dddcd8 ("sched/prio: Add 3 macros of MAX_NICE, MIN_NICE and NICE_WIDTH in prio.h") introduced NICE_WIDTH much later. With:

  MAX_USER_PRIO = USER_PRIO(MAX_PRIO)
                = MAX_PRIO - MAX_RT_PRIO
  MAX_PRIO      = MAX_RT_PRIO + NICE_WIDTH

it follows that:

  MAX_USER_PRIO = MAX_RT_PRIO + NICE_WIDTH - MAX_RT_PRIO
                = NICE_WIDTH

MAX_USER_PRIO can therefore be replaced by NICE_WIDTH, allowing removal of all the {*_}USER_PRIO defines. Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210128131040.296856-3-dietmar.eggemann@arm.com
2021-02-17  sched: Remove MAX_USER_RT_PRIO  (Dietmar Eggemann; 2 files, -12/+4)
Commit d46523ea32a7 ("[PATCH] fix MAX_USER_RT_PRIO and MAX_RT_PRIO") was introduced due to a small time period in which the realtime patch set was using different values for MAX_USER_RT_PRIO and MAX_RT_PRIO. This is no longer true, i.e. now MAX_RT_PRIO == MAX_USER_RT_PRIO. Get rid of MAX_USER_RT_PRIO and make everything use MAX_RT_PRIO instead. Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210128131040.296856-2-dietmar.eggemann@arm.com
2021-02-17  sched/topology: Fix sched_domain_topology_level alloc in sched_init_numa()  (Dietmar Eggemann; 1 file, -1/+1)
Commit "sched/topology: Make sched_init_numa() use a set for the deduplicating sort" allocates 'i + nr_levels (level)' instead of 'i + nr_levels + 1' sched_domain_topology_level. This led to an Oops (on Arm64 juno with CONFIG_SCHED_DEBUG): sched_init_domains build_sched_domains() __free_domain_allocs() __sdt_free() { ... for_each_sd_topology(tl) ... sd = *per_cpu_ptr(sdd->sd, j); <-- ... } Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Vincent Guittot <vincent.guittot@linaro.org> Tested-by: Barry Song <song.bao.hua@hisilicon.com> Link: https://lkml.kernel.org/r/6000e39e-7d28-c360-9cd6-8798fd22a9bf@arm.com