author     Peter Zijlstra <peterz@infradead.org>    2015-09-28 18:52:36 +0200
committer  Ingo Molnar <mingo@kernel.org>           2015-10-06 17:08:20 +0200
commit     499d79559ffe4b9c0c3031752f6a40abd532fb75 (patch)
tree       daf8531a25c735f418dc95ddee834f43996361a9 /kernel/sched
parent     sched/core: Kill PREEMPT_ACTIVE (diff)
sched/core: More notrace annotations
preempt_schedule_common() is marked notrace, but it does not use the
_notrace() preempt_count functions, and __schedule() is also not marked
notrace, which means that it is perfectly possible to end up in the
tracer from preempt_schedule_common().

Steve says:

| Yep, there's some history to this. This was originally the issue that
| caused function tracing to go into infinite recursion. But now we have
| preempt_schedule_notrace(), which is used by the function tracer, and
| that function must not be traced till preemption is disabled.
|
| Now if function tracing is running and we take an interrupt when
| NEED_RESCHED is set, it calls
|
|   preempt_schedule_common() (not traced)
|
| But then that calls preempt_disable() (traced)
|
| The function tracer calls preempt_disable_notrace() followed by
| preempt_enable_notrace(), which will see NEED_RESCHED set, and it will
| call preempt_schedule_notrace(), which stops the recursion, but
| still calls __schedule() here, and that means when we return, we call
| the __schedule() from preempt_schedule_common().
|
| That said, I prefer this patch. Preemption is disabled before calling
| __schedule(), and we get rid of a one-round recursion with the
| scheduler.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
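To make the "one-round recursion" above concrete, here is a minimal userspace
sketch (not kernel code; every name below is a hypothetical stand-in for the
real helpers). The "traced" preempt_disable stand-in fires a tracer hook, and
the hook, like the real function tracer, schedules once itself when
NEED_RESCHED is set; so the pre-patch shape of the path ends up entering the
scheduler model twice per iteration, while the all-notrace shape enters it
exactly once.

/* Hypothetical userspace model of the recursion described above. */
#include <stdio.h>
#include <stdbool.h>

static bool need_resched = true;   /* pretend NEED_RESCHED is set        */
static int  sched_calls;           /* counts entries into the "scheduler" */

static void schedule_model(void)
{
	need_resched = false;
	sched_calls++;
}

/* Stand-in for the function tracer entry hook: it runs on every call to a
 * traced function and, like preempt_schedule_notrace(), schedules when
 * NEED_RESCHED is still set. */
static void tracer_hook(const char *fn)
{
	printf("tracer hit in %s\n", fn);
	if (need_resched)
		schedule_model();
}

static void preempt_disable_traced(void)        { tracer_hook(__func__); }
static void preempt_disable_notrace_model(void) { /* no tracer hook */ }

/* Pre-patch shape: the traced helper drags the tracer (and one extra
 * schedule) into the path before the "real" schedule call runs. */
static void preempt_schedule_common_old(void)
{
	preempt_disable_traced();
	schedule_model();
}

/* Post-patch shape: only _notrace helpers, so a single schedule. */
static void preempt_schedule_common_new(void)
{
	preempt_disable_notrace_model();
	schedule_model();
}

int main(void)
{
	preempt_schedule_common_old();
	printf("old path: %d scheduler entries\n", sched_calls);   /* 2 */

	need_resched = true;
	sched_calls = 0;
	preempt_schedule_common_new();
	printf("new path: %d scheduler entries\n", sched_calls);   /* 1 */
	return 0;
}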
Diffstat (limited to 'kernel/sched')
-rw-r--r--  kernel/sched/core.c  |  6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ca260cc5d881..98c4cf8182cf 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3057,7 +3057,7 @@ again:
*
* WARNING: must be called with preemption disabled!
*/
-static void __sched __schedule(bool preempt)
+static void __sched notrace __schedule(bool preempt)
{
	struct task_struct *prev, *next;
	unsigned long *switch_count;
@@ -3203,9 +3203,9 @@ void __sched schedule_preempt_disabled(void)
static void __sched notrace preempt_schedule_common(void)
{
	do {
-		preempt_disable();
+		preempt_disable_notrace();
		__schedule(true);
-		sched_preempt_enable_no_resched();
+		preempt_enable_no_resched_notrace();
		/*
		 * Check again in case we missed a preemption opportunity