author    Peter Zijlstra <peterz@infradead.org>    2015-09-28 17:57:39 +0200
committer Ingo Molnar <mingo@kernel.org>           2015-10-06 17:08:17 +0200
commit    1dc0fffc48af94513e621f95dff730ed4f7317ec (patch)
tree      602dbd67f0565830ea99196d71e7f47b17d849e3 /kernel/sched
parent    sched/core: Stop setting PREEMPT_ACTIVE (diff)
download  linux-dev-1dc0fffc48af94513e621f95dff730ed4f7317ec.tar.xz
          linux-dev-1dc0fffc48af94513e621f95dff730ed4f7317ec.zip
sched/core: Robustify preemption leak checks
When we warn about a preempt_count leak, reset the preempt_count to the
known good value so that the problem does not ripple forward.

This is most important on x86, which has a per-CPU preempt_count that is
not saved/restored (after this series). So if you schedule with an invalid
(!2*PREEMPT_DISABLE_OFFSET) preempt_count, the next task is messed up too.

Enforcing this invariant limits the borkage to just the one task.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
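The idea can be illustrated outside the kernel. The sketch below is not kernel
code: preempt_count, PREEMPT_DISABLED and schedule_debug_sketch() are userspace
stand-ins, assuming a single "preemption disabled once" value that the scheduler
expects on entry. It only shows why clamping the counter after the warning keeps
the damage local to the offending task.

	/* Illustrative userspace sketch, not kernel code. */
	#include <stdio.h>

	#define PREEMPT_DISABLED 1	/* stand-in for the known good value */

	static int preempt_count = PREEMPT_DISABLED;

	static void schedule_debug_sketch(const char *task)
	{
		if (preempt_count != PREEMPT_DISABLED) {
			fprintf(stderr, "BUG: scheduling while atomic: %s, count=%d\n",
				task, preempt_count);
			/* The fix: reset to the known good value so the
			 * imbalance does not leak into the next task. */
			preempt_count = PREEMPT_DISABLED;
		}
	}

	int main(void)
	{
		preempt_count += 1;		/* simulate one leaked preempt_disable() */
		schedule_debug_sketch("taskA");	/* warns, then repairs the count */
		schedule_debug_sketch("taskB");	/* the next "task" sees a sane count */
		return 0;
	}

Without the reset, taskB would inherit the bogus count and warn (or misbehave)
as well, even though it did nothing wrong.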
Diffstat (limited to 'kernel/sched')
-rw-r--r--    kernel/sched/core.c    4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 6344d82a84f6..d6989f85c641 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2968,8 +2968,10 @@ static inline void schedule_debug(struct task_struct *prev)
 	BUG_ON(unlikely(task_stack_end_corrupted(prev)));
 #endif
 
-	if (unlikely(in_atomic_preempt_off()))
+	if (unlikely(in_atomic_preempt_off())) {
 		__schedule_bug(prev);
+		preempt_count_set(PREEMPT_DISABLED);
+	}
 	rcu_sleep_check();
 
 	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
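For readability, this is how the check in schedule_debug() reads with the hunk
above applied; the surrounding lines are taken from the diff context and the
rest of the function is elided:

	static inline void schedule_debug(struct task_struct *prev)
	{
	#ifdef CONFIG_SCHED_STACK_END_CHECK
		BUG_ON(unlikely(task_stack_end_corrupted(prev)));
	#endif

		/* Warn about the leak, then clamp back to the known good value. */
		if (unlikely(in_atomic_preempt_off())) {
			__schedule_bug(prev);
			preempt_count_set(PREEMPT_DISABLED);
		}
		rcu_sleep_check();

		profile_hit(SCHED_PROFILING, __builtin_return_address(0));
		/* ... */
	}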