path: root/arch/x86/kernel/kprobes.c
author     Jiri Olsa <jolsa@redhat.com>  2011-02-21 15:25:13 +0100
committer  Ingo Molnar <mingo@elte.hu>   2011-03-08 17:22:12 +0100
commit     2a8247a2600c3e087a568fc68a6ec4eedac27ef1 (patch)
tree       df834946650e392288b93e318377702aaa9fe055 /arch/x86/kernel/kprobes.c
parent     x86: Separate out entry text section (diff)
kprobes: Disabling optimized kprobes for entry text section
You can crash the kernel (with root/admin privileges) using the kprobe tracer by running:

  echo "p system_call_after_swapgs" > ./kprobe_events
  echo 1 > ./events/kprobes/enable

The reason is that at the system_call_after_swapgs label the kernel stack is not yet set up. If optimized kprobes are enabled, the user space stack is used in this case (see the optimized kprobe template) and this might result in a crash. There are several places like this throughout the entry code (entry_$BIT). Since there seems to be no reasonable/maintainable way to disable optimization only at the places where the stack is not ready, I switched off kprobe optimization for the whole entry text section.

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: acme@redhat.com
Cc: fweisbec@gmail.com
Cc: ananth@in.ibm.com
Cc: davem@davemloft.net
Cc: a.p.zijlstra@chello.nl
Cc: eric.dumazet@gmail.com
Cc: 2nddept-manager@sdl.hitachi.co.jp
LKML-Reference: <1298298313-5980-3-git-send-email-jolsa@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
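For illustration only, here is a minimal user-space sketch of the exclusion test this patch adds to can_optimize(). In the kernel, the bounds come from the linker-script symbols __entry_text_start and __entry_text_end (introduced by the parent commit); the addresses, the helper name can_optimize_addr and the main() driver below are stand-ins invented for this sketch, not kernel code.

/*
 * Minimal sketch of the range check: a probe address that falls inside
 * the entry text section is reported as non-optimizable.  The section
 * bounds are stand-in values; in the kernel they are linker symbols.
 */
#include <stdbool.h>
#include <stdio.h>

static const unsigned long entry_text_start = 0xffffffff81600000UL; /* stand-in */
static const unsigned long entry_text_end   = 0xffffffff81602000UL; /* stand-in */

static bool can_optimize_addr(unsigned long paddr)
{
	/* Do not optimize in the entry code: the stack may not be set up. */
	if (paddr >= entry_text_start && paddr < entry_text_end)
		return false;
	/* Further checks (symbol size, instruction decoding) omitted here. */
	return true;
}

int main(void)
{
	unsigned long in_entry  = 0xffffffff81600040UL; /* pretend: system_call_after_swapgs */
	unsigned long elsewhere = 0xffffffff81234560UL; /* pretend: ordinary kernel text */

	printf("0x%lx -> %s\n", in_entry,
	       can_optimize_addr(in_entry) ? "optimizable" : "not optimizable");
	printf("0x%lx -> %s\n", elsewhere,
	       can_optimize_addr(elsewhere) ? "optimizable" : "not optimizable");
	return 0;
}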
Diffstat (limited to 'arch/x86/kernel/kprobes.c')
-rw-r--r--  arch/x86/kernel/kprobes.c  8
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index d91c477b3f62..c969fd9d1566 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -1276,6 +1276,14 @@ static int __kprobes can_optimize(unsigned long paddr)
 	if (!kallsyms_lookup_size_offset(paddr, &size, &offset))
 		return 0;
 
+	/*
+	 * Do not optimize in the entry code due to the unstable
+	 * stack handling.
+	 */
+	if ((paddr >= (unsigned long )__entry_text_start) &&
+	    (paddr <  (unsigned long )__entry_text_end))
+		return 0;
+
 	/* Check there is enough space for a relative jump. */
 	if (size - offset < RELATIVEJUMP_SIZE)
 		return 0;
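A note on the approach taken here: can_optimize() returning 0 does not reject the probe; the kprobe is still registered and fires through the regular breakpoint path, it is only excluded from jump optimization. Tying the check to the __entry_text_start/__entry_text_end section bounds from the parent commit ("x86: Separate out entry text section") keeps it maintainable, since individual entry labels do not need to be annotated. On kernels with debugfs enabled, the kprobes list (/sys/kernel/debug/kprobes/list) should no longer report probes placed in the entry code as optimized after this change.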