path: root/arch/ia64
author     Linus Torvalds <torvalds@linux-foundation.org>  2022-06-03 16:13:25 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2022-06-03 16:13:25 -0700
commit     67850b7bdcd2803e10d019f0da5673a92139b43a (patch)
tree       3f8914ce44edd9b5dd3a55ca4fb47efe0a5a2552 /arch/ia64
parent     Merge tag 'kthread-cleanups-for-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace (diff)
parent     sched,signal,ptrace: Rework TASK_TRACED, TASK_STOPPED state (diff)
Merge tag 'ptrace_stop-cleanup-for-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace
Pull ptrace_stop cleanups from Eric Biederman:
 "While looking at the ptrace problems with PREEMPT_RT and the problems
  Peter Zijlstra was encountering with ptrace in his freezer rewrite I
  identified some cleanups to ptrace_stop that make sense on their own
  and make resolving the other problems much simpler.

  The biggest issue is the habit of the ptrace code to change
  task->__state from the tracer to suppress TASK_WAKEKILL from waking
  up the tracee. No other code in the kernel does that and it is
  straightforward to update signal_wake_up and friends to make that
  unnecessary.

  Peter's task freezer sets frozen tasks to a new state TASK_FROZEN and
  then restores them by calling "wake_up_state(t, TASK_FROZEN)", relying
  on the fact that all stopped states except the special stop states can
  tolerate spurious wake ups and recover their state.

  The state of stopped and traced tasks is changed to be stored in
  task->jobctl as well as in task->__state. This makes it possible for
  the freezer to recover tasks in these special states, as well as
  serving as a general cleanup. With a little more work in that
  direction I believe TASK_STOPPED can learn to tolerate spurious wake
  ups and become an ordinary stop state.

  The TASK_TRACED state has to remain a special state, as the registers
  for a process are only reliably available when the process is stopped
  in the scheduler. Fundamentally, ptrace needs access to the saved
  register values of a task.

  There are a bunch of semi-random ptrace-related cleanups that were
  found while looking at these issues.

  One cleanup that deserves to be called out is from commit 57b6de08b5f6
  ("ptrace: Admit ptrace_stop can generate spuriuos SIGTRAPs"). This
  makes a change that is technically user-space visible, in the handling
  of what happens to a tracee when a tracer dies unexpectedly. According
  to our testing and our understanding of userspace, nothing cares that
  spurious SIGTRAPs can be generated in that case"

* tag 'ptrace_stop-cleanup-for-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
  sched,signal,ptrace: Rework TASK_TRACED, TASK_STOPPED state
  ptrace: Always take siglock in ptrace_resume
  ptrace: Don't change __state
  ptrace: Admit ptrace_stop can generate spuriuos SIGTRAPs
  ptrace: Document that wait_task_inactive can't fail
  ptrace: Reimplement PTRACE_KILL by always sending SIGKILL
  signal: Use lockdep_assert_held instead of assert_spin_locked
  ptrace: Remove arch_ptrace_attach
  ptrace/xtensa: Replace PT_SINGLESTEP with TIF_SINGLESTEP
  ptrace/um: Replace PT_DTRACE with TIF_SINGLESTEP
  signal: Replace __group_send_sig_info with send_signal_locked
  signal: Rename send_signal send_signal_locked
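The central change described above (letting signal_wake_up() itself decide whether a wake-up should carry TASK_WAKEKILL, instead of the tracer rewriting the tracee's task->__state) can be sketched roughly as follows. This is a simplified illustration based on the pull-request description, not the literal merged code; the JOBCTL_PTRACE_FROZEN, JOBCTL_STOPPED and JOBCTL_TRACED bit names stand in for the task->jobctl bookkeeping the series describes:

	/*
	 * Simplified sketch (assumed names, not the verbatim v5.19 code):
	 * the waker decides whether TASK_WAKEKILL belongs in the wake-up
	 * mask, so a tracer never has to poke the tracee's __state.
	 */
	static inline void signal_wake_up(struct task_struct *t, bool fatal)
	{
		unsigned int state = 0;

		if (fatal && !(t->jobctl & JOBCTL_PTRACE_FROZEN)) {
			/* A killing signal may kick the task out of its stop. */
			t->jobctl &= ~(JOBCTL_STOPPED | JOBCTL_TRACED);
			state = TASK_WAKEKILL | __TASK_TRACED;
		}
		signal_wake_up_state(t, state);
	}

Because the stopped/traced condition is also recorded in task->jobctl, a spurious wake-up (for example the freezer's wake_up_state(t, TASK_FROZEN)) no longer loses track of the fact that the task should return to its stop.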
Diffstat (limited to 'arch/ia64')
-rw-r--r--  arch/ia64/include/asm/ptrace.h |  4 -
-rw-r--r--  arch/ia64/kernel/ptrace.c      | 57 ----
2 files changed, 0 insertions(+), 61 deletions(-)
diff --git a/arch/ia64/include/asm/ptrace.h b/arch/ia64/include/asm/ptrace.h
index a10a498eede1..402874489890 100644
--- a/arch/ia64/include/asm/ptrace.h
+++ b/arch/ia64/include/asm/ptrace.h
@@ -139,10 +139,6 @@ static inline long regs_return_value(struct pt_regs *regs)
 #define arch_ptrace_stop_needed() \
 	(!test_thread_flag(TIF_RESTORE_RSE))
 
-extern void ptrace_attach_sync_user_rbs (struct task_struct *);
-#define arch_ptrace_attach(child) \
-	ptrace_attach_sync_user_rbs(child)
-
 #define arch_has_single_step()  (1)
 #define arch_has_block_step()   (1)
 
diff --git a/arch/ia64/kernel/ptrace.c b/arch/ia64/kernel/ptrace.c
index 4fc6e38a8459..ab8aeb34d1d9 100644
--- a/arch/ia64/kernel/ptrace.c
+++ b/arch/ia64/kernel/ptrace.c
@@ -618,63 +618,6 @@ void ia64_sync_krbs(void)
 }
 
 /*
- * After PTRACE_ATTACH, a thread's register backing store area in user
- * space is assumed to contain correct data whenever the thread is
- * stopped. arch_ptrace_stop takes care of this on tracing stops.
- * But if the child was already stopped for job control when we attach
- * to it, then it might not ever get into ptrace_stop by the time we
- * want to examine the user memory containing the RBS.
- */
-void
-ptrace_attach_sync_user_rbs (struct task_struct *child)
-{
-	int stopped = 0;
-	struct unw_frame_info info;
-
-	/*
-	 * If the child is in TASK_STOPPED, we need to change that to
-	 * TASK_TRACED momentarily while we operate on it. This ensures
-	 * that the child won't be woken up and return to user mode while
-	 * we are doing the sync. (It can only be woken up for SIGKILL.)
-	 */
-
-	read_lock(&tasklist_lock);
-	if (child->sighand) {
-		spin_lock_irq(&child->sighand->siglock);
-		if (READ_ONCE(child->__state) == TASK_STOPPED &&
-		    !test_and_set_tsk_thread_flag(child, TIF_RESTORE_RSE)) {
-			set_notify_resume(child);
-
-			WRITE_ONCE(child->__state, TASK_TRACED);
-			stopped = 1;
-		}
-		spin_unlock_irq(&child->sighand->siglock);
-	}
-	read_unlock(&tasklist_lock);
-
-	if (!stopped)
-		return;
-
-	unw_init_from_blocked_task(&info, child);
-	do_sync_rbs(&info, ia64_sync_user_rbs);
-
-	/*
-	 * Now move the child back into TASK_STOPPED if it should be in a
-	 * job control stop, so that SIGCONT can be used to wake it up.
-	 */
-	read_lock(&tasklist_lock);
-	if (child->sighand) {
-		spin_lock_irq(&child->sighand->siglock);
-		if (READ_ONCE(child->__state) == TASK_TRACED &&
-		    (child->signal->flags & SIGNAL_STOP_STOPPED)) {
-			WRITE_ONCE(child->__state, TASK_STOPPED);
-		}
-		spin_unlock_irq(&child->sighand->siglock);
-	}
-	read_unlock(&tasklist_lock);
-}
-
-/*
  * Write f32-f127 back to task->thread.fph if it has been modified.
  */
 inline void