fix ptrace slowness
This patch fixes bug #12208:
  Bug-Entry : http://bugzilla.kernel.org/show_bug.cgi?id=12208
  Subject   : uml is very slow on 2.6.28 host

This turned out to be not a scheduler regression, but an already existing
problem in ptrace being triggered by subtle scheduler changes.

The problem is this:

 - task A is ptracing task B
 - task B stops on a trace event
 - task A is woken up and preempts task B
 - task A calls ptrace on task B, which does ptrace_check_attach()
 - this calls wait_task_inactive(), which sees that task B is still on the runq
 - task A goes to sleep for a jiffy
 - ...

Since UML does lots of the above sequences, those jiffies quickly add up to
make it slow as hell.

This patch solves this by not rescheduling in read_unlock() after
ptrace_stop() has woken up the tracer.

Thanks to Oleg Nesterov and Ingo Molnar for the feedback.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
CC: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
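For context, the workload described above can be approximated from userspace with an ordinary tracer that issues one ptrace request per stop; every such request goes through ptrace_check_attach() and wait_task_inactive() in the kernel. The program below is only an illustrative sketch of that kind of stop/resume loop (it is not part of the patch, and the amount of busy work in the child is arbitrary):

#include <signal.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	pid_t child = fork();

	if (child == 0) {
		/* Tracee: request tracing, stop, then do some busy work. */
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		raise(SIGSTOP);
		for (volatile long i = 0; i < 100000; i++)
			;
		_exit(0);
	}

	/*
	 * Tracer: wait for the initial stop, then single-step the child.
	 * Each iteration is a stop/resume round trip of the kind UML
	 * performs constantly, and each one could cost a one-jiffy sleep
	 * in wait_task_inactive() on the affected kernels.
	 */
	int status;
	long steps = 0;

	waitpid(child, &status, 0);
	while (WIFSTOPPED(status)) {
		if (ptrace(PTRACE_SINGLESTEP, child, NULL, NULL) == -1)
			break;
		if (waitpid(child, &status, 0) == -1)
			break;
		steps++;
	}
	printf("performed %ld single-step round trips\n", steps);
	return 0;
}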
parent:  b0dcb4a91d
commit:  53da1d9456
1 changed file with 8 additions and 0 deletions
kernel/signal.c
@@ -1575,7 +1575,15 @@ static void ptrace_stop(int exit_code, int clear_code, siginfo_t *info)
 	read_lock(&tasklist_lock);
 	if (may_ptrace_stop()) {
 		do_notify_parent_cldstop(current, CLD_TRAPPED);
+		/*
+		 * Don't want to allow preemption here, because
+		 * sys_ptrace() needs this task to be inactive.
+		 *
+		 * XXX: implement read_unlock_no_resched().
+		 */
+		preempt_disable();
 		read_unlock(&tasklist_lock);
+		preempt_enable_no_resched();
 		schedule();
 	} else {
 		/*
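The XXX comment in the hunk suggests factoring the three calls into a dedicated helper. A minimal sketch of what such a read_unlock_no_resched() might look like, assuming it simply wraps the pattern used by the patch (the helper is hypothetical and not part of this commit):

/*
 * Hypothetical helper sketched from the XXX note: drop the read lock
 * without creating a preemption point, so the stopping tracee reaches
 * schedule() before the freshly woken tracer can preempt it.
 */
static inline void read_unlock_no_resched(rwlock_t *lock)
{
	preempt_disable();
	read_unlock(lock);
	preempt_enable_no_resched();
}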