tracing: Robustify wait loop
author	Peter Zijlstra <peterz@infradead.org>
	Wed, 8 Oct 2014 16:51:10 +0000 (18:51 +0200)
committer	Steven Rostedt <rostedt@goodmis.org>
	Wed, 8 Oct 2014 23:51:01 +0000 (19:51 -0400)
commit	fe0e01c77dd9f7a60916aec2149d8a1182baf63c
tree	471eff9f1eefdacc887936f196c7be2e3f7af40d
parent	bfe01a5ba2490f299e1d2d5508cbbbadd897bbe9
tracing: Robustify wait loop

The pending nested sleep debugging triggered on the potential stale
TASK_INTERRUPTIBLE in this code.

While there, fix the loop such that we won't revert to a while(1)
yield() 'spin' loop if we ever get a spurious wakeup.

And fix the actual issue by properly terminating the 'wait' loop by
setting TASK_RUNNING.
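
The pattern described above can be sketched roughly as follows (illustrative only; the function name `wait_to_die` is hypothetical and the actual hunk in kernel/trace/trace_events.c may differ):

```c
/* Hedged sketch of the robust kthread wait loop this commit describes.
 * Not the verbatim patch; names and context are assumptions.
 */
static int wait_to_die(void *data)
{
	set_current_state(TASK_INTERRUPTIBLE);
	/* Re-arm the sleep state on every iteration so a spurious wakeup
	 * puts us back to sleep instead of degrading into a while(1)
	 * yield()-style spin. */
	while (!kthread_should_stop()) {
		schedule();
		set_current_state(TASK_INTERRUPTIBLE);
	}
	/* Properly terminate the wait loop: clear the now-stale
	 * TASK_INTERRUPTIBLE so we do not return to the caller while
	 * still marked as sleeping (which is what the nested sleep
	 * debugging flagged). */
	__set_current_state(TASK_RUNNING);
	return 0;
}
```

The key ordering is that `set_current_state()` is called before re-checking the exit condition, so a wakeup that arrives between the check and `schedule()` is not lost.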

Link: http://lkml.kernel.org/p/20141008165110.GA14547@worktop.programming.kicks-ass.net
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
kernel/trace/trace_events.c