path: root/kernel/trace/trace_hwlat.c
author      Viktor Rosendahl (BMW) <viktor.rosendahl@gmail.com>    2019-10-09 00:08:21 +0200
committer   Steven Rostedt (VMware) <rostedt@goodmis.org>          2019-11-13 09:37:28 -0500
commit      91edde2e6ae1dd5e33812f076f3fe4cb7ccbfdd0 (patch)
tree        7cef22a683ec5d90c520e0d8dab7159bddcd5872 /kernel/trace/trace_hwlat.c
parent      ftrace: Add information on number of page groups allocated (diff)
ftrace: Implement fs notification for tracing_max_latency
This patch implements the feature that the tracing_max_latency file, e.g. /sys/kernel/debug/tracing/tracing_max_latency, will receive notifications through the fsnotify framework when a new latency is available.

One particularly interesting use of this facility is when enabling threshold tracing, through /sys/kernel/debug/tracing/tracing_thresh, together with the preempt/irqsoff tracers. This makes it possible to implement a user space program that can, with equal probability, obtain traces of latencies that occur immediately after each other in spite of the fact that the preempt/irqsoff tracers operate in overwrite mode.

This facility works with the hwlat, preempt/irqsoff, and wakeup tracers.

The tracers may call latency_fsnotify() from places such as __schedule() or do_idle(); this makes it impossible to call queue_work() directly without risking a deadlock. The same would happen with a softirq, kernel thread or tasklet. For this reason we use the irq_work mechanism to call queue_work().

This patch creates a new workqueue. The reason for doing this is that I wanted to use the WQ_UNBOUND and WQ_HIGHPRI flags. My thinking was that WQ_UNBOUND might help with the latency in some important cases.

If we use:

  queue_work(system_highpri_wq, &tr->fsnotify_work);

then the work will (almost) always execute on the same CPU, but if we are unlucky that CPU could be too busy while there could be another CPU in the system that would be able to process the work soon enough. queue_work_on() could be used to queue the work on another CPU, but it seems difficult to select the right CPU.

Link: http://lkml.kernel.org/r/20191008220824.7911-2-viktor.rosendahl@gmail.com

Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Viktor Rosendahl (BMW) <viktor.rosendahl@gmail.com>
[ Added max() to have one compare for max latency ]
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
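The deferral chain described above lives in kernel/trace/trace.c, which is not part of this diff. Below is a minimal sketch of the mechanism, for illustration only: apart from latency_fsnotify(), which the commit message names, the identifiers are made up for the example, and the fields shown in a stand-in struct actually live in struct trace_array in the patch (tr->fsnotify_work).

/*
 * Illustrative sketch of the irq_work -> workqueue -> fsnotify chain.
 * Not the patch's actual code; names other than latency_fsnotify()
 * are hypothetical.
 */
#include <linux/errno.h>
#include <linux/irq_work.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>

/* Dedicated workqueue so the work can be unbound and high priority. */
static struct workqueue_struct *fsnotify_wq;

/* Stand-in for the fields the patch keeps in struct trace_array. */
struct latency_notify {
	struct irq_work		fsnotify_irqwork;	/* step 2 */
	struct work_struct	fsnotify_work;		/* step 3 */
};

/* Step 3: runs in process context, where fsnotify/VFS work is safe. */
static void latency_fsnotify_workfn(struct work_struct *work)
{
	/* Generate the fsnotify event on tracing_max_latency here. */
}

/* Step 2: irq_work callback, hard irq context; queue_work() is safe. */
static void latency_fsnotify_irqwork(struct irq_work *iwork)
{
	struct latency_notify *n;

	n = container_of(iwork, struct latency_notify, fsnotify_irqwork);
	queue_work(fsnotify_wq, &n->fsnotify_work);
}

/*
 * Step 1: may be reached from __schedule() or do_idle(), where calling
 * queue_work() directly could deadlock; only raise the irq_work here.
 */
static void latency_fsnotify(struct latency_notify *n)
{
	irq_work_queue(&n->fsnotify_irqwork);
}

static int latency_fsnotify_init(struct latency_notify *n)
{
	fsnotify_wq = alloc_workqueue("latency_fsnotify",
				      WQ_UNBOUND | WQ_HIGHPRI, 0);
	if (!fsnotify_wq)
		return -ENOMEM;

	init_irq_work(&n->fsnotify_irqwork, latency_fsnotify_irqwork);
	INIT_WORK(&n->fsnotify_work, latency_fsnotify_workfn);
	return 0;
}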
Diffstat (limited to 'kernel/trace/trace_hwlat.c')
-rw-r--r--    kernel/trace/trace_hwlat.c    11
1 file changed, 7 insertions, 4 deletions
diff --git a/kernel/trace/trace_hwlat.c b/kernel/trace/trace_hwlat.c
index 862f4b0139fc..63526670605a 100644
--- a/kernel/trace/trace_hwlat.c
+++ b/kernel/trace/trace_hwlat.c
@@ -237,6 +237,7 @@ static int get_sample(void)
 	/* If we exceed the threshold value, we have found a hardware latency */
 	if (sample > thresh || outer_sample > thresh) {
 		struct hwlat_sample s;
+		u64 latency;
 
 		ret = 1;
 
@@ -253,11 +254,13 @@ static int get_sample(void)
 		s.nmi_count = nmi_count;
 		trace_hwlat_sample(&s);
 
+		latency = max(sample, outer_sample);
+
 		/* Keep a running maximum ever recorded hardware latency */
-		if (sample > tr->max_latency)
-			tr->max_latency = sample;
-		if (outer_sample > tr->max_latency)
-			tr->max_latency = outer_sample;
+		if (latency > tr->max_latency) {
+			tr->max_latency = latency;
+			latency_fsnotify(tr);
+		}
 	}
 
 out:
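For illustration, a minimal user space consumer along the lines of the program the commit message describes: it watches tracing_max_latency with inotify and reads the file whenever a new maximum is signalled. This program is not part of the patch; the path is taken from the commit message, and it assumes the kernel-side hook delivers an IN_MODIFY event on that file.

/* Hypothetical user space watcher for tracing_max_latency. */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/inotify.h>
#include <unistd.h>

#define MAX_LAT "/sys/kernel/debug/tracing/tracing_max_latency"

int main(void)
{
	char buf[sizeof(struct inotify_event) + NAME_MAX + 1];
	char lat[64];
	FILE *f;
	int fd = inotify_init();

	if (fd < 0 || inotify_add_watch(fd, MAX_LAT, IN_MODIFY) < 0) {
		perror("inotify");
		return EXIT_FAILURE;
	}

	for (;;) {
		/* Block until the kernel signals a new max latency. */
		if (read(fd, buf, sizeof(buf)) < 0)
			break;

		f = fopen(MAX_LAT, "r");
		if (f && fgets(lat, sizeof(lat), f))
			printf("new max latency: %s", lat);
		if (f)
			fclose(f);
		/* A real program would stop tracing and save the trace here. */
	}
	return 0;
}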