DEBUG: sched/fair: Fix sched_load_avg_cpu events for task_groups

The current sched_load_avg_cpu event traces the load for any cfs_rq that is
updated. This is not representative of the CPU load; instead, we should only
trace this event when the cfs_rq being updated belongs to the root_task_group.

Change-Id: I345c2f13f6b5718cb4a89beb247f7887ce97ed6b
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Author: Brendan Jackman
Date:   2017-01-10 11:31:01 +00:00
Commit: 1cb392e103 (parent: 7f18f0963d)

@@ -2726,6 +2726,8 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
 	cfs_rq->load_last_update_time_copy = sa->last_update_time;
 #endif
 
-	trace_sched_load_avg_cpu(cpu_of(rq_of(cfs_rq)), cfs_rq);
+	/* Trace CPU load, unless cfs_rq belongs to a non-root task_group */
+	if (cfs_rq == &rq_of(cfs_rq)->cfs)
+		trace_sched_load_avg_cpu(cpu_of(rq_of(cfs_rq)), cfs_rq);
 
 	return decayed || removed;
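
The root check works because each CPU's struct rq embeds the root task_group's
cfs_rq directly as its ->cfs member, so a pointer comparison is sufficient to
identify it. A minimal sketch of that test, assuming the usual rq_of() helper
from kernel/sched/fair.c (cfs_rq_is_root() is a hypothetical name, not part of
this patch):

	/*
	 * Sketch only: returns true for the cfs_rq embedded in the CPU's
	 * struct rq, i.e. the root task_group's cfs_rq for that CPU.
	 */
	static inline bool cfs_rq_is_root(struct cfs_rq *cfs_rq)
	{
		return cfs_rq == &rq_of(cfs_rq)->cfs;
	}

With such a helper the new condition could read if (cfs_rq_is_root(cfs_rq));
the patch simply open-codes the same comparison at the trace site.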