Revert "ANDROID: sched/tune: Initialize raw_spin_lock in boosted_groups"

This reverts commit c5616f2f874faa20b59b116177b99bf3948586df.

If we re-initialize the per-cpu boostgroup spinlock every time
a new boosted cgroup is added, we can easily wipe out (re-init)
the spinlock struct while another CPU is inside a critical
section protected by it. Adding a boost group should only set up
the per-cpu boostgroup data; the spinlock initialization needs
to happen exactly once, which we already do in a
postcore_initcall.
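
For reference, the one-time setup this relies on looks roughly
like the sketch below. The init function name and its exact body
are illustrative assumptions; only cpu_boost_groups, struct
boost_groups, bg->lock, and the use of postcore_initcall come
from the code this commit touches:

	/*
	 * Sketch of the boot-time init (illustrative): because a
	 * postcore_initcall runs exactly once, bg->lock is never
	 * re-initialized behind a lock holder's back.
	 */
	static int __init schedtune_init(void)
	{
		struct boost_groups *bg;
		int cpu;

		for_each_possible_cpu(cpu) {
			bg = &per_cpu(cpu_boost_groups, cpu);
			raw_spin_lock_init(&bg->lock);
		}

		return 0;
	}
	postcore_initcall(schedtune_init);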

For example:

     -------- CPU 0 --------       | -------- CPU 1 --------
cgroupX boost group added          |
schedtune_enqueue_task             |
  acquires(bg->lock)               | cgroupY boost group added
                                   |   for_each_cpu()
                                   |     raw_spin_lock_init(bg->lock)
  releases(bg->lock)               |
      BUG (already unlocked)       |

This results in the following BUG from the debug spinlock code:
	BUG: spinlock already unlocked on CPU#5, rcuop/6/68
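
The enqueue side of the race, sketched from the function named in
the diagram above (the signature, the irqsave variant, and the
body are simplified assumptions):

	void schedtune_enqueue_task(struct task_struct *p, int cpu)
	{
		struct boost_groups *bg = &per_cpu(cpu_boost_groups, cpu);
		unsigned long flags;

		raw_spin_lock_irqsave(&bg->lock, flags);
		/* ... account the task in bg->group[] ... */
		/*
		 * If another CPU runs raw_spin_lock_init(&bg->lock)
		 * at this point, the unlock below operates on a
		 * freshly initialized (already unlocked) lock and
		 * the debug spinlock code reports the BUG above.
		 */
		raw_spin_unlock_irqrestore(&bg->lock, flags);
	}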

CRs-fixed: 2113062
Change-Id: I1cd780d9ba5801cf99bfe46504b18a88e45f17a8
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>

@@ -829,7 +829,6 @@ schedtune_boostgroup_init(struct schedtune *st)
 		bg = &per_cpu(cpu_boost_groups, cpu);
 		bg->group[st->idx].boost = 0;
 		bg->group[st->idx].tasks = 0;
-		raw_spin_lock_init(&bg->lock);
 	}

 	return 0;
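
After the revert, the loop only resets the per-cpu group slot and
leaves bg->lock alone. For readers without the full file, the
enclosing function reconstructed from the hunk above (the lines
outside the hunk are assumptions based on the surrounding code):

	static int
	schedtune_boostgroup_init(struct schedtune *st)
	{
		struct boost_groups *bg;
		int cpu;

		/*
		 * Set up only the per-cpu boostgroup data; bg->lock
		 * was already initialized once at boot and must not
		 * be touched here.
		 */
		for_each_possible_cpu(cpu) {
			bg = &per_cpu(cpu_boost_groups, cpu);
			bg->group[st->idx].boost = 0;
			bg->group[st->idx].tasks = 0;
		}

		return 0;
	}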