Commit graph

94 commits

Author SHA1 Message Date
Pavankumar Kondeti
97fe3984e9 sched/walt: Fix the memory leak of idle task load pointers
The memory for task load pointers is allocated twice for each
idle thread except the boot CPU's. This happens during boot
from idle_threads_init()->idle_init() in the following 2 paths.

1. idle_init()->fork_idle()->copy_process()->
		sched_fork()->init_new_task_load()

2. idle_init()->fork_idle()-> init_idle()->init_new_task_load()

The memory allocation for all tasks happens through the 1st path,
so use the same path for idle tasks and kill the 2nd one. Since
the idle thread of the boot CPU does not go through fork_idle(),
allocate the memory for it separately.
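
A minimal sketch of the resulting allocation flow (the helper name
init_boot_cpu_idle_load() is illustrative, not the actual symbol):

    /* Path 1, taken by every task including secondary-CPU idle
     * threads forked via fork_idle() -> copy_process(): */
    int sched_fork(unsigned long clone_flags, struct task_struct *p)
    {
            init_new_task_load(p);  /* the single allocation point */
            return 0;
    }

    /* The boot CPU's idle thread never passes through fork_idle(),
     * so its load pointers are allocated explicitly, exactly once: */
    void __init init_boot_cpu_idle_load(void)
    {
            init_new_task_load(&init_task);         /* swapper/0 */
    }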

Change-Id: I4696a414ffe07d4114b56d326463026019e278f1
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
[schikk@codeaurora.org: resolved merge conflicts]
Signed-off-by: Swetha Chikkaboraiah <schikk@codeaurora.org>
2019-08-30 09:21:10 +02:00
liochen
8148b9d900 Synchronize codes for OnePlus5 & 5T OxygenOS 9.0.0
Kernel and device tree source code for the OnePlus 5 & 5T Android P device.

Change-Id: I84f40e66833ea1ce30eb1d9a710d6e1529e9e637
2018-12-26 11:02:39 +08:00
John Dias
bd4ac8e584 sched: walt: fix out-of-bounds access
A computation in update_top_tasks() is indexing
off the end of a top_tasks array. There's code
to limit the index in the computation, but it's
insufficient.
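
A sketch of the class of fix (top_tasks[] is from the text above;
NUM_LOAD_INDICES and the clamping helper are assumptions):

    /* top_tasks[] has NUM_LOAD_INDICES entries; an unclamped
     * load-to-index conversion can land one past the end. */
    static inline int clamp_top_task_index(int index)
    {
            if (index >= NUM_LOAD_INDICES)
                    index = NUM_LOAD_INDICES - 1;
            if (index < 0)
                    index = 0;
            return index;
    }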

Bug: 110529282
Change-Id: Idb5ff5e5800c014394bcb04638844bf1e057a40c
Signed-off-by: John Dias <joaodias@google.com>
[pkondeti@codeaurora.org: Backported to 4.4 for HMP scheduler]
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2018-08-08 09:09:01 +05:30
Pavankumar Kondeti
c6a5b958e6 sched/walt: Fix use after free in trace_sched_update_task_ravg()
commit 4d09122c1868 ("sched: Fix spinlock recursion in sched_exit()")
moved the freeing of a task's current and previous window arrays
outside the rq->lock. Another CPU can access these arrays in parallel
and end up using freed memory. For example,

CPU#0                                 CPU#1
----------------------------------    -------------------------------
sched_exit()                          try_to_wake_up() --> The task
                                                       wakes up on CPU#0
 task_rq_lock()                        set_task_cpu()
                                        fixup_busy_time() --> waiting for
                                                       CPU#0's rq->lock

 task_rq_unlock()                       fixup_busy_time() --> lock acquired
 free_task_load_ptrs()
  kfree(p->ravg.curr_window_cpu)         update_task_ravg() --> called on
                                                       current of CPU#0
                                          trace_sched_update_task_ravg()
                                                  --> access freed memory
  p->ravg.curr_window_cpu = NULL;

To fix this issue, the window array pointers must be set to NULL
before the memory is freed. Since this happens outside the lock,
memory barriers are needed on the write and read paths. A much
simpler alternative is to skip the update_task_ravg() trace point for
tasks that are marked as dead; the window stats of dead tasks are not
updated anyway. While at it, skip this trace point for newly created
tasks, whose window stats are also not updated.
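
A sketch of the simpler alternative, assuming exiting_task() and
is_new_task() style predicates (their exact names are assumptions):

    /* Dead and newly created tasks have no window stats to report,
     * so their tracepoint can be skipped entirely, avoiding any
     * access to the freed window arrays. */
    if (!exiting_task(p) && !is_new_task(p))
            trace_sched_update_task_ravg(p, rq, event, wallclock,
                                         irqtime);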

Change-Id: I4d7cb8a3cf7cf84270b09721140d35205643b7ab
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
[spathi@codeaurora.org: moved changes to hmp.c since EAS is not supported]
Signed-off-by: Srinivasarao P <spathi@codeaurora.org>
2018-05-02 21:46:48 -07:00
Linux Build Service Account
335cf65347 Merge "sched: Update tracepoint to include task info" 2018-01-09 15:40:35 -08:00
Puja Gupta
f9e96dfcb8 sched: Update tracepoint to include task info
Update the sched_get_task_cpu_cycles trace point to include the pid
and name of the task to help with debugging.

Change-Id: Ic307ebcf0a44c94bf0a2aa1a02b8aeff39010b29
Signed-off-by: Puja Gupta <pujag@codeaurora.org>
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2018-01-05 09:15:54 +05:30
Pavankumar Kondeti
9c933388d8 sched: Fix spinlock recursion in sched_exit()
The exiting task's prev_window and curr_window arrays are freed
with rq->lock acquired. The kfree() may wake up kswapd, and if the
kswapd wakeup needs the same rq->lock, we hit a deadlock. Fix this
issue by freeing these arrays after releasing the lock. Since the
task is already marked as exiting under the lock, delaying the
freeing of the current and previous window arrays has no side
effect.
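
A sketch of the reordering (exit_task_load() is a hypothetical name
for the under-lock bookkeeping, and the surrounding declarations are
omitted):

    rq = task_rq_lock(p, &flags);
    exit_task_load(p);      /* mark exiting while holding rq->lock */
    task_rq_unlock(rq, p, &flags);

    /* kfree() may wake kswapd, which can need this same rq->lock,
     * so the arrays are freed only after the lock is dropped. */
    kfree(p->ravg.curr_window_cpu);
    kfree(p->ravg.prev_window_cpu);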

Change-Id: I3282d91ba715765e38177b9d66be32aaed989303
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2017-12-30 11:51:59 +05:30
Linux Build Service Account
1d5844ba9d Merge "sched: hmp: Optimize cycle counter reads" 2017-06-06 13:21:50 -07:00
Linux Build Service Account
0d1b465cb8 Merge "sched: Fix load tracking bug to avoid adding phantom task demand" 2017-06-06 13:21:39 -07:00
Vikram Mulukutla
259636e7d0 sched: hmp: Optimize cycle counter reads
Reading the cycle counter is an expensive operation that requires
locking across all CPUs in a cluster. Optimize this by returning the
same value when the delta between two reads is zero (i.e. the two
reads happen in the same sched context) or when the last read was
within a specific time period prior to the current read.
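
An illustrative sketch of the caching policy (the field names and the
reuse-window constant are assumptions):

    u64 cpu_cycle_counter(int cpu, u64 wallclock)
    {
            struct cluster_cc *cc = cpu_cc(cpu);

            /* Same sched context (zero delta) or a recent enough
             * read: reuse the cached value, skip the cluster lock. */
            if (wallclock == cc->last_read_ts ||
                wallclock - cc->last_read_ts < CC_REUSE_NS)
                    return cc->cycles;

            cc->cycles = read_cycles_hw(cpu);  /* expensive read */
            cc->last_read_ts = wallclock;
            return cc->cycles;
    }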

Change-Id: I99da5a704d3652f53c8564ba7532783d3288f227
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
2017-05-31 18:16:30 -07:00
Pavankumar Kondeti
57fd979fc9 core_ctl: un-isolate BIG CPUs more aggressively
The current algorithm for bringing in additional BIG CPUs is very
conservative. It works when only BIG tasks run on the BIG cluster.
When the co-location and scheduler boost features are activated,
small/medium tasks also run on the BIG cluster. We don't want these
tasks to downmigrate when BIG CPUs are available but isolated. The
following changes un-isolate CPUs more aggressively.

(1) Round up big_avg. When big_avg indicates that there were on
average 1.5 tasks in the last window, we need 2 BIG CPUs, not 1
(see the sketch below).

(2) Track the maximum number of running tasks in the last window on
all CPUs. If any CPU in a cluster had more than 4 runnable tasks in
the last window, bring in an additional CPU to help out.
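
A sketch of both changes, assuming big_avg is kept in fixed point
with a scale of 100 (so 150 means 1.5 tasks on average):

    /* (1) 1.5 tasks on average in the last window -> 2 BIG CPUs. */
    need_cpus = DIV_ROUND_UP(big_avg, 100);

    /* (2) Any CPU in the cluster saw more than 4 runnable tasks in
     * the last window -> bring in one more CPU. */
    if (max_nr_running > 4)
            need_cpus++;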

Change-Id: Id05d9983af290760cec6d93d1bdc45bc5e924cce
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2017-05-31 08:33:48 +05:30
Pavankumar Kondeti
f37f0680d7 sched: Improve short sleeping tasks detection
When a short sleeping task goes for a long sleep, the task's
avg_sleep_time signal gets boosted. The signal then does not drop
below the short_sleep threshold for a long time, even when the task
runs in short bursts. This results in frequent preemption of other
tasks, as the short-burst tasks are placed on busy CPUs.

The idea behind tracking the avg_sleep_time signal is to detect
whether a task is short sleeping or not. Limit the tracked sleep time
to twice the short sleep threshold to make the avg_sleep_time signal
more responsive. This won't affect regular long sleeping tasks, whose
avg_sleep_time stays above the threshold.
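
A sketch of the clamping, assuming a simple averaging update and the
sysctl name shown (the actual filter may differ):

    /* Cap one long sleep's contribution so avg_sleep_time recovers
     * quickly once the task returns to short-burst behaviour. */
    sleep_ns = min_t(u64, sleep_ns, 2 * sysctl_sched_short_sleep);
    p->ravg.avg_sleep_time =
            (p->ravg.avg_sleep_time + sleep_ns) / 2;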

Change-Id: Ic0838e81ef7f5d83864a58b318553afc42812853
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2017-05-31 08:30:01 +05:30
Syed Rameez Mustafa
5b138bd514 sched: Fix load tracking bug to avoid adding phantom task demand
When update_task_ravg() is called with the TASK_UPDATE event on a task
that is not on the runqueue, task demand accounting incorrectly treats
the time delta as execution time. This can happen when a sleeping
task is moved to/from colocation groups. This phantom execution time can
cause unpredictable changes to demand that in turn can result in
incorrect task placement. Fix the issue by adding special handling of
TASK_UPDATE in task demand accounting. CPU busy time accounting already
has all the necessary checks.
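
A sketch of the special handling (the helper's shape and the on_rq
check are assumptions about the demand accounting path):

    static u64 task_update_delta(struct task_struct *p, u64 delta)
    {
            /* A TASK_UPDATE for a task that is not on a runqueue
             * carries no execution time; counting the delta would
             * add phantom demand. */
            if (!p->on_rq)
                    return 0;
            return delta;
    }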

Change-Id: Ibb42d83ac353bf2e849055fa3cb5c22e7acd56de
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2017-05-19 12:30:51 -07:00
Pavankumar Kondeti
73f527b67c sched: Print aggregation status in sched_get_busy trace event
Aggregation for frequency is not enabled all the time. The aggregated
load is attached to the busiest CPU only when the group load is above
a certain threshold. Print the aggregation status in the
sched_get_busy trace event to make debugging and testing easier.

Change-Id: Icb916f362ea0fa8b5dc7d23cb384168d86159687
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2017-02-27 10:00:51 +05:30
Linux Build Service Account
8da6726d48 Merge "sched: don't assume higher capacity means higher power in tick migration" 2017-02-15 17:01:25 -08:00
Pavankumar Kondeti
ab05391aa6 sched: don't assume higher capacity means higher power in tick migration
When an upmigrate-ineligible task is running on the maximum capacity
CPU, we check in the tick path whether it can be migrated to a lower
capacity CPU. Add a power-cost based check there to prevent migrating
the task away from a power efficient CPU.

Change-Id: I291c62d7dbf169d5123faba5f5246ad44a7a40dd
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2017-02-15 08:48:17 +05:30
Linux Build Service Account
0e39052658 Merge "sched: remove sched_new_task_windows tunable" 2017-02-09 22:09:26 -08:00
Pavankumar Kondeti
9c32e32899 sched: fix bug in auto adjustment of group upmigrate/downmigrate
The sched_group_upmigrate tunable can accept values greater than
100%, so don't limit it to 100% while doing the auto adjustment.

Change-Id: I3d1c1e84f2f4dec688235feb1536b9261a3e808b
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2017-02-08 15:07:45 +05:30
Pavankumar Kondeti
b61c01f52f sched: remove sched_new_task_windows tunable
The sched_new_task_windows tunable is set to 5 in the scheduler and
is never changed from user space. Remove this unused tunable.

Change-Id: I771e12b44876efe75ce87a90e4e9d69c22168b64
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2017-02-08 09:46:57 +05:30
Linux Build Service Account
69ca7214e3 Merge "sched: fix argument type in update_task_burst()" 2017-02-07 02:08:11 -08:00
Linux Build Service Account
a2c6971ce7 Merge "sysctl: define upper limit for sched_freq_reporting_policy" 2017-02-07 02:08:10 -08:00
Linux Build Service Account
fc17b426dd Merge "sched: Remove sched_enable_hmp flag" 2017-02-03 06:26:43 -08:00
Pavankumar Kondeti
00861ed665 sysctl: define upper limit for sched_freq_reporting_policy
Setting the sched_freq_reporting_policy tunable to an unsupported
value results in a warning from the scheduler, and the previous
policy setting is lost.

Define an upper limit so that sched_freq_reporting_policy can no
longer be set to an incorrect value, and remove the now-unreachable
WARN_ON_ONCE from the scheduler.
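
The standard way to bound a sysctl is proc_dointvec_minmax with
extra1/extra2 limits; a sketch (the bound values and the data symbol
are assumptions):

    static int zero;
    static int policy_max = 2;      /* highest supported policy id */

    /* entry in the scheduler's ctl_table */
    {
            .procname     = "sched_freq_reporting_policy",
            .data         = &sysctl_sched_freq_reporting_policy,
            .maxlen       = sizeof(int),
            .mode         = 0644,
            .proc_handler = proc_dointvec_minmax,
            .extra1       = &zero,          /* reject values < 0 */
            .extra2       = &policy_max,    /* reject values > max */
    },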

Change-Id: I58d7e5dfefb7d11d2309bc05a1dd66acdc11b766
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2017-02-03 10:55:52 +05:30
Olav Haugan
475820b5bc sched: Remove sched_enable_hmp flag
Clean up the code and make it more maintainable by removing the
dependency on the sched_enable_hmp flag. The HMP scheduler cannot be
toggled without recompiling; it is enabled through the
CONFIG_SCHED_HMP config option.

Change-Id: I246c1b1889f8dcbc8f0a0805077c0ce5d4f083b0
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2017-02-02 10:23:08 -08:00
Pavankumar Kondeti
f439dd8a41 sched: fix argument type in update_task_burst()
The runtime argument of update_task_burst() should be u64, not int.
Fix this to avoid a potential overflow.
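
The shape of the fix (the function body is illustrative):

    /* runtime is in nanoseconds; a 32-bit int overflows after
     * roughly 2.1 seconds of accumulated runtime. */
    static void update_task_burst(struct task_struct *p, u64 runtime)
    {
            p->ravg.curr_burst += runtime;
    }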

Change-Id: I33757b7b42f142138c1a099bb8be18c2a3bed331
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2017-02-02 11:07:49 +05:30
Pavankumar Kondeti
b559daa261 sched: maintain group busy time counters in runqueue
There is no advantage in tracking busy time counters per related
thread group. We need the busy time across all groups for either a
CPU or a frequency domain, so maintain the group busy time counters
in the runqueue itself. When the CPU window is rolled over, the group
busy counters are rolled over with it. This eliminates the overhead
of maintaining each individual group's window_start.

As related thread groups are now preallocated, this patch saves
40 * nr_cpu_ids * (nr_grp - 1) bytes of memory.

Change-Id: Ieaaccea483b377f54ea1761e6939ee23a78a5e9c
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2017-02-01 09:16:39 +05:30
Syed Rameez Mustafa
196069b1bc sched: Update capacity and load scale factor for all clusters at boot
Cluster capacities should reflect differences in efficiency of
different clusters even in the absence of cpufreq. Currently
capacity is updated only when cpufreq policy notifier is received.
Therefore placement is suboptimal when cpufreq is turned off. Fix
this by updating capacities and load scaling factors during cluster
detection.

Change-Id: I47f63c1e374bbfd247a4302525afb37d55334bad
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2017-01-20 16:58:11 -08:00
Linux Build Service Account
4f0a0766d1 Merge "sched: kill sync_cpu maintenance" 2017-01-19 09:52:20 -08:00
Linux Build Service Account
fbbaeb656a Merge "sched: hmp: Remove the global sysctl_sched_enable_colocation tunable" 2017-01-18 23:48:38 -08:00
Pavankumar Kondeti
6d63f38bf2 sched: kill sync_cpu maintenance
We assume the boot CPU is the sync CPU and initialize its
window_start to sched_ktime_clock(). As windows are synchronized
across all CPUs, the secondary CPUs' window_start values are
initialized from the sync CPU's window_start. A CPU's window_start is
never reset, so this synchronization happens only once for a given
CPU. Given this fact, there is no need to reassign the sync_cpu role
to another CPU when the boot CPU goes offline. Remove this
unnecessary maintenance of sync_cpu and use any online CPU's
window_start as the reference.
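
A sketch of the replacement reference (the helper name is
illustrative):

    /* All windows are synchronized, so any online CPU's
     * window_start serves as the reference. */
    static u64 ref_window_start(void)
    {
            int cpu = cpumask_any(cpu_online_mask);

            return cpu_rq(cpu)->window_start;
    }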

Change-Id: I169a8e80573c6dbcb1edeab0659c07c17102f4c9
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2017-01-19 12:30:19 +05:30
Vikram Mulukutla
e7dd50fa46 sched: hmp: Remove the global sysctl_sched_enable_colocation tunable
Colocation in HMP includes a tunable that turns the feature on or
off globally across all colocation groups. Supporting this tunable
correctly would add complexity that outweighs any foreseeable
benefit. For example, disabling the feature globally would involve
deleting all colocation groups one by one while ensuring no placement
decisions are made during the process.

Remove the tunable. Adding or removing a task from a colocation group
is still possible, so we're not losing functionality.

Change-Id: I4cb8bcdbee98d3bdd168baacbac345eca9ea8879
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
2017-01-18 09:45:44 -08:00
Vikram Mulukutla
2768f0352b sched: hmp: Ensure that best_cluster() never returns NULL
There are certain conditions under which group_will_fit() may return 0 for
all clusters in the system, especially under changing thermal conditions.
This may result in crashes such as this one:

        CPU 0                    |               CPU 1
====================================================================
select_best_cpu()                |
 -> env.rtg = rtgA               |
    rtgA.pref_cluster=C_big      |
                                 |   set_pref_cluster() for rtgA
                                 |     -> best_cluster()
                                 |        C_little doesn't fit
                                 |
                                 |   IRQ: thermal mitigation
                                 |   C_big capacity now less
                                 |   than C_little capacity
                                 |
                                 |     -> best_cluster() continues
                                 |        C_big doesn't fit
                                 |   set_pref_cluster() sets
                                 |   rtgA.pref_cluster = NULL
                                 |
select_least_power_cluster()     |
  -> cluster_first_cpu()         |
     -> BUG()                    |

Adding lock protection around accesses to the group's preferred
cluster would be expensive and would defeat the point of using RCU to
protect access to the related_thread_group structure. Therefore,
ensure that best_cluster() can never return NULL. In the worst case,
we select the wrong cluster for a related_thread_group's demand, but
this is corrected at the next tick or wakeup. Locking would still
have allowed the momentary wrong decision, at additional expense!

Also, don't set the preferred cluster to NULL when colocation is
disabled.
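
A sketch of the never-NULL guarantee (the signatures and the
for_each_sched_cluster iterator are assumptions):

    static struct sched_cluster *
    best_cluster(struct related_thread_group *grp)
    {
            struct sched_cluster *cluster, *fallback = NULL;

            for_each_sched_cluster(cluster) {
                    fallback = cluster;     /* remember a candidate */
                    if (group_will_fit(cluster, grp))
                            return cluster;
            }
            /* Nothing fits (e.g. under thermal mitigation): return
             * the last candidate rather than NULL; the choice is
             * corrected at the next tick or wakeup. */
            return fallback;
    }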

Change-Id: Id3f514b149add9b3ed33d104fa6a9bd57bec27e2
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
2017-01-18 09:45:40 -08:00
Linux Build Service Account
1e5081e1b2 Merge "sched: Initialize variables" 2017-01-16 04:29:07 -08:00
Linux Build Service Account
a1e7739089 Merge "sched: fix a bug in handling top task table rollover" 2017-01-14 03:42:58 -08:00
Olav Haugan
68b55fe985 sched: Initialize variables
Initialize variables at definition to avoid compiler warnings when
compiling with CONFIG_OPTIMIZE_FOR_SIZE=n.

Change-Id: Ibd201877b2274c70ced9d7240d0e527bc77402f3
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2017-01-13 17:06:01 -08:00
Syed Rameez Mustafa
47f7e0415a sched: Convert the global wake_up_idle flag to a per cluster flag
Since clusters can vary significantly in their power and performance
characteristics, there may be a need for different CPU selection
policies depending on which cluster a task is being placed on. For
example, the placement policy can be more aggressive in using idle
CPUs on clusters that are power efficient and less aggressive on
clusters that are geared towards performance. Add support for a per
cluster wake_up_idle flag to allow greater flexibility in placement
policies.

Change-Id: I18cd3d907cd965db03a13f4655870dc10c07acfe
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2017-01-10 11:01:52 -08:00
Pavankumar Kondeti
add97fe0da sched: fix a bug in handling top task table rollover
When frequency aggregation is enabled, the top task table can be
rolled over multiple times in a single window.

For example

- utra() is called with PUT_PREV_TASK for task 'A', which does not
belong to any related thread grp. Let's say a window rollover
happens: the rq counters and the top task table are rolled over.

- utra() is called with PICK_NEXT_TASK/TASK_WAKE for task 'B', which
belongs to a related thread grp, before the grp's
cpu_time->window_start is in sync with rq->window_start. In this
case, the grp's cpu_time counters are rolled over and the top task
table is rolled over again.

Roll over the top task table in the context of the current running
task to fix this.

Change-Id: Iea3075e0ea460a9279a01ba42725890c46edd713
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2017-01-07 13:50:04 +05:30
Pavankumar Kondeti
432662eb4d sched: fix stale predicted load in trace_sched_get_busy()
When an early detection notification is pending, we skip calculating
the predicted load. Initialize it to 0 so that a stale value does not
get printed in trace_sched_get_busy().

Change-Id: I36287c0081f6c12191235104666172b7cae2a583
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2017-01-07 13:49:22 +05:30
Linux Build Service Account
ed4ef900e9 Merge "sched: Delete heavy task heuristics in prediction code" 2017-01-05 02:09:05 -08:00
Linux Build Service Account
dbbac4f76f Merge "sched: Fix new task accounting bug in transfer_busy_time()" 2017-01-05 02:08:58 -08:00
Rohit Gupta
f43931e819 sched: Delete heavy task heuristics in prediction code
Heavy task prediction code needs further tuning to avoid any
negative power impact. Delete the code for now instead of adding
tunables, to avoid inefficiencies in the scheduler path.

Change-Id: I71e3b37a5c99e24bc5be93cc825d7e171e8ff7ce
Signed-off-by: Rohit Gupta <rohgup@codeaurora.org>
2017-01-04 15:55:29 -08:00
Syed Rameez Mustafa
3997e768ac sched: Fix new task accounting bug in transfer_busy_time()
In transfer_busy_time(), the new_task flag is set based on the active
window count prior to the call to update_task_ravg().
update_task_ravg(), however, can then increment the active window
count, and consequently the new_task flag becomes stale. This in turn
leads to inaccurate accounting whereby update_task_ravg() accounts on
the basis that the task is not new, whereas transfer_busy_time() then
continues to do further accounting assuming that the task is new. The
accounting discrepancies are sometimes caught by some of the
scheduler BUGs.

Fix the described problem by moving the is_new_task() check after the
call to update_task_ravg(). Also add two missing BUGs that would
catch the problem sooner rather than later.
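
The essence of the reordering (the argument list is illustrative):

    /* Window counts may change inside update_task_ravg(), so the
     * new-task classification is read only afterwards. */
    update_task_ravg(p, src_rq, TASK_UPDATE, wallclock, 0);
    new_task = is_new_task(p);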

Change-Id: I8dc4822e97cc03ebf2ca1ee2de95eb4e5851f459
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2017-01-03 19:22:23 -08:00
Pavankumar Kondeti
f67dcbea7f sched: Fix deadlock between cpu hotplug and upmigrate change
There is a circular dependency between cpu_hotplug.lock and the HMP
scheduler policy mutex. Prevent it by enforcing the same lock order
everywhere.

Here CPU0 and CPU4 are governed by different cpufreq policies.

----------------                        --------------------
     CPU 0                                     CPU 4
----------------                        --------------------

proc_sys_call_handler()                 cpu_up()

                                        --> acquired cpu_hotplug.lock

sched_hmp_proc_update_handler()         cpufreq_cpu_callback()

--> acquired policy_mutex

                                        cpufreq_governor_interactive()

get_online_cpus()                       sched_set_window()

--> waiting for cpu_hotplug.lock        --> waiting for policy_mutex
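
The enforced order, sketched (sched_update_hmp_tunables() is a
hypothetical stand-in for the policy update done under the mutex):

    get_online_cpus();              /* cpu_hotplug.lock first */
    mutex_lock(&policy_mutex);      /* then the HMP policy mutex */
    sched_update_hmp_tunables();
    mutex_unlock(&policy_mutex);
    put_online_cpus();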

Change-Id: I39efc394f4f00815b72adc975021fdb16fe6e30a
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2016-12-30 12:31:01 +05:30
Linux Build Service Account
6294b5b2d7 Merge "sched: Fix out of bounds array access in sched_reset_all_window_stats()" 2016-12-21 15:48:15 -08:00
Srivatsa Vaddagiri
f3e2e2863a sched: Avoid packing tasks with low sleep time
A low sleep time can be an indication that a waking task will not
receive any vruntime bonus and hence would suffer from latency when
packed. Short-burst tasks sleeping on average more than
sched_short_sleep_ns are not eligible for packing. This policy covers
the case where a task runs in short bursts and sleeps for short
durations in between.

Change-Id: Ib81fa37809b85c267949cd433bc6115dd89f100e
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2016-12-20 14:15:59 +05:30
Srivatsa Vaddagiri
92dc28458c sched: Track average sleep time
Similar to tracking the average burst length of tasks, the average
sleep time indicates how long a task sleeps on average before waking
up to run. Very low sleep and burst lengths indicate tasks that could
be sensitive to task-wake latencies and hence should not be packed.

Change-Id: Ife68a9a9a9e596246aab5029f60e41c5bad781e4
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2016-12-16 16:50:52 +05:30
Srivatsa Vaddagiri
0dee0d1411 sched: Avoid waking idle cpu for short-burst tasks
Introduce the sched_short_burst tunable to classify "short-burst"
tasks. These tasks are eligible for packing to avoid the overhead of
waking up an idle CPU. For them, select_best_cpu() ignores power cost
and selects the CPU with the least wakeup latency that is not loaded
with IRQs and can accommodate the task without exceeding spill
limits. Ties are broken by load, then by previous CPU.

This policy does not affect cluster selection, only CPU selection
within the selected cluster. Tasks eligible for "wake-up-idle" and
"boost" are not considered for packing. The policy applies to both
"fair" and "rt" scheduling class tasks.

Change-Id: I2a05493fde93f58636725f18d0ce8dbce4418a30
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2016-12-16 16:50:52 +05:30
Srivatsa Vaddagiri
f8c7c6ffdf sched: Track burst length for tasks
Track the burst length of tasks as the time they run from wakeup to
sleep. This is used to predict the average time a task may run when
it wakes up, and thus avoid waking up an idle CPU for "short-burst"
tasks.

Change-Id: Ie71d3163630fb8aa0db8ee8383768f8748270cf9
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2016-12-16 16:50:51 +05:30
Linux Build Service Account
b832093be4 Merge "sched: pre-allocate colocation groups" 2016-12-01 16:39:40 -08:00
Joonwoo Park
7437cd7c4b sched: pre-allocate colocation groups
At present, sched_set_group_id() dynamically allocates structure for
colocation group to assign the given task to the group.  However
this can cause deadlock as memory allocator can wakeup a task which
also tries to acquire related_thread_group_lock.

Avoid such deadlock by pre-allocating colocation structures.  This
limits maximum colocation groups to static number but it's fine as it's
never expected to be a lot.
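
A sketch of the pre-allocation (the limit's value is an assumption):

    #define MAX_COLOC_GROUPS 20     /* static upper bound */

    /* Allocated at boot; sched_set_group_id() only claims a free
     * slot, so nothing sleeps or wakes the allocator while
     * related_thread_group_lock is held. */
    static struct related_thread_group
            related_thread_groups[MAX_COLOC_GROUPS];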

Change-Id: Ifc32ab4ead63c382ae390358ed86f7cc5b6eb2dc
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2016-12-01 11:28:01 -08:00