Commit graph

564751 commits

Author SHA1 Message Date
Syed Rameez Mustafa
a6f510aa0a sched: update ld_moved for active balance from the load balancer
ld_moved is currently left set to 0 when the load balancer calls upon
active balance. This behavior is incorrect as it prevents the termination
of load balance for parent sched domains. Currently the feature is used
quite frequently for power active balance and sched boost. This means that
while sched boost is in effect we could run into a scenario where a more
power efficient newly idle big CPU first triggers active migration from a
less power efficient busy big CPU. It then continues to load balance at the
cluster level causing active migration for a task running on a little CPU.
Consequently the more power efficient big CPU ends up with two tasks
whereas the less power efficient big CPU may become idle. Fix this
problem by updating ld_moved when active migration has been requested.

Change-Id: I52e84eafb77249fd9378ebe531abe2d694178537
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-03-23 20:00:35 -07:00
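The gist of the fix can be modeled in plain C (a minimal userspace sketch; `struct lb_env`, its fields, and the helper functions are illustrative stand-ins, not the kernel's definitions): counting a requested active migration as progress lets parent-domain balancing terminate.

```c
#include <stdbool.h>

/* Simplified stand-in for the kernel's load-balance environment. */
struct lb_env {
    int ld_moved;                   /* tasks moved by this balance pass */
    bool active_balance_requested;
};

/* Before the fix: ld_moved stayed 0 when load balance fell back to
 * active balance, so parent sched domains kept balancing and could
 * pile a second task onto the CPU that just pulled one. */
static int balance_result_old(const struct lb_env *env)
{
    return env->ld_moved;
}

/* After the fix: a requested active migration counts as progress, so
 * load balancing terminates for parent domains. */
static int balance_result_new(const struct lb_env *env)
{
    if (env->ld_moved == 0 && env->active_balance_requested)
        return 1;
    return env->ld_moved;
}
```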
Syed Rameez Mustafa
9aecd4c576 sched: actively migrate tasks to idle big CPUs during sched boost
The sched boost feature is currently tick driven, i.e. task placement
decisions only take place at a tick (or wakeup). The load balancer
does not have any knowledge of boost being in effect. Tasks that are
woken up on a little CPU when all big CPUs are busy will continue
executing there at least until the next tick even if one of the big
CPUs becomes idle. Reduce this latency by adding support for detecting
whether boost is in effect or not in the load balancer. If boost is
in effect, any big CPU running idle balance will trigger active
migration from a little CPU with the highest task load.

Change-Id: Ib2828809efa0f9857f5009b29931f63b276a59f3
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-03-23 20:00:34 -07:00
Syed Rameez Mustafa
97b9ad42d9 sched: always do idle balance with a NEWLY_IDLE idle environment
With the introduction of energy aware scheduling, if idle_balance() is
to be called on behalf of a different CPU which is idle, CPU_IDLE is
used in the environment for load_balance(). This, however, introduces
subtle differences in load calculations and policies in the load
balancer. For example there are restrictions on which CPU is permitted
to do load balancing during !CPU_NEWLY_IDLE (see update_sg_lb_stats)
and find_busiest_group() uses different criteria to detect the
presence of a busy group. There are other differences as well. Revert
to using the NEWLY_IDLE environment irrespective of whether
idle_balance() is called for the newly idle CPU or on behalf of an
already idle CPU. This will ensure that task movement logic
while doing idle balance remains unaffected.

Change-Id: I388b0ad9a38ca550667895c8ed19628f3d25ce1a
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-03-23 20:00:33 -07:00
Syed Rameez Mustafa
251081550f sched: fix bail condition in bail_inter_cluster_balance()
Following commit efcad25cbfb (revert "sched: influence cpu_power based
on max_freq and efficiency"), all CPUs in the system have the same
cpu_power and consequently the same group capacity. Therefore, the
check in bail_inter_cluster_balance() can now no longer be used to
distinguish a higher performance cluster from one with lower
performance. The check is currently broken and always returns true for
every load balancing attempt. Fix this by using runqueue capacity
instead which can still be used as a good measure of cluster
capabilities.

Change-Id: Idecfd1ed221d27d4324b20539e5224a92bf8b751
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-03-23 20:00:32 -07:00
Srivatsa Vaddagiri
b7f98009c5 sched: Initialize env->loop variable to 0
The load_balance() function does not explicitly initialize the
env->loop variable to 0. As a result, there is a subtle possibility of
move_tasks() hitting a very long (unnecessary) loop when it is unable
to move tasks from src_cpu. This can lead to unpleasant results like a
watchdog bark. Fix this by explicitly initializing env->loop variable
to 0 (in both load_balance() and active_load_balance_cpu_stop()).

Change-Id: I36b84c91a9753870fa16ef9c9339db7b706527be
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:31 -07:00
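Why the explicit initialization matters can be sketched with a simplified stand-in for the kernel's `struct lb_env` (the fields here are illustrative): a C99 designated initializer zeroes every member that is not named, so `env.loop` reliably starts at 0 instead of inheriting a stale value.

```c
#include <stddef.h>

/* Simplified stand-in for the kernel's struct lb_env. */
struct lb_env {
    int src_cpu;
    int dst_cpu;
    unsigned int loop;      /* iteration count used by move_tasks() */
    unsigned int loop_max;  /* bail-out bound */
};

/* With a designated initializer, every member not named is set to
 * zero, so env.loop starts at 0 and move_tasks() cannot inherit a
 * stale count from an earlier balance attempt. */
static struct lb_env make_env(int src_cpu, int dst_cpu)
{
    struct lb_env env = {
        .src_cpu = src_cpu,
        .dst_cpu = dst_cpu,
        .loop_max = 32,
        /* .loop is implicitly 0 */
    };
    return env;
}
```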
Srivatsa Vaddagiri
84d1fa51ee sched: window-stats: use policy_mutex in sched_set_window()
Several configuration variable changes will result in
reset_all_window_stats() being called. All of them, except
sched_set_window(), are serialized via policy_mutex. Take
policy_mutex in sched_set_window() as well to serialize use of the
reset_all_window_stats() function.

Change-Id: Iada7ff8ac85caa1517e2adcf6394c5b050e3968a
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:30 -07:00
Srivatsa Vaddagiri
da4ffc0b59 sched: window-stats: Avoid taking all cpu's rq->lock for long
reset_all_window_stats() walks task-list with all cpu's rq->lock held,
which can cause spinlock timeouts if task-list is huge (and hence lead
to a spinlock bug report). Avoid this by walking task-list without
cpu's rq->lock held.

Change-Id: Id09afd8b730fa32c76cd3bff5da7c0cd7aeb8dfb
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:29 -07:00
Srivatsa Vaddagiri
29581dc620 sched: window_stats: Add "disable" mode support
"disabled" mode (sched_disable_window_stats = 1) disables all
window-stats related activity. This is useful when changing key
configuration variables associated with window-stats feature (like
policy or window size).

Change-Id: I9e55c9eb7f7e3b1b646079c3aa338db6259a9cfe
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:28 -07:00
Srivatsa Vaddagiri
2a7d718b3d sched: window-stats: Fix exit race
Exiting tasks are removed from the tasklist and hence at some point
will become invisible to the do_each_thread/for_each_thread task
iterators. This breaks the functionality of reset_all_windows_stats(),
which *has* to reset stats for *all* tasks.

This patch causes exiting tasks' stats to be reset *before* they are
removed from the tasklist. The DONT_ACCOUNT bit in an exiting task's
ravg.flags is also marked so that their remaining execution time is
not accounted in cpu busy time counters (rq->curr/prev_runnable_sum).
reset_all_windows_stats() is thus guaranteed to return with all tasks'
stats reset to 0.

Change-Id: I5f101156a4f958c1b3f31eb0db8cd06e621b75e9
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:27 -07:00
Srivatsa Vaddagiri
dfeae566bb sched: window-stats: code cleanup
Provide a wrapper function to reset a task's window statistics. This
will be reused by a subsequent patch.

Change-Id: Ied7d32325854088c91285d8fee55d5a5e8a954b3
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:26 -07:00
Srivatsa Vaddagiri
d8932ae7df sched: window-stats: legacy mode
Support legacy mode, which results in busy time being seen by the
governor that is close to what it would have seen via existing APIs,
i.e. get_cpu_idle_time_us(), get_cpu_iowait_time_us() and
get_cpu_idle_time_jiffy(). In particular, legacy mode means that only
task execution time is counted in rq->curr_runnable_sum and
rq->prev_runnable_sum. Also task migration does not result in
adjustment of those counters.

Change-Id: If374ccc084aa73f77374b6b3ab4cd0a4ca7b8c90
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:26 -07:00
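The contrast between the two modes can be modeled with a toy accounting function (userspace sketch; the structure and function names are illustrative, not the kernel's, and a single wait-time term stands in for all the non-legacy contributions):

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy model of a per-cpu window counter; the name mirrors the
 * commit's rq->curr_runnable_sum but this is not the kernel struct. */
struct cpu_window {
    uint64_t curr_runnable_sum;
};

/* In legacy mode only task execution time is accounted, so governors
 * see busy time close to the idle-time based APIs; in the full mode
 * additional components (modeled here as wait time) are counted too. */
static void account_busy(struct cpu_window *w, uint64_t exec_ns,
                         uint64_t wait_ns, bool legacy)
{
    w->curr_runnable_sum += exec_ns;
    if (!legacy)
        w->curr_runnable_sum += wait_ns;
}
```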
Srivatsa Vaddagiri
32e4c4a368 sched: window-stats: Code cleanup
Collapse duplicated comments about keeping a few of the sysctl knobs
initialized to the same value as their non-sysctl copies.

Change-Id: Idc8261d86b9f36e5f2f2ab845213bae268ae9028
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:25 -07:00
Srivatsa Vaddagiri
e39131c3be sched: window-stats: Code cleanup
Remove code duplication associated with updates of various
window-stats related sysctl tunables.

Change-Id: I64e29ac065172464ba371a03758937999c42a71f
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:24 -07:00
Srivatsa Vaddagiri
90a01bb623 sched: window-stats: Code cleanup
add_task_demand() and the 'long_sleep' calculation in it are not
strictly required. rq_freq_margin() checks for the need to change
frequency, which removes the need for the long_sleep calculation. Once
that is removed, the need for add_task_demand() vanishes.

Change-Id: I936540c06072eb8238fc18754aba88789ee3c9f5
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
[joonwoop@codeaurora.org: fixed minor conflict in core.c]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2016-03-23 20:00:23 -07:00
Srivatsa Vaddagiri
9425ce4309 sched: window-stats: Remove unused prev_window variable
Remove the unused prev_window variable in 'struct ravg'.

Change-Id: I22ec040bae6fa5810f9f8771aa1cb873a2183746
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:22 -07:00
Steve Muckle
6ed9cab723 sched: disable frequency notifications by default
The frequency notifications from the scheduler do not currently respect
synchronous topologies. If demand on CPU 0 is driving frequency high and
CPU 1 is in the same frequency domain, and demand on CPU 1 is low,
frequency notifiers will be continuously sent by CPU 1 in an attempt to
have its frequency lowered.

Until the notifiers are fixed, disable them by default. They can still
be re-enabled at runtime.

Change-Id: Ic8a927af2236d8fe83b4f4a633b20a8ddcfba359
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2016-03-23 20:00:21 -07:00
Steve Muckle
ecae24dd92 sched: fix misalignment between requested and actual windows
When set_window_start() is first executed sched_clock() has not yet
stabilized. Refresh the sched_init_jiffy and sched_clock_at_init_jiffy
values until it is known that sched_clock has stabilized - this will
be the case by the time a client calls the sched_set_window() API.

Change-Id: Icd057707ff44c3b240e5e7e96891b23c95733daa
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2016-03-23 20:00:20 -07:00
Olav Haugan
8eede4a8d5 sched: Make RAVG_HIST_SIZE tunable
Make RAVG_HIST_SIZE available from /proc/sys/kernel/sched_ravg_hist_size
to allow tuning of the size of the history that is used in computation
of task demand.

CRs-fixed: 706138
Change-Id: Id54c1e4b6e974a62d787070a0af1b4e8ce3b4be6
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
[joonwoop@codeaurora.org: fixed minor conflict in sysctl.h]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2016-03-23 20:00:19 -07:00
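The effect of the tunable can be sketched with a toy ring buffer of window samples (illustrative names and a simplified averaging policy; the kernel's actual history handling differs): a larger `hist_size` smooths demand, a smaller one reacts faster.

```c
#include <stdint.h>

#define MAX_HIST 8

/* Toy demand history: the last hist_size window samples, as a model
 * of what sched_ravg_hist_size tunes. */
struct demand_hist {
    uint32_t sum_history[MAX_HIST];
    int hist_size;   /* tunable: how many windows feed task demand */
    int next;        /* ring-buffer write index */
};

static void hist_push(struct demand_hist *h, uint32_t sample)
{
    h->sum_history[h->next] = sample;
    h->next = (h->next + 1) % h->hist_size;
}

/* Average over the configured history (one possible demand policy). */
static uint32_t hist_demand_avg(const struct demand_hist *h)
{
    uint64_t sum = 0;
    for (int i = 0; i < h->hist_size; i++)
        sum += h->sum_history[i];
    return (uint32_t)(sum / h->hist_size);
}
```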
Srivatsa Vaddagiri
778ce1a13c sched: Fix possibility of "stuck" reserved flag
check_for_migration() could mark a thread for migration (in
rq->push_task) and invoke active_load_balance_cpu_stop(). However, that
thread could get migrated to another cpu by the time
active_load_balance_cpu_stop() runs, in which case it could fail to
clear the reserved flag for a cpu and drop the task_struct reference
when the cpu has only one task (the stopper thread running
active_load_balance_cpu_stop()). This would leave a cpu with its
reserved bit stuck, which prevents it from being used effectively.

Fix this by having active_load_balance_cpu_stop() drop reserved bit
always.

Change-Id: I2464a46b4ddb52376a95518bcc95dd9768e891f9
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
[rameezmustafa@codeaurora.org: Port to msm-3.18]
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-03-23 20:00:18 -07:00
Srivatsa Vaddagiri
35e98218fd sched: initialize env->flags variable to 0
env->flags and env->new_dst_cpu fields are not initialized in
load_balance() function. As a result, load_balance() could wrongly see
LBF_SOME_PINNED flag set and access (bogus) new_dst_cpu's runqueue
leading to invalid memory reference. Fix this by initializing
env->flags field to 0. While we are at it, fix a similar issue in
active_load_balance_cpu_stop(), although there is currently no harm
in that function from the uninitialized env->flags variable.

Change-Id: Ied470b0abd65bf2ecfa33fa991ba554a5393f649
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:17 -07:00
Srivatsa Vaddagiri
13b29fc0f7 sched: window-stats: 64-bit type for curr/prev_runnable_sum
Expand rq->curr_runnable_sum and rq->prev_runnable_sum to be 64-bit
counters as otherwise they can easily overflow when a cpu has many
tasks.

Change-Id: I68ab2658ac6a3174ddb395888ecd6bf70ca70473
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:16 -07:00
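A quick userspace illustration of the overflow (the numbers are hypothetical): 100 runnable tasks each contributing 100 ms (10^8 ns) to one window sum to 10^10 ns, which does not fit in 32 bits and silently wraps.

```c
#include <stdint.h>

/* 32-bit accumulation: wraps modulo 2^32 once the sum exceeds
 * ~4.29 * 10^9, giving a bogus busy-time total. */
static uint32_t window_sum_u32(uint64_t per_task_ns, int ntasks)
{
    uint32_t sum = 0;
    for (int i = 0; i < ntasks; i++)
        sum += (uint32_t)per_task_ns;   /* silently wraps */
    return sum;
}

/* 64-bit accumulation: holds the true total. */
static uint64_t window_sum_u64(uint64_t per_task_ns, int ntasks)
{
    uint64_t sum = 0;
    for (int i = 0; i < ntasks; i++)
        sum += per_task_ns;
    return sum;
}
```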
Srivatsa Vaddagiri
4641b37da8 sched: window-stats: Allow acct_wait_time to be tuned
Add a sysctl interface to tune the sched_acct_wait_time variable at runtime.

Change-Id: I38339cdb388a507019e429709a7c28e80b5b3585
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:15 -07:00
Srivatsa Vaddagiri
c097c9b574 sched: window-stats: Account interrupt handling time as busy time
Account cycles spent by idle cpu handling interrupts (irq or softirq)
towards its busy time.

Change-Id: I84cc084ced67502e1cfa7037594f29ed2305b2b1
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
[joonwoop@codeaurora.org: fixed minor conflict in core.c]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2016-03-23 20:00:14 -07:00
Srivatsa Vaddagiri
c20a41478d sched: window-stats: Account idle time as busy time
Provide a knob to consider idle time as busy time when a cpu becomes
idle as a result of an io_schedule() call. This lets the governor
parameter 'io_is_busy' be appropriately honored.

Change-Id: Id9fb4fe448e8e4909696aa8a3be5a165ad7529d3
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:13 -07:00
Srivatsa Vaddagiri
900b44b621 sched: window-stats: Account wait time
Extend window-based task load accounting mechanism to include
wait-time as part of task demand. A subsequent patch will make this
feature configurable at runtime.

Change-Id: I8e79337c30a19921d5c5527a79ac0133b385f8a9
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:12 -07:00
Srivatsa Vaddagiri
b9f8d63c08 sched: window-stats: update task demand on tick
A task can execute on a cpu for a long time without being preempted
or migrated. In such a case, its demand can become outdated for a
long time. Prevent that from happening by updating the demand of the
currently running task during the scheduler tick.

Change-Id: I321917b4590635c0a612560e3a1baf1e6921e792
CRs-Fixed: 698662
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
[joonwoop@codeaurora.org: fixed trivial merge conflict in core.c]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2016-03-23 20:00:11 -07:00
Srivatsa Vaddagiri
8e526b1ab4 sched: Fix herding issue
check_for_migration() could run concurrently on multiple cpus,
resulting in multiple tasks wanting to migrate to the same cpu. This
could cause cpus to be underutilized and lead to increased scheduling
latencies for tasks. Fix this by serializing select_best_cpu() calls
from cpus running the check_for_migration() check and marking selected
cpus as reserved, so that subsequent calls to select_best_cpu() from
check_for_migration() will skip reserved cpus.

Change-Id: I73a22cacab32dee3c14267a98b700f572aa3900c
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
[rameezmustafa@codeaurora.org: Port to msm-3.18]
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-03-23 20:00:10 -07:00
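The reservation protocol can be modeled with a bitmask in userspace C (a sketch; `select_best_cpu`, the masks, and the idle criterion are simplified stand-ins for the kernel's logic): once a cpu is marked reserved, subsequent selections skip it until the migration completes and the bit is cleared.

```c
#include <stdbool.h>

#define NR_CPUS 8

/* Bit i set means cpu i is already the target of a pending active
 * migration; in the kernel this check runs under serialization. */
static unsigned long reserved_mask;

static bool mark_reserved(int cpu)
{
    unsigned long bit = 1UL << cpu;
    if (reserved_mask & bit)
        return false;           /* another cpu already targets it */
    reserved_mask |= bit;
    return true;
}

static void clear_reserved(int cpu)
{
    reserved_mask &= ~(1UL << cpu);
}

/* select_best_cpu() stand-in: pick the first idle, unreserved cpu,
 * reserving it; -1 if none is available. */
static int select_best_cpu(unsigned long idle_mask)
{
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        if ((idle_mask & (1UL << cpu)) && mark_reserved(cpu))
            return cpu;
    return -1;
}
```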
Srivatsa Vaddagiri
c820f1c5f2 sched: window-stats: print window size in /proc/sched_debug
Printing window size in /proc/sched_debug would provide useful
information to debug scheduler issues.

Change-Id: Ia12ab2cb544f41a61c8a1d87bf821b85a19e09fd
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:10 -07:00
Srivatsa Vaddagiri
69fec0486f sched: Extend ftrace event to record boost and reason code
Add a new ftrace event to record changes to boost setting. Also extend
sched_task_load() ftrace event to record boost setting and reason code
passed to select_best_cpu(). This will be useful for debugging.

Change-Id: Idac72f86d954472abe9f88a8db184343b7730287
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:09 -07:00
Srivatsa Vaddagiri
8f8c8db1c5 sched: Avoid needless migration
Restrict check_for_migration() to operate on fair_sched class tasks
only.

Also, check_for_migration() can result in a call to select_best_cpu()
to look for a better cpu for the task currently running on a cpu.
However, select_best_cpu() can end up suggesting a cpu that is not
necessarily better than the cpu on which the task is currently
running. This would result in an unnecessary migration. Prevent that
from happening.

Change-Id: I391cdda0d7285671d5f79aa2da12eaaa6cae42d7
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:08 -07:00
Srivatsa Vaddagiri
35bf2d9d10 sched: Drop active balance request upon cpu going offline
A cpu could mark its currently running task to be migrated to another
cpu (via rq->push_task/rq->push_cpu) and could go offline before
active load balance handles the request. In such a case, clear the
active load balance request.

Change-Id: Ia3e668e34edbeb91d8559c1abb4cbffa25b1830b
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:06 -07:00
Srivatsa Vaddagiri
1f6363e54c sched: trigger immediate migration of tasks upon boost
Currently turning on boost does not immediately trigger migration of
tasks from lower capacity cpus. Tasks could incur migration latency
of up to one timer tick (when check_for_migration() is run).

Fix this by triggering a migration check on cpus with lower capacity
as soon as boost is turned on for the first time.

Change-Id: I244649f9cb6608862d87631325967b887b7f4b7e
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
[rameezmustafa@codeaurora.org: Port to msm-3.18]
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-03-23 20:00:05 -07:00
Srivatsa Vaddagiri
07a758521c sched: Extend boost benefit for small and low-prio tasks
Allow small and low-prio tasks to benefit from boost, which is
expected to last for a short duration. Any task that wishes to run
during that short period is allowed boost benefit.

Change-Id: I02979a0c5feeba0f1256b7ee3d73f6b283fcfafa
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:04 -07:00
Srivatsa Vaddagiri
1ffae4dc94 sched: window-stats: Handle policy change properly
sched_window_stat_policy influences task demand and thus various
statistics maintained per-cpu, like curr_runnable_sum. Changing the
policy non-atomically would lead to improper accounting. For example,
when a task is enqueued on a cpu's runqueue, the demand added to
rq->cumulative_runnable_avg could be based on the AVG policy, and when
it is dequeued the demand removed could be based on MAX, leading to
erroneous accounting.

This change makes the policy change "atomic", i.e. all cpus' rq->locks
are held and all tasks' window-stats are reset before the policy is
changed.

Change-Id: I6a3e4fb7bc299dfc5c367693b5717a1ef518c32d
CRs-Fixed: 687409
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
[joonwoop@codeaurora.org: fixed minor conflict in
 include/linux/sched/sysctl.h]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2016-03-23 20:00:03 -07:00
Srivatsa Vaddagiri
0b210afc21 sched: window-stats: Reset all window stats
Currently, a few of the window statistics for tasks are not reset
when the window size is changed. Fix this by completely resetting all
window statistics for tasks and cpus. Move the reset code to a
function, which can be reused by a subsequent patch that resets the
same statistics upon policy change.

Change-Id: Ic626260245b89007c4d70b9a07ebd577e217f283
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:02 -07:00
Srivatsa Vaddagiri
730e262d6a sched: window-stats: Additional error checking in sched_set_window()
Check for an invalid window size passed as an argument to
sched_set_window(). Also move up the local_irq_disable() call to avoid
the thread being preempted during the calculation of window_start and
its comparison against sched_clock(). Use the right macro to evaluate
whether the window_start argument is ahead in time or not.

Change-Id: Idc0d3ab17ede08471ae63b72a2d55e7f84868fd6
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:01 -07:00
Srivatsa Vaddagiri
f41fd0eca9 sched: window-stats: Fix incorrect calculation of partial_demand
When using MAX_POLICY, partial_demand is calculated incorrectly as 0.
Fix this by picking the maximum of the previous 4 windows and the most
recent sample.

Change-Id: I27850a510746a63b5382c84761920fc021b876c5
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 20:00:00 -07:00
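The corrected MAX-policy calculation can be sketched in userspace C (illustrative names; the kernel's window bookkeeping differs): because the in-progress window's sample participates in the max, the result can no longer collapse to 0.

```c
#include <stdint.h>

/* Demand under a MAX policy: the maximum over the previous full
 * windows *and* the most recent (partial) sample. */
static uint32_t max_policy_demand(const uint32_t hist[], int nhist,
                                  uint32_t recent_sample)
{
    uint32_t best = recent_sample;   /* include the partial window */
    for (int i = 0; i < nhist; i++)
        if (hist[i] > best)
            best = hist[i];
    return best;
}
```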
Srivatsa Vaddagiri
39f974c488 sched: window-stats: Fix potential wrong use of rq
The 'rq' reference to the cpu where a waking task last ran can be
incorrect, leading to wrong accounting. This happens when task_cpu()
changes between points A & B in try_to_wake_up(), listed below:

try_to_wake_up()
{

cpu = src_cpu = task_cpu(p);
rq = cpu_rq(src_cpu);		-> Point A

..

while (p->on_cpu)
	cpu_relax();

smp_rmb();

raw_spin_lock(&rq->lock);	-> Point B

Fix this by initializing the 'rq' variable after the task has slept
(its on_cpu field becomes 0).

Also avoid adding the task's demand to its old cpu's runqueue
(prev_runnable_sum) in case that cpu has gone offline.

Change-Id: I9e5d3beeca01796d944137b5416805b983a6e06e
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-03-23 19:59:59 -07:00
Steve Muckle
e9e5d9a8ca sched: set initial task load to just above a small task
To maximize power savings, set the initial load of newly created
tasks to just above a small task. Setting it below the small
task threshold would cause new tasks to be packed, which is
very likely too aggressive.

Change-Id: Idace26cc0252e31a5472c73534d2f5277a1e3fa4
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2016-03-23 19:59:58 -07:00
Olav Haugan
7516622507 sched/fair: Check whether any CPUs are available
There is a possibility that there are no allowed CPUs online when we try
to select the best cpu for a small task. Add a check to ensure we don't
continue if there are no CPUs available.

CRs-fixed: 692505
Change-Id: Iff955fb0d0b07e758a893539f7bc8ea8aa09d9c4
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-03-23 19:59:57 -07:00
Steve Muckle
b32244628f sched: enable hmp, power aware scheduling for targets with > 4 CPUs
Enabling and disabling hmp/power-aware scheduling is meant to be done
via kernel command line options. Until that is fully supported however,
take advantage of the fact that current targets with more than 4 CPUs
will need these features.

Change-Id: I4916805881d58eeb54747e4b972816ffc96caae7
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-03-23 19:59:56 -07:00
Srivatsa Vaddagiri
f27b626521 sched: remove sysctl control for HMP and power-aware task placement
There is no real need to control HMP and power-aware task placement at
runtime after kernel has booted. Boot-time control should be
sufficient. Not allowing for runtime (sysctl) support simplifies the
code quite a bit.

Also rename sysctl_sched_enable_hmp_task_placement to be shorter.

Change-Id: I60cae51a173c6f73b79cbf90c50ddd41a27604aa
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
[joonwoop@codeaurora.org: fixed minor conflict.  p->nr_cpus_allowed == 1
 has moved to core.c]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2016-03-23 19:59:55 -07:00
Srivatsa Vaddagiri
ad25ca2afb sched: support legacy mode better
It should be possible to bypass all HMP scheduler changes at runtime
by setting sysctl_sched_enable_hmp_task_placement and
sysctl_sched_enable_power_aware to 0.  Fix various code paths to honor
this requirement.

Change-Id: I74254e68582b3f9f1b84661baf7dae14f981c025
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
[joonwoop@codeaurora.org: fixed conflict in rt.c, p->nr_cpus_allowed ==
 1 is now moved in core.c]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2016-03-23 19:59:54 -07:00
Srivatsa Vaddagiri
7c9b849b11 sched: code cleanup
Avoid the long if() block of code in set_task_cpu(). Move that code
to its own function.

Change-Id: Ia80a99867ff9c23a614635e366777759abaccee4
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-03-23 19:59:54 -07:00
Srivatsa Vaddagiri
38e78bb4ad sched: Add BUG_ON when task_cpu() is incorrect
It would be fatal if the task_cpu() information for a task did not
accurately represent the cpu on which it is running. All sorts of
weird issues can arise if that were to happen! Add a BUG_ON() in the
context switch path to detect such cases.

Change-Id: I4eb2c96c850e2247e22f773bbb6eedb8ccafa49c
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 19:59:53 -07:00
Srivatsa Vaddagiri
961624dadc sched: avoid active migration of tasks not in TASK_RUNNING state
Avoid wasting effort in migrating tasks that are about to sleep.

Change-Id: Icf9520b1c8fa48d3e071cb9fa1c5526b3b36ff16
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
[rameezmustafa@codeaurora.org: Port to msm-3.18]
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-03-23 19:59:52 -07:00
Srivatsa Vaddagiri
b310ce69b8 sched: fix up task load during migration
Fix the hack of setting a task's on_rq to 0 during task migration.
The task's load is temporarily added back to its runqueue so that
update_task_ravg() can fix up the task's load when its demand is
changing. The task's load is removed immediately afterwards.

Temporarily setting p->on_rq to 0 introduces a race condition with
try_to_wake_up(). Another task (task A) may be attempting to wake
up the migrating task (task B). As long as task A sees task B's
p->on_rq as 1, the wake up will not continue. Changing p->on_rq to
0, then back to 1, allows task A to continue "waking" task B, at
which point we have both try_to_wake_up and the migration code
attempting to set the cpu of task B at the same time.

CRs-Fixed: 695071
Change-Id: I525745f144da4ffeba1d539890b4d46720ec3ef1
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2016-03-23 19:59:51 -07:00
Prasad Sodagudi
c4529b59bc sched: avoid pushing tasks to an offline CPU
Currently active_load_balance_cpu_stop() is run by the cpu stopper
and pushes running tasks off the busiest CPU onto an idle target CPU.
But there is no check of whether the target cpu is offline before
pushing the tasks. With the introduction of active migration in the
scheduler tick path (see check_for_migration()) there have been
instances of attempts to migrate tasks to offline CPUs.

Add a check of whether the target cpu is online to prevent
scheduling on offline CPUs.

Change-Id: Ib8ac7f8aeabd3ca7365f3eae977075952dab4f21
Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
[rameezmustafa@codeaurora.org: Port to msm-3.18]
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-03-23 19:59:50 -07:00
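The guard can be modeled as follows (userspace sketch; `cpu_online`, the mask, and `try_push_task` are illustrative stand-ins for the kernel's cpu_online() check in active_load_balance_cpu_stop()):

```c
#include <stdbool.h>

/* Bit i set means cpu i is online; a stand-in for the kernel's
 * cpu_online_mask. */
static unsigned long cpu_online_mask;

static bool cpu_online(int cpu)
{
    return cpu_online_mask & (1UL << cpu);
}

/* Returns true if the push proceeds; the offline check makes the
 * stopper drop the request instead of migrating to a dead cpu. */
static bool try_push_task(int target_cpu)
{
    if (!cpu_online(target_cpu))
        return false;   /* target went offline; drop the request */
    /* ... detach task and attach it to the target runqueue ... */
    return true;
}
```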
Syed Rameez Mustafa
3e7b06d9cf sched: Add a per rq max_possible_capacity for use in power calculations
In the absence of a power driver providing real power values, the scheduler
currently defaults to using capacity of a CPU as a measure of power. This,
however, is not a good measure since the capacity of a CPU can change due
to thermal conditions and/or other hardware restrictions. These frequency
restrictions have no effect on the power efficiency of those CPUs.
Introduce max possible capacity of a CPU to track an absolute measure of
capacity which translates into a good absolute measure of power efficiency.
Max possible capacity takes the max possible frequency of CPUs into account
instead of max frequency.

Change-Id: Ia970b853e43a90eb8cc6fd990b5c47fca7e50db8
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-03-23 19:59:49 -07:00
Syed Rameez Mustafa
320e5d6710 sched: Disable interrupts when holding the rq lock in sched_get_busy()
Interrupts can end up waking processes on the same cpu as the one for
which sched_get_busy() is called. Since sched_get_busy() takes the rq
lock this can result in a deadlock as the same rq lock is required to
enqueue the waking up task. Fix the deadlock by disabling interrupts
when taking the rq lock.

Change-Id: I46e14a14789c2fb0ead42363fbaaa0a303a5818f
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-03-23 19:59:48 -07:00