Commit graph

2241 commits

Author SHA1 Message Date
Olav Haugan
04daea81fc sched/hmp: Fix range checking for target load
The range check for target load is incorrect. Fix this. This is only a
sanity check to catch badly specified target loads.

Change-Id: Ia90d020f5e0bdf37c600661a1c246dab5b637b3b
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-10-19 10:35:52 -07:00
Olav Haugan
76ac2a2803 sched/core_ctl: Move header file to global location
Move the header file of core control to the standard linux include
directory to allow other entities to include this file.

Change-Id: I2ddb8b3b96063be3c6a6cb6bc333998e007f9de7
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-10-18 18:10:52 -07:00
Olav Haugan
651e7eb964 core_ctl: Add refcounting to boost api
More than one client may call the core_ctl_set_boost API. Add refcounting
so that boost stays in effect until every client has released its request.
Also add a new trace event that is emitted when this API is called.
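
A minimal sketch of what a refcounted boost interface can look like; the
locking, the core_ctl_set_boost() signature and the apply_boost_policy()
helper are assumptions for illustration, not the patch's actual code:

  #include <linux/spinlock.h>
  #include <linux/types.h>

  void apply_boost_policy(void);            /* hypothetical helper */

  static unsigned int boost_refcount;
  static DEFINE_SPINLOCK(boost_lock);

  int core_ctl_set_boost(bool boost)
  {
          bool changed = false;

          spin_lock(&boost_lock);
          if (boost) {
                  /* First requester turns boost on. */
                  if (++boost_refcount == 1)
                          changed = true;
          } else if (boost_refcount) {
                  /* Last requester turns boost off. */
                  if (--boost_refcount == 0)
                          changed = true;
          }
          spin_unlock(&boost_lock);

          if (changed)
                  apply_boost_policy();
          return 0;
  }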

Change-Id: Iad0a9fc45f1ce87433995e8e549bfca80e8b9cb2
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-10-18 14:48:14 -07:00
Guenter Roeck
7bb5218b77 cgroup: Remove leftover instances of allow_attach
Fix:

kernel/sched/tune.c:718:2: error:
	unknown field ‘allow_attach’ specified in initializer
kernel/cpuset.c:2087:2: error:
	unknown field 'allow_attach' specified in initializer

Change-Id: Ie524350ffc6158f3182d90095cca502e58b6f197
Fixes: e78f134a78 ("CHROMIUM: remove Android's cgroup generic permissions checks")
Signed-off-by: Guenter Roeck <groeck@chromium.org>
2016-10-18 12:35:03 -07:00
Dmitry Torokhov
e78f134a78 CHROMIUM: remove Android's cgroup generic permissions checks
The implementation is utterly broken, resulting in all processes being
allowed to move tasks between sets (as long as they have access to the
"tasks" attribute), and upstream is heading towards checking only
capabilities anyway, so let's get rid of this code.

BUG=b:31790445,chromium:647994
TEST=Boot android container, examine logcat

Change-Id: I2f780a5992c34e52a8f2d0b3557fc9d490da2779
Signed-off-by: Dmitry Torokhov <dtor@chromium.org>
Reviewed-on: https://chromium-review.googlesource.com/394967
Reviewed-by: Ricky Zhou <rickyz@chromium.org>
Reviewed-by: John Stultz <john.stultz@linaro.org>
2016-10-17 22:55:19 -07:00
Olav Haugan
34418fc16e sched/fair: Fix issue with trace flag not being set properly
During scheduler boost the sched_task_load ftrace event might not log
the correct flag value. Ensure that the flag is always initialized with
the selected cluster information.

Change-Id: Ia986d0fbc512c8e9ed1b5fb5b2ac4bc564cc4ba9
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-10-17 17:05:54 -07:00
Syed Rameez Mustafa
066f712e43 sched: Add multiple load reporting policies for cpu frequency
The previous patches in this series introduce the mechanics of CPU
load tracking without fixups for intra-cluster migration, and of top task
load tracking. Add a tunable that dictates which of the above is
considered when reporting load to the governor. The default policy
is to take the maximum of the CPU load and the top task load.
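
For illustration, a sketch of such a policy switch; the enum and function
names are assumptions, not the actual tunable added by this patch:

  #include <linux/kernel.h>

  enum freq_reporting_policy {
          FREQ_REPORT_MAX_CPULOAD_TOPTASK,  /* default */
          FREQ_REPORT_CPULOAD,
          FREQ_REPORT_TOPTASK,
  };

  static u64 freq_policy_load(u64 cpu_load, u64 top_task_load,
                              enum freq_reporting_policy policy)
  {
          switch (policy) {
          case FREQ_REPORT_CPULOAD:
                  return cpu_load;
          case FREQ_REPORT_TOPTASK:
                  return top_task_load;
          case FREQ_REPORT_MAX_CPULOAD_TOPTASK:
          default:
                  /* Default: max of CPU load and top task load. */
                  return max(cpu_load, top_task_load);
          }
  }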

Change-Id: Ie585a11ed774b929910d04c41471db3a2a102ec5
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-10-17 12:45:54 -07:00
Syed Rameez Mustafa
dc09dd60a0 sched: Optimize the next top task search logic upon task migration
find_next_top_index() is responsible for finding the second top task
on a CPU when the top task migrates away from that CPU. This operation
is expensive as we need to iterate the entire array of top tasks to
find the second top task.

Optimize this by introducing bitmaps for tracking top task indices.
There are two bitmaps; one for the previous window and one for the
current window. Each bit in a bitmap tracks whether the corresponding
bucket in the top task hashmap has a non-zero refcount. The bit is set
when the refcount becomes non-zero and is cleared when it becomes zero.

Finding the second top task upon migration is then simply a matter of
finding the highest set bit in the bitmap.
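
A minimal sketch of that lookup; the bitmap size and helper name are
assumptions:

  #include <linux/bitops.h>

  #define NUM_LOAD_INDICES 1000

  /* Index of the highest non-empty bucket, i.e. the new top task's load. */
  static unsigned long get_top_index(const unsigned long *bitmap)
  {
          unsigned long index = find_last_bit(bitmap, NUM_LOAD_INDICES);

          if (index == NUM_LOAD_INDICES)  /* no bit set */
                  return 0;
          return index;
  }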

Change-Id: Ibafaf66eed756b0328704dfaa89c17ab0d84e359
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-10-17 12:45:51 -07:00
Syed Rameez Mustafa
7bd09f2441 sched: Add the mechanics of top task tracking for frequency guidance
The previous patches in this rewrite of scheduler-guided frequency
selection reintroduce the part-picture problem that we addressed in
our initial implementation: when tasks migrate across CPUs within a
cluster, we end up losing the complete picture of the sequential nature
of the workload.

This patch aims to solve that problem slightly differently. We track
the top task on every CPU within a window. Top task is defined as the
task that runs the most in a given window. This enhances our ability
to detect the sequential nature of workloads. A single migrating task
executing for an entire window will cause 100% load to be reported
for frequency guidance instead of the maximum footprint left on any
individual CPU in the task's trail. There are cases that this new
approach does not address, namely cases where the sum of two or more
tasks accurately reflects the true sequential nature of the workload.
Future optimizations might aim to tackle that problem.

To track top tasks, we first realize that there is no strict need to
maintain the task struct itself as long as we know the load exerted by
the top task. We also realize that to maintain top tasks on every CPU
we have to track the execution of every single task that runs during
the window. The load associated with a task needs to be migrated when
the task migrates from one CPU to another. When the top task migrates
away, we need to locate the second top task and so on.

Given the above realizations, we use hashmaps to track top task load
both for the current and the previous window. This hashmap is
implemented as an array of fixed size. The key of the hashmap is given
by task_execution_time_in_a_window / array_size. The size of the array
(number of buckets in the hashmap) dictates the load granularity of each
bucket. The value stored in each bucket is a refcount of all the tasks
that executed long enough to be in that bucket.

This approach has a few benefits. Firstly, any top task stats update
now takes O(1) time. While task migration is also O(1), it does still
involve going through up to the size of the array to find the second
top task. Further patches will aim to optimize this behavior. Secondly,
and more importantly, not having to store the task struct itself saves
a lot of memory usage in that 1) there is no need to retrieve task
structs later causing cache misses and 2) we don't have to unnecessarily
hold up task memory for up to 2 full windows by calling get_task_struct()
after a task exits.
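
A simplified sketch of the bucketed accounting; the bucket count, field
and helper names, and the exact index formula are assumptions rather than
the patch's actual code:

  #include <linux/math64.h>
  #include <linux/kernel.h>

  #define NUM_LOAD_INDICES 1000

  struct top_task_loads {
          u8 curr_table[NUM_LOAD_INDICES];  /* refcounts, current window */
          u8 prev_table[NUM_LOAD_INDICES];  /* refcounts, previous window */
  };

  static inline int load_to_index(u64 load, u64 window_size)
  {
          u64 index = div64_u64(load * NUM_LOAD_INDICES, window_size);

          return min_t(u64, index, NUM_LOAD_INDICES - 1);
  }

  /* O(1) update when a task's runtime in the current window changes. */
  static void update_top_task(struct top_task_loads *t, u64 old_load,
                              u64 new_load, u64 window_size)
  {
          if (old_load)
                  t->curr_table[load_to_index(old_load, window_size)]--;
          t->curr_table[load_to_index(new_load, window_size)]++;
  }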

Change-Id: I004dba474f41590db7d3f40d9deafe86e71359ac
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-10-17 12:43:55 -07:00
Syed Rameez Mustafa
7e1a4f15b2 sched: Enhance the scheduler migration load fixup feature
In the current frequency guidance implementation the scheduler migrates
task load from the source CPU to the destination CPU when a task migrates.
The underlying assumption is that a task will stay on the destination CPU
following the migration. Hence a CPU's load should reflect the sum of
all tasks that last ran on that CPU prior to window expiration even if
these tasks executed on some other CPU in that window prior to being
migrated.

However, given the ubiquitous nature of migrations, the above assumption
is flawed, causing the scheduler to often add up on a single CPU load
that in reality ran concurrently on multiple CPUs and will continue to
run concurrently in subsequent windows. This leads to load over-reporting
on a single CPU, which in turn causes the CPU frequency to be higher
than necessary.

This is the first patch in a series that attempts to change how load
fixups are done upon migration to prevent load over-reporting.
In this patch, we stop doing migration fixups for intra-cluster
migrations. Inter-cluster migration fixups are still retained.

In order to achieve the above, we make use of the per-CPU footprint of
each task introduced in the previous patch. Upon inter-cluster migration, we
go through every CPU in the source cluster to subtract the migrating
task's contribution to the busy time on each one of those CPUs. The sum
of the contributions is then added to the destination CPU allowing it
to ramp up to the appropriate frequency for that task.

Subtracting load from each of the source CPUs is not trivial, however,
as it would require all runqueue locks to be held. To get around this
we introduce a deferred load subtraction mechanism whereby subtracting
load from each of the source CPUs is deferred until an opportune moment.
This opportune moment is when the governor comes asking the scheduler
for load. At that time, all necessary runqueue locks are already held.

There are a few cases to consider when doing deferred subtraction. Since
we are not holding all runqueue locks, other CPUs in the source cluster
can be in a different window than the source CPU the task is migrating
from.

Case 1: Other CPU in the source cluster is in the same window
No special consideration

Case 2: Other CPU in the source cluster is ahead by 1 window
In this case, we will be doing redundant updates to the subtraction load
for the prev window. There is no way to avoid this redundant update,
though, without holding the rq lock.

Case 3: Other CPU in the source cluster is trailing by 1 window
In this case, we might end up overwriting old data for that CPU. But
this is not a problem as when the other CPU calls update_task_ravg()
it will move to the same window. This relies on maintaining
synchronized windows between CPUs, which is true today.

Finally, we must deal with frequency aggregation. When frequency
aggregation is in effect, there is little point in dealing with per-CPU
footprints since the load of all related tasks has to be reported on a
single CPU. Therefore, when a task enters a related group we clear out
all per-CPU contributions and add their sum to the task CPU's cpu_time
struct. From that point onwards we stop managing per-CPU contributions
upon inter-cluster migrations since that work is redundant. Finally, when
a task exits a related group we must walk every CPU and reset all per-CPU
contributions. We then set the task CPU contribution to the respective
curr/prev sum values and add that sum to the task CPU's rq runnable sum.
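
A minimal sketch of the deferred-subtraction bookkeeping; the structure
layout and helper names are illustrative assumptions:

  #include <linux/types.h>

  struct load_subtractions {
          u64 window_start;  /* window the pending subtraction belongs to */
          u64 subs;          /* busy time to subtract when load is reported */
  };

  /* Called at migration time, without the remote CPU's rq lock. */
  static void defer_subtraction(struct load_subtractions *ls,
                                u64 window_start, u64 delta)
  {
          if (ls->window_start != window_start) {
                  /* Entry belongs to an older window: start over. */
                  ls->window_start = window_start;
                  ls->subs = 0;
          }
          ls->subs += delta;
  }

  /* Applied later, with the rq lock held, when the governor reads load. */
  static u64 apply_subtraction(struct load_subtractions *ls, u64 busy_time)
  {
          u64 fixed = busy_time > ls->subs ? busy_time - ls->subs : 0;

          ls->subs = 0;
          return fixed;
  }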

Change-Id: I1f8d596e6c930f3f6f00e24109ddbe8b121f8d6b
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-10-17 12:43:54 -07:00
Syed Rameez Mustafa
eb7300e9a8 sched: Add per CPU load tracking for each task
Keeping track of the load footprint of each task on every CPU
that it executed on gives the scheduler much more flexibility in
terms of the number of frequency guidance policies. These new fields
will be used in subsequent patches as we alter the load fixup
mechanism upon task migration. We still need to maintain the
curr/prev_window sums as they will also be required in subsequent
patches as we start to track top tasks based on cumulative load.

Also, we need to call init_new_task_load() for the idle task. Not doing
so was an existing but harmless bug, as load tracking for the idle task is
irrelevant. However, in this patch we are adding pointers to the
ravg structure, and these pointers have to be initialized even for the
idle task.

Finally, move init_new_task_load() to sched_fork(). This was always
the more appropriate place; moreover, following the introduction of
new pointers in the ravg struct, it is necessary to avoid races
with functions such as reset_all_task_stats().
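
Roughly, the per-CPU footprint amounts to fields like the following in the
ravg structure (field names and types here are assumptions):

  #include <linux/types.h>

  struct ravg {
          u64 mark_start;
          u64 curr_window;       /* total demand in the current window */
          u64 prev_window;       /* total demand in the previous window */
          u32 *curr_window_cpu;  /* per-CPU share of curr_window */
          u32 *prev_window_cpu;  /* per-CPU share of prev_window */
  };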

Change-Id: Ib584372eb539706da4319973314e54dae04e5934
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-10-17 12:43:53 -07:00
Linux Build Service Account
4de07155f9 Merge "sched/cgroup: Fix/cleanup cgroup teardown/init" 2016-10-13 19:11:24 -07:00
Linux Build Service Account
a6138ccde2 Merge "sched: bucketize CPU c-state levels" 2016-10-13 12:29:04 -07:00
Linux Build Service Account
daa54e82ab Merge "sched: use wakeup latency as c-state determinant" 2016-10-13 12:29:04 -07:00
Joonwoo Park
825b7ef93a sched: bucketize CPU c-state levels
The c-state aware scheduler takes note of the wakeup latency of each
c-state level to determine whether to pack tasks or wake up a CPU in LPM.
But it doesn't distinguish small deltas from large ones, as it's
inefficient for the scheduler to do so on its critical path.

Disregard wakeup latency differences of less than 64 us between c-state
levels. This reduces unnecessary task packing.
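
A sketch of the bucketing implied by the 64 us figure; the shift-based
comparison is one possible way to do it, not necessarily the patch's:

  #define WAKEUP_LATENCY_SHIFT 6  /* 2^6 = 64 us buckets */

  /* Latencies that fall into the same 64 us bucket are treated as equal. */
  static inline bool lower_wakeup_latency(unsigned int lat_a_us,
                                          unsigned int lat_b_us)
  {
          return (lat_a_us >> WAKEUP_LATENCY_SHIFT) <
                 (lat_b_us >> WAKEUP_LATENCY_SHIFT);
  }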

CRs-fixed: 1074879
Change-Id: Ib0cadbd390d1a0b6da3e39c98010cedb43e5bf60
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2016-10-12 14:14:16 -07:00
Joonwoo Park
15d2c97d2a sched: use wakeup latency as c-state determinant
At present, the c-state aware scheduler uses the raw c-state index as
its determinant and avoids task placement on CPUs in deeper c-states at
the cost of latency. However, there are CPUs offering comparable wake-up
latency at different c-state levels, and the wake-up latency of each
c-state level is already being fed to the scheduler.

Hence, use the wakeup_latency as the c-state determinant instead of the
raw c-state index to avoid unnecessary task packing where possible.

CRs-fixed: 1074879
Change-Id: If927f84f6c8ba719716d99669e5d1f1b19aaacbe
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2016-10-12 14:14:06 -07:00
Syed Rameez Mustafa
2a5b04bf9b sched/tune: Remove redundant checks for NULL css
The check for NULL css is redundant as upper layers are already
making sure that css cannot be NULL. Remove this check. It helps
to silence static analysis errors as well.

Change-Id: I64585ff8cceb307904e20ff788e52eb05c000e1f
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-10-12 13:28:03 -07:00
John Stultz
fc1d6c8c6a sched: Add Kconfig option DEFAULT_USE_ENERGY_AWARE to set ENERGY_AWARE feature flag
The ENERGY_AWARE sched feature flag cannot be set unless
CONFIG_SCHED_DEBUG is enabled.

So this patch allows the flag to default to true at build time
if the config is set.
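
A sketch of how such a default is usually wired up in
kernel/sched/features.h; treat the exact lines as illustrative rather
than the patch's diff:

  #ifdef CONFIG_DEFAULT_USE_ENERGY_AWARE
  SCHED_FEAT(ENERGY_AWARE, true)
  #else
  SCHED_FEAT(ENERGY_AWARE, false)
  #endif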

Change-Id: I8835a571fdb7a8f8ee6a54af1e11a69f3b5ce8e6
Signed-off-by: John Stultz <john.stultz@linaro.org>
2016-10-12 17:34:22 +05:30
Caesar Wang
ca4b83c547 sched/fair: remove printk while schedule is in progress
Calling printk while schedule is in progress can cause a deadlock and an
endless while(1) loop. The blocked state looks like this:

cpu0(hold the console sem):
printk->console_unlock->up_sem->spin_lock(&sem->lock)->wake_up_process(cpu1)
->try_to_wake_up(cpu1)->while(p->on_cpu).

cpu1(request console sem):
console_lock->down_sem->schedule->idle_balance->update_cpu_capacity->
printk->console_trylock->spin_lock(&sem->lock).

p->on_cpu will stay 1 forever because the task is still running on cpu1,
so cpu0 is blocked in while(p->on_cpu). Meanwhile cpu1 cannot get
spin_lock(&sem->lock), so it is blocked too, which means the task will
keep running on cpu1 forever.

Signed-off-by: Caesar Wang <wxt@rock-chips.com>
2016-10-12 17:34:22 +05:30
Chris Redpath
27f0430c61 sched/walt: Drop arch-specific timer access
On at least one platform, the timer providing the wallclock could
occasionally be reset or go backwards for some time after wakeup.

Accept that this might happen and warn the first time, but otherwise just
carry on.

Change-Id: Id3164477ba79049561af7f0889cbeebc199ead4e
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2016-10-12 17:34:22 +05:30
Srinath Sridharan
09c5f91afe eas/sched/fair: Fixing comments in find_best_target.
Change-Id: I83f5b9887e98f9fdb81318cde45408e7ebfc4b13
Signed-off-by: Srinath Sridharan <srinathsr@google.com>
2016-10-12 17:34:22 +05:30
Alex Shi
16d185eee4 Merge branch 'linux-linaro-lsk-v4.4' into linux-linaro-lsk-v4.4-android
Conflicts:
	kernel/cpuset.c
2016-10-11 23:33:37 +02:00
Syed Rameez Mustafa
2640728359 sched: Add cgroup attach functionality to the tune controller
This is required to allow tasks to freely move between cgroups associated
with the tune controller.

Change-Id: I1f39b957462034586edc2fdc0a35488b314e9c8c
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-10-10 11:10:42 -07:00
Syed Rameez Mustafa
14b52227eb sched: Update the number of tune groups to 5
The schedtune controller will mimic the cpusets controller configuration
for now. For that we need to make 4 groups in addition to the root
group present by default.

Change-Id: I082f1e4e4ebf863e623cf66ee127eac70a3e2716
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-10-10 11:10:41 -07:00
Patrick Bellasi
e89595cd93 sched/tune: add initial support for CGroups based boosting
To support task performance boosting, the usage of a single knob has the
advantage of being a simple solution, both from the implementation and the
usability standpoint. However, on a real system it can be difficult to
identify a single value for the knob which fits the needs of multiple
different tasks. For example, some kernel threads and/or user-space
background services should be better managed the "standard" way while we
still want to be able to boost the performance of specific workloads.

In order to improve the flexibility of the task boosting mechanism, this
patch is the first of a small series which extends the previous
implementation to introduce "per task group" support.
This first patch introduces just the basic CGroups support: a new
"schedtune" CGroups controller is added which allows configuring a
different boost value for different groups of tasks.
To keep the implementation simple but still effective for a boosting
strategy, the new controller:
  1. allows only a two layer hierarchy
  2. supports only a limited number of boost groups

A two-layer hierarchy allows each task to be placed either:
  a) in the root control group
     thus being subject to a system-wide boosting value
  b) in a child of the root group
     thus being subject to the specific boost value defined by that
     "boost group"

The limited number of "boost groups" supported is mainly motivated by
the observation that in a real system it could be useful to have only a
few classes of tasks which deserve different treatment.
For example, background vs foreground or interactive vs low-priority.
As an additional benefit, a limited number of boost groups also allows a
simpler implementation, especially for the code required to compute the
boost value for CPUs which have runnable tasks belonging to different
boost groups.
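
For orientation, a much simplified sketch of what a per-cgroup boost group
can look like; the field names and the fixed-size array are assumptions:

  #include <linux/cgroup.h>

  #define BOOSTGROUPS_COUNT 5  /* root group plus a small, fixed set of groups */

  struct schedtune {
          struct cgroup_subsys_state css;  /* embedded cgroup state */
          int idx;                         /* slot in the boost group array */
          int boost;                       /* boost value for this group's tasks */
  };

  static struct schedtune *allocated_group[BOOSTGROUPS_COUNT];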

Change-Id: I1304e33a8440bfdad9c8bcf8129ff390216f2e32
cc: Tejun Heo <tj@kernel.org>
cc: Li Zefan <lizefan@huawei.com>
cc: Johannes Weiner <hannes@cmpxchg.org>
cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Git-commit: 13001f47c9
Git-repo: https://android.googlesource.com/kernel/common
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
2016-10-10 11:09:53 -07:00
Linux Build Service Account
9a87b7449d Merge "sched/tune: add sysctl interface to define a boost value" 2016-10-06 19:45:49 -07:00
Linux Build Service Account
6312438224 Merge "sched: Fix a division by zero bug in scale_exec_time()" 2016-10-06 01:07:16 -07:00
Linux Build Service Account
9d4ed2cb20 Merge "sched: Fix integer overflow in sched_update_nr_prod()" 2016-10-05 19:29:24 -07:00
Linux Build Service Account
fb89803f09 Merge "sched: Add a device tree property to specify the sched boost type" 2016-10-05 19:29:18 -07:00
Patrick Bellasi
754a122792 sched/tune: add sysctl interface to define a boost value
The current (CFS) scheduler implementation does not allow "boosting"
task performance by running tasks at a higher OPP than the minimum
required to meet their workload demands.

To support task performance boosting, the scheduler should provide a
"knob" which allows tuning how much the system is going to be optimised
for energy efficiency vs performance.

This patch is the first of a series which provides a simple interface to
define a tuning knob. One system-wide "boost" tunable is exposed via:
  /proc/sys/kernel/sched_cfs_boost
which can be configured in the range [0..100], to define a percentage
where:
  - 0%   boost requires operating in "standard" mode by scheduling
         tasks at the minimum capacities required by the workload demand
  - 100% boost requires pushing task performance to the maximum,
         "regardless" of the incurred energy consumption

A boost value in between these two boundaries is used to bias the
power/performance trade-off: the higher the boost value, the more the
scheduler is biased toward performance boosting instead of energy
efficiency.
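
A sketch of the sysctl plumbing for such a knob, using the 0..100 bounds
from the description; the table layout here is illustrative:

  #include <linux/sysctl.h>

  int sysctl_sched_cfs_boost;
  static int boost_min;        /* 0 */
  static int boost_max = 100;

  static struct ctl_table sched_cfs_boost_table[] = {
          {
                  .procname     = "sched_cfs_boost",
                  .data         = &sysctl_sched_cfs_boost,
                  .maxlen       = sizeof(int),
                  .mode         = 0644,
                  .proc_handler = proc_dointvec_minmax,
                  .extra1       = &boost_min,
                  .extra2       = &boost_max,
          },
          { }
  };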

Change-Id: I59a41725e2d8f9238a61dfb0c909071b53560fc0
cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Git-commit: 63c8fad2b06805ef88f1220551289f0a3c3529f1
Git-repo: https://source.codeaurora.org/quic/la/kernel/msm-4.4
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
2016-10-05 17:24:23 -07:00
Syed Rameez Mustafa
475125d9f9 sched: Initialize HMP stats inside init_sd_lb_stats()
This ensures that the load balancer always works correctly even
without compiler optimizations.

Change-Id: I36408ae65833b624401e60edfb50c19cc061d7bf
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-10-05 17:24:22 -07:00
Peter Zijlstra
678e5640b9 sched/cgroup: Fix/cleanup cgroup teardown/init
The CPU controller hasn't kept up with the various changes in the whole
cgroup initialization / destruction sequence, and commit:

  2e91fa7f6d ("cgroup: keep zombies associated with their original cgroups")

caused it to explode.

The reason for this is that zombies do not inhibit css_offline() from
being called, but do stall css_released(). Now we tear down the cfs_rq
structures on css_offline() but zombies can run after that, leading to
use-after-free issues.

The solution is to move the tear-down to css_released(), which
guarantees nobody (including no zombies) is still using our cgroup.

Furthermore, a few simple cleanups are possible too. There doesn't
appear to be any point to us using css_online() (anymore?), so fold that
into css_alloc().

And since cgroup code guarantees an RCU grace period between
css_released() and css_free() we can forgo using call_rcu() and free the
stuff immediately.
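
Simplified, the resulting ordering looks roughly like this (a sketch based
on the description above, not the exact diff):

  static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
  {
          struct task_group *tg = css_tg(css);

          /* Nobody, not even zombies, can still be using the group here. */
          sched_offline_group(tg);
  }

  static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
  {
          struct task_group *tg = css_tg(css);

          /* An RCU grace period has already passed since css_released(). */
          sched_destroy_group(tg);
  }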

Change-Id: I51af3d4f0e5dd1c9df6375cce4bb933f67f1022e
Suggested-by: Tejun Heo <tj@kernel.org>
Reported-by: Kazuki Yamaguchi <k@rhe.jp>
Reported-by: Niklas Cassel <niklas.cassel@axis.com>
Tested-by: Niklas Cassel <niklas.cassel@axis.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 2e91fa7f6d ("cgroup: keep zombies associated with their original cgroups")
Link: http://lkml.kernel.org/r/20160316152245.GY6344@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Git-commit: 2f5177f0fd7e531b26d54633be62d1d4cb94621c
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
2016-10-05 14:10:10 -07:00
Peter Zijlstra
2eab0c7176 sched/cgroup: Fix cgroup entity load tracking tear-down
When a cgroup's CPU runqueue is destroyed, it should remove its
remaining load accounting from its parent cgroup.

The current site for doing so is unsuited because it's far too late and
unordered against other cgroup removal (->css_free() will be, but we're also
in an RCU callback).

Put it in the ->css_offline() callback, which is the start of cgroup
destruction, right after the group has been made unavailable to
userspace. The ->css_offline() callbacks are called in hierarchical order
after the following v4.4 commit:

  aa226ff4a1ce ("cgroup: make sure a parent css isn't offlined before its children")

Change-Id: Ice7cbd71d9e545da84d61686aa46c7213607bb9d
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20160121212416.GL6357@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Git-commit: 6fe1f348b3dd1f700f9630562b7d38afd6949568
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
2016-10-05 14:09:53 -07:00
Pavankumar Kondeti
f20772adf3 sched: Fix integer overflow in sched_update_nr_prod()
"int" type is used to hold the time difference between the successive
updates to nr_run in sched_update_nr_prod(). This can result in an
overflow if the function is called more than ~2.15 sec after its previous
call. The most probable scenarios are when a CPU is idle and
hotplugged. But as we update the last_time of all possible CPUs in
sched_get_nr_running_avg() periodically from a deferrable timer context
(the core_ctl module), this overflow is observed only when the system is
completely idle for a long time. When this overflow happens, we hit
a BUG_ON() in sched_get_nr_running_avg().

Use "u64" type instead of "int" for holding the time difference and
add additional BUG_ON() to catch the instances where sched_clock()
returns a backward value.
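
Schematically, the fix amounts to something like the following (variable
and helper names are assumptions):

  static u64 nr_prod_time_delta(int cpu, u64 *last_time)
  {
          u64 curr_time = sched_clock();

          /* Catch a backward-going sched_clock() instead of underflowing. */
          BUG_ON(curr_time < last_time[cpu]);

          /* u64, not int: an int holding nanoseconds overflows after ~2.15 s. */
          return curr_time - last_time[cpu];
  }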

Change-Id: I284abb5889ceb8cf9cc689c79ed69422a0e74986
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2016-10-04 08:47:48 +05:30
Linux Build Service Account
f02700dfc6 Merge "sched: Fix CPU selection when all online CPUs are isolated" 2016-10-03 10:35:02 -07:00
Linux Build Service Account
ae0165688c Merge "sched: Add a stub function for init_clusters()" 2016-10-03 10:34:59 -07:00
Linux Build Service Account
a6e4924acb Merge "sched: add a knob to prefer the waker CPU for sync wakeups" 2016-10-03 10:34:58 -07:00
Pavankumar Kondeti
a86b380f35 sched: Add a device tree property to specify the sched boost type
The HMP scheduler has two types of task placement boost policies.

(1) The boost-on-big policy makes use of all big CPUs up to their full
capacity before using the little CPUs. This improves performance on true
b.L systems where the big CPUs have higher efficiency than the little CPUs.

(2) The boost-on-all policy places tasks on the CPU having the highest
spare capacity. This policy is optimal for SMP-like systems.

The scheduler sets the boost policy to boost-on-big on systems which have
CPUs of different efficiencies. However, it is possible for CPUs of the
same micro-architecture to have slight differences in efficiency due to
other factors like cache size. Selecting the boost-on-big policy based
on the relative difference in efficiency is not optimal on such systems.
The boost-policy device tree property is introduced to specify the
required boost type and it overrides the default selection of boost
type in the scheduler. The possible values for this property are
"boost-on-big" and "boost-on-all".

Change-Id: Iac19183fa7d4bfd9e5746b02a02b2b19cf64b78d
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2016-10-02 10:54:45 +05:30
Pavankumar Kondeti
c7e3dde08c sched: Add a stub function for init_clusters()
Add a stub function for init_clusters() and remove the SCHED_HMP
ifdeffery in sched_init().
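
The stub amounts to something like this (a sketch; the exact signature is
an assumption):

  #ifdef CONFIG_SCHED_HMP
  extern void init_clusters(void);
  #else
  static inline void init_clusters(void) { }
  #endif

sched_init() can then call init_clusters() unconditionally.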

Change-Id: I6745485152d735436d8398818f7fb5e70ce5ee65
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2016-10-02 10:52:15 +05:30
Pavankumar Kondeti
f1e9995fe4 sched: add a knob to prefer the waker CPU for sync wakeups
The current policy prefers to select an idle CPU in the waker cluster
over the waker CPU when the latter is running only 1 task. By selecting
an idle CPU, it eliminates the chance of the waker migrating to a
different CPU after the wakee preempts it. This policy is also not
susceptible to incorrect "sync" usage, i.e. the waker does not go to
sleep after waking up the wakee.

However, the LPM exit latency associated with an idle CPU outweighs the
above benefits on some targets. So add a knob to prefer the waker CPU,
when it has only 1 runnable task, over idle CPUs in the waker cluster.
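
Schematically, the new preference reads like this; the sysctl name and the
surrounding wakeup-path details are assumptions:

  /* In the sync-wakeup path, before searching the waker cluster for an idle CPU. */
  static int prefer_waker_cpu(int waker_cpu, bool sync)
  {
          if (sysctl_sched_prefer_sync_wakee_to_waker && sync &&
              cpu_rq(waker_cpu)->nr_running == 1)
                  return waker_cpu;  /* run the wakee where the waker runs */
          return -1;                 /* fall back to the normal CPU search */
  }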

Change-Id: Id974748c07625c1b19112235f426a5d204dfdb33
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2016-10-02 10:52:06 +05:30
Pavankumar Kondeti
7c3461a6ac sched: Fix a division by zero bug in scale_exec_time()
When cycle_counter is used to estimate the frequency, calling
update_task_ravg() twice on the same task without refreshing
the wallclock results in a division by zero bug. Add a safety
check in update_task_ravg() to prevent this.

The above bug is hit from __schedule() when next == prev. There
is no need to call update_task_ravg() twice for PUT_PREV_TASK
and PICK_NEXT_TASK events for the same task. Calling
update_task_ravg() with TASK_UPDATE event is sufficient.
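
The safety check boils down to an early return along these lines (field
names follow the existing ravg code but are assumptions here):

  /* At the top of update_task_ravg(): */
  if (!rq->window_start || p->ravg.mark_start == wallclock)
          return;  /* no wallclock delta, so nothing to account or scale */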

Change-Id: Ib3af9004f2462618c535b8195377bedb584d0261
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2016-10-01 10:05:08 +05:30
Linux Build Service Account
818156d2ac Merge "sched: don't assume higher capacity means higher power in lb" 2016-09-30 18:23:48 -07:00
Linux Build Service Account
4b2199d820 Merge "sched: panic on corrupted stack end" 2016-09-30 18:23:31 -07:00
Syed Rameez Mustafa
84589336cd sched: Fix CPU selection when all online CPUs are isolated
After the introduction of "33c24b sched: add cpu isolation support"
select_fallback_rq() might sometimes be unable to find any CPU to place
a task on. This happens when all online CPUs are isolated and
the allow-isolated flag is set to false. In such cases, we have
little choice but to use an isolated CPU and wait for core control
to eventually un-isolate one or more online CPUs.

Change-Id: Id8738bd8493c11731c5491efcc99eb90f051233e
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-09-30 17:25:51 -07:00
Linux Build Service Account
f6d68e27bf Merge "sched: constrain HMP scheduler tunable range with in better way" 2016-09-29 11:20:30 -07:00
Pavankumar Kondeti
9d128dbca3 sched: don't assume higher capacity means higher power in lb
The load balancer restrictions are in place to control task migration
from the lower-capacity cluster to the higher-capacity cluster in order
to save power. The assumption here is that the higher-capacity cluster
has a higher power cost, which may not necessarily be true for all
platforms. Use power-cost-based checks instead of capacity-based checks
while applying the inter-cluster migration restrictions.

Change-Id: Id9519eb8f7b183a2e9fca87a23cf95e951aa4005
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2016-09-29 10:07:19 +05:30
Dmitry Shmidt
734bcf32c2 This is the 4.4.22 stable release
-----BEGIN PGP SIGNATURE-----
 
 iQIcBAABCAAGBQJX5jR2AAoJEDjbvchgkmk+nXwQAML5WFM1xDL8frXh3vIS3RzD
 fP2YHP0Bm+xE/G9jDnlcoqJmxg4DKPUCP4T/rCZmeNRWc/RaIBX+VTyfVhN969uo
 v5f8jN6fc4TO9WMD+G++Vx3MZqupJbSAXlY2ZSUTF389lM/jHvaWj+DfA1qGLmGJ
 UbfO1jNszadZGIb8yOo/qmR+E3sSV/nT+/y7Sa2rSqkKt5+YI+z1Q1ezLo7BZ+uO
 6p968djKTXSOO7SHciddoegJ8lF2hhgY4cW95CEV+Dqu2O6AVyFyMz+ngYivEueZ
 ZwwQCaYIl+68ssAoI61VmtQHEvuaikTx5g9vjAApScWWijZU+V/M65BLAL6GAMWH
 kWOmilbtZKhyirecAxgnRIkJR8Tp0YcgUYAivsqkYqVPelcPsHvOFRfr4D6HrcBt
 wLrjaoBj+1vAjskozKJEymDNGQJ2Me/nBAWgN44MQYLRGg4kdBxNS/CGyeh8O8wO
 gEeVqa+zDOQCSeg2LJdiql3TdMQfQ+kpCsfjcrrl1oRkRX7OX130+gLuI8Tt1Fno
 6niq6w+QeAY445RSyM45vLeJ6vXB7oFadtuD4QvsB5YFr0X0P0KF3GKlHl0xiyEV
 JFpWJiXYsnOvM8entT23aeCSTlDT1p6os3jLh8p7CBn9TvP3uW2nfgG/FKvy0wGD
 7L7FKYb4Mw+YSrROfxBT
 =5+OA
 -----END PGP SIGNATURE-----

Merge tag 'v4.4.22' into android-4.4.y

This is the 4.4.22 stable release

Change-Id: Id49e3c87d2cacb2fa85d85a17226f718f4a5ac28
2016-09-26 10:37:43 -07:00
Olav Haugan
e42aae958b sched/core_ctl: Integrate core control with cpu isolation
Replace hotplug functionality in core control with cpu isolation
and integrate into scheduler.

Change-Id: I4f1514ba5bac2e259a1105fcafb31d6a92ddd249
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-24 11:00:06 -07:00
Olav Haugan
e7c8de6756 sched/core_ctl: Refactor cpu data
Refactor cpu data into cpu data and cluster data to improve readability and
ease of understanding the code.

Change-Id: I96505aeb9d07a6fa3a2c28648ffa299e0cfa2e41
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-24 11:00:05 -07:00
Olav Haugan
5d1e98d51d trace: Move core control trace events to scheduler
Move the core control trace events to scheduler trace event file.

Change-Id: I65943d8e4a9eac1f9f5a40ad5aaf166679215f48
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-24 11:00:04 -07:00