Commit graph

22119 commits

Author SHA1 Message Date
Linux Build Service Account
daa54e82ab Merge "sched: use wakeup latency as c-state determinant" 2016-10-13 12:29:04 -07:00
Joonwoo Park
15d2c97d2a sched: use wakeup latency as c-state determinant
At present, the c-state aware scheduler uses the raw c-state index number as
its determinant and avoids task placement on CPUs in deeper c-states at
the cost of latency.  However, there are CPUs offering comparable wake-up
latency at different c-state levels, and the wake-up latency at each
c-state level is already being fed to the scheduler.

Hence use wakeup_latency as the c-state determinant instead of the raw
c-state index, to avoid unnecessary task packing where possible.
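
A minimal sketch of the idea in plain C; the helper names and the per-CPU
latency accessor are assumptions, not the actual scheduler code:

#include <linux/kernel.h>
#include <linux/cpumask.h>

/* Illustrative only: pick the candidate CPU with the lowest reported
 * wakeup latency instead of the shallowest raw c-state index, so deeper
 * c-states with comparable wakeup cost are not penalized. */
static int best_wakeup_cpu(const struct cpumask *candidates,
                           unsigned int (*cpu_wakeup_latency)(int cpu))
{
        int cpu, best_cpu = -1;
        unsigned int best_latency = UINT_MAX;

        for_each_cpu(cpu, candidates) {
                unsigned int lat = cpu_wakeup_latency(cpu);

                if (lat < best_latency) {
                        best_latency = lat;
                        best_cpu = cpu;
                }
        }
        return best_cpu;
}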

CRs-fixed: 1074879
Change-Id: If927f84f6c8ba719716d99669e5d1f1b19aaacbe
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2016-10-12 14:14:06 -07:00
Syed Rameez Mustafa
2a5b04bf9b sched/tune: Remove redundant checks for NULL css
The check for a NULL css is redundant, as upper layers already ensure
that css cannot be NULL. Remove this check; it also helps silence
static analysis errors.

Change-Id: I64585ff8cceb307904e20ff788e52eb05c000e1f
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-10-12 13:28:03 -07:00
Syed Rameez Mustafa
2640728359 sched: Add cgroup attach functionality to the tune controller
This is required to allow tasks to freely move between cgroups associated
with the tune controller.

Change-Id: I1f39b957462034586edc2fdc0a35488b314e9c8c
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-10-10 11:10:42 -07:00
Syed Rameez Mustafa
14b52227eb sched: Update the number of tune groups to 5
The schedtune controller will mimic the cpusets controller configuration
for now. For that we need to create 4 groups in addition to the root
group that is present by default.

Change-Id: I082f1e4e4ebf863e623cf66ee127eac70a3e2716
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-10-10 11:10:41 -07:00
Patrick Bellasi
e89595cd93 sched/tune: add initial support for CGroups based boosting
To support task performance boosting, using a single knob has the
advantage of being a simple solution, both from an implementation and a
usability standpoint.  However, on a real system it can be difficult to
identify a single knob value which fits the needs of multiple different
tasks. For example, some kernel threads and/or user-space background
services are better managed the "standard" way, while we still want to
be able to boost the performance of specific workloads.

In order to improve the flexibility of the task boosting mechanism, this
patch is the first of a small series which extends the previous
implementation to introduce "per task group" support.
This first patch introduces just the basic CGroups support: a new
"schedtune" CGroups controller is added which allows configuring a
different boost value for different groups of tasks.
To keep the implementation simple but still effective for a boosting
strategy, the new controller:
  1. allows only a two layer hierarchy
  2. supports only a limited number of boost groups

A two-layer hierarchy allows placing each task either:
  a) in the root control group
     thus being subject to a system-wide boosting value
  b) in a child of the root group
     thus being subject to the specific boost value defined by that
     "boost group"

The limited number of "boost groups" supported is mainly motivated by
the observation that in a real system it could be useful to have only
few classes of tasks which deserve different treatment.
For example, background vs foreground or interactive vs low-priority.
As an additional benefit, a limited number of boost groups allows also
to have a simpler implementation especially for the code required to
compute the boost value for CPUs which have runnable tasks belonging to
different boost groups.
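
A simplified sketch of the per-CPU aggregation alluded to above; the data
layout, the group count, and the names are assumptions, not the actual
schedtune code:

/* Hypothetical per-CPU view: for each boost group, track how many of its
 * tasks are runnable on this CPU and take the maximum boost among the
 * groups that currently have tasks here. */
#define BOOSTGROUPS_COUNT 5	/* placeholder limit for the sketch */

struct boost_group_cpu {
        int boost;	/* boost value configured for this group */
        int tasks;	/* runnable tasks of this group on this CPU */
};

struct schedtune_cpu {
        struct boost_group_cpu group[BOOSTGROUPS_COUNT];
};

static int cpu_boost_value(const struct schedtune_cpu *stc)
{
        int idx, boost_max = 0;

        for (idx = 0; idx < BOOSTGROUPS_COUNT; idx++) {
                if (stc->group[idx].tasks > 0 &&
                    stc->group[idx].boost > boost_max)
                        boost_max = stc->group[idx].boost;
        }
        return boost_max;
}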

Change-Id: I1304e33a8440bfdad9c8bcf8129ff390216f2e32
cc: Tejun Heo <tj@kernel.org>
cc: Li Zefan <lizefan@huawei.com>
cc: Johannes Weiner <hannes@cmpxchg.org>
cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Git-commit: 13001f47c9
Git-repo: https://android.googlesource.com/kernel/common
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
2016-10-10 11:09:53 -07:00
Prasad Sodagudi
4e38f843b6 genirq: Avoid race between cpu hot plug and irq_desc() allocation paths
One core might have just allocated an irq_desc while another core is
doing IRQ migration in the hotplug path. In the hotplug path, during
IRQ migration, the for_each_active_irq macro walks the IRQs whose bits
are set in the allocated_irqs bitmap, but the return value of
irq_to_desc() is never checked for desc validity.

[   24.566381] msm_thermal:do_core_control Set Offline: CPU4 Temp: 73
[   24.568821] Unable to handle kernel NULL pointer dereference at virtual address 000000a4
[   24.568931] pgd = ffffffc002184000
[   24.568995] [000000a4] *pgd=0000000178df5003, *pud=0000000178df5003, *pmd=0000000178df6003, *pte=0060000017a00707
[   24.569153] ------------[ cut here ]------------
[   24.569228] Kernel BUG at ffffffc0000f3060 [verbose debug info unavailable]
[   24.569334] Internal error: Oops - BUG: 96000005 [#1] PREEMPT SMP
[   24.569422] Modules linked in:
[   24.569486] CPU: 4 PID: 28 Comm: migration/4 Tainted: G        W       4.4.8-perf-9407222-eng #1
[   24.569684] task: ffffffc0f28f0e80 ti: ffffffc0f2a84000 task.ti: ffffffc0f2a84000
[   24.569785] PC is at do_raw_spin_lock+0x20/0x160
[   24.569859] LR is at _raw_spin_lock+0x34/0x40
[   24.569931] pc : [<ffffffc0000f3060>] lr : [<ffffffc001023dec>] pstate: 200001c5
[   24.570029] sp : ffffffc0f2a87bc0
[   24.570091] x29: ffffffc0f2a87bc0 x28: ffffffc001033988
[   24.570174] x27: ffffffc001adebb0 x26: 0000000000000000
[   24.570257] x25: 00000000000000a0 x24: 0000000000000020
[   24.570339] x23: ffffffc001033978 x22: ffffffc001adeb80
[   24.570421] x21: 000000000000027e x20: 0000000000000000
[   24.570502] x19: 00000000000000a0 x18: 000000000000000d
[   24.570584] x17: 0000000000005f00 x16: 0000000000000003
[   24.570666] x15: 000000000000bd39 x14: 0ffffffffffffffe
[   24.570748] x13: 0000000000000000 x12: 0000000000000018
[   24.570829] x11: 0101010101010101 x10: 7f7f7f7f7f7f7f7f
[   24.570911] x9 : fefefefeff332e6d x8 : 7f7f7f7f7f7f7f7f
[   24.570993] x7 : ffffffc00344f6a0 x6 : 0000000000000000
[   24.571075] x5 : 0000000000000001 x4 : ffffffc00344f488
[   24.571157] x3 : 0000000000000000 x2 : 0000000000000000
[   24.571238] x1 : ffffffc0f2a84000 x0 : 0000000000004ead
...
...
...
[   24.581324] Call trace:
[   24.581379] [<ffffffc0000f3060>] do_raw_spin_lock+0x20/0x160
[   24.581463] [<ffffffc001023dec>] _raw_spin_lock+0x34/0x40
[   24.581546] [<ffffffc000103f10>] irq_migrate_all_off_this_cpu+0x84/0x1c4
[   24.581641] [<ffffffc00008ec84>] __cpu_disable+0x54/0x74
[   24.581722] [<ffffffc0000a3368>] take_cpu_down+0x1c/0x58
[   24.581803] [<ffffffc00013ac08>] multi_cpu_stop+0xb0/0x10c
[   24.581885] [<ffffffc00013ad60>] cpu_stopper_thread+0xb8/0x184
[   24.581972] [<ffffffc0000c4920>] smpboot_thread_fn+0x26c/0x290
[   24.582057] [<ffffffc0000c0f84>] kthread+0x100/0x108
[   24.582135] Code: aa0003f3 aa1e03e0 d503201f 5289d5a0 (b9400661)
[   24.582224] ---[ end trace 609f38584306f5d9 ]---
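
A simplified sketch of the guard the fix implies in the migration loop;
for_each_active_irq() is the internal kernel/irq iterator named above,
and the function shape here is illustrative, not the exact patch:

#include <linux/irq.h>
#include <linux/irqdesc.h>

static void migrate_irqs_off_this_cpu_sketch(void)
{
        unsigned int irq;

        for_each_active_irq(irq) {
                struct irq_desc *desc = irq_to_desc(irq);

                /* The bit in allocated_irqs may be set before the
                 * descriptor is fully published; skip instead of
                 * dereferencing a NULL desc. */
                if (!desc)
                        continue;

                raw_spin_lock(&desc->lock);
                /* ... migrate the interrupt away from this CPU ... */
                raw_spin_unlock(&desc->lock);
        }
}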

CRs-Fixed: 1074611
Change-Id: I6cc5399e511b6d62ec7fbc4cac21f4f41023520e
Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
Signed-off-by: Trilok Soni <tsoni@codeaurora.org>
Signed-off-by: Runmin Wang <runminw@codeaurora.org>
2016-10-07 13:07:10 -07:00
Linux Build Service Account
9a87b7449d Merge "sched/tune: add sysctl interface to define a boost value" 2016-10-06 19:45:49 -07:00
Linux Build Service Account
6312438224 Merge "sched: Fix a division by zero bug in scale_exec_time()" 2016-10-06 01:07:16 -07:00
Linux Build Service Account
9d4ed2cb20 Merge "sched: Fix integer overflow in sched_update_nr_prod()" 2016-10-05 19:29:24 -07:00
Linux Build Service Account
fb89803f09 Merge "sched: Add a device tree property to specify the sched boost type" 2016-10-05 19:29:18 -07:00
Linux Build Service Account
3ff37b4bac Merge "RFC: FROMLIST: cgroup: reduce read locked section of cgroup_threadgroup_rwsem during fork" 2016-10-05 19:29:13 -07:00
Patrick Bellasi
754a122792 sched/tune: add sysctl interface to define a boost value
The current (CFS) scheduler implementation does not allow "boosting"
task performance by running tasks at a higher OPP than the minimum
required to meet their workload demands.

To support task performance boosting, the scheduler should provide a
"knob" which allows tuning how much the system is optimised for energy
efficiency versus performance.

This patch is the first of a series which provides a simple interface to
define a tuning knob. One system-wide "boost" tunable is exposed via:
  /proc/sys/kernel/sched_cfs_boost
which can be configured in the range [0..100], defining a percentage
where:
  - 0%   boost means operating in "standard" mode, scheduling tasks at
         the minimum capacities required by the workload demand
  - 100% boost means pushing task performance to the maximum,
         "regardless" of the incurred energy consumption

A boost value in between these two boundaries is used to bias the
power/performance trade-off; the higher the boost value, the more the
scheduler is biased toward performance boosting instead of energy
efficiency.
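
A sketch of how such a knob is typically wired up as a sysctl with a
clamped [0..100] range; the table and variable names are illustrative,
only the /proc path above comes from the patch description:

#include <linux/sysctl.h>

static int sysctl_sched_cfs_boost;	/* 0..100 percent */
static int boost_min;			/* 0 */
static int boost_max = 100;

static struct ctl_table sched_boost_table[] = {
        {
                .procname	= "sched_cfs_boost",
                .data		= &sysctl_sched_cfs_boost,
                .maxlen		= sizeof(int),
                .mode		= 0644,
                .proc_handler	= proc_dointvec_minmax,
                .extra1		= &boost_min,	/* reject values below 0 */
                .extra2		= &boost_max,	/* reject values above 100 */
        },
        { }
};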

Change-Id: I59a41725e2d8f9238a61dfb0c909071b53560fc0
cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Git-commit: 63c8fad2b06805ef88f1220551289f0a3c3529f1
Git-repo: https://source.codeaurora.org/quic/la/kernel/msm-4.4
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
2016-10-05 17:24:23 -07:00
Syed Rameez Mustafa
475125d9f9 sched: Initialize HMP stats inside init_sd_lb_stats()
This ensures that the load balancer always works correctly even
without compiler optimizations.

Change-Id: I36408ae65833b624401e60edfb50c19cc061d7bf
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-10-05 17:24:22 -07:00
Pavankumar Kondeti
f20772adf3 sched: Fix integer overflow in sched_update_nr_prod()
"int" type is used to hold the time difference between the successive
updates to nr_run in sched_update_nr_prod(). This can result in
overflow, if the function is called ~2.15 sec after it was called
before. The most probable scenarios are when CPU is idle and
hotplugged. But as we update the last_time of all possible CPUs in
sched_get_nr_running_avg() periodically from a deferrable timer context
(core_ctl module), this overflow is observed only when the system is
completely idle for long time. When this overflow happens we hit
a BUG_ON() in sched_get_nr_running_avg().

Use "u64" type instead of "int" for holding the time difference and
add additional BUG_ON() to catch the instances where sched_clock()
returns a backward value.
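
A minimal sketch of the overflow and the fix; the real function tracks
per-CPU state and more statistics, so the names here are illustrative:

#include <linux/sched.h>
#include <linux/bug.h>

static u64 last_time;	/* per-CPU in the real code */

static void update_nr_run_sketch(void)
{
        u64 curr_time = sched_clock();
        u64 diff;

        /* A 32-bit "int" difference of two nanosecond timestamps wraps
         * after ~2.15 s; a u64 does not. */
        BUG_ON(curr_time < last_time);	/* sched_clock() went backwards */
        diff = curr_time - last_time;
        last_time = curr_time;

        /* ... use diff to weight the running average of nr_running ... */
}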

Change-Id: I284abb5889ceb8cf9cc689c79ed69422a0e74986
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2016-10-04 08:47:48 +05:30
Linux Build Service Account
f02700dfc6 Merge "sched: Fix CPU selection when all online CPUs are isolated" 2016-10-03 10:35:02 -07:00
Linux Build Service Account
ae0165688c Merge "sched: Add a stub function for init_clusters()" 2016-10-03 10:34:59 -07:00
Linux Build Service Account
a6e4924acb Merge "sched: add a knob to prefer the waker CPU for sync wakeups" 2016-10-03 10:34:58 -07:00
Linux Build Service Account
08d58a723b Merge "hrtimer: Ensure timer is not running before migrating" 2016-10-03 10:34:56 -07:00
Pavankumar Kondeti
a86b380f35 sched: Add a device tree property to specify the sched boost type
The HMP scheduler has two types of task placement boost policies.

(1) The boost-on-big policy makes use of all big CPUs up to their full
capacity before using the little CPUs. This improves performance on true
b.L systems where the big CPUs have higher efficiency than the little CPUs.

(2) The boost-on-all policy places tasks on the CPU having the highest
spare capacity. This policy is optimal for SMP-like systems.

The scheduler sets the boost policy to boost-on-big on systems which
have CPUs of different efficiencies. However, it is possible for CPUs of
the same micro-architecture to have slight differences in efficiency due
to other factors like cache size. Selecting the boost-on-big policy based
on relative differences in efficiency is not optimal on such systems.
The boost-policy device tree property is introduced to specify the
required boost type; it overrides the default selection of the boost
type in the scheduler. The possible values for this property are
"boost-on-big" and "boost-on-all".

Change-Id: Iac19183fa7d4bfd9e5746b02a02b2b19cf64b78d
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2016-10-02 10:54:45 +05:30
Pavankumar Kondeti
c7e3dde08c sched: Add a stub function for init_clusters()
Add a stub function for init_clusters() and remove an #ifdef for
SCHED_HMP in sched_init().

Change-Id: I6745485152d735436d8398818f7fb5e70ce5ee65
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2016-10-02 10:52:15 +05:30
Pavankumar Kondeti
f1e9995fe4 sched: add a knob to prefer the waker CPU for sync wakeups
The current policy prefers selecting an idle CPU in the waker's cluster
over the waker's own CPU running only 1 task. By selecting an idle CPU,
it eliminates the chance of the waker migrating to a different CPU after
the wakee preempts it. This policy is also not susceptible to incorrect
"sync" usage, i.e. the waker does not go to sleep after waking up the
wakee.

However, the LPM exit latency associated with an idle CPU outweighs the
above benefits on some targets. So add a knob to prefer a waker CPU
having only 1 runnable task over idle CPUs in the waker's cluster.
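
A simplified sketch of the decision this knob biases; the helper and its
parameters are assumptions, and the real wakeup path weighs more factors:

#include <linux/types.h>

/* Prefer the waker's CPU for a sync wakeup when the knob is enabled and
 * the waker is the only runnable task there; otherwise fall back to the
 * usual idle-CPU search in the waker's cluster. */
static bool prefer_waker_cpu(bool sync_wakeup, bool knob_enabled,
                             unsigned int waker_cpu_nr_running)
{
        return sync_wakeup && knob_enabled && waker_cpu_nr_running == 1;
}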

Change-Id: Id974748c07625c1b19112235f426a5d204dfdb33
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2016-10-02 10:52:06 +05:30
Pavankumar Kondeti
7c3461a6ac sched: Fix a division by zero bug in scale_exec_time()
When cycle_counter is used to estimate the frequency, calling
update_task_ravg() twice on the same task without refreshing
the wallclock results in a division by zero bug. Add a safety
check in update_task_ravg() to prevent this.

The above bug is hit from __schedule() when next == prev. There
is no need to call update_task_ravg() twice for the PUT_PREV_TASK
and PICK_NEXT_TASK events for the same task. Calling
update_task_ravg() with the TASK_UPDATE event is sufficient.
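
A sketch of the kind of safety check described above; simplified, since
the real update_task_ravg() takes the task, runqueue and event as
arguments:

#include <linux/types.h>

/* If the wallclock has not advanced since the task's last update, skip
 * accounting so the frequency estimate never divides by a zero delta. */
static void update_task_ravg_sketch(u64 wallclock, u64 *mark_start)
{
        if (wallclock == *mark_start)
                return;

        /* ... scale the executed time by the estimated frequency ... */

        *mark_start = wallclock;
}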

Change-Id: Ib3af9004f2462618c535b8195377bedb584d0261
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2016-10-01 10:05:08 +05:30
Linux Build Service Account
818156d2ac Merge "sched: don't assume higher capacity means higher power in lb" 2016-09-30 18:23:48 -07:00
Linux Build Service Account
4b2199d820 Merge "sched: panic on corrupted stack end" 2016-09-30 18:23:31 -07:00
Syed Rameez Mustafa
84589336cd sched: Fix CPU selection when all online CPUs are isolated
After the introduction of "33c24b sched: add cpu isolation support"
select_fallback_rq() might sometimes be unable find any CPU to place
a task on. This happens when the all online CPUs are isolated and
the allow isolated flag is set to false. In such cases, we have
little choice but to use an isolated CPU and wait for core control
to eventually un-isolate one or more online CPUs.
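
A hedged sketch of the fallback order implied above; cpu_isolated() is
assumed to be the test helper added by the isolation patches in this
series, and the function shape is illustrative:

#include <linux/cpumask.h>

static int pick_fallback_cpu(const struct cpumask *allowed)
{
        int cpu;

        /* Prefer an allowed, online, non-isolated CPU. */
        for_each_cpu_and(cpu, allowed, cpu_online_mask) {
                if (!cpu_isolated(cpu))
                        return cpu;
        }

        /* All online allowed CPUs are isolated: use one anyway and rely
         * on core control to un-isolate a CPU later. */
        cpu = cpumask_any_and(allowed, cpu_online_mask);
        return cpu < nr_cpu_ids ? cpu : -1;
}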

Change-Id: Id8738bd8493c11731c5491efcc99eb90f051233e
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-09-30 17:25:51 -07:00
Olav Haugan
62abf225df hrtimer: Ensure timer is not running before migrating
A timer might be running when we are trying to move it to another CPU,
so ensure that we wait for the timer to finish before migrating.

Change-Id: I4c9ee39c715baebfbdb8a50476a475e38b092f70
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-30 17:04:11 -07:00
Linux Build Service Account
f6d68e27bf Merge "sched: constrain HMP scheduler tunable range with in better way" 2016-09-29 11:20:30 -07:00
Linux Build Service Account
95ce9d98db Merge "sched/core_ctl: Integrate core control with cpu isolation" 2016-09-29 11:20:17 -07:00
Linux Build Service Account
461424e5ec Merge "sched/core_ctl: Refactor cpu data" 2016-09-29 11:20:17 -07:00
Linux Build Service Account
bbf8724641 Merge "core_ctrl: Move core control into kernel" 2016-09-29 11:20:17 -07:00
Pavankumar Kondeti
9d128dbca3 sched: don't assume higher capacity means higher power in lb
The load balancer restrictions are in place to control task migration
from the lower-capacity cluster to the higher-capacity cluster in order
to save power. The assumption here is that the higher-capacity cluster
will have a higher power cost, which is not necessarily true for all
platforms. Use power-cost-based checks instead of capacity-based checks
when applying the inter-cluster migration restrictions.

Change-Id: Id9519eb8f7b183a2e9fca87a23cf95e951aa4005
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
2016-09-29 10:07:19 +05:30
Olav Haugan
e42aae958b sched/core_ctl: Integrate core control with cpu isolation
Replace the hotplug functionality in core control with cpu isolation
and integrate it into the scheduler.

Change-Id: I4f1514ba5bac2e259a1105fcafb31d6a92ddd249
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-24 11:00:06 -07:00
Olav Haugan
e7c8de6756 sched/core_ctl: Refactor cpu data
Refactor cpu data into cpu data and cluster data to improve readability and
ease of understanding the code.

Change-Id: I96505aeb9d07a6fa3a2c28648ffa299e0cfa2e41
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-24 11:00:05 -07:00
Olav Haugan
5d1e98d51d trace: Move core control trace events to scheduler
Move the core control trace events to scheduler trace event file.

Change-Id: I65943d8e4a9eac1f9f5a40ad5aaf166679215f48
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-24 11:00:04 -07:00
Olav Haugan
59f16ae034 core_ctrl: Move core control into kernel
Move core control from out-of-tree module into the kernel proper.

Core control monitors load on CPUs and controls how many CPUs are
available for the system to use at any point in time. This can help save
power. Core control can be configured through a sysfs interface.

Change-Id: Ia78e701468ea3828195c2a15c9cf9fafd099804a
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-24 11:00:03 -07:00
Olav Haugan
7f2c523643 core_ctl_helper: Remove code since it is not used anymore
Remove the core control helper code since this is not needed anymore
with the subsequent patches that move core control into the kernel.

Change-Id: I62acddeb707fc7d5626580166b3466e63f45fd89
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-24 11:00:02 -07:00
Olav Haugan
639c8ad52d perf: Add cpu isolation awareness
Ensure perf events do not wake up idle cores when a core is isolated.

Change-Id: Ifefb2f1cf6c24af7bc46fc62797955b8c8ad5815
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-24 10:59:59 -07:00
Olav Haugan
e38c1ce123 smp: Do not wake up all idle CPUs
Do not wake up cpus that are isolated.

Change-Id: I07702bb5b738c1c75c49a2ca4cb08be0231ccb12
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-24 10:59:58 -07:00
Olav Haugan
bc24c063ef pmqos: Enable cpu isolation awareness
Set a long latency requirement for isolated cores to ensure the LPM
logic will select a deep sleep state.

Change-Id: I83e9fbb800df259616a145d311b50627dc42a5ff
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-24 10:59:57 -07:00
Olav Haugan
4400ef145f irq: Make irq affinity function cpu isolation aware
Prohibit setting the affinity of an IRQ to an isolated core.

Change-Id: I7b50778615541a64f9956573757c7f28748c4f69
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-24 10:59:55 -07:00
Olav Haugan
0a17b36a20 sched/core: Add trace point for cpu isolation
Add a tracepoint to capture the cpu isolation event, including a KPI
for the time it took to isolate.

Change-Id: If2d30000f068afc50db953940f4636ef6a089b24
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-24 10:59:42 -07:00
Olav Haugan
e33c24bfec sched: add cpu isolation support
This adds cpu isolation APIs to the scheduler to isolate and unisolate
CPUs. Isolating and unisolating a CPU can be used in place of hotplug;
it is faster than hotplug and can thus be used to optimize the
performance and power of multi-core CPUs.

Isolation works by migrating non-pinned IRQs and tasks to other CPUs
and marking the CPU as not available to the scheduler and load balancer.
Pinned tasks and IRQs are still allowed to run, but this is expected to
be minimal.

Unisolation works by simply marking the CPU available to the scheduler
and load balancer.
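
A sketch of the API surface this description suggests; the prototypes are
inferred from the text, not copied from the patch:

#include <linux/types.h>

/* Migrate non-pinned IRQs and tasks off the CPU and mark it unavailable
 * to the scheduler and load balancer; returns 0 or a negative errno. */
extern int sched_isolate_cpu(int cpu);

/* Mark the CPU available to the scheduler and load balancer again. */
extern int sched_unisolate_cpu(int cpu);

/* Test helper so other subsystems (perf, smp, irq, pm_qos above) can
 * skip isolated CPUs. */
extern bool cpu_isolated(int cpu);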

Change-Id: I0bbddb56238c2958c5987877c5bfc3e79afa67cc
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-24 10:55:17 -07:00
Linux Build Service Account
a7186f9754 Merge "cgroup: make sure a parent css isn't freed before its children" 2016-09-24 02:00:29 -07:00
Joonwoo Park
cc60f0790f sched: constrain HMP scheduler tunable range with in better way
HMP scheduler tunables can be constrained via the extra1 and extra2
fields of ctl_table.  Having a valid range in the sysctl table gives a
clearer view of each tunable's range.

Also add a range for sched_select_prev_cpu_us so we can avoid
configuring that tunable with an invalid value.

CRs-fixed: 1056910
Change-Id: I09fcc019133f4d37b7be3287da8e0733e40fc0ac
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2016-09-21 16:50:42 -07:00
Olav Haugan
3fe956359c watchdog: Add support for cpu isolation
Open up an interface to allow external subsystems to enable and disable
the hard lockup detector.

Change-Id: I88a728ee1d54aaa887fab52e5e40d1d4e4fc69ca
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-20 17:47:13 -07:00
Olav Haugan
fc70615291 cpumask: Add cpu isolation support
Add a bitmask and corresponding supporting functions for cpu isolation.

Change-Id: Ice1a9503666a2b720bdb324289ca55ceb33097cd
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-20 17:47:13 -07:00
Olav Haugan
922fed628c timer: Do not require CPUSETS to be enabled for migration
Do not require CPUSETS to be enabled to allow migration of timers and
hrtimers.

Change-Id: Ib911a0d34c250c4df020bdb265b92d2b8df8db93
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-20 17:47:13 -07:00
Santosh Shukla
bba552f4fc timer: Add function to migrate timers
Add a function to migrate timers; it will be used by a later patch set.

Change-Id: I370e404001344e635a663822b07557abbe0f6f52
Signed-off-by: Santosh Shukla <santosh.shukla@linaro.org>
[ohaugan@codeaurora.org: Updated commit text and fixed trivial merge conflict]
Git-commit: 3633b88d8fcb4273807574c27c328b6908a741e5
Git-repo: git://git.linaro.org/people/mike.holmes/santosh.shukla/lng-isol.git
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-20 17:47:12 -07:00
Gary S. Robertson
84e39dcb3b hrtimer.h: prevent pinned timer state from breaking inactive test
An hrtimer may be pinned to a CPU but inactive, so it is no longer valid
to test the hrtimer.state struct member as having no bits set when
inactive. Change the test function to mask out the HRTIMER_STATE_PINNED
bit when checking for the inactive state.
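
A minimal sketch of the adjusted test, assuming HRTIMER_STATE_PINNED is a
state bit added to timer->state by this patch series; the helper name is
illustrative:

#include <linux/hrtimer.h>

/* Ignore the pinned bit when deciding whether the timer is inactive, so
 * a pinned-but-idle hrtimer still reads as inactive. */
static inline bool hrtimer_state_is_inactive(const struct hrtimer *timer)
{
        return (timer->state & ~HRTIMER_STATE_PINNED) ==
                HRTIMER_STATE_INACTIVE;
}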

Change-Id: I632f37874ef79887ee1202a028ef734f392d6ed0
Signed-off-by: Gary S. Robertson <gary.robertson@linaro.org>
[ohaugan@codeaurora.org: Port to 4.4]
Git-commit: 902e4d4eb0d2158d2792166221a72a829caecf07
Git-repo: git://git.linaro.org/people/mike.holmes/santosh.shukla/lng-isol.git
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-09-20 17:47:12 -07:00