The current policy prefers selecting an idle CPU in the waker's
cluster over the waker CPU when the waker is running only one task.
Selecting an idle CPU eliminates the chance of the waker migrating to
a different CPU after the wakee preempts it. This policy is also not
susceptible to incorrect "sync" usage, i.e. the waker not going to
sleep after waking up the wakee.
However, the LPM exit latency associated with an idle CPU outweighs
these benefits on some targets. So add a knob to prefer the waker
CPU, when it has only one runnable task, over idle CPUs in the waker
cluster.
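A minimal sketch of the decision this knob enables (the knob name,
helper name and surrounding context are assumptions for illustration,
not the actual implementation):

	/* Prefer the waker CPU over an idle CPU in its cluster when the
	 * knob is set and the waker is the only runnable task there. */
	static int sync_wakee_cpu(int waker_cpu, int idle_cpu)
	{
		if (sysctl_sched_prefer_sync_wakee_to_waker &&
		    cpu_rq(waker_cpu)->nr_running == 1)
			return waker_cpu;

		return idle_cpu;
	}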
Change-Id: Id974748c07625c1b19112235f426a5d204dfdb33
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
Replace hotplug functionality in core control with cpu isolation
and integrate it into the scheduler.
Change-Id: I4f1514ba5bac2e259a1105fcafb31d6a92ddd249
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Refactor cpu data into per-cpu data and cluster data to improve
readability and make the code easier to understand.
Change-Id: I96505aeb9d07a6fa3a2c28648ffa299e0cfa2e41
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Move the core control trace events to scheduler trace event file.
Change-Id: I65943d8e4a9eac1f9f5a40ad5aaf166679215f48
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Move core control from out-of-tree module into the kernel proper.
Core control monitors the load on CPUs and controls how many CPUs are
available for the system to use at any point in time. This can help
save power. Core control can be configured through a sysfs interface.
Change-Id: Ia78e701468ea3828195c2a15c9cf9fafd099804a
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Remove the core control helper code since it is no longer needed with
the subsequent patches that move core control into the kernel.
Change-Id: I62acddeb707fc7d5626580166b3466e63f45fd89
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Ensure perf events do not wake up idle cores when a core is isolated.
Change-Id: Ifefb2f1cf6c24af7bc46fc62797955b8c8ad5815
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Set a long latency requirement for isolated cores to ensure the LPM
logic selects a deep sleep state.
Change-Id: I83e9fbb800df259616a145d311b50627dc42a5ff
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Prohibit setting the affinity of an IRQ to an isolated core.
Change-Id: I7b50778615541a64f9956573757c7f28748c4f69
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Add a tracepoint to capture the cpu isolation event, including a KPI
for the time it took to isolate.
Change-Id: If2d30000f068afc50db953940f4636ef6a089b24
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
This adds cpu isolation APIs to the scheduler to isolate and unisolate
CPUs. Isolating and unisolating a CPU can be used in place of hotplug.
Isolating and unisolating a CPU is faster than hotplug and can thus be
used to optimize the performance and power of multi-core CPUs.
Isolation works by migrating non-pinned IRQs and tasks to other CPUs
and marking the CPU as unavailable to the scheduler and load balancer.
Pinned tasks and IRQs are still allowed to run, but this is expected
to be minimal.
Unisolation works by simply marking the CPU available to the scheduler
and load balancer again.
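As an illustration, the new entry points are expected to be used
roughly as follows (assuming they take a CPU number and return 0 on
success; exact signatures may differ):

	int ret;

	/* Take CPU 2 away from the scheduler without a hotplug cycle. */
	ret = sched_isolate_cpu(2);
	if (ret)
		pr_err("failed to isolate CPU2: %d\n", ret);

	/* ... later, hand it back to the scheduler and load balancer. */
	ret = sched_unisolate_cpu(2);
	if (ret)
		pr_err("failed to unisolate CPU2: %d\n", ret);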
Change-Id: I0bbddb56238c2958c5987877c5bfc3e79afa67cc
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
HMP scheduler tunables can be constrained via the extra1 and extra2
fields of ctl_table. Having a valid range in the sysctl table gives a
clearer view of each tunable's range.
Also add a range for sched_select_prev_cpu_us so that invalid values
cannot be configured for that tunable.
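For instance, a range is enforced by pointing extra1/extra2 at the
bounds and using proc_dointvec_minmax; the bounds below are purely
illustrative:

	static int sched_select_prev_cpu_us_min; /* = 0 */
	static int sched_select_prev_cpu_us_max = 2000; /* illustrative */

	{
		.procname	= "sched_select_prev_cpu_us",
		.data		= &sysctl_sched_select_prev_cpu_us,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &sched_select_prev_cpu_us_min,
		.extra2		= &sched_select_prev_cpu_us_max,
	},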
CRs-fixed: 1056910
Change-Id: I09fcc019133f4d37b7be3287da8e0733e40fc0ac
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
Open up an interface to allow external subsystems to enable and
disable the hard lockup detector.
Change-Id: I88a728ee1d54aaa887fab52e5e40d1d4e4fc69ca
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Add a bitmask and corresponding supporting functions for cpu isolation.
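A sketch of the shape this could take, modeled on the existing
cpu_online_mask/cpu_online() pair (the names below are assumptions):

	extern cpumask_t cpu_isolated_mask;

	/* true if @cpu is currently isolated from the scheduler */
	static inline bool cpu_isolated(unsigned int cpu)
	{
		return cpumask_test_cpu(cpu, &cpu_isolated_mask);
	}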
Change-Id: Ice1a9503666a2b720bdb324289ca55ceb33097cd
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Do not require CPUSETS to be enabled to allow migration of timers and
hrtimers.
Change-Id: Ib911a0d34c250c4df020bdb265b92d2b8df8db93
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Add a timer migration function that will be used by a later patch set.
Change-Id: I370e404001344e635a663822b07557abbe0f6f52
Signed-off-by: Santosh Shukla <santosh.shukla@linaro.org>
[ohaugan@codeaurora.org: Updated commit text and fixed trivial merge conflict]
Git-commit: 3633b88d8fcb4273807574c27c328b6908a741e5
Git-repo: git://git.linaro.org/people/mike.holmes/santosh.shukla/lng-isol.git
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
An hrtimer may be pinned to a CPU but inactive, so it is no longer
valid to assume the hrtimer.state struct member has no bits set when
the timer is inactive. Change the test function to mask out the
HRTIMER_STATE_PINNED bit when checking for the inactive state.
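Roughly, the adjusted check looks like this (HRTIMER_STATE_PINNED is
the new state bit introduced by this series; the helper name and the
exact function touched are illustrative):

	/* A pinned-but-idle hrtimer must still read as inactive. */
	static inline int hrtimer_state_active(struct hrtimer *timer)
	{
		return (timer->state & ~HRTIMER_STATE_PINNED) !=
			HRTIMER_STATE_INACTIVE;
	}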
Change-Id: I632f37874ef79887ee1202a028ef734f392d6ed0
Signed-off-by: Gary S. Robertson <gary.robertson@linaro.org>
[ohaugan@codeaurora.org: Port to 4.4]
Git-commit: 902e4d4eb0d2158d2792166221a72a829caecf07
Git-repo: git://git.linaro.org/people/mike.holmes/santosh.shukla/lng-isol.git
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
To isolate CPUs from hrtimers via sysfs using cpusets, we need some
support from the hrtimer core, i.e. a routine hrtimer_quiesce_cpu()
which migrates away all the unpinned hrtimers but doesn't touch the
pinned ones.
This patch creates that routine.
Change-Id: I51259ea41e3bd5cdba50b718201a6840174a7224
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
[forward port to 3.18]
Signed-off-by: Santosh Shukla <santosh.shukla@linaro.org>
[ohaugan@codeaurora.org: Port to 4.4]
Git-commit: d4d50a0ddc35e58ee95137ba4d14e74fea8b682f
Git-repo: git://git.linaro.org/people/mike.holmes/santosh.shukla/lng-isol.git
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
'Pinned' information is now required in migrate_hrtimers(), as we can
migrate non-pinned timers away without a hotplug (i.e. with
cpuset.quiesce), and so we need to identify pinned timers, since we
can't migrate them.
This patch reuses the timer->state variable for this flag, as there
are enough free bits available in it and there is no point in
increasing the size of the struct by adding another field.
Change-Id: If3b3770e547971809e789ea7c8033c48ec2aa92d
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
[forward port to 3.18]
Signed-off-by: Santosh Shukla <santosh.shukla@linaro.org>
[ohaugan@codeaurora.org: Port to 4.4]
Git-commit: 62feaf1ed0b64c04868d143d8bdb92d60dc3189b
Git-repo: git://git.linaro.org/people/mike.holmes/santosh.shukla/lng-isol.git
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
To isolate CPUs from timers via sysfs using cpusets, we need some
support from the timer core, i.e. a routine timer_quiesce_cpu() which
migrates away all the unpinned timers but doesn't touch the pinned
ones.
This patch creates that routine.
Change-Id: I8624e0659b86b7b8fa425a3fafdb0784fe005124
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
[forward port to 3.18]
Signed-off-by: Santosh Shukla <santosh.shukla@linaro.org>
[ohaugan@codeaurora.org: Port to 4.4. Fixes for compilation error]
Git-commit: 313910b70ea0c73f8789d9189c11e1f339080646
Git-repo: git://git.linaro.org/people/mike.holmes/santosh.shukla/lng-isol.git
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
This is needed to support migration of timers during cpu isolation. A
timer might be running on the CPU that we want to isolate, in which
case we cannot migrate the timers at that point. Add a spin-loop that
waits for the running timer to finish before migrating the timers.
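A sketch of that wait, assuming the 4.4 timer base still tracks the
currently running callback in tvec_base->running_timer:

	/* Busy-wait until the callback currently executing on the target
	 * CPU's timer base completes; only then migrate pending timers. */
	while (READ_ONCE(base->running_timer))
		cpu_relax();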
Change-Id: I24d6e91b6dff468c640c2fe3a37a7f31b6f0c79a
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
When kernel.perf_event_paranoid is set to 3 (or greater), disallow all
access to performance events by users without CAP_SYS_ADMIN.
Add a Kconfig symbol CONFIG_SECURITY_PERF_EVENTS_RESTRICT that
makes this value the default.
This is based on a similar feature in grsecurity
(CONFIG_GRKERNSEC_PERF_HARDEN). This version doesn't include making
the variable read-only. It also allows enabling further restriction
at run-time regardless of whether the default is changed.
https://lkml.org/lkml/2016/1/11/587
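The gate amounts to a check of this shape early in the
perf_event_open() syscall (the helper name follows the pattern of the
existing perf_paranoid_*() helpers and is an assumption here):

	static inline bool perf_paranoid_any(void)
	{
		return sysctl_perf_event_paranoid > 2;
	}

	/* in the syscall entry path */
	if (perf_paranoid_any() && !capable(CAP_SYS_ADMIN))
		return -EACCES;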
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Git-repo: https://android.googlesource.com/kernel/common.git
Git-commit: 012b0adcf7299f6509d4984cf46ee11e6eaed4e4
[d-cagle@codeaurora.org: Resolve trivial merge conflicts]
Signed-off-by: Dennis Cagle <d-cagle@codeaurora.org>
Bug: 29054680
Change-Id: Iff5bff4fc1042e85866df9faa01bce8d04335ab8
A discrepancy between cpu_online_mask and cpuset's effective_cpus
mask is inevitable during hotplug since cpuset defers updating the
effective_cpus mask via a workqueue, during which time nothing
prevents the system from performing more hotplug operations. For that reason
guarantee_online_cpus() walks up the cpuset hierarchy until it finds
an intersection under the assumption that top cpuset's effective_cpus
mask intersects with cpu_online_mask even with such a race occurring.
However a sequence of CPU hotplugs can open a time window, during which
none of the effective CPUs in the top cpuset intersect with
cpu_online_mask.
For example when there are 4 possible CPUs 0-3 and only CPU0 is online:
========================       ===========================
 cpu_online_mask               top_cpuset.effective_cpus
========================       ===========================
 echo 1 > cpu2/online.
 CPU hotplug notifier woke up hotplug work but not yet scheduled.
 [0,2]                         [0]
 echo 0 > cpu0/online.
 The workqueue is still runnable.
 [2]                           [0]
========================       ===========================
Now there is no intersection between cpu_online_mask and
top_cpuset.effective_cpus. Thus invoking sys_sched_setaffinity() at
this moment can cause the following:
Unable to handle kernel NULL pointer dereference at virtual address 000000d0
------------[ cut here ]------------
Kernel BUG at ffffffc0001389b0 [verbose debug info unavailable]
Internal error: Oops - BUG: 96000005 [#1] PREEMPT SMP
Modules linked in:
CPU: 2 PID: 1420 Comm: taskset Tainted: G W 4.4.8+ #98
task: ffffffc06a5c4880 ti: ffffffc06e124000 task.ti: ffffffc06e124000
PC is at guarantee_online_cpus+0x2c/0x58
LR is at cpuset_cpus_allowed+0x4c/0x6c
<snip>
Process taskset (pid: 1420, stack limit = 0xffffffc06e124020)
Call trace:
[<ffffffc0001389b0>] guarantee_online_cpus+0x2c/0x58
[<ffffffc00013b208>] cpuset_cpus_allowed+0x4c/0x6c
[<ffffffc0000d61f0>] sched_setaffinity+0xc0/0x1ac
[<ffffffc0000d6374>] SyS_sched_setaffinity+0x98/0xac
[<ffffffc000085cb0>] el0_svc_naked+0x24/0x28
The top cpuset's effective_cpus are guaranteed to be identical to
cpu_online_mask eventually. Hence fall back to cpu_online_mask when
there is no intersection between top cpuset's effective_cpus and
cpu_online_mask.
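The resulting logic in guarantee_online_cpus() is roughly the
following (paraphrased sketch of the fix described above):

	static void guarantee_online_cpus(struct cpuset *cs,
					  struct cpumask *pmask)
	{
		while (!cpumask_intersects(cs->effective_cpus, cpu_online_mask)) {
			cs = parent_cs(cs);
			if (unlikely(!cs)) {
				/*
				 * The top cpuset raced with hotplug and no
				 * longer intersects the online mask; fall
				 * back to cpu_online_mask directly.
				 */
				cpumask_copy(pmask, cpu_online_mask);
				return;
			}
		}
		cpumask_and(pmask, cs->effective_cpus, cpu_online_mask);
	}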
CRs-fixed: 1058529
Change-Id: I83ee4619feff2ca7452119c9baecb6ffde755287
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
Acked-by: Li Zefan <lizefan@huawei.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: cgroups@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: <stable@vger.kernel.org> # 3.17+
Signed-off-by: Tejun Heo <tj@kernel.org>
* tmp-bab1564:
ANDROID: mmc: Add CONFIG_MMC_SIMULATE_MAX_SPEED
android: base-cfg: Add CONFIG_INET_DIAG_DESTROY
cpufreq: interactive: only apply interactive boost when enabled
cpufreq: interactive: fix policy locking
ANDROID: dm verity fec: add sysfs attribute fec/corrected
ANDROID: android: base-cfg: enable CONFIG_DM_VERITY_FEC
UPSTREAM: dm verity: add ignore_zero_blocks feature
UPSTREAM: dm verity: add support for forward error correction
UPSTREAM: dm verity: factor out verity_for_bv_block()
UPSTREAM: dm verity: factor out structures and functions useful to separate object
UPSTREAM: dm verity: move dm-verity.c to dm-verity-target.c
UPSTREAM: dm verity: separate function for parsing opt args
UPSTREAM: dm verity: clean up duplicate hashing code
UPSTREAM: dm: don't save and restore bi_private
mm: Export do_munmap
sdcardfs: remove unneeded __init and __exit
sdcardfs: Remove unused code
fs: Export d_absolute_path
sdcardfs: remove effectless config option
inotify: Fix erroneous update of bit count
fs: sdcardfs: Declare LOOKUP_CASE_INSENSITIVE unconditionally
trace: cpufreq: fix typo in min/max cpufreq
sdcardfs: Add support for d_canonical_path
vfs: add d_canonical_path for stacked filesystem support
sdcardfs: Bring up to date with Android M permissions:
Changed type-casting in packagelist management
Port of sdcardfs to 4.4
Included sdcardfs source code for kernel 3.0
ANDROID: usb: gadget: Add support for MTP OS desc
CHROMIUM: usb: gadget: f_accessory: add .raw_request callback
CHROMIUM: usb: gadget: audio_source: add .free_func callback
CHROMIUM: usb: gadget: f_mtp: fix usb_ss_ep_comp_descriptor
CHROMIUM: usb: gadget: f_mtp: Add SuperSpeed support
FROMLIST: mmc: block: fix ABI regression of mmc_blk_ioctl
FROMLIST: mm: ASLR: use get_random_long()
FROMLIST: drivers: char: random: add get_random_long()
FROMLIST: pstore-ram: fix NULL reference when used with pdata
usb: u_ether: Add missing rx_work init
ANDROID: dm-crypt: run in a WQ_HIGHPRI workqueue
misc: uid_stat: Include linux/atomic.h instead of asm/atomic.h
hid-sensor-hub.c: fix wrong do_div() usage
power: Provide dummy log_suspend_abort_reason() if SUSPEND is disabled
PM / suspend: Add dependency on RTC_LIB
drivers: power: use 'current' instead of 'get_current()'
video: adf: Set ADF_MEMBLOCK to boolean
video: adf: Fix modular build
net: ppp: Fix modular build for PPPOLAC and PPPOPNS
net: pppolac/pppopns: Replace msg.msg_iov with iov_iter_kvec()
ANDROID: mmc: sdio: Disable retuning in sdio_reset_comm()
ANDROID: mmc: Move tracepoint creation and export symbols
ANDROID: kernel/watchdog: fix unused variable warning
ANDROID: usb: gadget: f_mtp: don't use le16 for u8 field
ANDROID: lowmemorykiller: fix declaration order warnings
ANDROID: net: fix 'const' warnings
net: diag: support v4mapped sockets in inet_diag_find_one_icsk()
net: tcp: deal with listen sockets properly in tcp_abort.
tcp: diag: add support for request sockets to tcp_abort()
net: diag: Support destroying TCP sockets.
net: diag: Support SOCK_DESTROY for inet sockets.
net: diag: Add the ability to destroy a socket.
net: diag: split inet_diag_dump_one_icsk into two
Revert "mmc: Extend wakelock if bus is dead"
Revert "mmc: core: Hold a wake lock accross delayed work + mmc rescan"
ANDROID: mmc: move to a SCHED_FIFO thread
Conflicts:
drivers/cpufreq/cpufreq_interactive.c
drivers/misc/uid_stat.c
drivers/mmc/card/block.c
drivers/mmc/card/queue.c
drivers/mmc/card/queue.h
drivers/mmc/core/core.c
drivers/mmc/core/sdio.c
drivers/staging/android/lowmemorykiller.c
drivers/usb/gadget/function/f_mtp.c
kernel/watchdog.c
Signed-off-by: Runmin Wang <runminw@codeaurora.org>
Change-Id: Ibb4db11c57395f67dee86211a110c462e6181552
Frequency-demand conversion data structures are only used under
CONFIG_SCHED_HMP. Move them out of sched.h into hmp.c, where they
actually belong after the recent refactor.
Change-Id: I3c3eebca86062f11b80af93ba3716695eb787376
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
commit 8bb5ef79bc0f4016ecf79e8dce6096a3c63603e4 upstream.
There are three subsystem callbacks in css shutdown path -
css_offline(), css_released() and css_free(). Except for
css_released(), cgroup core didn't guarantee the order of invocation.
css_offline() or css_free() could be called on a parent css before its
children. This behavior is unexpected and led to bugs in cpu and
memory controller.
The previous patch updated ordering for css_offline() which fixes the
cpu controller issue. While there currently isn't a known bug caused
by misordering of css_free() invocations, let's fix it too for
consistency.
css_free() ordering can be trivially fixed by moving the put of the
parent css below the css_free() invocation.
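Concretely, in the css free path the parent put just moves below the
subsystem callback, along these lines (simplified sketch of the code
path, not the full function):

	/* css_free_work_fn(): free the child before dropping the parent
	 * reference, so css_free() never runs on a parent before its child. */
	struct cgroup_subsys_state *parent = css->parent;

	ss->css_free(css);
	if (parent)
		css_put(parent);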
Change-Id: I97febdd414ef5cd57490ce2746650dde7fdda28f
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Git-commit: 8bb5ef79bc0f4016ecf79e8dce6096a3c63603e4
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Signed-off-by: Runmin Wang <runminw@codeaurora.org>
The structures being moved around are only used for trace events
defined under CONFIG_SCHED_HMP. Move the code to hmp.c to reflect
this.
Change-Id: Ib959355264405ab779b24948f111a2ca61d367de
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
This reverts commit 9d6fd2c3e9 ("Merge remote-tracking branch
'msm-4.4/tmp-510d0a3f' into msm-4.4") because it breaks the dump
parsing tools: the kernel can now be loaded anywhere in memory rather
than at a fixed linear mapping.
Change-Id: Id416f0a249d803442847d09ac47781147b0d0ee6
Signed-off-by: Trilok Soni <tsoni@codeaurora.org>
On arm systems the spin-on-owner optimization can intermittently cause
a lockup that's usually as long as the waiting thread's cpu timeslice.
The repeated mutex acquisitions + atomics in a single spinning thread
can completely lock the owner out of releasing the kernel mutex. The
owner needs to acquire a spinlock on the release path, and this
spinlock can share a monitor with the other locks and atomics on the
waiter path.
Rate limit the waiter so that the thread releasing the mutex is never
starved.
Bug 23036902
Change-Id: Ie1b64275a0c6141f94faaf3e63fcbf9b5438140c
Signed-off-by: Riley Andrews <riandrews@google.com>
Git-commit: 84d8ce7e0025cac60a8a379a7ee3e59d640fbc03
Git-repo: https://android.googlesource.com/kernel/msm.git
Signed-off-by: Trilok Soni <tsoni@codeaurora.org>
This deliberately changes the behavior of the per-cpuset
cpus file so that it is not affected by hotplug. When a cpu is
offlined, it will be removed from the cpuset/cpus file. When a cpu is
onlined, if the cpuset originally requested that cpu as part of the
cpuset, the cpu will be restored to the cpuset. The cpus files still
have to be hierarchical, but the ranges no longer have to be drawn
from the currently online cpus, just from the physically present cpus.
Change-Id: I3efbae24a1f6384be1e603fb56f0d3baef61d924
[ohaugan@codeaurora.org: Port to 4.4]
Git-commit: f180bcac788464a0baf3d79d76dd86d6972ea413
Git-repo: https://android.googlesource.com/kernel/common/msm.git
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
sysctl_sched_wake_to_idle is a means to allow or disallow a global
task placement preference for idle CPUs. It has been unused thus far
since we have preferred a per-task flag to control placement for
individual tasks. Using the global flag, however, allows greater
flexibility for testing and system evaluation.
Incorporate sysctl_sched_wake_to_idle into the placement policy.
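In the placement path this amounts to treating the global sysctl as an
OR with the per-task preference, e.g. (sketch only; the per-task flag
name PF_WAKE_UP_IDLE is an assumption):

	/* Steer the wakee to an idle CPU if either the global sysctl or
	 * the per-task preference asks for it. */
	static inline bool wake_to_idle(struct task_struct *p)
	{
		return sysctl_sched_wake_to_idle ||
		       (p->flags & PF_WAKE_UP_IDLE);
	}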
Change-Id: I7e830bc914eb9c159ae18f165bc8b0278ec9af40
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
Do the aggregation for frequency only when the total group busy time
is above sched_freq_aggregate_threshold. This filtering is especially
needed for cases where groups are created by including all threads
of an application process. The knob can be tuned to apply aggregation
only to heavy workloads.
When this knob is enabled and load is aggregated, the load is not
clipped to 100% @ current frequency, so the frequency can ramp up
faster.
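A sketch of the filter, with assumed variable names:

	bool aggregate = group_busy >= sysctl_sched_freq_aggregate_threshold;
	u64 load = cpu_busy;

	if (aggregate)
		load += group_busy;
	else
		/* usual behavior: report at most 100% @ current frequency */
		load = min(load, max_load_at_cur_freq);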
Change-Id: Icfd91c85938def101a989af3597d3dcaa8026d16
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
The load reporting during frequency alert notifications is broken
under load aggregation. When aggregation is enabled, the total group
busy time is accounted towards the maximum-busy CPU of a frequency
domain. If this CPU has a notification pending, its group busy time
alone is accounted and the other CPUs' group busy time is completely
ignored. Similarly, if any CPU other than the maximum-busy CPU has a
pending notification, its group busy time is accounted twice.
Maintain the frequency alert notification flag per frequency domain.
When the notification is pending, don't clip the load to 100% @
current frequency for any of the CPUs in the frequency domain.
Change-Id: Iebc7d74d6fafa20430fa1c7d80f34a6ab198832d
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
When sysctl_sched_enable_thread_grouping is set to 1, any new tasks
created are put in the same group as their group leader.
Change-Id: If1837dd7c8120c8b097cfffa1dc52eb4781f1641
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
Add a flag to the trace event that indicates why we picked a
particular CPU. This is very useful information that can be used to
analyse the effectiveness of the scheduler.
Change-Id: Ic9462fef751f9442ae504c09fbf4418e08f018b0
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
notify_migration() is an HMP-specific function whose contents are all
stubbed out for !CONFIG_SCHED_HMP. However, it still makes calls to
rcu_read_lock/unlock(), which are simply redundant in the !HMP case.
Move the function under CONFIG_SCHED_HMP and add a stub when the
config is not defined so that there is no overhead.
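The usual pattern for this (the signature shown is an assumption):

	#ifdef CONFIG_SCHED_HMP
	void notify_migration(int src_cpu, int dest_cpu, bool src_cpu_dead,
			      struct task_struct *p);
	#else
	static inline void notify_migration(int src_cpu, int dest_cpu,
					    bool src_cpu_dead,
					    struct task_struct *p)
	{
	}
	#endif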
Change-Id: Iad914f31b629e81e403b0e89796b2b0f1d081695
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>