Commit graph

568532 commits

Morten Rasmussen
3935105f57 sched/fair: Energy-aware wake-up task placement
When the system is not overutilized, place waking tasks on the most
energy-efficient cpu. Previous attempts reduced the search space by
matching task utilization to cpu capacity before consulting the energy
model, as consulting it is an expensive operation. Those search
heuristics didn't work very well, and lacking any better alternative,
this patch takes the brute-force route and tries all potential targets.

This approach doesn't scale, but it might be sufficient for many
embedded applications while work continues on a heuristic that can
minimize the necessary computations. The heuristic must be derived from
the platform energy model rather than making additional assumptions,
such as that lower capacity implies better energy efficiency. PeterZ
mentioned in the past that we might be able to derive some simpler
deciding functions using mathematical (modal?) analysis.
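
As an illustration, a minimal sketch of the brute-force search described
above, assuming a hypothetical helper energy_diff_task() that returns
the relative energy impact of placing p on a given cpu:

	static int select_energy_cpu_brute(struct task_struct *p, int prev_cpu)
	{
		int cpu, best_cpu = prev_cpu;
		int best_diff = 0;

		/* No search-space pruning: evaluate every allowed online cpu. */
		for_each_cpu_and(cpu, tsk_cpus_allowed(p), cpu_online_mask) {
			/* Negative diff means placing p here saves energy. */
			int diff = energy_diff_task(p, prev_cpu, cpu);

			if (diff < best_diff) {
				best_diff = diff;
				best_cpu = cpu;
			}
		}
		return best_cpu;
	}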

Change-Id: I772bacb4c8fd599f8006fa422f842e66377a9c6c
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
[rebase: on top of msm-google/android-msm-marlin-3.18]
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
(cherry picked from commit a894422dbdb7b77ea2acfe7ff909ccb5ded23514)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:52 -07:00
Morten Rasmussen
02cbde61f4 sched/fair: Add energy_diff dead-zone margin
It is not worth the overhead to migrate tasks for tiny, insignificant
energy savings. To prevent this, an energy margin is introduced in
energy_diff() which effectively adds a dead-zone that rounds tiny energy
differences to zero. Since no scale is enforced for energy model data,
the margin can't be absolute. Instead it is defined as a +/-1.56% energy
saving compared to the current total estimated energy consumption.
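
A sketch of the dead-zone, assuming hypothetical nrg_before/nrg_diff
values already computed by energy_diff(); nrg_before >> 6 is 1/64, i.e.
~1.56% of the current total:

	static inline int apply_nrg_margin(int nrg_before, int nrg_diff)
	{
		int margin = nrg_before >> 6;	/* 1/64 ~= 1.56% dead-zone */

		/* Round tiny energy differences to zero: not worth migrating. */
		if (nrg_diff > -margin && nrg_diff < margin)
			return 0;

		return nrg_diff;
	}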

Change-Id: I6be069c752c701fb825430896b3b768a7ab2fee4
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
[rebase: on top of msm-google/android-msm-marlin-3.18,
         massage original patch which changes code in energy_diff()
         into __energy_diff() introduced by SchedTune]
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
(cherry picked from commit 780cb5a5fa47adf13d4fc2b77e8e94448cd56098)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:52 -07:00
Dietmar Eggemann
3b6ba235bc sched/fair: Decommission energy_aware_wake_cpu()
The EAS functionality in the wakeup path will be brought back by the
following patch ("sched/fair: Energy-aware wake-up task placement")
providing the function select_energy_cpu_brute().

Change-Id: I927fb9e8261cfacfe404695f853941c7959aa146
[ Trivial merge conflicts resolved. ]
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
(cherry picked from commit 80aee424fb7765a777267e144037642625a71304)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:52 -07:00
Dietmar Eggemann
168228463c sched/fair: Do not force want_affine eq. true if EAS is enabled
This lets us use Capacity-Aware Scheduling (CAS) if EAS is enabled.

Change-Id: I2e647a201ea0b733d1487c3e153047a49fb22847
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
(cherry picked from commit 00b7da2ae58bf568529e67614980f77e275b8d29)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:52 -07:00
Dietmar Eggemann
65cad23dcd arm64: Set SD_ASYM_CPUCAPACITY sched_domain flag on DIE level
Enable Capacity-Aware Scheduling.

This is a shortcut from the EAS integration implementation. We know
there are SoCs in use which have asymmetric cpu capacities on DIE level.

Change-Id: I7f1326e86b8fc08b65d1c1341da7d1ef1013dcbc
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
(cherry picked from commit 82f95a8cdff67770f02b7e4f5b1c084f42afc0e2)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:52 -07:00
Morten Rasmussen
c6cc7ca915 UPSTREAM: sched/fair: Fix incorrect comment for capacity_margin
The comment for capacity_margin introduced in:

  3273163c6775 ("sched/fair: Let asymmetric CPU configurations balance at wake-up")

... got its usage the wrong way round - fix it.

Change-Id: Ie46eac3e5ff43397b5bed61d0999d2817f1a1d96
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: freedom.tan@mediatek.com
Cc: keita.kobayashi.ym@renesas.com
Cc: mgalbraith@suse.de
Cc: sgurrappadi@nvidia.com
Cc: vincent.guittot@linaro.org
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1476452472-24740-7-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 893c5d2279041afeb593f1fa8edd9d02edf5b7cb)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:52 -07:00
Morten Rasmussen
adc7f08b2f UPSTREAM: sched/fair: Avoid pulling tasks from non-overloaded higher capacity groups
For asymmetric CPU capacity systems it is counter-productive for
throughput if low capacity CPUs are pulling tasks from non-overloaded
CPUs with higher capacity. The assumption is that higher CPU capacity is
preferred over running alone in a group with lower CPU capacity.

This patch rejects higher CPU capacity groups with one or fewer tasks
per CPU as the potential busiest group, which could otherwise lead to a
series of failing load-balancing attempts ending in a force-migration.

Change-Id: I428875bb6267c780026ef75e2882300738d016e7
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: freedom.tan@mediatek.com
Cc: keita.kobayashi.ym@renesas.com
Cc: mgalbraith@suse.de
Cc: sgurrappadi@nvidia.com
Cc: vincent.guittot@linaro.org
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1476452472-24740-5-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 9e0994c0a1c1f82c705f1f66388e1bcffcee8bb9)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:52 -07:00
Morten Rasmussen
60cc9f4e1e UPSTREAM: sched/fair: Add per-CPU min capacity to sched_group_capacity
struct sched_group_capacity currently represents the compute capacity
sum of all CPUs in the sched_group.

Unless it is divided by the group_weight to get the average capacity
per CPU, it hides differences in CPU capacity for mixed capacity systems
(e.g. high RT/IRQ utilization or ARM big.LITTLE).

But even the average may not be sufficient if the group covers CPUs of
different capacities.

Instead, by extending struct sched_group_capacity to indicate the min
per-CPU capacity in the group, a suitable group for a given task
utilization can more easily be found, such that CPUs with reduced
capacity can be avoided for tasks with high utilization (not implemented
by this patch).

Change-Id: If3cae1be62d01a199e752bca5abb45357d5d0fbd
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: freedom.tan@mediatek.com
Cc: keita.kobayashi.ym@renesas.com
Cc: mgalbraith@suse.de
Cc: sgurrappadi@nvidia.com
Cc: vincent.guittot@linaro.org
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1476452472-24740-4-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit bf475ce0a3dd75b5d1df6c6c14ae25168caa15ac)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:52 -07:00
Morten Rasmussen
f3f132b8e5 UPSTREAM: sched/fair: Consider spare capacity in find_idlest_group()
In low-utilization scenarios, comparing relative loads in
find_idlest_group() doesn't always lead to the optimal choice. Systems
with groups containing different numbers of cpus and/or cpus of
different compute capacity are significantly better off when considering
spare capacity rather than relative load in those scenarios.

In addition to the existing load-based search, an alternative
spare-capacity-based candidate sched_group is found and selected instead
if sufficient spare capacity exists. If not, the existing behaviour is
preserved.
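
The spare-capacity criterion boils down to comparing per-cpu headroom; a
minimal sketch, assuming the fair-class helpers capacity_of() and
cpu_util():

	static unsigned long capacity_spare(int cpu)
	{
		/* Headroom left after current utilization, clamped at zero. */
		return max_t(long, capacity_of(cpu) - cpu_util(cpu), 0);
	}

The candidate group is then the one whose idlest cpu leaves the most
spare capacity after accommodating the waking task's utilization.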

Change-Id: I6097af76c302a5a12e240ca24c70f707ad118242
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: freedom.tan@mediatek.com
Cc: keita.kobayashi.ym@renesas.com
Cc: mgalbraith@suse.de
Cc: sgurrappadi@nvidia.com
Cc: vincent.guittot@linaro.org
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1476452472-24740-3-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 6a0b19c0f39a7a7b7fb77d3867a733136ff059a3)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:52 -07:00
Morten Rasmussen
68c27298cd UPSTREAM: sched/fair: Compute task/cpu utilization at wake-up correctly
At task wake-up load-tracking isn't updated until the task is enqueued.
The task's own view of its utilization contribution may therefore not be
aligned with its contribution to the cfs_rq load-tracking which may have
been updated in the meantime. Basically, the task's own utilization
hasn't yet accounted for the sleep decay, while the cfs_rq may have
(partially). Estimating the cfs_rq utilization in case the task is
migrated at wake-up as task_rq(p)->cfs.avg.util_avg - p->se.avg.util_avg
is therefore incorrect as the two load-tracking signals aren't time
synchronized (different last update).

To solve this problem, this patch synchronizes the task utilization with
its previous rq before the task utilization is used in the wake-up path.
Currently the update/synchronization is done _after_ the task has been
placed by select_task_rq_fair(). The synchronization is done without
having to take the rq lock using the existing mechanism used in
remove_entity_load_avg().

Change-Id: I5605cca0c94c6ba43d9ce11554765a2456cf85bc
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: freedom.tan@mediatek.com
Cc: keita.kobayashi.ym@renesas.com
Cc: mgalbraith@suse.de
Cc: sgurrappadi@nvidia.com
Cc: vincent.guittot@linaro.org
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1476452472-24740-2-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 104cb16d9eb684f071d5bf3aa87c0d01af259b7c)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:52 -07:00
Morten Rasmussen
abfff9dece UPSTREAM: sched/fair: Let asymmetric CPU configurations balance at wake-up
Currently, SD_WAKE_AFFINE always takes priority over wakeup balancing if
SD_BALANCE_WAKE is set on the sched_domains. For asymmetric
configurations SD_WAKE_AFFINE is only desirable if the waking task's
compute demand (utilization) is suitable for the waking CPU and the
previous CPU, and all CPUs within their respective
SD_SHARE_PKG_RESOURCES domains (sd_llc). If not, let wakeup balancing
take over (find_idlest_{group, cpu}()).

This patch makes affine wake-ups conditional on whether both the waker
CPU and the previous CPU have sufficient capacity for the waking task,
assuming that the CPU capacities within an SD_SHARE_PKG_RESOURCES domain
(sd_llc) are homogeneous.
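
The upstream version of this patch implements the condition roughly as
below, with capacity_margin = 1280 (a ~1.25 headroom factor) and the
fair-class helpers capacity_orig_of() and task_util():

	static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
	{
		long min_cap, max_cap;

		min_cap = min(capacity_orig_of(prev_cpu), capacity_orig_of(cpu));
		max_cap = cpu_rq(cpu)->rd->max_cpu_capacity;

		/* Minimum capacity is close to max, no need to abort wake_affine. */
		if (max_cap - min_cap < max_cap >> 3)
			return 0;

		/* Abort the affine wake-up if p doesn't fit the smaller cpu. */
		return min_cap * 1024 < task_util(p) * capacity_margin;
	}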

Change-Id: I6d5d0426713da9ef6198f574ad9afbe58dacc1f0
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: freedom.tan@mediatek.com
Cc: keita.kobayashi.ym@renesas.com
Cc: mgalbraith@suse.de
Cc: sgurrappadi@nvidia.com
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1469453670-2660-10-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 3273163c6775c4c21823985304c2364b08ca6ea2)
[removed existing definition of capacity_margin]
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:52 -07:00
Morten Rasmussen
5173a85e22 UPSTREAM: sched/core: Enable SD_BALANCE_WAKE for asymmetric capacity systems
A domain with the SD_ASYM_CPUCAPACITY flag set indicates that
sched_groups at this level and below do not include CPUs of all
capacities available (e.g. groups containing little-only or big-only
CPUs in big.LITTLE systems). It is therefore necessary to put more
effort into finding an appropriate CPU at task wake-up by enabling
balancing at wake-up (SD_BALANCE_WAKE) on all lower (child) levels.

Change-Id: I4615917f540d03d7e7ef7de8f0da33b1ad97387c
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: freedom.tan@mediatek.com
Cc: keita.kobayashi.ym@renesas.com
Cc: mgalbraith@suse.de
Cc: sgurrappadi@nvidia.com
Cc: vincent.guittot@linaro.org
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1469453670-2660-8-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 9ee1cda5ee25c7dd82acf25892e0d229e818f8c7)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:52 -07:00
Morten Rasmussen
e51c057694 UPSTREAM: sched/core: Pass child domain into sd_init()
If behavioural sched_domain flags depend on topology flags set at higher
domain levels, we need a way to update the child domain flags. Moving the
child pointer assignment inside sd_init() should make that possible.

Change-Id: If043921fdf102c310adcc9e0280afa33c48c4783
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: freedom.tan@mediatek.com
Cc: keita.kobayashi.ym@renesas.com
Cc: mgalbraith@suse.de
Cc: sgurrappadi@nvidia.com
Cc: vincent.guittot@linaro.org
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1469453670-2660-7-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 3676b13e8524c576825fe1e731e347dba0083888)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:52 -07:00
Morten Rasmussen
68a3b157d9 UPSTREAM: sched/core: Introduce SD_ASYM_CPUCAPACITY sched_domain topology flag
Add a topology flag to the sched_domain hierarchy indicating the lowest
domain level where the full range of CPU capacities is represented by
the domain members for asymmetric capacity topologies (e.g. ARM
big.LITTLE).

The flag is intended to indicate that extra care should be taken when
placing tasks on CPUs and this level spans all the different types of
CPUs found in the system (no need to look further up the domain
hierarchy). This information is currently only available through
iterating through the capacities of all the CPUs at parent levels in the
sched_domain hierarchy.

  SD 2      [  0      1      2      3]  SD_ASYM_CPUCAPACITY

  SD 1      [  0      1] [   2      3]  !SD_ASYM_CPUCAPACITY

  CPU:         0      1      2      3
  capacity:  756    756   1024   1024

If the topology in the example above is duplicated to create an eight
CPU example with a third sched_domain level on top (SD 3), this level
should not have the flag set (!SD_ASYM_CPUCAPACITY) as its two groups
would both have all CPU capacities represented within them.

Change-Id: I1526407b90567cac387419719b7d7fdc8b259a85
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: freedom.tan@mediatek.com
Cc: keita.kobayashi.ym@renesas.com
Cc: mgalbraith@suse.de
Cc: sgurrappadi@nvidia.com
Cc: vincent.guittot@linaro.org
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1469453670-2660-6-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 1f6e6c7cb9bcd58abb5ee11243e0eefe6b36fc8e)
[trivial merge conflict]
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:52 -07:00
Morten Rasmussen
3e9cdd5ae9 UPSTREAM: sched/core: Remove unnecessary NULL-pointer check
Checking if the sched_domain pointer returned by sd_init() is NULL seems
pointless, as sd_init() neither checks whether it is valid to begin with
nor sets it to NULL.

Change-Id: I5e16fd0c2ca7234b097be7c95409ddb15c5e9de9
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: freedom.tan@mediatek.com
Cc: keita.kobayashi.ym@renesas.com
Cc: mgalbraith@suse.de
Cc: sgurrappadi@nvidia.com
Cc: vincent.guittot@linaro.org
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1469453670-2660-5-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 0e6d2a67a41321b3ef650b780a279a37855de08e)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:51 -07:00
Morten Rasmussen
bc7c939b3a UPSTREAM: sched/fair: Optimize find_idlest_cpu() when there is no choice
In the current find_idlest_group()/find_idlest_cpu() search we can end
up calling find_idlest_cpu() on a sched_group containing only one CPU.
Checking idle-states becomes pointless when there is no alternative, so
bail out instead.
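
The change amounts to an early exit at the top of find_idlest_cpu(),
roughly:

	/* Check if we have any choice: */
	if (group->group_weight == 1)
		return cpumask_first(sched_group_cpus(group));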

Change-Id: Ic62bf09b53a7984143ac2431aaa69c69b204cd56
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: linux-kernel@vger.kernel.org
Cc: mgalbraith@suse.de
Cc: vincent.guittot@linaro.org
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1466615004-3503-4-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit eaecf41f5abf80b63c8e025fcb9ee4aa203c3038)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:51 -07:00
Morten Rasmussen
3bb3d7e7d9 BACKPORT: sched/fair: Make the use of prev_cpu consistent in the wakeup path
In commit:

  ac66f54772 ("sched/numa: Introduce migrate_swap()")

select_task_rq() got a 'cpu' argument to enable overriding of prev_cpu
in special cases (NUMA task swapping).

However, the select_task_rq_fair() helper functions: wake_affine() and
select_idle_sibling(), still use task_cpu(p) directly to work out
prev_cpu, which leads to inconsistencies.

This patch passes prev_cpu (potentially overridden by NUMA code) into
the helper functions to ensure prev_cpu is indeed the same CPU
everywhere in the wakeup path.

Change-Id: I4951c4eead2e6045e4fb34e89f6cda17d881d4d7
cc: Ingo Molnar <mingo@redhat.com>
cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: linux-kernel@vger.kernel.org
Cc: mgalbraith@suse.de
Cc: vincent.guittot@linaro.org
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1466615004-3503-3-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 772bd008cd9a1d4e8ce566f2edcc61d1c28fcbe5)
[merged with Android/EAS wakeup path]
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:51 -07:00
Morten Rasmussen
986aa1d498 UPSTREAM: sched/core: Fix power to capacity renaming in comment
It seems that this one escaped Nico's renaming of cpu_power to
cpu_capacity a while back.

Change-Id: Ic2569d714db7b740f1df4ccc381ba8c1772c2793
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: linux-kernel@vger.kernel.org
Cc: mgalbraith@suse.de
Cc: vincent.guittot@linaro.org
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1466615004-3503-2-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit bd425d4bfc7a1a6064dbbadfbac9c7eec0e426ec)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:51 -07:00
Dietmar Eggemann
cb88574a68 Partial Revert: "WIP: sched: Add cpu capacity awareness to wakeup balancing"
Revert the changes in find_idlest_cpu() and find_idlest_group().

Keep the infrastructure bits which are used in following EAS patches.

Change-Id: Id516ca5f3e51b9a13db1ebb8de2df3aa25f9679b
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
2017-06-02 08:01:51 -07:00
Dietmar Eggemann
bd6ff3505f Revert "WIP: sched: Consider spare cpu capacity at task wake-up"
This reverts commit 75a9695b619741019363f889c99c97c7bb823797.

Change-Id: I846b21f2bdeb0b0ca30ad65683564ed07a429428
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
[ minor merge changes ]
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:51 -07:00
Viresh Kumar
5c015afebd FROM-LIST: cpufreq: schedutil: Redefine the rate_limit_us tunable
The rate_limit_us tunable is intended to reduce the possible overhead
from running the schedutil governor.  However, that overhead can be
divided into two separate parts: the governor computations and the
invocation of the scaling driver to set the CPU frequency.  The latter
is where the real overhead comes from.  The former is much less
expensive in terms of execution time, and running it every time the
governor callback is invoked by the scheduler, after the rate_limit_us
interval has passed since the last frequency update, would not be a
problem.

For this reason, redefine the rate_limit_us tunable so that it means the
minimum time that has to pass between two consecutive invocations of the
scaling driver by the schedutil governor (to set the CPU frequency).
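
A sketch of the resulting commit path: the governor computations always
run, and only an actual driver invocation re-arms the rate limit (field
names as in schedutil; the driver-call details are elided):

	static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
					unsigned int next_freq)
	{
		/*
		 * An unchanged frequency needs no driver call, so it must not
		 * restart the rate-limit window either.
		 */
		if (sg_policy->next_freq == next_freq)
			return;

		sg_policy->next_freq = next_freq;
		sg_policy->last_freq_update_time = time;
		/* ... fast switch, or kick the worker to call the driver ... */
	}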

Change-Id: Iced64116b826c25441ef537c27a3dabfcf81919e
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
[pulled from linux-pm linux-next https://patchwork.kernel.org/patch/9583949/ ]
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:51 -07:00
Steve Muckle
51b20b214f cpufreq: schedutil: add up/down frequency transition rate limits
The rate-limit tunable in the schedutil governor applies to transitions
to both lower and higher frequencies. On several platforms it is not the
ideal tunable though, as it is difficult to get the best
power/performance figures using the same limit in both directions.

It is common on mobile platforms with demanding user interfaces to want
to increase frequency rapidly, for example, but decrease it slowly.

One example is a case where we have short busy periods
followed by similar or longer idle periods. If we keep the rate-limit
high enough, we will not go to higher frequencies soon enough. On the
other hand, if we keep it too low, we will have too many frequency
transitions, as we will always reduce the frequency after the busy
period.

It would be very useful if we could set a low rate-limit while
increasing the frequency (so that we can respond to short busy periods
quickly) and a high rate-limit while decreasing frequency (so that we
don't reduce the frequency immediately after a short busy period, which
may avoid frequency transitions before the next busy period).

Implement separate up/down transition rate limits. Note that the
governor avoids frequency recalculations for a period equal to the
minimum of the up and down rate-limits. A global mutex is also defined
to protect updates to min_rate_limit_us via two separate sysfs files.
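
A sketch of the direction-aware check (field names as in the posted
patch; a "true" result holds the frequency where it is):

	static bool sugov_up_down_rate_limit(struct sugov_policy *sg_policy,
					     u64 time, unsigned int next_freq)
	{
		s64 delta_ns = time - sg_policy->last_freq_update_time;

		/* Hold back a frequency increase within the up rate limit. */
		if (next_freq > sg_policy->next_freq &&
		    delta_ns < sg_policy->up_rate_delay_ns)
			return true;

		/* Hold back a frequency decrease within the down rate limit. */
		if (next_freq < sg_policy->next_freq &&
		    delta_ns < sg_policy->down_rate_delay_ns)
			return true;

		return false;
	}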

Note that this wouldn't change the behavior of the schedutil governor
for platforms which wish to keep the same value for both up and down
rate limits.

This was tested with rt-app [1] on an ARM Exynos dual-A15 processor
platform.

Testcase: Run a SCHED_OTHER thread on CPU0 which emulates a work-load
of X ms of busy period out of a total period of Y ms, i.e. Y - X ms of
idle period. The values of X/Y taken were: 20/40, 20/50, 20/70, i.e.
idle periods of 20, 30 and 50 ms respectively. These were tested against
up/down rate limit values of 10/10 ms and 10/40 ms.

For every test we noticed a performance increase of 5-10% with the
schedutil governor, which was very much expected.

[Viresh]: Simplified user interface and introduced min_rate_limit_us +
	      mutex, rewrote commit log and included test results.

[1] https://github.com/scheduler-tools/rt-app/

Change-Id: I18720a83855b196b8e21dcdc8deae79131635b84
Signed-off-by: Steve Muckle <smuckle.linux@gmail.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
(applied from https://marc.info/?l=linux-kernel&m=147936011103832&w=2)
[trivial adaptations]
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
2017-06-02 08:01:51 -07:00
Juri Lelli
3c5c4e9724 trace/sched: add rq utilization signal for WALT
It is useful to be able to check the current capacity against the rq
utilization signal generated by WALT (to check how a cpufreq governor is
behaving, for example).

Add the rq utilization signal (same scale as capacity) to the
walt_update_task_ravg tracepoint.

Change-Id: I9aae3884a741d23ac494bef80d2303f107f135ce
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
2017-06-02 08:01:51 -07:00
Juri Lelli
f71d9f01c6 sched/cpufreq: make schedutil use WALT signal
If WALT is available and enabled, make schedutil governor use its
utilization signal.

Change-Id: I92bc37989447a76616e9bcc4e9e8616774fb9925
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
[we need to use boosted_cpu_util for schedutil, so make it
 not static]
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:51 -07:00
Steve Muckle
e5da6c11b2 sched: cpufreq: use rt_avg as estimate of required RT CPU capacity
A policy of going to fmax on any RT activity will be detrimental
for power on many platforms. Often RT accounts for only a small amount
of CPU activity so sending the CPU frequency to fmax is overkill. Worse
still, some platforms may not be able to even complete the CPU frequency
change before the RT activity has already completed.

Cpufreq governors have not treated RT activity this way in the past so
it is not part of the expected semantics of the RT scheduling class. The
DL class offers guarantees about task completion and could be used for
this purpose.

Modify the schedutil algorithm to instead use rt_avg as an estimate of
RT utilization of the CPU.

Based on previous work by Vincent Guittot <vincent.guittot@linaro.org>.

Change-Id: I1ed605a3e2512a94d34217a8e57c3fd97cca60be
Signed-off-by: Steve Muckle <smuckle@linaro.org>
2017-06-02 08:01:51 -07:00
Viresh Kumar
e2aa75a4c7 cpufreq: schedutil: move slow path from workqueue to SCHED_FIFO task
If slow path frequency changes are conducted in a SCHED_OTHER context
then they may be delayed for some amount of time, including
indefinitely, when real time or deadline activity is taking place.

Move the slow path to a real time kernel thread. In the future the
thread should be made SCHED_DEADLINE. The RT priority is arbitrarily set
to 50 for now.
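
Creating the slow-path thread then boils down to the following (where
sugov_work_fn stands in for the governor's actual work loop):

	struct sched_param param = { .sched_priority = 50 };
	struct task_struct *thread;

	thread = kthread_create(sugov_work_fn, sg_policy, "sugov:%d",
				cpumask_first(policy->related_cpus));
	if (IS_ERR(thread))
		return PTR_ERR(thread);

	/* RT priority 50, arbitrarily chosen; SCHED_DEADLINE is future work. */
	sched_setscheduler_nocheck(thread, SCHED_FIFO, &param);
	wake_up_process(thread);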

Hackbench results on ARM Exynos, dual core A15 platform for 10
iterations:

$ hackbench -s 100 -l 100 -g 10 -f 20

Before			After
---------------------------------
1.808			1.603
1.847			1.251
2.229			1.590
1.952			1.600
1.947			1.257
1.925			1.627
2.694			1.620
1.258			1.621
1.919			1.632
1.250			1.240

Average:

1.8829			1.5041

Based on initial work by Steve Muckle.

Change-Id: I8f53037e94f353960c6d10abf07822d671631ef7
Signed-off-by: Steve Muckle <smuckle.linux@gmail.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
(cherry picked from 02a7b1ee3baa)
[adapt to the 3.18 kthread interface]
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
2017-06-02 08:01:51 -07:00
Petr Mladek
78213729a7 BACKPORT: kthread: allow to cancel kthread work
We are going to use kthread workers more widely and sometimes we will need
to make sure that the work is neither pending nor running.

This patch implements cancel_*_sync() operations as inspired by
workqueues.  We are synchronized against the other operations via the
worker lock, and we use del_timer_sync() and a counter to count parallel
cancel operations, so the implementation can be simpler.

First, we check if a worker is assigned.  If not, the work has never
been queued after it was initialized.

Second, we take the worker lock.  It must be the right one.  The work must
not be assigned to another worker unless it is initialized in between.

Third, we try to cancel the timer when it exists.  The timer is deleted
synchronously to make sure that the timer callback is not running.  We
need to temporarily release the worker->lock to avoid a possible deadlock
with the callback.  In the meantime, we set the work->canceling counter
to avoid any queuing.

Fourth, we try to remove the work from a worker list. It might be
the list of either normal or delayed works.

Fifth, if the work is running, we call kthread_flush_work().  It might
take an arbitrary time.  We need to release the worker-lock again.  In the
meantime, we again block any queuing by the canceling counter.

As already mentioned, the check for a pending kthread work is done under a
lock.  Compared with workqueues, we do not need to fight for a single
PENDING bit to block other operations.  Therefore we do not suffer from
the thundering-herd problem, and all parallel canceling jobs may use
kthread_flush_work().  Any queuing is blocked until the counter reaches
zero.
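
Typical usage of the new operation, with the kthread worker API names as
they ended up upstream (my_work_fn is a placeholder):

	static struct kthread_worker worker;
	static struct kthread_work work;

	static void my_work_fn(struct kthread_work *w)
	{
		/* ... the deferred work ... */
	}

	/* Setup: initialize the worker and spawn its thread. */
	struct task_struct *task;

	kthread_init_worker(&worker);
	task = kthread_run(kthread_worker_fn, &worker, "my_worker");
	kthread_init_work(&work, my_work_fn);
	kthread_queue_work(&worker, &work);

	/* Teardown: guarantee the work is neither pending nor running. */
	kthread_cancel_work_sync(&work);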

Change-Id: I8a8ece0f93c828f311d0ad5c88d80db2388e4808
Link: http://lkml.kernel.org/r/1470754545-17632-10-git-send-email-pmladek@suse.com
Signed-off-by: Petr Mladek <pmladek@suse.com>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Borislav Petkov <bp@suse.de>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry-picked from 37be45d49dec2a411e29d50c9597cfe8184b5645)
[major changes to the original patch while cherry-picking; only rebased
the sync variant]
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
2017-06-02 08:01:50 -07:00
Steve Muckle
ca7b7d3c99 sched/cpufreq: fix tunables for schedfreq governor
The schedfreq governor does not currently handle cpufreq drivers which
use a global set of tunables (!have_governor_per_policy).

For example on x86 and using the acpi cpufreq driver, doing this

  cat /sys/devices/system/cpu/cpufreq/sched/up_throttle_nsec

will result in a bad pointer access.

Update the tunable code using the upstream schedutil tunable code by
Rafael Wysocki as a guide.

Includes a partial backport of the reorganized cpufreq tunable
infrastructure.
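
For reference, the distinction the tunable code has to honour is where
the attributes' parent kobject lives, as in cpufreq's own helper:

	struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy)
	{
		if (policy && have_governor_per_policy())
			return &policy->kobj;

		/* Global tunables hang off a single shared kobject. */
		return cpufreq_global_kobject;
	}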

Change-Id: I7e6f8de1dac297077ad43f37dd2f6ddbfe921c98
Signed-off-by: Steve Muckle <smuckle@linaro.org>
[fixed cherry-pick issue]
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
[fixed cherry-pick issue]
Signed-off-by: Thierry Strudel <tstrudel@google.com>
2017-06-02 08:01:50 -07:00
Steve Muckle
6bc6115c16 BACKPORT: cpufreq: schedutil: New governor based on scheduler utilization data
Add a new cpufreq scaling governor, called "schedutil", that uses
scheduler-provided CPU utilization information as input for making
its decisions.

Doing that is possible after commit 34e2c55 (cpufreq: Add
mechanism for registering utilization update callbacks) that
introduced cpufreq_update_util() called by the scheduler on
utilization changes (from CFS) and RT/DL task status updates.
In particular, CPU frequency scaling decisions may be based on
the utilization data passed to cpufreq_update_util() by CFS.

The new governor is relatively simple.

The frequency selection formula used by it depends on whether or not
the utilization is frequency-invariant.  In the frequency-invariant
case the new CPU frequency is given by

	next_freq = 1.25 * max_freq * util / max

where util and max are the last two arguments of cpufreq_update_util().
In turn, if util is not frequency-invariant, the maximum frequency in
the above formula is replaced with the current frequency of the CPU:

	next_freq = 1.25 * curr_freq * util / max

The coefficient 1.25 corresponds to the frequency tipping point at
(util / max) = 0.8.
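
In code the 1.25 factor is expressed as freq + (freq >> 2); a condensed
sketch of the selection:

	static unsigned int get_next_freq(struct cpufreq_policy *policy,
					  unsigned long util, unsigned long max)
	{
		unsigned int freq = arch_scale_freq_invariant() ?
					policy->cpuinfo.max_freq : policy->cur;

		/* next_freq = 1.25 * freq * util / max */
		return (freq + (freq >> 2)) * util / max;
	}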

All of the computations are carried out in the utilization update
handlers provided by the new governor.  One of those handlers is
used for cpufreq policies shared between multiple CPUs and the other
one is for policies with one CPU only (and therefore it doesn't need
to use any extra synchronization means).

The governor supports fast frequency switching if that is supported
by the cpufreq driver in use and possible for the given policy.
In the fast switching case, all operations of the governor take
place in its utilization update handlers.  If fast switching cannot
be used, the frequency switch operations are carried out with the
help of a work item which only calls __cpufreq_driver_target()
(under a mutex) to trigger a frequency update (to a value already
computed beforehand in one of the utilization update handlers).

Currently, the governor treats all of the RT and DL tasks as
"unknown utilization" and sets the frequency to the allowed
maximum when updated from the RT or DL sched classes.  That
heavy-handed approach should be replaced with something more
subtle and specifically targeted at RT and DL tasks.

The governor shares some tunables management code with the
"ondemand" and "conservative" governors and uses some common
definitions from cpufreq_governor.h, but apart from that it
is stand-alone.

Change-Id: I03876e622768e4b3ee4dc28682af7cce771f2f4c
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
(cherry-picked from 9bdcb44e391da5c41b98573bf0305a0e0b1c9569)
[ Backport the schedutil cpufreq governor from 4.9. Some cpufreq
  tunable infrastructure as well as the resolve_freq API is also
  backported as those are dependencies]
Signed-off-by: Steve Muckle <smuckle@linaro.org>
[trivial cherry-picking fixes]
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
[fixed default governor machinery]
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:50 -07:00
Steve Muckle
f02702dcf2 sched: backport cpufreq hooks from 4.9-rc4
The scheduler cpufreq hooks are required by the schedutil cpufreq
governor.

Change-Id: Ied6c46262bb33b7e81bbb3d3d2761124e0c676b7
Signed-off-by: Steve Muckle <smuckle@linaro.org>
[trivial cherry-picking fixes]
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:50 -07:00
Ganesh Mahendran
5df19f969e ANDROID: Kconfig: add depends for UID_SYS_STATS
uid_io depends on TASK_XACCT and TASK_IO_ACCOUNTING,
so add the dependencies in Kconfig before compiling the code.

Change-Id: Ie6bf57ec7c2eceffadf4da0fc2aca001ce10c36e
Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
2017-06-02 01:45:14 +00:00
Dmitry Torokhov
df6e5af086 ANDROID: hid: uhid: implement refcount for open and close
Fix concurrent open and close activity sending a UHID_CLOSE while
some consumers still have the device open.

This is a temporary solution implementing reference counts on device
open and close calls, absent a facility for this in the HID core, which
is likely to appear in the future.

[toddpoynor@google.com: commit text]
Bug: 38448648
Signed-off-by: Todd Poynor <toddpoynor@google.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Change-Id: I57413e42ec961a960a8ddc4942228df22c730d80
2017-06-01 15:23:02 -07:00
Eric Biggers
7d16e880c6 Revert "ext4: require encryption feature for EXT4_IOC_SET_ENCRYPTION_POLICY"
This reverts commit e2968fb8e7.

For various reasons, we've had to start enforcing upstream that ext4
encryption can only be used if the filesystem superblock has the
EXT4_FEATURE_INCOMPAT_ENCRYPT flag set, as was the intended design.
Unfortunately, Android isn't ready for this quite yet, since its
userspace still needs to be updated to set the flag at mkfs time, or
else fix it later with tune2fs.  It will need some more time to be fixed
properly, so for now to avoid breaking some devices, revert the kernel
change.

Bug: 36231741
Signed-off-by: Eric Biggers <ebiggers@google.com>
Change-Id: I30bd54afb68dbaf9801f8954099dffa90a2f8df1
2017-05-30 21:17:57 +00:00
Daniel Rosenberg
d73d07673e ANDROID: mnt: Fix next_descendent
next_descendent did not properly handle the case
where the initial mount had no slaves. In this case,
we would look for the next slave, but since we don't
have a master, the check for wrapping around to the
start of the list will always fail. Instead, we check
for this case, and ensure that we end the iteration
when we come back to the root.

Signed-off-by: Daniel Rosenberg <drosen@google.com>
Bug: 62094374
Change-Id: I43dfcee041aa3730cb4b9a1161418974ef84812e
2017-05-29 19:11:06 -07:00
Greg Kroah-Hartman
9bc462220d This is the 4.4.70 stable release
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAlkm0zAACgkQONu9yGCS
 aT5QnxAAh9uZYFJtQ7wYngD7cQcDH1KVztqEYxCP5OtxzAZBrSNBufLdhKBbc1ZP
 C04Mo+FzzNiJtBwkmlOqYaEPYUSx/uwCEk9mNX85VtchIhKBrwWF7GxkeXCPs6e5
 yP5TUXmxbbSp3qM4q2Z4XSW8eEPZ2l3zoy0fkjz2kS02e4RW0yQ34dvzw0BG2urr
 +9ocyVjDBoU3QNKyVw3fd1AltKesSZK0fa2vEO+TOTW6Bm3xD4egCJdOzu9saUwK
 hfSKXsJ0/pf1r1iyfz2foR/Hi3i4j6vRqnneyqozT7nxEJEuBQ3B5WhnsbDfzrXu
 +CY23KBkDkQ1RBngmtTQd3ABHEN1E2StpBImG5RUr+5giV6/e4rdz0/HWGMvCvAz
 iWqXdgZNdCnc96HPEWaDGUKxndCxsiaJOhgZwW2zm/0drVWRE+vjsOmFLyUp2Ky1
 1vnKfwlvTFU4xjQ5H44AuuSHQsv+GNEtPPIHrbBv/wg90/2VuF0aYuNYjHSsc4Ca
 3YM53S6/sjQqmsKixWboax8Kh2wRrEuFbqSFQV64JjFpGau61JQFMtRNl4+FFXzm
 Cm+26Fan4Wtyo5zB9xnBZbDwCOXqwTXQYUP2SejtObq+Uk2tXxF05emeta9pURF3
 vdgv6N0cTPm4K3VZyBZvj8JitEr2OEaIxoUqE2BXkA1MPmbqOoI=
 =Z1no
 -----END PGP SIGNATURE-----

Merge 4.4.70 into android-4.4

Changes in 4.4.70
	usb: misc: legousbtower: Fix buffers on stack
	usb: misc: legousbtower: Fix memory leak
	USB: ene_usb6250: fix DMA to the stack
	watchdog: pcwd_usb: fix NULL-deref at probe
	char: lp: fix possible integer overflow in lp_setup()
	USB: core: replace %p with %pK
	ARM: tegra: paz00: Mark panel regulator as enabled on boot
	tpm_crb: check for bad response size
	infiniband: call ipv6 route lookup via the stub interface
	dm btree: fix for dm_btree_find_lowest_key()
	dm raid: select the Kconfig option CONFIG_MD_RAID0
	dm bufio: avoid a possible ABBA deadlock
	dm bufio: check new buffer allocation watermark every 30 seconds
	dm cache metadata: fail operations if fail_io mode has been established
	dm bufio: make the parameter "retain_bytes" unsigned long
	dm thin metadata: call precommit before saving the roots
	dm space map disk: fix some book keeping in the disk space map
	md: update slab_cache before releasing new stripes when stripes resizing
	rtlwifi: rtl8821ae: setup 8812ae RFE according to device type
	mwifiex: pcie: fix cmd_buf use-after-free in remove/reset
	ima: accept previously set IMA_NEW_FILE
	KVM: x86: Fix load damaged SSEx MXCSR register
	KVM: X86: Fix read out-of-bounds vulnerability in kvm pio emulation
	regulator: tps65023: Fix inverted core enable logic.
	s390/kdump: Add final note
	s390/cputime: fix incorrect system time
	ath9k_htc: Add support of AirTies 1eda:2315 AR9271 device
	ath9k_htc: fix NULL-deref at probe
	drm/amdgpu: Avoid overflows/divide-by-zero in latency_watermark calculations.
	drm/amdgpu: Make display watermark calculations more accurate
	drm/nouveau/therm: remove ineffective workarounds for alarm bugs
	drm/nouveau/tmr: ack interrupt before processing alarms
	drm/nouveau/tmr: fix corruption of the pending list when rescheduling an alarm
	drm/nouveau/tmr: avoid processing completed alarms when adding a new one
	drm/nouveau/tmr: handle races with hw when updating the next alarm time
	cdc-acm: fix possible invalid access when processing notification
	proc: Fix unbalanced hard link numbers
	of: fix sparse warning in of_pci_range_parser_one
	iio: dac: ad7303: fix channel description
	pid_ns: Sleep in TASK_INTERRUPTIBLE in zap_pid_ns_processes
	pid_ns: Fix race between setns'ed fork() and zap_pid_ns_processes()
	USB: serial: ftdi_sio: fix setting latency for unprivileged users
	USB: serial: ftdi_sio: add Olimex ARM-USB-TINY(H) PIDs
	ext4 crypto: don't let data integrity writebacks fail with ENOMEM
	ext4 crypto: fix some error handling
	net: qmi_wwan: Add SIMCom 7230E
	fscrypt: fix context consistency check when key(s) unavailable
	f2fs: check entire encrypted bigname when finding a dentry
	fscrypt: avoid collisions when presenting long encrypted filenames
	sched/fair: Do not announce throttled next buddy in dequeue_task_fair()
	sched/fair: Initialize throttle_count for new task-groups lazily
	usb: host: xhci-plat: propagate return value of platform_get_irq()
	xhci: apply PME_STUCK_QUIRK and MISSING_CAS quirk for Denverton
	usb: host: xhci-mem: allocate zeroed Scratchpad Buffer
	net: irda: irda-usb: fix firmware name on big-endian hosts
	usbvision: fix NULL-deref at probe
	mceusb: fix NULL-deref at probe
	ttusb2: limit messages to buffer size
	usb: musb: tusb6010_omap: Do not reset the other direction's packet size
	USB: iowarrior: fix info ioctl on big-endian hosts
	usb: serial: option: add Telit ME910 support
	USB: serial: qcserial: add more Lenovo EM74xx device IDs
	USB: serial: mct_u232: fix big-endian baud-rate handling
	USB: serial: io_ti: fix div-by-zero in set_termios
	USB: hub: fix SS hub-descriptor handling
	USB: hub: fix non-SS hub-descriptor handling
	ipx: call ipxitf_put() in ioctl error path
	iio: proximity: as3935: fix as3935_write
	ceph: fix recursion between ceph_set_acl() and __ceph_setattr()
	gspca: konica: add missing endpoint sanity check
	s5p-mfc: Fix unbalanced call to clock management
	dib0700: fix NULL-deref at probe
	zr364xx: enforce minimum size when reading header
	dvb-frontends/cxd2841er: define symbol_rate_min/max in T/C fe-ops
	cx231xx-audio: fix init error path
	cx231xx-audio: fix NULL-deref at probe
	cx231xx-cards: fix NULL-deref at probe
	powerpc/book3s/mce: Move add_taint() later in virtual mode
	powerpc/pseries: Fix of_node_put() underflow during DLPAR remove
	powerpc/64e: Fix hang when debugging programs with relocated kernel
	ARM: dts: at91: sama5d3_xplained: fix ADC vref
	ARM: dts: at91: sama5d3_xplained: not all ADC channels are available
	arm64: xchg: hazard against entire exchange variable
	arm64: uaccess: ensure extension of access_ok() addr
	arm64: documentation: document tagged pointer stack constraints
	xc2028: Fix use-after-free bug properly
	mm/huge_memory.c: respect FOLL_FORCE/FOLL_COW for thp
	staging: rtl8192e: fix 2 byte alignment of register BSSIDR.
	staging: rtl8192e: rtl92e_get_eeprom_size Fix read size of EPROM_CMD.
	iommu/vt-d: Flush the IOTLB to get rid of the initial kdump mappings
	metag/uaccess: Fix access_ok()
	metag/uaccess: Check access_ok in strncpy_from_user
	uwb: fix device quirk on big-endian hosts
	genirq: Fix chained interrupt data ordering
	osf_wait4(): fix infoleak
	tracing/kprobes: Enforce kprobes teardown after testing
	PCI: Fix pci_mmap_fits() for HAVE_PCI_RESOURCE_TO_USER platforms
	PCI: Freeze PME scan before suspending devices
	drm/edid: Add 10 bpc quirk for LGD 764 panel in HP zBook 17 G2
	nfsd: encoders mustn't use uninitialized values in error cases
	drivers: char: mem: Check for address space wraparound with mmap()
	Linux 4.4.70

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
2017-05-25 17:31:28 +02:00
Greg Kroah-Hartman
b409ba3b05 Linux 4.4.70 2017-05-25 14:50:50 +02:00
Julius Werner
837bfdb413 drivers: char: mem: Check for address space wraparound with mmap()
commit b299cde245b0b76c977f4291162cf668e087b408 upstream.

/dev/mem currently allows mmap() mappings that wrap around the end of
the physical address space, which should probably be illegal. It
circumvents the existing STRICT_DEVMEM permission check because the loop
immediately terminates (as the start address is already higher than the
end address). On the x86_64 architecture it will then cause a panic
(from the BUG(start >= end) in arch/x86/mm/pat.c:reserve_memtype()).

This patch adds an explicit check to make sure offset + size will not
wrap around in the physical address type.
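
The added check in mmap_mem(), roughly:

	size_t size = vma->vm_end - vma->vm_start;
	phys_addr_t offset = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;

	/* It's illegal to wrap around the end of the physical address space. */
	if (offset + (phys_addr_t)size - 1 < offset)
		return -EINVAL;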

Signed-off-by: Julius Werner <jwerner@chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-25 14:30:18 +02:00
J. Bruce Fields
52cf247694 nfsd: encoders mustn't use uninitialized values in error cases
commit f961e3f2acae94b727380c0b74e2d3954d0edf79 upstream.

In error cases, lgp->lg_layout_type may be out of bounds; so we
shouldn't be using it until after the check of nfserr.

This was seen to crash nfsd threads when the server receives a LAYOUTGET
request with a large layout type.

GETDEVICEINFO has the same problem.

Reported-by: Ari Kauppi <Ari.Kauppi@synopsys.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-25 14:30:18 +02:00
Mario Kleiner
da922dc48d drm/edid: Add 10 bpc quirk for LGD 764 panel in HP zBook 17 G2
commit e345da82bd6bdfa8492f80b3ce4370acfd868d95 upstream.

The builtin eDP panel in the HP zBook 17 G2 supports 10 bpc,
as advertised by the laptop's product specs and verified via
injecting a fixed edid + photometer measurements, but the edid
reports unknown depth, so drivers fall back to 6 bpc.

Add a quirk to get the full 10 bpc.

Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Acked-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1492787108-23959-1-git-send-email-mario.kleiner.de@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-25 14:30:17 +02:00
Lukas Wunner
bc428e9407 PCI: Freeze PME scan before suspending devices
commit ea00353f36b64375518662a8ad15e39218a1f324 upstream.

Laurent Pinchart reported that the Renesas R-Car H2 Lager board (r8a7790)
crashes during suspend tests.  Geert Uytterhoeven managed to reproduce the
issue on an M2-W Koelsch board (r8a7791):

  It occurs when the PME scan runs, once per second.  During PME scan, the
  PCI host bridge (rcar-pci) registers are accessed while its module clock
  has already been disabled, leading to the crash.

One reproducer is to configure s2ram to use "s2idle" instead of "deep"
suspend:

  # echo 0 > /sys/module/printk/parameters/console_suspend
  # echo s2idle > /sys/power/mem_sleep
  # echo mem > /sys/power/state

Another reproducer is to write either "platform" or "processors" to
/sys/power/pm_test.  It does not happen (or is less likely to) during
full system suspend ("core" or "none") because system suspend also
disables timers, and thus the workqueue handling PME scans no longer
runs.  Geert believes the issue may still happen in the small window
between disabling module clocks and disabling timers:

  # echo 0 > /sys/module/printk/parameters/console_suspend
  # echo platform > /sys/power/pm_test    # Or "processors"
  # echo mem > /sys/power/state

(Make sure CONFIG_PCI_RCAR_GEN2 and CONFIG_USB_OHCI_HCD_PCI are enabled.)

Rafael Wysocki agrees that PME scans should be suspended before the host
bridge registers become inaccessible.  To that end, queue the task on a
workqueue that gets frozen before devices suspend.

Rafael notes however that as a result, some wakeup events may be missed if
they are delivered via PME from a device without working IRQ (which hence
must be polled) and occur after the workqueue has been frozen.  If that
turns out to be an issue in practice, it may be possible to solve it by
calling pci_pme_list_scan() once directly from one of the host bridge's
pm_ops callbacks.

Stacktrace for posterity:

  PM: Syncing filesystems ... [   38.566237] done.
  PM: Preparing system for sleep (mem)
  Freezing user space processes ... [   38.579813] (elapsed 0.001 seconds) done.
  Freezing remaining freezable tasks ... (elapsed 0.001 seconds) done.
  PM: Suspending system (mem)
  PM: suspend of devices complete after 152.456 msecs
  PM: late suspend of devices complete after 2.809 msecs
  PM: noirq suspend of devices complete after 29.863 msecs
  suspend debug: Waiting for 5 second(s).
  Unhandled fault: asynchronous external abort (0x1211) at 0x00000000
  pgd = c0003000
  [00000000] *pgd=80000040004003, *pmd=00000000
  Internal error: : 1211 [#1] SMP ARM
  Modules linked in:
  CPU: 1 PID: 20 Comm: kworker/1:1 Not tainted
  4.9.0-rc1-koelsch-00011-g68db9bc814362e7f #3383
  Hardware name: Generic R8A7791 (Flattened Device Tree)
  Workqueue: events pci_pme_list_scan
  task: eb56e140 task.stack: eb58e000
  PC is at pci_generic_config_read+0x64/0x6c
  LR is at rcar_pci_cfg_base+0x64/0x84
  pc : [<c041d7b4>]    lr : [<c04309a0>]    psr: 600d0093
  sp : eb58fe98  ip : c041d750  fp : 00000008
  r10: c0e2283c  r9 : 00000000  r8 : 600d0013
  r7 : 00000008  r6 : eb58fed6  r5 : 00000002  r4 : eb58feb4
  r3 : 00000000  r2 : 00000044  r1 : 00000008  r0 : 00000000
  Flags: nZCv  IRQs off  FIQs on  Mode SVC_32  ISA ARM  Segment user
  Control: 30c5387d  Table: 6a9f6c80  DAC: 55555555
  Process kworker/1:1 (pid: 20, stack limit = 0xeb58e210)
  Stack: (0xeb58fe98 to 0xeb590000)
  fe80:                                                       00000002 00000044
  fea0: eb6f5800 c041d9b0 eb58feb4 00000008 00000044 00000000 eb78a000 eb78a000
  fec0: 00000044 00000000 eb9aff00 c0424bf0 eb78a000 00000000 eb78a000 c0e22830
  fee0: ea8a6fc0 c0424c5c eaae79c0 c0424ce0 eb55f380 c0e22838 eb9a9800 c0235fbc
  ff00: eb55f380 c0e22838 eb55f380 eb9a9800 eb9a9800 eb58e000 eb9a9824 c0e02100
  ff20: eb55f398 c02366c4 eb56e140 eb5631c0 00000000 eb55f380 c023641c 00000000
  ff40: 00000000 00000000 00000000 c023a928 cd105598 00000000 40506a34 eb55f380
  ff60: 00000000 00000000 dead4ead ffffffff ffffffff eb58ff74 eb58ff74 00000000
  ff80: 00000000 dead4ead ffffffff ffffffff eb58ff90 eb58ff90 eb58ffac eb5631c0
  ffa0: c023a844 00000000 00000000 c0206d68 00000000 00000000 00000000 00000000
  ffc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  ffe0: 00000000 00000000 00000000 00000000 00000013 00000000 3a81336c 10ccd1dd
  [<c041d7b4>] (pci_generic_config_read) from [<c041d9b0>]
  (pci_bus_read_config_word+0x58/0x80)
  [<c041d9b0>] (pci_bus_read_config_word) from [<c0424bf0>]
  (pci_check_pme_status+0x34/0x78)
  [<c0424bf0>] (pci_check_pme_status) from [<c0424c5c>] (pci_pme_wakeup+0x28/0x54)
  [<c0424c5c>] (pci_pme_wakeup) from [<c0424ce0>] (pci_pme_list_scan+0x58/0xb4)
  [<c0424ce0>] (pci_pme_list_scan) from [<c0235fbc>]
  (process_one_work+0x1bc/0x308)
  [<c0235fbc>] (process_one_work) from [<c02366c4>] (worker_thread+0x2a8/0x3e0)
  [<c02366c4>] (worker_thread) from [<c023a928>] (kthread+0xe4/0xfc)
  [<c023a928>] (kthread) from [<c0206d68>] (ret_from_fork+0x14/0x2c)
  Code: ea000000 e5903000 f57ff04f e3a00000 (e5843000)
  ---[ end trace 667d43ba3aa9e589 ]---

Fixes: df17e62e5b ("PCI: Add support for polling PME state on suspended legacy PCI devices")
Reported-and-tested-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Reported-and-tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Mika Westerberg <mika.westerberg@linux.intel.com>
Cc: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Cc: Simon Horman <horms+renesas@verge.net.au>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Matthew Garrett <mjg59@srcf.ucam.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-25 14:30:17 +02:00
David Woodhouse
5f36c8b4e4 PCI: Fix pci_mmap_fits() for HAVE_PCI_RESOURCE_TO_USER platforms
commit 6bccc7f426abd640f08d8c75fb22f99483f201b4 upstream.

In the PCI_MMAP_PROCFS case, the address being passed by the user is a
'user visible' resource address based on the bus window, and not the
actual contents of the resource, so that is what we need to be checking
it against.
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-25 14:30:17 +02:00
Thomas Gleixner
6384f782a6 tracing/kprobes: Enforce kprobes teardown after testing
commit 30e7d894c1478c88d50ce94ddcdbd7f9763d9cdd upstream.

Enabling the tracer selftest occasionally triggers the warning in
text_poke(), which warns when the to-be-modified page is not marked
reserved.

The reason is that the tracer selftest installs kprobes on functions marked
__init for testing. These probes are removed after the tests, but that
removal schedules the delayed kprobes_optimizer work, which will do the
actual text poke. If the work is executed after the init text is freed,
then the warning triggers. The bug can be reproduced reliably when the work
delay is increased.

Flush the optimizer work and wait for the optimizing/unoptimizing lists to
become empty before returning from the kprobes tracer selftest. That
ensures that all operations which were queued due to the probes removal
have completed.

Link: http://lkml.kernel.org/r/20170516094802.76a468bb@gandalf.local.home

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Fixes: 6274de498 ("kprobes: Support delayed unoptimizing")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-25 14:30:17 +02:00
Al Viro
d5fb96b955 osf_wait4(): fix infoleak
commit a8c39544a6eb2093c04afd5005b6192bd0e880c6 upstream.

failing sys_wait4() won't fill struct rusage...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-25 14:30:17 +02:00
Thomas Gleixner
e07db0d720 genirq: Fix chained interrupt data ordering
commit 2c4569ca26986d18243f282dd727da27e9adae4c upstream.

irq_set_chained_handler_and_data() sets up the chained interrupt and then
stores the handler data.

That's racy against an immediate interrupt which gets handled before the
store of the handler data happened. The handler will dereference a NULL
pointer and crash.

Cure it by storing handler data before installing the chained handler.
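
The fix is a reordering inside irq_set_chained_handler_and_data(),
roughly:

	/*
	 * Store the handler data first: the chained handler may fire as
	 * soon as it is installed.
	 */
	desc->irq_common_data.handler_data = data;
	__irq_do_set_handler(desc, handle, 1, NULL);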

Reported-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-25 14:30:17 +02:00
Johan Hovold
1736f2b3de uwb: fix device quirk on big-endian hosts
commit 41318a2b82f5d5fe1fb408f6d6e0b22aa557111d upstream.

Add missing endianness conversion when using the USB device-descriptor
idProduct field to apply a hardware quirk.
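
The pattern of the fix (QUIRK_PRODUCT_ID and apply_quirk() stand in for
the driver's actual constant and handling):

	/* idProduct is little-endian in the descriptor; convert before use. */
	if (le16_to_cpu(udev->descriptor.idProduct) == QUIRK_PRODUCT_ID)
		apply_quirk(udev);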

Fixes: 1ba47da527 ("uwb: add the i1480 DFU driver")
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-25 14:30:17 +02:00
James Hogan
ca19dd15e7 metag/uaccess: Check access_ok in strncpy_from_user
commit 3a158a62da0673db918b53ac1440845a5b64fd90 upstream.

The metag implementation of strncpy_from_user() doesn't validate the src
pointer, which could allow reading of arbitrary kernel memory. Add a
short access_ok() check to prevent that.

It's still possible for it to read across the user/kernel boundary, but
it will invariably reach a NUL character after only 9 bytes, leaking
only a static kernel address being loaded into D0Re0 at the beginning of
__start, which is acceptable for the immediate fix.
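
The added check amounts to validating the first byte of the source
before the copy:

	long strncpy_from_user(char *dst, const char __user *src, long count)
	{
		if (!access_ok(VERIFY_READ, src, 1))
			return -EFAULT;

		return __strncpy_from_user(dst, src, count);
	}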

Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-25 14:30:16 +02:00
James Hogan
2d9b2e7808 metag/uaccess: Fix access_ok()
commit 8a8b56638bcac4e64cccc88bf95a0f9f4b19a2fb upstream.

The __user_bad() macro used by access_ok() has a few corner cases
noticed by Al Viro where it doesn't behave correctly:

 - The kernel range check has off by 1 errors which permit access to the
   first and last byte of the kernel mapped range.

 - The kernel range check ends at LINCORE_BASE rather than
   META_MEMORY_LIMIT, which is ineffective when the kernel is in global
   space (an extremely uncommon configuration).

There are a couple of other shortcomings here too:

 - Access to the whole of the other address space is permitted (i.e. the
   global half of the address space when the kernel is in local space).
   This isn't ideal as it could theoretically still contain privileged
   mappings set up by the bootloader.

 - The size argument is unused, permitting user copies which start on
   valid pages at the end of the user address range and cross the
   boundary into the kernel address space (e.g. addr = 0x3ffffff0, size
   > 0x10).

It isn't very convenient to add size checks when disallowing certain
regions, and it seems far safer to be sure and explicit about what
userland is able to access, so invert the logic to allow certain regions
instead, and fix the off by 1 errors and missing size checks. This also
allows the get_fs() == KERNEL_DS check to be more easily optimised into
the user address range case.

We now have 3 such allowed regions:

 - The user address range (incorporating the get_fs() == KERNEL_DS
   check).

 - NULL (some kernel code expects this to work, and we'll always catch
   the fault anyway).

 - The core code memory region.

Fixes: 373cd784d0 ("metag: Memory handling")
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-25 14:30:16 +02:00
KarimAllah Ahmed
98d5e84363 iommu/vt-d: Flush the IOTLB to get rid of the initial kdump mappings
commit f73a7eee900e95404b61408a23a1df5c5811704c upstream.

Ever since commit 091d42e43d ("iommu/vt-d: Copy translation tables from
old kernel") the kdump kernel copies the IOMMU context tables from the
previous kernel. Each device's mappings will be destroyed once the driver
for the respective device takes over.

This unfortunately breaks the workflow of mapping and unmapping a new
context to the IOMMU. The mapping function assumes that either:

1) Unmapping did the proper IOMMU flushing, and it only ever flushes if
   the IOMMU unit supports caching invalid entries.
2) The system just booted and the initialization code took care of
   flushing all IOMMU caches.

This assumption is not true for the kdump kernel since the context
tables have been copied from the previous kernel and translations could
have been cached ever since. So make sure to flush the IOTLB as well
when we destroy these old copied mappings.

Cc: Joerg Roedel <joro@8bytes.org>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Anthony Liguori <aliguori@amazon.com>
Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Fixes: 091d42e43d ("iommu/vt-d: Copy translation tables from old kernel")
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-25 14:30:16 +02:00
Malcolm Priestley
cb89b1f9df staging: rtl8192e: rtl92e_get_eeprom_size Fix read size of EPROM_CMD.
commit 90be652c9f157d44b9c2803f902a8839796c090d upstream.

EPROM_CMD is 2-byte aligned on the PCI map, so calling rtl92e_readl
will return invalid data; use rtl92e_readw instead.

Without this, the device is unable to select the right eeprom type.

Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-25 14:30:16 +02:00
Malcolm Priestley
427907e599 staging: rtl8192e: fix 2 byte alignment of register BSSIDR.
commit 867510bde14e7b7fc6dd0f50b48f6753cfbd227a upstream.

BSSIDR has two-byte alignment on the PCI ioremap; correct the write
by switching to 16-bit writes.

This fixes a problem where device association fails because
the filter is not set correctly.

Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-05-25 14:30:16 +02:00