Commit graph

563227 commits

Morten Rasmussen
19a5ebe13d sched, cpuidle: Track cpuidle state index in the scheduler
The idle-state of each cpu is currently pointed to by rq->idle_state, but
there isn't any information in struct cpuidle_state that can be used to
look up the idle-state energy model data stored in struct
sched_group_energy. For this purpose it is necessary to store the idle-state
index as well. Ideally, the idle-state data should be unified.
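
A minimal sketch of what storing the index could look like (the field and
helper names are assumptions, mirroring the existing rq->idle_state
accessors):

    #ifdef CONFIG_CPU_IDLE
    /* Keep an index alongside rq->idle_state so the idle-state energy
     * model data can be looked up directly (hypothetical field). */
    static inline void idle_set_state_idx(struct rq *rq, int idle_state_idx)
    {
        rq->idle_state_idx = idle_state_idx;
    }

    static inline int idle_get_state_idx(struct rq *rq)
    {
        return rq->idle_state_idx;
    }
    #endif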

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2016-05-10 16:49:51 +08:00
Morten Rasmussen
1b5ec5d8ab sched: Add over-utilization/tipping point indicator
Energy-aware scheduling is only meant to be active while the system is
_not_ over-utilized. That is, there are spare cycles available to shift
tasks around based on their actual utilization to get a more
energy-efficient task distribution without depriving any tasks. When
above the tipping point task placement is done the traditional way based
on load_avg, spreading the tasks across as many cpus as possible based
on priority scaled load to preserve smp_nice. Below the tipping point we
want to use util_avg instead. We need to define a criteria for when we
make the switch.

The util_avg for each cpu converges towards 100% (1024) regardless of
how many additional tasks we may put on it. If we define
over-utilized as:
over-utilized as:

sum_{cpus}(rq.cfs.avg.util_avg) + margin > sum_{cpus}(rq.capacity)

some individual cpus may be over-utilized running multiple tasks even
when the above condition is false. That should be okay as long as we try
to spread the tasks out to avoid per-cpu over-utilization as much as
possible and if all tasks have the _same_ priority. If the latter isn't
true, we have to consider priority to preserve smp_nice.

For example, we could have n_cpus nice=-10 util_avg=55% tasks and
n_cpus/2 nice=0 util_avg=60% tasks. Balancing based on util_avg, we are
likely to end up with the nice=-10 tasks sharing cpus and the nice=0 tasks
getting their own, as we have 1.5*n_cpus tasks in total and 55%+55% is less
over-utilized than 55%+60% for those cpus that have to be shared. The
system utilization is only 85% of the system capacity, but we are
breaking smp_nice.

To be sure not to break smp_nice, we have instead defined over-utilization
conservatively as when any cpu in the system is fully utilized at its
highest frequency:

cpu_rq(any).cfs.avg.util_avg + margin > cpu_rq(any).capacity

IOW, as soon as one cpu is (nearly) 100% utilized, we switch to load_avg
to factor in priority to preserve smp_nice.

With this definition, we can skip periodic load-balance as no cpu has an
always-running task when the system is not over-utilized. All tasks will
be periodic and we can balance them at wake-up. This conservative
condition does, however, mean that some scenarios that could benefit from
energy-aware decisions even when one cpu is fully utilized will not get
those benefits.

For systems where some cpus might have reduced capacity
(RT-pressure and/or big.LITTLE), we want periodic load-balance checks as
soon as just a single cpu is fully utilized, as it might be one of those
with reduced capacity and in that case we want to migrate the task.
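
As a sketch, the per-cpu tipping point test described above could look
like this (capacity_of(), cpu_util() and the multiplicative form of the
margin are assumptions):

    /* Over-utilized once utilization plus margin exceeds capacity;
     * 1280/1024 gives ~25% headroom (value assumed for illustration). */
    static bool cpu_overutilized(int cpu)
    {
        return (capacity_of(cpu) * 1024) < (cpu_util(cpu) * 1280);
    }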

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2016-05-10 16:49:51 +08:00
Morten Rasmussen
2c6a8a48a7 sched: Estimate energy impact of scheduling decisions
Adds a generic energy-aware helper function, energy_diff(), that
calculates the energy impact of adding, removing, and migrating
utilization in the system.
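
A sketch of the likely shape of the helper (struct layout and field names
are assumptions):

    /* Describe a utilization change; energy_diff() returns the estimated
     * energy delta, negative meaning the change saves energy. */
    struct energy_env {
        int src_cpu;    /* cpu that utilization is removed from */
        int dst_cpu;    /* cpu that utilization is added to */
        int util_delta; /* amount of utilization moved */
    };

    static int energy_diff(struct energy_env *eenv);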

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2016-05-10 16:49:51 +08:00
Morten Rasmussen
df2030c841 sched: Extend sched_group_energy to test load-balancing decisions
Extend sched_group_energy() to support energy prediction with usage
(tasks) added to/removed from a specific cpu or migrated between a pair of
cpus. This is useful for load-balancing decision making.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2016-05-10 16:49:51 +08:00
Morten Rasmussen
c1770a5213 sched: Calculate energy consumption of sched_group
For energy-aware load-balancing decisions it is necessary to know the
energy consumption estimates of groups of cpus. This patch introduces a
basic function, sched_group_energy(), which estimates the energy
consumption of the cpus in the group and any resources shared by the
members of the group.

NOTE: The function has five levels of indentation and breaks the 80
character limit. Refactoring is necessary.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2016-05-10 16:49:51 +08:00
Morten Rasmussen
5ec8ccabfe sched: Highest energy aware balancing sched_domain level pointer
Add another member to the family of per-cpu sched_domain shortcut
pointers. This one, sd_ea, points to the highest level at which an energy
model is provided. At this level and all levels below, all sched_groups
have energy model data attached.

Partial energy model information is possible but restricted to providing
energy model data for the lower sched_domain levels (sd_ea and below),
leaving load-balancing at the levels above to non-energy-aware
load-balancing. For example, it is possible to apply energy-aware
scheduling within each socket on a multi-socket system and let normal
scheduling handle load-balancing between sockets.
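
A sketch of how sd_ea could be populated, following the existing sd_llc
shortcut pattern (the placement and the sge test are assumptions):

    struct sched_domain *sd, *ea_sd = NULL;

    /* Remember the highest domain whose groups carry energy data. */
    for_each_domain(cpu, sd) {
        if (sd->groups->sge)
            ea_sd = sd;
        else
            break;
    }
    rcu_assign_pointer(per_cpu(sd_ea, cpu), ea_sd);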

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2016-05-10 16:49:51 +08:00
Morten Rasmussen
b6c0399e0f sched: Relocated cpu_util() and change return type
Move cpu_util() to an earlier position in fair.c and change its return
type to unsigned long, as negative usage doesn't make much sense. All
other load and capacity related functions use unsigned long, including
the caller of cpu_util().
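
For reference, a sketch of the resulting helper (utilization is capped at
the cpu's original capacity since util_avg can transiently exceed it):

    static unsigned long cpu_util(int cpu)
    {
        unsigned long util = cpu_rq(cpu)->cfs.avg.util_avg;
        unsigned long capacity = capacity_orig_of(cpu);

        return (util >= capacity) ? capacity : util;
    }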

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2016-05-10 16:49:51 +08:00
Morten Rasmussen
3e55d2f2fc sched: Compute cpu capacity available at current frequency
capacity_orig_of() returns the max available compute capacity of a cpu.
For scale-invariant utilization tracking and energy-aware scheduling
decisions it is also useful to know the compute capacity available at the
current OPP (operating performance point) of a cpu.
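
A sketch of such a helper, scaling the original capacity by the current
frequency factor (the name capacity_curr_of() and the
arch_scale_freq_capacity() signature are assumptions):

    static unsigned long capacity_curr_of(int cpu)
    {
        return cpu_rq(cpu)->cpu_capacity_orig
               * arch_scale_freq_capacity(NULL, cpu)
               >> SCHED_CAPACITY_SHIFT;
    }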

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2016-05-10 16:49:50 +08:00
Juri Lelli
7c791bf110 arm64: Cpu invariant scheduler load-tracking and capacity support
Provides the scheduler with a cpu scaling correction factor for more
accurate load-tracking and cpu capacity handling.

The Energy Model (EM), in fact the capacity value of the last element
of the capacity states vector of the core (MC) level sched_group_energy
structure, is used as the source for this cpu scaling factor.

The cpu capacity value depends on the micro-architecture and the
maximum frequency of the cpu.

The maximum frequency part should not be confused with the frequency
invariant scheduler load-tracking support, which deals with frequency
related scaling due to DVFS functionality.

Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
2016-05-10 16:49:50 +08:00
Dietmar Eggemann
c7aeeb88c7 arm: Cpu invariant scheduler load-tracking and capacity support
Provides the scheduler with a cpu scaling correction factor for more
accurate load-tracking and cpu capacity handling.

The Energy Model (EM), in fact the capacity value of the last element
of the capacity states vector of the core (MC) level sched_group_energy
structure, is used instead of the arm arch specific cpu_efficiency and
dtb property 'clock-frequency' values as the source for this cpu
scaling factor.

The cpu capacity value depends on the micro-architecture and the
maximum frequency of the cpu.

The maximum frequency part should not be confused with the frequency
invariant scheduler load-tracking support, which deals with frequency
related scaling due to DVFS functionality.

Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
2016-05-10 16:49:50 +08:00
Morten Rasmussen
f0f739d887 sched: Introduce SD_SHARE_CAP_STATES sched_domain flag
cpufreq is currently keeping it a secret which cpus are sharing a
clock source. The scheduler needs to know about clock domains as well
to become more energy-aware. The SD_SHARE_CAP_STATES domain flag
indicates whether cpus belonging to the sched_domain share capacity
states (P-states).

There is no connection with cpufreq (yet). The flag must be set by
the arch specific topology code.
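
As a sketch, an arch could set the flag from its topology table roughly
like this (modeled on the arm topology code; the flag function and table
are assumptions):

    static inline int cpu_corepower_flags(void)
    {
        /* cpus in this domain share P-states (and package resources) */
        return SD_SHARE_PKG_RESOURCES | SD_SHARE_CAP_STATES;
    }

    static struct sched_domain_topology_level arm_topology[] = {
        { cpu_coregroup_mask, cpu_corepower_flags, SD_INIT_NAME(MC) },
        { cpu_cpu_mask, SD_INIT_NAME(DIE) },
        { NULL, },
    };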

cc: Russell King <linux@arm.linux.org.uk>
cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2016-05-10 16:49:50 +08:00
Dietmar Eggemann
4ce990ec65 sched: Initialize energy data structures
The sched_group_energy (sge) pointer of the first sched_group (sg) in
the sched_domain (sd) is initialized to point to the appropriate (in
terms of sd level and cpu) sge data defined in the arch and so to the
correct part of the Energy Model (EM).

Energy-aware scheduling allows a system to have EM data only up to a
certain sd level (the so-called highest energy aware balancing sd level).
A check in init_sched_energy() enforces that all sd's below this sd
level contain EM data.

The 'int cpu' parameter of sched_domain_energy_f requires that
check_sched_energy_data() makes sure that all cpus spanned by a sg
are provisioned with the same EM data.

This patch has also been tested with feature FORCE_SD_OVERLAP enabled.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
2016-05-10 16:49:50 +08:00
Dietmar Eggemann
0b3bda54d2 sched: Introduce energy data structures
The struct sched_group_energy represents the per-sched_group data
which is needed for energy aware scheduling. It contains (see the sketch
after the list):

  (1) number of elements of the idle state array
  (2) pointer to the idle state array which comprises 'power consumption'
      for each idle state
  (3) number of elements of the capacity state array
  (4) pointer to the capacity state array which comprises 'compute
      capacity and power consumption' tuples for each capacity state
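
Laid out as C, the description above corresponds to something like this
sketch (exact field names are assumptions):

    struct idle_state {
        unsigned long power;    /* power consumption in this idle state */
    };

    struct capacity_state {
        unsigned long cap;      /* compute capacity */
        unsigned long power;    /* power consumption at this capacity */
    };

    struct sched_group_energy {
        unsigned int nr_idle_states;        /* (1) */
        struct idle_state *idle_states;     /* (2) */
        unsigned int nr_cap_states;         /* (3) */
        struct capacity_state *cap_states;  /* (4) */
    };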

The struct sched_group obtains a pointer to a struct sched_group_energy.

The function pointer sched_domain_energy_f is introduced into struct
sched_domain_topology_level which will allow the arch to pass a particular
struct sched_group_energy from the topology shim layer into the scheduler
core.

The function pointer sched_domain_energy_f has an 'int cpu' parameter
since the folding of two adjacent sd levels via sd degenerate doesn't work
for all sd levels. It is, for example, not possible to use this feature
to provide per-cpu energy in sd level DIE on ARM's TC2 platform.

It was discussed that the folding of sd levels approach is preferable
over the cpu parameter approach, simply because the user (the arch
specifying the sd topology table) can introduce fewer errors. But since
it is not working, the 'int cpu' parameter is the only way out. It's
possible to use the folding of sd levels approach for
sched_domain_flags_f and the cpu parameter approach for the
sched_domain_energy_f at the same time though. With the use of the
'int cpu' parameter, an extra check function has to be provided to make
sure that all cpus spanned by a sched group are provisioned with the same
energy data.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
2016-05-10 16:49:50 +08:00
Morten Rasmussen
e496f328c8 sched: Make energy awareness a sched feature
This patch introduces the ENERGY_AWARE sched feature, which is
implemented using jump labels when SCHED_DEBUG is defined. It is
statically set false when SCHED_DEBUG is not defined. Hence this doesn't
allow energy awareness to be enabled without SCHED_DEBUG. This
sched_feature knob will be replaced later with a more appropriate
control knob when things have matured a bit.

ENERGY_AWARE is based on per-entity load-tracking, hence FAIR_GROUP_SCHED
must be enabled. This dependency isn't checked at compile time yet.
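
A sketch of the knob and a query helper, following the usual SCHED_FEAT
pattern (the energy_aware() helper name is an assumption):

    /* In features.h: default off. */
    SCHED_FEAT(ENERGY_AWARE, false)

    static inline bool energy_aware(void)
    {
        return sched_feat(ENERGY_AWARE);
    }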

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2016-05-10 16:49:50 +08:00
Morten Rasmussen
d8c8088f41 sched: Documentation for scheduler energy cost model
This documentation patch provides an overview of the experimental
scheduler energy costing model, associated data structures, and a
reference recipe on how platforms can be characterized to derive energy
models.

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2016-05-10 16:49:49 +08:00
Morten Rasmussen
681fa1413b sched: Prevent unnecessary active balance of single task in sched group
Scenarios with the busiest group having just one task and the local
group being idle on topologies with sched groups with different numbers of
cpus manage to dodge all load-balance bailout conditions, resulting in
the nr_balance_failed counter being incremented. This eventually causes a
pointless active migration of the task. This patch prevents this by not
incrementing the counter when the busiest group only has one task.
ASYM_PACKING migrations and migrations due to reduced capacity should
still take place as these are explicitly captured by
need_active_balance().

A better solution would be to not attempt the load-balance in the first
place, but that requires significant changes to the order of bailout
conditions and statistics gathering.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2016-05-10 16:49:49 +08:00
Dietmar Eggemann
cda2bd333b sched: Enable idle balance to pull single task towards cpu with higher capacity
We do not want to miss out on the ability to pull a single remaining
task from a potential source cpu towards an idle destination cpu. Add an
extra criteria to need_active_balance() to kick off active load balance
if the source cpu is over-utilized and has lower capacity than the
destination cpu.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
2016-05-10 16:49:49 +08:00
Morten Rasmussen
8a5c0339bb sched: Consider spare cpu capacity at task wake-up
find_idlest_group() selects the wake-up target group purely
based on group load which leads to suboptimal choices in low load
scenarios. An idle group with reduced capacity (due to RT tasks or
different cpu type) isn't necessarily a better target than a lightly
loaded group with higher capacity.

The patch adds spare capacity as an additional group selection
parameter. The target group is now selected based on the following
criteria:

1. Return the group with the cpu that has the most spare capacity, if such
a group exists and its spare capacity is significant. Significant spare
capacity is currently defined as at least 20% to spare (see the sketch
below).

2. Otherwise, return the group with the lowest load, unless it is the
local group, in which case NULL is returned and the search is continued
at the next (lower) level.
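
The spare-capacity notion behind criterion 1, sketched (helper name
assumed):

    static inline unsigned long capacity_spare_of(int cpu)
    {
        unsigned long capacity = capacity_of(cpu);
        unsigned long util = cpu_util(cpu);

        /* "Significant" would additionally require roughly
         * capacity - util >= capacity / 5, i.e. 20% to spare. */
        return capacity > util ? capacity - util : 0;
    }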

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2016-05-10 16:49:49 +08:00
Morten Rasmussen
e8bcb272bc sched: Add cpu capacity awareness to wakeup balancing
Wakeup balancing is completely unaware of cpu capacity, cpu utilization
and task utilization. The task is preferably placed on a cpu which is
idle at the instant the wakeup happens. New tasks
(SD_BALANCE_{FORK,EXEC}) are placed on an idle cpu in the idlest group if
such can be found, otherwise they go on the least loaded one. Existing
tasks (SD_BALANCE_WAKE) are placed on the previous cpu or an idle cpu
sharing the same last level cache, unless the wakee_flips heuristic in
wake_wide() decides to fall back to considering cpus outside SD_LLC.
Hence existing tasks are not guaranteed to get a chance to migrate to a
different group at wakeup in case the current one has reduced cpu
capacity (due to RT/IRQ pressure or a different uarch, e.g. ARM
big.LITTLE). They may eventually get pulled by other cpus doing
periodic/idle/nohz_idle balance, but it may take quite a while before
that happens.

This patch adds capacity awareness to find_idlest_{group,queue} (used by
SD_BALANCE_{FORK,EXEC} and SD_BALANCE_WAKE under certain circumstances)
such that groups/cpus that can accommodate the waking task based on task
utilization are preferred. In addition, wakeup of existing tasks
(SD_BALANCE_WAKE) is also sent through find_idlest_{group,queue} if the
task doesn't fit the capacity of the previous cpu, allowing it to escape
(override wake_affine) when necessary instead of relying on
periodic/idle/nohz_idle balance to eventually sort it out.
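
The "task fits the cpu" test implied here could be sketched as follows
(names and the exact margin are assumptions):

    static inline bool task_fits_capacity(struct task_struct *p, int cpu)
    {
        /* leave some headroom: 1280/1024 is ~25% margin */
        return capacity_of(cpu) * 1024 >= task_util(p) * 1280;
    }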

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2016-05-10 16:49:49 +08:00
Dietmar Eggemann
1eb2b8aa45 sched: Store system-wide maximum cpu capacity in root domain
To be able to compare the capacity of the target cpu with the highest
cpu capacity of the system in the wakeup path, store the system-wide
maximum cpu capacity in the root domain.
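
A sketch of the update, assumed to hook into the existing per-cpu
capacity update path:

    /* e.g. in update_cpu_capacity(): track the system-wide maximum. */
    if (capacity > rq->rd->max_cpu_capacity)
        rq->rd->max_cpu_capacity = capacity;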

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
2016-05-10 16:49:49 +08:00
Morten Rasmussen
0f8edc8274 arm: Update arch_scale_cpu_capacity() to reflect change to define
arch_scale_cpu_capacity() is no longer a weak function but a #define
instead. Include the #define in topology.h.

cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2016-05-10 16:49:49 +08:00
Dietmar Eggemann
8411396e9c arm64: Enable frequency invariant scheduler load-tracking support
Defines arch_scale_freq_capacity() to use the cpufreq implementation.

Including <linux/cpufreq.h> in topology.h, as is done for the arm arch,
doesn't work because of CONFIG_COMPAT=y (kernel support for 32-bit EL0).
That's why cpufreq_scale_freq_capacity() has to be declared extern in
topology.h.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
2016-05-10 16:49:48 +08:00
Dietmar Eggemann
372b62344c arm: Enable frequency invariant scheduler load-tracking support
Defines arch_scale_freq_capacity() to use the cpufreq implementation.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
2016-05-10 16:49:48 +08:00
Dietmar Eggemann
259bd4cd8a cpufreq: Frequency invariant scheduler load-tracking support
Implements cpufreq_scale_freq_capacity() to provide the scheduler with a
frequency scaling correction factor for more accurate load-tracking.

The factor is:

	(current_freq(cpu) << SCHED_CAPACITY_SHIFT) / max_freq(cpu)

In fact, freq_scale should be a struct cpufreq_policy data member. But
that would require the scheduler hot path (__update_load_avg()) to grab
the cpufreq lock. This can be avoided by using per-cpu data
initialized to SCHED_CAPACITY_SCALE for freq_scale.
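
A sketch of the per-cpu variant described above (the update hook is an
assumption):

    static DEFINE_PER_CPU(unsigned long, freq_scale) = SCHED_CAPACITY_SCALE;

    /* Called from the cpufreq frequency-change path. */
    static void scale_freq_capacity(int cpu, unsigned long cur_freq,
                                    unsigned long max_freq)
    {
        per_cpu(freq_scale, cpu) = (cur_freq << SCHED_CAPACITY_SHIFT)
                                   / max_freq;
    }

    unsigned long cpufreq_scale_freq_capacity(struct sched_domain *sd,
                                              int cpu)
    {
        return per_cpu(freq_scale, cpu);
    }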

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
2016-05-10 16:49:48 +08:00
Yuyang Du
0ecbd590a7 sched/fair: Fix new task's load avg removed from source CPU in wake_up_new_task()
If a newly created task is selected to go to a different CPU in fork
balance when it wakes up for the first time, its load averages should
not be removed from the source CPU, since they were never added to
it in the first place. The same is also applicable to a never-used group
entity.

Fix it in remove_entity_load_avg(): when entity's last_update_time
is 0, simply return. This should precisely identify the case in
question, because in other migrations, the last_update_time is set
to 0 after remove_entity_load_avg().
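
A sketch of the fix, with the existing removal logic elided:

    void remove_entity_load_avg(struct sched_entity *se)
    {
        /* Never attached to this cpu: nothing was added, so there is
         * nothing to remove. */
        if (!se->avg.last_update_time)
            return;

        /* ... existing removal logic ... */
    }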

Reported-by: Steve Muckle <steve.muckle@linaro.org>
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
[peterz: cfs_rq_last_update_time]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <Juri.Lelli@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Link: http://lkml.kernel.org/r/20151216233427.GJ28098@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-05-10 16:48:38 +08:00
Dmitry Shmidt
f355b21139 fiq_debugger: Add option to apply uart overlay by FIQ_DEBUGGER_UART_OVERLAY
fiq_debugger is taking over the uart, so it is necessary to disable the
original uart in the DT file. This can be done manually or by overlay.

Change-Id: I9f50ec15b0e22e602d73b9f745fc8666f8925d09
Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
2016-05-05 16:17:12 -07:00
Amit Pundir
b2cee99f2b Revert "Recreate asm/mach/mmc.h include file"
This reverts commit 5b42ae3eda.

This recreated arch/arm/include/asm/mach/mmc.h include file has
no active users in android-4.x kernels. Also, all the necessary bits
have already been moved to include/linux/amba/mmci.h.

6ef297f86b (ARM: 5720/1: Move MMCI header to amba include dir)

Change-Id: Ibf258b355d17f54f49b777a8f6e0089e9b59a3a5
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-05-05 10:27:27 +05:30
Amit Pundir
8b43e279a2 Revert "ARM: Add 'card_present' state to mmc_platfrom_data"
This reverts commit 541632275e.

mmc_platform_data (or arch/arm/include/asm/mach/mmc.h in general)
has no active users in android-4.x kernels. Also, all the necessary
bits have already been moved to include/linux/amba/mmci.h.

6ef297f86b (ARM: 5720/1: Move MMCI header to amba include dir)

Change-Id: Iff384eb527327bf88543408e0257241c1fd99a43
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-05-05 10:26:56 +05:30
Jack Pham
d114f2c357 usb: dual-role: make stub functions inline
If CONFIG_DUAL_ROLE_USB_INTF is disabled but the exported functions
are referenced, the build will result in warnings such as:

	In file included from include/linux/usb/class-dual-role.h:112:13:
	warning: ‘dual_role_instance_changed’ defined but not used
	[-Wunused-function]

These stub functions should be static inline.
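
For illustration, the stub pattern looks like this (a sketch; the exact
signature in class-dual-role.h may differ):

    #ifndef CONFIG_DUAL_ROLE_USB_INTF
    /* static inline lets unused stubs be discarded without warnings */
    static inline void dual_role_instance_changed(
            struct dual_role_phy_instance *dual_role)
    {
    }
    #endif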

Change-Id: I5a9ef58dca32306fac5a4c7f28cdaa36fa8ae078
Signed-off-by: Jack Pham <jackp@codeaurora.org>
(cherry picked from commit 2d152dbb0743526b21d6bbefe097f874c027f860)
(cherry picked from commit 8ad66cafaa10e6ba94ff79a8dbc2cc437c6bfe93)
2016-05-03 23:45:13 +00:00
Amit Pundir
f24888af18 Revert "mmc: Add status IRQ and status callback function to mmc platform data"
This reverts commit 91fa97e1e5.

This patch is no longer valid. There are no users of this status IRQ and
callback in android-4.x. The Qcom platform (mach-msm/qsd8x50, HTC Dream..)
and SDCC controller (msm_sdcc) using this status IRQ and callback were
dropped from mainline some time back.

27842bb18b00 (mmc: Remove msm_sdcc driver)
c0c89fafa2 (ARM: Remove mach-msm and associated ARM architecture code)

Change-Id: Ia38e42a06dc184395f79c1ec1d306bf9775704d5
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-05-02 18:04:46 +05:30
Yongqin Liu
3a343a1540 quick selinux support for tracefs
Here is just the quick fix for tracefs with selinux.
just add tracefs to the list of whitelisted filesystem
types in selinux_is_sblabel_mnt(), but the right fix would be to
generalize this logic as described in the last item on the todo list,
https://bitbucket.org/seandroid/wiki/wiki/ToDo

Change-Id: I2aa803ccffbcd2802a7287514da7648e6b364157
Signed-off-by: Yongqin Liu <yongqin.liu@linaro.org>
2016-04-28 14:06:30 +08:00
Amit Pundir
9fc7b46c3d Revert "hid-multitouch: Filter collections by application usage."
This reverts commit 0840b80cb9.

This patch is already upstreamed in v4.4, commit
658d4aed59 (HID: hid-multitouch: Filter collections by application usage.),
and further fixed/cleaned up afterwards in commits
c2ef8f21ea (HID: multitouch: add support for trackpads),
76f5902aeb (HID: hid-multitouch: Simplify setup and frame synchronization) et al.

By having this duplicate patch in AOSP we are doing redundant
checks for Touchscreen and Touchpad devices.

Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-04-27 10:16:58 -07:00
Amit Pundir
379cf6e8dd Revert "HID: steelseries: validate output report details"
This reverts commit 90037b2720.

Remove duplicate code. This patch is already upstreamed in v4.4,
commit 41df7f6d43 (HID: steelseries: validate output report details).

Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-04-26 12:52:38 -07:00
John Stultz
4e461c777e xt_qtaguid: Fix panic caused by synack processing
In upstream commit ca6fb06518
(tcp: attach SYNACK messages to request sockets instead of
listener)
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=ca6fb0651883

The building of SYNACK messages was changed, which made it so that
skb->sk points to a casted request_sock. This is problematic,
as there is no sk_socket in a request_sock. So when the qtaguid_mt
function tries to access sk->sk_socket, it accesses uninitialized
memory.

After looking at how other netfilter implementations handle this,
I realized a skb_to_full_sk() helper had been added, which the
xt_qtaguid code isn't yet using.

This patch adds its use, and resolves panics seen when accessing
uninitialized memory while processing SYNACK packets.
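
The resulting pattern, sketched (the wrapper name is hypothetical;
skb_to_full_sk() is the upstream helper):

    static bool qtaguid_sk_socket_ok(const struct sk_buff *skb)
    {
        /* For SYNACKs, skb->sk is a casted request_sock; map it to the
         * full listener socket before dereferencing sk_socket. */
        struct sock *sk = skb_to_full_sk(skb);

        return sk && sk->sk_socket;
    }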

Reported-by: YongQin Liu <yongquin.liu@linaro.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
2016-04-26 12:49:26 -07:00
Dmitry Shmidt
ad95c12f66 Revert "mm: vmscan: Add a debug file for shrinkers"
Kernel panic when typing "cat /sys/kernel/debug/shrinker":

Unable to handle kernel paging request at virtual address 0af37d40
pgd = d4dec000
[0af37d40] *pgd=00000000
Internal error: Oops: 5 [#1] PREEMPT SMP ARM
[<c0bb8f24>] (_raw_spin_lock) from [<c020aa08>] (list_lru_count_one+0x14/0x28)
[<c020aa08>] (list_lru_count_one) from [<c02309a8>] (super_cache_count+0x40/0xa0)
[<c02309a8>] (super_cache_count) from [<c01f6ab0>] (debug_shrinker_show+0x50/0x90)
[<c01f6ab0>] (debug_shrinker_show) from [<c024fa5c>] (seq_read+0x1ec/0x48c)
[<c024fa5c>] (seq_read) from [<c022e8f8>] (__vfs_read+0x20/0xd0)
[<c022e8f8>] (__vfs_read) from [<c022f0d0>] (vfs_read+0x7c/0x104)
[<c022f0d0>] (vfs_read) from [<c022f974>] (SyS_read+0x44/0x9c)
[<c022f974>] (SyS_read) from [<c0107580>] (ret_fast_syscall+0x0/0x3c)
Code: e1a04000 e3a00001 ebd66b39 f594f000 (e1943f9f)
---[ end trace 60c74014a63a9688 ]---
Kernel panic - not syncing: Fatal exception

shrink_control.nid is used but not initialized; the same applies to
shrink_control.memcg.

This reverts commit b0e7a582b2.

Change-Id: I108de88fa4baaef99a53c4e4c6a1d8c4b4804157
Reported-by: Xiaowen Liu <xiaowen.liu@freescale.com>
Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
2016-04-26 12:45:11 -07:00
Amit Pundir
66daa5f5e2 Revert "SELinux: Enable setting security contexts on rootfs inodes."
This reverts commit 78d36d2111.

Drop this duplicate patch. This patch is already upstreamed in v4.4. Commits
5c73fceb8c (SELinux: Enable setting security contexts on rootfs inodes.),
12f348b9dc (SELinux: rename SE_SBLABELSUPP to SBLABEL_MNT), and
b43e725d8d (SELinux: use a helper function to determine seclabel),
for reference.

Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-04-26 16:24:47 +05:30
Amit Pundir
5a0725f49f Revert "SELinux: build fix for 4.1"
This reverts commit 43e1b4f528.

This patch is part of code which is already upstreamed in v4.4. Commits
5c73fceb8c (SELinux: Enable setting security contexts on rootfs inodes.),
12f348b9dc (SELinux: rename SE_SBLABELSUPP to SBLABEL_MNT), and
b43e725d8d (SELinux: use a helper function to determine seclabel),
for reference.

Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-04-26 16:24:43 +05:30
Daniel Rosenberg
00ee836d89 fuse: Add support for d_canonical_path
Allows FUSE to report to inotify that it is acting
as a layered filesystem. The userspace component
returns a string representing the location of the
underlying file. If the string cannot be resolved
into a path, the top level path is returned instead.

bug: 23904372
Change-Id: Iabdca0bbedfbff59e9c820c58636a68ef9683d9f
Signed-off-by: Daniel Rosenberg <drosen@google.com>
2016-04-25 19:16:11 -07:00
Daniel Rosenberg
4b7de83752 vfs: change d_canonical_path to take two paths
bug: 23904372
Change-Id: I4a686d64b6de37decf60019be1718e1d820193e6
Signed-off-by: Daniel Rosenberg <drosen@google.com>
2016-04-25 19:15:48 -07:00
Amit Pundir
5e00c3098b android: recommended.cfg: remove CONFIG_UID_STAT
Remove UID Stat driver.

Change-Id: Ifc9d2c6fe27900f30e6407398f5b24222518bffc
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-04-25 12:55:23 -07:00
Amit Pundir
7c79aca516 netfilter: xt_qtaguid: seq_printf fixes
Update seq_printf() usage in xt_qtaguid to align
with changes from mainline commit 6798a8caaf
"fs/seq_file: convert int seq_vprint/seq_printf/etc...
returns to void".

Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-04-25 22:19:46 +05:30
Amit Pundir
ece28ad441 Revert "misc: uidstat: Adding uid stat driver to collect network statistics."
This reverts commit 6b6d5fbf9a.

Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-04-25 22:19:46 +05:30
Amit Pundir
42d9422a80 Revert "net: activity_stats: Add statistics for network transmission activity"
This reverts commit afedd7beba.

Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-04-25 22:19:46 +05:30
Amit Pundir
5c0d8ae10a Revert "net: activity_stats: Stop using obsolete create_proc_read_entry api"
This reverts commit 7c121720fa.

Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-04-25 22:19:46 +05:30
Amit Pundir
cded5596a3 Revert "misc: uidstat: avoid create_stat() race and blockage."
This reverts commit f7a8121740.

Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-04-25 22:19:46 +05:30
Amit Pundir
1ada37dc07 Revert "misc: uidstat: Remove use of obsolete create_proc_read_entry api"
This reverts commit fccab646d3.

Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-04-25 22:19:46 +05:30
Amit Pundir
cdb6973ae1 Revert "misc seq_printf fixes for 4.4"
This reverts commit 5c7566a29b.

This patch reverts some changes in net/netfilter/xt_qtaguid.c as well.
I'll submit another patch to restore those changes.

Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-04-25 22:19:46 +05:30
Amit Pundir
a7eabf90ae Revert "misc: uid_stat: Include linux/atomic.h instead of asm/atomic.h"
This reverts commit 8d3a6c1538.

This series of patches reverts the AOSP UID_STAT and NET_ACTIVITY_STATS
drivers. I could not find any meaningful usage of these interfaces in
AOSP master.

The UID_STAT driver exposes "/proc/uid_stat/*" interfaces, but its only
uses in AOSP master appear to be an out-of-date bandwidth test in
frameworks/base and a somewhat recent battery utils test in the
external/chromium-trace project.

The NET_ACTIVITY_STATS driver exposes the "/proc/net/stat/activity"
interface, but I cannot track its usage anywhere in AOSP at all.
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-04-25 22:19:46 +05:30
Dmitry Shmidt
8591a83cae Revert "net: socket ioctl to reset connections matching local address"
Use SOCK_DESTROY from now on instead of SIOCKILLADDR.

This reverts commit 38f0ec724f.

Change-Id: I2dcd833b66c88a48de8978dce9d72ab78f9af549
2016-04-22 11:04:14 -07:00
Dmitry Shmidt
20de1c6bc4 Revert "net: fix iterating over hashtable in tcp_nuke_addr()"
This reverts commit 4747299b2c.
2016-04-21 15:44:25 -07:00