Commit graph

563367 commits

Author SHA1 Message Date
Badhri Jagan Sridharan
b24e99b3ae UPSTREAM: USB: iowarrior: fix oops with malicious USB descriptors
commit 4ec0ef3a82125efc36173062a50624550a900ae0 upstream.

The iowarrior driver expects at least one valid endpoint.  If given
malicious descriptors that specify 0 for the number of endpoints,
it will crash in the probe function.  Ensure there is at least
one endpoint on the interface before using it.

The full report of this issue can be found here:
http://seclists.org/bugtraq/2016/Mar/87

BUG: 28242610

Reported-by: Ralf Spenneberg <ralf@spenneberg.net>
Signed-off-by: Josh Boyer <jwboyer@fedoraproject.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
Change-Id: If5161c23928e9ef77cb3359cba9b36622b1908df
2016-08-30 13:41:35 -07:00
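
A minimal sketch of the probe-time check described in the commit above (illustrative only; field names follow the usual USB core structures, and the actual patch differs in detail):

    static int iowarrior_probe(struct usb_interface *interface,
                               const struct usb_device_id *id)
    {
            struct usb_host_interface *iface_desc = interface->cur_altsetting;

            /* Reject malicious/malformed descriptors that declare no endpoints
             * before anything dereferences iface_desc->endpoint[0]. */
            if (iface_desc->desc.bNumEndpoints < 1)
                    return -ENODEV;

            /* ... rest of probe can now safely use the endpoint array ... */
            return 0;
    }
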
Badhri Jagan Sridharan
ae6790121c UPSTREAM: USB: usb_driver_claim_interface: add sanity checking
commit 0b818e3956fc1ad976bee791eadcbb3b5fec5bfd upstream.

Attacks that trick drivers into passing a NULL pointer
to usb_driver_claim_interface() using forged descriptors are
known. This thwarts them by sanity checking.

BUG: 28242610

Signed-off-by: Oliver Neukum <ONeukum@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
Change-Id: Ib43ec5edb156985a9db941785a313f6801df092a
2016-08-30 13:41:35 -07:00
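
A sketch of the kind of sanity check the commit above describes (not the literal upstream diff):

    int usb_driver_claim_interface(struct usb_driver *driver,
                                   struct usb_interface *iface, void *priv)
    {
            /* Forged descriptors can lead callers to pass a NULL interface;
             * refuse to claim it instead of dereferencing it. */
            if (!iface)
                    return -ENODEV;

            /* ... existing claim logic continues here ... */
            return 0;
    }
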
Badhri Jagan Sridharan
ee5c7a9739 UPSTREAM: USB: mct_u232: add sanity checking in probe
commit 4e9a0b05257f29cf4b75f3209243ed71614d062e upstream.

An attack using the lack of sanity checking in probe is known. This
patch checks for the existence of a second port.

CVE-2016-3136
BUG: 28242610
Signed-off-by: Oliver Neukum <ONeukum@suse.com>
[johan: add error message]
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
Change-Id: I284ad648c2087c34a098d67e0cc6d948a568413c
2016-08-30 13:41:35 -07:00
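
A sketch of the probe check the commit above describes (names follow the usb-serial core; details may differ from the actual patch):

    static int mct_u232_port_probe(struct usb_serial_port *port)
    {
            struct usb_serial *serial = port->serial;

            /* The driver relies on a second port with an interrupt-in URB;
             * bail out if a forged device does not provide it. */
            if (!serial->port[1] || !serial->port[1]->interrupt_in_urb) {
                    dev_err(&port->dev, "expected endpoint missing\n");
                    return -ENODEV;
            }

            return 0;
    }
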
Badhri Jagan Sridharan
125d90ca2f UPSTREAM: USB: cypress_m8: add endpoint sanity check
commit c55aee1bf0e6b6feec8b2927b43f7a09a6d5f754 upstream.

An attack using missing endpoints exists.

CVE-2016-3137

BUG: 28242610
Signed-off-by: Oliver Neukum <ONeukum@suse.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
Change-Id: I1cc7957a5924175d24f12fdc41162ece67c907e5
2016-08-30 13:41:35 -07:00
Badhri Jagan Sridharan
258a8f519a UPSTREAM: Input: powermate - fix oops with malicious USB descriptors
The powermate driver expects at least one valid USB endpoint in its
probe function.  If given malicious descriptors that specify 0 for
the number of endpoints, it will crash.  Validate the number of
endpoints on the interface before using them.

The full report for this issue can be found here:
http://seclists.org/bugtraq/2016/Mar/85

BUG: 28242610

Reported-by: Ralf Spenneberg <ralf@spenneberg.net>
Signed-off-by: Josh Boyer <jwboyer@fedoraproject.org>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
Change-Id: I1cb956a35f3bba73324240d5bd0a029f49d3c456
2016-08-30 13:41:35 -07:00
Jason Baron
ba52437821 BACKPORT: tcp: enable per-socket rate limiting of all 'challenge acks'
(cherry picked from commit 083ae308280d13d187512b9babe3454342a7987e)

The per-socket rate limit for 'challenge acks' was introduced in the
context of limiting ack loops:

commit f2b2c582e8 ("tcp: mitigate ACK loops for connections as tcp_sock")

And I think it can be extended to rate limit all 'challenge acks' on a
per-socket basis.

Since we have the global tcp_challenge_ack_limit, this patch allows for
tcp_challenge_ack_limit to be set to a large value and effectively rely on
the per-socket limit, or set tcp_challenge_ack_limit to a lower value and
still prevent a single connection from consuming the entire challenge ack
quota.

It further moves in the direction of eliminating the global limit at some
point, as Eric Dumazet has suggested. This is a follow-up to:
Subject: tcp: make challenge acks less predictable

Cc: Eric Dumazet <edumazet@google.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Yue Cao <ycao009@ucr.edu>
Signed-off-by: Jason Baron <jbaron@akamai.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Change-Id: I622d5ae96e9387e775a0196c892d8d0e1a5564a7
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-08-30 16:58:07 +00:00
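
An illustrative sketch of consulting the per-socket limiter before the global challenge-ack quota, along the lines described above (helper names mirror the kernel's, but this is not the literal diff):

    static void tcp_send_challenge_ack(struct sock *sk, const struct sk_buff *skb)
    {
            struct tcp_sock *tp = tcp_sk(sk);

            /* Per-socket rate limit first, so a single connection cannot
             * drain the global tcp_challenge_ack_limit quota on its own. */
            if (__tcp_oow_rate_limited(sock_net(sk),
                                       LINUX_MIB_TCPACKSKIPPEDCHALLENGE,
                                       &tp->last_oow_ack_time))
                    return;

            /* ... then the global, host-wide quota is consulted ... */
            tcp_send_ack(sk);
    }
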
Balbir Singh
e91f1799ff RFC: FROMLIST: cgroup: reduce read locked section of cgroup_threadgroup_rwsem during fork
cgroup_threadgroup_rwsem is acquired in read mode during process exit
and fork.  It is also grabbed in write mode during
__cgroups_proc_write().  I've recently run into a scenario with lots
of memory pressure and OOM and I am beginning to see

systemd

 __switch_to+0x1f8/0x350
 __schedule+0x30c/0x990
 schedule+0x48/0xc0
 percpu_down_write+0x114/0x170
 __cgroup_procs_write.isra.12+0xb8/0x3c0
 cgroup_file_write+0x74/0x1a0
 kernfs_fop_write+0x188/0x200
 __vfs_write+0x6c/0xe0
 vfs_write+0xc0/0x230
 SyS_write+0x6c/0x110
 system_call+0x38/0xb4

This thread is waiting on the reader of cgroup_threadgroup_rwsem to
exit.  The reader itself is under memory pressure and has gone into
reclaim after fork. At times the reader also ends up waiting on
oom_lock.

 __switch_to+0x1f8/0x350
 __schedule+0x30c/0x990
 schedule+0x48/0xc0
 jbd2_log_wait_commit+0xd4/0x180
 ext4_evict_inode+0x88/0x5c0
 evict+0xf8/0x2a0
 dispose_list+0x50/0x80
 prune_icache_sb+0x6c/0x90
 super_cache_scan+0x190/0x210
 shrink_slab.part.15+0x22c/0x4c0
 shrink_zone+0x288/0x3c0
 do_try_to_free_pages+0x1dc/0x590
 try_to_free_pages+0xdc/0x260
 __alloc_pages_nodemask+0x72c/0xc90
 alloc_pages_current+0xb4/0x1a0
 page_table_alloc+0xc0/0x170
 __pte_alloc+0x58/0x1f0
 copy_page_range+0x4ec/0x950
 copy_process.isra.5+0x15a0/0x1870
 _do_fork+0xa8/0x4b0
 ppc_clone+0x8/0xc

Meanwhile, all exiting/forking processes are blocked, almost
stalling the system.

This patch moves the threadgroup_change_begin from before
cgroup_fork() to just before cgroup_canfork().  There is no need to
worry about threadgroup changes till the task is actually added to the
threadgroup.  This avoids having to call reclaim with
cgroup_threadgroup_rwsem held.

tj: Subject and description edits.

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Acked-by: Zefan Li <lizefan@huawei.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: stable@vger.kernel.org # v4.2+
Signed-off-by: Tejun Heo <tj@kernel.org>
[jstultz: Cherry-picked from:
 git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git 568ac888215c7f]
Change-Id: Ie8ece84fb613cf6a7b08cea1468473a8df2b9661
Signed-off-by: John Stultz <john.stultz@linaro.org>
2016-08-26 09:37:56 -07:00
Peter Zijlstra
0c3240a1ef RFC: FROMLIST: cgroup: avoid synchronize_sched() in __cgroup_procs_write()
The current percpu-rwsem read side is entirely free of serializing insns
at the cost of having a synchronize_sched() in the write path.

The latency of the synchronize_sched() is too high for cgroups. The
commit 1ed1328792 talks about the write path being a fairly cold path
but this is not the case for Android, which moves tasks to the foreground
cgroup and back around binder IPC calls from foreground processes to
background processes, so it is significantly hotter than human-initiated
operations.

Switch cgroup_threadgroup_rwsem into the slow mode for now to avoid the
problem; hopefully it should not be that slow after another commit,
80127a39681b ("locking/percpu-rwsem: Optimize readers and reduce global
impact").

We could just add rcu_sync_enter() into cgroup_init() but we do not want
another synchronize_sched() at boot time, so this patch adds the new helper
which doesn't block but currently can only be called before the first use.

Cc: Tejun Heo <tj@kernel.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Reported-by: John Stultz <john.stultz@linaro.org>
Reported-by: Dmitry Shmidt <dimitrysh@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
[jstultz: backported to 4.4]
Change-Id: I34aa9c394d3052779b56976693e96d861bd255f2
Mailing-list-URL: https://lkml.org/lkml/2016/8/11/557
Signed-off-by: John Stultz <john.stultz@linaro.org>
2016-08-26 09:37:43 -07:00
Peter Zijlstra
3228c5eb7a RFC: FROMLIST: locking/percpu-rwsem: Optimize readers and reduce global impact
Currently the percpu-rwsem switches to (global) atomic ops while a
writer is waiting; which could be quite a while and slows down
releasing the readers.

This patch cures this problem by ordering the reader-state vs
reader-count (see the comments in __percpu_down_read() and
percpu_down_write()). This changes a global atomic op into a full
memory barrier, which doesn't have the global cacheline contention.

This also enables using the percpu-rwsem with rcu_sync disabled in order
to bias the implementation differently, reducing the writer latency by
adding some cost to readers.

Mailing-list-URL: https://lkml.org/lkml/2016/8/9/181
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[jstultz: Backported to 4.4]
Change-Id: I8ea04b4dca2ec36f1c2469eccafde1423490572f
Signed-off-by: John Stultz <john.stultz@linaro.org>
2016-08-26 09:37:34 -07:00
Lorenzo Colitti
75c591ea57 net: ipv6: Fix ping to link-local addresses.
ping_v6_sendmsg does not set flowi6_oif in response to
sin6_scope_id or sk_bound_dev_if, so it is not possible to use
these APIs to ping an IPv6 address on a different interface.
Instead, it sets flowi6_iif, which is incorrect but harmless.

Stop setting flowi6_iif, and support various ways of setting oif
in the same priority order used by udpv6_sendmsg.

[Backport of net 5e457896986e16c440c97bb94b9ccd95dd157292]

Bug: 29370996
Change-Id: Ibe1b9434c00ed96f1e30acb110734c6570b087b8
Tested: https://android-review.googlesource.com/#/c/254470/
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-08-26 16:26:53 +09:00
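
A simplified sketch of the oif selection order described above, mirroring udpv6_sendmsg (illustrative fragment; the actual patch differs in detail):

    int oif = 0;

    /* An explicit scope id on a link-local destination wins ... */
    if (u && u->sin6_scope_id &&
        (ipv6_addr_type(daddr) & IPV6_ADDR_LINKLOCAL))
            oif = u->sin6_scope_id;

    /* ... then the socket's bound device ... */
    if (!oif)
            oif = sk->sk_bound_dev_if;

    /* ... then sticky pktinfo and the multicast/unicast defaults. */
    if (!oif)
            oif = np->sticky_pktinfo.ipi6_ifindex;
    if (!oif)
            oif = ipv6_addr_is_multicast(daddr) ? np->mcast_oif : np->ucast_oif;

    fl6.flowi6_oif = oif;   /* previously flowi6_iif was set instead */
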
Hannes Frederic Sowa
8f5bf76871 ipv6: fix endianness error in icmpv6_err
IPv6 ping socket error handler doesn't correctly convert the new 32 bit
mtu to host endianness before using it.

[Cherry-pick of net dcb94b88c09ce82a80e188d49bcffdc83ba215a6]

Bug: 29370996
Change-Id: Iea0ca79f16c2a1366d82b3b0a3097093d18da8b7
Cc: Lorenzo Colitti <lorenzo@google.com>
Fixes: 6d0bfe2261 ("net: ipv6: Add IPv6 support to the ping socket.")
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-08-26 16:26:53 +09:00
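
The underlying pattern, as a hedged sketch (the handler name below is made up; the real fix lives in the IPv6 ping error path): the ICMPv6 error carries the new MTU as a 32-bit big-endian value, so it must go through ntohl() before being used as a host-order number.

    static void example_handle_pkt_toobig(struct sk_buff *skb, __be32 info)
    {
            u32 mtu = ntohl(info);  /* convert to host endianness before use */

            /* ... update the path MTU using the host-order value ... */
    }
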
Badhri Jagan Sridharan
8a58605e46 ANDROID: dm: android-verity: Allow android-verity to be compiled as an independent module
Exports the device mapper callbacks of linear and dm-verity-target
methods.

Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
Change-Id: I0358be0615c431dce3cc78575aaac4ccfe3aacd7
2016-08-25 15:59:13 +00:00
Mohamad Ayyash
1c6df5fdcb Revert "Android: MMC/UFS IO Latency Histograms."
This reverts commit 8d525c5122.

Change-Id: I69350b98d9de9b1c9f591e03a90f133e328ba72a
2016-08-25 00:59:21 +00:00
Mohan Srinivasan
8d525c5122 Android: MMC/UFS IO Latency Histograms.
This patch adds a new sysfs node (latency_hist) and reports IO
(svc time) latency histograms. Disabled by default; it can be enabled
by echoing 0 into latency_hist, and stats can be cleared by writing 2
into latency_hist.

Bug: 30677035
Change-Id: I625938135ea33e6e87cf6af1fc7edc136d8b4b32
Signed-off-by: Mohan Srinivasan <srmohan@google.com>
2016-08-24 19:09:22 +00:00
Rainer Weikusat
50acf9e6e3 UPSTREAM: af_unix: Guard against other == sk in unix_dgram_sendmsg
(cherry picked from commit a5527dda344fff0514b7989ef7a755729769daa1)

The unix_dgram_sendmsg routine uses the following test

if (unlikely(unix_peer(other) != sk && unix_recvq_full(other))) {

to determine if sk and other are in an n:1 association (either
established via connect or by using sendto to send messages to an
unrelated socket identified by address). This isn't correct as the
specified address could have been bound to the sending socket itself or
because this socket could have been connected to itself by the time of
the unix_peer_get but disconnected before the unix_state_lock(other). In
both cases, the if-block would be entered despite other == sk which
might either block the sender unintentionally or lead to trying to unlock
the same spin lock twice for a non-blocking send. Add an other != sk
check to guard against this.

Fixes: 7d267278a9 ("unix: avoid use-after-free in ep_remove_wait_queue")
Reported-By: Philipp Hahn <pmhahn@pmhahn.de>
Signed-off-by: Rainer Weikusat <rweikusat@mobileactivedefense.com>
Tested-by: Philipp Hahn <pmhahn@pmhahn.de>
Signed-off-by: David S. Miller <davem@davemloft.net>

Change-Id: I4ebef6a390df3487903b166b837e34c653e01cb2
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-08-24 13:48:36 +05:30
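
The guard, sketched (exact placement within the condition may differ from the actual patch):

    /* Skip the n:1 backoff/blocking path entirely when the destination
     * turns out to be the sending socket itself. */
    if (other != sk &&
        unlikely(unix_peer(other) != sk && unix_recvq_full(other))) {
            /* n:1 handling: may block or return -EAGAIN */
    }
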
Takashi Iwai
a20900c4aa UPSTREAM: ALSA: timer: Fix race among timer ioctls
(cherry picked from commit af368027a49a751d6ff4ee9e3f9961f35bb4fede)

ALSA timer ioctls have an open race and this may lead to a
use-after-free of timer instance object.  A simplistic fix is to make
each ioctl exclusive.  We already have tread_sem for controlling the
tread; extend this into a global mutex to be applied to each ioctl.

The downside is, of course, the worse concurrency.  But these ioctls
aren't meant to be accessed in parallel anyway, so it should be fine to
serialize them.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Tested-by: Dmitry Vyukov <dvyukov@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Change-Id: I1ac52f1cba5e7408fd88c8fc1c30ca2e83967ebb
Bug: 28694392
2016-08-22 23:15:17 -07:00
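
A sketch of the serialization described above, with the former tread_sem widened into a mutex taken around the whole ioctl (names follow sound/core/timer.c; illustrative only):

    static long snd_timer_user_ioctl(struct file *file, unsigned int cmd,
                                     unsigned long arg)
    {
            struct snd_timer_user *tu = file->private_data;
            long ret;

            /* One ioctl at a time per timer instance closes the race window. */
            mutex_lock(&tu->ioctl_lock);
            ret = __snd_timer_user_ioctl(file, cmd, arg);
            mutex_unlock(&tu->ioctl_lock);
            return ret;
    }
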
Eric Dumazet
ca9b7b070b UPSTREAM: tcp: make challenge acks less predictable
(cherry picked from commit 75ff39ccc1bd5d3c455b6822ab09e533c551f758)

Yue Cao claims that the current host rate limiting of challenge ACKs
(RFC 5961) could leak enough information to allow a patient attacker
to hijack TCP sessions. He will soon provide details in an academic
paper.

This patch increases the default limit from 100 to 1000, and adds
some randomization so that the attacker can no longer hijack
sessions without spending a considerable amount of probes.

Based on initial analysis and patch from Linus.

Note that we also have per socket rate limiting, so it is tempting
to remove the host limit in the future.

v2: randomize the count of challenge acks per second, not the period.

Fixes: 282f23c6ee ("tcp: implement RFC 5961 3.2")
Reported-by: Yue Cao <ycao009@ucr.edu>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change-Id: Ib46ba66f5e4a5a7c81bfccd7b0aa83c3d9e1b3bb
Bug: 30809774
2016-08-16 17:33:30 -07:00
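
A sketch of the randomized host-wide quota described above: every second the allowance resets to a random value around the (now larger) limit, so the exact number of challenge ACKs can no longer be predicted. Close to, but not necessarily identical to, the upstream diff:

    static u32 challenge_timestamp;
    static u32 challenge_count;

    static bool tcp_challenge_ack_allowed(void)
    {
            u32 limit = sysctl_tcp_challenge_ack_limit;  /* default now 1000 */
            u32 now = jiffies / HZ;
            u32 count;

            if (now != challenge_timestamp) {
                    u32 half = (limit + 1) >> 1;

                    challenge_timestamp = now;
                    /* randomize the per-second count, not the period */
                    WRITE_ONCE(challenge_count, half + prandom_u32_max(limit));
            }
            count = READ_ONCE(challenge_count);
            if (count > 0) {
                    WRITE_ONCE(challenge_count, count - 1);
                    return true;
            }
            return false;
    }
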
Waiman Long
76554612b4 sched/fair: Avoid redundant idle_cpu() call in update_sg_lb_stats()
Part of the responsibility of the update_sg_lb_stats() function is to
update the idle_cpus statistical counter in struct sg_lb_stats. This
check is done by calling idle_cpu(). The idle_cpu() function, in
turn, checks a number of fields within the run queue structure such
as rq->curr and rq->nr_running.

With the current layout of the run queue structure, rq->curr and
rq->nr_running are in separate cachelines. The rq->curr variable is
checked first followed by nr_running. As nr_running is also accessed
by update_sg_lb_stats() earlier, it makes no sense to load another
cacheline when nr_running is not 0 as idle_cpu() will always return
false in this case.

This patch eliminates this redundant cacheline load by checking the
cached nr_running before calling idle_cpu().

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Douglas Hatch <doug.hatch@hpe.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hpe.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1448478580-26467-2-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit a426f99c91d1036767a7819aaaba6bd3191b7f06)
Signed-off-by: Javi Merino <javi.merino@arm.com>
2016-08-15 13:23:38 -07:00
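
The reordered check, sketched in the context of update_sg_lb_stats() (illustrative, but very close to the cherry-picked change):

    unsigned int nr_running = rq->nr_running;   /* already loaded here */

    if (nr_running > 1)
            *overload = true;

    /* No need to call idle_cpu() if nr_running is not 0: idle_cpu() would
     * only touch another cacheline (rq->curr) to return false anyway. */
    if (!nr_running && idle_cpu(i))
            sgs->idle_cpus++;
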
Ricky Liang
34828bd3e7 FIXUP: sched: scheduler-driven cpu frequency selection
Two fixups that have been reported on LKML. The next version of the
scheduler-driven cpu frequency selection patch set should include
these fixes, and we can then drop this patch.

Signed-off-by: Ricky Liang <jcliang@chromium.org>

Change-Id: Ia2f8b5c0dd5dac06580256eeb4b259929688af68
2016-08-15 13:22:59 -07:00
Winter Wang
1a0cf8ace1 UPSTREAM: usb: gadget: configfs: add mutex lock before unregister gadget
There may be a race condition if f_fs calls unregister_gadget_item in
ffs_closed() when unregister_gadget is called by UDC store at the same time.
This leads to a kernel NULL pointer dereference:

[  310.644928] Unable to handle kernel NULL pointer dereference at virtual address 00000004
[  310.645053] init: Service 'adbd' is being killed...
[  310.658938] pgd = c9528000
[  310.662515] [00000004] *pgd=19451831, *pte=00000000, *ppte=00000000
[  310.669702] Internal error: Oops: 817 [#1] PREEMPT SMP ARM
[  310.675211] Modules linked in:
[  310.678294] CPU: 0 PID: 1537 Comm: ->transport Not tainted 4.1.15-03725-g793404c #2
[  310.685958] Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
[  310.692493] task: c8e24200 ti: c945e000 task.ti: c945e000
[  310.697911] PC is at usb_gadget_unregister_driver+0xb4/0xd0
[  310.703502] LR is at __mutex_lock_slowpath+0x10c/0x16c
[  310.708648] pc : [<c075efc0>]    lr : [<c0bfb0bc>]    psr: 600f0113
<snip..>
[  311.565585] [<c075efc0>] (usb_gadget_unregister_driver) from [<c075e2b8>] (unregister_gadget_item+0x1c/0x34)
[  311.575426] [<c075e2b8>] (unregister_gadget_item) from [<c076fcc8>] (ffs_closed+0x8c/0x9c)
[  311.583702] [<c076fcc8>] (ffs_closed) from [<c07736b8>] (ffs_data_reset+0xc/0xa0)
[  311.591194] [<c07736b8>] (ffs_data_reset) from [<c07738ac>] (ffs_data_closed+0x90/0xd0)
[  311.599208] [<c07738ac>] (ffs_data_closed) from [<c07738f8>] (ffs_ep0_release+0xc/0x14)
[  311.607224] [<c07738f8>] (ffs_ep0_release) from [<c023e030>] (__fput+0x80/0x1d0)
[  311.614635] [<c023e030>] (__fput) from [<c014e688>] (task_work_run+0xb0/0xe8)
[  311.621788] [<c014e688>] (task_work_run) from [<c010afdc>] (do_work_pending+0x7c/0xa4)
[  311.629718] [<c010afdc>] (do_work_pending) from [<c010770c>] (work_pending+0xc/0x20)

For functions using FunctionFS, e.g. Android adbd will close /dev/usb-ffs/adb/ep0
when the usb IO thread fails, but switching adb from on to off also triggers writing
"none" > UDC. These two operations both call unregister_gadget, which leads
to the panic above.

Add a mutex around the unregister_gadget call in the API used by f_fs.

Signed-off-by: Winter Wang <wente.wang@nxp.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
2016-08-15 09:53:21 -07:00
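
A sketch of the added locking (names follow drivers/usb/gadget/configfs.c; illustrative rather than the literal diff):

    void unregister_gadget_item(struct config_item *item)
    {
            struct gadget_info *gi = to_gadget_info(item);

            /* Serialise the f_fs close path with "echo none > UDC" so the
             * gadget cannot be torn down twice concurrently. */
            mutex_lock(&gi->lock);
            unregister_gadget(gi);
            mutex_unlock(&gi->lock);
    }
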
Badhri Jagan Sridharan
aa3cda16a5 ANDROID: dm-verity: adopt changes made to dm callbacks
v4.4 introduced changes to the callbacks used for
dm-linear and dm-verity-target targets. Move to those headers
in dm-android-verity.

Verified on hikey while having
BOARD_USES_RECOVERY_AS_BOOT := true
BOARD_BUILD_SYSTEM_ROOT_IMAGE := true

BUG: 27339727
Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
Change-Id: Ic64950c3b55f0a6eaa570bcedc2ace83bbf3005e
2016-08-12 14:18:03 -07:00
Al Viro
4804692cb1 UPSTREAM: ecryptfs: fix handling of directory opening
(cherry picked from commit 6a480a7842545ec520a91730209ec0bae41694c1)

First of all, trying to open them r/w is idiocy; it's guaranteed to fail.
Moreover, assigning ->f_pos and assuming that everything will work is
blatantly broken - try that with e.g. tmpfs as underlying layer and watch
the fireworks.  There may be a non-trivial amount of state associated with
current IO position, well beyond the numeric offset.  Using the single
struct file associated with underlying inode is really not a good idea;
we ought to open one for each ecryptfs directory struct file.

Additionally, file_operations both for directories and non-directories are
full of pointless methods; non-directories should *not* have ->iterate(),
directories should not have ->flush(), ->fasync() and ->splice_read().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

Change-Id: I4813ce803f270fdd364758ce1dc108b76eab226e
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-08-12 13:44:30 -07:00
Jeff Mahoney
7997255b0d UPSTREAM: ecryptfs: don't allow mmap when the lower fs doesn't support it
(cherry picked from commit f0fe970df3838c202ef6c07a4c2b36838ef0a88b)

There are legitimate reasons to disallow mmap on certain files, notably
in sysfs or procfs.  We shouldn't emulate mmap support on file systems
that don't offer support natively.

CVE-2016-1583

Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Cc: stable@vger.kernel.org
[tyhicks: clean up f_op check by using ecryptfs_file_to_lower()]
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>

Change-Id: I66e3670771630a25b0608f10019d1584e9ce73a6
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-08-12 13:44:17 -07:00
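
The check the commit above describes, sketched (close to the upstream change, using the ecryptfs_file_to_lower() helper mentioned in the message):

    static int ecryptfs_mmap(struct file *file, struct vm_area_struct *vma)
    {
            struct file *lower_file = ecryptfs_file_to_lower(file);

            /* Don't emulate mmap on top of file systems (e.g. procfs/sysfs
             * entries) that don't support it natively. */
            if (!lower_file->f_op->mmap)
                    return -ENODEV;

            return generic_file_mmap(file, vma);
    }
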
Jeff Mahoney
1da2a42de4 UPSTREAM: Revert "ecryptfs: forbid opening files without mmap handler"
(cherry picked from commit 78c4e172412de5d0456dc00d2b34050aa0b683b5)

This reverts upstream commit 2f36db71009304b3f0b95afacd8eba1f9f046b87.

It fixed a local root exploit but also introduced a dependency on
the lower file system implementing an mmap operation just to open a file,
which is a bit of a heavy hammer.  The right fix is to have mmap depend
on the existence of the mmap handler instead.

Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Cc: stable@vger.kernel.org
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>

Fixes: Change-Id I0be77c7f8bd3046bc34cd87ef577529792d479bc
       ("UPSTREAM: ecryptfs: forbid opening files without mmap handler")
Change-Id: Ib9bc87099f7f89e4e12dbc1a79e884dcadb1befb
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-08-12 13:44:01 -07:00
Amit Pundir
0b848391c2 ANDROID: net: core: fix UID-based routing
Fix RTA_UID enum to match it with the Android userspace code which
assumes RTA_UID=18.

With this patch all Android kernel networking unit tests mentioned at
https://source.android.com/devices/tech/config/kernel_network_tests.html
pass.

Without this patch multinetwork_test.py unit test fails.

Change-Id: I3ff36670f7d4e5bf5f01dce584ae9d53deabb3ed
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-08-12 13:38:04 -07:00
Amit Pundir
ef58c0a269 ANDROID: net: fib: remove duplicate assignment
Remove duplicate FRA_GOTO assignment.

Fixes: fd2cf795f3 ("net: core: Support UID-based routing.")

Change-Id: I462c24b16fdef42ae2332571a0b95de3ef9d2e25
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-08-12 13:37:44 -07:00
John Stultz
fb8741a0ba FROMLIST: proc: Fix timerslack_ns CAP_SYS_NICE check when adjusting self
In changing from checking ptrace_may_access(p, PTRACE_MODE_ATTACH_FSCREDS)
to capable(CAP_SYS_NICE), I missed that ptrace_may_access succeeds
when p == current, but the CAP_SYS_NICE check doesn't.

Thus while the previous commit was intended to loosen the needed
privileges to modify a process's timerslack, it needlessly restricted
a task modifying its own timerslack via /proc/<tid>/timerslack_ns
(which is also permitted via the PR_SET_TIMERSLACK method).

This patch corrects this by checking if p == current before checking
the CAP_SYS_NICE value.

Cc: Kees Cook <keescook@chromium.org>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
CC: Arjan van de Ven <arjan@linux.intel.com>
Cc: Oren Laadan <orenl@cellrox.com>
Cc: Ruchi Kandoi <kandoiruchi@google.com>
Cc: Rom Lemarchand <romlem@android.com>
Cc: Todd Kjos <tkjos@google.com>
Cc: Colin Cross <ccross@android.com>
Cc: Nick Kralevich <nnk@google.com>
Cc: Dmitry Shmidt <dimitrysh@google.com>
Cc: Elliott Hughes <enh@google.com>
Cc: Android Kernel Team <kernel-team@android.com>
Mailing-list-url: http://www.spinics.net/lists/kernel/msg2317488.html
Change-Id: Ia3e8aff07c2d41f55b6617502d33c39b7d781aac
Signed-off-by: John Stultz <john.stultz@linaro.org>
2016-08-11 15:20:37 -07:00
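
The corrected permission check, sketched (illustrative of the logic in the proc timerslack_ns handler; "err" and "out" are placeholders for the handler's error path):

    /* Only demand CAP_SYS_NICE when adjusting some *other* task's
     * timerslack; a task may always tune its own value. */
    if (p != current && !capable(CAP_SYS_NICE)) {
            err = -EPERM;
            goto out;
    }
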
Matt Wagantall
2a4445395f sched/rt: Add Kconfig option to enable panicking for RT throttling
This may be useful for detecting and debugging RT throttling issues.

Change-Id: I5807a897d11997d76421c1fcaa2918aad988c6c9
Signed-off-by: Matt Wagantall <mattw@codeaurora.org>
[rameezmustafa@codeaurora.org: Port to msm-3.18]
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
[jstultz: forwardported to 4.4]
Signed-off-by: John Stultz <john.stultz@linaro.org>
2016-08-11 14:26:55 -07:00
Matt Wagantall
989f33f789 sched/rt: print RT tasks when RT throttling is activated
Existing debug prints do not provide any clues about which tasks
may have triggered RT throttling. Print the names and PIDs of
all tasks on the throttled rt_rq to help narrow down the source
of the problem.

Change-Id: I180534c8a647254ed38e89d0c981a8f8bccd741c
Signed-off-by: Matt Wagantall <mattw@codeaurora.org>
[rameezmustafa@codeaurora.org: Port to msm-3.18]
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-08-11 14:26:54 -07:00
Peter Zijlstra
3276c3d7bc UPSTREAM: sched: Fix a race between __kthread_bind() and sched_setaffinity()
Because sched_setscheduler() checks p->flags & PF_NO_SETAFFINITY
without locks, a caller might observe an old value and race with the
set_cpus_allowed_ptr() call from __kthread_bind() and effectively undo
it:

	__kthread_bind()
	  do_set_cpus_allowed()
						<SYSCALL>
						  sched_setaffinity()
						    if (p->flags & PF_NO_SETAFFINITIY)
						    set_cpus_allowed_ptr()
	  p->flags |= PF_NO_SETAFFINITY

Fix the bug by putting everything under the regular scheduler locks.

This also closes a hole in the serialization of task_struct::{nr_,}cpus_allowed.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dedekind1@gmail.com
Cc: juri.lelli@arm.com
Cc: mgorman@suse.de
Cc: riel@redhat.com
Cc: rostedt@goodmis.org
Link: http://lkml.kernel.org/r/20150515154833.545640346@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 25834c73f9)
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>

BUG=chrome-os-partner:44828
TEST=Boot kernel on Oak.
TEST=smaug-release and strago-release trybots.

Change-Id: Id3c898c5ee1a22ed704e83f2ecf5f78199280d38
Reviewed-on: https://chromium-review.googlesource.com/321264
Commit-Ready: Ricky Liang <jcliang@chromium.org>
Tested-by: Ricky Liang <jcliang@chromium.org>
Reviewed-by: Ricky Liang <jcliang@chromium.org>

Conflicts:
	kernel/sched/core.c
2016-08-11 14:26:54 -07:00
Srinath Sridharan
07ec7db165 sched/fair: Favor higher cpus only for boosted tasks
This CL separates the notion of boost and prefer_idle schedtune
attributes in cpu selection. Today only top-app
tasks are boosted. The CPU selection is slightly tweaked such that
higher order cpus are preferred only for boosted tasks (top-app) and the
rest would be skewed towards lower order cpus.
This avoids starvation issues for fg tasks when interacting with high
priority top-app tasks (a problem often seen in the case of system_server).

bug: 30245369
bug: 30292998
Change-Id: I0377e00893b9f6586eec55632a265518fd2fa8a1

Conflicts:
	kernel/sched/fair.c
2016-08-11 14:26:53 -07:00
Christoph Lameter
142b2acc79 vmstat: make vmstat_updater deferrable again and shut down on idle
Currently the vmstat updater is not deferrable as a result of commit
ba4877b9ca ("vmstat: do not use deferrable delayed work for
vmstat_update").  This in turn can cause multiple interruptions of the
applications because the vmstat updater may run at any time.

Make vmstat_update deferrable again and provide a function that folds
the differentials when the processor is going to idle mode thus
addressing the issue of the above commit in a clean way.

Note that the shepherd thread will continue scanning the differentials
from another processor and will reenable the vmstat workers if it
detects any changes.

Change-Id: Idf256cfacb40b4dc8dbb6795cf06b34e8fec7a06
Fixes: ba4877b9ca ("vmstat: do not use deferrable delayed work for vmstat_update")
Signed-off-by: Christoph Lameter <cl@linux.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Git-commit: 0eb77e9880321915322d42913c3b53241739c8aa
[shashim@codeaurora.org: resolve minor merge conflicts]
Signed-off-by: Shiraz Hashim <shashim@codeaurora.org>
[jstultz: fwdport to 4.4]
Signed-off-by: John Stultz <john.stultz@linaro.org>
2016-08-11 14:26:53 -07:00
Juri Lelli
74b4fa8e5c sched/fair: call OPP update when going idle after migration
When a task leaves a rq because it is migrated away it carries its
utilization with it. In this case an OPP update on the src rq might be
needed. The corresponding update at the dst rq will happen at enqueue time.

Change-Id: I22754a43760fc8d22a488fe15044af93787ea7a8

sched/fair: Fix uninitialised variable in idle_balance

compiler warned, looks legit.

Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2016-08-11 14:26:52 -07:00
Juri Lelli
bf93a3672c sched/cpufreq_sched: fix thermal capping events
cpufreq_sched_limits (called when CPUFREQ_GOV_LIMITS event happens)
bails out if policy->rwsem is already locked. However, that rwsem is
always guaranteed to be locked when we get here after a thermal
throttling event happens:

 th_throttling ->
   cpufreq_update_policy()
     ...
     down_write(&policy->rwsem);
     ...
     cpufreq_set_policy() ->
       ...
       __cpufreq_governor(policy, CPUFREQ_GOV_LIMITS); ->
         cpufreq_sched_limits()
         ...
         if (!down_write_trylock(&policy->rwsem))
                 return; <-- BAIL OUT!

So, we don't currently react immediately to a thermal capping event (even
if the reaction is still quick in practice, ~1ms, as lots of events are likely
to trigger a frequency selection on a highly loaded system).

Fix this bug by removing the bail out condition.

While we are at it we also slightly change handling of the new limits by
clamping the last requested_freq between the policy's max and min. Doing so
gives us the opportunity to correctly restore the last requested
frequency as soon as a thermal unthrottling event happens.

bug: 30481949

Change-Id: I3c13e818f238c1ffa66b34e419e8b87314b57427
Suggested-by: Javi Merino <javi.merino@arm.com>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Srinath Sridharan <srinathsr@google.com>
[jstultz: fwdported to 4.4]
Signed-off-by: John Stultz <john.stultz@linaro.org>
2016-08-11 14:26:51 -07:00
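
A rough sketch of the reworked limits handler described above (struct and field names such as gov_data/requested_freq are assumptions based on the cpufreq_sched governor, not the literal patch):

    static void cpufreq_sched_limits(struct cpufreq_policy *policy)
    {
            struct gov_data *gd = policy->governor_data;
            unsigned int freq;

            /* No down_write_trylock(&policy->rwsem) bail-out any more: the
             * rwsem is already held on the thermal throttling path. */
            freq = clamp(gd->requested_freq, policy->min, policy->max);

            __cpufreq_driver_target(policy, freq, CPUFREQ_RELATION_L);
    }
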
Srinath Sridharan
c80a9af21a sched/fair: Picking cpus with low OPPs for tasks that prefer idle CPUs
When idle cpus cannot be found for Top-app/FG tasks, the cpu selection
algorithm picks the cpu with the lowest OPP amongst the busy cpus as a second
choice.

Mitigates the "runnable" time for ui and render threads.

bug: 30481949
bug: 30342017
bug: 30508678
Change-Id: I5a97e31d33284895c0fa6f6942102713ee576d77
2016-08-11 14:26:51 -07:00
Patrick Bellasi
b9534b8f01 FIXUP: sched/tune: do initialization as a postcore_initcall
SchedTune needs to walk the scheduling domains to compute the energy
normalization constants used for PE space filtering. To build such
constants we need the energy model data for each CPU in the system.
However, by walking the SDs as a late initcall stage, the userspace has
been already initialized and it could happen that some CPUs are
hotplugged out.
For example, this could happen if a user-space thermal manager daemon
detects that CPUs are to much hot during the boot process.

To avoid such a race condition we can anticipate the SchedTune
initialization code to be a postcore_initicall. This allows to keep the
SchedTune initialization code as simple as an initcall while still safely
relaying on SDs provided data.

Such calls are executed before user-space is initialized and thus, apart
from the case of unlucky early-init kernel space generated hotplugs,
this solution should be safe enough to get all the data we need.

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
[jstultz: fwdported to 4.4]
Signed-off-by: John Stultz <john.stultz@linaro.org>
2016-08-11 14:26:50 -07:00
Patrick Bellasi
93db70f21c DEBUG: sched: add tracepoint for RD overutilized
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
2016-08-11 14:26:49 -07:00
Srinath Sridharan
c5a00c2dad sched/tune: Introducing a new schedtune attribute prefer_idle
Hint to enable biasing of tasks towards idle cpus, even when a given
task is negatively boosted. The mechanism allows up to 20% reduction in
camera power without hurting performance.

bug: 28312446
Change-Id: I97ea5671aa1e6bcb165408b41e17bc82e41c2c9e
2016-08-11 14:26:49 -07:00
Todd Kjos
d4cda03828 sched: use util instead of capacity to select busy cpu
If cpus are busy, the cpu selection algorithm was favoring
cpus with lower capacity. This can result in uneven packing
since there will be a bias toward the same cpu until there
is a capacity change. Instead use the utilization so there
is immediate feedback as tasks are assigned.

BUG: 30115868

Change-Id: I0ac7ae3ab5d8f2f5a5838c29bb6da2c3e8ef44e8
2016-08-11 14:26:48 -07:00
Chris Redpath
23ed57dbcc arch_timer: add error handling when the MPM global timer is cleared
Bug: 29000863
Signed-off-by: albert.zl_huang <albert.zl_huang@htc.com>
Change-Id: I2b5a28b0a9edb31bdaa1ca2310397dd2f36f6c23

Updated to use arch_timer_read_counter() as arch_counter_get_cntvct
doesn't exist in this kernel.

Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2016-08-11 14:26:47 -07:00
Todd Kjos
8935b6b4d2 FIXUP: sched: Fix double-release of spinlock in move_queued_task
BUG: 29519455
Change-Id: I4d1c27a1b4bcbba03d4b175d170cfe1701a90ffd
2016-08-11 14:26:47 -07:00
Todd Kjos
740d312ce8 FIXUP: sched/fair: Fix hang during suspend in sched_group_energy
BUG: 29353986
Change-Id: I0d0d8d5c107a2e0bd219819e036091106bb40e11
2016-08-11 14:26:46 -07:00
Patrick Bellasi
5156b67204 FIXUP: sched: fix SchedFreq integration for both PELT and WALT
The current kernel allows using either PELT or WALT to track CPU utilization.
One of the main differences between the two approaches is that PELT
tracks only utilization of SCHED_OTHER classes while WALT tracks all tasks
with a single signal.

The current sched_freq_tick does not make this distinction and, when WALT
is in use, we end up adding multiple times the contribution related to
the RT and DL classes. This patch fixes this issue by:

1. providing two different code paths for PELT and WALT, thus guaranteeing that
   when we switch to PELT we get the original behaviour based on the assumption
   that class aggregation is done underneath by SchedFreq.

2. avoiding the double accounting of DL and RT workloads, when WALT is in use,
   by just adding a margin to the original WALT signal when we need to check
   if the CFS capacity has to be increased.

Change-Id: I7326fd50e868e97fb5e12351917e9d2969bfdae7
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
2016-08-11 14:26:45 -07:00
Todd Kjos
782c9d64a1 sched: EAS: Avoid causing spikes to max-freq unnecessarily
During scheduler tick handling, the frequency was being set to
max-freq if the current frequency is less than the current
utilization. Change to just request the "right" frequency instead
of max.

BUG: 29871410
Change-Id: I6fe65b14413da44b1520ba116f72320083eb92f8
2016-08-11 14:26:45 -07:00
Patrick Bellasi
dfc1151b46 FIXUP: sched: fix set_cfs_cpu_capacity when WALT is in use
The CPU utilization reported when WALT is in use already tracks the
contributions due to RT and DL workloads. However, SchedFreq exposes
different capacity update functions, one for each class, and aggregates the
classes' utilization internally at update_cpu_capacity_request() call time.

This patch ensures that when WALT is in use, the:
  cpu_sched_capacity_reqs::cfs
value is tracking just the load generated by SCHED_OTHER tasks.

Change-Id: Ibd9c9a10874a1d91f62477034548f7664e57cd6a
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
2016-08-11 14:26:44 -07:00
Srinath Sridharan
519c62750e sched/walt: Accounting for number of irqs pending on each core
Schedules on a core whose irq count is less than a threshold.
Improves I/O performance of EAS.

Change-Id: I08ff7dd0d22502a0106fc636b1af2e6fe9e758b5
2016-08-11 14:26:43 -07:00
Srivatsa Vaddagiri
efb86bd08a sched: Introduce Window Assisted Load Tracking (WALT)
Use a window-based view of time in order to track task
demand and CPU utilization in the scheduler.

Window Assisted Load Tracking (WALT) implementation credits:
 Srivatsa Vaddagiri, Steve Muckle, Syed Rameez Mustafa, Joonwoo Park,
 Pavan Kumar Kondeti, Olav Haugan

2016-03-06: Integration with EAS/refactoring by Vikram Mulukutla
            and Todd Kjos

Change-Id: I21408236836625d4e7d7de1843d20ed5ff36c708

Includes fixes for issues:

eas/walt: Use walt_ktime_clock() instead of ktime_get_ns() to avoid a
race resulting in watchdog resets
BUG: 29353986
Change-Id: Ic1820e22a136f7c7ebd6f42e15f14d470f6bbbdb

Handle walt accounting anomaly during resume

During resume, there is a corner case where on wakeup, a task's
prev_runnable_sum can go negative. This is a workaround that
fixes the condition and warns (instead of crashing).

BUG: 29464099
Change-Id: I173e7874324b31a3584435530281708145773508

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Srinath Sridharan <srinathsr@google.com>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
[jstultz: fwdported to 4.4]
Signed-off-by: John Stultz <john.stultz@linaro.org>
2016-08-11 14:26:43 -07:00
Patrick Bellasi
345be81dba sched/tune: fix PB and PC cuts indexes definition
The current definition of the Performance Boost (PB) and Performance Constraint
(PC) regions has two main issues:
1) in the computation of the boost index we overflow the thresholds_gains
   table for boost=100
2) the two cuts did _NOT_ have the same ratio

The last point means that when boost=0 we do _not_ have a "standard" EAS
behaviour, i.e. accepting all candidates which decrease energy regardless
of their impact on performance. Instead, we accept only schedule candidates
which are in the Optimal region, i.e. decrease energy while increasing
performance.

This behaviour can also have a negative impact on CPU selection policies
which try to spread tasks to reduce latencies. Indeed, for example,
we could end up rejecting a schedule candidate which wants to move a task
from a congested CPU to an idle one, specifically in the case where
the target CPU will be running at a lower OPP.

This patch fixes these two issues by properly clamping the boost value
in the appropriate range to compute the threshold indexes as well as
by using the same threshold index for both cuts.

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Signed-off-by: Srinath Sridharan <srinathsr@google.com>

sched/tune: fix update of threshold index for boost groups

When SchedTune is configured to work with CGroup mode, each time we update
the boost value of a group we do not update the threshold indexes for the
definition of the Performance Boost (PB) and Performance Constraint (PC)
regions. This means that while the OPP boosting and CPU biasing selection
is working as expected, the __schedtune_accept_deltas function is always
using the initial values for these cuts.

This patch ensures that each time a new boost value is configured for a
boost group, the cuts for the PB and PC region are properly updated too.

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Signed-off-by: Srinath Sridharan <srinathsr@google.com>

sched/tune: update PC and PB cuts definition

The current definition of Performance Boost (PB) and Performance
Constraint (PC) cuts defines two "dead regions":
- up to 20% boost: we are in energy-reduction only mode, i.e.
  accept all candidates which reduce energy
- over 70% boost: we are in performance-increase only mode, i.e.
  accept only sched candidates which do not reduce performance

This patch uses a more fine grained configuration where these two "dead
regions" are reduced to: up to 10% and over 90%.
This should allow us to have some boosting benefits starting from 10% boost
values as well as not being too permissive starting from boost values
of 80%.

Suggested-by: Leo Yan <leo.yan@linaro.org>
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Signed-off-by: Srinath Sridharan <srinathsr@google.com>

bug: 28312446
Change-Id: Ia326c66521e38c98e7a7eddbbb7c437875efa1ba

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
2016-08-11 14:26:42 -07:00
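
A sketch of the clamping and shared index described above (table and field names are illustrative, not the exact schedtune code):

    /* Clamp the boost percentage into [0, 99] so boost=100 cannot index
     * past the end of the thresholds table, then use the same index for
     * both the Performance Boost and Performance Constraint cuts. */
    int boost_pct = clamp(boost, 0, 99);
    int idx = boost_pct / 10;

    perf_boost_idx     = idx;
    perf_constrain_idx = idx;
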
Todd Kjos
369bcbbae4 sched/fair: optimize idle cpu selection for boosted tasks
find_best_target CPU selection is biased towards lower CPU IDs. Bias
towards higher CPUs for boosted tasks. For boosted tasks unconditionally
use the idle CPU returned by find_best_target.

BUG: 29512132
Change-Id: I3d650051752163fcf3dc7909751d1fde3f9d17c0

Conflicts:
	kernel/sched/fair.c
2016-08-11 14:26:41 -07:00
Patrick Bellasi
af14760e19 FIXUP: sched/tune: fix accounting for runnable tasks
Contains:

sched/tune: fix accounting for runnable tasks (1/5)

The accounting for tasks into boost groups of different CPUs is currently
broken mainly because:
a) we do not properly track the change of boost group of a RUNNABLE task
b) there are race conditions between migration code and accounting code

This patch provides a fix to ensure enqueue/dequeue
accounting also covers throttled tasks.

Without this patch it can happen that a task is enqueued into a throttled
RQ and thus is not accounted for in the boosting of the corresponding RQ.
We could argue that a throttled task should not boost a CPU, however:
a) properly implementing CPU boosting considering throttled tasks would
   greatly increase the complexity of the solution
b) it's not easy to quantify the benefits introduced by such a more
   complex solution

Since task throttling requires the usage of the CFS bandwidth controller,
which is not widely used on mobile systems (at least not by Android kernels
so far), for the time being we go for the simple solution and boost also
for throttled RQs.

sched/tune: fix accounting for runnable tasks (2/5)

This patch provides the code required to enforce proper locking.
A per-boost-group spinlock has been added to guarantee atomic
accounting of tasks as well as to serialise enqueue/dequeue operations,
triggered by tasks migrations, with cgroups's attach/detach operations.

sched/tune: fix accounting for runnable tasks (3/5)

This patch adds cgroups {allow,can,cancel}_attach callbacks.

Since a task can be migrated between boost groups while it's running,
the CGroups's attach callbacks have been added to properly migrate
boost contributions of RUNNABLE tasks.

The RQ's lock is used to serialise enqueue/dequeue operations, triggered
by task migrations, with cgroups' attach/detach operations, while the
SchedTune per-CPU lock is used to guarantee atomicity of the accounting within
the CPU.

NOTE: the current implementation does not allow a concurrent CPU migration
      and CGroups change.

sched/tune: fix accounting for runnable tasks (4/5)

This fixes accounting for exiting tasks by adding a dedicated call early
in the do_exit() syscall, which disables SchedTune accounting as soon as a
task is flagged PF_EXITING.

This flag is set before the multiple dequeue/enqueue dance triggered
by cgroup_exit(), which only serves to inject useless task movements,
thus increasing the possibility of race conditions with the migration code.
The schedtune_exit_task() call does the last dequeue of a task from its
current boost group. This is a solution more aligned with what happens in
mainline kernels (>v4.4), where the cgroup exit path no longer moves a dying
task to the root control group.

sched/tune: fix accounting for runnable tasks (5/5)

To avoid accounting issues at startup, this patch disables the SchedTune
accounting until the required data structures have been properly
initialized.

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
[jstultz: fwdported to 4.4]
Signed-off-by: John Stultz <john.stultz@linaro.org>
2016-08-11 14:26:41 -07:00