[ Upstream commit e67b8a685c7c984e834e3181ef4619cd7025a136 ]
Neither ___bpf_prog_run nor the JITs accept the BPF_ALU64 | BPF_END
instruction encoding, so the verifier must reject it.
Also adds a new test case.
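The rejected encoding is easy to sketch; the actual patch folds this
into the verifier's existing reserved-field test in check_alu_op(), so
treat this standalone form as illustrative only:

  /* BPF_END (byte swap) is only defined for the BPF_ALU class;
   * neither the interpreter nor the JITs implement it for BPF_ALU64,
   * so the verifier must reject that encoding up front. */
  if (BPF_CLASS(insn->code) == BPF_ALU64 &&
      BPF_OP(insn->code) == BPF_END) {
          verbose("BPF_END not allowed with BPF_ALU64 class\n");
          return -EINVAL;
  }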
Fixes: 17a5267067 ("bpf: verifier (add verifier core)")
Signed-off-by: Edward Cree <ecree@solarflare.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
In order to set rq->misfit_task in time, call update_task_ravg() prior
to task_tick. This reduces upmigration delay by 1 scheduler window.
Change-Id: I7cc80badd423f2e7684125fbfd853b0a3610f0e8
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
At present, sched_freq_tick() skips the capacity update when the
current frequency is fmax. This can cause an incorrect frequency drop
when a CPU-bound task goes to sleep, for example:
1) A task (A) enqueues onto CPU 0 and executes for a long time.
2) A new task (B) with low task demand enqueues onto CPU 1 and also
executes for a long time, becoming a CPU-bound task.
3) Both CPU 0 and CPU 1 get scheduler ticks but skip sched_freq_tick()
since the current frequency is fmax.
4) Task (A) sleeps and lowers CPU 0's capacity request.
5) Because task (B) voted for CPU capacity at step 2 with low demand
and skipped further requests afterwards, the cluster frequency for
both CPU 0 and CPU 1 drops to match the capacity voted by CPU 1 at
step 2, even though task (B) on CPU 1 requires max capacity.
Fix this incorrectness by not skipping the CPU capacity vote in the
tick path.
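A sketch of the resulting tick path (the helper names around the vote
are illustrative, not the exact ones in the tree):

  static void sched_freq_tick(int cpu)
  {
          unsigned long capacity_req;

          /* Previously:
           *   if (cpu_cur_freq(cpu) == cpu_max_freq(cpu))
           *           return;
           * which let a stale low vote (task B at step 2) define the
           * cluster frequency once another CPU dropped its request.
           * Always refresh this CPU's vote instead. */
          capacity_req = capacity_with_margin(cpu_util(cpu));
          update_cpu_capacity_request(cpu, capacity_req);
  }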
Change-Id: Ieb46af1ac96ffce7a5532c58c7f07bf1ada06b86
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
At present need_active_balance() determines whether an active
upmigration is needed by using capacity_of(). A CPU's capacity
may be reduced by RT pressure, and therefore distinguishing
capability differences with capacity_of() may lead to suboptimal
active migrations to less capable CPUs. Use capacity_orig_of()
to distinguish differently capable CPUs in addition to
capacity_of(), thus avoiding placing tasks on less capable CPUs
due to instantaneous RT pressure.
Change-Id: I3e1435246a8edc3ad618ef98a34866cfbd8c16a5
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
[markivx: Reworked the commit text a bit]
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
There's no need for a separate hierarchy of notifiers, APIs
and variables in walt.c for the purpose of applying frequency
and IPC invariance. Let's just use capacity_curr_of and get
rid of a lot of the infrastructure relating to capacity,
load_scale_factor etc.
Change-Id: Ia220e2c896373fa535db05bff60f9aa33aefc978
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
commit 28585a832602747cbfa88ad8934013177a3aae38 upstream.
A number of architectures invoke rcu_irq_enter() on exception entry in
order to allow RCU read-side critical sections in the exception handler
when the exception is from an idle or nohz_full CPU. This works, at
least unless the exception happens in an NMI handler. In that case,
rcu_nmi_enter() would already have exited the extended quiescent state,
which would mean that rcu_irq_enter() would (incorrectly) cause RCU
to think that it is again in an extended quiescent state. This will
in turn result in lockdep splats in response to later RCU read-side
critical sections.
This commit therefore causes rcu_irq_enter() and rcu_irq_exit() to
take no action if there is an rcu_nmi_enter() in effect, thus avoiding
the unscheduled return to RCU quiescent state. This in turn should
make the kernel safe for on-demand RCU voyeurism.
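Sketched against the 4.x rcu_dynticks state (the same early return is
added to rcu_irq_exit()):

  void rcu_irq_enter(void)
  {
          struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);

          /* An exception (e.g. a page fault) may fire inside an NMI
           * handler; rcu_nmi_enter() already exited the extended
           * quiescent state, so take no action here. */
          if (rdtp->dynticks_nmi_nesting)
                  return;

          /* ... normal irq-entry accounting follows ... */
  }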
Link: http://lkml.kernel.org/r/20170922211022.GA18084@linux.vnet.ibm.com
Cc: stable@vger.kernel.org
Fixes: 0be964be0 ("module: Sanitize RCU usage and locking")
Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Task->on_rq has three states:
0 - Task is not on runqueue (rq)
1 (TASK_ON_RQ_QUEUED) - Task is on rq
2 (TASK_ON_RQ_MIGRATING) - Task is on rq but in the
process of being migrated to another rq
When a task is moving between rqs, its task->on_rq state should be
TASK_ON_RQ_MIGRATING in order for WALT to account the rq's cumulative
runnable average correctly. Without such state marking in all the
classes, WALT's update_history() would try to fix up the task's demand,
which was never contributed to any CPU during migration.
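For reference, the pattern each class's migration path must follow so
WALT sees the task as in flight (simplified sketch; the dequeue/enqueue
details vary per class):

  p->on_rq = TASK_ON_RQ_MIGRATING;  /* WALT: demand is in flight */
  dequeue_task(src_rq, p, 0);       /* leaves src cumulative avg */
  set_task_cpu(p, new_cpu);
  enqueue_task(dst_rq, p, 0);       /* joins dst cumulative avg */
  p->on_rq = TASK_ON_RQ_QUEUED;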
Change-Id: Iced3428f3924fe8ab5d0075698273ead04f12d5b
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
[joonwoop: Reinforced changelog to explain why this is needed by WALT.
Fixed conflicts in deadline.c]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
commit 50e76632339d4655859523a39249dd95ee5e93e7 upstream.
Cpusets vs. suspend-resume is _completely_ broken. And it got noticed
because it now resulted in non-cpuset usage breaking too.
On suspend cpuset_cpu_inactive() doesn't call into
cpuset_update_active_cpus() because it doesn't want to move tasks about,
there is no need, all tasks are frozen and won't run again until after
we've resumed everything.
But this means that when we finally do call into
cpuset_update_active_cpus() after resuming the last frozen cpu in
cpuset_cpu_active(), the top_cpuset will not have any difference with
the cpu_active_mask and thus it will not in fact do _anything_.
So the cpuset configuration will not be restored. This was largely
hidden because we would unconditionally create identity domains and
mobile users would not in fact use cpusets much. And servers that do use
cpusets tend to not suspend-resume much.
An additional problem is that we'd not in fact wait for the cpuset work to
finish before resuming the tasks, allowing spurious migrations outside
of the specified domains.
Fix the rebuild by introducing cpuset_force_rebuild() and fix the
ordering with cpuset_wait_for_hotplug().
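Both pieces are small; sketched from the description above
(cpuset_hotplug_work is the existing hotplug worker):

  static bool force_rebuild;

  void cpuset_force_rebuild(void)
  {
          /* resume: top_cpuset may equal cpu_active_mask yet the
           * sched domains are still the stale identity ones */
          force_rebuild = true;
  }

  void cpuset_wait_for_hotplug(void)
  {
          /* don't thaw tasks before the domains are rebuilt */
          flush_work(&cpuset_hotplug_work);
  }

  /* in cpuset_hotplug_workfn(): */
  if (cpus_updated || force_rebuild) {
          force_rebuild = false;
          rebuild_sched_domains();
  }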
Reported-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <stable@vger.kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: deb7aa308e ("cpuset: reorganize CPU / memory hotplug handling")
Link: http://lkml.kernel.org/r/20170907091338.orwxrqkbfkki3c24@hirez.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2b0b8499ae75df91455bbeb7491d45affc384fb0 upstream.
The trampoline allocated by the function tracer was overwritten by the
function_graph tracer, causing a memory leak. save_global_trampoline
should have saved the previous trampoline in register_ftrace_graph() and
restored it in unregister_ftrace_graph(). But as implemented,
save_global_trampoline was only used in unregister_ftrace_graph() with
its default value of 0, so it overwrote the previous trampoline's value,
causing the previously allocated trampoline to be lost.
kmemleak backtrace:
kmemleak_vmalloc+0x77/0xc0
__vmalloc_node_range+0x1b5/0x2c0
module_alloc+0x7c/0xd0
arch_ftrace_update_trampoline+0xb5/0x290
ftrace_startup+0x78/0x210
register_ftrace_function+0x8b/0xd0
function_trace_init+0x4f/0x80
tracing_set_tracer+0xe6/0x170
tracing_set_trace_write+0x90/0xd0
__vfs_write+0x37/0x170
vfs_write+0xb2/0x1b0
SyS_write+0x55/0xc0
do_syscall_64+0x67/0x180
return_from_SYSCALL_64+0x0/0x6a
[
Looking further into this, I found that this was left over from when the
function and function graph tracers shared the same ftrace_ops. But in
commit 5f151b2401 ("ftrace: Fix function_profiler and function tracer
together"), the two were separated, and the save_global_trampoline no
longer was necessary (and it may have been broken back then too).
-- Steven Rostedt
]
Link: http://lkml.kernel.org/r/20170912021454.5976-1-shuwang@redhat.com
Fixes: 5f151b2401 ("ftrace: Fix function_profiler and function tracer together")
Signed-off-by: Shu Wang <shuwang@redhat.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Preempt and irq trace events can be used for tracing the start and
end of an atomic section, which a trace viewer like systrace can then
display graphically and correlate with latencies and scheduling
issues.
This also serves as a prelude to using synthetic events or probes to
rewrite the preempt and irqsoff tracers, along with numerous benefits of
using trace events features for these events.
Change-Id: I718d40f7c3c48579adf9d7121b21495a669c89bd
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zilstra <peterz@infradead.org>
Cc: kernel-team@android.com
Link: https://patchwork.kernel.org/patch/9988157/
Signed-off-by: Joel Fernandes <joelaf@google.com>
In preparation for adding irqsoff and preemptsoff enable and disable
trace events, move the required functions and code to make it easier to
add these events in a later patch. This patch is just code movement;
there is no functional change.
Change-Id: I587d411da5efbc4959bcccd7a05c7a66c231e1e0
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: kernel-team@android.com
Link: https://patchwork.kernel.org/patch/9988159/
Signed-off-by: Joel Fernandes <joelaf@google.com>
Merge 4.4.90 into android-4.4
Changes in 4.4.90
cifs: release auth_key.response for reconnect.
mac80211: flush hw_roc_start work before cancelling the ROC
KVM: PPC: Book3S: Fix race and leak in kvm_vm_ioctl_create_spapr_tce()
tracing: Fix trace_pipe behavior for instance traces
tracing: Erase irqsoff trace with empty write
md/raid5: fix a race condition in stripe batch
md/raid5: preserve STRIPE_ON_UNPLUG_LIST in break_stripe_batch_list
scsi: scsi_transport_iscsi: fix the issue that iscsi_if_rx doesn't parse nlmsg properly
crypto: talitos - Don't provide setkey for non hmac hashing algs.
crypto: talitos - fix sha224
KEYS: fix writing past end of user-supplied buffer in keyring_read()
KEYS: prevent creating a different user's keyrings
KEYS: prevent KEYCTL_READ on negative key
powerpc/pseries: Fix parent_dn reference leak in add_dt_node()
Fix SMB3.1.1 guest authentication to Samba
SMB: Validate negotiate (to protect against downgrade) even if signing off
SMB3: Don't ignore O_SYNC/O_DSYNC and O_DIRECT flags
vfs: Return -ENXIO for negative SEEK_HOLE / SEEK_DATA offsets
nl80211: check for the required netlink attributes presence
bsg-lib: don't free job in bsg_prepare_job
seccomp: fix the usage of get/put_seccomp_filter() in seccomp_get_filter()
arm64: Make sure SPsel is always set
arm64: fault: Route pte translation faults via do_translation_fault
KVM: VMX: Do not BUG() on out-of-bounds guest IRQ
kvm: nVMX: Don't allow L2 to access the hardware CR8
PCI: Fix race condition with driver_override
btrfs: fix NULL pointer dereference from free_reloc_roots()
btrfs: propagate error to btrfs_cmp_data_prepare caller
btrfs: prevent to set invalid default subvolid
x86/fpu: Don't let userspace set bogus xcomp_bv
gfs2: Fix debugfs glocks dump
timer/sysclt: Restrict timer migration sysctl values to 0 and 1
KVM: VMX: do not change SN bit in vmx_update_pi_irte()
KVM: VMX: remove WARN_ON_ONCE in kvm_vcpu_trigger_posted_interrupt
cxl: Fix driver use count
dmaengine: mmp-pdma: add number of requestors
ARM: pxa: add the number of DMA requestor lines
ARM: pxa: fix the number of DMA requestor lines
KVM: VMX: use cmpxchg64
video: fbdev: aty: do not leak uninitialized padding in clk to userspace
swiotlb-xen: implement xen_swiotlb_dma_mmap callback
fix xen_swiotlb_dma_mmap prototype
Linux 4.4.90
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit b94bf594cf8ed67cdd0439e70fa939783471597a upstream.
timer_migration sysctl acts as a boolean switch, so the allowed values
should be restricted to 0 and 1.
Add the necessary extra fields to the sysctl table entry to enforce that.
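That is the usual proc_dointvec_minmax() pattern; a sketch of the
resulting kernel/sysctl.c entry (zero/one are the customary static
bound variables):

  {
          .procname       = "timer_migration",
          .data           = &sysctl_timer_migration,
          .maxlen         = sizeof(unsigned int),
          .mode           = 0644,
          .proc_handler   = timer_migration_handler,
          .extra1         = &zero,
          .extra2         = &one,
  },

with timer_migration_handler() switched from proc_dointvec() to
proc_dointvec_minmax() so the bounds are actually enforced.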
[ tglx: Rewrote changelog ]
Signed-off-by: Myungho Jung <mhjungk@gmail.com>
Link: http://lkml.kernel.org/r/1492640690-3550-1-git-send-email-mhjungk@gmail.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Kazuhiro Hayashi <kazuhiro3.hayashi@toshiba.co.jp>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 66a733ea6b611aecf0119514d2dddab5f9d6c01e upstream.
As Chris explains, get_seccomp_filter() and put_seccomp_filter() can end
up using different filters. Once we drop ->siglock it is possible for
task->seccomp.filter to have been replaced by SECCOMP_FILTER_FLAG_TSYNC.
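The fix pins the filter that was actually found under ->siglock and
releases that same pointer later; a sketch (4.4 still uses an atomic_t
usage count):

  static void __get_seccomp_filter(struct seccomp_filter *filter)
  {
          /* Reference count is bounded by the number of total processes. */
          atomic_inc(&filter->usage);
  }

  /* seccomp_get_filter() walks to the wanted filter under ->siglock,
   * calls __get_seccomp_filter(filter), drops the lock, and finally
   * drops the reference with __put_seccomp_filter(filter) on the same
   * pointer, instead of re-reading task->seccomp.filter, which TSYNC
   * may have replaced in the meantime. */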
Fixes: f8e529ed94 ("seccomp, ptrace: add support for dumping seccomp filters")
Reported-by: Chris Salls <chrissalls5@gmail.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
[tycho: add __get_seccomp_filter vs. open coding refcount_inc()]
Signed-off-by: Tycho Andersen <tycho@docker.com>
[kees: tweak commit log]
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8dd33bcb7050dd6f8c1432732f930932c9d3a33e upstream.
One convenient way to erase the trace is "echo > trace". However, this
is currently broken if the current tracer is the irqsoff tracer,
because the irqsoff tracer uses max_buffer as its default trace
buffer.
Set the max_buffer as the one to be cleared when it's the trace
buffer currently in use.
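In tracing_open(), the O_TRUNC path then resets the buffer the current
tracer actually writes to; roughly:

  struct trace_buffer *trace_buf = &tr->trace_buffer;

  #ifdef CONFIG_TRACER_MAX_TRACE
          if (tr->current_trace->print_max)       /* e.g. irqsoff */
                  trace_buf = &tr->max_buffer;
  #endif

          if (cpu == RING_BUFFER_ALL_CPUS)
                  tracing_reset_online_cpus(trace_buf);
          else
                  tracing_reset(trace_buf, cpu);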
Link: http://lkml.kernel.org/r/1505754215-29411-1-git-send-email-byan@nvidia.com
Cc: <mingo@redhat.com>
Fixes: 4acd4d00f ("tracing: give easy way to clear trace buffer")
Signed-off-by: Bo Yan <byan@nvidia.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 75df6e688ccd517e339a7c422ef7ad73045b18a2 upstream.
When reading data from trace_pipe, tracing_wait_pipe() performs a
check to see if tracing has been turned off after some data was read.
Currently, this check always looks at the global trace state, but it
should be checking the trace instance where trace_pipe is located.
Because of this bug, cat instances/i1/trace_pipe in the following
script will immediately exit instead of waiting for data:
cd /sys/kernel/debug/tracing
echo 0 > tracing_on
mkdir -p instances/i1
echo 1 > instances/i1/tracing_on
echo 1 > instances/i1/events/sched/sched_process_exec/enable
cat instances/i1/trace_pipe
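The fix in tracing_wait_pipe() consults the iterator's own trace array
rather than the global state; approximately:

  -               if (!tracing_is_on() && iter->pos)
  +               if (!tracer_tracing_is_on(iter->tr) && iter->pos)
                          break;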
Link: http://lkml.kernel.org/r/20170917102348.1615-1-tahsin@google.com
Fixes: 10246fa35d ("tracing: give easy way to clear trace buffer")
Signed-off-by: Tahsin Erdogan <tahsin@google.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Currently, sugov_next_freq_shared() uses last_freq_update_time as a
reference to decide when to start considering CPU contributions as
stale.
However, since last_freq_update_time is set by the last CPU that issued
a frequency transition, this might cause problems in certain cases. In
practice, the detection of stale utilization values fails whenever the
CPU with such values was the last to update the policy. For example (and
please note again that the SCHED_CPUFREQ_RT flag is not the problem
here, but only the detection of after how much time that flag has to be
considered stale), suppose a policy with 2 CPUs:
             CPU0             |             CPU1
                              |
                              |  RT task scheduled
                              |  SCHED_CPUFREQ_RT is set
                              |  CPU1->last_update = now
                              |  freq transition to max
                              |  last_freq_update_time = now
                              |
                 more than TICK_NSEC nsecs
                              |
  a small CFS wakes up        |
  CPU0->last_update = now1    |
  delta_ns(CPU0) < TICK_NSEC* |
  CPU0's util is considered   |
  delta_ns(CPU1) =            |
    last_freq_update_time -   |
    CPU1->last_update = 0     |
    < TICK_NSEC               |
  CPU1 is still considered    |
  CPU1->SCHED_CPUFREQ_RT set  |
  we stay at max (until CPU1  |
  exits from idle)            |

  * delta_ns is actually negative as now1 > last_freq_update_time
While last_freq_update_time is a sensible reference for rate limiting,
it doesn't seem to be useful for working around stale CPU states.
Fix the problem by always considering now (time) as the reference for
deciding when CPUs have stale contributions.
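In sugov_next_freq_shared() that amounts to computing the staleness
delta against the current time; a sketch over the policy's CPUs:

  for_each_cpu(j, policy->cpus) {
          struct sugov_cpu *j_sg_cpu = &per_cpu(sugov_cpu, j);
          s64 delta_ns;

          /* was: last_freq_update_time - j_sg_cpu->last_update,
           * which is 0 (never stale) for the CPU that issued the
           * last frequency transition */
          delta_ns = time - j_sg_cpu->last_update;
          if (delta_ns > TICK_NSEC) {
                  j_sg_cpu->iowait_boost = 0;
                  continue;       /* stale contribution, skip it */
          }
          /* ... aggregate j_sg_cpu's util/flags as before ... */
  }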
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
(cherry picked from commit d86ab9cff8b936aadde444d0e263a8db5ff0349b)
Merge 4.4.89 into android-4.4
Changes in 4.4.89
ipv6: accept 64k - 1 packet length in ip6_find_1stfragopt()
ipv6: add rcu grace period before freeing fib6_node
ipv6: fix sparse warning on rt6i_node
qlge: avoid memcpy buffer overflow
Revert "net: phy: Correctly process PHY_HALTED in phy_stop_machine()"
tcp: initialize rcv_mss to TCP_MIN_MSS instead of 0
Revert "net: use lib/percpu_counter API for fragmentation mem accounting"
Revert "net: fix percpu memory leaks"
gianfar: Fix Tx flow control deactivation
ipv6: fix memory leak with multiple tables during netns destruction
ipv6: fix typo in fib6_net_exit()
f2fs: check hot_data for roll-forward recovery
x86/fsgsbase/64: Report FSBASE and GSBASE correctly in core dumps
md/raid5: release/flush io in raid5_do_work()
nfsd: Fix general protection fault in release_lock_stateid()
mm: prevent double decrease of nr_reserved_highatomic
tty: improve tty_insert_flip_char() fast path
tty: improve tty_insert_flip_char() slow path
tty: fix __tty_insert_flip_char regression
Input: i8042 - add Gigabyte P57 to the keyboard reset table
MIPS: math-emu: <MAX|MAXA|MIN|MINA>.<D|S>: Fix quiet NaN propagation
MIPS: math-emu: <MAX|MAXA|MIN|MINA>.<D|S>: Fix cases of both inputs zero
MIPS: math-emu: <MAX|MIN>.<D|S>: Fix cases of both inputs negative
MIPS: math-emu: <MAXA|MINA>.<D|S>: Fix cases of input values with opposite signs
MIPS: math-emu: <MAXA|MINA>.<D|S>: Fix cases of both infinite inputs
MIPS: math-emu: MINA.<D|S>: Fix some cases of infinity and zero inputs
crypto: AF_ALG - remove SGL terminator indicator when chaining
ext4: fix incorrect quotaoff if the quota feature is enabled
ext4: fix quota inconsistency during orphan cleanup for read-only mounts
powerpc: Fix DAR reporting when alignment handler faults
block: Relax a check in blk_start_queue()
md/bitmap: disable bitmap_resize for file-backed bitmaps.
skd: Avoid that module unloading triggers a use-after-free
skd: Submit requests to firmware before triggering the doorbell
scsi: zfcp: fix queuecommand for scsi_eh commands when DIX enabled
scsi: zfcp: add handling for FCP_RESID_OVER to the fcp ingress path
scsi: zfcp: fix capping of unsuccessful GPN_FT SAN response trace records
scsi: zfcp: fix passing fsf_req to SCSI trace on TMF to correlate with HBA
scsi: zfcp: fix missing trace records for early returns in TMF eh handlers
scsi: zfcp: fix payload with full FCP_RSP IU in SCSI trace records
scsi: zfcp: trace HBA FSF response by default on dismiss or timedout late response
scsi: zfcp: trace high part of "new" 64 bit SCSI LUN
scsi: megaraid_sas: Check valid aen class range to avoid kernel panic
scsi: megaraid_sas: Return pended IOCTLs with cmd_status MFI_STAT_WRONG_STATE in case adapter is dead
scsi: storvsc: fix memory leak on ring buffer busy
scsi: sg: remove 'save_scat_len'
scsi: sg: use standard lists for sg_requests
scsi: sg: off by one in sg_ioctl()
scsi: sg: factor out sg_fill_request_table()
scsi: sg: fixup infoleak when using SG_GET_REQUEST_TABLE
scsi: qla2xxx: Fix an integer overflow in sysfs code
ftrace: Fix selftest goto location on error
tracing: Apply trace_clock changes to instance max buffer
ARC: Re-enable MMU upon Machine Check exception
PCI: shpchp: Enable bridge bus mastering if MSI is enabled
media: v4l2-compat-ioctl32: Fix timespec conversion
media: uvcvideo: Prevent heap overflow when accessing mapped controls
bcache: initialize dirty stripes in flash_dev_run()
bcache: Fix leak of bdev reference
bcache: do not subtract sectors_to_gc for bypassed IO
bcache: correct cache_dirty_target in __update_writeback_rate()
bcache: Correct return value for sysfs attach errors
bcache: fix for gc and write-back race
bcache: fix bch_hprint crash and improve output
ftrace: Fix memleak when unregistering dynamic ops when tracing disabled
Linux 4.4.89
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit edb096e00724f02db5f6ec7900f3bbd465c6c76f upstream.
If function tracing is disabled by the user via the function-trace option or
the proc sysctl file, and an ftrace_ops that was allocated on the heap is
unregistered, then the shutdown code exits out without doing the proper
clean up. This was found via kmemleak and running the ftrace selftests, as
one of the tests unregisters with function tracing disabled.
# cat kmemleak
unreferenced object 0xffffffffa0020000 (size 4096):
comm "swapper/0", pid 1, jiffies 4294668889 (age 569.209s)
hex dump (first 32 bytes):
55 ff 74 24 10 55 48 89 e5 ff 74 24 18 55 48 89 U.t$.UH...t$.UH.
e5 48 81 ec a8 00 00 00 48 89 44 24 50 48 89 4c .H......H.D$PH.L
backtrace:
[<ffffffff81d64665>] kmemleak_vmalloc+0x85/0xf0
[<ffffffff81355631>] __vmalloc_node_range+0x281/0x3e0
[<ffffffff8109697f>] module_alloc+0x4f/0x90
[<ffffffff81091170>] arch_ftrace_update_trampoline+0x160/0x420
[<ffffffff81249947>] ftrace_startup+0xe7/0x300
[<ffffffff81249bd2>] register_ftrace_function+0x72/0x90
[<ffffffff81263786>] trace_selftest_ops+0x204/0x397
[<ffffffff82bb8971>] trace_selftest_startup_function+0x394/0x624
[<ffffffff81263a75>] run_tracer_selftest+0x15c/0x1d7
[<ffffffff82bb83f1>] init_trace_selftests+0x75/0x192
[<ffffffff81002230>] do_one_initcall+0x90/0x1e2
[<ffffffff82b7d620>] kernel_init_freeable+0x350/0x3fe
[<ffffffff81d61ec3>] kernel_init+0x13/0x122
[<ffffffff81d72c6a>] ret_from_fork+0x2a/0x40
[<ffffffffffffffff>] 0xffffffffffffffff
Fixes: 12cce594fa ("ftrace/x86: Allow !CONFIG_PREEMPT dynamic ops to use allocated trampolines")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 170b3b1050e28d1ba0700e262f0899ffa4fccc52 upstream.
Currently trace_clock timestamps are applied to both regular and max
buffers only for global trace. For instance trace, trace_clock
timestamps are applied only to regular buffer. But, regular and max
buffers can be swapped, for example, following a snapshot. So, for
instance trace, bad timestamps can be seen following a snapshot.
Let's apply trace_clock timestamps to instance max buffer as well.
Link: http://lkml.kernel.org/r/ebdb168d0be042dcdf51f81e696b17fabe3609c1.1504642143.git.tom.zanussi@linux.intel.com
Fixes: 277ba0446 ("tracing: Add interface to allow multiple trace buffers")
Signed-off-by: Baohong Liu <baohong.liu@intel.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 46320a6acc4fb58f04bcf78c4c942cc43b20f986 upstream.
In the second iteration of trace_selftest_ops(), the error goto label is
wrong in the case where trace_selftest_test_global_cnt is off. In the
case of error, it leaks the dynamic ops that was allocated.
Fixes: 95950c2e ("ftrace: Add self-tests for multiple function trace users")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This reverts commit c5616f2f874faa20b59b116177b99bf3948586df.
If we re-init the per-cpu boostgroup spinlock every time that
we add a new boosted cgroup, we can easily wipe out (reinit)
a spinlock struct while in a critical section. We should only
be setting up the per-cpu boostgroup data, and the spin_lock
initialization need only happen once - which we're already
doing in a postcore_initcall.
For example:
  -------- CPU 0 --------    |    -------- CPU1 --------
  cgroupX boost group added  |
  schedtune_enqueue_task     |
    acquires(bg->lock)       |  cgroupY boost group added
                             |    for_each_cpu()
                             |      raw_spin_lock_init(bg->lock)
    releases(bg->lock)       |
  BUG (already unlocked)     |
                             |
This results in the following BUG from the debug spinlock code:
BUG: spinlock already unlocked on CPU#5, rcuop/6/68
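The safe pattern is to initialize each per-cpu lock exactly once at
boot and only populate the boost-group data on cgroup creation; an
illustrative sketch (schedtune's actual structures differ in detail):

  /* postcore_initcall: the one and only lock initialization */
  static int __init schedtune_init_boostgroups(void)
  {
          int cpu;

          for_each_possible_cpu(cpu)
                  raw_spin_lock_init(&per_cpu(cpu_boost_groups, cpu).lock);
          return 0;
  }

  /* cgroup-creation path: fill in the new group's slot only;
   * never run raw_spin_lock_init() on a lock that may be held. */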
Change-Id: I3016702780b461a0cd95e26c538cd18df27d6316
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
(cherry picked from commit ec8d7c14ea14922fe21945b458a75e39f11dd832)
Tetsuo has properly noted that the mmput slow path might get blocked
waiting for another party (e.g. exit_aio waits for an IO). If that
happens the oom_reaper would be put out of the way and would not be
able to process the next oom victim. We should strive to make this
context as reliable and as independent of other subsystems as possible.
Introduce mmput_async which will perform the slow path from an async
(WQ) context. This will delay the operation but that shouldn't be a
problem because the oom_reaper has reclaimed the victim's address space
for most cases as much as possible and the remaining context shouldn't
bind too much memory anymore. The only exception is when mmap_sem
trylock has failed which shouldn't happen too often.
The issue is only theoretical but not impossible.
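The helper itself is small; this is essentially its upstream shape
(async_put_work is a work_struct added to struct mm_struct):

  static void mmput_async_fn(struct work_struct *work)
  {
          struct mm_struct *mm = container_of(work, struct mm_struct,
                                              async_put_work);
          __mmput(mm);
  }

  void mmput_async(struct mm_struct *mm)
  {
          if (atomic_dec_and_test(&mm->mm_users)) {
                  INIT_WORK(&mm->async_put_work, mmput_async_fn);
                  schedule_work(&mm->async_put_work);
          }
  }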
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Only backports mmput_async.
Change-Id: I5fe54abcc629e7d9eab9fe03908903d1174177f1
Signed-off-by: Arve Hjønnevåg <arve@android.com>
Commit b235beea9e99 ("Clarify naming of thread info/stack allocators")
breaks the build on some powerpc configs, where THREAD_SIZE < PAGE_SIZE:
kernel/fork.c:235:2: error: implicit declaration of function 'free_thread_stack'
kernel/fork.c:355:8: error: assignment from incompatible pointer type
stack = alloc_thread_stack_node(tsk, node);
^
Fix it by renaming free_stack() to free_thread_stack(), and updating the
return type of alloc_thread_stack_node().
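With that, the THREAD_SIZE < PAGE_SIZE (kmem_cache) branch matches the
page-based one; sketched:

  static unsigned long *alloc_thread_stack_node(struct task_struct *tsk,
                                                int node)
  {
          return kmem_cache_alloc_node(thread_stack_cache,
                                       THREADINFO_GFP, node);
  }

  static void free_thread_stack(struct task_struct *tsk)
  {
          kmem_cache_free(thread_stack_cache, tsk->stack);
  }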
Fixes: b235beea9e99 ("Clarify naming of thread info/stack allocators")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Bug: 38331309
Change-Id: I5b7f920b459fb84adf5fc75f83bb488b855c4deb
(cherry picked from commit 9521d39976db20f8ef9b56af66661482a17d5364)
Signed-off-by: Zubin Mithra <zsm@google.com>
Merge 4.4.88 into android-4.4
Changes in 4.4.88
usb: quirks: add delay init quirk for Corsair Strafe RGB keyboard
USB: serial: option: add support for D-Link DWM-157 C1
usb: Add device quirk for Logitech HD Pro Webcam C920-C
usb:xhci:Fix regression when ATI chipsets detected
USB: core: Avoid race of async_completed() w/ usbdev_release()
staging/rts5208: fix incorrect shift to extract upper nybble
driver core: bus: Fix a potential double free
intel_th: pci: Add Cannon Lake PCH-H support
intel_th: pci: Add Cannon Lake PCH-LP support
ath10k: fix memory leak in rx ring buffer allocation
Input: trackpoint - assume 3 buttons when buttons detection fails
rtlwifi: rtl_pci_probe: Fix fail path of _rtl_pci_find_adapter
Bluetooth: Add support of 13d3:3494 RTL8723BE device
dlm: avoid double-free on error path in dlm_device_{register,unregister}
mwifiex: correct channel stat buffer overflows
drm/nouveau/pci/msi: disable MSI on big-endian platforms by default
workqueue: Fix flag collision
cs5536: add support for IDE controller variant
scsi: sg: protect against races between mmap() and SG_SET_RESERVED_SIZE
scsi: sg: recheck MMAP_IO request length with lock held
drm: adv7511: really enable interrupts for EDID detection
drm/bridge: adv7511: Fix mutex deadlock when interrupts are disabled
drm/bridge: adv7511: Use work_struct to defer hotplug handing to out of irq context
drm/bridge: adv7511: Switch to using drm_kms_helper_hotplug_event()
drm/bridge: adv7511: Re-write the i2c address before EDID probing
btrfs: resume qgroup rescan on rw remount
locktorture: Fix potential memory leak with rw lock test
ALSA: msnd: Optimize / harden DSP and MIDI loops
Bluetooth: Properly check L2CAP config option output buffer length
ARM: 8692/1: mm: abort uaccess retries upon fatal signal
NFS: Fix 2 use after free issues in the I/O code
xfs: XFS_IS_REALTIME_INODE() should be false if no rt device present
Linux 4.4.88
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Merge 4.4.87 into android-4.4
Changes in 4.4.87
irqchip: mips-gic: SYNC after enabling GIC region
i2c: ismt: Don't duplicate the receive length for block reads
i2c: ismt: Return EMSGSIZE for block reads with bogus length
ceph: fix readpage from fscache
cpumask: fix spurious cpumask_of_node() on non-NUMA multi-node configs
cpuset: Fix incorrect memory_pressure control file mapping
alpha: uapi: Add support for __SANE_USERSPACE_TYPES__
CIFS: Fix maximum SMB2 header size
CIFS: remove endian related sparse warning
wl1251: add a missing spin_lock_init()
xfrm: policy: check policy direction value
drm/ttm: Fix accounting error when fail to get pages for pool
kvm: arm/arm64: Fix race in resetting stage2 PGD
kvm: arm/arm64: Force reading uncached stage2 PGD
epoll: fix race between ep_poll_callback(POLLFREE) and ep_free()/ep_remove()
crypto: algif_skcipher - only call put_page on referenced and used pages
Linux 4.4.87
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit 1c08c22c874ac88799cab1f78c40f46110274915 upstream.
The memory_pressure control file was incorrectly set up without
a private value (0, by default). As a result, this control
file was treated like memory_migrate on read. By adding back the
FILE_MEMORY_PRESSURE private value, the correct memory pressure value
will be returned.
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Fixes: 7dbdb199d3 ("cgroup: replace cftype->mode with CFTYPE_WORLD_WRITABLE")
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Make the schedutil governor take the initial (default) value of the
rate_limit_us sysfs attribute from the (new) transition_delay_us
policy parameter (to be set by the scaling driver).
That will allow scaling drivers to make schedutil use smaller default
values of rate_limit_us and reduce the default average time interval
between consecutive frequency changes.
Make intel_pstate set transition_delay_us to 500.
BACKPORT: Modified to support the separate up_rate_limit_us and
down_rate_limit_us (upstream just has a single rate_limit_us). Also
dropped the changes for intel_pstate as there's a merge conflict.
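With the backport's split limits, the tunables initialization reads
roughly as follows (transition_delay_us is the new policy field;
LATENCY_MULTIPLIER is the existing fallback scale):

  if (policy->transition_delay_us) {
          tunables->up_rate_limit_us   = policy->transition_delay_us;
          tunables->down_rate_limit_us = policy->transition_delay_us;
  } else {
          unsigned int lat = policy->cpuinfo.transition_latency /
                             NSEC_PER_USEC;

          tunables->up_rate_limit_us   = LATENCY_MULTIPLIER;
          tunables->down_rate_limit_us = LATENCY_MULTIPLIER;
          if (lat) {
                  tunables->up_rate_limit_us   *= lat;
                  tunables->down_rate_limit_us *= lat;
          }
  }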
Change-Id: I62a8543879a4d8582cdcb31ebd55607705d1c8b1
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
(cherry picked from commit 1b72e7fd304639f1cd49d1e11955c4974936d88c)
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
commit 05384213436ab690c46d9dfec706b80ef8d671ab upstream.
Starting from GCC 7.1, __gcov_exit is a new symbol expected to be
implemented in a profiling runtime.
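The kernel side only needs an empty stub in kernel/gcov/gcc_4_7.c,
since the in-kernel profiling runtime never exits:

  void __gcov_exit(void)
  {
          /* Unused. */
  }
  EXPORT_SYMBOL(__gcov_exit);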
[akpm@linux-foundation.org: coding-style fixes]
[mliska@suse.cz: v2]
Link: http://lkml.kernel.org/r/e63a3c59-0149-c97e-4084-20ca8f146b26@suse.cz
Link: http://lkml.kernel.org/r/8c4084fa-3885-29fe-5fc4-0d4ca199c785@suse.cz
Signed-off-by: Martin Liska <mliska@suse.cz>
Acked-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The initial window start needs to be close to ktime ns = 0 to be
aligned with the scheduler tick.
Change-Id: Ia91f74efce2f910106622a054a6fcd507e763ca5
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
EAS won't allow the NOHZ idle balancer until a CPU is overutilized.
However, nohz_kick_needed() can still return true, causing idle CPUs
to wake up for nothing.
Change-Id: I6e548442e29e4f85cda695e4c7101dd591b12fe6
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
In order to calculate the energy difference we currently iterate over
the CPUs under the same sched domain to accumulate the total energy
cost and compare before and after:
for_each_domain(cpu)
total_energy_before += (cpu_util * power) >> SCHED_CAPACITY_SHIFT;
for_each_domain(cpu)
total_energy_after += (cpu_util * power) >> SCHED_CAPACITY_SHIFT;
Doing so can incorrectly calculate and report abs(delta) > 0 when there
is actually no energy delta between before and after, because the same
total accumulated cpu_util of all the CPUs can be distributed
differently before and after, causing different amounts of rounding
error.
Fix this incorrectness by shifting just once, on the accumulated
total_energy.
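In other words (sketch; the names follow the EAS energy code):

  /* before: one rounding error per CPU */
  for_each_cpu(cpu, sched_group_cpus(sg))
          total_energy += (cpu_util(cpu) * power) >> SCHED_CAPACITY_SHIFT;

  /* after: accumulate first, shift the sum exactly once */
  for_each_cpu(cpu, sched_group_cpus(sg))
          grp_util += cpu_util(cpu) * power;
  total_energy = grp_util >> SCHED_CAPACITY_SHIFT;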
Change-Id: I82f1e2e358367058960938b4ef81714f57e921cf
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
(moved part to another commit)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
WALT's function cpu_util(cpu) reports the CPU's load without taking the
waking task's load into account. Thus cpu_overutilized() currently
underestimates the load on the waking task's previous CPU.
Take the task's load into account when determining whether the previous
CPU is overutilized, to bail out early without running energy_diff(),
which is expensive.
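The overutilization test on the previous CPU then becomes, in sketch
form (capacity_margin is EAS's usual margin constant):

  static bool cpu_overutilized_wake(int cpu, struct task_struct *p)
  {
          return (capacity_of(cpu) * 1024) <
                 ((cpu_util(cpu) + task_util(p)) * capacity_margin);
  }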
Change-Id: I30f146984a880ad2cc1b8a4ce35bd239a8c9a607
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
(minor rebase conflicts)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
With WALT, all the scheduler classes' load is accounted in scr->cfs and
update_cpu_capacity_request() adds the capacity margin. At present the
scheduler also adds the capacity margin in the tick path, so the margin
is applied twice.
Fix this error by using the margin-applied CPU utilization only for
checking whether a frequency increase is needed.
Change-Id: Id7d8cc73b2e4eec70b274ca66e09bb0b16bf6f09
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
(trivial rebase conflict)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
WALT CPU utilization reports the CPU load of all the scheduler classes.
Therefore additionally adding the RT class's load will cause frequency
overshooting. Fix this issue by not accounting the RT class load when
requesting capacity.
Change-Id: I29600d7af7ca8c00e0d2ff1e13872024ccaa72bf
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
WALT accounts two major statistics: CPU load and cumulative task
demand.
The CPU load, which accumulates each CPU's absolute execution time, is
for CPU frequency guidance. The cumulative task demand, which is each
CPU's instantaneous load reflecting the CPU's load at a given time, is
for task placement decisions.
Use cumulative task demand in cpu_util() for task placement and
introduce cpu_util_freq() for frequency guidance.
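A sketch of the resulting split (simplified: the real helpers scale the
raw WALT sums to capacity units, hand-waved here as a hypothetical
scale_to_capacity()):

  static inline unsigned long cpu_util(int cpu)       /* placement */
  {
          /* instantaneous: what is runnable on this CPU right now */
          return scale_to_capacity(cpu_rq(cpu)->cumulative_runnable_avg);
  }

  static inline unsigned long cpu_util_freq(int cpu)  /* freq guidance */
  {
          /* accumulated: absolute execution time in the last window */
          return scale_to_capacity(cpu_rq(cpu)->prev_runnable_sum);
  }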
Change-Id: Id928f01dbc8cb2a617cdadc584c1f658022565c5
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
When a running task's ravg.demand changes, update_history() adjusts
rq->cumulative_runnable_avg to reflect the change in CPU load.
Currently this fixup is broken: it accumulates the task's new demand
without subtracting the task's old demand.
Fix the fixup logic to subtract the task's old demand.
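The corrected fixup applies the delta instead of piling the new demand
on top of the still-accounted old one; in sketch form:

  /* demand change for a runnable task */
  rq->cumulative_runnable_avg -= p->ravg.demand;  /* old demand out */
  rq->cumulative_runnable_avg += new_demand;      /* new demand in  */
  p->ravg.demand = new_demand;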
Change-Id: I61beb32a4850879ccb39b733f5564251e465bfeb
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
Account cumulative runnable average for WALT CPU utilization accounting.
Change-Id: I56934894e626dec183740eeaf89a57d2ef638143
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
Make iowait_boost and iowait_boost_max unsigned int, since their unit
is kHz and this is consistent with struct cpufreq_policy. Also change
the local variables in sugov_iowait_boost() to match.
Change-Id: I6c67ed94c57c4bdb24bada4b97045593fcb95d2e
Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Cc: Len Brown <lenb@kernel.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Joel Fernandes <joelaf@google.com>
Currently the iowait_boost feature in schedutil makes the frequency go
to max on iowait wakeups. This feature was added to handle a case that
Peter described, where the throughput of operations involving
continuous I/O requests [1] is reduced by running at a lower frequency;
however, the lower throughput itself keeps utilization low, which keeps
the frequency low, so it is "stuck".
Instead of going straight to max, it is also possible to achieve the
same effect by ramping up to max when repeated in_iowait wakeups are
happening. This patch is an attempt to do that. We start from a lower
frequency (policy->min) and double the boost for every consecutive
iowait update until we reach the maximum iowait boost frequency
(iowait_boost_max).
I ran a synthetic test (continuous O_DIRECT writes in a loop) on an x86
machine with intel_pstate in passive mode using schedutil. In this test
the iowait_boost value ramped from 800 MHz to 4 GHz in 60 ms. The patch
achieves the same improved throughput as the existing behavior.
[1] https://patchwork.kernel.org/patch/9735885/
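The ramp in sugov_set_iowait_boost() then looks roughly like this
(sketch matching the description; the fields are schedutil's):

  if (flags & SCHED_CPUFREQ_IOWAIT) {
          if (sg_cpu->iowait_boost) {
                  /* consecutive iowait wakeup: double the boost */
                  sg_cpu->iowait_boost <<= 1;
                  if (sg_cpu->iowait_boost > sg_cpu->iowait_boost_max)
                          sg_cpu->iowait_boost = sg_cpu->iowait_boost_max;
          } else {
                  /* first iowait wakeup: start low, not at max */
                  sg_cpu->iowait_boost = sg_cpu->sg_policy->policy->min;
          }
  }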
Change-Id: I4a018434a50f4ca29ec15b03465f6dc212e54423
Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Cc: Len Brown <lenb@kernel.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Joel Fernandes <joelaf@google.com>
Merge 4.4.85 into android-4.4
Changes in 4.4.85
af_key: do not use GFP_KERNEL in atomic contexts
dccp: purge write queue in dccp_destroy_sock()
dccp: defer ccid_hc_tx_delete() at dismantle time
ipv4: fix NULL dereference in free_fib_info_rcu()
net_sched/sfq: update hierarchical backlog when drop packet
ipv4: better IP_MAX_MTU enforcement
sctp: fully initialize the IPv6 address in sctp_v6_to_addr()
tipc: fix use-after-free
ipv6: reset fn->rr_ptr when replacing route
ipv6: repair fib6 tree in failure case
tcp: when rearming RTO, if RTO time is in past then fire RTO ASAP
irda: do not leak initialized list.dev to userspace
net: sched: fix NULL pointer dereference when action calls some targets
net_sched: fix order of queue length updates in qdisc_replace()
mei: me: add broxton pci device ids
mei: me: add lewisburg device ids
Input: trackpoint - add new trackpoint firmware ID
Input: elan_i2c - add ELAN0602 ACPI ID to support Lenovo Yoga310
ALSA: core: Fix unexpected error at replacing user TLV
ALSA: hda - Add stereo mic quirk for Lenovo G50-70 (17aa:3978)
ARCv2: PAE40: Explicitly set MSB counterpart of SLC region ops addresses
i2c: designware: Fix system suspend
drm: Release driver tracking before making the object available again
drm/atomic: If the atomic check fails, return its value first
drm: rcar-du: lvds: Fix PLL frequency-related configuration
drm: rcar-du: lvds: Rename PLLEN bit to PLLON
drm: rcar-du: Fix crash in encoder failure error path
drm: rcar-du: Fix display timing controller parameter
drm: rcar-du: Fix H/V sync signal polarity configuration
tracing: Fix freeing of filter in create_filter() when set_str is false
cifs: Fix df output for users with quota limits
cifs: return ENAMETOOLONG for overlong names in cifs_open()/cifs_lookup()
nfsd: Limit end of page list when decoding NFSv4 WRITE
perf/core: Fix group {cpu,task} validation
Bluetooth: hidp: fix possible might sleep error in hidp_session_thread
Bluetooth: cmtp: fix possible might sleep error in cmtp_session
Bluetooth: bnep: fix possible might sleep error in bnep_session
binder: use group leader instead of open thread
binder: Use wake up hint for synchronous transactions.
ANDROID: binder: fix proc->tsk check.
iio: imu: adis16480: Fix acceleration scale factor for adis16480
iio: hid-sensor-trigger: Fix the race with user space powering up sensors
staging: rtl8188eu: add RNX-N150NUB support
ASoC: simple-card: don't fail if sysclk setting is not supported
ASoC: rsnd: disable SRC.out only when stop timing
ASoC: rsnd: avoid pointless loop in rsnd_mod_interrupt()
ASoC: rsnd: Add missing initialization of ADG req_rate
ASoC: rsnd: ssi: 24bit data needs right-aligned settings
ASoC: rsnd: don't call update callback if it was NULL
ntb_transport: fix qp count bug
ntb_transport: fix bug calculating num_qps_mw
ACPI: ioapic: Clear on-stack resource before using it
ACPI / APEI: Add missing synchronize_rcu() on NOTIFY_SCI removal
Linux 4.4.85
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit 64aee2a965cf2954a038b5522f11d2cd2f0f8f3e upstream.
Regardless of which events form a group, it does not make sense for the
events to target different tasks and/or CPUs, as this leaves the group
inconsistent and impossible to schedule. The core perf code assumes that
these are consistent across (successfully initialised) groups.
Core perf code only verifies this when moving SW events into a HW
context. Thus, we can violate this requirement for pure SW groups and
pure HW groups, unless the relevant PMU driver happens to perform this
verification itself. These mismatched groups subsequently wreak havoc
elsewhere.
For example, we handle watchpoints as SW events, and reserve watchpoint
HW on a per-CPU basis at pmu::event_init() time to ensure that any event
that is initialised is guaranteed to have a slot at pmu::add() time.
However, the core code only checks the group leader's cpu filter (via
event_filter_match()), and can thus install follower events onto CPUs
violating their (mismatched) CPU filters, potentially installing them
into a CPU without sufficient reserved slots.
This can be triggered with the below test case, resulting in warnings
from arch backends.
#define _GNU_SOURCE
#include <linux/hw_breakpoint.h>
#include <linux/perf_event.h>
#include <sched.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>
static int perf_event_open(struct perf_event_attr *attr, pid_t pid, int cpu,
int group_fd, unsigned long flags)
{
return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}
char watched_char;
struct perf_event_attr wp_attr = {
.type = PERF_TYPE_BREAKPOINT,
.bp_type = HW_BREAKPOINT_RW,
.bp_addr = (unsigned long)&watched_char,
.bp_len = 1,
.size = sizeof(wp_attr),
};
int main(int argc, char *argv[])
{
int leader, ret;
cpu_set_t cpus;
/*
* Force use of CPU0 to ensure our CPU0-bound events get scheduled.
*/
CPU_ZERO(&cpus);
CPU_SET(0, &cpus);
ret = sched_setaffinity(0, sizeof(cpus), &cpus);
if (ret) {
printf("Unable to set cpu affinity\n");
return 1;
}
/* open leader event, bound to this task, CPU0 only */
leader = perf_event_open(&wp_attr, 0, 0, -1, 0);
if (leader < 0) {
printf("Couldn't open leader: %d\n", leader);
return 1;
}
/*
* Open a follower event that is bound to the same task, but a
* different CPU. This means that the group should never be possible to
* schedule.
*/
ret = perf_event_open(&wp_attr, 0, 1, leader, 0);
if (ret < 0) {
printf("Couldn't open mismatched follower: %d\n", ret);
return 1;
} else {
printf("Opened leader/follower with mismastched CPUs\n");
}
/*
* Open as many independent events as we can, all bound to the same
* task, CPU0 only.
*/
do {
ret = perf_event_open(&wp_attr, 0, 0, -1, 0);
} while (ret >= 0);
/*
* Force enable/disable all events to trigger the erroneous
* installation of the follower event.
*/
printf("Opened all events. Toggling..\n");
for (;;) {
prctl(PR_TASK_PERF_EVENTS_DISABLE, 0, 0, 0, 0);
prctl(PR_TASK_PERF_EVENTS_ENABLE, 0, 0, 0, 0);
}
return 0;
}
Fix this by validating this requirement regardless of whether we're
moving events.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Zhou Chengming <zhouchengming1@huawei.com>
Link: http://lkml.kernel.org/r/1498142498-15758-1-git-send-email-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8b0db1a5bdfcee0dbfa89607672598ae203c9045 upstream.
Performing the following task with kmemleak enabled:
# cd /sys/kernel/tracing/events/irq/irq_handler_entry/
# echo 'enable_event:kmem:kmalloc:3 if irq >' > trigger
# echo 'enable_event:kmem:kmalloc:3 if irq > 31' > trigger
# echo scan > /sys/kernel/debug/kmemleak
# cat /sys/kernel/debug/kmemleak
unreferenced object 0xffff8800b9290308 (size 32):
comm "bash", pid 1114, jiffies 4294848451 (age 141.139s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
[<ffffffff81cef5aa>] kmemleak_alloc+0x4a/0xa0
[<ffffffff81357938>] kmem_cache_alloc_trace+0x158/0x290
[<ffffffff81261c09>] create_filter_start.constprop.28+0x99/0x940
[<ffffffff812639c9>] create_filter+0xa9/0x160
[<ffffffff81263bdc>] create_event_filter+0xc/0x10
[<ffffffff812655e5>] set_trigger_filter+0xe5/0x210
[<ffffffff812660c4>] event_enable_trigger_func+0x324/0x490
[<ffffffff812652e2>] event_trigger_write+0x1a2/0x260
[<ffffffff8138cf87>] __vfs_write+0xd7/0x380
[<ffffffff8138f421>] vfs_write+0x101/0x260
[<ffffffff8139187b>] SyS_write+0xab/0x130
[<ffffffff81cfd501>] entry_SYSCALL_64_fastpath+0x1f/0xbe
[<ffffffffffffffff>] 0xffffffffffffffff
The function create_filter() is passed a 'filterp' pointer that gets
allocated, and if "set_str" is true, it is up to the caller to free it, even
on error. The problem is that the pointer is not freed by create_filter()
when set_str is false. This is a bug, and it is not up to the caller to free
the filter on error if it doesn't care about the string.
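So on error with set_str == false the filter must be released before
returning; the shape of the fix, sketched:

  err = create_filter_start(filter_str, set_str, &ps, &filter);
  if (!err) {
          err = replace_preds(call, filter, ps, false);
          if (err && set_str)
                  append_filter_err(ps, filter);
  }
  if (err && !set_str) {
          free_event_filter(filter);      /* caller won't free it */
          filter = NULL;
  }
  create_filter_finish(ps);
  *filterp = filter;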
Link: http://lkml.kernel.org/r/1502705898-27571-2-git-send-email-chuhu@redhat.com
Fixes: 38b78eb85 ("tracing: Factorize filter creation")
Reported-by: Chunyu Hu <chuhu@redhat.com>
Tested-by: Chunyu Hu <chuhu@redhat.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>