(from https://patchwork.kernel.org/patch/9817789/)
For correct behavior we need to hold the inner lock when
dequeuing and processing node work in binder_thread_read.
We now hold the inner lock when we enter the switch statement
and release it after processing anything that might be
affected by other threads.
We also need to hold the inner lock to protect the node
weak/strong ref tracking fields as long as node->proc
is non-NULL (if it is NULL then we are guaranteed that
we don't have any node work queued).
This means that other functions that manipulate these fields
must hold the inner lock. Refactored these functions to use
the inner lock.
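A rough sketch of the resulting shape in binder_thread_read (the
dequeue helper name is assumed here, not quoted from the patch):

	binder_inner_proc_lock(proc);
	w = binder_dequeue_work_head_ilocked(list);
	switch (w->type) {
	case BINDER_WORK_TRANSACTION_COMPLETE:
		binder_inner_proc_unlock(proc);
		/* w is now private to this thread; it can be freed
		 * and reported to userspace without the lock */
		break;
	default:
		binder_inner_proc_unlock(proc);
		break;
	}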
Change-Id: I02c5cfdd3ab6dadea7f07f2a275faf3e27be77ad
Test: tested manually
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817791/)
There are 3 main spinlocks which must be acquired in this
order:
1) proc->outer_lock : protects most fields of binder_proc,
binder_thread, and binder_ref structures. binder_proc_lock()
and binder_proc_unlock() are used to acq/rel.
2) node->lock : protects most fields of binder_node.
binder_node_lock() and binder_node_unlock() are
used to acq/rel
3) proc->inner_lock : protects the thread and node lists
(proc->threads, proc->nodes) and all todo lists associated
with the binder_proc (proc->todo, thread->todo,
proc->delivered_death and node->async_todo).
binder_inner_proc_lock() and binder_inner_proc_unlock()
are used to acq/rel
Any lock under procA must never be nested under any lock at the same
level or below on procB.
Functions that require a lock to be held on entry indicate which
lock is held in the suffix of the function name:
foo_olocked() : requires proc->outer_lock
foo_nlocked() : requires node->lock
foo_ilocked() : requires proc->inner_lock
foo_oilocked(): requires proc->outer_lock and proc->inner_lock
foo_nilocked(): requires node->lock and proc->inner_lock
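Illustrative only (not a literal excerpt from the patch), nesting
the three locks in the documented order with the helpers:

	binder_proc_lock(proc);		/* 1) proc->outer_lock */
	binder_node_lock(node);		/* 2) node->lock */
	binder_inner_proc_lock(proc);	/* 3) proc->inner_lock */
	/* ... critical section ... */
	binder_inner_proc_unlock(proc);
	binder_node_unlock(node);
	binder_proc_unlock(proc);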
Change-Id: Ied42674486092a0e3bdde64356e45b2494844558
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817795/)
When obtaining a node via binder_get_node(),
binder_get_node_from_ref() or binder_new_node(),
increment node->tmp_refs to take a
temporary reference on the node to ensure the node
persists while being used. binder_put_node() must
be called to remove the temporary reference.
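The usage pattern, sketched (error handling elided):

	struct binder_node *node;

	node = binder_get_node(proc, ptr);	/* takes a tmp ref */
	if (node) {
		/* node->tmp_refs keeps the node alive here even if
		 * all strong/weak refs are dropped concurrently */
		binder_put_node(node);		/* drops the tmp ref */
	}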
Change-Id: I962b39d5cd80b2d7e4786bb87236ede7914e2db7
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817781/)
Once locks are added, binder_refs will only be accessed
safely with the proc lock held. Refactor the inc/dec paths
to make them atomic with the binder_get_ref* paths and
node inc/dec. For example, instead of:
ref = binder_get_ref(proc, handle, strong);
...
binder_dec_ref(ref, strong);
we now have:
ret = binder_dec_ref_for_handle(proc, handle, strong, &rdata);
Since the actual ref is no longer exposed to callers, a
new struct binder_ref_data is introduced which can be used
to return a copy of ref state.
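The new struct is, roughly:

struct binder_ref_data {
	int debug_id;
	uint32_t desc;
	int strong;
	int weak;
};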
Change-Id: I7de22107f8ebc967cee63251d584fceb4ea56250
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817787/)
binder_thread and binder_proc may be accessed by other
threads when processing a transaction. Therefore they
must be prevented from being freed while a transaction
is in progress that references them.
This is done by introducing a temporary reference
counter for threads and procs that indicates that the
object is in use and must not be freed. binder_thread_dec_tmpref()
and binder_proc_dec_tmpref() are used to decrement
the temporary reference.
It is safe to free a binder_thread if there
is no reference and it has been released
(indicated by thread->is_dead).
It is safe to free a binder_proc if it has no
remaining threads and no reference.
A spinlock is added to the binder_transaction
to safely access and set references for t->from
and for debug code to safely access t->to_thread
and t->to_proc.
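A sketch of the resulting release rule for threads (the proc case
is analogous; binder_free_thread() is an assumed helper name):

static void binder_thread_dec_tmpref(struct binder_thread *thread)
{
	/* sketch; the eventual code does this under the proc lock */
	thread->tmp_ref--;
	if (thread->is_dead && !thread->tmp_ref)
		binder_free_thread(thread);
}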
Change-Id: I0a00a0294c3e93aea8b3f141c6f18e77ad244078
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817787/)
When initiating a transaction, the target_node must
have a strong ref on it. Then we take a second
strong ref to make sure the node survives until the
transaction is complete.
Change-Id: If7429cb43eda520ab89d45df6c19327cee97c60c
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817805/)
Since errors are tracked in the return_error/return_error2
fields of the binder_thread object and BR_TRANSACTION_COMPLETEs
can be tracked either in those fields or via the thread todo
work list, it is possible for errors to be reported ahead
of the associated txn complete.
Use the thread todo work list for errors to guarantee
order. Also changed binder_send_failed_reply to pop
the transaction even if it failed to send a reply.
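Sketched, assuming a binder_error work item embedded in struct
binder_thread, the error now travels through the same ordered queue:

	/* queue the error behind any BR_TRANSACTION_COMPLETE
	 * already sitting in thread->todo */
	thread->return_error.cmd = BR_FAILED_REPLY;
	binder_enqueue_work(thread->proc, &thread->return_error.work,
			    &thread->todo);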
Bug: 37218618
Test: tested manually
Change-Id: I196cfaeed09fdcd697f8ab25eea4e04241fdb08f
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://lkml.org/lkml/2017/6/29/754)
binder_pop_transaction needs to be split into 2 pieces to
allow the proc lock to be held on entry to dequeue the
transaction stack, but no lock when kfree'ing the transaction.
Split into binder_pop_transaction_locked and binder_free_transaction
(the actual locks are still to be added).
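A sketch of the split, close to the shape described above:

static void binder_pop_transaction_locked(struct binder_thread *target_thread,
					  struct binder_transaction *t)
{
	/* will run with the proc lock held once locks are added */
	target_thread->transaction_stack =
		target_thread->transaction_stack->from_parent;
	t->from = NULL;
}

static void binder_free_transaction(struct binder_transaction *t)
{
	/* called with no locks held */
	if (t->buffer)
		t->buffer->transaction = NULL;
	kfree(t);
	binder_stats_deleted(BINDER_STAT_TRANSACTION);
}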
Change-Id: I848ae994cc27b3cd083cff2dbd1071762784f4a3
Test: tested manually
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817807/)
The log->next index for the transaction log was
not protected when incremented. This led to a
case where log->next++ resulted in an index
larger than ARRAY_SIZE(log->entry) and eventually
a bad access to memory.
Fixed by making the log index an atomic64 and
converting it to an array index with "% ARRAY_SIZE(log->entry)".
Also added a "complete" field to the log entry, which is
written last to tell the print code whether the
entry is complete.
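Sketched, with names taken from the description above:

	struct binder_transaction_log_entry *e;
	u64 cur = atomic64_inc_return(&log->next) - 1;

	e = &log->entry[cur % ARRAY_SIZE(log->entry)];
	e->complete = false;	/* set to true only after every other
				 * field of the entry has been written */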
Bug: 62038227
Test: tested manually
Change-Id: I1bb1c1a332a6ac458a626f5bedd05022b56b91f2
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817815/)
Adds protection against malicious user code freeing
the same buffer at the same time, which could cause
a crash. This cannot happen under normal use.
Bug: 36650912
Change-Id: I43e078cbf31c0789aaff5ceaf8f1a94c75f79d45
Test: tested manually
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817819/)
node is always non-NULL in binder_get_ref_for_node so the
conditional and else clause are not needed
Change-Id: I23f011ba59e1869d9577e6bf28e1f1dd38f45713
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817799/)
The looper member of struct binder_thread is a bitmask
of control bits. All of the existing bits are modified
by the affected thread except for BINDER_LOOPER_STATE_NEED_RETURN
which can be modified in binder_deferred_flush() by
another thread.
To avoid adding a spinlock around every read-modify-write
of the bitmask, the BINDER_LOOPER_STATE_NEED_RETURN flag
is replaced by a separate field in struct binder_thread.
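Sketch of the change to struct binder_thread (other fields omitted):

struct binder_thread {
	/* looper bits are only ever modified by this thread */
	u32 looper;
	/* written by binder_deferred_flush() from another thread,
	 * so it lives in its own field rather than a looper bit */
	bool looper_need_return;
};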
Bug: 33250092 32225111
Change-Id: Ia4cefbdbd683c6cb17c323ba7d278de5f2ca0745
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817813/)
Currently, the transaction complete work item is queued
after the transaction. This means that it is possible
for the transaction to be handled and a reply to be
enqueued in the current thread before the transaction
complete is enqueued. This violates the protocol with
userspace, which may not expect the transaction complete.
Fixed by always enqueuing the transaction complete first.
Also, once the transaction is enqueued, it is unsafe
to access since it might be freed. Currently,
t->flags is accessed to determine whether a sync
wake is needed. Changed to access tr->flags
instead.
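Sketched, the corrected ordering, reading flags from the local copy
tr instead of the possibly-freed t:

	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
	list_add_tail(&tcomplete->entry, &thread->todo);
	t->work.type = BINDER_WORK_TRANSACTION;
	list_add_tail(&t->work.entry, target_list);
	/* the target may consume and free t from here on */
	if (target_wait) {
		if (reply || !(tr->flags & TF_ONE_WAY))
			wake_up_interruptible_sync(target_wait);
		else
			wake_up_interruptible(target_wait);
	}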
Change-Id: I6c01566e167a39cf17c9027c3817618182e56975
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817757/)
In binder_thread_read, the BINDER_WORK_NODE command is used
to communicate the references on the node to userspace. It
can take a couple of iterations in the loop to construct
the list of commands for user space. When locking is added,
the lock would need to be released on each iteration, which
means the state could change. The work item is not dequeued
during this process, which rules out the simpler approach of
dequeuing the work item up front and then handling it.
Fixed by changing the BINDER_WORK_NODE algorithm in
binder_thread_read to determine which commands to send
to userspace atomically in 1 pass so it stays consistent
with the kernel view.
The work item is now dequeued immediately since only
1 pass is needed.
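A sketch of the single-pass snapshot (command emission elided):

	case BINDER_WORK_NODE: {
		struct binder_node *node =
			container_of(w, struct binder_node, work);
		int strong = node->internal_strong_refs ||
			     node->local_strong_refs;
		int weak = !hlist_empty(&node->refs) ||
			   node->local_weak_refs || strong;

		/* compare the snapshot with node->has_strong_ref and
		 * node->has_weak_ref, then emit BR_INCREFS, BR_ACQUIRE,
		 * BR_RELEASE and/or BR_DECREFS all at once */
		break;
	}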
Change-Id: I9b4109997b2d53ba661867b14d7336cd076be06d
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817751/)
Add additional information to determine the cause of binder
failures. Adds the following to failed transaction log and
kernel messages:
return_error : value returned for transaction
return_error_param : errno returned by binder allocator
return_error_line : line number where error detected
Also, return BR_DEAD_REPLY if an allocation error indicates
a dead proc (-ESRCH)
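Sketch of the added log-entry fields, per the list above:

struct binder_transaction_log_entry {
	/* existing fields omitted */
	uint32_t return_error;
	uint32_t return_error_param;
	int return_error_line;
};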
Bug: 36406078
Change-Id: Ifc8881fa5adfcced3f2d67f9030fbd3efa3e2cab
Test: tested manually
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817755/)
Use atomics for stats to avoid needing to lock for
increments/decrements
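Sketched (array bounds as in the pre-existing int counters):

struct binder_stats {
	atomic_t br[_IOC_NR(BR_FAILED_REPLY) + 1];
	atomic_t bc[_IOC_NR(BC_DEAD_BINDER_DONE) + 1];
	atomic_t obj_created[BINDER_STAT_COUNT];
	atomic_t obj_deleted[BINDER_STAT_COUNT];
};

static inline void binder_stats_created(enum binder_stat_types type)
{
	atomic_inc(&binder_stats.obj_created[type]);
}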
Bug: 33250092 32225111
Change-Id: I13e69b7f0485ccf16673e25091455781e1933a98
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817811/)
With the global lock, there was a mechanism to access
binder driver debugging information with lock acquisition
disabled, to debug deadlocks or other issues.
This mechanism is rarely (if ever) used anymore
and wasn't needed during the development of
fine-grained locking in the binder driver.
Remove it.
Change-Id: Ie1cf45748cefa433607a97c2ef322e7906efe0f7
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817753/)
Move the binder allocator functionality to its own file.
Continuation of splitting the binder allocator from the binder
driver: split binder_alloc functions from normal binder functions.
Add kernel doc comments to functions declared extern in
binder_alloc.h
Change-Id: I8f1a967375359078b8e63c7b6b88a752c374a64a
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817753/)
Continuation of splitting the binder allocator from the binder
driver. Separate binder_alloc functions from normal binder
functions. Protect the allocator with a separate mutex.
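Sketch of the allocator state with its own lock (fields abbreviated):

struct binder_alloc {
	struct mutex mutex;	/* protects everything below */
	struct vm_area_struct *vma;
	void *buffer;
	struct list_head buffers;
	struct rb_root free_buffers;
	struct rb_root allocated_buffers;
	size_t free_async_space;
	struct page **pages;
	size_t buffer_size;
};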
Bug: 33250092 32225111
Change-Id: I634637415aa03c74145159d84f1749bbe785a8f3
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817817/)
The buffer's transaction has already been freed before
binder_deferred_release. No need to do it again.
Change-Id: I412709c23879c5e45a6c7304a588bba0a5223fc3
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817745/)
The binder allocator is logically separate from the rest
of the binder driver. Separate the data structures
to prepare for splitting the allocator into its own file
with separate locking.
Change-Id: I5ca4df567ac4b8c8d6ee2116ae67c0c1d75c9747
Signed-off-by: Todd Kjos <tkjos@google.com>
(from https://patchwork.kernel.org/patch/9817747)
Use wake_up_interruptible_sync() to hint to the scheduler binder
transactions are synchronous wakeups. Disable preemption while waking
to avoid ping-ponging on the binder lock.
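The wake path then looks roughly like this:

	if (target_wait) {
		if (reply || !(t->flags & TF_ONE_WAY)) {
			preempt_disable();
			wake_up_interruptible_sync(target_wait);
			preempt_enable_no_resched();
		} else {
			wake_up_interruptible(target_wait);
		}
	}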
Change-Id: Ic406a232d0873662f80148e37acefe5243d912a0
Signed-off-by: Todd Kjos <tkjos@google.com>
Git-commit: 443c026e90820170aa3db2c21d2933ae5922f900
Git-repo: https://android.googlesource.com/kernel/msm
Signed-off-by: Omprakash Dhyade <odhyade@codeaurora.org>
Due to a rounding error, the hrtimer tick interval becomes 3333333 ns
when HZ=300. Consequently the tick timestamp nearest to WALT's default
window size of 20 ms will also be 19999998 ns (3333333 * 6).
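The arithmetic, as a standalone illustration (plain integer
division, nothing kernel-specific):

#include <stdio.h>

int main(void)
{
	unsigned long long tick_ns = 1000000000ULL / 300; /* 3333333 */

	/* six ticks: 19999998 ns, just short of the 20000000 ns
	 * (20 ms) default WALT window */
	printf("%llu\n", tick_ns * 6);
	return 0;
}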
Change-Id: I08f9bd2dbecccbb683e4490d06d8b0da703d3ab2
Suggested-by: Joel Fernandes <joelaf@google.com>
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
This config option is not required by Android.
Bug: 63578267
Change-Id: I163fa19183734a1a343d525e885a000a495c242e
Signed-off-by: Steve Muckle <smuckle@google.com>
Use the VFS mount_nodev instead of the customized
mount_nodev_with_options, and change generic_shutdown_super to
kill_anon_super to match the use of set_anon_super.
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Change-Id: Ibe46647aa2ce49d79291aa9d0295e9625cfccd80
Merge 4.4.76 into android-4.4
Changes in 4.4.76
ipv6: release dst on error in ip6_dst_lookup_tail
net: don't call strlen on non-terminated string in dev_set_alias()
decnet: dn_rtmsg: Improve input length sanitization in dnrmg_receive_user_skb
net: Zero ifla_vf_info in rtnl_fill_vfinfo()
af_unix: Add sockaddr length checks before accessing sa_family in bind and connect handlers
Fix an intermittent pr_emerg warning about lo becoming free.
net: caif: Fix a sleep-in-atomic bug in cfpkt_create_pfx
igmp: acquire pmc lock for ip_mc_clear_src()
igmp: add a missing spin_lock_init()
ipv6: fix calling in6_ifa_hold incorrectly for dad work
net/mlx5: Wait for FW readiness before initializing command interface
decnet: always not take dst->__refcnt when inserting dst into hash table
net: 8021q: Fix one possible panic caused by BUG_ON in free_netdev
sfc: provide dummy definitions of vswitch functions
ipv6: Do not leak throw route references
rtnetlink: add IFLA_GROUP to ifla_policy
netfilter: xt_TCPMSS: add more sanity tests on tcph->doff
netfilter: synproxy: fix conntrackd interaction
NFSv4: fix a reference leak caused WARNING messages
drm/ast: Handle configuration without P2A bridge
mm, swap_cgroup: reschedule when neeed in swap_cgroup_swapoff()
MIPS: Avoid accidental raw backtrace
MIPS: pm-cps: Drop manual cache-line alignment of ready_count
MIPS: Fix IRQ tracing & lockdep when rescheduling
ALSA: hda - Fix endless loop of codec configure
ALSA: hda - set input_path bitmap to zero after moving it to new place
drm/vmwgfx: Free hash table allocated by cmdbuf managed res mgr
usb: gadget: f_fs: Fix possibe deadlock
sysctl: enable strict writes
block: fix module reference leak on put_disk() call for cgroups throttle
mm: numa: avoid waiting on freed migrated pages
KVM: x86: fix fixing of hypercalls
scsi: sd: Fix wrong DPOFUA disable in sd_read_cache_type
scsi: lpfc: Set elsiocb contexts to NULL after freeing it
qla2xxx: Fix erroneous invalid handle message
ARM: dts: BCM5301X: Correct GIC_PPI interrupt flags
net: mvneta: Fix for_each_present_cpu usage
MIPS: ath79: fix regression in PCI window initialization
net: korina: Fix NAPI versus resources freeing
MIPS: ralink: MT7688 pinmux fixes
MIPS: ralink: fix USB frequency scaling
MIPS: ralink: Fix invalid assignment of SoC type
MIPS: ralink: fix MT7628 pinmux typos
MIPS: ralink: fix MT7628 wled_an pinmux gpio
mtd: bcm47xxpart: limit scanned flash area on BCM47XX (MIPS) only
bgmac: fix a missing check for build_skb
mtd: bcm47xxpart: don't fail because of bit-flips
bgmac: Fix reversed test of build_skb() return value.
net: bgmac: Fix SOF bit checking
net: bgmac: Start transmit queue in bgmac_open
net: bgmac: Remove superflous netif_carrier_on()
powerpc/eeh: Enable IO path on permanent error
gianfar: Do not reuse pages from emergency reserve
Btrfs: fix truncate down when no_holes feature is enabled
virtio_console: fix a crash in config_work_handler
swiotlb-xen: update dev_addr after swapping pages
xen-netfront: Fix Rx stall during network stress and OOM
scsi: virtio_scsi: Reject commands when virtqueue is broken
platform/x86: ideapad-laptop: handle ACPI event 1
amd-xgbe: Check xgbe_init() return code
net: dsa: Check return value of phy_connect_direct()
drm/amdgpu: check ring being ready before using
vfio/spapr: fail tce_iommu_attach_group() when iommu_data is null
virtio_net: fix PAGE_SIZE > 64k
vxlan: do not age static remote mac entries
ibmveth: Add a proper check for the availability of the checksum features
kernel/panic.c: add missing \n
HID: i2c-hid: Add sleep between POWER ON and RESET
scsi: lpfc: avoid double free of resource identifiers
spi: davinci: use dma_mapping_error()
mac80211: initialize SMPS field in HT capabilities
x86/mpx: Use compatible types in comparison to fix sparse error
coredump: Ensure proper size of sparse core files
swiotlb: ensure that page-sized mappings are page-aligned
s390/ctl_reg: make __ctl_load a full memory barrier
be2net: fix status check in be_cmd_pmac_add()
perf probe: Fix to show correct locations for events on modules
net/mlx4_core: Eliminate warning messages for SRQ_LIMIT under SRIOV
sctp: check af before verify address in sctp_addr_id2transport
ravb: Fix use-after-free on `ifconfig eth0 down`
jump label: fix passing kbuild_cflags when checking for asm goto support
xfrm: fix stack access out of bounds with CONFIG_XFRM_SUB_POLICY
xfrm: NULL dereference on allocation failure
xfrm: Oops on error in pfkey_msg2xfrm_state()
watchdog: bcm281xx: Fix use of uninitialized spinlock.
sched/loadavg: Avoid loadavg spikes caused by delayed NO_HZ accounting
ARM64/ACPI: Fix BAD_MADT_GICC_ENTRY() macro implementation
ARM: 8685/1: ensure memblock-limit is pmd-aligned
x86/mpx: Correctly report do_mpx_bt_fault() failures to user-space
x86/mm: Fix flush_tlb_page() on Xen
ocfs2: o2hb: revert hb threshold to keep compatible
iommu/vt-d: Don't over-free page table directories
iommu: Handle default domain attach failure
iommu/amd: Fix incorrect error handling in amd_iommu_bind_pasid()
cpufreq: s3c2416: double free on driver init error path
KVM: x86: fix emulation of RSM and IRET instructions
KVM: x86/vPMU: fix undefined shift in intel_pmu_refresh()
KVM: x86: zero base3 of unusable segments
KVM: nVMX: Fix exception injection
Linux 4.4.76
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit d4912215d1031e4fb3d1038d2e1857218dba0d0a upstream.
WARNING: CPU: 3 PID: 2840 at arch/x86/kvm/vmx.c:10966 nested_vmx_vmexit+0xdcd/0xde0 [kvm_intel]
CPU: 3 PID: 2840 Comm: qemu-system-x86 Tainted: G OE 4.12.0-rc3+ #23
RIP: 0010:nested_vmx_vmexit+0xdcd/0xde0 [kvm_intel]
Call Trace:
? kvm_check_async_pf_completion+0xef/0x120 [kvm]
? rcu_read_lock_sched_held+0x79/0x80
vmx_queue_exception+0x104/0x160 [kvm_intel]
? vmx_queue_exception+0x104/0x160 [kvm_intel]
kvm_arch_vcpu_ioctl_run+0x1171/0x1ce0 [kvm]
? kvm_arch_vcpu_load+0x47/0x240 [kvm]
? kvm_arch_vcpu_load+0x62/0x240 [kvm]
kvm_vcpu_ioctl+0x384/0x7b0 [kvm]
? kvm_vcpu_ioctl+0x384/0x7b0 [kvm]
? __fget+0xf3/0x210
do_vfs_ioctl+0xa4/0x700
? __fget+0x114/0x210
SyS_ioctl+0x79/0x90
do_syscall_64+0x81/0x220
entry_SYSCALL64_slow_path+0x25/0x25
This is triggered occasionally by running both win7 and win2016 in L2; in
addition, EPT is disabled on both L1 and L2. It can't be reproduced easily.
Commit 0b6ac343fc (KVM: nVMX: Correct handling of exception injection) mentioned
that "KVM wants to inject page-faults which it got to the guest. This function
assumes it is called with the exit reason in vmcs02 being a #PF exception".
Commit e011c663 (KVM: nVMX: Check all exceptions for intercept during delivery to
L2) allows checking all exceptions for intercept during delivery to L2. However,
there is currently no guarantee that the exit reason is an exception. When an
external interrupt occurs on the host (perhaps a timer interrupt meant for the
host, which should not be injected into the guest) and somewhere an exception is
queued, nested_vmx_check_exception() is called, the vmexit emulation code tries
to emulate the "Acknowledge interrupt on exit" behavior, and the warning is
triggered.
Reusing the exit reason from the L2->L0 vmexit is wrong in this case,
the reason must always be EXCEPTION_NMI when injecting an exception into
L1 as a nested vmexit.
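The fix, sketched (in nested_vmx_check_exception(); abbreviated):

	/* always report the nested vmexit to L1 as EXCEPTION_NMI,
	 * regardless of what the last L2->L0 exit reason was */
	nested_vmx_vmexit(vcpu, EXIT_REASON_EXCEPTION_NMI,
			  vmcs_read32(VM_EXIT_INTR_INFO),
			  vmcs_readl(EXIT_QUALIFICATION));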
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Fixes: e011c663b9 ("KVM: nVMX: Check all exceptions for intercept during delivery to L2")
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f0367ee1d64d27fa08be2407df5c125442e885e3 upstream.
Static checker noticed that base3 could be used uninitialized if the
segment was not present (unusable). Random stack values probably would
not pass VMCS entry checks.
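The fix, roughly, in the emulator's segment accessor:

	if (var.unusable) {
		memset(desc, 0, sizeof(*desc));
		if (base3)
			*base3 = 0;	/* don't leak stack garbage */
		return false;
	}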
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: 1aa366163b ("KVM: x86 emulator: consolidate segment accessors")
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 34b0dadbdf698f9b277a31b2747b625b9a75ea1f upstream.
Static analysis noticed that pmu->nr_arch_gp_counters can be 32
(INTEL_PMC_MAX_GENERIC) and therefore cannot be used to shift 'int'.
I didn't add BUILD_BUG_ON for it as we have a better checker.
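The fix, sketched: shift a 64-bit constant so the result is well
defined even with 32 counters:

	pmu->global_ctrl = ((1ull << pmu->nr_arch_gp_counters) - 1) |
		(((1ull << pmu->nr_arch_fixed_counters) - 1)
		 << INTEL_PMC_IDX_FIXED);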
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: 25462f7f52 ("KVM: x86/vPMU: Define kvm_pmu_ops to support vPMU function dispatch")
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6ed071f051e12cf7baa1b69d3becb8f232fdfb7b upstream.
On AMD, the effect of set_nmi_mask called by emulate_iret_real and em_rsm
on hflags is reverted later on in x86_emulate_instruction where hflags are
overwritten with ctxt->emul_flags (the kvm_set_hflags call). This manifests
as a hang when rebooting Windows VMs with QEMU, OVMF, and >1 vcpu.
Instead of trying to merge ctxt->emul_flags into vcpu->arch.hflags after
an instruction is emulated, this commit deletes emul_flags altogether and
makes the emulator access vcpu->arch.hflags using two new accessors. This
way all changes, on the emulator side as well as in functions called from
the emulator and accessing vcpu state with emul_to_vcpu, are preserved.
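The two accessors, sketched (emul_to_vcpu() is the existing
ctxt-to-vcpu helper in x86.c):

static unsigned emulator_get_hflags(struct x86_emulate_ctxt *ctxt)
{
	return emul_to_vcpu(ctxt)->arch.hflags;
}

static void emulator_set_hflags(struct x86_emulate_ctxt *ctxt,
				unsigned emul_flags)
{
	kvm_set_hflags(emul_to_vcpu(ctxt), emul_flags);
}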
More details on the bug and its manifestation with Windows and OVMF:
It's a KVM bug in the interaction between SMI/SMM and NMI, specific to AMD.
I believe that the SMM part explains why we started seeing this only with
OVMF.
KVM masks and unmasks NMI when entering and leaving SMM. When KVM emulates
the RSM instruction in em_rsm, the set_nmi_mask call doesn't stick because
later on in x86_emulate_instruction we overwrite arch.hflags with
ctxt->emul_flags, effectively reverting the effect of the set_nmi_mask call.
The AMD-specific hflag of interest here is HF_NMI_MASK.
When rebooting the system, Windows sends an NMI IPI to all but the current
cpu to shut them down. Only after all of them are parked in HLT will the
initiating cpu finish the restart. If NMI is masked, other cpus never get
the memo and the initiating cpu spins forever, waiting for
hal!HalpInterruptProcessorsStarted to drop. That's the symptom we observe.
Fixes: a584539b24 ("KVM: x86: pass the whole hflags field to emulator and back")
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a69261e4470d680185a15f748d9cdafb37c57a33 upstream.
The "goto err_armclk;" error path already does a clk_put(s3c_freq->hclk);
so this is a double free.
Fixes: 34ee550752 ([CPUFREQ] Add S3C2416/S3C2450 cpufreq driver)
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 73dbd4a4230216b6a5540a362edceae0c9b4876b upstream.
In function amd_iommu_bind_pasid(), the control flow jumps
to label out_free when pasid_state->mm and mm are NULL, and
mmput(mm) is then called. In mmput(), mm is dereferenced
without validation, which results in a NULL pointer
dereference. This patch fixes the bug.
Signed-off-by: Pan Bian <bianpan2016@163.com>
Fixes: f0aac63b87 ('iommu/amd: Don't hold a reference to mm_struct')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 797a8b4d768c58caac58ee3e8cb36a164d1b7751 upstream.
We wouldn't normally expect ops->attach_dev() to fail, but on IOMMUs
with limited hardware resources, or generally misconfigured systems,
it is certainly possible. We report failure correctly from the external
iommu_attach_device() interface, but do not do so in iommu_group_add()
when attaching to the default domain. The result of failure there is
that the device, group and domain all get left in a broken,
part-configured state which leads to weird errors and misbehaviour down
the line when IOMMU API calls sort-of-but-don't-quite work.
Check the return value of __iommu_attach_device() on the default domain,
and refactor the error handling paths to cope with its failure and clean
up correctly in such cases.
Fixes: e39cb8a3aa ("iommu: Make sure a device is always attached to a domain")
Reported-by: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f7116e115acdd74bc75a4daf6492b11d43505125 upstream.
dma_pte_free_level() recurses down the IOMMU page tables and frees
directory pages that are entirely contained in the given PFN range.
Unfortunately, it incorrectly calculates the starting address covered
by the PTE under consideration, which can lead to it clearing an entry
that is still in use.
This occurs if we have a scatterlist with an entry that has a length
greater than 1026 MB and is aligned to 2 MB for both the IOMMU and
physical addresses. For example, if __domain_mapping() is asked to map a
two-entry scatterlist with 2 MB and 1028 MB segments to PFN 0xffff80000,
and dma_pte_free_pagetable() is then asked to free PFNs from
0xffff80200 to 0xffffc05ff, it will also incorrectly clear the PFNs from
0xffff80000 to 0xffff801ff because of this issue. The current code will
set level_pfn to 0xffff80200, and 0xffff80200-0xffffc01ff fits inside
the range being cleared. Properly setting the level_pfn for the current
level under consideration catches that this PTE is outside of the range
being cleared.
This patch also changes the value passed into dma_pte_free_level() when
it recurses. This only affects the first PTE of the range being cleared,
and is handled by the existing code that ensures we start our cursor no
lower than start_pfn.
This was found when using dma_map_sg() to map large chunks of contiguous
memory, which immediately led to faults on the first access of the
erroneously-deleted mappings.
Fixes: 3269ee0bd6 ("intel-iommu: Fix leaks in pagetable freeing")
Reviewed-by: Benjamin Serebrin <serebrin@google.com>
Signed-off-by: David Dillow <dillow@google.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 33496c3c3d7b88dcbe5e55aa01288b05646c6aca upstream.
Configfs is the interface for ocfs2-tools to pass configuration to the
kernel, and $configfs_dir/cluster/$clustername/heartbeat/dead_threshold
is the entry used to configure the heartbeat dead threshold. The kernel
has a default value for it, but the user can set O2CB_HEARTBEAT_THRESHOLD
in /etc/sysconfig/o2cb to override it.
Commit 45b997737a ("ocfs2/cluster: use per-attribute show and store
methods") changed the heartbeat dead threshold name while ocfs2-tools did
not, so ocfs2-tools cannot set this option and the default value is
always used. So revert it.
Fixes: 45b997737a ("ocfs2/cluster: use per-attribute show and store methods")
Link: http://lkml.kernel.org/r/1490665245-15374-1-git-send-email-junxiao.bi@oracle.com
Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
Acked-by: Joseph Qi <jiangqi903@gmail.com>
Cc: Mark Fasheh <mfasheh@versity.com>
Cc: Joel Becker <jlbec@evilplan.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit dbd68d8e84c606673ebbcf15862f8c155fa92326 upstream.
flush_tlb_page() passes a bogus range to flush_tlb_others() and
expects the latter to fix it up. native_flush_tlb_others() has the
fixup but Xen's version doesn't. Move the fixup to
flush_tlb_others().
AFAICS the only real effect is that, without this fix, Xen would
flush everything instead of just the one page on remote vCPUs
when flush_tlb_page() was called.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: e7b52ffd45 ("x86/flush_tlb: try flush_tlb_single one by one in flush_tlb_range")
Link: http://lkml.kernel.org/r/10ed0e4dfea64daef10b87fb85df1746999b4dba.1492844372.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5ed386ec09a5d75bcf073967e55e895c2607a5c3 upstream.
When this function fails it just sends a SIGSEGV signal to
user-space using force_sig(). This signal is missing
essential information about the cause, e.g. the trap_nr or
an error code.
Fix this by propagating the error to the only caller of
mpx_handle_bd_fault(), do_bounds(), which sends the correct
SIGSEGV signal to the process.
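In do_bounds(), sketched, the failure now reaches the common trap
path that fills in trap_nr and the error code:

	case 2:	/* Bound directory has invalid entry. */
		if (mpx_handle_bd_fault())
			goto exit_trap; /* do_trap(X86_TRAP_BR,
					 * SIGSEGV, "bounds", ...) */
		break;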
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: fe3d197f84 ('x86, mpx: On-demand kernel allocation of bounds tables')
Link: http://lkml.kernel.org/r/1491488362-27198-1-git-send-email-joro@8bytes.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9e25ebfe56ece7541cd10a20d715cbdd148a2e06 upstream.
The pmd containing memblock_limit is cleared by prepare_page_table()
which creates the opportunity for early_alloc() to allocate unmapped
memory if memblock_limit is not pmd aligned causing a boot-time hang.
Commit 965278dcb8 ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM")
attempted to resolve this problem, but there is a path through the
adjust_lowmem_bounds() routine where if all memory regions start and
end on pmd-aligned addresses the memblock_limit will be set to
arm_lowmem_limit.
Since arm_lowmem_limit can be affected by the vmalloc early parameter,
the value of arm_lowmem_limit may not be pmd-aligned. This commit
corrects this oversight such that memblock_limit is always rounded
down to pmd-alignment.
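The fix, roughly:

	if (!memblock_limit)
		memblock_limit = arm_lowmem_limit;

	/* round down in case arm_lowmem_limit itself is not
	 * pmd-aligned (e.g. due to the vmalloc= parameter) */
	memblock_limit = round_down(memblock_limit, PMD_SIZE);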
Fixes: 965278dcb8 ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM")
Signed-off-by: Doug Berger <opendmb@gmail.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit cb7cf772d83d2d4e6995c5bb9e0fb59aea8f7080 upstream.
The BAD_MADT_GICC_ENTRY() macro checks if a GICC MADT entry passes
muster from an ACPI specification standpoint. Current macro detects the
MADT GICC entry length through ACPI firmware version (it changed from 76
to 80 bytes in the transition from ACPI 5.1 to ACPI 6.0 specification)
but always uses (erroneously) the ACPICA (latest) struct (ie struct
acpi_madt_generic_interrupt - that is 80-bytes long) length to check if
the current GICC entry memory record exceeds the MADT table end in
memory as defined by the MADT table header itself, which may result in
false negatives depending on the ACPI firmware version and how the MADT
entries are laid out in memory (ie on ACPI 5.1 firmware MADT GICC
entries are 76 bytes long, so by adding 80 to a GICC entry start address
in memory the resulting address may well be past the actual MADT end,
triggering a false negative).
Fix the BAD_MADT_GICC_ENTRY() macro by reshuffling the condition checks
and update them to always use the firmware version specific MADT GICC
entry length in order to carry out boundary checks.
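The reworked macro is, roughly:

#define ACPI_MADT_GICC_LENGTH	\
	(acpi_gbl_FADT.header.revision < 6 ? 76 : 80)

#define BAD_MADT_GICC_ENTRY(entry, end)					\
	(!(entry) || (entry)->header.length != ACPI_MADT_GICC_LENGTH ||	\
	 (unsigned long)(entry) + ACPI_MADT_GICC_LENGTH > (end))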
Fixes: b6cfb27737 ("ACPI / ARM64: add BAD_MADT_GICC_ENTRY() macro")
Reported-by: Julien Grall <julien.grall@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Al Stone <ahs3@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6e5f32f7a43f45ee55c401c0b9585eb01f9629a8 upstream.
If we crossed a sample window while in NO_HZ we will add LOAD_FREQ to
the pending sample window time on exit, setting the next update not
one window into the future, but two.
This situation on exiting NO_HZ is described by:
this_rq->calc_load_update < jiffies < calc_load_update
In this scenario, what we should be doing is:
this_rq->calc_load_update = calc_load_update [ next window ]
But what we actually do is:
this_rq->calc_load_update = calc_load_update + LOAD_FREQ [ next+1 window ]
This has the effect of delaying load average updates for potentially
up to ~9 seconds.
This can result in huge spikes in the load average values due to
per-cpu uninterruptible task counts being out of sync when accumulated
across all CPUs.
It's safe to update the per-cpu active count if we wake between sample
windows because any load that we left in 'calc_load_idle' will have
been zero'd when the idle load was folded in calc_global_load().
This issue was easy to reproduce before
commit 9d89c257df ("sched/fair: Rewrite runnable load and utilization average tracking"),
just by forking short-lived process pipelines built from ps(1) and
grep(1) in a loop. I'm unable to reproduce the spikes after that
commit, but the bug still seems to be present from code review.
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Fixes: commit 5167e8d ("sched/nohz: Rewrite and fix load-avg computation -- again")
Link: http://lkml.kernel.org/r/20170217120731.11868-2-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1e3d0c2c70cd3edb5deed186c5f5c75f2b84a633 upstream.
There are some missing error codes here so we accidentally return NULL
instead of an error pointer. It results in a NULL pointer dereference.
Fixes: df71837d50 ("[LSM-IPSec]: Security association restriction.")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e747f64336fc15e1c823344942923195b800aa1e upstream.
The default error code in pfkey_msg2xfrm_state() is -ENOBUFS. We
added a new call to security_xfrm_state_alloc() which sets "err" to zero,
so there are several places where we can return ERR_PTR(0) if kmalloc()
fails. The caller is expecting error pointers so it leads to a NULL
dereference.
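The fix pattern, sketched (not the exact upstream diff):

	err = security_xfrm_state_alloc(x, uctx);
	if (err)
		goto out;

	/* the call above left err == 0; restore the default so a
	 * failed allocation below returns a real error pointer */
	err = -ENOBUFS;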
Fixes: df71837d50 ("[LSM-IPSec]: Security association restriction.")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9b3eb54106cf6acd03f07cf0ab01c13676a226c2 upstream.
When CONFIG_XFRM_SUB_POLICY=y, xfrm_dst stores a copy of the flowi for
that dst. Unfortunately, the code that allocates and fills this copy
doesn't care about what type of flowi (flowi, flowi4, flowi6) gets
passed. In multiple code paths (from raw_sendmsg, from TCP when
replying to a FIN, in vxlan, geneve, and gre), the flowi that gets
passed to xfrm is actually an on-stack flowi4, so we end up reading
stuff from the stack past the end of the flowi4 struct.
Since xfrm_dst->origin isn't used anywhere following commit
ca116922af ("xfrm: Eliminate "fl" and "pol" args to
xfrm_bundle_ok()."), just get rid of it. xfrm_dst->partner isn't used
either, so get rid of that too.
Fixes: 9d6ec93801 ("ipv4: Use flowi4 in public route lookup interfaces.")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7292ae3d5a18fb922be496e6bb687647193569b4 upstream.
The latest change to the asm goto support check added passing of KBUILD_CFLAGS
to the compiler. When these flags reference gcc plugins that are not built yet,
the check fails.
When one runs "make bzImage" followed by "make modules", the kernel is always
built with HAVE_JUMP_LABEL disabled, while the modules are built depending on
CONFIG_JUMP_LABEL. If HAVE_JUMP_LABEL macro happens to be different, modules
are built with undefined references, e.g.:
ERROR: "static_key_slow_inc" [net/netfilter/xt_TEE.ko] undefined!
ERROR: "static_key_slow_dec" [net/netfilter/xt_TEE.ko] undefined!
ERROR: "static_key_slow_dec" [net/netfilter/nft_meta.ko] undefined!
ERROR: "static_key_slow_inc" [net/netfilter/nft_meta.ko] undefined!
ERROR: "nf_hooks_needed" [net/netfilter/ipvs/ip_vs.ko] undefined!
ERROR: "nf_hooks_needed" [net/ipv6/ipv6.ko] undefined!
ERROR: "static_key_count" [net/ipv6/ipv6.ko] undefined!
ERROR: "static_key_slow_inc" [net/ipv6/ipv6.ko] undefined!
This change moves the check before all these references are added
to KBUILD_CFLAGS. This is correct because subsequent KBUILD_CFLAGS
modifications are not relevant to this check.
Reported-by: Anton V. Boyarshinov <boyarsh@altlinux.org>
Fixes: 35f860f9ba6a ("jump label: pass kbuild_cflags when checking for asm goto support")
Signed-off-by: Gleb Fotengauer-Malinovskiy <glebfm@altlinux.org>
Signed-off-by: Dmitry V. Levin <ldv@altlinux.org>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Acked-by: David Lin <dtwlin@google.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>