commit 435019b48033138581a6171093b181fc6b4d3d30 upstream.
The allocated buffer was not freed if usb_submit_urb() failed.
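As an illustration of the pattern (a minimal sketch, not the driver's
actual code; the example_* names are made up), the transfer buffer must
be released on the submission error path as well:

  /* Sketch: free the transfer buffer when usb_submit_urb() fails. */
  static void example_write_complete(struct urb *urb)
  {
      kfree(urb->transfer_buffer);    /* normal path: freed on completion */
  }

  static int example_send_cmd(struct usb_device *udev, void *cmd, int len)
  {
      struct urb *urb;
      void *buf;
      int err;

      urb = usb_alloc_urb(0, GFP_KERNEL);
      if (!urb)
          return -ENOMEM;

      buf = kmemdup(cmd, len, GFP_KERNEL);
      if (!buf) {
          usb_free_urb(urb);
          return -ENOMEM;
      }

      usb_fill_bulk_urb(urb, udev, usb_sndbulkpipe(udev, 1),
                        buf, len, example_write_complete, NULL);

      err = usb_submit_urb(urb, GFP_KERNEL);
      if (err)
          kfree(buf);                 /* the kfree() this fix adds */

      usb_free_urb(urb);              /* drop our reference either way */
      return err;
  }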
Signed-off-by: Jimmy Assarsson <jimmyassarsson@gmail.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f6c23b174c3c96616514827407769cbcfc8005cf upstream.
After commit d75b1ade56 ("net: less interrupt masking in NAPI") napi
repoll is done only when work_done == budget.
So we need to return budget if there are still packets to receive.
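The rule can be illustrated with a minimal NAPI poll callback (a sketch
only; the example_* helpers are hypothetical):

  static int example_napi_poll(struct napi_struct *napi, int budget)
  {
      int work_done = example_clean_rx_ring(napi, budget);  /* hypothetical */

      if (work_done < budget) {
          /* Ring drained: stop polling and re-enable interrupts. */
          napi_complete(napi);
          example_enable_rx_irq(napi);                       /* hypothetical */
          return work_done;
      }

      /* Packets may still be pending: returning budget triggers a repoll. */
      return budget;
  }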
Signed-off-by: Oliver Stäbler <oliver.staebler@bytesatwork.ch>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d86b5672b1adb98b4cdd6fbf0224bbfb03db6e2e upstream.
Crashes in netfront_resume() and netback_changed() after a previous
failure in talk_to_netback() (e.g. when we fail to read the MAC address
from xenstore) were discovered to be unavoidable. The failure path in
talk_to_netback() unregisters and frees the netdev, but we don't reset
drvdata and we try accessing it after resume.
Fix the bug by removing the whole xen device completely with
device_unregister(), this guarantees we won't have any calls into netfront
after a failure.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1d5a31582ef046d3b233f0da1a68ae26519b2f0a upstream.
The variable temp is incorrectly being updated; instead it should be
offset, otherwise the loop just reads the same capability value and
loops forever. Thanks to Alan Stern for pointing out the correct fix to
my original fix. The fix also cleans up a clang warning:
drivers/usb/host/ehci-dbg.c:840:4: warning: Value stored to 'temp'
is never read
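In outline (a simplified sketch, not the ehci-dbg code verbatim), the
loop has to advance 'offset', the cursor into the extended capability
list, rather than the scratch variable:

  /* Buggy: 'offset' never changes, so the same capability is read forever. */
  while (offset && count--) {
      pci_read_config_dword(pdev, offset, &cap);
      /* ... print the capability ... */
      temp = (cap >> 8) & 0xff;      /* value stored to 'temp' is never read */
  }

  /* Fixed: follow the next-capability pointer in bits 15:8. */
  while (offset && count--) {
      pci_read_config_dword(pdev, offset, &cap);
      /* ... print the capability ... */
      offset = (cap >> 8) & 0xff;
  }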
Fixes: d49d431744 ("USB: misc ehci updates")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 446f666da9f019ce2ffd03800995487e79a91462 upstream.
USBDEVFS_URB_ISO_ASAP must be accepted only for ISO endpoints.
Improve sanity checking.
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Oliver Neukum <oneukum@suse.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 57999d1107c1e60c2ca7088f2ac0f819e2f554b3 upstream.
There used to be an integer overflow check in proc_do_submiturb() but
we removed it. It turns out that it's still required. The
uurb->buffer_length variable is a signed integer and it's controlled by
the user. It can lead to an integer overflow when we do:
num_sgs = DIV_ROUND_UP(uurb->buffer_length, USB_SG_SIZE);
If we strip away the macro then that line looks like this:
num_sgs = (uurb->buffer_length + USB_SG_SIZE - 1) / USB_SG_SIZE;
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It's the first addition which can overflow.
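A small userspace demonstration of the same arithmetic (illustrative
only; the limit check at the end is not the kernel's exact one):

  #include <limits.h>
  #include <stdio.h>

  #define USB_SG_SIZE 16384

  int main(void)
  {
      int buffer_length = INT_MAX;    /* user-controlled in usbfs */

      /* Intended result, computed without overflow: */
      long long want = ((long long)buffer_length + USB_SG_SIZE - 1) / USB_SG_SIZE;

      /* What the 32-bit addition wraps to (done in unsigned arithmetic
       * here so the demonstration itself stays well defined): */
      int wrapped = (int)((unsigned int)buffer_length + USB_SG_SIZE - 1) / USB_SG_SIZE;

      printf("intended %lld, after wrap %d\n", want, wrapped);

      /* A restored sanity check simply rejects such lengths up front. */
      return buffer_length < 0 || buffer_length > INT_MAX - USB_SG_SIZE;
  }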
Fixes: 1129d270cbfb ("USB: Increase usbfs transfer limit")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1129d270cbfbb7e2b1ec3dede4a13930bdd10e41 upstream.
Promote a variable keeping track of USB transfer memory usage to a
wider data type and allow for higher bandwidth transfers from a large
number of USB devices connected to a single host.
Signed-off-by: Mateusz Berezecki <mateuszb@fastmail.fm>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 81cf4a45360f70528f1f64ba018d61cb5767249a upstream.
As most BOS descriptors are longer than their header
'struct usb_dev_cap_header', comparing solely against it is not
sufficient to avoid out-of-bounds access to the BOS descriptors.
This patch adds descriptor type specific length check in
usb_get_bos_descriptor() to fix the issue.
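A rough sketch of the added check (the capability constants exist in
the USB headers, but the surrounding code here is simplified):

  /* After verifying bLength >= sizeof(struct usb_dev_cap_header), also
   * make sure the descriptor is long enough for its specific type
   * before casting to it. */
  switch (cap->bDevCapabilityType) {
  case USB_SS_CAP_TYPE:
      if (cap->bLength < USB_DT_USB_SS_CAP_SIZE)
          break;                  /* too short: skip instead of overreading */
      dev->bos->ss_cap = (struct usb_ss_cap_descriptor *)buffer;
      break;
  /* ... the other capability types are checked the same way ... */
  }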
Signed-off-by: Masakazu Mokuno <masakazu.mokuno@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 446fa3a95df1e8b78f25e1babc41e46edd200821 upstream.
The SuperspeedPlus Device Capability Descriptor has a variable size
depending on the number of sublink speed attributes.
This patch adds a macro to calculate that size. The macro takes one
argument, the Sublink Speed Attribute Count (SSAC) as reported by the
descriptor in bmAttributes[4:0].
See USB 3.1 9.6.2.5, Table 9-19.
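A sketch of the macro and one way it can be used while parsing (the
macro matches the descriptor layout described above; the parsing lines
are illustrative):

  /* 12 fixed bytes plus one 32-bit Sublink Speed Attribute per entry;
   * SSAC is zero-based, so the descriptor carries SSAC + 1 attributes. */
  #define USB_DT_USB_SSP_CAP_SIZE(ssac)   (16 + (ssac) * 4)

  ssac = le32_to_cpu(ssp_cap->bmAttributes) & USB_SSP_SUBLINK_SPEED_ATTRIBS;
  if (length >= USB_DT_USB_SSP_CAP_SIZE(ssac))
      dev->bos->ssp_cap = ssp_cap;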
Signed-off-by: John Youn <johnyoun@synopsys.com>
Signed-off-by: Felipe Balbi <balbi@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit faee822c5a7ab99de25cd34fcde3f8d37b6b9923 upstream.
USB 3.1 devices that support precision time measurement have an
additional PTM capability descriptor as part of the full BOS descriptor.
Look for this descriptor while parsing the BOS descriptor, and store it
in struct usb_hub_bos if it exists.
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 973593a960ddac0f14f0d8877d2d0abe0afda795 upstream.
Sometimes the USB device gets confused about the state of the initialization and
the connection fails. In particular, the device thinks that it's already set up
and running while the host thinks the device still needs to be configured. To
work around this issue, power-cycle the hub's output to issue a sort of "reset"
to the device. This makes the device restart its state machine and then the
initialization succeeds.
This fixes problems where the kernel reports a list of errors like this:
usb 1-1.3: device not accepting address 19, error -71
The end result is a non-functioning device. After this patch, the sequence
becomes like this:
usb 1-1.3: new high-speed USB device number 18 using ci_hdrc
usb 1-1.3: device not accepting address 18, error -71
usb 1-1.3: new high-speed USB device number 19 using ci_hdrc
usb 1-1.3: device not accepting address 19, error -71
usb 1-1-port3: attempt power cycle
usb 1-1.3: new high-speed USB device number 21 using ci_hdrc
usb-storage 1-1.3:1.2: USB Mass Storage device detected
Signed-off-by: Mike Looijmans <mike.looijmans@topic.nl>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This reverts commit c4baa4a587 which is
commit 28f5a8a7c033cbf3e32277f4cc9c6afd74f05300 upstream.
It shouldn't be applied to the 4.4-stable tree.
Ben and Alex write:
> Now that ocfs2_setattr() calls this outside of the inode locked region,
> what prevents another task adding a new dio request immediately
> afterward?
>
In kernel 4.6, firstly, we use inode_lock() in do_truncate() to
prevent another bio from being issued from this node.
Furthermore, we use ocfs2_rw_lock() and ocfs2_inode_lock() in
ocfs2_setattr() to guarantee that no more bios will be issued from the
other nodes in this cluster.
> Also, ocfs2_dio_end_io_write() was introduced in 4.6 and it looks like
> the dio completion path didn't previously take the inode lock. So it
> doesn't look this fix is needed in 3.18 or 4.4.
Yes, ocfs2_dio_end_io_write() was introduced in 4.6, and the problem
this patch fixes only exists in kernel 4.6 and later.
Reported-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Cc: Alex Chen <alex.chen@huawei.com>
Cc: Jun Piao <piaojun@huawei.com>
Cc: Joseph Qi <jiangqi903@gmail.com>
Cc: Changwei Ge <ge.changwei@h3c.com>
Cc: Mark Fasheh <mfasheh@versity.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 01f8902bcf3ff124d0aeb88a774180ebcec20ace ]
Fix the hardware setup of the multicast address hash:
- Never clear the hardware hash (to avoid packet loss)
- Construct the hash register values in software and then write once
to hardware
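In outline the change amounts to the pattern below (a sketch using the
usual networking helpers; treat the register names and the exact
CRC/bit selection as illustrative):

  u32 hash_high = 0, hash_low = 0;
  struct netdev_hw_addr *ha;

  /* Build the 64-bit group hash entirely in software ... */
  netdev_for_each_mc_addr(ha, ndev) {
      u32 crc = ether_crc(ETH_ALEN, ha->addr);
      unsigned int bit = (crc >> 26) & 0x3f;   /* 6 bits select a bin */

      if (bit > 31)
          hash_high |= 1 << (bit - 32);
      else
          hash_low |= 1 << bit;
  }

  /* ... then write each hash register exactly once, so the filter is
   * never transiently cleared and no packets are lost. */
  writel(hash_high, fep->hwp + FEC_GRP_HASH_TABLE_HIGH);
  writel(hash_low, fep->hwp + FEC_GRP_HASH_TABLE_LOW);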
Signed-off-by: Rui Sousa <rui.sousa@nxp.com>
Signed-off-by: Fugang Duan <fugang.duan@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit e2e004acc7cbe3c531e752a270a74e95cde3ea48 ]
This fixes a crash when running out of grant refs when creating many
queues across many netdevs.
* If creating queues fails (i.e. there are no grant refs available),
call xenbus_dev_fatal() to ensure that the xenbus device is set to the
closed state.
* If no queues are created, don't call xennet_disconnect_backend as
netdev->real_num_tx_queues will not have been set correctly.
* If setup_netfront() fails, ensure that all the queues created are
cleaned up, not just those that have been set up.
* If any queues were set up and an error occurs, call
xennet_destroy_queues() to clean up the napi context.
* If any fatal error occurs, unregister and destroy the netdev to avoid
leaving around a half setup network device.
Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 0911d0041c22922228ca52a977d7b0b0159fee4b ]
Some ->page_mkwrite handlers may return VM_FAULT_RETRY as their return
code (GFS2 or Lustre can definitely do this). However, VM_FAULT_RETRY
from ->page_mkwrite is completely unhandled by the mm code and results
in locking and writeably mapping the page, which definitely is not what
the caller wanted.
Fix Lustre and block_page_mkwrite_ret() used by other filesystems
(notably GFS2) to return VM_FAULT_NOPAGE instead, which results in
bailing out from the fault code; the CPU then retries the access, and we
fault again, effectively doing what the handler wanted.
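A condensed sketch of the helper after the change (only the return-code
mapping is shown; the exact set of error codes handled is simplified):

  static inline int block_page_mkwrite_return(int err)
  {
      if (err == 0)
          return VM_FAULT_LOCKED;
      if (err == -EFAULT || err == -EAGAIN)
          return VM_FAULT_NOPAGE;   /* never leak VM_FAULT_RETRY upward */
      if (err == -ENOMEM)
          return VM_FAULT_OOM;
      return VM_FAULT_SIGBUS;       /* -ENOSPC, -EIO, ... */
  }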
Link: http://lkml.kernel.org/r/20170203150729.15863-1-jack@suse.cz
Signed-off-by: Jan Kara <jack@suse.cz>
Reported-by: Al Viro <viro@ZenIV.linux.org.uk>
Reviewed-by: Jinshan Xiong <jinshan.xiong@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 56d806222ace4c3aeae516cd7a855340fb2839d8 ]
sock_reset_flag() maps to __clear_bit() not the atomic version clear_bit().
Thus, we need smp_mb(), smp_mb__after_atomic() is not sufficient.
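The difference matters because smp_mb__after_atomic() only orders
against atomic bitops; after a plain __clear_bit() a full barrier is
needed before the following test. Schematically (a sketch of the write
space path, not the exact code):

  if (sock_flag(sk, SOCK_QUEUE_SHRUNK)) {
      sock_reset_flag(sk, SOCK_QUEUE_SHRUNK);   /* __clear_bit(), not atomic */
      smp_mb();        /* was smp_mb__after_atomic(), which is not enough */
      if (sk->sk_socket &&
          test_bit(SOCK_NOSPACE, &sk->sk_socket->flags))
          tcp_new_space(sk);
  }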
Fixes: 3c7151275c ("tcp: add memory barriers to write space paths")
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Jason Baron <jbaron@akamai.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Reported-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 91539eb1fda2d530d3b268eef542c5414e54bf1a ]
The static bug finder EBA (http://www.iagoabal.eu/eba/) reported the
following double-lock bug:
Double lock:
1. spin_lock_irqsave(pch->lock, flags) at pl330_free_chan_resources:2236;
2. call to function `pl330_release_channel' immediately after;
3. call to function `dma_pl330_rqcb' in line 1753;
4. spin_lock_irqsave(pch->lock, flags) at dma_pl330_rqcb:1505.
I have fixed it as suggested by Marek Szyprowski.
First, I have replaced `pch->lock' with `pl330->lock' in functions
`pl330_alloc_chan_resources' and `pl330_free_chan_resources'. This avoids
the double-lock by acquiring a different lock than `dma_pl330_rqcb'.
NOTE that, as a result, `pl330_free_chan_resources' executes
`list_splice_tail_init' on `pch->work_list' under lock `pl330->lock',
whereas in the rest of the code `pch->work_list' is protected by
`pch->lock'. I don't know if this may cause race conditions. Similarly
`pch->cyclic' is written by `pl330_alloc_chan_resources' under
`pl330->lock' but read by `pl330_tx_submit' under `pch->lock'.
Second, I have removed locking from `pl330_request_channel' and
`pl330_release_channel' functions. Function `pl330_request_channel' is
only called from `pl330_alloc_chan_resources', so the lock is already
held. Function `pl330_release_channel' is called from
`pl330_free_chan_resources', which already holds the lock, and from
`pl330_del'. Function `pl330_del' is called in an error path of
`pl330_probe' and at the end of `pl330_remove', but I assume that there
cannot be concurrent accesses to the protected data at those points.
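Schematically, the lock change in the alloc/free paths looks like this
(a sketch showing only the lock choice):

  /* Before: both paths took pch->lock, so the dma_pl330_rqcb() callback,
   * which also takes pch->lock, could deadlock. */
  spin_lock_irqsave(&pch->lock, flags);
  pl330_release_channel(pch->thread);      /* may end up in dma_pl330_rqcb() */
  spin_unlock_irqrestore(&pch->lock, flags);

  /* After: use the controller-wide lock here, so dma_pl330_rqcb() can
   * still take pch->lock without recursing on it. */
  spin_lock_irqsave(&pl330->lock, flags);
  pl330_release_channel(pch->thread);
  spin_unlock_irqrestore(&pl330->lock, flags);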
Signed-off-by: Iago Abal <mail@iagoabal.eu>
Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 35e22e49a5d6a741ebe7f2dd280b2052c3003ef7 ]
In tipc_server_stop(), we iterate over the connections using the
server's idr_in_use as the limiting factor. We ignore the fact that this
variable is decremented in tipc_close_conn(), leading to a premature
exit.
In this commit, we iterate until we have no connections left.
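The resulting loop is essentially (a sketch close to the fixed code):

  spin_lock_bh(&s->idr_lock);
  for (id = 0; s->idr_in_use; id++) {
      con = idr_find(&s->conn_idr, id);
      if (con) {
          spin_unlock_bh(&s->idr_lock);
          tipc_close_conn(con);
          spin_lock_bh(&s->idr_lock);
      }
  }
  spin_unlock_bh(&s->idr_lock);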
Acked-by: Ying Xue <ying.xue@windriver.com>
Acked-by: Jon Maloy <jon.maloy@ericsson.com>
Tested-by: John Thompson <thompa.atl@gmail.com>
Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 0e73fc9a56f22f2eec4d2b2910c649f7af67b74d ]
The comparison on the timeout can lead to an array overrun
read on sctp_timer_tbl because of an off-by-one error. Fix
this by using < instead of <= and also compare to the array
size rather than SCTP_EVENT_TIMEOUT_MAX.
Fixes CoverityScan CID#1397639 ("Out-of-bounds read")
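The corrected check amounts to (a sketch; surrounding code trimmed):

  /* Before: off-by-one, can read one element past the end of the table. */
  if (id.timeout <= SCTP_EVENT_TIMEOUT_MAX)
      return sctp_timer_tbl[id.timeout];

  /* After: bound the index by the table itself. */
  if (id.timeout < ARRAY_SIZE(sctp_timer_tbl))
      return sctp_timer_tbl[id.timeout];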
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 11d8bcef7a0399e1d2519f207fd575fc404306b4 ]
DECON_TV requires STANDALONE_UPDATE after output enabling, otherwise it does
not start. This change is neutral for DECON.
Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit c6180a6237174f481dc856ed6e890d8196b6f0fb ]
If the server reboots multiple times, the client should rely on the
server to tell it that it cannot reclaim state as per section 9.6.3.4
in RFC7530 and section 8.4.2.1 in RFC5661.
Currently, the client is being too conservative, and is assuming that
if the server reboots while state recovery is in progress, then it must
ignore state that was not recovered before the reboot.
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 63e41226afc3f7a044b70325566fa86ac3142538 ]
When a VCPU blocks (WFI) and has programmed the vtimer, we program a
soft timer to expire in the future to wake up the vcpu thread when
appropriate. Because such a wake up involves a vcpu kick, and the
timer expire function can get called from interrupt context, and the
kick may sleep, we have to schedule the kick in the work function.
The work function currently has a warning that gets raised if it turns
out that the timer shouldn't fire when it's run, which was added because
the idea was that in that case the work should never have been cancelled.
However, it turns out that this whole thing is racy and we can get
spurious warnings. The problem is that we clear the armed flag in the
work function, which may run in parallel with the
kvm_timer_unschedule->timer_disarm() call. This results in a possible
situation where the timer_disarm() call does not call
cancel_work_sync(), which effectively synchronizes the completion of the
work function with running the VCPU. As a result, the VCPU thread
proceeds before the work function completes, causing changes to the
timer state such that kvm_timer_should_fire(vcpu) returns false in the
work function.
All we do in the work function is to kick the VCPU, and an occasional
rare extra kick never harmed anyone. Since the race above is extremely
rare, we don't bother checking if the race happens but simply remove the
check and the clearing of the armed flag from the work function.
Reported-by: Matthias Brugger <mbrugger@suse.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 4b09ec4b14a168bf2c687e1f598140c3c11e9222 ]
I have reports of a crash that look like __fput() was called twice for
a NFSv4.0 file. It seems possible that the state manager could try to
reclaim a lock and take a reference on the fl->fl_file at the same time the
file is being released if, during the close(), a signal interrupts the wait
for outstanding IO while removing locks, which then skips the removal
of that lock.
Since 83bfff23e9 ("nfs4: have do_vfs_lock take an inode pointer") has
removed the need to traverse fl->fl_file->f_inode in nfs4_lock_done(),
taking that reference is no longer necessary.
Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 18a3ed59d09cf81a6447aadf6931bf0c9ffec5e0 ]
Remove the Rx overflow log messages: in an environment where logging
results in network traffic, logging may cause further overflows.
Fixes: c156633f13 ("Renesas Ethernet AVB driver proper")
Signed-off-by: Kazuya Mizuguchi <kazuya.mizuguchi.ks@renesas.com>
[simon: reworked changelog]
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Acked-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit ce7e40c432ba84da104438f6799d460a4cad41bc ]
ipddp_route structs contain alignment padding so kernel heap memory
is leaked when they are copied to user space in
ipddp_ioctl(SIOCFINDIPDDPRT). Change kmalloc() to kzalloc() to clear
that memory.
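A minimal sketch of the difference (struct layout and surrounding code
illustrative):

  struct ipddp_route *rt;

  /* Before: padding bytes inside the struct keep stale heap contents,
   * which a later copy_to_user() of the whole struct leaks. */
  rt = kmalloc(sizeof(*rt), GFP_KERNEL);

  /* After: the whole allocation, padding included, starts out zeroed. */
  rt = kzalloc(sizeof(*rt), GFP_KERNEL);
  if (!rt)
      return -ENOMEM;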
Signed-off-by: Vlad Tsyrklevich <vlad@tsyrklevich.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 93e246f783e6bd1bc64fdfbfe68b18161f69b28e ]
vti6 interface is registered before the rtnl_link_ops block
is attached. As a result the resulting RTM_NEWLINK is missing
IFLA_INFO_KIND. Re-order attachment of rtnl_link_ops block to fix.
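Schematically (a sketch of the ordering only):

  /* Before: the netdev is registered first, so the RTM_NEWLINK generated
   * inside register_netdevice() is built without rtnl_link_ops and thus
   * without IFLA_INFO_KIND. */
  err = register_netdevice(dev);
  ...
  dev->rtnl_link_ops = &vti6_link_ops;

  /* After: attach the ops block first, then register. */
  dev->rtnl_link_ops = &vti6_link_ops;
  err = register_netdevice(dev);
  if (err < 0)
      goto out;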
Signed-off-by: Dave Forster <dforster@brocade.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 657279778af54f35e54b07b6687918f254a2992c ]
OMAP1510, OMAP5910 and OMAP310 have only 9 logical channels.
OMAP1610, OMAP5912, OMAP1710, OMAP730, and OMAP850 have 16 logical channels
available.
The hardwired 17 for lch_count must have been used to cover the 16 + 1
dedicated LCD channel; in reality we can only use 9 or 16 channels.
The d->chan_count is not used by the omap-dma stack, so we can skip the
setup. chan_count was configured to the number of logical channels and
not the actual number of physical channels anyway.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 38e5a85562a6cd911fc26d951d576551a688574c ]
Inserting the TSB means adding an extra 8 bytes in front of the packet
that are going to be used as metadata information by the TDMA engine,
but stripped off, so it does not really help with the packet padding.
For some odd packet sizes that fall below the 60 byte payload (e.g. ARP)
we can end up padding them after the TSB insertion, thus making them 64
bytes, but with the TDMA stripping off the first 8 bytes, they could
still be smaller than the 64 bytes required to ingress the switch.
Fix this by swapping the padding and TSB insertion, guaranteeing that
the packets have the right sizes.
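In outline the transmit path goes from "insert TSB, then pad" to "pad,
then insert TSB" (a sketch; helper names follow the driver but the
surrounding code is simplified):

  /* Pad the payload to the minimum frame size first ... */
  if (skb_put_padto(skb, ETH_ZLEN + ETH_FCS_LEN))
      return NETDEV_TX_OK;          /* skb is freed on failure */

  /* ... then prepend the 8-byte transmit status block, which the TDMA
   * engine strips again before the packet reaches the switch. */
  if (priv->tsb_en) {
      skb = bcm_sysport_insert_tsb(skb, dev);
      if (!skb)
          return NETDEV_TX_OK;
  }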
Fixes: 80105befdb ("net: systemport: add Broadcom SYSTEMPORT Ethernet MAC driver")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit bb7da333d0a9f3bddc08f84187b7579a3f68fd24 ]
Since we need to pad our packets, utilize skb_put_padto() which
increases skb->len by how much we need to pad, allowing us to eliminate
the test on skb->len right below.
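The conversion is essentially (sketch):

  /* Before: skb_padto() pads the buffer but leaves skb->len untouched,
   * so the length still had to be checked/adjusted by hand. */
  if (skb_padto(skb, ETH_ZLEN + ETH_FCS_LEN))
      return NETDEV_TX_OK;
  skb_len = skb->len < ETH_ZLEN + ETH_FCS_LEN ?
            ETH_ZLEN + ETH_FCS_LEN : skb->len;

  /* After: skb_put_padto() also advances skb->len, so the test goes away. */
  if (skb_put_padto(skb, ETH_ZLEN + ETH_FCS_LEN))
      return NETDEV_TX_OK;          /* skb is freed on failure */
  skb_len = skb->len;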
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 22905582f6dd4bbd0c370fe5732c607452010c04 ]
Command perf test -v 16 (Setup struct perf_event_attr test) always
reports success even if the test case fails. It works correctly if you
also specify -F (for don't fork).
[root@s35lp76 perf]# ./perf test -v 16
15: Setup struct perf_event_attr :
--- start ---
running './tests/attr/test-record-no-delay'
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.002 MB /tmp/tmp4E1h7R/perf.data
(1 samples) ]
expected task=0, got 1
expected precise_ip=0, got 3
expected wakeup_events=1, got 0
FAILED './tests/attr/test-record-no-delay' - match failure
test child finished with 0
---- end ----
Setup struct perf_event_attr: Ok
The reason for the wrong error reporting is the return value of the
system() library call. It is called in run_dir() in file tests/attr.c
and returns the exit status, in the above case 0xff00.
This value is given as parameter to the exit() function which can only
handle values 0-0xff.
The child process terminates with exit value of 0 and the parent does
not detect any error.
This patch corrects the error reporting and prints the correct test
result.
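A small userspace illustration of the underlying problem (not the perf
code itself): system() returns a wait status that must be unpacked with
WEXITSTATUS() before it is handed to exit(), otherwise a failing child
can look like success once the value is truncated to 8 bits.

  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/wait.h>

  int main(void)
  {
      int status = system("exit 1");   /* child shell exits with code 1 */

      /* Wrong: the raw wait status is 0x0100 here; exit(status) keeps
       * only the low 8 bits, so the parent would report 0 (success). */

      /* Right: unpack the wait status first. */
      if (WIFEXITED(status))
          printf("child exit code: %d\n", WEXITSTATUS(status));

      return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
  }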
Signed-off-by: Thomas-Mich Richter <tmricht@linux.vnet.ibm.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Thomas-Mich Richter <tmricht@linux.vnet.ibm.com>
LPU-Reference: 20170913081209.39570-2-tmricht@linux.vnet.ibm.com
Link: http://lkml.kernel.org/n/tip-rdube6rfcjsr1nzue72c7lqn@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit b00bebbc301c8e1f74f230dc82282e56b7e7a6db ]
When the kernel configuration options SMP, PREEMPT and DEBUG_PREEMPT are enabled,
echo 1 >/proc/sys/kernel/sysrq
echo p >/proc/sysrq-trigger
kernel will print call trace as below:
sysrq: SysRq : Show Regs
BUG: using __this_cpu_read() in preemptible [00000000] code: sh/435
caller is __this_cpu_preempt_check+0x18/0x20
Call trace:
[<ffffff8008088e80>] dump_backtrace+0x0/0x1d0
[<ffffff8008089074>] show_stack+0x24/0x30
[<ffffff8008447970>] dump_stack+0x90/0xb0
[<ffffff8008463950>] check_preemption_disabled+0x100/0x108
[<ffffff8008463998>] __this_cpu_preempt_check+0x18/0x20
[<ffffff80084c9194>] sysrq_handle_showregs+0x1c/0x40
[<ffffff80084c9c7c>] __handle_sysrq+0x12c/0x1a0
[<ffffff80084ca140>] write_sysrq_trigger+0x60/0x70
[<ffffff8008251e00>] proc_reg_write+0x90/0xd0
[<ffffff80081f1788>] __vfs_write+0x48/0x90
[<ffffff80081f241c>] vfs_write+0xa4/0x190
[<ffffff80081f3354>] SyS_write+0x54/0xb0
[<ffffff80080833f0>] el0_svc_naked+0x24/0x28
This can be seen on a common board like an r-pi3.
This happens because when we echo p >/proc/sysrq-trigger,
get_irq_regs() is called outside of IRQ context; if preemption is
enabled in this situation, the kernel will print the call trace. Since
many prior discussions on the mailing lists have made it clear that
get_irq_regs() either just returns NULL or stale data when used outside
of IRQ context, we simply avoid calling it outside of IRQ context.
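The resulting handler change can be sketched as follows (simplified;
details trimmed):

  static void sysrq_handle_showregs(int key)
  {
      struct pt_regs *regs = NULL;

      /* Only ask for the interrupted registers when we really are in
       * IRQ context; anywhere else get_irq_regs() is meaningless and
       * trips the preemption check shown above. */
      if (in_irq())
          regs = get_irq_regs();
      if (regs)
          show_regs(regs);
      perf_event_print_debug();
  }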
Signed-off-by: Jibin Xu <jibin.xu@windriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit da20ab35180780e4a6eadc804544f1fa967f3567 ]
We do not have tracepoints for sys_modify_ldt() because we define
it directly instead of using the normal SYSCALL_DEFINEx() macros.
However, there is a reason sys_modify_ldt() does not use the macros:
it has an 'int' return type instead of 'unsigned long'. This is
a bug, but it's a bug cemented in the ABI.
What does this mean? If we return -EINVAL from a function that
returns 'int', we have 0x00000000ffffffea in %rax. But, if we
return -EINVAL from a function returning 'unsigned long', we end
up with 0xffffffffffffffea in %rax, which is wrong.
To work around this and maintain the 'int' behavior while using the
SYSCALL_DEFINEx() macros, we add a cast to 'unsigned int' in both
implementations of sys_modify_ldt().
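A userspace-compilable illustration of the difference (purely
demonstrative, not kernel code):

  #include <stdio.h>

  static int ret_int(void)             { return -22; }   /* -EINVAL */
  static unsigned long ret_ulong(void) { return -22; }

  int main(void)
  {
      /* An 'int' result widens to 0x00000000ffffffea in a 64-bit register. */
      unsigned long as_int = (unsigned int)ret_int();

      /* An 'unsigned long' result is 0xffffffffffffffea instead. */
      unsigned long as_ulong = ret_ulong();

      printf("int-style:   %#lx\nulong-style: %#lx\n", as_int, as_ulong);

      /* The fix keeps the historical 'int' ABI by casting the value to
       * 'unsigned int' inside the SYSCALL_DEFINE wrapper. */
      return 0;
  }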
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Brian Gerst <brgerst@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20171018172107.1A79C532@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 0ab84da2e076948c49d36197ee7d254125c53eab ]
The upper four bits of the XR17V35x fractional divisor register (DLD)
control general chip function (RS-485 direction pin polarity, multidrop
mode, XON/XOFF parity check, and fast IR mode). Don't allow these bits
to be clobbered when setting the baudrate.
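The update therefore has to be a read-modify-write that preserves the
upper nibble (a sketch; the DLD register offset/name is illustrative):

  /* DLD[3:0] holds the fractional divisor, DLD[7:4] the feature bits
   * (RS-485 polarity, multidrop, XON/XOFF parity check, fast IR). */
  dld = serial_port_in(port, UART_EXAR_DLD);
  dld &= 0xf0;                  /* keep the feature bits ... */
  dld |= quot_frac & 0x0f;      /* ... update only the fractional divisor */
  serial_port_out(port, UART_EXAR_DLD, dld);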
Signed-off-by: Aaron Sierra <asierra@xes-inc.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit ce035409bfa892a2fabb89720b542e1b335c3426 ]
If devm_extcon_dev_allocate() fails, we should disable the clock before returning.
Found by Linux Driver Verification project (linuxtesting.org).
Signed-off-by: Alexey Khoroshilov <khoroshilov@ispras.ru>
Fixes: 860d2686fd ("usb: phy: tahvo: Use devm_extcon_dev_[allocate|register]() and replace deprecated API")
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 3236a965486ba0c6043cf2c7b51943d8b382ae29 ]
This driver's ->rs485_config callback checks if SER_RS485_RTS_ON_SEND
and SER_RS485_RTS_AFTER_SEND have the same value. If they do, it means
the user has passed in invalid data with the TIOCSRS485 ioctl()
since RTS must have a different polarity when sending and when not
sending. In this case, rs485 mode is not enabled (the RS485_URA bit
is not set in the RS485 Enable Register) and this is supposed to be
signaled back to the user by clearing the SER_RS485_ENABLED bit in
struct serial_rs485 ... except a missing tilde character is preventing
that from happening.
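The one-character fix is easiest to see side by side (sketch):

  /* Before: masks flags with the enable bit, so SER_RS485_ENABLED stays
   * set (and every other flag is dropped) instead of being cleared. */
  rs485->flags &= SER_RS485_ENABLED;

  /* After: actually clear the enable bit to signal rs485 was not enabled. */
  rs485->flags &= ~SER_RS485_ENABLED;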
Fixes: 28e3fb6c4d ("serial: Add support for Fintek F81216A LPC to 4 UART")
Cc: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Cc: "Ji-Ze Hong (Peter Hong)" <hpeter@gmail.com>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit fec8f5ae1715a01c72ad52cb2ecd8aacaf142302 ]
We weren't testing the .limit and .limit_in_pages fields very well.
Add more tests.
This addition seems to trigger the "bits 16:19 are undefined" issue
that was fixed in an earlier patch. I think that, at least on my
CPU, the high nibble of the limit ends up in LAR bits 16:19.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/5601c15ea9b3113d288953fd2838b18bedf6bc67.1509794321.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 48070c73058be6de9c0d754d441ed7092dfc8f12 ]
As of today QEMU does not provide the AIS facility to its guest. This
prevents Linux guests from using PCI devices as the ais facility is
checked during init. As this is just a performance optimization, we can
move the ais check into the code where we need it (calling the SIC
instruction). This is used at initialization and on interrupt. Both
places do not require any serialization, so we can simply skip the
instruction.
Since we will now get all interrupts, we can also avoid the 2nd scan.
As we can have multiple interrupts in parallel we might trigger spurious
irqs more often for the non-AIS case but the core code can handle that.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Reviewed-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Acked-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit ebe7c0a7be92bbd34c6ff5b55810546a0ee05bee ]
The hash_setup function always sets the hash_setup_done flag, even
when the hash algorithm is invalid. This prevents the default hash
algorithm defined as CONFIG_IMA_DEFAULT_HASH from being used.
This patch sets hash_setup_done flag only for valid hash algorithms.
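The intended behaviour can be sketched as follows (simplified; the real
hash_setup() also special-cases the legacy 'ima' template):

  static int __init hash_setup(char *str)
  {
      int i;

      if (hash_setup_done)
          return 1;

      i = match_string(hash_algo_name, HASH_ALGO__LAST, str);
      if (i < 0)
          return 1;   /* unknown algorithm: keep CONFIG_IMA_DEFAULT_HASH
                       * and, crucially, do not set hash_setup_done */

      ima_hash_algo = i;
      hash_setup_done = 1;
      return 1;
  }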
Fixes: e7a2ad7eb6 "ima: enable support for larger default filedata hash algorithms"
Signed-off-by: Boshi Wang <wangboshi@huawei.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c654b21ede93845863597de9ad774fd30db5f2ab upstream.
Quectel BG96 is a Qualcomm MDM9206 based IoT modem, supporting both
CAT-M and NB-IoT. The tested hardware is a BG96 mounted on the Quectel
development board (EVB). The USB id is added to option.c to allow
DIAG, GPS, AT and modem communication with the BG96.
Signed-off-by: Sebastian Sjoholm <ssjoholm@mac.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8d9047f8b967ce6181fd824ae922978e1b055cc0 upstream.
Free the data structures required for runtime instrumentation from
arch_release_task_struct(). This allows us to simplify the code a bit,
and also makes the semantics a bit easier: arch_release_task_struct()
is never called from the task that is being removed.
In addition this allows us to get rid of exit_thread() in a later patch.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3bfd1300abfe3adb18e84a89d97a0e82a22124bb upstream.
This device will be used in future Amazon EC2 instances as the primary
serial port (i.e., data sent to this port will be available via the
GetConsoleOuput [1] EC2 API).
[1] http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_GetConsoleOutput.html
Signed-off-by: Matt Wilson <msw@amazon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e43a12f1793ae1fe006e26fe9327a8840a92233c upstream.
The KY-688 USB 3.1 Type-C Hub internally uses a Genesys Logic hub to
connect to a Realtek r8153.
Similar to commit 7496cfe5431f2 ("usb: quirks: Add no-lpm quirk for
Moshi USB to Ethernet Adapter"), no-lpm can make the r8153 ethernet
work.
Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7fee72d5e8f1e7b8d8212e28291b1a0243ecf2f1 upstream.
We've been adding this as a quirk on a per device basis hoping that
newer disk enclosures would do better, but that has not happened,
so simply apply this quirk to all Seagate devices.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e393aa2446150536929140739f09c6ecbcbea7f0 upstream.
When we send a read request and hit clean data in the cache device,
there is a situation called a cache read race in bcache (see the comment
at the tail of cache_look_up(); the following explanation is copied from
there):
and we could then end up reading the wrong data. We guard against this
by checking (in bch_cache_read_endio()) if the pointer is stale again;
if so, we treat it as an error (s->iop.error = -EINTR) and reread from
the backing device (but we don't pass that error up anywhere)
It should be noted that cache read races happen under normal
circumstances, not only when the SSD fails; they are counted and shown
in /sys/fs/bcache/XXX/internal/cache_read_races.
Without this patch, when we use writeback mode, we will never reread
from the backing device when a cache read race happens, until the whole
cache device is clean, because the condition
(s->recoverable && (dc && !atomic_read(&dc->has_dirty))) is false in
cached_dev_read_error(). In this situation, s->iop.error (= -EINTR)
will be passed up, so in the end the user receives -EINTR when the bio
ends; this is not suitable, and is surprising to the upper application.
In this patch, we use s->read_dirty_data to judge whether the read
request hit dirty data in the cache device; it is safe to reread data
from the backing device when the read request hit only clean data. This
can not only handle the cache read race, but also recover data when a
read request from the cache device fails.
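The recovery condition change can be sketched as (simplified; helper
details illustrative):

  /* cached_dev_read_error(), simplified: */
  if (s->recoverable && !s->read_dirty_data) {
      /* Only clean data was involved, so rereading from the backing
       * device cannot return stale contents; this also covers the cache
       * read race case, where s->iop.error was set to -EINTR. */
      s->iop.error = 0;
      /* ... resubmit the original bio to the backing device ... */
  } else {
      /* Dirty data may be newer on the cache device: pass the error up. */
  }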
[edited by mlyle to fix up whitespace, commit log title, comment
spelling]
Fixes: d59b23795933 ("bcache: only permit to recovery read error when cache device is clean")
Signed-off-by: Hua Rui <huarui.dev@gmail.com>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Reviewed-by: Coly Li <colyli@suse.de>
Signed-off-by: Michael Lyle <mlyle@lyle.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d59b23795933678c9638fd20c942d2b4f3cd6185 upstream.
When bcache does read I/Os, for example in writeback or writethrough
mode, if a read request on the cache device fails, bcache will try to
recover the request by reading from the cached device. If the data on
the cached device is not synced with the cache device, then the
requester will get stale data.
For a critical storage system like a database, providing stale data
from recovery may result in application-level data corruption, which is
unacceptable.
With this patch, for a failed read request in writeback or writethrough
mode, recovery of a recoverable read request only happens when the cache
device is clean. That is to say, all data on the cached device is up to
date. For other cache modes in bcache, read requests will never hit
cached_dev_read_error(), so they don't need this patch.
Please note, because the cache mode can be switched arbitrarily at run
time, a writethrough mode might be switched from a writeback mode.
Therefore checking dc->has_dirty in writethrough mode still makes sense.
Changelog:
V4: Fix parens error pointed out by Michael Lyle.
v3: Per response from Kent Overstreet, recovering stale data is a bug to
fix and an option to permit it is unnecessary, so in this version the
sysfs file is removed.
v2: rename sysfs entry from allow_stale_data_on_failure to
allow_stale_data_on_failure, and fix the confusing commit log.
v1: initial patch posted.
[small change to patch comment spelling by mlyle]
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Michael Lyle <mlyle@lyle.org>
Reported-by: Arne Wolf <awolf@lenovo.com>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Cc: Kent Overstreet <kent.overstreet@gmail.com>
Cc: Nix <nix@esperi.org.uk>
Cc: Kai Krakow <hurikhan77@gmail.com>
Cc: Eric Wheeler <bcache@lists.ewheeler.net>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>