When the local IPv4 endpoint is the wildcard (0.0.0.0), the prefix length is
correctly set, i.e. 64 if the address is a link-local one or 96 if the address
is a v4-mapped one.
But when the local endpoint is specified, the prefix length is set to 128 for
both kinds of addresses. This patch fixes the wrong prefix length.
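A minimal sketch of the intended prefix-length selection, assuming the address
type is derived with ipv6_addr_type() (the helper name and layout here are
illustrative, not the patch hunks):

/* Illustrative helper: pick the prefix length from the address type. */
static unsigned int tunnel_prefix_len(const struct in6_addr *addr)
{
        int atype = ipv6_addr_type(addr);

        if (atype & IPV6_ADDR_LINKLOCAL)        /* fe80::/64 */
                return 64;
        if (atype & IPV6_ADDR_MAPPED)           /* ::ffff:0:0/96 */
                return 96;
        return 128;                             /* host route otherwise */
}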
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
These strings come from a copy_from_user() and there is no way to be
sure they are NUL terminated.
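A minimal sketch of the usual fix for this class of bug (the buffer and length
names below are assumptions, not the actual hunk): force-terminate the copied
buffer before treating it as a C string.

/* Illustrative only: copy a user-supplied name and make sure it is a
 * proper C string before it is parsed or printed.
 */
static int copy_name_from_user(char *dst, size_t dst_len,
                               const char __user *src)
{
        if (copy_from_user(dst, src, dst_len))
                return -EFAULT;
        dst[dst_len - 1] = '\0';        /* user data may lack a terminator */
        return 0;
}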
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bug: The fallback device is created in sit_init_net and assumed to be
freed in sit_exit_net. First, it is dereferenced in that function, in
sit_destroy_tunnels:
struct net *net = dev_net(sitn->fb_tunnel_dev);
Prior to this, rtnl_unlink_register has removed all devices that match
rtnl_link_ops == sit_link_ops.
Commit 205983c437 added the line
+ sitn->fb_tunnel_dev->rtnl_link_ops = &sit_link_ops;
which causes the fallback device to match here and be freed before it
is last dereferenced.
Fix: This commit adds an explicit .dellink callback to sit_link_ops
that skips deallocation at rtnl_unlink_register for the fallback
device. This mechanism is comparable to the one in ip_tunnel.
It also modifies sit_destroy_tunnels and its only caller sit_exit_net
to avoid the offending dereference in the first place. That double
lookup is more complicated than required.
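A minimal sketch of such a .dellink callback, modeled on the ip_tunnel
approach (names follow sit.c but this is an illustration, not the exact hunk):

/* Illustrative .dellink: queue every tunnel for unregistration except
 * the per-namespace fallback device, which sit_exit_net still owns.
 */
static void ipip6_dellink(struct net_device *dev, struct list_head *head)
{
        struct net *net = dev_net(dev);
        struct sit_net *sitn = net_generic(net, sit_net_id);

        if (dev != sitn->fb_tunnel_dev)
                unregister_netdevice_queue(dev, head);
}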
Test: The bug is only triggered when CONFIG_NET_NS is enabled. It
causes a GPF only when CONFIG_DEBUG_SLAB is enabled. Verified that
this bug exists at the mentioned commit, at davem-net HEAD and at
3.11.y HEAD. Verified that it went away after applying this patch.
Fixes: 205983c437 ("sit: allow to use rtnl ops on fb tunnel")
Signed-off-by: Willem de Bruijn <willemb@google.com>
Acked-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The driver fails to check the result of DMA mapping, which results in
the following warning (with the kernel config option "CONFIG_DMA_API_DEBUG" enabled):
------------[ cut here ]------------
WARNING: at lib/dma-debug.c:937 check_unmap+0x43c/0x7d8()
fec 2188000.ethernet: DMA-API: device driver failed to check map
error[device address=0x00000000383a8040] [size=2048 bytes] [mapped as single]
Modules linked in:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.17-16827-g9cdb0ba-dirty #188
[<80013c4c>] (unwind_backtrace+0x0/0xf8) from [<80011704>] (show_stack+0x10/0x14)
[<80011704>] (show_stack+0x10/0x14) from [<80025614>] (warn_slowpath_common+0x4c/0x6c)
[<80025614>] (warn_slowpath_common+0x4c/0x6c) from [<800256c8>] (warn_slowpath_fmt+0x30/0x40)
[<800256c8>] (warn_slowpath_fmt+0x30/0x40) from [<8026bfdc>] (check_unmap+0x43c/0x7d8)
[<8026bfdc>] (check_unmap+0x43c/0x7d8) from [<8026c584>] (debug_dma_unmap_page+0x6c/0x78)
[<8026c584>] (debug_dma_unmap_page+0x6c/0x78) from [<8038049c>] (fec_enet_rx_napi+0x254/0x8a8)
[<8038049c>] (fec_enet_rx_napi+0x254/0x8a8) from [<804dc8c0>] (net_rx_action+0x94/0x160)
[<804dc8c0>] (net_rx_action+0x94/0x160) from [<8002c758>] (__do_softirq+0xe8/0x1d0)
[<8002c758>] (__do_softirq+0xe8/0x1d0) from [<8002c8e8>] (do_softirq+0x4c/0x58)
[<8002c8e8>] (do_softirq+0x4c/0x58) from [<8002cb50>] (irq_exit+0x90/0xc8)
[<8002cb50>] (irq_exit+0x90/0xc8) from [<8000ea88>] (handle_IRQ+0x3c/0x94)
[<8000ea88>] (handle_IRQ+0x3c/0x94) from [<8000855c>] (gic_handle_irq+0x28/0x5c)
[<8000855c>] (gic_handle_irq+0x28/0x5c) from [<8000de00>] (__irq_svc+0x40/0x50)
Exception stack(0x815a5f38 to 0x815a5f80)
5f20: 815a5f80 3b9aca00
5f40: 0fe52383 00000002 0dd8950e 00000002 81e7b080 00000000 00000000 815ac4d8
5f60: 806032ec 00000000 00000017 815a5f80 80059028 8041fc4c 60000013 ffffffff
[<8000de00>] (__irq_svc+0x40/0x50) from [<8041fc4c>] (cpuidle_enter_state+0x50/0xf0)
[<8041fc4c>] (cpuidle_enter_state+0x50/0xf0) from [<8041fd94>] (cpuidle_idle_call+0xa8/0x14c)
[<8041fd94>] (cpuidle_idle_call+0xa8/0x14c) from [<8000edac>] (arch_cpu_idle+0x10/0x4c)
[<8000edac>] (arch_cpu_idle+0x10/0x4c) from [<800582f8>] (cpu_startup_entry+0x60/0x130)
[<800582f8>] (cpu_startup_entry+0x60/0x130) from [<80bc7a48>] (start_kernel+0x2d0/0x328)
[<80bc7a48>] (start_kernel+0x2d0/0x328) from [<10008074>] (0x10008074)
---[ end trace c6edec32436e0042 ]---
dma-debug added new interfaces to debug dma mapping errors; please refer
to: http://lwn.net/Articles/516640/
After a dma mapping, the driver must call dma_mapping_error() to check for a
mapping error; otherwise map_err_type is always MAP_ERR_NOT_CHECKED, check_unmap()
considers the mapping unchecked and dumps the error message above. So, add
dma_mapping_error() checks to fix the WARNING.
Since RX DMA buffers are reused and the driver copies them into an skb,
fec_enet_rx() should not map or unmap; use dma_sync_single_for_cpu()/dma_sync_single_for_device()
instead of dma_map_single()/dma_unmap_single().
There is another potential issue: fec_enet_rx() passes the DMA address to __va().
Physical and DMA addresses are *not* the same thing. They may differ if the device
is behind an IOMMU or bounce buffering was required, or just because there is a fixed
offset between the device and host physical addresses. Also fix it in this patch.
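A minimal sketch of the resulting pattern (descriptor and length names are
illustrative, not the actual fec hunks): check dma_mapping_error() right after
mapping on the transmit side, and only sync the reused receive buffers instead
of remapping them.

/* TX: map and verify before handing the buffer to the hardware. */
addr = dma_map_single(&fep->pdev->dev, skb->data, skb->len, DMA_TO_DEVICE);
if (dma_mapping_error(&fep->pdev->dev, addr)) {
        if (net_ratelimit())
                netdev_err(ndev, "Tx DMA map failed\n");
        dev_kfree_skb_any(skb);
        return NETDEV_TX_OK;
}

/* RX: the buffer stays mapped; hand ownership to the CPU, copy the
 * frame into an skb, then give the buffer back to the device.
 */
dma_sync_single_for_cpu(&fep->pdev->dev, bdp->cbd_bufaddr,
                        FEC_ENET_RX_FRSIZE, DMA_FROM_DEVICE);
/* ... memcpy the received frame into the skb here ... */
dma_sync_single_for_device(&fep->pdev->dev, bdp->cbd_bufaddr,
                           FEC_ENET_RX_FRSIZE, DMA_FROM_DEVICE);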
=============================================
V2: add net_ratelimit() to limit map err message.
use dma_sync_single_for_cpu() instead of dma_map_single().
fix the issue of passing DMA addresses to __va() to get the virtual address.
V1: initial send
=============================================
Signed-off-by: Fugang Duan <B38611@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When a transport recovers because of a newly arrived sack, SCTP should
iterate over its transport_list to locate the __two__ most recently used
transports and set them as active_path and retran_path respectively. The
existing code does not find the two properly - given the following list:
[most-recent] -> [2nd-most-recent] -> ...
both active_path and retran_path would be set to the 1st element.
The bug happens when:
1) multi-homing
2) failure/partial_failure transport recovers
Both active_path and retran_path would be set to the same most-recent one;
in other words, retran_path would not take its role - an end user might not
even notice this issue.
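A minimal sketch of the intended selection: one pass over the transport list
that tracks both the most recent and the second most recent transports (field
names follow sctp_transport but are assumptions here, not the actual hunks).

struct sctp_transport *t, *first = NULL, *second = NULL;

list_for_each_entry(t, &asoc->peer.transport_addr_list, transports) {
        if (!first ||
            time_after(t->last_time_heard, first->last_time_heard)) {
                second = first;
                first = t;
        } else if (!second ||
                   time_after(t->last_time_heard, second->last_time_heard)) {
                second = t;
        }
}

asoc->peer.active_path = first;
asoc->peer.retran_path = second ? : first;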
Signed-off-by: Chang Xiangzhong <changxiangzhong@gmail.com>
Acked-by: Vlad Yasevich <vyasevich@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We had some reports of crashes using TCP fastopen, and Dave Jones
gave a nice stack trace pointing to the error.
The issue is that tcp_get_metrics() must not be called with a NULL dst.
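A minimal sketch of the kind of guard the changelog implies (the surrounding
fastopen helper is shown only for context and is an assumption, not the exact
hunk):

/* Illustrative: bail out before tcp_get_metrics() when the socket has
 * no cached dst entry, instead of dereferencing a NULL pointer.
 */
static struct tcp_metrics_block *fastopen_get_metrics(struct sock *sk)
{
        struct dst_entry *dst = __sk_dst_get(sk);

        if (!dst)
                return NULL;
        return tcp_get_metrics(sk, dst, true);
}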
Fixes: 1fe4c481ba ("net-tcp: Fast Open client - cookie cache")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Dave Jones <davej@redhat.com>
Cc: Yuchung Cheng <ycheng@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Tested-by: Dave Jones <davej@fedoraproject.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch fixes two race conditions between bond_store_updelay/downdelay
and bond_store_miimon which could lead to a division by zero: miimon can be
set to 0 while updelay/downdelay are being set, and thus slip past the zero
check at the beginning. The division by zero happens because updelay/downdelay
are stored as new_value / bond->params.miimon. Use rtnl to synchronize with
the miimon setting.
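A minimal sketch of that synchronization (store-function skeleton only; the
parsing and error handling are simplified assumptions):

/* Illustrative: take rtnl so miimon cannot change to 0 between the
 * check and the division below.
 */
static ssize_t bonding_store_updelay(struct device *d,
                                     struct device_attribute *attr,
                                     const char *buf, size_t count)
{
        struct bonding *bond = to_bond(d);
        int new_value, ret;

        if (!rtnl_trylock())
                return restart_syscall();

        ret = kstrtoint(buf, 10, &new_value);
        if (ret || !bond->params.miimon) {
                ret = ret ? : -EPERM;
                goto out;
        }
        bond->params.updelay = new_value / bond->params.miimon;
        ret = count;
out:
        rtnl_unlock();
        return ret;
}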
CC: Jay Vosburgh <fubar@us.ibm.com>
CC: Andy Gospodarek <andy@greyhouse.net>
CC: Veaceslav Falico <vfalico@redhat.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Acked-by: Veaceslav Falico <vfalico@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
After commit c9eeec26e3 ("tcp: TSQ can use a dynamic limit"), several
users reported throughput regressions, notably on mvneta and wifi
adapters.
802.11 AMPDU requires a fair amount of queueing to be effective.
This patch partially reverts the change done in tcp_write_xmit()
so that the minimal amount is sysctl_tcp_limit_output_bytes.
It also removes the use of this sysctl while building skbs stored in the
write queue, as TSO autosizing does the right thing anyway.
Users with well-behaved NICs and a correct qdisc (like sch_fq)
can then lower the default sysctl_tcp_limit_output_bytes value from
128KB to 8KB.
This new usage of sysctl_tcp_limit_output_bytes permits driver
authors to check how their driver performs when/if the value is set
to a minimum of 4KB.
Normally, line rate for a single TCP flow should be possible,
but some drivers rely on timers to perform TX completion and
too long TX completion delays prevent reaching full throughput.
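A minimal sketch of the tcp_write_xmit() change being described (a fragment
from inside the transmit loop; the exact computation of the pacing-derived
part is an assumption here):

/* Illustrative: keep the dynamic, pacing-derived limit, but never let
 * it drop below sysctl_tcp_limit_output_bytes, so queue-hungry links
 * such as 802.11 AMPDU still get enough data in flight.
 */
limit = max_t(unsigned int, sysctl_tcp_limit_output_bytes,
              sk->sk_pacing_rate >> 10);
if (atomic_read(&sk->sk_wmem_alloc) > limit) {
        set_bit(TSQ_THROTTLED, &tp->tsq_flags);
        break;
}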
Fixes: c9eeec26e3 ("tcp: TSQ can use a dynamic limit")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Sujith Manoharan <sujith@msujith.org>
Reported-by: Arnaud Ebalard <arno@natisbad.org>
Tested-by: Sujith Manoharan <sujith@msujith.org>
Cc: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ben Hutchings says:
====================
net_tstamp: Validate hwtstamp_config completely before applying it
This series fixes very similar bugs in 6 implementations of
SIOCSHWTSTAMP.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
hwtstamp_ioctl() should validate all fields of hwtstamp_config
before making any changes. Currently it sets the TX configuration
before validating the rx_filter field.
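A minimal sketch of the validate-then-apply pattern this series moves the
drivers to (generic skeleton; the per-driver register writes and supported
filter sets are assumptions):

/* Illustrative only: reject the whole hwtstamp_config before touching
 * any hardware or driver state, so a bad rx_filter cannot leave the TX
 * side half-configured.
 */
struct hwtstamp_config config;

if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
        return -EFAULT;
if (config.flags)
        return -EINVAL;

switch (config.tx_type) {
case HWTSTAMP_TX_OFF:
case HWTSTAMP_TX_ON:
        break;
default:
        return -ERANGE;
}

switch (config.rx_filter) {
case HWTSTAMP_FILTER_NONE:
case HWTSTAMP_FILTER_ALL:
        break;
default:
        return -ERANGE;
}

/* Only now program the hardware and remember the configuration. */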
Untested as I don't have a cross-compiler to hand.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
cpsw_hwtstamp_ioctl() should validate all fields of hwtstamp_config,
and the hardware version, before making any changes. Currently it
sets the TX configuration before validating the rx_filter field
or that the hardware supports timestamping.
Also correct the error code for hardware versions that don't
support timestamping. ENOTSUPP is used by the NFS implementation
and is not part of userland API; we want EOPNOTSUPP (which glibc
also calls ENOTSUP, with one 'P').
Untested as I don't have a cross-compiler to hand.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Acked-by: Mugunthan V N <mugunthanvnm@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
stmmac_hwtstamp_ioctl() should validate all fields of hwtstamp_config
before making any changes. Currently it sets the TX configuration
before validating the rx_filter field.
Compile-tested only.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
hwtstamp_ioctl() should validate all fields of hwtstamp_config
before making any changes. Currently it sets the TX configuration
before validating the rx_filter field.
Compile-tested only.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
e1000e_hwtstamp_ioctl() should validate all fields of hwtstamp_config
before making any changes. Currently it copies the configuration to
the e1000_adapter structure before validating it at all.
Change e1000e_config_hwtstamp() to take a pointer to the
hwstamp_config and to copy the config after validating it.
Compile-tested only.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
tg3_hwtstamp_ioctl() should validate all fields of hwtstamp_config
before making any changes. Currently it sets the TX configuration
before validating the rx_filter field.
Compile-tested only.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Acked-by: Nithin Nayak Sujir <nsujir@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We should call vlan_vid_del for all vids at nbp_vlan_flush to prevent
vid_info->refcount from being leaked when detaching a bridge port.
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
We should use wrapper functions vlan_vid_[add/del] instead of
ndo_vlan_rx_[add/kill]_vid. Otherwise, we might not be able to communicate
over a vlan interface in certain situations.
Example of problematic case:
vconfig add eth0 10
brctl addif br0 eth0
bridge vlan add dev eth0 vid 10
bridge vlan del dev eth0 vid 10
brctl delif br0 eth0
In this case, we cannot communicate via eth0.10 because vlan 10 is
filtered by a NIC that has the vlan filtering feature.
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use msecs_to_jiffies() for these calculations, as it takes the different HZ
configurations into account when converting the timer shot, and it also
makes the code more readable.
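A minimal sketch of the idiom (the timer name is from lib/random32.c, the
interval value is purely illustrative):

/* Express the expiry in milliseconds and let msecs_to_jiffies() handle
 * whatever HZ the kernel was built with.
 */
mod_timer(&seed_timer, jiffies + msecs_to_jiffies(40000));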
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
We only call that in functions annotated with __init, so add the __init
prefix to prandom_start_seed_timer() as well, so that the kernel can
make use of this hint and we can possibly free up resources after its
use. And since it's an internal function, rename it to
__prandom_start_seed_timer().
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
We currently use hdr_len as a hint of the head length, which is advertised by
the guest. But when the guest advertises a very big value, it can lead to a 64K+
kmalloc() allocation, which has a very high probability of failure when host
memory is fragmented or under heavy stress. A huge hdr_len also reduces the
effect of zerocopy, or even disables it if a gso skb is linearized in the guest.
To solve those issues, this patch introduces an upper limit (PAGE_SIZE) on the
head, which guarantees an order-0 allocation each time.
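A minimal sketch of the cap (the guest-supplied field and variable names are
assumptions, not the actual hunks):

/* Illustrative: clamp the guest-advertised header length to one page so
 * the skb head is always an order-0 allocation.
 */
linear = min_t(size_t, vnet_hdr.hdr_len, PAGE_SIZE);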
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We currently use hdr_len as a hint of the head length, which is advertised by
the guest. But when the guest advertises a very big value, it can lead to a 64K+
kmalloc() allocation, which has a very high probability of failure when host
memory is fragmented or under heavy stress. A huge hdr_len also reduces the
effect of zerocopy, or even disables it if a gso skb is linearized in the guest.
To solve those issues, this patch introduces an upper limit (PAGE_SIZE) on the
head, which guarantees an order-0 allocation each time.
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
I noticed that we export a way too high value for the maxfilesize
attribute while debugging a client issue. The issue didn't turn
out to be related to it, but I think we should export it, so that
clients can limit what write sizes they accept before hitting
the server.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
When accessing the lower_file pointer located in private_data of
eCryptfs files, there is no need to check to see if the private_data
pointer has been initialized to a non-NULL value. The file->private_data
and file->private_data->lower_file pointers are always initialized to
non-NULL values in ecryptfs_open().
This change quiets a Smatch warning:
CHECK /var/scm/kernel/linux/fs/ecryptfs/file.c
fs/ecryptfs/file.c:321 ecryptfs_unlocked_ioctl() error: potential NULL dereference 'lower_file'.
fs/ecryptfs/file.c:335 ecryptfs_compat_ioctl() error: potential NULL dereference 'lower_file'.
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Geyslan G. Bem <geyslan@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Since the Ivy Bridge memory controller is very similar to Sandy Bridge's, it's
wiser to modify sb_edac to support both instead of creating another
driver.
[m.chehab@samsung.com: Fix CodingStyle]
Signed-off-by: Aristeu Rozanski <arozansk@redhat.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
By default, when tasks are specified (i.e. -p, -t or -u options)
per-thread mmaps are created.
Add an option to override that and force per-cpu mmaps.
Further comments by peterz:
So this option allows -t/-p/-u to create one buffer per cpu and attach
all the various thread/process/user tasks' counters to that one
buffer?
As opposed to the current state where each such counter would have its
own buffer.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1383313899-15987-7-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
You can't pass a demangled name into "perf probe" because of the special chars:
./perf probe -f -x /tmp/a.out 'foo(int)'
Semantic error :There is non-digit char in line number.
And you can't even pass it without demangling (because it searches for the
symbol in the DSO with demangle=true):
./perf probe -f -x /tmp/a.out _Z3fooi
no symbols found in /tmp/a.out, maybe install a debug package?
However:
nm /tmp/a.out | grep foo
000000000040056d T _Z3fooi
After this patch, using the following command:
./perf probe -f --no-demangle -x /tmp/a.out _Z3fooi
probe will be successfully added.
Signed-off-by: Azat Khuzhin <a3at.mail@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1382947464-31266-1-git-send-email-a3at.mail@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Despite requesting two memory resources, called 'base' and 'high_base', the
driver explicitly uses only the former. The latter is used implicitly
by addressing at offset +0x200, which in practice accesses high_base.
In other words, the current driver breaks if the second memory resource
is ever placed at an offset different from +0x200.
This patch fixes the above by defining the registers with their offset from
high_base, and using high_base explicitly where appropriate.
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This mmio address is checked at probe-time, which makes this test
redundant. Let's just remove it.
Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The implementation of ioat3_irq_reinit has two bugs:
1/ The mode is incorrectly set to MSIX for the MSI case
2/ The 'dev_id' parameter to free_irq is the ioatdma_device, not the channel,
in the msi and intx cases
Include a small cleanup to clarify that ioat3_irq_reinit is only for bwd
hardware.
Cc: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Once we have determined that we will not get all of our desired msix
vectors, there is no point in attempting a single msix allocation. The
driver will already need to read registers to determine the source of
the interrupt, so the fact that it is msix is moot. Fall back directly
to msi.
Reported-by: Alexander Gordeev <agordeev@redhat.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
With 24 disks and an ioatdma instance with 16 source support there is a
corner case where the driver needs to be careful to account for the
number of implied sources in the continuation case.
Also bump the default case to test more than 16 sources now that it
triggers different paths in offload drivers.
Cc: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Use a single cache for all sed allocations. No need to make it per
channel. This also avoids the slub_debug warnings for multiple caches
with the same name.
Switch to dmam_pool_create() to fix leaking the dma pools on
initialization failure; this also lets us kill ioat3_dma_remove().
Cc: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
When performing continuations there are implied sources that need to be
added to the source count. Quoting dma_set_maxpq:
/* dma_maxpq - reduce maxpq in the face of continued operations
* @dma - dma device with PQ capability
* @flags - to check if DMA_PREP_CONTINUE and DMA_PREP_PQ_DISABLE_P are set
*
* When an engine does not support native continuation we need 3 extra
* source slots to reuse P and Q with the following coefficients:
* 1/ {00} * P : remove P from Q', but use it as a source for P'
* 2/ {01} * Q : use Q to continue Q' calculation
* 3/ {00} * Q : subtract Q from P' to cancel (2)
*
* In the case where P is disabled we only need 1 extra source:
* 1/ {01} * Q : use Q to continue Q' calculation
*/
...fix the selection of the 16 source path to take these implied sources
into account.
Note this also kills the BUG_ON(src_cnt < 9) check in
__ioat3_prep_pq16_lock(). Besides not accounting for implied sources,
the check is redundant given we already made the path selection.
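For example, following the comment above, an engine with a 16 source hardware
limit that must continue a P+Q computation without native continuation support
only has room for 16 - 3 = 13 new data sources (or 16 - 1 = 15 when P is
disabled), which is exactly the adjustment the 16 source path selection has to
make.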
Cc: <stable@vger.kernel.org>
Cc: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The array used to look up the sed pool based on the number of sources
(pq16_idx_to_sedi) has 16 entries and expects a max source index.
However, we pass the total source count which runs off the end of the
array when src_cnt == 16. The minimal fix is to just pass src_cnt-1,
but given we know the source count is > 8 we can just calculate the sed
pool by (src_cnt - 2) >> 3.
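For example, src_cnt == 9 gives (9 - 2) >> 3 = 0 while src_cnt == 16 gives
(16 - 2) >> 3 = 1, so the index never runs past the end of the table for the
9..16 source counts handled by this path.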
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: <stable@vger.kernel.org>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Commit 48a9db4 (3.11) removed the memset op in the xor selftest for ioatdma.
The issue is that with the removal of that op, it never replaced the memset
with a CPU memset. The memory being operated on is expected to be zeroes but
was not. This is causing the xor selftest to fail.
Cc: <stable@vger.kernel.org>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Verbose mode turns on test success messages; by default we only output
test summaries and failure results.
Also cleaned up some stray quotes, leftover from putting the result
message format string all on one line.
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Remove the open-coded unmap and add coverage for this core functionality
to dmatest.
Also fix up a couple of places where we leaked dma mappings.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Allows for scripting test runs by module load / unload. Prevent module
load from returning until 'iterations' (finite) tests have completed, or
cause reads of the 'wait' parameter in sysfs to pause until the tests
are done.
Also killed the local waitqueue since we can just let the thread exit
naturally as long as we hold a reference.
Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Add iops and throughput to the summary output.
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Towards enabling dmatest to check out performance, add a 'noverify' mode.
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
There is no need for dmatest to drain the entropy pool.
It would be nice to one day have repeatable runs, but that would need a
larger rework to synchronize and order calls to the rng across test
threads.
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Currently we only test raid channels that happen to also have 'copy'
capability. Search for capable channels that do not have DMA_MEMCPY.
Note the return value from run_threaded_test never really made sense
because it could return errors after successfully starting tests. We
already have the test results per channel so missing channels can be
detected at that time.
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
1/ move 'run' control to a module parameter so we can do:
modprobe dmatest run=1. With this moved, the rest of the debugfs
boilerplate can go.
2/ Fix parameter initialization. Previously the test was being started
without taking the parameters into account in the built-in case.
Also killed off the '__' version of some routines. The new rule is to just
hold the lock when calling a *threaded_test() routine.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
...now that we have a common pr_fmt.
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
For long running tests the result tracking leads to a memory leak for the "ok"
results, and for the failures the kernel log should be sufficient. Provide a
uniform format for error messages so they can be easily parsed and remove the
debugfs file.
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This reverts commit d86b2f298e.
The kernel log buffer is sufficient for collecting test results. The
current logging OOMs the machine on long running tests, and usually only
the first error is relevant. It is better to stop on error and parse
the kernel output. If output volume becomes an issue we can always
investigate using trace messages.
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Remove support for DMA unmapping from drivers as it is no longer
needed (DMA core code is now handling it).
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
[djbw: fix up chan2parent() unused warning in drivers/dma/dw/core.c]
Signed-off-by: Dan Williams <dan.j.williams@intel.com>