Commit graph

9106 commits

Alexander Duyck
42b17f0955 fm10k/igb/ixgbe: Replace __skb_alloc_page with dev_alloc_page
The Intel drivers were pretty much just using the plain vanilla GFP flags
in their calls to __skb_alloc_page, so this change makes them use
dev_alloc_page, which simply uses GFP_ATOMIC for the gfp_flags value.

Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Matthew Vick <matthew.vick@intel.com>
Cc: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 00:00:14 -05:00
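
A minimal C sketch of the replacement pattern this commit describes; the
example_* names and the rx_buffer structure are placeholders, not the actual
Intel driver code:

#include <linux/gfp.h>
#include <linux/skbuff.h>

struct example_rx_buffer {		/* illustrative only */
	struct page *page;
};

static bool example_alloc_mapped_page(struct example_rx_buffer *bi)
{
	/* before: page = __skb_alloc_page(GFP_ATOMIC, NULL); */
	struct page *page = dev_alloc_page();	/* uses GFP_ATOMIC internally */

	if (unlikely(!page))
		return false;

	bi->page = page;
	return true;
}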
Alexander Duyck
aa9cd31c3f cxgb4/cxgb4vf: Replace __skb_alloc_page with __dev_alloc_page
Drop the bloated use of __skb_alloc_page and replace it with
__dev_alloc_page.  In addition, update the one other spot that
allocates a page so that it allocates with the correct flags.

Cc: Hariprasad S <hariprasad@chelsio.com>
Cc: Casey Leedom <leedom@chelsio.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-12 00:00:13 -05:00
Florian Fainelli
dbd479db79 net: bcmgenet: apply MII configuration in bcmgenet_open()
In case an interface has been brought down before entering S3 and then
brought up out of S3, all the initialization done during
bcmgenet_probe() by bcmgenet_mii_init() calling bcmgenet_mii_config() is
just lost, since register contents are restored to their reset values.

Re-apply this configuration any time we call bcmgenet_open() to make sure
our port multiplexer is properly configured to match the PHY interface.

Since we are now calling bcmgenet_mii_config() every time bcmgenet_open()
is called, make sure we only print the message during initialization,
so as not to pollute the console.

Fixes: b6e978e504 ("net: bcmgenet: add suspend/resume callbacks")
Fixes: 1c1008c793 ("net: bcmgenet: add main driver file")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 18:23:23 -05:00
Florian Fainelli
c96e731c93 net: bcmgenet: connect and disconnect from the PHY state machine
phy_disconnect() is the only way to guarantee that we are not going to
schedule more work on the PHY state machine workqueue for that
particular PHY device.

This fixes an issue where a network interface that was suspended prior to a
system suspend/resume cycle would then be resumed as part of
mdio_bus_resume(). Since the GENET interface clocks would have been
disabled, this basically resulted in bus errors appearing when the
GENET driver's adjust_link() callback was invoked.

Fixes: b6e978e504 ("net: bcmgenet: add suspend/resume callbacks")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 18:23:23 -05:00
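
A hedged sketch of the connect/disconnect pairing this implies in a driver's
close path; the example_* names are placeholders, not the bcmgenet code:

#include <linux/netdevice.h>
#include <linux/phy.h>

struct example_priv {
	struct phy_device *phydev;	/* attached via phy_connect() in open */
};

static int example_close(struct net_device *dev)
{
	struct example_priv *priv = netdev_priv(dev);

	/* guarantees no further PHY state machine work is scheduled for us,
	 * so adjust_link() cannot run against disabled clocks */
	phy_disconnect(priv->phydev);

	return 0;
}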
Stefan Wahren
93ecd2607f net: qualcomm: Fix dependency
This patch removes the dependency of the VENDOR entry and fixes
the QCA7000 one.

Signed-off-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 18:12:32 -05:00
Daniel Borkmann
48eb5b9c3d ixgbe: phy: fix uninitialized status in ixgbe_setup_phy_link_tnx
The status variable is never initialized; it can carry an arbitrary value
from the stack and thus may make the function fail.

Fixes: e90dd26456 ("ixgbe: Make return values more direct")
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Emil Tantilov <emil.s.tantilov@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 16:31:54 -05:00
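
An illustrative sketch of the bug class being fixed, with hypothetical names;
without the initializer, a path that never assigns status returns stack
garbage to the caller:

#include <linux/types.h>

struct example_hw;			/* opaque handle, for illustration only */

static s32 example_setup_phy_link(struct example_hw *hw)
{
	s32 status = 0;			/* the fix: a defined value up front */

	/* ... speed/autoneg setup that may set status on some paths only ... */

	return status;
}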
David S. Miller
2387e3b59f Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next
Jeff Kirsher says:

====================
Intel Wired LAN Driver Updates 2014-11-11

This series contains updates to i40e, i40evf and ixgbe.

Kamil updated the i40e and i40evf drivers to poll the firmware more slowly,
since we were polling faster than the firmware could respond.

Shannon updates i40e to add a check to keep the service_task from
running the periodic tasks more than once per second, while still
allowing quick action to service the events.

Jesse cleans up the throttle rate code by fixing the minimum interrupt
throttle rate and removing some unused defines.

Mitch makes the early init admin queue message receive code more robust
by handling messages in a loop and ignoring those that we are not
interested in.  This also gets rid of some scary log messages that
really do not indicate a problem.

Don provides several ixgbe patches. The first fixes an issue with the X540
completion timeout, where on topologies including a few levels of PCIe
switching the X540 can run into an unexpected completion error.  He also
cleans up the functionality in ixgbe_ndo_set_vf_vlan() in preparation for
future work, and adds support for the X550 MACs to the driver.

v2:
 - Remove code comment in patch 01 of the series, based on feedback from
   David Laight
 - Updated the "goto" to "break" statements in patch 06 of the series,
   based on feedback from Sergei Shtylyov
 - Initialized the variable err due to the possibility of use before
   being assigned a value in patch 07 of the series
 - Added patch "ixgbe: add helper function for setting RSS key in
   preparation of X550" since it is needed for the addition of X550 MAC
   support
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 16:26:42 -05:00
Andy Shevchenko
b2e2f0c779 stmmac: split to core library and probe drivers
Instead of registering the platform and PCI drivers in one module, let's move
the necessary bits to where they belong. During this procedure we convert the
module registration part to use the module_*_driver() macros, which makes the
code simpler.

From now on the driver consists of three parts: a core library, a PCI driver,
and a platform driver.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 14:34:39 -05:00
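
A minimal sketch of the module_*_driver() pattern mentioned above, using
placeholder names rather than the actual stmmac symbols:

#include <linux/module.h>
#include <linux/platform_device.h>

static int example_pltfr_probe(struct platform_device *pdev)
{
	return 0;		/* the shared core library would be set up here */
}

static int example_pltfr_remove(struct platform_device *pdev)
{
	return 0;
}

static struct platform_driver example_pltfr_driver = {
	.probe	= example_pltfr_probe,
	.remove	= example_pltfr_remove,
	.driver	= {
		.name = "example-eth",
	},
};

/* expands to the module_init()/module_exit() boilerplate that registers
 * and unregisters the platform driver */
module_platform_driver(example_pltfr_driver);

MODULE_LICENSE("GPL");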
Or Gerlitz
f4a1edd561 net/mlx4_en: Advertize encapsulation offloads features only when VXLAN tunnel is set
Currently we only support Large-Send and TX checksum offloads for
encapsulated traffic of type VXLAN. We must make sure to advertise
these offloads up to the stack only when a VXLAN tunnel is set.

Failing to do so would mislead the networking stack into assuming
that the driver can offload the internal TX checksum for GRE packets
and other such traffic, which would be buggy.

Reported-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 13:24:45 -05:00
Shani Michaeli
f8c6455bb0 net/mlx4_en: Extend checksum offloading by CHECKSUM COMPLETE
When processing received traffic, pass CHECKSUM_COMPLETE status to the
stack, with the calculated checksum for non-TCP/UDP packets (such
as GRE or ICMP).

Although the stack expects a checksum which doesn't include the pseudo
header, the HW adds it. To address that, we subtract the pseudo
header checksum from the checksum value provided by the HW.

In the IPv6 case, we also compute/add the IP header checksum, which
is not added by the HW for such packets.

Cc: Jerry Chu <hkchu@google.com>
Signed-off-by: Shani Michaeli <shanim@mellanox.com>
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 13:20:02 -05:00
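
A hedged, IPv4-only sketch of the pseudo-header subtraction described above;
it is illustrative, not the mlx4_en implementation:

#include <linux/ip.h>
#include <linux/skbuff.h>
#include <net/checksum.h>

static void example_set_csum_complete(struct sk_buff *skb, __wsum hw_csum)
{
	struct iphdr *iph = ip_hdr(skb);
	__u32 l4_len = ntohs(iph->tot_len) - (iph->ihl << 2);
	/* checksum of the pseudo header the HW already folded in */
	__wsum pseudo = csum_tcpudp_nofold(iph->saddr, iph->daddr,
					   l4_len, iph->protocol, 0);

	skb->csum = csum_sub(hw_csum, pseudo);
	skb->ip_summed = CHECKSUM_COMPLETE;
}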
Shani Michaeli
dd65beac48 net/mlx4_en: Extend usage of napi_gro_frags
We can call napi_gro_frags for all the received traffic regardless
of the checksum status. Specifically, received packets whose status
is CHECKSUM_NONE (and soon to be added CHECKSUM_COMPLETE)
are eligible for napi_gro_frags as well.

Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Shani Michaeli <shanim@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-11 13:20:02 -05:00
Don Skidmore
d1b849b9e9 ixgbe: add helper function for setting RSS key in preparation of X550
Split off the setting of the RSS key into its own function.  This
will help when we add support for X550 which can have different
RSS keys per pool.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 06:43:23 -08:00
Don Skidmore
9a75a1ac77 ixgbe: Add new support for X550 MAC's
This patch adds the new MAC defines and fits them into the switch
cases throughout the driver.  New functionality and enablement support will
be added in following patches.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 06:18:56 -08:00
Don Skidmore
8d697e7e54 ixgbe: cleanup move setting PFQDE.HIDE_VLAN to support function.
Move the setting of drop enable to a support function.  This not only makes
the code more readable but is also prep for following patches that add
additional MAC support.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 06:18:49 -08:00
Don Skidmore
2b509c0cd2 ixgbe: cleanup ixgbe_ndo_set_vf_vlan
Clean up functionality in ixgbe_ndo_set_vf_vlan that will simplify later
patches.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 06:18:36 -08:00
Don Skidmore
71bde60191 ixgbe: fix X540 Completion timeout
On topologies including a few levels of PCIe switching, the X540 can run into
an unexpected completion error.  We get around this by waiting, after enabling
loopback, a sufficient amount of time until the Tx Data Fetch is sent.  We then
poll the pending transaction bit to ensure we received the completion.  Only
then do we go on to clear the buffers.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 06:05:27 -08:00
Mitch Williams
cc0529271f i40evf: don't use more queues than CPUs
It's kind of silly to configure and attempt to use a bunch of queue
pairs when you're running on a single (virtual) CPU. Instead of
unconditionally configuring all of the queues that the PF gives us,
clamp the number of queue pairs to the number of CPUs.

Change-ID: I321714c9e15072ee76de8f95ab9a81f86ed347d1
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Patrick Lu <patrick.lu@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 06:02:00 -08:00
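
A minimal sketch of the clamping idea, assuming a hypothetical helper name:

#include <linux/cpumask.h>
#include <linux/kernel.h>

static int example_pick_num_queue_pairs(int pairs_offered_by_pf)
{
	/* no benefit in configuring more queue pairs than (virtual) CPUs */
	return min_t(int, pairs_offered_by_pf, (int)num_online_cpus());
}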
Mitch Williams
f8d4db35e8 i40evf: make early init processing more robust
In early init, if we get an unexpected message from the PF (such as link
status), we just kick an error back to the init task, causing it to
restart its state machine and delaying initialization.

Make the early init AQ message receive code more robust by handling
messages in a loop, and ignoring those that we aren't interested in.
This also gets rid of some scary log messages that really didn't
indicate a problem.

Change-ID: I620e8c72e49c49c665ef33eeab2425dd10e721cf
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Patrick Lu <patrick.lu@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 06:01:54 -08:00
Jesse Brandeburg
79442d38b3 i40e: clean up throttle rate code
The interrupt throttle rate minimum is actually 2us, so
fix that define and while we are there, remove some unused defines.

Change some strings in the function to be a bit less wrappy, and
express the correct limits.

Change-ID: I96829bbc77935e0b57c6f0fc1439fb4152b2960a
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Patrick Lu <patrick.lu@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 06:01:48 -08:00
Shannon Nelson
215367171b i40e: don't do link_status or stats collection on every ARQ
The ARQ events cause a service_task execution, and we do a link_status
check and full stats gathering for each service_task.  However, when
there are a lot of ARQ events, such as when doing an NVM update, we end up
doing 10's if not 100's of these per second, thereby heavily abusing the
PCI bus and especially the Firmware.  This patch adds a check to keep the
service_task from running these periodic tasks more than once per second,
while still allowing quick action to service the events.

Change-ID: Iec7670c37bfae9791c43fec26df48aea7f70b33e
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Patrick Lu <patrick.lu@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 05:52:46 -08:00
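
A hedged sketch of a once-per-second gate like the one described, with
illustrative structure and function names:

#include <linux/jiffies.h>

struct example_pf {
	unsigned long service_timer_previous;	/* jiffies at the last periodic run */
};

static bool example_periodic_work_due(struct example_pf *pf)
{
	if (time_before(jiffies, pf->service_timer_previous + HZ))
		return false;		/* ran less than a second ago; skip */

	pf->service_timer_previous = jiffies;
	return true;			/* link_status check and stats gathering may run */
}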
Kamil Krawczyk
0db4e162e6 i40e: poll firmware slower
The code was polling the firmware tail register for completion every
10 microseconds, which is way faster than the firmware can respond.
This changes the poll interval to 1ms, which reduces polling CPU
utilization, and the number of times we loop.

The maximum delay is still 100ms.

Change-ID: I4bbfa6b66d802890baf8b4154061e55942b90958
Signed-off-by: Kamil Krawczyk <kamil.krawczyk@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-11-11 05:44:16 -08:00
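
An illustrative polling loop for the timing change (1 ms steps, 100 ms total);
the completion callback is a placeholder, not the i40e admin queue code:

#include <linux/delay.h>
#include <linux/errno.h>

#define EXAMPLE_POLL_STEP_US	1000	/* was 10 us; now 1 ms per iteration */
#define EXAMPLE_POLL_MAX_MS	100	/* overall timeout stays at 100 ms */

static int example_wait_for_completion(bool (*done)(void *ctx), void *ctx)
{
	int i;

	for (i = 0; i < EXAMPLE_POLL_MAX_MS; i++) {
		if (done(ctx))
			return 0;
		usleep_range(EXAMPLE_POLL_STEP_US, EXAMPLE_POLL_STEP_US + 100);
	}

	return -ETIMEDOUT;
}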
Eric Dumazet
2e1af7d74f mlx4: restore conditional call to napi_complete_done()
After commit 1a28817282 ("mlx4: use napi_complete_done()") we ended up
calling napi_complete_done() in the case where the NAPI poll consumed all
its budget.

This added extra interrupt pressure; this patch restores the proper
behavior.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Fixes: 1a28817282 ("mlx4: use napi_complete_done()")
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 21:09:03 -05:00
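
The standard NAPI completion pattern this change restores, sketched with a
hypothetical rx handler:

#include <linux/netdevice.h>

static int example_process_rx(struct napi_struct *napi, int budget); /* hypothetical */

static int example_napi_poll(struct napi_struct *napi, int budget)
{
	int work_done = example_process_rx(napi, budget);

	/* only report completion when the budget was not exhausted */
	if (work_done < budget)
		napi_complete_done(napi, work_done);

	return work_done;
}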
Sowmini Varadhan
df20286ab1 sunvnet: Add missing rcu_read_unlock() in vnet_start_xmit
The out_dropped label will only do rcu_read_unlock for non-null port.
So add the missing rcu_read_unlock() when bailing due to non-null port.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 21:05:36 -05:00
Sowmini Varadhan
777362d721 sunvnet: vnet_ack() should check if !start_cons to send a missed trigger
As per the comments in vnet_start_xmit(), for the edge case
when outgoing vnet_start_xmit() data and an incoming STOPPED
ACK cross each other in flight, we may need to send the missed
START trigger from maybe_tx_wakeup() after checking for a
false value of start_cons.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 21:05:36 -05:00
Sowmini Varadhan
b0cffed543 sunvnet: Fix race between vnet_start_xmit() and vnet_ack()
When vnet_start_xmit() is concurrent with vnet_ack(), we may
have a race that looks like:

    thread 1                              thread 2
    vnet_start_xmit                       vnet_event_napi -> vnet_rx

__vnet_tx_trigger for some desc X
at this point dr->prod == X
                                        peer sends back a stopped ack for X
                                        we process X, but X == dr->prod
                                        so we bail out in vnet_ack with
                                        !idx_is_pending
update dr->prod

As a result of never processing the stopped ack for X,
the Tx path is led to incorrectly believe that the peer is still
"started" and reading, but the peer has stopped reading, which will
ultimately end in flow-control assertions.

The fix is to synchronize the above two paths on the netif_tx_lock.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 21:05:36 -05:00
Alban Bedel
6f6e741f6d 8139too: Allow using the largest possible MTU
This driver allows an MTU of up to 1518 bytes, which is not enough to run
batman-adv. Simply raise the maximum packet size up to the maximum
allowed by the transmit descriptor, 1792 bytes, giving a maximum MTU
of 1774 bytes.

Signed-off-by: Alban Bedel <albeu@free.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 15:30:02 -05:00
Alban Bedel
ef786f106f 8139too: Allow setting MTU larger than 1500
Replace the default ndo_change_mtu callback with one that allows
setting any MTU that the driver can handle.

Signed-off-by: Alban Bedel <albeu@free.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 15:30:02 -05:00
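
A hedged sketch of a driver-specific ndo_change_mtu along these lines; the
1774-byte limit is taken from the commit messages, the rest is illustrative:

#include <linux/errno.h>
#include <linux/netdevice.h>

static int example_change_mtu(struct net_device *dev, int new_mtu)
{
	/* 68 is the usual lower bound; 1774 matches the 1792-byte descriptor limit */
	if (new_mtu < 68 || new_mtu > 1774)
		return -EINVAL;

	dev->mtu = new_mtu;
	return 0;
}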
Anish Bhatt
a815286b94 cxgb4 : Fix bug in DCB app deletion
Unlike CEE, IEEE has a bespoke app delete call and does not rely on priority
for app deletion.

Fixes: 2376c879b8 ('cxgb4 : Improve handling of DCB negotiation or loss
thereof')

Signed-off-by: Anish Bhatt <anish@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 15:13:53 -05:00
Hariprasad Shenai
50d21a662d cxgb4vf: FL Starvation Threshold needs to be larger than the SGE's Egress Congestion Threshold
Free List Starvation Threshold needs to be larger than the SGE's Egress
Congestion Threshold or we'll end up in a mutual stall where the driver waits
for Ingress Packets to drive replacing Free List Pointers and the SGE waits for
Free List Pointers before pushing Ingress Packets to the host.

Based on original work by Casey Leedom <leedom@chelsio.com>

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 14:15:03 -05:00
Hariprasad Shenai
ce8f407a3c cxgb4/cxgb4vf: For T5 use Packing and Padding Boundaries for SGE DMA transfers
T5 introduces the ability to have separate Packing and Padding Boundaries
for SGE DMA transfers from the chip to Host Memory. This change set takes
advantage of that to set up a smaller Padding Boundary to conserve PCI Link
and Memory Bandwidth with T5.

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 14:15:03 -05:00
Hariprasad Shenai
65f6ecc93e cxgb4vf: Move fl_starv_thres into adapter->sge data structure
Move fl_starv_thres into adapter->sge data structure since it
_could_ be different from adapter to adapter.  Also move other per-adapter
SGE values which had been treated as driver globals into adapter->sge.

Based on original work by Casey Leedom <leedom@chelsio.com>

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 14:15:03 -05:00
David S. Miller
1ef8019be8 net: Move bonding headers under include/net
This way drivers like cxgb4 don't need to do ugly relative includes.

Reported-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 13:27:49 -05:00
Joe Perches
4483589f71 cxgb4: Remove unnecessary struct in6_addr * casts
Just use the address of the in6_addr.

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 13:08:09 -05:00
Hariprasad Shenai
e2ac962895 cxgb4: Cleanup macros so they follow the same style and look consistent, part 2
Various patches have ended up changing the style of the symbolic macro/register
defines to different styles.

As a result, the current kernel.org files are a mix of different macro styles.
Since these macro/register defines are used by different drivers, a
few patch series have ended up adding duplicate macro/register define entries
with different styles. This makes these register define/macro files a complete
mess, and we want to make them clean and consistent. This patch cleans up a part
of it.

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 12:57:10 -05:00
Hariprasad Shenai
6559a7e829 cxgb4: Cleanup macros so they follow the same style and look consistent
Various patches have ended up changing the style of the symbolic macro/register
defines to different styles.

As a result, the current kernel.org files are a mix of different macro styles.
Since these macro/register defines are used by different drivers, a
few patch series have ended up adding duplicate macro/register define entries
with different styles. This makes these register define/macro files a complete
mess, and we want to make them clean and consistent. This patch cleans up a part
of it.

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 12:57:10 -05:00
Hariprasad Shenai
fd88b31a1d cxgb4: Add cxgb4_debugfs.c, move all debugfs code to new file
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 12:57:10 -05:00
Eric Dumazet
1a28817282 mlx4: use napi_complete_done()
To enable gro_flush_timeout, a driver has to use napi_complete_done()
instead of napi_complete().

Tested:
 Ran 200 netperf TCP_STREAM from A to B (10Gbe mlx4 link, 8 RX queues)

Without this feature, we send back about 305,000 ACK per second.

GRO aggregation ratio is low (811/305 = 2.65 segments per GRO packet)

Setting a timer of 2000 nsec is enough to increase GRO packet sizes
and reduce number of ACK packets. (811/19.2 = 42)

The receiver performs fewer calls into the upper stack and fewer wakeups.
This also reduces cpu usage on the sender, as it receives fewer ACK
packets.

Note that reducing the number of wakeups increases cpu efficiency, but can
decrease QPS, as applications won't have the chance to warm up cpu caches
by doing a partial read of RPC requests/answers if they fit in one skb.

B:~# sar -n DEV 1 10 | grep eth0 | tail -1
Average:         eth0 811269.80 305732.30 1199462.57  19705.72      0.00
0.00      0.50

B:~# echo 2000 >/sys/class/net/eth0/gro_flush_timeout

B:~# sar -n DEV 1 10 | grep eth0 | tail -1
Average:         eth0 811577.30  19230.80 1199916.51   1239.80      0.00
0.00      0.50

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-10 12:05:59 -05:00
Andy Shevchenko
f10f9fb216 stmmac: platform: fix sparse warnings
This patch fixes the following sparse warnings. One is fixed by casting the
return value to the return type of the function. The others by creating a
specific stmmac_platform.h which provides the bits related to the platform
driver.

drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c:59:29: warning: incorrect type in return expression (different address spaces)
drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c:59:29:    expected void *
drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c:59:29:    got void [noderef] <asn:2>*reg

drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c:64:29: warning: symbol 'meson6_dwmac_data' was not declared. Should it be static?
drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c:354:29: warning: symbol 'stih4xx_dwmac_data' was not declared. Should it be static?
drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c:361:29: warning: symbol 'stid127_dwmac_data' was not declared. Should it be static?
drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c:133:29: warning: symbol 'sun7i_gmac_data' was not declared. Should it be static?

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-07 12:25:23 -05:00
Andy Shevchenko
424c4f7875 stmmac: remove custom implementation of print_hex_dump()
There is a kernel helper to dump buffers in a hexadecimal format. This patch
replaces the open-coded function with a call to that helper.

The output is slightly changed:
 - no lead space
 - ASCII part will be printed along with the dump
 - offset is longer than 3 characters (now 8)

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-07 12:20:43 -05:00
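
A small example of the kernel helper referred to above; DUMP_PREFIX_OFFSET
gives the 8-character offset column, and the final argument enables the ASCII
part:

#include <linux/printk.h>
#include <linux/types.h>

static void example_dump(const void *buf, size_t len)
{
	print_hex_dump(KERN_DEBUG, "example: ", DUMP_PREFIX_OFFSET,
		       16, 1, buf, len, true);
}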
Lothar Waßmann
1310b544e5 net: fec: fix regression on i.MX28 introduced by rx_copybreak support
commit 1b7bde6d65 ("net: fec: implement rx_copybreak to improve rx performance")
introduced a regression for i.MX28. The swap_buffer() function doing
the endian conversion of the received data on i.MX28 may access memory
beyond the actual packet size in the DMA buffer. fec_enet_copybreak()
does not copy those bytes, so that the last bytes of a packet may be
filled with invalid data after swapping.
This will likely lead to checksum errors on received packets.
E.g. when trying to mount an NFS rootfs:
UDP: bad checksum. From 192.168.1.225:111 to 192.168.100.73:44662 ulen 36

Do the byte swapping and copying to the new skb in one go if
necessary.

Signed-off-by: Lothar Waßmann <LW@KARO-electronics.de>
Tested-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-07 12:08:58 -05:00
David S. Miller
4e84b496fd Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net 2014-11-06 22:01:18 -05:00
Lendacky, Thomas
f5eecbbef0 amd-xgbe: Check for complete packet on skb allocation error
If the skb allocation fails during receive processing, the driver would
continue reading descriptors without first determining if there were
any more descriptors for the current packet. Update the code to check
whether more descriptors are associated with the current packet or
whether to move on to the next descriptor as a new packet.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-06 19:00:15 -05:00
Lendacky, Thomas
e98c72c942 amd-xgbe: Free channel/ring structures later
The channel structure is freed before freeing the per channel
interrupts, resulting in a kernel oops. Move the call to free
the channel structure to after the freeing of the per channel
interrupts.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-06 19:00:15 -05:00
Manish Chopra
9d01412ae7 netxen: Fix link event handling.
o Poll for the link events only if the firmware doesn't have the capability
  to notify the driver of link events.

Signed-off-by: Manish Chopra <manish.chopra@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-06 16:43:29 -05:00
Govindarajulu Varadarajan
f6b7734ba7 enic: update desc properly in rx_copybreak
When we reuse the rx buffer, we need to update the desc. If not, the hardware
sees a stale value.

In the following crash, when the mtu is changed, the hardware sees the old rx
buffer value and crashes in skb_put.

Fix this by using the enic_queue_rq_desc helper function, which updates the
necessary desc.

[   64.657376] skbuff: skb_over_panic: text:ffffffffa041f55d len:9010 put:9010 head:ffff8800d3ca9fc0 data:ffff8800d3caa000 tail:0x2372 end:0x640 dev:enp0s3
[   64.659965] ------------[ cut here ]------------
[   64.661322] kernel BUG at net/core/skbuff.c:100!
[   64.662644] invalid opcode: 0000 [#1] PREEMPT SMP
[   64.664001] Modules linked in: rpcsec_gss_krb5 auth_rpcgss oid_registry nfsv4 cirrus ttm drm_kms_helper drm enic psmouse microcode evdev serio_raw syscopyarea sysfillrect sysimgblt i2c_piix4 i2c_core pcspkr nfs lockd grace sunrpc fscache ext4 crc16 mbcache jbd2 sd_mod ata_generic virtio_balloon ata_piix libata uhci_hcd virtio_pci virtio_ring usbcore usb_common virtio scsi_mod
[   64.664834] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G        W      3.17.0-netnext-10335-g942396b-dirty #273
[   64.664834] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[   64.664834] task: ffffffff81a1d580 ti: ffffffff81a00000 task.ti: ffffffff81a00000
[   64.664834] RIP: 0010:[<ffffffff81392cf1>]  [<ffffffff81392cf1>] skb_panic+0x61/0x70
[   64.664834] RSP: 0018:ffff880210603d48  EFLAGS: 00010292
[   64.664834] RAX: 000000000000008c RBX: ffff88020b0f6930 RCX: 0000000000000000
[   64.664834] RDX: 000000000000008c RSI: ffffffff8178b288 RDI: 00000000ffffffff
[   64.664834] RBP: ffff880210603d68 R08: 0000000000000001 R09: 0000000000000001
[   64.664834] R10: 00000000000005ce R11: 0000000000000001 R12: ffff88020b1f0b40
[   64.664834] R13: 000000000000a332 R14: ffff880209a1a000 R15: 0000000000000001
[   64.664834] FS:  0000000000000000(0000) GS:ffff880210600000(0000) knlGS:0000000000000000
[   64.664834] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[   64.664834] CR2: 00007f6752935e48 CR3: 0000000035743000 CR4: 00000000000006f0
[   64.664834] Stack:
[   64.664834]  ffff8800d3caa000 0000000000002372 0000000000000640 ffff88020b1f0000
[   64.664834]  ffff880210603d78 ffffffff81392d54 ffff880210603e08 ffffffffa041f55d
[   64.664834]  0000000000000296 ffffffff00000000 00008e7e00008e7e ffff880200002332
[   64.664834] Call Trace:
[   64.664834]  <IRQ>
[   64.664834]
[   64.664834]  [<ffffffff81392d54>] skb_put+0x54/0x60
[   64.664834]  [<ffffffffa041f55d>] enic_rq_service.constprop.47+0x3ad/0x730 [enic]
[   64.664834]  [<ffffffffa041fa79>] enic_poll_msix_rq+0x199/0x370 [enic]
[   64.664834]  [<ffffffff813a5499>] net_rx_action+0x139/0x210
[   64.664834]  [<ffffffff81290db3>] ? __this_cpu_preempt_check+0x13/0x20
[   64.664834]  [<ffffffff8106110e>] __do_softirq+0x14e/0x280
[   64.664834]  [<ffffffff8106152e>] irq_exit+0x8e/0xb0
[   64.664834]  [<ffffffff8100fd21>] do_IRQ+0x61/0x100
[   64.664834]  [<ffffffff814a2bf2>] common_interrupt+0x72/0x72

Fixes: a03bb56e67 ("enic: implement rx_copybreak")
Signed-off-by: Govindarajulu Varadarajan <_govind@gmx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-06 16:42:04 -05:00
Govindarajulu Varadarajan
44aa91ab2b enic: handle error condition properly in enic_rq_indicate_buf
In case of error in the rx path, we free buf->os_buf but do not set it to
NULL. In the next iteration we use the skb, which has already been freed.
This causes the following crash.

[  886.154772] general protection fault: 0000 [#1] PREEMPT SMP
[  886.154851] Modules linked in: rpcsec_gss_krb5 auth_rpcgss oid_registry nfsv4 microcode evdev cirrus ttm drm_kms_helper drm enic syscopyarea sysfillrect sysimgblt psmouse i2c_piix4 serio_raw pcspkr i2c_core nfs lockd grace sunrpc fscache ext4 crc16 mbcache jbd2 sd_mod crc_t10dif crct10dif_common ata_generic ata_piix virtio_balloon libata scsi_mod uhci_hcd usbcore virtio_pci virtio_ring virtio usb_common
[  886.155199] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G        W      3.17.0-netnext-05668-g876bc7f #272
[  886.155263] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[  886.155304] task: ffffffff81a1d580 ti: ffffffff81a00000 task.ti: ffffffff81a00000
[  886.155356] RIP: 0010:[<ffffffff81384030>]  [<ffffffff81384030>] kfree_skb_list+0x10/0x30
[  886.155418] RSP: 0018:ffff880210603d48  EFLAGS: 00010206
[  886.155456] RAX: 0000000000000020 RBX: 0000000000000000 RCX: 0000000000000000
[  886.155504] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 004500084e000017
[  886.155553] RBP: ffff880210603d50 R08: 00000000fe13d1b6 R09: 0000000000000001
[  886.155601] R10: 0000000000000000 R11: 0000000000000000 R12: ffff880209ff2f00
[  886.155650] R13: ffff88020ac0fe40 R14: ffff880209ff2f00 R15: ffff8800da8e3a80
[  886.155699] FS:  0000000000000000(0000) GS:ffff880210600000(0000) knlGS:0000000000000000
[  886.155774] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[  886.155814] CR2: 00007f0e0c925000 CR3: 0000000035e8b000 CR4: 00000000000006f0
[  886.155865] Stack:
[  886.155882]  0000000000000000 ffff880210603d78 ffffffff81383f79 ffff880209ff2f00
[  886.155942]  ffff88020b0c0b40 000000000000c000 ffff880210603d90 ffffffff81383faf
[  886.156001]  ffff880209ff2f00 ffff880210603da8 ffffffff8138406d ffff88020b1b08c0
[  886.156061] Call Trace:
[  886.156080]  <IRQ>
[  886.156095]
[  886.156112]  [<ffffffff81383f79>] skb_release_data+0xa9/0xc0
[  886.157656]  [<ffffffff81383faf>] skb_release_all+0x1f/0x30
[  886.159195]  [<ffffffff8138406d>] consume_skb+0x1d/0x40
[  886.160719]  [<ffffffff813942e5>] __dev_kfree_skb_any+0x35/0x40
[  886.162224]  [<ffffffffa02dc1d5>] enic_rq_service.constprop.47+0xe5/0x5a0 [enic]
[  886.163756]  [<ffffffffa02dc829>] enic_poll_msix_rq+0x199/0x370 [enic]
[  886.164730]  [<ffffffff81397e29>] net_rx_action+0x139/0x210
[  886.164730]  [<ffffffff8105fb2e>] __do_softirq+0x14e/0x280
[  886.164730]  [<ffffffff8105ff2e>] irq_exit+0x8e/0xb0
[  886.164730]  [<ffffffff8100fc1d>] do_IRQ+0x5d/0x100
[  886.164730]  [<ffffffff81496832>] common_interrupt+0x72/0x72

Fixes: a03bb56e67 ("enic: implement rx_copybreak")
Signed-off-by: Govindarajulu Varadarajan <_govind@gmx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-06 16:42:04 -05:00
Eli Cohen
364d1798ef net/mlx5_core: Fix race on driver load
When events arrive at driver load, the event handler gets called even before
the spinlock and list are initialized. Fix this by moving the initialization
before EQs creation.

Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-06 16:40:36 -05:00
Eli Cohen
a158906dd7 net/mlx5_core: Fix race in create EQ
After the EQ is created, it can possibly generate interrupts and the interrupt
handler is referencing eq->dev. It is therefore required to set eq->dev before
calling request_irq() so if an event is generated before request_irq() returns,
we will have a valid eq->dev field.

Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-06 16:40:35 -05:00
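
A hedged sketch of the ordering fix; the structures and handler are
placeholders, not the mlx5_core code:

#include <linux/interrupt.h>

struct example_dev;			/* opaque device handle, illustrative */

struct example_eq {
	struct example_dev *dev;
	int irqn;
	char name[16];
};

static irqreturn_t example_eq_int(int irq, void *eq_ptr)
{
	struct example_eq *eq = eq_ptr;

	/* the handler dereferences eq->dev, so it must already be valid */
	(void)eq;
	return IRQ_HANDLED;
}

static int example_create_eq(struct example_dev *dev, struct example_eq *eq)
{
	eq->dev = dev;			/* set before the first interrupt can fire */

	return request_irq(eq->irqn, example_eq_int, 0, eq->name, eq);
}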
Sowmini Varadhan
8c4ee3e706 sunvnet: Return from vnet_napi_event() if no packets to read
vnet_event_napi() may be called as part of the NAPI ->poll,
to resume reading descriptor rings. When no data is available,
descriptor ring state (e.g., rcv_nxt) needs to be reset
carefully to stay in lock-step with ldc_read(). In the interest
of simplicity, the best way to do this is to return from
vnet_event_napi() when there are no more packets to read.
The next trip through ldc_rx will correctly set up the dring state.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Tested-by: David Stevens <david.stevens@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-06 15:16:30 -05:00
Sowmini Varadhan
6c3ce8a30c sunvnet: Fix indentation in maybe_tx_wakeup()
Remove a redundant tab.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Reported-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-11-06 15:16:30 -05:00