Commit graph

469709 commits

Sergei Shtylyov
dd318b0df2 i2c: rcar: fix MNR interrupt handling
Sometimes the MNR and MST interrupts happen simultaneously (stop automatically
follows NACK, according to the manuals), and in that case the ID_NACK flag isn't
set, since the MST interrupt handling precedes MNR and all interrupts are then
cleared and disabled, so the MNR interrupt is never noticed -- this causes
NACK'ed transfers to be falsely reported as successful. Exchanging the MNR and
MST handlers fixes this issue; however, the MNR bit somehow gets set again even
after being explicitly cleared, so I decided to completely suppress handling of
all disabled interrupts (which is a good thing anyway)...
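
A minimal sketch of the idea (register and helper names follow the driver's
conventions but are illustrative, not the exact patch):

static irqreturn_t rcar_i2c_irq(int irq, void *ptr)
{
	struct rcar_i2c_priv *priv = ptr;
	u32 msr;

	/* Sketch only: mask the status with the currently enabled
	 * interrupts so a stale, already-disabled MNR bit can never
	 * be acted upon; handle MNR (NACK) before MST (stop). */
	msr = rcar_i2c_read(priv, ICMSR);
	msr &= rcar_i2c_read(priv, ICMIER);	/* ignore disabled interrupts */
	if (!msr)
		return IRQ_NONE;

	if (msr & MNR)				/* NACK before stop */
		rcar_i2c_handle_nack(priv);	/* illustrative helper */
	else if (msr & MST)
		rcar_i2c_handle_stop(priv);	/* illustrative helper */

	return IRQ_HANDLED;
}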

Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Cc: stable@vger.kernel.org
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2014-09-02 12:28:15 +02:00
Ville Syrjälä
bbfb44e8b6 drm/i915: Fix lock dropping in intel_tv_detect()
When intel_tv_detect() fails to do load detection, it forgets to
drop the locks and clean up the acquire context. Fix it up.

This is a regression from:
 commit 208bf9fdcd
 Author: Ville Syrjälä <ville.syrjala@linux.intel.com>
 Date:   Mon Aug 11 13:15:35 2014 +0300

    drm/i915: Fix locking for intel_enable_pipe_a()

v2: Make the code more readable (Chris)
v3: Drop WARN_ON(type < 0) (Chris)
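
A minimal sketch of the required error-path symmetry (the DRM core calls are
real; the surrounding structure and load-detect helper are simplified and
illustrative):

static enum drm_connector_status
intel_tv_detect(struct drm_connector *connector, bool force)
{
	struct drm_modeset_acquire_ctx ctx;
	enum drm_connector_status status = connector_status_unknown;

	drm_modeset_acquire_init(&ctx, 0);

	if (intel_tv_load_detect(connector, &ctx))	/* illustrative helper */
		status = connector_status_connected;

	/* Sketch: must run on every path, including load-detect failure. */
	drm_modeset_drop_locks(&ctx);
	drm_modeset_acquire_fini(&ctx);
	return status;
}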

Cc: stable@vger.kernel.org
Cc: Tibor Billes <tbilles@gmx.com>
Reported-by: Tibor Billes <tbilles@gmx.com>
Tested-by: Tibor Billes <tbilles@gmx.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
2014-09-02 12:58:51 +03:00
Paolo Bonzini
eebbcf6da9 Merge tag 'kvm-s390-master-20140902' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into kvm-master 2014-09-02 11:09:00 +02:00
Christian Borntraeger
1951497d90 KVM: s390/mm: Fix guest storage key corruption in ptep_set_access_flags
commit 0944fe3f4a ("s390/mm: implement software referenced bits")
triggered another paging/storage key corruption. There is an
unhandled invalid->valid pte change where we have to set the real
storage key from the pgste.
When doing paging a guest page might be swapcache or swap and when
faulted in it might be read-only and due to a parallel scan old.
An do_wp_page will make it writeable and young. Due to software
reference tracking this page was invalid and now becomes valid.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: stable@vger.kernel.org # v3.12+
2014-09-02 10:30:43 +02:00
Christian Borntraeger
3e03d4c46d KVM: s390/mm: Fix storage key corruption during swapping
Since 3.12, or more precisely commit 0944fe3f4a ("s390/mm:
implement software referenced bits"), guest storage keys get
corrupted during paging. This commit added another valid->invalid
translation for page tables - namely ptep_test_and_clear_young.
We have to transfer the storage key into the pgste in that case.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: stable@vger.kernel.org # v3.12+
2014-09-02 10:30:43 +02:00
Linus Walleij
0dbc8b7afe gpio: move varargs hack outside #ifdef GPIOLIB
commit 39b2bbe3d7
"gpio: add flags argument to gpiod_get*() functions"
added a dynamic flags argument to all the GPIOD getter
functions; however, this did not cover the stubs, so when
people used the gpiod stubs to compile out descriptor
code, compilation failed.

Solve this by:
- also renaming all the stub functions to __gpiod_*
- moving the vararg hack outside of #ifdef CONFIG_GPIOLIB
  so it is always available (see the sketch below)
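
A sketch of the vararg hack in question (shape taken from the flags-argument
commit, simplified here):

/* Sketch: gpiod_get(dev, "id") and gpiod_get(dev, "id", GPIOD_IN) both end
 * up as __gpiod_get(dev, "id", <flags>) -- a default of 0 is appended and
 * any extra arguments are dropped by the second macro. Because this lives
 * outside #ifdef CONFIG_GPIOLIB, it wraps either the real __gpiod_get()
 * or the inline stub of the same name. */
struct gpio_desc *__gpiod_get(struct device *dev, const char *con_id,
			      enum gpiod_flags flags);

#define __gpiod_get(dev, con_id, flags, ...) __gpiod_get(dev, con_id, flags)
#define gpiod_get(varargs...) __gpiod_get(varargs, 0)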

Reviewed-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2014-09-02 10:09:43 +02:00
Chao Yu
b73e52824c f2fs: reposition unlock_new_inode to prevent accessing invalid inode
Due to a race condition on the inode cache, the following scenario can appear:
[Thread a]				[Thread b]
					->f2fs_mkdir
					  ->f2fs_add_link
					    ->__f2fs_add_link
					      ->init_inode_metadata failed here
->gc_thread_func
  ->f2fs_gc
    ->do_garbage_collect
      ->gc_data_segment
        ->f2fs_iget
          ->iget_locked
            ->wait_on_inode
					  ->unlock_new_inode
        ->move_data_page
					  ->make_bad_inode
					  ->iput

When we fail in create/symlink/mkdir/mknod/tmpfile, the newly allocated inode
should be set as bad to avoid being accessed by other threads. But in the above
scenario, f2fs is allowed to access the invalid inode before it has been set
as bad.
This patch fixes the potential problem; the issue was found by code review.

change log from v1:
 o Add condition judgment in gc_data_segment() suggested by Changman Lee.
 o use iget_failed to simplify code.
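
A rough sketch of the error-path idea (simplified; call sites are
illustrative):

	err = f2fs_add_link(dentry, inode);
	if (err) {
		/* Sketch: mark the still-locked inode bad and drop it in
		 * one step, so a waiter in f2fs_iget()/iget_locked() never
		 * sees a half-initialized inode as valid. */
		clear_nlink(inode);
		iget_failed(inode);	/* make_bad_inode + unlock_new_inode + iput */
		return err;
	}

	d_instantiate(dentry, inode);
	unlock_new_inode(inode);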

Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2014-09-02 00:22:24 -07:00
David S. Miller
88e4194712 Merge branch 'cxgb4'
Hariprasad Shenai says:

====================
Trivial fixes for cxgb4

This patch series fixes a T5 adapter accessing T4 adapter registers, issues
mbox commands on the correct mbox for the physical function, avoids dumping
write-only registers, uses the correct length for the adapter part number, and
adds support to detect and display firmware-reported errors.

The patch series is created against the 'net' tree
and includes patches for the cxgb4 driver.

We have included all the maintainers of the respective drivers. Kindly review
the changes and let us know in case of any review comments.

Thanks

V2:
   Added description for each patch as per David Miller's comment
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 23:00:49 -07:00
Hariprasad Shenai
5c937dd3f9 cxgb4: Issue mbox commands on correct mbox
A couple of RDMA-related calls to t4_query_params() were issuing mbox commands
on mbox0 instead of mbox4.

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 23:00:42 -07:00
Hariprasad Shenai
3d9103f80d cxgb4: Avoid dumping Write-only registers in register dump
Avoid dumping MPS_RPLC_MAP_CTL for reg dumps; this is a Write-Only register.
Reading this register may cause MPS TCAM corruption.

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 23:00:42 -07:00
Hariprasad Shenai
31d55c2d66 cxgb4: Detect and display firmware reported errors
The adapter firmware can indicate error conditions to the host.
If the firmware has indicated an error, print out the reason for
the firmware error.

Based on original work by Casey Leedom <leedom@chelsio.com>

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 23:00:41 -07:00
Hariprasad Shenai
9bb59b96ae cxgb4: Fix T5 adapter accessing T4 adapter registers
Fixes a few register accesses for both T4 and T5.
PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS and PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS
are T4-only registers; don't let T5 access them. For T5, MA_PARITY_ERROR_STATUS2
is additionally read. MPS_TRC_RSS_CONTROL is a T4-only register; for T5, use
MPS_T5_TRC_RSS_CONTROL.
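
A sketch of the guard pattern (cxgb4 macro and helper names quoted from
memory, so treat them as illustrative):

	u32 trc_rss_ctl;

	/* Sketch: pick the chip-specific register instead of blindly
	 * reading the T4 address on a T5 adapter. */
	trc_rss_ctl = t4_read_reg(adap, is_t4(adap->params.chip) ?
					MPS_TRC_RSS_CONTROL :
					MPS_T5_TRC_RSS_CONTROL);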

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 23:00:41 -07:00
Hariprasad Shenai
63a92fe6f7 cxgb4: Fixed the code to use correct length for part number
Previously it was using the length value of the serial number.
Also added a macro for the VPD unique identifier (0x82).

Signed-off-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 23:00:41 -07:00
Hariprasad Shenai
444018a7f1 cxgb4: Fix for handling 1Gb/s SFP+ Transceiver Modules
We previously assumed that a Port's Capabilities and Advertised Capabilities
would never change from Port Initialization time.  This is no longer true
when we can have 10Gb/s and 1Gb/s SFP+ Transceiver Modules randomly swapped.

Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 23:00:41 -07:00
Takashi Iwai
acf08081ad ALSA: hda - Fix COEF setups for ALC1150 codec
ALC1150 codec seems to need the COEF- and PLL-setups just like its
compatible ALC882 codec.  Some machines (e.g. SunMicro X10SAT) show
the problem like too low output volumes unless the COEF setup is
applied.

Reported-and-tested-by: Dana Goyette <danagoyette@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
2014-09-02 07:21:56 +02:00
Giuseppe CAVALLARO
cc25f0cbe4 stmmac: only remove RXCSUM feature if no rx coe is available
In case the HW is not able to do receive checksum offloading,
the only feature to remove is NETIF_F_RXCSUM.

Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 21:51:29 -07:00
Giuseppe CAVALLARO
d2afb5bdff stmmac: fix the rx csum feature
For new GMACs it is possible to turn the COE on/off.
In the current driver, when the Rx checksum was disabled
via ethtool, the tool reported that csum was disabled
but the HW continued to set the IPC. Indeed this is
because fix_features alone allows it. So the patch
fixes this problem by adding set_features.
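
A sketch of the shape of such a hook (field and helper names used to
reprogram the core are illustrative, not the driver's exact ones):

static int stmmac_set_features(struct net_device *netdev,
			       netdev_features_t features)
{
	struct stmmac_priv *priv = netdev_priv(netdev);

	/* Sketch only: honour the RXCSUM toggle in hardware, not just in
	 * the advertised feature bits. */
	priv->rx_csum = !!(features & NETIF_F_RXCSUM);	/* illustrative field */
	stmmac_rx_ipc_enable(priv);			/* illustrative helper */

	return 0;
}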

Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 21:51:29 -07:00
Willem de Bruijn
364a9e9324 sock: deduplicate errqueue dequeue
sk->sk_error_queue is dequeued in four locations. All share the
exact same logic. Deduplicate.

Also collapse the two critical sections for dequeue (at the top of
the recv handler) and signal (at the bottom).

This moves signal generation for the next packet forward, which should
be harmless.

It also changes the behavior if the recv handler exits early with an
error. Previously, a signal for follow-up packets on the errqueue
would then not be scheduled. The new behavior, to always signal, is
arguably a bug fix.

For rxrpc, the change causes the same function to be called repeatedly
for each queued packet (because the recv handler == sk_error_report).
It is likely that all packets will fail for the same reason (e.g.,
memory exhaustion).

This code runs without sk_lock held, so it is not safe to trust that
sk->sk_err is immutable in between releasing q->lock and the subsequent
test. Introduce int err just to avoid this potential race.
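
A sketch of the resulting helper, close to the description above
(simplified):

struct sk_buff *sock_dequeue_err_skb(struct sock *sk)
{
	struct sk_buff_head *q = &sk->sk_error_queue;
	struct sk_buff *skb, *skb_next;
	unsigned long flags;
	int err = 0;

	/* Read the next packet's errno while still holding q->lock,
	 * since sk->sk_err may change once the lock is dropped. */
	spin_lock_irqsave(&q->lock, flags);
	skb = __skb_dequeue(q);
	if (skb && (skb_next = skb_peek(q)))
		err = SKB_EXT_ERR(skb_next)->ee.ee_errno;
	spin_unlock_irqrestore(&q->lock, flags);

	sk->sk_err = err;
	if (err)
		sk->sk_error_report(sk);	/* always signal follow-ups */

	return skb;
}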

Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 21:49:08 -07:00
Willem de Bruijn
8fe2f761ca net-timestamp: expand documentation
Expand Documentation/networking/timestamping.txt with new
interfaces and bytestream timestamping. Also minor
cleanup of the other text.

Import txtimestamp.c test of the new features.

Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 21:49:08 -07:00
David S. Miller
c5a65680b3 Merge branch 'csums-next'
Tom Herbert says:

====================
net: Checksum offload changes - Part VI

I am working on overhauling RX checksum offload. Goals of this effort
are:

- Specify what exactly it means when driver returns CHECKSUM_UNNECESSARY
- Preserve CHECKSUM_COMPLETE through encapsulation layers
- Don't do skb_checksum more than once per packet
- Unify GRO and non-GRO csum verification as much as possible
- Unify the checksum functions (checksum_init)
- Simplify code

What is in this seventh patch set:

- Add skb->csum_bad. This allows a device or GRO to indicate that an
  invalid checksum was detected.
- Checksum unnecessary to checksum complete conversions.

With these changes, I believe that the third goal of the overhaul is
now mostly achieved. In the case of no encapsulation or one layer of
encapsulation, there should only be at most one skb_checksum over
each packet (between GRO and normal path). In the case of two layers
of encapsulation, it is still possible with the right combination of
non-zero and zero UDP checksums to have >1 skb_checksum. For instance:
IP>GRE(with csum)>IP>UDP(zero csum)>VXLAN>IP>UDP(non-zero csum)
would likely necessitate an skb_checksum in both the GRO and normal paths.
This doesn't seem like a common scenario at all so I'm inclined to
not address this now, if multiple layers of encapsulation becomes
popular we can reassess.

Note that checksum conversion shows a nice improvement for RX VXLAN when
outer UDP checksum is enabled (12.65% CPU compared to 20.94%). This
is not only from the fact that we don't need checksum calculation on
the host, but also allows GRO for VXLAN in this case. Checksum
conversion does not help send side (which still needs to perform
a checksum on host). For that we will implement remote checksum offload
in a later patch
(http://tools.ietf.org/html/draft-herbert-remotecsumoffload-00).

Please review carefully and test if possible, mucking with basic
checksum functions is always a little precarious :-)
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 21:36:35 -07:00
Tom Herbert
72297c59f7 l2tp: Enable checksum unnecessary conversions for l2tp/UDP sockets
Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 21:36:28 -07:00
Tom Herbert
c60c308cbd vxlan: Enable checksum unnecessary conversions for vxlan/UDP sockets
Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 21:36:28 -07:00
Tom Herbert
884d338c04 gre: Add support for checksum unnecessary conversions
Call skb_checksum_try_convert and skb_gro_checksum_try_convert
after the checksum is found present and validated in the GRE header,
for the normal and GRO paths respectively.

Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 21:36:28 -07:00
Tom Herbert
2abb7cdc0d udp: Add support for doing checksum unnecessary conversion
Add support for doing CHECKSUM_UNNECESSARY to CHECKSUM_COMPLETE
conversion in UDP tunneling path.

In the normal UDP path, we call skb_checksum_try_convert after locating
the UDP socket. The check is that checksum conversion is enabled for
the socket (new flag in UDP socket) and that checksum field is
non-zero.

In the UDP GRO path, we call skb_gro_checksum_try_convert after
checksum is validated and checksum field is non-zero. Since this is
already in GRO we assume that checksum conversion is always wanted.

Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 21:36:28 -07:00
Tom Herbert
d96535a17d net: Infrastructure for checksum unnecessary conversions
For normal path, added skb_checksum_try_convert which is called
to attempt to convert CHECKSUM_UNNECESSARY to CHECKSUM_COMPLETE. The
primary condition to allow this is that ip_summed is CHECKSUM_NONE
and csum_valid is true, which will be the state after consuming
a CHECKSUM_UNNECESSARY.

For GRO path, added skb_gro_checksum_try_convert which is the GRO
analogue of skb_checksum_try_convert. The primary condition to allow
this is that NAPI_GRO_CB(skb)->csum_cnt == 0 and
NAPI_GRO_CB(skb)->csum_valid is set. This implies that we have consumed
all available CHECKSUM_UNNECESSARY checksums in the GRO path.
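
A sketch of the two gating conditions as boolean helpers (the helper names
are made up for illustration; the fields follow sk_buff / napi_gro_cb):

/* Sketch: conversion is only attempted when the relevant checksum has
 * already been proven good but no CHECKSUM_COMPLETE value is available. */
static inline bool checksum_convert_allowed(const struct sk_buff *skb)
{
	return skb->ip_summed == CHECKSUM_NONE && skb->csum_valid;
}

static inline bool gro_checksum_convert_allowed(const struct sk_buff *skb)
{
	return NAPI_GRO_CB(skb)->csum_cnt == 0 &&
	       NAPI_GRO_CB(skb)->csum_valid;
}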

Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 21:36:27 -07:00
Tom Herbert
5a21232983 net: Support for csum_bad in skbuff
This flag indicates that an invalid checksum was detected in the
packet. __skb_mark_checksum_bad helper function was added to set this.

Checksums can be marked bad from a driver or the GRO path (the latter
is implemented in this patch). csum_bad is checked in
__skb_checksum_validate_complete (i.e. calling that when ip_summed ==
CHECKSUM_NONE).

csum_bad works in conjunction with ip_summed value. In the case that
ip_summed is CHECKSUM_NONE and csum_bad is set, this implies that the
first (or next) checksum encountered in the packet is bad. When
ip_summed is CHECKSUM_UNNECESSARY, the first checksum after the last
one validated is bad. For example, if ip_summed == CHECKSUM_UNNECESSARY,
csum_level == 1, and csum_bad is set, then the third checksum in the
packet is bad. In the normal path, the packet will be dropped when
processing the protocol layer of the bad checksum:
__skb_decr_checksum_unnecessary is called twice for the good checksums,
changing ip_summed to CHECKSUM_NONE, so that
__skb_checksum_validate_complete is called to validate the third
checksum, and that will fail since csum_bad is set.
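
A sketch of the marking helper and of the check described above (the second
helper name is hypothetical; only the flag logic is shown):

static inline void __skb_mark_checksum_bad(struct sk_buff *skb)
{
	skb->csum_bad = 1;
}

/* Sketch: once ip_summed has decayed to CHECKSUM_NONE, a set csum_bad
 * means the next checksum in the packet is already known to be wrong,
 * so validation can fail without recomputing anything. */
static inline bool skb_next_csum_known_bad(const struct sk_buff *skb)
{
	return skb->ip_summed == CHECKSUM_NONE && skb->csum_bad;
}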

Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 21:36:27 -07:00
hayeswang
52aec126c4 r8152: rename rx_buf_sz
The variable "rx_buf_sz" is used by both tx and rx buffers. Replace
it with "agg_buf_sz".

Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 20:41:45 -07:00
Florian Fainelli
4559154a58 net: phy: mdio-bcm-unimac: NULL-terminate unimac_mdio_ids
drivers/net/phy/mdio-bcm-unimac.c:195:37-38: unimac_mdio_ids is not NULL
terminated at line 195

Make sure of_device_id tables are NULL terminated
Generated by: scripts/coccinelle/misc/of_table.cocci
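
The fix follows the usual sentinel pattern for OF match tables (the
compatible string shown is illustrative):

static const struct of_device_id unimac_mdio_ids[] = {
	{ .compatible = "brcm,genet-mdio-v4", },
	{ /* sentinel */ },	/* matching stops at the empty entry */
};
MODULE_DEVICE_TABLE(of, unimac_mdio_ids);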

Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 20:41:45 -07:00
Florian Fainelli
61b7363ffa net: dsa: make dsa_pack_type static
net/dsa/dsa.c:624:20: sparse: symbol 'dsa_pack_type' was not declared.
Should it be static?

Fixes: 3e8a72d1da ("net: dsa: reduce number of protocol hooks")
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 20:41:45 -07:00
David S. Miller
8a3cf39b72 Merge branch 'amd-xgbe-net'
Tom Lendacky says:

====================
amd-xgbe: AMD XGBE driver fixes 2014-08-29

The following series of patches includes fixes to the driver.

- Tx hardware queue flushing support dependent on hardware version
- Incorrect reported fifo size
- Proper mmd select in XPCS debugfs support
- Proper queue count for configuring Tx flow control

This patch series is based on net.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 20:38:48 -07:00
Lendacky, Thomas
9fc69affda amd-xgbe: Use the Tx queue count for Tx flow control support
When configuring Tx flow control the Rx queue count was used instead of
the Tx queue count for looping through the Tx hardware queues. Fix the
code to use the Tx queue count.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 20:38:14 -07:00
Lendacky, Thomas
625c8e4a2f amd-xgbe: Fix the xpcs mmd debugfs support
The debugfs support for the xpcs registers did not properly use the
specified mmd (xpcs_mmd entry) which resulted in the default mmd
value always being used.  Update the debugfs support to generate the
proper mmd register value.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 20:38:14 -07:00
Lendacky, Thomas
f076f45372 amd-xgbe: Reported fifo size from hardware is not correct
The fifo size reported by the hardware is not correct. Add support
to limit the reported size to what is actually present.  Also, fix
the argument types used in the fifo size calculation function.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 20:38:14 -07:00
Lendacky, Thomas
a9a4a2d9d6 amd-xgbe: Check for Tx hardware queue flushing support
The flushing of the Tx hardware queues is only supported at a certain
level of the hardware.  Retrieve the current version of the hardware
and use that to determine if flushing is supported.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 20:38:14 -07:00
Geert Uytterhoeven
e148e1bf6f drivers: net: NET_XGENE should depend on HAS_DMA
If NO_DMA=y:

drivers/built-in.o: In function `xgene_enet_delete_ring':
xgene_enet_main.c:(.text+0x28755a): undefined reference to `dma_free_coherent'
drivers/built-in.o: In function `xgene_enet_setup_tx_desc':
xgene_enet_main.c:(.text+0x287774): undefined reference to `dma_map_single'
xgene_enet_main.c:(.text+0x287780): undefined reference to `dma_mapping_error'
drivers/built-in.o: In function `xgene_enet_tx_completion':
xgene_enet_main.c:(.text+0x2878e6): undefined reference to `dma_unmap_single'
drivers/built-in.o: In function `xgene_enet_refill_bufpool':
xgene_enet_main.c:(.text+0x2879d4): undefined reference to `dma_map_single'
xgene_enet_main.c:(.text+0x2879e0): undefined reference to `dma_mapping_error'
drivers/built-in.o: In function `xgene_enet_rx_frame':
xgene_enet_main.c:(.text+0x287aaa): undefined reference to `dma_unmap_single'
drivers/built-in.o: In function `xgene_enet_free_desc_ring':
xgene_enet_main.c:(.text+0x287f98): undefined reference to `dma_free_coherent'
drivers/built-in.o: In function `xgene_enet_create_desc_ring':
xgene_enet_main.c:(.text+0x28808e): undefined reference to `dma_alloc_coherent'
drivers/built-in.o: In function `xgene_enet_probe':
xgene_enet_main.c:(.text+0x2883d4): undefined reference to `dma_set_mask'
xgene_enet_main.c:(.text+0x2883ec): undefined reference to `dma_supported'

Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 20:33:15 -07:00
Brian Foster
41b9d7263e xfs: trim eofblocks before collapse range
xfs_collapse_file_space() currently writes back the entire file
undergoing collapse range to settle things down for the extent shift
algorithm. While this prevents changes to the extent list during the
collapse operation, the writeback itself is not enough to prevent
unnecessary collapse failures.

The current shift algorithm uses the extent index to iterate the in-core
extent list. If a post-eof delalloc extent persists after the writeback
(e.g., a prior zero range op where the end of the range aligns with eof
can separate the post-eof blocks such that they are not written back and
converted), xfs_bmap_shift_extents() becomes confused over the encoded
br_startblock value and fails the collapse.

As with the full writeback, this is a temporary fix until the algorithm
is improved to cope with a volatile extent list and avoid attempts to
shift post-eof extents.

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-09-02 12:12:53 +10:00
Dave Chinner
1669a8ca21 xfs: xfs_file_collapse_range is delalloc challenged
If we have delalloc extents on a file before we run a collapse range
operation, we sync the range that we are going to collapse to
convert delalloc extents in that region to real extents to simplify
the shift operation.

However, the shift operation then assumes that the extent list is
not going to change as it iterates over the extent list moving
things about. Unfortunately, this isn't true because we can't hold
the ILOCK over all the operations. We can prevent new IO from
modifying the extent list by holding the IOLOCK, but that doesn't
prevent writeback from running....

And when writeback runs, it can convert delalloc extents in the
range of the file prior to the region being collapsed, and this
changes the indexes of all the extents in the file. That causes the
collapse range operation to Go Bad.

The right fix is to rewrite the extent shift operation not to be
dependent on the extent list not changing across the entire
operation, but this is a fairly significant piece of work to do.
Hence, as a short-term workaround for the problem, sync the entire
file before starting a collapse operation to remove all delalloc
ranges from the file and so avoid the problem of concurrent
writeback changing the extent list.

Diagnosed-and-Reported-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-09-02 12:12:53 +10:00
Brian Foster
ca446d880c xfs: don't log inode unless extent shift makes extent modifications
The file collapse mechanism uses xfs_bmap_shift_extents() to collapse
all subsequent extents down into the specified, previously punched out,
region. This function performs some validation, such as whether a
sufficient hole exists in the target region of the collapse, then shifts
the remaining extents downward.

The exit path of the function currently logs the inode unconditionally.
While we must log the inode (and abort) if an error occurs and the
transaction is dirty, the initial validation paths can generate errors
before the transaction has been dirtied. This creates an unnecessary
filesystem shutdown scenario, as the caller will cancel a transaction
that has been marked dirty.

Modify xfs_bmap_shift_extents() to OR the logflags bits as modifications
are made to the inode bmap. Only log the inode in the exit path if
logflags has been set. This ensures we only have to cancel a dirty
transaction if modifications have been made and prevents an unnecessary
filesystem shutdown otherwise.
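
A simplified sketch of the exit-path change (the validation helper is
hypothetical; the logging call is the real xfs_trans_log_inode):

	int logflags = 0;
	int error;

	/* Sketch: validation failures return before logflags is ever set,
	 * leaving the transaction clean and safe to cancel. */
	error = xfs_bmap_shift_validate(tp, ip, start_fsb);	/* hypothetical */
	if (error)
		goto out;

	logflags |= XFS_ILOG_CORE;	/* OR-ed as extents are actually moved */
out:
	if (logflags)
		xfs_trans_log_inode(tp, ip, logflags);
	return error;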

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-09-02 12:12:53 +10:00
Dave Chinner
7d4ea3ce63 xfs: use ranged writeback and invalidation for direct IO
Now we are not doing silly things with dirtying buffers beyond EOF
and using invalidation correctly, we can finally reduce the ranges of
writeback and invalidation used by direct IO to match that of the IO
being issued.

Bring the writeback and invalidation ranges back to match the
generic direct IO code - this will greatly reduce the perturbation
of cached data when direct IO and buffered IO are mixed, but still
provide the same buffered vs direct IO coherency behaviour we
currently have.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-09-02 12:12:53 +10:00
Dave Chinner
834ffca6f7 xfs: don't zero partial page cache pages during O_DIRECT writes
Similar to direct IO reads, direct IO writes are using 
truncate_pagecache_range to invalidate the page cache. This is
incorrect due to the sub-block zeroing in the page cache that
truncate_pagecache_range() triggers.

This patch fixes things by using invalidate_inode_pages2_range
instead.  It preserves the page cache invalidation, but won't zero
any pages.
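
Roughly, the substitution looks like this (variable names are illustrative;
the WARN follows the note added to the companion read-side fix):

	/* Sketch: drop the cached pages over the DIO range without zeroing
	 * sub-block ranges the way truncate_pagecache_range() would. */
	ret = invalidate_inode_pages2_range(VFS_I(ip)->i_mapping,
					    pos >> PAGE_CACHE_SHIFT, -1);
	WARN_ON_ONCE(ret);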

cc: stable@vger.kernel.org
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-09-02 12:12:52 +10:00
Chris Mason
85e584da32 xfs: don't zero partial page cache pages during O_DIRECT writes
xfs is using truncate_pagecache_range to invalidate the page cache
during DIO reads.  This is different from the other filesystems who
only invalidate pages during DIO writes.

truncate_pagecache_range is meant to be used when we are freeing the
underlying data structs from disk, so it will zero any partial
ranges in the page.  This means a DIO read can zero out part of the
page cache page, and it is possible the page will stay in cache.

buffered reads will find an up to date page with zeros instead of
the data actually on disk.

This patch fixes things by using invalidate_inode_pages2_range
instead.  It preserves the page cache invalidation, but won't zero
any pages.

[dchinner: catch error and warn if it fails. Comment.]

cc: stable@vger.kernel.org
Signed-off-by: Chris Mason <clm@fb.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-09-02 12:12:52 +10:00
Dave Chinner
22e757a49c xfs: don't dirty buffers beyond EOF
generic/263 is failing fsx at this point with a page spanning
EOF that cannot be invalidated. The operations are:

1190 mapwrite   0x52c00 thru    0x5e569 (0xb96a bytes)
1191 mapread    0x5c000 thru    0x5d636 (0x1637 bytes)
1192 write      0x5b600 thru    0x771ff (0x1bc00 bytes)

where 1190 extends EOF from 0x54000 to 0x5e569. When the direct IO
write attempts to invalidate the cached page over this range, it
fails with -EBUSY and so any attempt to do page invalidation fails.

The real question is this: Why can't that page be invalidated after
it has been written to disk and cleaned?

Well, there's data on the first two buffers in the page (1k block
size, 4k page), but the third buffer on the page (i.e. beyond EOF)
is failing drop_buffers because it's bh->b_state == 0x3, which is
BH_Uptodate | BH_Dirty.  IOWs, there's dirty buffers beyond EOF. Say
what?

OK, set_buffer_dirty() is called on all buffers from
__set_page_dirty_buffers(), regardless of whether the buffer is
beyond EOF or not, which means that when we get to ->writepage,
we have buffers marked dirty beyond EOF that we need to clean.
So, we need to implement our own .set_page_dirty method that
doesn't dirty buffers beyond EOF.

This is messy because the buffer code is not meant to be shared
and it has interesting locking issues on the buffer dirty bits.
So just copy and paste it and then modify it to suit what we need.

Note: the solutions the other filesystems and generic block code use
of marking the buffers clean in ->writepage does not work for XFS.
It still leaves dirty buffers beyond EOF and invalidations still
fail. Hence rather than play whack-a-mole, this patch simply
prevents those buffers from being dirtied in the first place.

cc: <stable@kernel.org>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-09-02 12:12:51 +10:00
Nikolay Aleksandrov
0f23124aaa bonding: add slave_changelink support and use it for queue_id
This patch adds support for slave_changelink to the bonding and uses it
to give the ability to change the queue_id of the enslaved devices via
netlink. It sets slave_maxtype and uses bond_changelink as a prototype for
bond_slave_changelink.
Example/test command after the iproute2 patch:
 ip link set eth0 type bond_slave queue_id 10

CC: David S. Miller <davem@davemloft.net>
CC: Jay Vosburgh <j.vosburgh@gmail.com>
CC: Veaceslav Falico <vfalico@gmail.com>
CC: Andy Gospodarek <andy@greyhouse.net>

Suggested-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Acked-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 18:32:22 -07:00
stephen hemminger
688d1945bc tcp: whitespace fixes
Fix places where there is a space before a tab, long lines, awkward if(){
placement, double spacing, etc. Add a blank line after declarations/initializations.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 18:12:45 -07:00
Florian Fainelli
d09d3038a3 net: systemport: tell RXCHK if we are using Broadcom tags
When Broadcom tags are enabled, e.g: when interfaced to an Ethernet
switch, make sure that we tell the RXCHK engine that it should be
expecting a 4-bytes Broadcom tag after the Ethernet MAC Source Address.

Use netdev_uses_dsa() to check for that condition since that will tell
us if a switch is attached to our network interface.
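
A sketch of the programming sequence (register and bit names follow the
driver's style but are quoted from memory, so treat them as illustrative):

	u32 reg = rxchk_readl(priv, RXCHK_CONTROL);

	/* Sketch: tell RXCHK whether a 4-byte Broadcom tag follows the
	 * source MAC address. */
	if (netdev_uses_dsa(dev))
		reg |= RXCHK_BRCM_TAG_EN;
	else
		reg &= ~RXCHK_BRCM_TAG_EN;
	rxchk_writel(priv, reg, RXCHK_CONTROL);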

Fixes: 80105befdb ("net: systemport: add Broadcom SYSTEMPORT Ethernet MAC driver")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 18:11:47 -07:00
David S. Miller
377655fe6b Merge tag 'master-2014-08-25' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless
John W. Linville says:

====================
pull request: wireless 2014-08-28

Please pull this batch of fixes intended for the 3.17 stream.

For the Bluetooth/6LowPAN/802.15.4 bits, Johan says:

'It contains a connection reference counting fix for LE where a
connection might stay up even though it should get disconnected.

The other 802.15.4 6LoWPAN related patches were sent to the bluetooth
tree by Alexander Aring and described as follows by him:

"
these patches contains patches for the bluetooth branch.

This series includes memory leak fixes and an errno value fix.
Also there are two patches for sending and receiving 1280 6LoWPAN
packets, which makes the IEEE 802.15.4 6LoWPAN stack more RFC
compliant.
"'

Along with that...

Alexey Khoroshilov fixes a use-after-free bug on at76c50x-usb.

Hauke Mehrtens adds a PCI ID to bcma.

Himangi Saraogi fixes a silly "A || A" test in rtlwifi.

Larry Finger adds a device ID to rtl8192cu.

Maks Naumov fixes a strncmp argument in ath9k.

Álvaro Fernández Rojas adds a PCI ID to ssb.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 18:08:11 -07:00
Jesper Dangaard Brouer
afb84b6261 pktgen: add flag NO_TIMESTAMP to disable timestamping
When testing the TX limits of the stack, it is useful to
be able to disable the do_gettimeofday() timestamping on every packet.

This implements a pktgen flag NO_TIMESTAMP which will disable this
call to do_gettimeofday().

The performance change (on my system, E5-2695) with skb_clone=0 goes
from TX 2,423,751 pps to 2,567,165 pps with flag NO_TIMESTAMP. Thus,
the cost of do_gettimeofday(), i.e. the saving, is approx 23 nanosec
per packet (about 412.6 ns vs 389.5 ns).
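
The flag check in the packet-fill path is essentially (sketch; pgh is the
pktgen header embedded in the test packet, and F_NO_TIMESTAMP follows
pktgen's existing F_* flag naming):

	struct timeval timestamp;

	if (!(pkt_dev->flags & F_NO_TIMESTAMP)) {
		do_gettimeofday(&timestamp);
		pgh->tv_sec = htonl(timestamp.tv_sec);
		pgh->tv_usec = htonl(timestamp.tv_usec);
	}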

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 18:06:59 -07:00
Dmitry Kravkov
05f8461bf7 bnx2x: fix tunneled GSO over IPv6
Set the correct bit in the packet description.

Introduced in e42780b66a
    bnx2x: Utilize FW 7.10.51

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Dmitry Kravkov <Dmitry.Kravkov@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 17:53:57 -07:00
Dmitry Kravkov
55ef5c89db bnx2x: prevent incorrect byte-swap in BE
Fixes an incorrectly defined struct in the FW HSI for the BE platform.
This affects tunneling, tx-switching and anti-spoofing.

Introduced in e42780b66a
    bnx2x: Utilize FW 7.10.51

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Dmitry Kravkov <Dmitry.Kravkov@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 17:53:57 -07:00
Erik Hugne
a5325ae5b8 tipc: add name distributor resiliency queue
TIPC name table updates are distributed asynchronously in a cluster,
entailing a risk of certain race conditions. E.g., if two nodes
simultaneously issue conflicting (overlapping) publications, this may
not be detected until both publications have reached a third node, in
which case one of the publications will be silently dropped on that
node. Hence, we end up with an inconsistent name table.

In most cases this conflict is just a temporary race, e.g., one
node is issuing a publication under the assumption that a previous,
conflicting, publication has already been withdrawn by the other node.
However, because of the (rtt related) distributed update delay, this
may not yet hold true on all nodes. The symptom of this failure is a
syslog message: "tipc: Cannot publish {%u,%u,%u}, overlap error".

In this commit we add a resiliency queue at the receiving end of
the name table distributor. When insertion of an arriving publication
fails, we retain it in this queue for a short amount of time, assuming
that another update will arrive very soon and clear the conflict. If so
happens, we insert the publication, otherwise we drop it.

The (configurable) retention value defaults to 2000 ms. Knowing from
experience that the situation described above is extremely rare, there
is no risk that the queue will accumulate any large number of items.
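
A generic sketch of such a resiliency queue entry and the deferral step
(names and structure are illustrative, not TIPC's actual implementation):

struct distr_queue_item {
	struct distr_item item;		/* the publication that failed to apply */
	u32 dtype;			/* publish/withdraw */
	unsigned long expires;		/* drop deadline, default 2000 ms out */
	struct list_head next;
};

/* Sketch: park a failed update instead of dropping it; it is retried when
 * the next distributor message arrives and discarded once expired. */
static void defer_named_update(struct list_head *backlog,
			       struct distr_item *item, u32 dtype)
{
	struct distr_queue_item *e = kmalloc(sizeof(*e), GFP_ATOMIC);

	if (!e)
		return;
	e->item = *item;
	e->dtype = dtype;
	e->expires = jiffies + msecs_to_jiffies(2000);
	list_add_tail(&e->next, backlog);
}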

Signed-off-by: Erik Hugne <erik.hugne@ericsson.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01 17:51:48 -07:00