tcp_cong_avoid_ai() was too timid (snd_cwnd increased too slowly) on
"stretch ACKs" -- cases where the receiver ACKed more than 1 packet in
a single ACK. For example, suppose w is 10 and we get a stretch ACK
for 20 packets, so acked is 20. We ought to increase snd_cwnd by 2
(since acked/w = 20/10 = 2), but instead we were only increasing cwnd
by 1. This patch fixes that behavior.
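For illustration, a minimal sketch of the credit-based increase described
above (not the literal upstream diff; names follow the kernel helper):

  /* Sketch only: accumulate the full ACKed count as credits and grant
   * one cwnd increment per w credits, so a stretch ACK of 20 packets
   * at w = 10 yields +2 instead of +1. */
  #include <net/tcp.h>

  void tcp_cong_avoid_ai(struct tcp_sock *tp, u32 w, u32 acked)
  {
          tp->snd_cwnd_cnt += acked;
          if (tp->snd_cwnd_cnt >= w) {
                  u32 delta = tp->snd_cwnd_cnt / w;   /* e.g. 20 / 10 = 2 */

                  tp->snd_cwnd_cnt -= delta * w;
                  tp->snd_cwnd += delta;
          }
          tp->snd_cwnd = min(tp->snd_cwnd, tp->snd_cwnd_clamp);
  }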
Reported-by: Eyal Perry <eyalpe@mellanox.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
LRO, GRO, delayed ACKs, and middleboxes can cause "stretch ACKs" that
cover more than the RFC-specified maximum of 2 packets. These stretch
ACKs can cause serious performance shortfalls in common congestion
control algorithms that were designed and tuned years ago with
receiver hosts that were not using LRO or GRO, and were instead
politely ACKing every other packet.
This patch series fixes Reno and CUBIC to handle stretch ACKs.
This patch prepares for the upcoming stretch ACK bug fix patches. It
adds an "acked" parameter to tcp_cong_avoid_ai() to allow for future
fixes to tcp_cong_avoid_ai() to correctly handle stretch ACKs, and
changes all congestion control algorithms to pass in 1 for the ACKed
count. It also changes tcp_slow_start() to return the number of packet
ACK "credits" that were not processed in slow start mode, and can be
processed by the congestion control module in additive increase mode.
In future patches we will fix tcp_cong_avoid_ai() to handle stretch
ACKs, and fix Reno and CUBIC handling of stretch ACKs in slow start
and additive increase mode.
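As a rough sketch of where the series is heading (simplified, not the exact
upstream code), a Reno-style cong_avoid handler can then chain the leftover
slow-start credits into the additive-increase helper:

  /* Simplified sketch: tcp_slow_start() consumes what it can and returns
   * the remaining ACK credits, which are handed to tcp_cong_avoid_ai(). */
  void tcp_reno_cong_avoid(struct sock *sk, u32 ack, u32 acked)
  {
          struct tcp_sock *tp = tcp_sk(sk);

          if (!tcp_is_cwnd_limited(sk))
                  return;

          if (tp->snd_cwnd <= tp->snd_ssthresh) {
                  acked = tcp_slow_start(tp, acked);
                  if (!acked)
                          return;
          }
          tcp_cong_avoid_ai(tp, tp->snd_cwnd, acked);
  }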
Reported-by: Eyal Perry <eyalpe@mellanox.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As of commit 9a1091ef00 ("irqchip: gic: Support hierarchy irq
domain."), the APE6EVM legacy board support is known to be broken.
The IRQ numbers of the GIC are now virtual, and no longer match the
hardcoded hardware IRQ numbers in the legacy platform board code.
To fix this issue specific to non-multiplatform r8a73a4 and APE6EVM:
1) Instantiate the GIC from platform board code,
2) Skip over the DT arch timer, and
3) Force delay setup based on the DT CPU frequency.
With these 3 fixes in place, interrupts on APE6EVM are now unbroken.
Partially based on legacy GIC fix by Geert Uytterhoeven, thanks to
him for the initial work.
Signed-off-by: Magnus Damm <damm+renesas@opensource.se>
Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
The previous fix for Armada XP, disabling I/O coherency, broke Armada
375/38x. Only switch the PL310 to I/O coherent mode if I/O coherency
is enabled.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIcBAABCAAGBQJUyRueAAoJEOa/DcumaUyEvqAP/AuTbrPtd7LeiZBvCfEBBs+C
bDT+TjksyMBMZEjQ4OFNQTj4NSLsL4RqfUrfTFkg6RJ0pjviD80saNkfKy8bByPR
WduY4AhSgks07YI79Nu8cKGUO/iEuxztnCZQWi5b5wNideY9z+Ta3k46Z26VfRwc
NCQsT2PLKpVmTnIlhi6ilLHsqRAwsV0An+swEyAZXVBAbfKpoWOHxrfR7wVO9oSa
3dFmnxFcF3/pavtIbL5wIHkGSsjlSi8sdCvieWdf186p8ubuV5TNyuRme0msbaAf
JBJCNorSepP9vNCb2cCaEOcq+/A/f3NZAX256zGHcyvg6B9Ntq324MVj0sZd+dSo
nAeJYQvgwD10HvZGj4kCAL11Fc45chnGoo1iPGuxvzF6nKI5liINMnAmU8VPhnZX
swL3M2k69T14QutS+FbEN/6RGOQAWHZoXQ5YxFwFqei2I7j7g0QJvemwrfFkTwQj
bPyOE2op6fPpgJyGedM5icU8KesCkORNuu0lRWT9v5rU51epwdna3lcProtuBOA2
fq/WXb0mC/poyToIEJJZHOLJU7jZy2D+WMKBpu3imiKYBv29qShT9Mah22msySfY
YY9luHCQMWuT5zVVNFA/SMa8sQbpfxbNngen3iW79WK3LHVP9B6I/dfrn3F/BXXq
qtYwwNYQqf9kuI0nyy1H
=qzYQ
-----END PGP SIGNATURE-----
Merge tag 'mvebu-fixes-3.19-6' of git://git.infradead.org/linux-mvebu into fixes
Merge "mvebu-fixes-6" from Andrew Lunn:
The previous fix for Armada XP, disabling I/O coherency, broke Armada
375/38x. Only switch the PL310 to I/O coherent mode if I/O coherency
is enabled.
* tag 'mvebu-fixes-3.19-6' of git://git.infradead.org/linux-mvebu:
ARM: mvebu: don't set the PL310 in I/O coherency mode when I/O coherency is disabled
Signed-off-by: Olof Johansson <olof@lixom.net>
When the ak4114 work calls its callback and the callback invokes
ak4114_reinit(), it stalls due to flush_delayed_work(). To avoid this,
control the reentrance by introducing a refcount. Also,
flush_delayed_work() is replaced with cancel_delayed_work_sync().
The exact same bug is present in ak4113.c and is fixed as well.
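A hedged sketch of the reentrance-guard pattern described above; the names
are illustrative, not the actual ak4114/ak4113 code:

  #include <linux/atomic.h>
  #include <linux/workqueue.h>
  #include <linux/jiffies.h>

  static atomic_t work_nesting = ATOMIC_INIT(0);

  static void stats_work_fn(struct work_struct *work)
  {
          atomic_inc(&work_nesting);
          /* ... read chip status and invoke the user callback here;
           * the callback may in turn call chip_reinit() ... */
          atomic_dec(&work_nesting);
  }

  static DECLARE_DELAYED_WORK(stats_work, stats_work_fn);

  void chip_reinit(void)
  {
          /* Only wait for the worker when not called from inside it,
           * otherwise we would deadlock waiting on ourselves. */
          if (atomic_read(&work_nesting) == 0)
                  cancel_delayed_work_sync(&stats_work);
          /* ... reprogram the chip registers, then re-arm the work ... */
          schedule_delayed_work(&stats_work, msecs_to_jiffies(100));
  }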
Reported-by: Pavel Hofman <pavel.hofman@ivitera.com>
Acked-by: Jaroslav Kysela <perex@perex.cz>
Tested-by: Pavel Hofman <pavel.hofman@ivitera.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
The core only supports up to 32 slaves, and the chipselect function
expects the same.
Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Refactor drms_uA_update() slightly to allow regulator_set_optimum_mode()
to utilize the same logic instead of duplicating it.
Signed-off-by: Bjorn Andersson <bjorn.andersson@sonymobile.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
"force_mode" is a u32 so it is never "< 0", but because of type
promotion then comparing "== -1" will do what we want.
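A minimal standalone illustration of why the comparison still works
(generic C, not the regulator driver's code):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          uint32_t force_mode = UINT32_MAX;   /* what "-1" stores in a u32 */

          /* Never true: an unsigned value cannot be negative. */
          if (force_mode < 0)
                  printf("force_mode < 0\n");

          /* True: the int constant -1 is converted to UINT32_MAX by the
           * usual arithmetic conversions before the comparison. */
          if (force_mode == -1)
                  printf("force_mode == -1\n");

          return 0;
  }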
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Small transfers can generally be completed faster in polling mode.
This patch selects polling mode for transfers whose size is below the
buffer size.
Suggested-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
The variable never leaves the scope of txrx_bufs.
Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Simplify the code by using the unit already used in most of the code logic.
Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Simplify the code by using the unit already used in most of the code logic.
Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
spi_rx handles the case where the buffer is NULL, but spi_tx did not;
that case was instead handled by the caller.
Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Simplify the code by removing the tx and rx function pointers and
replacing them with a single function.
Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
The core controls the chip select lines individually.
By default, all the lines are considered active_low; after
spi_setup_transfer, each line has its real polarity.
Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
When no irq is used, there is no need to inhibit the transmission for
every transaction. This inhibition was implemented to avoid a race
condition with the irq handler.
Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
The core can run in polling mode. In fact, the performance of the core
is similar (or even better), because most SPI transactions are just a
couple of bytes and there is one IRQ per transaction.
When an MTD device is connected via SPI, reading 8MB of data produces
more than 80K interrupts (with IRQ disabling, context switching....)
Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
The control register has not changed since the previous access, so we
can use the cached value and save one bus access.
Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
In the transmission loop, check for remaining bytes in the loop
condition.
This way we can handle transmissions of 0 bytes and clean up the code.
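A hypothetical sketch of the resulting loop shape (names are made up, not
the driver's code):

  struct my_spi;                               /* hypothetical driver state */
  void my_spi_tx_byte(struct my_spi *hw);      /* hypothetical helpers      */
  void my_spi_rx_byte(struct my_spi *hw);

  static void my_spi_xfer(struct my_spi *hw, unsigned int remaining_bytes)
  {
          /* Testing the count in the loop condition lets a zero-length
           * transfer skip the body instead of needing a special case. */
          while (remaining_bytes) {
                  my_spi_tx_byte(hw);
                  my_spi_rx_byte(hw);
                  remaining_bytes--;
          }
  }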
Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Avoid enabling and then disabling the IRQ for every transaction.
Especially small transactions (1-2 words) benefit from removing 3 bus
accesses.
Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Instead of checking the TX_FULL flag for every transaction, find out
the size of the buffer at probe time and use it.
To avoid situations where the core had some data in the buffer before
initialization, the core is reset before the buffer size is detected.
Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
The DSPI module needs the CS change information within an SPI
transfer: based on the CS change, DSPI gives the last data word the
right flag. Bitbang provides the CS change only after the last data
word of a transfer, so DSPI cannot handle the last word of each
transfer properly. Therefore, remove the use of bitbang from the
driver.
Signed-off-by: Bhuvanchandra DV <bhuvanchandra.dv@toradex.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Add GPIO control for enabling/disabling the buck regulator.
Signed-off-by: James Ban <james.ban.opensource@diasemi.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
The pl08x driver originally selected S3C64XX_PL080 to avoid having
the legacy Samsung DMA interfaces. Those are now gone, so the
select is no longer needed, but it now causes problems when
CONFIG_DMA_ENGINE is disabled:
arch/arm/plat-samsung/built-in.o: In function `s3c64xx_spi0_set_platdata':
:(.init.text+0x518): undefined reference to `pl08x_filter_id'
This simply removes the 'select' to avoid this problem.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Mark Brown <broonie@kernel.org>
We currently get a warning about potentially uninitialized variables
in the rockchip spi driver, at least in certain toolchain versions:
spi/spi-rockchip.c: In function 'rockchip_spi_prepare_dma':
include/linux/dmaengine.h:796:2: warning: 'txdesc' may be used uninitialized in this function
include/linux/dmaengine.h:796:2: warning: 'rxdesc' may be used uninitialized in this function
The reason seems to be that gcc cannot know whether the values
of the rs->rx and rs->tx variables change between the two points
where they are accessed.
The code is actually correct, but to make this clearer to the
compiler, this changes the conditionals to test for the local
rxdesc/txdesc variables instead, which it knows won't change.
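In generic terms the change has this shape (hedged sketch, not the driver's
exact code; 'rs' is the driver state mentioned in the warning above):

  #include <linux/dmaengine.h>

  /* Test the local descriptor pointers, which the compiler can track,
   * instead of rs->rx / rs->tx, which it cannot prove unchanged between
   * the two checks. */
  struct dma_async_tx_descriptor *rxdesc = NULL, *txdesc = NULL;

  if (rs->rx)
          rxdesc = prep_rx_desc(rs);      /* hypothetical helper */
  if (rs->tx)
          txdesc = prep_tx_desc(rs);      /* hypothetical helper */

  if (rxdesc)                             /* was: if (rs->rx) */
          dmaengine_submit(rxdesc);
  if (txdesc)                             /* was: if (rs->tx) */
          dmaengine_submit(txdesc);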
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Mark Brown <broonie@kernel.org>
Since commit f2c3c67f00 (merge commit that adds commit "ARM: mvebu:
completely disable hardware I/O coherency"), we disable I/O coherency
on Armada EBU platforms.
However, we continue to initialize the coherency fabric, because this
coherency fabric is needed on Armada XP for inter-CPU
coherency. Unfortunately, due to this, we also continued to execute
the coherency fabric initialization code for Armada 375/38x, which
switched the PL310 into I/O coherent mode. This has the effect of
disabling the outer cache sync operation: this is needed when I/O
coherency is enabled to work around a PCIe/L2 deadlock. But obviously,
when I/O coherency is disabled, having the outer cache sync operation
is crucial.
Therefore, this commit fixes the armada_375_380_coherency_init() so
that the PL310 is switched to I/O coherent mode only if I/O coherency
is enabled.
Without this fix, all devices using DMA are broken on Armada 375/38x.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Cc: <stable@vger.kernel.org> # v3.8+
uClibc Linuxthreads.old doesn't support the pthread_attr_setaffinity_np()
function:
----------------->8-----------------------
CC bench/futex-hash.o
CC bench/futex-wake.o
bench/futex-hash.c: In function 'bench_futex_hash':
bench/futex-hash.c:161:3: error: implicit declaration of function
'pthread_attr_setaffinity_np' [-Werror=implicit-function-declaration]
ret = pthread_attr_setaffinity_np(&thread_attr, sizeof(cpu_set_t),
&cpu);
^
bench/futex-hash.c:161:3: error: nested extern declaration of
'pthread_attr_setaffinity_np' [-Werror=nested-externs]
----------------->8-----------------------
So introduce a feature test to check for it and, if it is not available,
provide a stub.
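A hedged sketch of what such a feature test and fallback stub could look
like (illustrative macro name and layout, not necessarily what the perf
build uses):

  /* Hypothetical feature-test source: it only has to compile and link,
   * which fails on a libc that lacks pthread_attr_setaffinity_np(). */
  #define _GNU_SOURCE
  #include <pthread.h>

  int main(void)
  {
          pthread_attr_t attr;

          pthread_attr_init(&attr);
          return pthread_attr_setaffinity_np(&attr, 0, NULL);
  }

  /* If the test fails, a no-op stub can stand in: */
  #ifndef HAVE_PTHREAD_ATTR_SETAFFINITY_NP
  #include <sched.h>
  static inline int pthread_attr_setaffinity_np(pthread_attr_t *attr,
                                                size_t cpusetsize,
                                                cpu_set_t *cpuset)
  {
          return 0;
  }
  #endif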
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexey Brodkin <Alexey.Brodkin@synopsys.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1421156604-30603-6-git-send-email-vgupta@synopsys.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
When running perf on ARC (uClibc-based userspace), I ran into this issue:
------------->8----------------
[ARCLinux]$ ./perf record ls
bin etc perf sys
debug init perf.data tmp
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.001 MB perf.data (~24 samples) ]
[ARCLinux]$ ./perf report
incompatible file format (rerun with -v to learn more)
------------->8----------------
The problem happens in the following call stack when zalloc() is called
with size zero. The glibc default behaviour (and uClibc with
MALLOC_GLIBC_COMPAT) is fine, but uClibc without that config option is
not:
  cmd_report
     perf_session__new
        perf_session__open
          perf_session__read_header
             read_attr(fd, header, &f_attr)
             nr_ids = f_attr.ids.size / sizeof(u64);  <-- 0
             perf_evsel__alloc_id(vsel, 1, nr_ids)
                zalloc(ncpus * nthreads * sizeof(u64))  <-- 0

  header.c: read_attr()
    (gdb) p *f_attr
    $17 = {
      attr = {
        type = 0,
        size = 96,
        config = 0,
        {
          sample_period = 4000,
          sample_freq = 4000
        },
        ...
      ids = {
        offset = 104,
        size = 0   <------
      }
    }
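The underlying libc difference can be demonstrated with a trivial
standalone program (illustrative only; a zero-sized allocation may legally
return NULL or a unique pointer):

  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
          /* glibc (and uClibc with MALLOC_GLIBC_COMPAT) returns a non-NULL
           * pointer for a zero-sized allocation; plain uClibc returns NULL,
           * so callers treating NULL from zalloc(0) as failure break. */
          void *p = calloc(1, 0);

          printf("calloc(1, 0) = %p\n", p);
          free(p);
          return 0;
  }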
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Suggested-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexey Brodkin <Alexey.Brodkin@synopsys.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1421156604-30603-5-git-send-email-vgupta@synopsys.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
You can't modify the metadata in these modes. It's better to fail these
messages immediately than let the block-manager deny write locks on
metadata blocks. Otherwise these failed metadata changes will trigger
'needs_check' to get set in the metadata superblock -- requiring repair
using the thin_check utility.
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
Commit 9b1cc9f251 ("dm cache: share cache-metadata object across
inactive and active DM tables") mistakenly ignored the use of ERR_PTR
returns. Restore missing IS_ERR checks and ERR_PTR returns where
appropriate.
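For reference, a generic sketch of the ERR_PTR()/IS_ERR() convention being
restored (illustrative types and helpers, not the dm-cache code itself):

  #include <linux/err.h>
  #include <linux/errno.h>

  struct thing;                                   /* illustrative type   */
  struct thing *find_thing(int id);               /* hypothetical helper */

  /* Functions in this convention encode an errno value in the returned
   * pointer instead of returning NULL on failure... */
  static struct thing *lookup_thing(int id)
  {
          if (id < 0)
                  return ERR_PTR(-EINVAL);
          return find_thing(id);
  }

  /* ...so every caller must check with IS_ERR()/PTR_ERR() rather than
   * only testing for NULL. */
  static int use_thing(int id)
  {
          struct thing *t = lookup_thing(id);

          if (IS_ERR(t))
                  return PTR_ERR(t);
          return 0;
  }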
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
The new hw_breakpoint bits are now ready for v3.20, merge them
into the main branch, to avoid conflicts.
Conflicts:
tools/perf/Documentation/perf-record.txt
Signed-off-by: Ingo Molnar <mingo@kernel.org>
User visible:
- Enable sampling loads and stores simultaneously in 'perf mem' (Stephane Eranian)
- 'perf diff' output improvements (Namhyung Kim)
- Fix error reporting for evsel pgfault constructor (Arnaldo Carvalho de Melo)
Infrastructure:
- Move the debugfs strerror-like method to tools/lib/ so that it may be used by
other tools, as 'perf probe' will be soon (Arnaldo Carvalho de Melo)
- Introduce a function for deleting/removing hist_entry to avoid code duplication
(Arnaldo Carvalho de Melo)
- Support parsing parameterized events (Cody P Schafer)
- Add support for IP address formats in libtraceevent (David Ahern)
- Fix typo in sample-parsing.c 'perf test' entry (Rasmus Villemoes)
- Remove some unused functions from color.c (Rickard Strandqvist)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAABAgAGBQJUxnhhAAoJEBpxZoYYoA71tmMIAL8vvd8zzF92hMB5vmfQia37
WjPzdwMZxBy3noQztZdega+VcH+B9kpKI7Aes3knBb+SsBIR7IZHJGlI23cyWe83
7iDJWNXtEQhBt7y94zTulEdmxeIxiHMItRVspJpfdGlZ6VaVWZKsTKNWFqjvm7mC
fjcfbMXf0itsKky5spvg/425+rO//6c1jmprWTtHmIt7B5ysO+9UdMmOqFZlKiFM
YTbwpwTMMHhdlrLYm7+CqyEH4pemEtf10KVathf7rlmfHMJllyHy3AnJh9N0dN8L
XaokVxe4BYcfKCpyXDlKDH6s1Rf0+6JyRKu+hnDKY9y86RV0HV1L5mgcVInc+Nw=
=CxtR
-----END PGP SIGNATURE-----
Merge tag 'perf-core-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
"
User visible changes:
- Enable sampling loads and stores simultaneously in 'perf mem' (Stephane Eranian)
- 'perf diff' output improvements (Namhyung Kim)
- Fix error reporting for evsel pgfault constructor (Arnaldo Carvalho de Melo)
Infrastructure changes:
- Move the debugfs strerror-like method to tools/lib/ so that it may be used by
other tools, as 'perf probe' will be soon (Arnaldo Carvalho de Melo)
- Introduce a function for deleting/removing hist_entry to avoid code duplication
(Arnaldo Carvalho de Melo)
- Support parsing parameterized events (Cody P Schafer)
- Add support for IP address formats in libtraceevent (David Ahern)
- Fix typo in sample-parsing.c 'perf test' entry (Rasmus Villemoes)
- Remove some unused functions from color.c (Rickard Strandqvist)
"
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
User visible:
- Fix probing at function return (Namhyung Kim)
Developer stuff:
- Symbol processing changes necessary for fixing support for
kretprobes in 'perf probe' (Namhyung Kim, Arnaldo Carvalho de Melo)
- Annotation memory leaks and instruction parsing fixes (Rabin Vincent)
- Fix perl build on ARM64 (Wang Nan)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAABAgAGBQJUv7uZAAoJEBpxZoYYoA71FocIAKPHkpS3qr1xS1aP3KKjS/0m
PGwsZmx6M92HPB1dHDgyZKmcdd6LL+QNwqXeXinwYhgnH62uvZCRWURlcGYcD/NU
cPTQJzmxDyXNHJGRPyDcYRZS9CFp5A7ccmpmcW7Zw6FgR+JLOcW0A7WkkmetxlYo
OyvbmTuWR+jT/PhfLRvriNPo2gbY7IV3xstJW7lP4R6VXcvpA3RJRKckQRhi9FBR
FcJzyITcaGKVe7f3sY3SAOrBB/bvmIK2EHk9qcMrwlR75FHVrWeuD7J/WiVCmGtG
VUEfl/EmGiBIwPNl268GoImzqJcAVY2EQmnHsSEQwVIRuJtjFc07jGH+Tg2XaUs=
=XNsc
-----END PGP SIGNATURE-----
Merge tag 'perf-urgent-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent
Pull perf/urgent fixes from Arnaldo Carvalho de Melo:
" User visible fixes:
- Fix probing at function return (Namhyung Kim)
Developer visible fixes:
- Symbol processing changes necessary for fixing support for
kretprobes in 'perf probe' (Namhyung Kim, Arnaldo Carvalho de Melo)
- Annotation memory leaks and instruction parsing fixes (Rabin Vincent)
- Fix perl build on ARM64 (Wang Nan)
"
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This is my accumulated x86 entry work, part 1, for 3.20. The meat
of this is an IST rework. When an IST exception interrupts user
space, we will handle it on the per-thread kernel stack instead of
on the IST stack. This sounds messy, but it actually simplifies the
IST entry/exit code, because it eliminates some ugly games we used
to play in order to handle rescheduling, signal delivery, etc on the
way out of an IST exception.
The IST rework introduces proper context tracking to IST exception
handlers. I haven't seen any bug reports, but the old code could
have incorrectly treated an IST exception handler as an RCU extended
quiescent state.
The memory failure change (included in this pull request with
Borislav and Tony's permission) eliminates a bunch of code that
is no longer needed now that user memory failure handlers are
called in process context.
Finally, this includes a few of Denys' uncontroversial and Obviously
Correct (tm) cleanups.
The IST and memory failure changes have been in -next for a while.
LKML references:
IST rework:
http://lkml.kernel.org/r/cover.1416604491.git.luto@amacapital.net
Memory failure change:
http://lkml.kernel.org/r/54ab2ffa301102cd6e@agluck-desk.sc.intel.com
Denys' cleanups:
http://lkml.kernel.org/r/1420927210-19738-1-git-send-email-dvlasenk@redhat.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAABAgAGBQJUtvkFAAoJEK9N98ZeDfrkcfsIAJxZ0UBUCEDvulbqgk/iPGOa
fIpKLMowS7CpKtw6Wdc/YvAIkeHXWm1vU44Hj0TrjSrXCgVF8yCngs/xlXtOjoa1
dosXQqgqVJJ+hyui7chAEWyalLW7bEO8raq/6snhiMrhiuEkVKpEr7Fer4FVVCZL
4VALmNQQsbV+Qq4pXIhuagZC0Nt/XKi/+/cKvhS4p//q1F/TbHTz0FpDUrh0jPMh
18WFy0jWgxdkMRnSp/wJhekvdXX6PwUy5BdES9fjw8LQJZxxFpqN3Fe1kgfyzV0k
yuvEHw1hPt2aBGj3q69wQvDVyyn4OqMpRDBhk4S+GJYmVh7mFyFMN4BDMEy/EY8=
=LXVl
-----END PGP SIGNATURE-----
Merge tag 'pr-20150114-x86-entry' of git://git.kernel.org/pub/scm/linux/kernel/git/luto/linux into x86/asm
Pull x86/entry enhancements from Andy Lutomirski:
" This is my accumulated x86 entry work, part 1, for 3.20. The meat
of this is an IST rework. When an IST exception interrupts user
space, we will handle it on the per-thread kernel stack instead of
on the IST stack. This sounds messy, but it actually simplifies the
IST entry/exit code, because it eliminates some ugly games we used
to play in order to handle rescheduling, signal delivery, etc on the
way out of an IST exception.
The IST rework introduces proper context tracking to IST exception
handlers. I haven't seen any bug reports, but the old code could
have incorrectly treated an IST exception handler as an RCU extended
quiescent state.
The memory failure change (included in this pull request with
Borislav and Tony's permission) eliminates a bunch of code that
is no longer needed now that user memory failure handlers are
called in process context.
Finally, this includes a few of Denys' uncontroversial and Obviously
Correct (tm) cleanups.
The IST and memory failure changes have been in -next for a while.
LKML references:
IST rework:
http://lkml.kernel.org/r/cover.1416604491.git.luto@amacapital.net
Memory failure change:
http://lkml.kernel.org/r/54ab2ffa301102cd6e@agluck-desk.sc.intel.com
Denys' cleanups:
http://lkml.kernel.org/r/1420927210-19738-1-git-send-email-dvlasenk@redhat.com
"
This tree semantically depends on and is based on the following RCU commit:
734d168013 ("rcu: Make rcu_nmi_enter() handle nesting")
... and for that reason won't be pushed upstream before the RCU bits hit Linus's tree.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This effectively reverts the last hunk of 392a9dad7e ("rbd: detect
when clone image is flattened").
The problem with the parent_overlap != 0 condition is that it's possible
and completely valid to have an image with parent_overlap == 0 whose
parent state needs to be cleaned up on unmap. The next commit, which
drops the "clone image now standalone" logic, opens up another window
of opportunity to hit this, but even without it
# cat parent-ref.sh
#!/bin/bash
rbd create --image-format 2 --size 1 foo
rbd snap create foo@snap
rbd snap protect foo@snap
rbd clone foo@snap bar
rbd resize --allow-shrink --size 0 bar
rbd resize --size 1 bar
DEV=$(rbd map bar)
rbd unmap $DEV
leaves rbd_device/rbd_spec/etc and rbd_client along with ceph_client
hanging around.
My thinking behind calling rbd_dev_parent_put() unconditionally is that
there shouldn't be any requests in flight at that point in time as we
are deep into the unmap sequence. Hence, even if rbd_dev_unparent() caused
by flatten is delayed by in-flight requests, it will have finished by
the time we reach rbd_dev_unprobe() caused by unmap, thus turning
unconditional rbd_dev_parent_put() into a no-op.
Fixes: http://tracker.ceph.com/issues/10352
Cc: stable@vger.kernel.org # 3.11+
Signed-off-by: Ilya Dryomov <idryomov@redhat.com>
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
Reviewed-by: Alex Elder <elder@linaro.org>
The comment for rbd_dev_parent_get() said
* We must get the reference before checking for the overlap to
* coordinate properly with zeroing the parent overlap in
* rbd_dev_v2_parent_info() when an image gets flattened. We
* drop it again if there is no overlap.
but the "drop it again if there is no overlap" part was missing from
the implementation. This led to absurd parent_ref values for images
with parent_overlap == 0, as parent_ref was incremented for each
img_request and virtually never decremented.
Fix this by leveraging the fact that the refresh path calls
rbd_dev_v2_parent_info() under header_rwsem, and take it for read in
rbd_dev_parent_get() instead of messing around with atomics. Get rid
of barriers in rbd_dev_v2_parent_info() while at it - I don't see what
they'd pair with now and I suspect we are in a pretty miserable
situation as far as proper locking goes regardless.
Cc: stable@vger.kernel.org # 3.11+
Signed-off-by: Ilya Dryomov <idryomov@redhat.com>
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
Reviewed-by: Alex Elder <elder@linaro.org>
The fix from 9fc81d8742 ("perf: Fix events installation during
moving group") was incomplete in that it failed to recognise that
creating a group with events for different CPUs is semantically
broken -- they cannot be co-scheduled.
Furthermore, it leads to real breakage where, when we create an event
for CPU Y and then migrate it to form a group on CPU X, the code gets
confused where the counter is programmed -- triggered in practice
as well by me via the perf fuzzer.
Fix this by tightening the rules for creating groups. Only allow
grouping of counters that can be co-scheduled in the same context.
This means for the same task and/or the same cpu.
Fixes: 9fc81d8742 ("perf: Fix events installation during moving group")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20150123125834.090683288@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
At least some gcc versions - validly afaict - warn about potentially
using max_group uninitialized: There's no way the compiler can prove
that the body of the conditional where it and max_faults get set/
updated gets executed; in fact, without knowing all the details of
other scheduler code, I can't prove this either.
Generally the necessary change would appear to be to clear max_group
prior to entering the inner loop, and break out of the outer loop when
it ends up being all clear after the inner one. This, however, seems
inefficient, and afaict the same effect can be achieved by exiting the
outer loop when max_faults is still zero after the inner loop.
[ mingo: changed the solution to zero initialization: uninitialized_var()
needs to die, as it's an actively dangerous construct: if in the future
a known-proven-good piece of code is changed to have a true, buggy
uninitialized variable, the compiler warning is then suppressed...
The better long term solution is to clean up the code flow, so that
even simple minded compilers (and humans!) are able to read it without
getting a headache. ]
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/54C2139202000078000588F7@mail.emea.novell.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This patch fixes a systematic crash in rapl_scale()
due to an invalid pointer.
The bug was introduced by commit:
89cbc76768 ("x86: Replace __get_cpu_var uses")
The fix is simple. Just put the parenthesis where it needs
to be, i.e., around rapl_pmu. To my surprise, the compiler
was not complaining about passing an integer instead of a
pointer.
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Tested-by: Vince Weaver <vincent.weaver@maine.edu>
Fixes: 89cbc76768 ("x86: Replace __get_cpu_var uses")
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: cl@linux.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20150122203834.GA10228@thinkpad
Signed-off-by: Ingo Molnar <mingo@kernel.org>
There were some issues where the uncore driver tried to access
non-existent boxes, which caused boot crashes. These issues have all
been fixed, but we should avoid boot failures if that ever happens
again.
This patch intends to prevent this kind of potential issue. It moves
uncore_box_init() out of driver initialization; the box will be
initialized when it is first enabled.
Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1421729665-5912-1-git-send-email-kan.liang@intel.com
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Commit 7566bcc76b ("spi: pxa2xx: Move is_lpss_ssp() tests to caller") did
not check LPSS before calling lpss_ssp_setup() in pxa2xx_spi_resume().
Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Commits 65cef1311d ("x86, microcode: Add a disable chicken bit") and
a18a0f6850 ("x86, microcode: Don't initialize microcode code on
paravirt") allow microcode driver skip initialization when microcode
loading is not permitted.
However, they don't prevent the driver from being loaded since the
init code returns 0. If at some point later the driver gets unloaded
this will result in an oops while trying to deregister the (never
registered) device.
To avoid this, make init code return an error on paravirt or when
microcode loading is disabled. The driver will then never be loaded.
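Conceptually the change has this shape (hedged sketch; the exact checks and
helpers in the real driver may differ):

  /* Returning an error from the init routine keeps the driver from being
   * registered at all, so a later unload cannot oops while deregistering
   * a device that was never registered. */
  static int __init microcode_init(void)
  {
          if (paravirt_enabled() || dis_ucode_ldr)
                  return -EINVAL;

          return register_microcode_devices();   /* hypothetical helper */
  }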
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: http://lkml.kernel.org/r/1422411669-25147-1-git-send-email-boris.ostrovsky@oracle.com
Reported-by: James Dingwall <james@dingwall.me.uk>
Cc: stable@vger.kernel.org # 3.18
Signed-off-by: Borislav Petkov <bp@suse.de>
Currently ->get_dqblk() and ->set_dqblk() use struct fs_disk_quota which
tracks space limits and usage in 512-byte blocks. However VFS quotas
track usage in bytes (as some filesystems require that) and we need to
somehow pass this information. Up to now it wasn't a problem because we
didn't do any unit conversion (thus the VFS quota routines happily stuck
the number of bytes into the d_bcount field of struct fs_disk_quota). Only
if you tried to use Q_XGETQUOTA or Q_XSETQLIM for VFS quotas (or
Q_GETQUOTA / Q_SETQUOTA for XFS quotas) did you get bogus results. Hardly
anyone tried this, but reportedly some Samba users hit the problem in
practice. So if we want the interfaces to be compatible, we need to fix
this.
We bite the bullet and define another quota structure used for passing
information from/to ->get_dqblk()/->set_dqblk(). It's somewhat sad that we
need more conversion routines in fs/quota/quota.c, and the extra copying
of the quota structure slows down getting quota information by about 2%,
but it seems cleaner than overloading e.g. the units of d_bcount to bytes.
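For illustration, the unit mismatch amounts to conversions of this kind
(generic sketch, not the actual fs/quota/quota.c helpers):

  /* XFS-style interfaces count space in 512-byte basic blocks, while the
   * VFS quota code tracks usage and limits in bytes. */
  #define QUOTA_BASIC_BLOCK_SHIFT 9

  static inline unsigned long long quota_blocks_to_bytes(unsigned long long blocks)
  {
          return blocks << QUOTA_BASIC_BLOCK_SHIFT;
  }

  static inline unsigned long long quota_bytes_to_blocks(unsigned long long bytes)
  {
          /* round up to the next whole 512-byte block */
          return (bytes + (1ULL << QUOTA_BASIC_BLOCK_SHIFT) - 1)
                  >> QUOTA_BASIC_BLOCK_SHIFT;
  }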
CC: stable@vger.kernel.org
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Commit 6fb1ca92a6 "udf: Fix race between write(2) and close(2)"
changed the condition when preallocation is released. The idea was that
we don't want to release the preallocation for an inode on close when
there are other writeable file descriptors for the inode. However the
condition was written in the opposite way so we released preallocation
only if there were other writeable file descriptors. Fix the problem by
changing the condition properly.
CC: stable@vger.kernel.org
Fixes: 6fb1ca92a6
Reported-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Jan Kara <jack@suse.cz>