Commit graph

13 commits

Author SHA1 Message Date
Greg Kroah-Hartman
fb7e319634 This is the 4.4.136 stable release
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAlsX88AACgkQONu9yGCS
 aT4fEBAAygf8GZqR8ql76DdEBREkgTgGrne2+Rq56eylWZyycU2FpZVLe2ct7yjf
 rbF2XCtxdPmia++z0WvmslDbtUeqSSPOz1jZBEERmyZpjpOkDTwsMUfz75Gvpi83
 ZJS4KXseL9W/jrSyIAbHJ4Fq1ffmoWzN8mEepde26Ic2DJ/3mB2Dphgg95UjI7rw
 KGg3+Jjr21ojrEmI1BOVItgZ6iU0jTgCkwrYrP1eI+OzRjasGMMJRh/HYBfr3GEY
 N6Ggi5PyIWF/DOeTp53hajOAFbt5WTFK6hiiwLqz+6XQuhY45N1YuXgT/vszZmKz
 nngD5p5+GWKZoXtRXoLMXts8EdZ55yoyj6dkIOM5W62C3HhxjqpPrLXJMdtm5eO/
 tL8/vbB6AzniFB/hQS4IqfqQ6sizcAzGi/vP0eOW2I7K9WIsbXR9vt1BcvVaIrRF
 O/9xX4QJrceNIUzq25sdS7vv4fk7O0AUq/bZtYWWjKY+4E2LhAPoHgmB7cF/M8jJ
 K8BtMtClyDqfpIhJiH3PDYdY6jRfYKcNUhMZLBYN9uRwa/5l8cC4AIKBEY8IyhgB
 i05G8YadInSSqf2eRGZ97Qpn5MVYm2G/r2BtpNLbCfIYUfvnHD7mWfteVjVw4Yjh
 Q6ERVHkvjEFsn1BPBd34OMVJlDz0oqNT92NwiAlXiA4Sxizvvh4=
 =0oNX
 -----END PGP SIGNATURE-----

Merge 4.4.136 into android-4.4

Changes in 4.4.136
	arm64: lse: Add early clobbers to some input/output asm operands
	powerpc/64s: Clear PCR on boot
	USB: serial: cp210x: use tcflag_t to fix incompatible pointer type
	sh: New gcc support
	xfs: detect agfl count corruption and reset agfl
	Revert "ima: limit file hash setting by user to fix and log modes"
	Input: elan_i2c_smbus - fix corrupted stack
	tracing: Fix crash when freeing instances with event triggers
	selinux: KASAN: slab-out-of-bounds in xattr_getsecurity
	cfg80211: further limit wiphy names to 64 bytes
	rtlwifi: rtl8192cu: Remove variable self-assignment in rf.c
	ASoC: Intel: sst: remove redundant variable dma_dev_name
	irda: fix overly long udelay()
	tcp: avoid integer overflows in tcp_rcv_space_adjust()
	i2c: rcar: make sure clocks are on when doing clock calculation
	i2c: rcar: rework hw init
	i2c: rcar: remove unused IOERROR state
	i2c: rcar: remove spinlock
	i2c: rcar: refactor setup of a msg
	i2c: rcar: init new messages in irq
	i2c: rcar: don't issue stop when HW does it automatically
	i2c: rcar: check master irqs before slave irqs
	i2c: rcar: revoke START request early
	dmaengine: usb-dmac: fix endless loop in usb_dmac_chan_terminate_all()
	iio:kfifo_buf: check for uint overflow
	MIPS: ptrace: Fix PTRACE_PEEKUSR requests for 64-bit FGRs
	MIPS: prctl: Disallow FRE without FR with PR_SET_FP_MODE requests
	scsi: scsi_transport_srp: Fix shost to rport translation
	stm class: Use vmalloc for the master map
	hwtracing: stm: fix build error on some arches
	drm/i915: Disable LVDS on Radiant P845
	Kbuild: change CC_OPTIMIZE_FOR_SIZE definition
	fix io_destroy()/aio_complete() race
	mm: fix the NULL mapping case in __isolate_lru_page()
	sparc64: Fix build warnings with gcc 7.
	Linux 4.4.136

Change-Id: I3457f995cf22c65952271ecd517a46144ac4dc79
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
2018-06-06 18:53:06 +02:00
Will Deacon
55a0e02e85 arm64: lse: Add early clobbers to some input/output asm operands
commit 32c3fa7cdf0c4a3eb8405fc3e13398de019e828b upstream.

For LSE atomics that read and write a register operand, we need to
ensure that these operands are annotated as "early clobber" if the
register is written before all of the input operands have been consumed.
Failure to do so can result in the compiler allocating the same register
to both operands, leading to splats such as:

 Unable to handle kernel paging request at virtual address 11111122222221
 [...]
 x1 : 1111111122222222 x0 : 1111111122222221
 Process swapper/0 (pid: 1, stack limit = 0x000000008209f908)
 Call trace:
  test_atomic64+0x1360/0x155c

where x0 has been allocated as both the value to be stored and also the
atomic_t pointer.

This patch adds the missing clobbers.
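
As a hedged illustration of the constraint in question (a simplified
sketch, not the kernel's exact macro, and it assumes a toolchain that
accepts LSE mnemonics), the fetch-and style operation below overwrites
its %[i] operand with mvn before the base register of the memory
operand has been consumed, which is exactly why %[i] needs the
early-clobber modifier "+&r":

  /* mem &= mask, returning the previous memory value (illustrative only). */
  static inline long atomic64_fetch_and_sketch(long mask, long *counter)
  {
          asm volatile(
          "       mvn     %[i], %[i]\n"
          "       ldclral %[i], %[i], %[v]"
          /* %[i] is written by mvn while %[v]'s base register is still a
           * live input, so "+&r" stops the compiler from handing both
           * operands the same physical register. */
          : [i] "+&r" (mask), [v] "+Q" (*counter)
          :
          : "memory");

          return mask;
  }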

Cc: <stable@vger.kernel.org>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Reported-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-06-06 16:46:20 +02:00
Ard Biesheuvel
391ba6bf3e UPSTREAM: arm64: lse: deal with clobbered IP registers after branch via PLT
The LSE atomics implementation uses runtime patching to patch in calls
to out of line non-LSE atomics implementations on cores that lack hardware
support for LSE. To avoid paying the overhead cost of a function call even
if no call ends up being made, the bl instruction is kept invisible to the
compiler, and the out of line implementations preserve all registers, not
just the ones that they are required to preserve as per the AAPCS64.

However, commit fd045f6cd98e ("arm64: add support for module PLTs") added
support for routing branch instructions via veneers if the branch target
offset exceeds the range of the ordinary relative branch instructions.
Since this deals with jump and call instructions that are exposed to ELF
relocations, the PLT code uses x16 to hold the address of the branch target
when it performs an indirect branch-to-register, something which is
explicitly allowed by the AAPCS64 (and ordinary compiler generated code
does not expect register x16 or x17 to retain their values across a bl
instruction).

Since the lse runtime patched bl instructions don't adhere to the AAPCS64,
they don't deal with this clobbering of registers x16 and x17. So add them
to the clobber list of the asm() statements that perform the call
instructions, and drop x16 and x17 from the list of registers that are
callee saved in the out of line non-LSE implementations.

In addition, since we have given these functions two scratch registers,
they no longer need to stack/unstack temp registers.
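
A hedged sketch of the resulting convention (illustrative names, not
the kernel's code): the out-of-line helper may freely use x16/x17 and
is reached with a bl, so the caller's asm lists exactly x16, x17 and
the link register as clobbers; the same list is what keeps the call
safe if the branch is ever routed through a PLT veneer that scratches
x16/x17.

  /* Out-of-line ll/sc add; uses only w16/w17 as scratch (assumed layout). */
  asm(
  "       .global ll_sc_atomic_add_sketch\n"
  "ll_sc_atomic_add_sketch:\n"
  "1:     ldxr    w16, [x1]\n"
  "       add     w16, w16, w0\n"
  "       stxr    w17, w16, [x1]\n"
  "       cbnz    w17, 1b\n"
  "       ret\n");

  static inline void atomic_add_sketch(int i, int *counter)
  {
          register int  w0 asm("w0") = i;
          register int *x1 asm("x1") = counter;

          /* bl writes x30; the helper (or a PLT veneer) may write x16/x17. */
          asm volatile("bl      ll_sc_atomic_add_sketch"
                       : "+r" (w0), "+r" (x1), "+Q" (*counter)
                       :
                       : "x16", "x17", "x30", "memory");
  }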

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: factored clobber list into #define, updated Makefile comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 5be8b70af1ca78cefb8b756d157532360a5fd663)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ia44a54eba315a47a6b8aaa2259b444e0139162c0
2016-09-22 13:38:22 -07:00
Lorenzo Pieralisi
57a6566799 arm64: cmpxchg_dbl: fix return value type
The current arm64 __cmpxchg_double{_mb} implementations carry out the
compare exchange by first comparing the old values passed in to the
values read from the pointer provided and by stashing the cumulative
bitwise difference in a 64-bit register.

By comparing the register content against 0, it is possible to detect if
the values read differ from the old values passed in, so that the compare
exchange detects whether it has to bail out or carry on completing the
operation with the exchange.

Given the current implementation, to detect the cmpxchg operation
status, the __cmpxchg_double{_mb} functions should return the 64-bit
stashed bitwise difference so that the caller can detect cmpxchg failure
by comparing the return value against 0. The current implementation,
however, declares the return value as an int, so the 64-bit value
stashing the bitwise difference is truncated before being returned to
the __cmpxchg_double{_mb} callers; as a result, any bitwise difference
present in the top 32 bits goes undetected, triggering false positives
and subsequent kernel failures.

This patch fixes the issue by declaring the arm64 __cmpxchg_double{_mb}
return values as a long, so that the bitwise difference is
properly propagated on failure, restoring the expected behaviour.
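
A hedged illustration in plain C of the truncation described above (not
the arm64 asm itself): narrowing the combined 64-bit bitwise difference
to an int loses a mismatch that lives only in the upper 32 bits, so the
caller sees a spurious success.

  #include <stdint.h>
  #include <stdio.h>

  static int  diff_as_int(uint64_t a, uint64_t b)  { return (int)(a ^ b);  } /* truncates  */
  static long diff_as_long(uint64_t a, uint64_t b) { return (long)(a ^ b); } /* full width */

  int main(void)
  {
          uint64_t expected = 0x1111111100000000ULL;
          uint64_t observed = 0x2222222200000000ULL; /* differs only in bits 63:32 */

          printf("int  return: %s\n", diff_as_int(expected, observed)
                                      ? "mismatch" : "match (false positive)");
          printf("long return: %s\n", diff_as_long(expected, observed)
                                      ? "mismatch" : "match");
          return 0;
  }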

Fixes: e9a4b79565 ("arm64: cmpxchg_dbl: patch in lse instructions when supported by the CPU")
Cc: <stable@vger.kernel.org> # 4.3+
Cc: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-11-05 17:29:40 +00:00
Will Deacon
305d454aaa arm64: atomics: implement native {relaxed, acquire, release} atomics
Commit 654672d4ba ("locking/atomics: Add _{acquire|release|relaxed}()
variants of some atomic operations") introduced a relaxed atomic API to
Linux that maps nicely onto the arm64 memory model, including the new
ARMv8.1 atomic instructions.

This patch hooks up the API to our relaxed atomic instructions, rather
than have them all expand to the full-barrier variants as they do
currently.
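
For readers who know the C11 names, a hedged userspace analogue of the
three ordering strengths being wired up here (standard C11 atomics, not
the kernel API; with an ARMv8.1 toolchain the relaxed form can compile
down to ldadd rather than a full-barrier sequence):

  #include <stdatomic.h>
  #include <stdio.h>

  int main(void)
  {
          atomic_long counter = 0;

          atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed); /* no ordering */
          atomic_fetch_add_explicit(&counter, 1, memory_order_acquire); /* acquire     */
          atomic_fetch_add_explicit(&counter, 1, memory_order_release); /* release     */

          printf("%ld\n", atomic_load(&counter));
          return 0;
  }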

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12 17:36:58 +01:00
Will Deacon
484c96dbb2 arm64: lse: fix lse cmpxchg code indentation
For some reason, the ll/sc cmpxchg asm is all off to the left and
awkward to read in conjunction with the following (correctly indented)
LSE version.

This patch shifts the ll/sc code back to where it should be.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-29 18:32:09 +01:00
Will Deacon
db26217e6f arm64: atomic64_dec_if_positive: fix incorrect branch condition
If we attempt to atomic64_dec_if_positive on INT_MIN, we will underflow
and incorrectly decide that the original parameter was positive.

This patch fixes the broken condition code so that we handle this
corner case correctly.
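
A hedged illustration in plain C of the corner case (using the most
negative 64-bit value; not the arm64 asm itself): the decrement wraps
to the largest positive value, so a check that only looks at the sign
of the result wrongly concludes the original value was positive.

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          int64_t v   = INT64_MIN;
          int64_t dec = (int64_t)((uint64_t)v - 1); /* wraps to INT64_MAX */

          printf("v = %lld, v - 1 = %lld\n", (long long)v, (long long)dec);
          printf("sign-only check: original was %s\n",
                 dec >= 0 ? "positive (wrong)" : "not positive");
          return 0;
  }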

Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:54 +01:00
Will Deacon
6059a7b6e8 arm64: atomics: implement atomic{,64}_cmpxchg using cmpxchg
We don't need duplicate cmpxchg implementations, so use cmpxchg to
implement atomic{,64}_cmpxchg, like we do for xchg already.
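
A hedged sketch of the resulting definitions (assuming the kernel's
size-dispatching cmpxchg() is in scope; not guaranteed to be the exact
upstream lines): both widths simply forward to cmpxchg() on the counter
field, mirroring what xchg() already does.

  #define atomic_cmpxchg_sketch(v, old, new)   cmpxchg(&((v)->counter), (old), (new))
  #define atomic64_cmpxchg_sketch(v, old, new) cmpxchg(&((v)->counter), (old), (new))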

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:53 +01:00
Will Deacon
0bc671d3f4 arm64: cmpxchg: avoid "cc" clobber in ll/sc routines
We can perform the cmpxchg comparison using eor and cbnz which avoids
the "cc" clobber for the ll/sc case and consequently for the LSE case
where we may have to fall-back on the ll/sc code at runtime.
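
A hedged sketch of the pattern (simplified, not the kernel's exact
macro): the comparison is done with eor plus cbnz on a scratch register
instead of cmp plus b.ne, so the asm no longer has to list the
condition flags ("cc") as clobbered.

  static inline unsigned long cmpxchg_llsc_sketch(unsigned long *ptr,
                                                  unsigned long old,
                                                  unsigned long new)
  {
          unsigned long oldval, tmp;

          asm volatile(
          "1:     ldxr    %[oldval], %[v]\n"
          "       eor     %[tmp], %[oldval], %[old]\n"  /* non-zero on mismatch */
          "       cbnz    %[tmp], 2f\n"
          "       stxr    %w[tmp], %[new], %[v]\n"
          "       cbnz    %w[tmp], 1b\n"
          "2:"
          : [oldval] "=&r" (oldval), [tmp] "=&r" (tmp), [v] "+Q" (*ptr)
          : [old] "r" (old), [new] "r" (new)
          : "memory");

          return oldval;
  }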

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:52 +01:00
Will Deacon
e9a4b79565 arm64: cmpxchg_dbl: patch in lse instructions when supported by the CPU
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.

This patch introduces runtime patching of our cmpxchg_double primitives
so that the LSE casp instruction is used instead.
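
A hedged sketch of the LSE side only (illustrative, with the
runtime-patching machinery omitted): casp needs its operands in
consecutive even/odd register pairs, hence the explicit register
variables, and the trailing eor/orr accumulate the bitwise difference
that the caller compares against 0.

  static inline long cmpxchg_double_lse_sketch(unsigned long old1, unsigned long old2,
                                               unsigned long new1, unsigned long new2,
                                               void *ptr)
  {
          register unsigned long x0 asm("x0") = old1;
          register unsigned long x1 asm("x1") = old2;
          register unsigned long x2 asm("x2") = new1;
          register unsigned long x3 asm("x3") = new2;
          register void *x4 asm("x4") = ptr;

          asm volatile(
          "       caspal  x0, x1, x2, x3, [x4]\n"
          "       eor     x0, x0, %[old1]\n"  /* difference in the first word  */
          "       eor     x1, x1, %[old2]\n"  /* difference in the second word */
          "       orr     x0, x0, x1"
          : "+&r" (x0), "+&r" (x1)
          : "r" (x2), "r" (x3), "r" (x4), [old1] "r" (old1), [old2] "r" (old2)
          : "memory");

          return x0;  /* 0 on success, non-zero bitwise difference on failure */
  }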

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:52 +01:00
Will Deacon
c342f78217 arm64: cmpxchg: patch in lse instructions when supported by the CPU
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.

This patch introduces runtime patching of our cmpxchg primitives so that
the LSE cas instruction is used instead.
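
A hedged sketch of the LSE path only (the patched-in alternative to the
ll/sc loop; not the kernel's exact macro): a single cas-family
instruction performs the whole compare-and-swap and hands back the
value it observed in memory.

  static inline unsigned long cmpxchg_lse_sketch(unsigned long *ptr,
                                                 unsigned long old,
                                                 unsigned long new)
  {
          asm volatile(
          "       casal   %[old], %[new], %[v]"
          : [old] "+r" (old), [v] "+Q" (*ptr)
          : [new] "r" (new)
          : "memory");

          return old;  /* equals the caller's 'old' iff the swap happened */
  }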

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:51 +01:00
Will Deacon
c09d6a04d1 arm64: atomics: patch in lse instructions when supported by the CPU
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.

This patch introduces runtime patching of atomic_t and atomic64_t
routines so that the call-site for the out-of-line ll/sc sequences is
patched with an LSE atomic instruction when we detect that
the CPU supports it.

If binutils is not recent enough to assemble the LSE instructions, then
the ll/sc sequences are inlined as though CONFIG_ARM64_LSE_ATOMICS=n.
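
As a hedged, compile-and-run stand-in for the idea (the kernel patches
the instruction in place at boot; the userspace HWCAP check and runtime
branch below are only an illustration, and assembling the stadd needs
an LSE-capable toolchain):

  #include <sys/auxv.h>
  #include <asm/hwcap.h>

  static inline void atomic_add_lse_or_llsc_sketch(int i, int *counter)
  {
          if (getauxval(AT_HWCAP) & HWCAP_ATOMICS) {
                  /* LSE path: one instruction, no retry loop. */
                  asm volatile("stadd %w[i], %[v]"
                               : [v] "+Q" (*counter)
                               : [i] "r" (i)
                               : "memory");
          } else {
                  /* ll/sc fallback, like the out-of-line sequences. */
                  int tmp, res;

                  asm volatile(
                  "1:     ldxr    %w[tmp], %[v]\n"
                  "       add     %w[tmp], %w[tmp], %w[i]\n"
                  "       stxr    %w[res], %w[tmp], %[v]\n"
                  "       cbnz    %w[res], 1b"
                  : [tmp] "=&r" (tmp), [res] "=&r" (res), [v] "+Q" (*counter)
                  : [i] "r" (i)
                  : "memory");
          }
  }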

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:50 +01:00
Will Deacon
c0385b24af arm64: introduce CONFIG_ARM64_LSE_ATOMICS as fallback to ll/sc atomics
In order to patch in the new atomic instructions at runtime, we need to
generate wrappers around the out-of-line exclusive load/store atomics.

This patch adds a new Kconfig option, CONFIG_ARM64_LSE_ATOMICS, which
causes our atomic functions to branch to the out-of-line ll/sc
implementations. To avoid the register spill overhead of the PCS, the
out-of-line functions are compiled with specific compiler flags to
force out-of-line save/restore of any registers that are usually
caller-saved.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-27 15:28:50 +01:00