Commit graph

566262 commits

Arve Hjønnevåg
12dd1aad02 ANDROID: binder: Clear binder and cookie when setting handle in flat binder struct
Prevents leaking pointers between processes.
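
A minimal sketch of the kind of fix described (illustrative, not the
verbatim patch): when binder_transaction() translates a local
BINDER_TYPE_BINDER object into a BINDER_TYPE_HANDLE for the receiving
process, the kernel-pointer fields are zeroed so they are never copied
to the remote process:

    struct flat_binder_object *fp;  /* object being translated */

    fp->type = BINDER_TYPE_HANDLE;  /* or BINDER_TYPE_WEAK_HANDLE */
    fp->handle = ref->desc;         /* handle valid in the target */
    fp->binder = 0;                 /* clear local kernel pointer */
    fp->cookie = 0;                 /* clear local cookie */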

BUG: 30768347
Change-Id: Id898076926f658a1b8b27a3ccb848756b36de4ca
Signed-off-by: Arve Hjønnevåg <arve@android.com>
2016-10-12 17:34:22 +05:30
Arve Hjønnevåg
4a9040398e ANDROID: binder: Add strong ref checks
Prevent using a binder_ref with only weak references where a strong
reference is required.

BUG: 30445380
Change-Id: I66c15b066808f28bd27bfe50fd0e03ff45a09fca
Signed-off-by: Arve Hjønnevåg <arve@android.com>
2016-10-12 17:34:22 +05:30
EunTaik Lee
255a8affe5 UPSTREAM: staging/android/ion : fix a race condition in the ion driver
There is a use-after-free problem in the ion driver.
This is caused by a race condition in the ion_ioctl()
function.

A handle has a ref count of 1, and two tasks on different
CPUs call ION_IOC_FREE simultaneously.

cpu 0                                   cpu 1
-------------------------------------------------------
ion_handle_get_by_id()
(ref == 2)
                            ion_handle_get_by_id()
                            (ref == 3)

ion_free()
(ref == 2)

ion_handle_put()
(ref == 1)

                            ion_free()
                            (ref == 0 so ion_handle_destroy() is
                            called
                            and the handle is freed.)

                            ion_handle_put() is called and it
                            decrements what is now the slub's
                            next-free pointer.

The problem is detected as an unaligned access in the
spinlock functions, since they use a load-exclusive
instruction. In some cases it corrupts the slub's
free pointer, which causes a misaligned access to the
next free pointer (kmalloc returns a pointer like
ffffc0745b4580aa), and lots of other hard-to-debug
problems.

This symptom occurs because the first member of the
ion_handle structure is the reference count, and the
ion driver decrements that count after the handle has
been freed.

To fix this problem, the client->lock mutex is extended
to protect all the code that uses the handle.
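
A condensed sketch of the resulting ION_IOC_FREE path (the _nolock
helper names follow the upstream fix):

    mutex_lock(&client->lock);
    handle = ion_handle_get_by_id_nolock(client, data.handle.handle);
    if (IS_ERR(handle)) {
            mutex_unlock(&client->lock);
            return PTR_ERR(handle);
    }
    ion_free_nolock(client, handle);        /* drop the user's ref */
    ion_handle_put_nolock(handle);          /* drop the lookup ref */
    mutex_unlock(&client->lock);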

Signed-off-by: Eun Taik Lee <eun.taik.lee@samsung.com>
Reviewed-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 9590232bb4f4cc824f3425a6e1349afbe6d6d2b7)
bug: 31568617
Change-Id: I4ea2be0cad3305c4e196126a02e2ab7108ef0976
2016-10-12 17:34:22 +05:30
Sami Tolvanen
e41543b2d1 ANDROID: android-base: CONFIG_HARDENED_USERCOPY=y
Bug: 31374226
Change-Id: I977e76395017d8d718ea634421b3635023934ef9
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Jiri Olsa
816619eef3 UPSTREAM: fs/proc/kcore.c: Add bounce buffer for ktext data
We hit hardened usercopy feature check for kernel text access by reading
kcore file:

  usercopy: kernel memory exposure attempt detected from ffffffff8179a01f (<kernel text>) (4065 bytes)
  kernel BUG at mm/usercopy.c:75!

Bypass this check for kcore by adding a bounce buffer for the ktext data.
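
The shape of the fix in read_kcore() (condensed sketch; buf is a
preallocated kernel bounce buffer of at least tsz bytes):

    if (kern_addr_valid(start)) {
            /* ktext: stage through the bounce buffer so the
             * copy_to_user() source is never a kernel-text address */
            memcpy(buf, (char *)start, tsz);
            if (copy_to_user(buffer, buf, tsz))
                    return -EFAULT;
    }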

Reported-by: Steve Best <sbest@redhat.com>
Fixes: f5509cc18daa ("mm: Hardened usercopy")
Suggested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Bug: 31374226
Change-Id: Ic93e6041b67d804a994518bf4950811f828b406e
(cherry picked from commit df04abfd181acc276ba6762c8206891ae10ae00d)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Jiri Olsa
ac8c3e0150 UPSTREAM: fs/proc/kcore.c: Make bounce buffer global for read
The next patch adds a bounce buffer for the ktext area, so it's
convenient to have a single bounce buffer for both the
vmalloc/module and ktext cases.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Bug: 31374226
Change-Id: I8f517354e6d12aed75ed4ae6c0a6adef0a1e61da
(cherry picked from commit f5beeb1851ea6f8cfcf2657f26cb24c0582b4945)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Laura Abbott
7678802f66 BACKPORT: arm64: Correctly bounds check virt_addr_valid
virt_addr_valid is supposed to return true if and only if virt_to_page
returns a valid page structure. The current macro does math on whatever
address is given and passes that to pfn_valid to verify. vmalloc and
module addresses can generate a pfn that happens to be
valid. Fix this by only performing the pfn_valid check on addresses that
have the potential to be valid.
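
The resulting macro has roughly this shape (sketch; helper names follow
the upstream patch):

    #define _virt_addr_is_linear(kaddr) (((u64)(kaddr)) >= PAGE_OFFSET)
    #define _virt_addr_valid(kaddr)     pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
    #define virt_addr_valid(kaddr)      \
            (_virt_addr_is_linear(kaddr) && _virt_addr_valid(kaddr))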

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Bug: 31374226
Change-Id: I75cbeb3edb059f19af992b7f5d0baa283f95991b
(cherry picked from commit ca219452c6b8a6cd1369b6a78b1cf069d0386865)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Mohan Srinivasan
f79fcde0ef Fix a build breakage in IO latency hist code.
Fix a build breakage where MMC is enabled, but BLOCK is not.

Change-Id: I0eb422d12264f0371f3368ae7c37342ef9efabaa
Signed-off-by: Mohan Srinivasan <srmohan@google.com>
2016-10-12 17:34:22 +05:30
Ard Biesheuvel
fb5727b2d4 UPSTREAM: efi: include asm/early_ioremap.h not asm/efi.h to get early_memremap
The code in efi.c uses early_memremap(), but relies on a transitive
include rather than including asm/early_ioremap.h directly, since
this header did not exist on ia64.

Commit f7d924894265 ("arm64/efi: refactor EFI init and runtime code
for reuse by 32-bit ARM") attempted to work around this by including
asm/efi.h, which transitively includes asm/early_ioremap.h on most
architectures. However, since asm/efi.h does not exist on ia64 either,
this is not much of an improvement.

Now that we have created an asm/early_ioremap.h for ia64, we can just
include it directly.

Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>

Change-Id: Ifa3e69e0b4078bac1e1d29bfe56861eb394e865b
(cherry picked from commit 0f7f2f0c0fcbe5e2bcba707a628ebaedfe2be4b4)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Ard Biesheuvel
32bfaab84e UPSTREAM: ia64: split off early_ioremap() declarations into asm/early_ioremap.h
Unlike x86, arm64 and ARM, ia64 does not declare its implementations
of early_ioremap/early_iounmap/early_memremap/early_memunmap in a header
file called <asm/early_ioremap.h>.

This complicates the use of these functions in generic code, since the
header cannot be included directly, and we have to rely on transitive
includes, which is fragile.

So create a <asm/early_ioremap.h> for ia64, and move the existing
definitions into it.
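
The split-out header is essentially the existing ia64 declarations moved
over (sketch):

    #ifndef _ASM_IA64_EARLY_IOREMAP_H
    #define _ASM_IA64_EARLY_IOREMAP_H

    extern void __iomem *early_ioremap(unsigned long phys_addr,
                                       unsigned long size);
    #define early_memremap(phys_addr, size)  early_ioremap(phys_addr, size)

    extern void early_iounmap(void __iomem *addr, unsigned long size);
    #define early_memunmap(addr, size)       early_iounmap(addr, size)

    #endif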

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>

Change-Id: I31eb55a9d57596faa40aec64bd26ce3ec21b0b4d
(cherry picked from commit 809267708557ed5575831282f719ca644698084b)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Catalin Marinas
125eba9356 FROMLIST: arm64: Enable CONFIG_ARM64_SW_TTBR0_PAN
This patch adds the Kconfig option to enable support for TTBR0 PAN
emulation. The option is default off because of a slight performance hit
when enabled, caused by the additional TTBR0_EL1 switching during user
access operations or exception entry/exit code.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Change-Id: Id00a8ad4169d6eb6176c468d953436eb4ae887ae
(cherry picked from commit 6a2d7bad43474c48b68394d455b84a16b7d7dc3f)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Catalin Marinas
b97ad92d12 FROMLIST: arm64: xen: Enable user access before a privcmd hvc call
Privcmd calls are issued by userspace. The kernel needs to enable
access to TTBR0_EL1 as the hypervisor would issue stage 1 translations
to user memory via AT instructions. Since AT instructions are not
affected by the PAN bit (ARMv8.1), we only need the explicit
uaccess_enable/disable if the TTBR0 PAN option is enabled.

Reviewed-by: Julien Grall <julien.grall@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Change-Id: I927f14076ba94c83e609b19f46dd373287e11fc4
(cherry picked from commit 8cc1f33d2c9f206b6505bedba41aed2b33c203c0)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Catalin Marinas
0a75f2be2d FROMLIST: arm64: Handle faults caused by inadvertent user access with PAN enabled
When TTBR0_EL1 is set to the reserved page, an erroneous kernel access
to user space would generate a translation fault. This patch adds the
checks for the software-set PSR_PAN_BIT to emulate a permission fault
and report it accordingly.

This patch also updates the description of the synchronous external
aborts on translation table walks.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Change-Id: I623113fc8bf6d5f023aeec7a0640b62a25ef8420
(cherry picked from commit be3db9340c8011d22f06715339b66bcbbd4893bd)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>

[Conflict fixes in lsk-v4.4-android topic branch]
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-10-12 17:34:22 +05:30
Catalin Marinas
914f938920 FROMLIST: arm64: Disable TTBR0_EL1 during normal kernel execution
When the TTBR0 PAN feature is enabled, the kernel entry points need to
disable access to TTBR0_EL1. The PAN status of the interrupted context
is stored as part of the saved pstate, reusing the PSR_PAN_BIT (22).
Restoring access to TTBR0_EL1 is done on exception return if returning
to user or returning to a context where PAN was disabled.

Context switching via switch_mm() must defer the update of TTBR0_EL1
until a return to user or an explicit uaccess_enable() call.

Special care needs to be taken for two cases where TTBR0_EL1 is set
outside the normal kernel context switch operation: EFI run-time
services (via efi_set_pgd) and CPU suspend (via cpu_(un)install_idmap).
Code has been added to avoid deferred TTBR0_EL1 switching as in
switch_mm() and restore the reserved TTBR0_EL1 when uninstalling the
special TTBR0_EL1.

This patch also removes a stale comment on the switch_mm() function.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Change-Id: Id1198cf1cde022fad10a94f95d698fae91d742aa
(cherry picked from commit d26cfd64c973b31f73091c882e07350e14fdd6c9)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Catalin Marinas
fb82efd29d FROMLIST: arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1
This patch adds the uaccess macros/functions to disable access to user
space by setting TTBR0_EL1 to a reserved zeroed page. Since the value
written to TTBR0_EL1 must be a physical address, for simplicity this
patch introduces a reserved_ttbr0 page at a constant offset from
swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value
adjusted by the reserved_ttbr0 offset.

Enabling access to user is done by restoring TTBR0_EL1 with the value
from the struct thread_info ttbr0 variable. Interrupts must be disabled
during the uaccess_ttbr0_enable code to ensure the atomicity of the
thread_info.ttbr0 read and TTBR0_EL1 write. This patch also moves the
get_thread_info asm macro from entry.S to assembler.h for reuse in the
uaccess_ttbr0_* macros.
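
The C side of the mechanism looks roughly like this (sketch; the
RESERVED_TTBR0_SIZE offset name is illustrative, and the sign of the
adjustment depends on where reserved_ttbr0 is placed relative to
swapper_pg_dir):

    static inline void __uaccess_ttbr0_disable(void)
    {
            unsigned long ttbr;

            /* point TTBR0_EL1 at the reserved zeroed page, found at a
             * constant offset from the ttbr1_el1 (swapper_pg_dir) value */
            ttbr = read_sysreg(ttbr1_el1) - RESERVED_TTBR0_SIZE;
            write_sysreg(ttbr, ttbr0_el1);
            isb();
    }

    static inline void __uaccess_ttbr0_enable(void)
    {
            unsigned long flags;

            /* IRQs off so the thread_info.ttbr0 read and the register
             * write are atomic with respect to this sequence */
            local_irq_save(flags);
            write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
            isb();
            local_irq_restore(flags);
    }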

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Change-Id: Idf09a870b8612dce23215bce90d88781f0c0c3aa
(cherry picked from commit 940d37234182d2675ab8ab46084840212d735018)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Catalin Marinas
bb8653ae4b FROMLIST: arm64: Factor out TTBR0_EL1 post-update workaround into a specific asm macro
This patch takes the errata workaround code out of cpu_do_switch_mm into
a dedicated post_ttbr0_update_workaround macro which will be reused in a
subsequent patch.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Change-Id: I69f94e4c41046bd52ca9340b72d97bfcf955b586
(cherry picked from commit 4398e6a1644373a4c2f535f4153c8378d0914630)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Catalin Marinas
8ea0a504d9 FROMLIST: arm64: Factor out PAN enabling/disabling into separate uaccess_* macros
This patch moves the directly coded alternatives for turning PAN on/off
into separate uaccess_{enable,disable} macros or functions. The asm
macros take a few arguments which will be used in subsequent patches.

Note that any (unlikely) access that the compiler might generate between
uaccess_enable() and uaccess_disable(), other than those explicitly
specified by the user access code, will not be protected by PAN.
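
In C, the PAN-only variants reduce to alternative-patched pstate flips
(sketch close to the resulting uaccess.h code):

    static inline void uaccess_disable(void)
    {
            /* set PSTATE.PAN: forbid kernel access to user memory */
            asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1),
                            ARM64_HAS_PAN, CONFIG_ARM64_PAN));
    }

    static inline void uaccess_enable(void)
    {
            /* clear PSTATE.PAN around the user access sequence */
            asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0),
                            ARM64_HAS_PAN, CONFIG_ARM64_PAN));
    }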

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Change-Id: Ic3fddd706400c8798f57456c56361d84d234f6ef
(cherry picked from commit a4820644c627b82cbc865f2425bb788c94743b16)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Laura Abbott
f80392d0ae UPSTREAM: arm64: Handle el1 synchronous instruction aborts cleanly
Executing from a non-executable area gives an ugly message:

lkdtm: Performing direct entry EXEC_RODATA
lkdtm: attempting ok execution at ffff0000084c0e08
lkdtm: attempting bad execution at ffff000008880700
Bad mode in Synchronous Abort handler detected on CPU2, code 0x8400000e -- IABT (current EL)
CPU: 2 PID: 998 Comm: sh Not tainted 4.7.0-rc2+ #13
Hardware name: linux,dummy-virt (DT)
task: ffff800077e35780 ti: ffff800077970000 task.ti: ffff800077970000
PC is at lkdtm_rodata_do_nothing+0x0/0x8
LR is at execute_location+0x74/0x88

The 'IABT (current EL)' indicates the error but it's a bit cryptic
without knowledge of the ARM ARM. There is also no indication of the
specific address which triggered the fault. The increase in kernel
page permissions makes hitting this case more likely as well.
Handling the case in the vectors gives a much more familiar looking
error message:

lkdtm: Performing direct entry EXEC_RODATA
lkdtm: attempting ok execution at ffff0000084c0840
lkdtm: attempting bad execution at ffff000008880680
Unable to handle kernel paging request at virtual address ffff000008880680
pgd = ffff8000089b2000
[ffff000008880680] *pgd=00000000489b4003, *pud=0000000048904003, *pmd=0000000000000000
Internal error: Oops: 8400000e [#1] PREEMPT SMP
Modules linked in:
CPU: 1 PID: 997 Comm: sh Not tainted 4.7.0-rc1+ #24
Hardware name: linux,dummy-virt (DT)
task: ffff800077f9f080 ti: ffff800008a1c000 task.ti: ffff800008a1c000
PC is at lkdtm_rodata_do_nothing+0x0/0x8
LR is at execute_location+0x74/0x88

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Change-Id: Ifba74589ba2cf05b28335d4fd3e3140ef73668db
(cherry picked from commit 9adeb8e72dbfe976709df01e259ed556ee60e779)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>

[Conflict fixes in lsk-v4.4-android topic branch]
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
2016-10-12 17:34:22 +05:30
Andre Przywara
96836c618a UPSTREAM: arm64: include alternative handling in dcache_by_line_op
The newly introduced dcache_by_line_op macro is used on at least
one occasion at the moment to issue a "dc cvau" instruction,
which is affected by ARM errata 819472, 826319, 827319 and 824069.
Change the macro to allow for alternative patching in there to
protect affected Cortex-A53 cores.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[catalin.marinas@arm.com: indentation fixups]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Change-Id: I450594dc311b09b6b832b707a9abb357608cc6e4
(cherry picked from commit 823066d9edcdfe4cedb06216c2b1f91efaf68a87)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Andre Przywara
bef1ce21fc UPSTREAM: arm64: fix "dc cvau" cache operation on errata-affected core
The ARM errata 819472, 826319, 827319 and 824069 for affected
Cortex-A53 cores require promoting "dc cvau" instructions to
"dc civac" as well.
Make the use of the instruction in __flush_cache_user_range
subject to the same alternative patching.
For that we introduce an assembly macro which deals with
alternatives while still tagging the instructions as USER.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Change-Id: If5e7933ba32331b2aa28fc5d9e019649452f0f6c
(cherry picked from commit 290622efc76ece22ef76a30bf117755891ab27f6)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Andre Przywara
395caccf0e UPSTREAM: Revert "arm64: alternatives: add enable parameter to conditional asm macros"
Commit 77ee306c0a ("arm64: alternatives: add enable parameter to
conditional asm macros") extended the alternative assembly macros.
Unfortunately this does not really work as one would expect, as the
enable parameter in fact correctly protects the alternative section
magic, but not the actual code sequences.
This results in having both the original instruction(s) _and_ the
alternative ones, if enable is false.
Since there are no users of these macros anyway, just revert it.

This reverts commit 77ee306c0a.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Change-Id: I608104891335dfa2dacdb364754ae2658088ddf2
(cherry picked from commit b82bfa4793cd0f8fde49b85e0ad66906682e7447)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Geoff Levand
5f657f2582 UPSTREAM: arm64: Add new asm macro copy_page
Kexec and hibernate need to copy pages of memory, but may not have all
of the kernel mapped, and are unable to call copy_page().

Add a simplistic copy_page() macro that can be inlined in these
situations. lib/copy_page.S provides a bigger, better version, but
it uses more registers.

Signed-off-by: Geoff Levand <geoff@infradead.org>
[Changed asm label to 9998, added commit message]
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Change-Id: If23a454e211b1f57f8ba1a2a00b44dabf4b82932
(cherry picked from commit 5003dbde45961dd7ab3d8a09ab9ad8bcb604db40)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Mark Rutland
7668fe330d UPSTREAM: arm64: kill ESR_LNX_EXEC
Currently we treat ESR_EL1 bit 24 as software-defined for distinguishing
instruction aborts from data aborts, but this bit is architecturally
RES0 for instruction aborts, and could be allocated for an arbitrary
purpose in future. Additionally, we hard-code the value in entry.S
without the mnemonic, making the code difficult to understand.

Instead, remove ESR_LNX_EXEC, and distinguish aborts based on the esr,
which we already pass to the sole user of ESR_LNX_EXEC. A new helper,
is_el0_instruction_abort() is added to make the logic clear. Any
instruction aborts taken from EL1 will already have been handled by
bad_mode, so we need not handle that case in the helper.
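
The helper is a one-liner over the exception class (using the
ESR_ELx_EC() accessor from the commit below):

    static inline bool is_el0_instruction_abort(unsigned int esr)
    {
            return ESR_ELx_EC(esr) == ESR_ELx_EC_IABT_LOW;
    }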

For consistency, the existing permission_fault helper is renamed to
is_permission_fault, and the return type is changed to bool. There
should be no functional changes as the return value was a boolean
expression, and the result is only used in another boolean expression.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Dave P Martin <dave.martin@arm.com>
Cc: Huang Shijie <shijie.huang@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Change-Id: Iaf66fa5f3b13cf985b11a3b0a40c4333fe9ef833
(cherry picked from commit 541ec870ef31433018d245614254bd9d810a9ac3)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Mark Rutland
bff5974694 UPSTREAM: arm64: add macro to extract ESR_ELx.EC
Several places open-code extraction of the EC field from an ESR_ELx
value, in subtly different ways. This is unfortunate duplication and
variation, and the precise logic used to extract the field is a
distraction.

This patch adds a new macro, ESR_ELx_EC(), to extract the EC field from
an ESR_ELx value in a consistent fashion.

Existing open-coded extractions in core arm64 code are moved over to the
new helper. KVM code is left as-is for the moment.
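
The macro itself just isolates bits [31:26] of the ESR_ELx value:

    #define ESR_ELx_EC_SHIFT    (26)
    #define ESR_ELx_EC_MASK     (UL(0x3F) << ESR_ELx_EC_SHIFT)
    #define ESR_ELx_EC(esr)     (((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)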

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Huang Shijie <shijie.huang@arm.com>
Cc: Dave P Martin <dave.martin@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Change-Id: Ib634a4795277d243fce5dd30b139e2ec1465bee9
(cherry picked from commit 275f344bec51e9100bae81f3cc8c6940bbfb24c0)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Mark Rutland
6763781e70 UPSTREAM: arm64: mm: mark fault_info table const
Unlike the debug_fault_info table, we never intentionally alter the
fault_info table at runtime, and all derived pointers are treated as
const currently.

Make the table const so that it can be placed in .rodata and protected
from unintentional writes, as we do for the syscall tables.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Change-Id: I3fb0bb55427835c165cc377d8dc2a3fa9e6e950d
(cherry picked from commit bbb1681ee3653bdcfc6a4ba31902738118311fd4)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Mark Rutland
f73825a2c4 UPSTREAM: arm64: fix dump_instr when PAN and UAO are in use
If the kernel is set to show unhandled signals, and a user task does not
handle a SIGILL as a result of an instruction abort, we will attempt to
log the offending instruction with dump_instr before killing the task.

We use dump_instr to log the encoding of the offending userspace
instruction. However, dump_instr is also used to dump instructions from
kernel space, and internally always switches to KERNEL_DS before dumping
the instruction with get_user. When both PAN and UAO are in use, reading
a user instruction via get_user while in KERNEL_DS will result in a
permission fault, which leads to an Oops.

As we have regs corresponding to the context of the original instruction
abort, we can inspect this and only flip to KERNEL_DS if the original
abort was taken from the kernel, avoiding this issue. At the same time,
remove the redundant (and incorrect) comments regarding the order
dump_mem and dump_instr are called in.
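
Condensed, the fix makes the KERNEL_DS switch conditional on where the
abort came from (sketch):

    static void dump_instr(const char *lvl, struct pt_regs *regs)
    {
            if (!user_mode(regs)) {
                    mm_segment_t fs = get_fs();

                    set_fs(KERNEL_DS);
                    __dump_instr(lvl, regs);
                    set_fs(fs);
            } else {
                    __dump_instr(lvl, regs);
            }
    }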

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: <stable@vger.kernel.org> #4.6+
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Fixes: 57f4959bad0a154a ("arm64: kernel: Add support for User Access Override")
Signed-off-by: Will Deacon <will.deacon@arm.com>

Change-Id: I54c00f3598d227a7e2767b357cb453075dcce7bd
(cherry picked from commit c5cea06be060f38e5400d796e61cfc8c36e52924)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Geoff Levand
e5adeac6e7 BACKPORT: arm64: Fold proc-macros.S into assembler.h
To allow the assembler macros defined in arch/arm64/mm/proc-macros.S to
be used outside the mm code move the contents of proc-macros.S to
asm/assembler.h.  Also, delete proc-macros.S, and fix up all references
to proc-macros.S.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
[rebased, included dcache_by_line_op]
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Change-Id: I09e694442ffd25dcac864216d0369c9727ad0090
(cherry picked from commit 7b7293ae3dbd0a1965bf310b77fed5f9bb37bb93)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Ard Biesheuvel
49e73440b2 UPSTREAM: arm64: choose memstart_addr based on minimum sparsemem section alignment
This redefines ARM64_MEMSTART_ALIGN in terms of the minimal alignment
required by sparsemem vmemmap. This comes down to using 1 GB for all
translation granules if CONFIG_SPARSEMEM_VMEMMAP is enabled.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Change-Id: I05b8bc6ab24f677f263b09d7c31fcce4f21269b1
(cherry picked from commit 06e9bf2fd9b372bc1c757995b6ca1cfab0720acb)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Ard Biesheuvel
87b1b0b0be UPSTREAM: arm64/mm: ensure memstart_addr remains sufficiently aligned
After choosing memstart_addr to be the highest multiple of
ARM64_MEMSTART_ALIGN less than or equal to the first usable physical memory
address, we clip the memblocks to the maximum size of the linear region.
Since the kernel may be high up in memory, we take care not to clip the
kernel itself, which means we have to clip some memory from the bottom if
this occurs, to ensure that the distance between the first and the last
usable physical memory address can be covered by the linear region.

However, we fail to update memstart_addr if this clipping from the bottom
occurs, which means that we may still end up with virtual addresses that
wrap into the userland range. So increment memstart_addr as appropriate to
prevent this from happening.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Change-Id: I72306207cc46a30b780f5e00b9ef23aa8409867e
(cherry picked from commit 2958987f5da2ebcf6a237c5f154d7e3340e60945)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Ard Biesheuvel
91ce39653a UPSTREAM: arm64/kernel: fix incorrect EL0 check in inv_entry macro
The implementation of macro inv_entry refers to its 'el' argument without
the required leading backslash, which results in an undefined symbol
'el' to be passed into the kernel_entry macro rather than the index of
the exception level as intended.

This undefined symbol strangely enough does not result in build failures,
although it is visible in vmlinux:

     $ nm -n vmlinux |head
                      U el
     0000000000000000 A _kernel_flags_le_hi32
     0000000000000000 A _kernel_offset_le_hi32
     0000000000000000 A _kernel_size_le_hi32
     000000000000000a A _kernel_flags_le_lo32
     .....

However, it does result in incorrect code being generated for invalid
exceptions taken from EL0, since the argument check in kernel_entry
assumes EL1 if its argument does not equal '0'.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Change-Id: I406c1207682a4dff3054a019c26fdf1310b08ed1
(cherry picked from commit b660950c60a7278f9d8deb7c32a162031207c758)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Mark Rutland
4d8f55cd93 UPSTREAM: arm64: Add macros to read/write system registers
Rather than crafting custom macros for reading/writing each system
register, provide generic accessors, read_sysreg and write_sysreg, for
this purpose.
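
The accessors wrap the mrs/msr instructions so any named register can be
used (sketch close to the upstream macros):

    #define read_sysreg(r) ({                                        \
            u64 __val;                                               \
            asm volatile("mrs %0, " __stringify(r) : "=r" (__val));  \
            __val;                                                   \
    })

    #define write_sysreg(v, r) do {                                  \
            u64 __val = (u64)v;                                      \
            asm volatile("msr " __stringify(r) ", %0"                \
                         : : "r" (__val));                           \
    } while (0)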

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Change-Id: I1d6cf948bc6660dfd096ff5a18eba682941098c1
(cherry picked from commit 3600c2fdc09a43a30909743569e35a29121602ed)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Ard Biesheuvel
b5f3bc58bd UPSTREAM: arm64/efi: refactor EFI init and runtime code for reuse by 32-bit ARM
This refactors the EFI init and runtime code that will be shared
between arm64 and ARM so that it can be built for both archs.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Change-Id: Ieee70bbe117170d2054a9c82c4f1a8143b7e302b
(cherry picked from commit f7d924894265794f447ea799dd853400749b5a22)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Ard Biesheuvel
103fab35c1 UPSTREAM: arm64/efi: split off EFI init and runtime code for reuse by 32-bit ARM
This splits off the early EFI init and runtime code that
- discovers the EFI params and the memory map from the FDT, and installs
  the memblocks and config tables.
- prepares and installs the EFI page tables so that UEFI Runtime Services
  can be invoked at the virtual address installed by the stub.

This will allow it to be reused for 32-bit ARM.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Change-Id: I143e4b38a5426f70027eff6cc5f732ac370ae69d
(cherry picked from commit e5bc22a42e4d46cc203fdfb6d2c76202b08666a0)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Ard Biesheuvel
2c2cc72a6e UPSTREAM: arm64/efi: mark UEFI reserved regions as MEMBLOCK_NOMAP
Change the EFI memory reservation logic to use memblock_mark_nomap()
rather than memblock_reserve() to mark UEFI reserved regions as
occupied. In addition to reserving them against allocations done by
memblock, this will also prevent them from being covered by the linear
mapping.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Change-Id: Ia3ce78f40f8d41a9afdd42238fe9cbfd81bbff08
(cherry picked from commit 4dffbfc48d65e5d8157a634fd670065d237a9377)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Ard Biesheuvel
1c4a6945fd BACKPORT: arm64: only consider memblocks with NOMAP cleared for linear mapping
Take the new memblock attribute MEMBLOCK_NOMAP into account when
deciding whether a certain region is or should be covered by the
kernel direct mapping.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Change-Id: Id7346a09bb3aee5e9a5ef8812251f80cf8265532
(cherry picked from commit 68709f45385aeddb0ca96a060c0c8259944f321b)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Ard Biesheuvel
a400076119 UPSTREAM: mm/memblock: add MEMBLOCK_NOMAP attribute to memblock memory table
This introduces the MEMBLOCK_NOMAP attribute and the required plumbing
to make it usable as an indicator that some parts of normal memory
should not be covered by the kernel direct mapping. It is up to the
arch to actually honor the attribute when laying out this mapping,
but the memblock code itself is modified to disregard these regions
for allocations and other general use.
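
The plumbing is a new region flag plus mark/query helpers (sketch; names
follow the upstream patch):

    /* in the memblock flags enum */
    MEMBLOCK_NOMAP = 0x4,   /* don't add to the kernel direct mapping */

    int memblock_mark_nomap(phys_addr_t base, phys_addr_t size)
    {
            return memblock_setclr_flag(base, size, 1, MEMBLOCK_NOMAP);
    }

    static inline bool memblock_is_nomap(struct memblock_region *m)
    {
            return m->flags & MEMBLOCK_NOMAP;
    }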

Cc: linux-mm@kvack.org
Cc: Alexander Kuleshov <kuleshovmail@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Change-Id: I55cd3abdf514ac54c071fa0037d8dac73bda798d
(cherry picked from commit bf3d3cc580f9960883ebf9ea05868f336d9491c2)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Badhri Jagan Sridharan
5834d204cb ANDROID: dm: android-verity: Remove fec_header location constraint
This CL removes the requirement that the fec_header be located right
after the ECC data.

(Cherry-picked from https://android-review.googlesource.com/#/c/280401)

Bug: 28865197
Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
Change-Id: Ie04c8cf2dd755f54d02dbdc4e734a13d6f6507b5
2016-10-12 17:34:22 +05:30
Paul Moore
f885566c5e BACKPORT: audit: consistently record PIDs with task_tgid_nr()
Unfortunately we record PIDs in audit records using a variety of
methods despite the correct way being the use of task_tgid_nr().
This patch converts all of these callers, except for the case of
AUDIT_SET in audit_receive_msg() (see the comment in the code).
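
The conversion itself is mechanical; for example, a record that logged
the thread ID now logs the thread-group ID:

    /* was: current->pid or task_pid_nr(current) */
    audit_log_format(ab, " pid=%d", task_tgid_nr(current));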

Reported-by: Jeff Vander Stoep <jeffv@google.com>
Signed-off-by: Paul Moore <paul@paul-moore.com>

Bug: 28952093

(cherry picked from commit fa2bea2f5cca5b8d4a3e5520d2e8c0ede67ac108)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: If6645f9de8bc58ed9755f28dc6af5fbf08d72a00
2016-10-12 17:34:22 +05:30
Jeff Vander Stoep
ce33efa799 android-base.cfg: Enable kernel ASLR
Bug: 30369029
Change-Id: I0c1c932255866f308d67de1df2ad52c9c19c4799
2016-10-12 17:34:22 +05:30
Heiko Carstens
0827c0e185 UPSTREAM: vmlinux.lds.h: allow arch specific handling of ro_after_init data section
commit c74ba8b3480d ("arch: Introduce post-init read-only memory")
introduced the __ro_after_init attribute, which allows variables to be
added to the ro_after_init data section.

This new section was added to rodata, even though it contains writable
data. This in turn causes problems on architectures that, very early on,
mark read-only the page table entries which point to rodata.

This patch allows architectures to implement their own handling of the
.data..ro_after_init section.
Usually that would be:
- mark the rodata section read-only very early
- mark the ro_after_init section read-only within mark_rodata_ro

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

Bug: 31660652
Change-Id: If68cb4d86f88678c9bac8c47072775ab85ef5770
(cherry picked from commit 32fb2fc5c357fb99616bbe100dbcb27bc7f5d045)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2016-10-12 17:34:22 +05:30
Will Deacon
c749e858e1 UPSTREAM: arm64: spinlock: fix spin_unlock_wait for LSE atomics
Commit d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against
concurrent lockers") fixed spin_unlock_wait for LL/SC-based atomics under
the premise that the LSE atomics (in particular, the LDADDA instruction)
are indivisible.

Unfortunately, these instructions are only indivisible when used with the
-AL (full ordering) suffix and, consequently, the same issue can
theoretically be observed with LSE atomics, where a later (in program
order) load can be speculated before the write portion of the atomic
operation.

This patch fixes the issue by performing a CAS of the lock once we've
established that it's unlocked, in much the same way as the LL/SC code.

Fixes: d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")
Signed-off-by: Will Deacon <will.deacon@arm.com>

Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 3a5facd09da848193f5bcb0dea098a298bc1a29d)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Icedaa4c508784bf43d0b5787586480fd668ccc49
2016-10-12 17:34:22 +05:30
Mark Rutland
feffbe70d5 UPSTREAM: arm64: avoid TLB conflict with CONFIG_RANDOMIZE_BASE
When CONFIG_RANDOMIZE_BASE is selected, we modify the page tables to remap the
kernel at a newly-chosen VA range. We do this with the MMU disabled, but do not
invalidate TLBs prior to re-enabling the MMU with the new tables. Thus entries
for the old mappings may still live in TLBs, and we risk violating
Break-Before-Make requirements, leading to TLB conflicts and/or other issues.

We invalidate TLBs when we uninstall the idmap in early setup code, but prior to
this we are subject to issues relating to the Break-Before-Make violation.

Avoid these issues by invalidating the TLBs before the new mappings can be
used by the hardware.

Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
Cc: <stable@vger.kernel.org> # 4.6+
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit fd363bd417ddb6103564c69cfcbd92d9a7877431)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I6c23ce55cdd8b66587b6787b8f28df8535e39f24
2016-10-12 17:34:22 +05:30
Catalin Marinas
e54f34381d UPSTREAM: arm64: Only select ARM64_MODULE_PLTS if MODULES=y
Selecting CONFIG_RANDOMIZE_BASE=y and CONFIG_MODULES=n fails to build
the module PLTs support:

  CC      arch/arm64/kernel/module-plts.o
/work/Linux/linux-2.6-aarch64/arch/arm64/kernel/module-plts.c: In function ‘module_emit_plt_entry’:
/work/Linux/linux-2.6-aarch64/arch/arm64/kernel/module-plts.c:32:49: error: dereferencing pointer to incomplete type ‘struct module’

This patch selects ARM64_MODULE_PLTS conditionally only if MODULES is
enabled.

Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
Cc: <stable@vger.kernel.org> # 4.6+
Reported-by: Jeff Vander Stoep <jeffv@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit b9c220b589daaf140f5b8ebe502c98745b94e65c)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I446cb3aa78f1c64b5aa1e2e90fda13f7d46cac33
2016-10-12 17:34:22 +05:30
John Stultz
fc1d6c8c6a sched: Add Kconfig option DEFAULT_USE_ENERGY_AWARE to set ENERGY_AWARE feature flag
The ENERGY_AWARE sched feature flag cannot be set unless
CONFIG_SCHED_DEBUG is enabled.

So this patch allows the flag to default to true at build time
if the config is set.
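
A sketch of the wiring in kernel/sched/features.h (option name per the
subject line; exact form illustrative):

    #ifdef CONFIG_DEFAULT_USE_ENERGY_AWARE
    SCHED_FEAT(ENERGY_AWARE, true)
    #else
    SCHED_FEAT(ENERGY_AWARE, false)
    #endif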

Change-Id: I8835a571fdb7a8f8ee6a54af1e11a69f3b5ce8e6
Signed-off-by: John Stultz <john.stultz@linaro.org>
2016-10-12 17:34:22 +05:30
Caesar Wang
ca4b83c547 sched/fair: remove printk while schedule is in progress
Calling printk while schedule is in progress can cause a deadlock and an
infinite loop. The blocked state is as below:

cpu0(hold the console sem):
printk->console_unlock->up_sem->spin_lock(&sem->lock)->wake_up_process(cpu1)
->try_to_wake_up(cpu1)->while(p->on_cpu).

cpu1(request console sem):
console_lock->down_sem->schedule->idle_balance->update_cpu_capacity->
printk->console_trylock->spin_lock(&sem->lock).

p->on_cpu will be 1 forever, because the task is still running on cpu1,
so cpu0 is blocked in while(p->on_cpu); but cpu1 cannot take
spin_lock(&sem->lock), so it is blocked too, which means the task will
keep running on cpu1 forever.

Signed-off-by: Caesar Wang <wxt@rock-chips.com>
2016-10-12 17:34:22 +05:30
Mohan Srinivasan
31f42471b1 ANDROID: fs: FS tracepoints to track IO.
Adds tracepoints in ext4/f2fs/mpage to track readpages/buffered
write()s. This allows us to attribute the files being read/written
to the PIDs doing the IO.

Change-Id: I26bd36f933108927d6903da04d8cb42fd9c3ef3d
Signed-off-by: Mohan Srinivasan <srmohan@google.com>
2016-10-12 17:34:22 +05:30
Chris Redpath
27f0430c61 sched/walt: Drop arch-specific timer access
On at least one platform, the timer providing the wallclock occasionally
could be reset or go backwards for some time after wakeup.

Accept that this might happen and warn the first time, but otherwise just
carry on.

Change-Id: Id3164477ba79049561af7f0889cbeebc199ead4e
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2016-10-12 17:34:22 +05:30
Jeff Vander Stoep
9e86c7d3b1 ANDROID: fiq_debugger: Pass task parameter to unwind_frame()
Fixes: fe13f95b7200 ("arm64: pass a task parameter to unwind_frame()")

Bug: 30369029
Patchset: rework-pagetable

Change-Id: I9a4ab50ef61532d27282f189f063c938c196ec08
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
2016-10-12 17:34:22 +05:30
Srinath Sridharan
09c5f91afe eas/sched/fair: Fixing comments in find_best_target.
Change-Id: I83f5b9887e98f9fdb81318cde45408e7ebfc4b13
Signed-off-by: Srinath Sridharan <srinathsr@google.com>
2016-10-12 17:34:22 +05:30
Eric Ernst
e931652d1b input: keyreset: switch to orderly_reboot
The prior restart function made a call to sys_sync and then
executed a kernel reset.  Rather than calling the sync directly,
which would require this driver to be built in, call orderly_reboot,
which takes care of the file system sync.

Note: since CONFIG_INPUT Kconfig is tristate, this driver can be built
as module, despite being marked bool.
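
The restart path then reduces to (sketch; the driver defers it via a
work item so the reboot runs in process context):

    static void do_restart(struct work_struct *unused)
    {
            orderly_reboot();   /* syncs filesystems, then reboots */
    }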

Signed-off-by: Eric Ernst <eric.ernst@linux.intel.com>
2016-10-12 17:34:22 +05:30