(cherry picked from commit 92964c79b357efd980812c4de5c1fd2ec8bb5520)
When we free cb->skb after a dump, we do it after releasing the
lock. This means that a new dump could have started in the meantime,
and we would end up freeing its skb instead of ours.
This patch saves the skb and module before we unlock so we free
the right memory.
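A minimal sketch of the idea, with field and helper names as in
net/netlink/af_netlink.c of this era (treat them as illustrative):

  /* in netlink_dump(), once the dump is done: */
  nlk->cb_running = false;
  module = cb->module;            /* snapshot while the lock is held */
  skb = cb->skb;
  mutex_unlock(nlk->cb_mutex);
  module_put(module);             /* cb may already belong to a new dump */
  consume_skb(skb);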
Fixes: 16b304f340 ("netlink: Eliminate kmalloc in netlink dump operation.")
Reported-by: Baozeng Ding <sploving1@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change-Id: Ie2db6a32a49686c6d22c4a88c251b288343c7813
Bug: 33393474
(cherry picked from commit b98b0bc8c431e3ceb4b26b0dfc8db509518fb290)
CAP_NET_ADMIN users should not be allowed to set negative
sk_sndbuf or sk_rcvbuf values, as it can lead to various memory
corruptions, crashes, OOM...
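Roughly, the SO_SNDBUFFORCE path (and its SO_RCVBUFFORCE twin) in
sock_setsockopt() gains a clamp like the sketch below; treat it as
illustrative rather than the exact diff:

  case SO_SNDBUFFORCE:
          if (!capable(CAP_NET_ADMIN)) {
                  ret = -EPERM;
                  break;
          }
          /* No negative values: val is doubled and stored in an int,
           * so a negative input would underflow sk_sndbuf.
           */
          if (val < 0)
                  val = 0;
          goto set_sndbuf;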
Note that before commit 8298193012 ("net: cleanups in
sock_setsockopt()"), the bug was even more serious, since SO_SNDBUF
and SO_RCVBUF were vulnerable.
This needs to be backported to all known linux kernels.
Again, many thanks to syzkaller team for discovering this gem.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change-Id: I2b621c28c02267af5b34a379b2970fe5fb61a4f6
Bug: 33363517
commit 6533af4d4831c421cd9aa4dce7cfc19a3514cc09 upstream.
If a kernel doesn't support MSA context (i.e. CONFIG_CPU_HAS_MSA=n) then
it will only keep 64 bits per FP register in thread context, and the
calls to set_fpr64 in restore_msa_extcontext will overrun the end of the
FP register context into the FCSR & MSACSR values. GCC 6.x has become
smart enough to detect this & complain like so:
arch/mips/kernel/signal.c: In function 'protected_restore_fp_context':
./arch/mips/include/asm/processor.h:114:17: error: array subscript is above array bounds [-Werror=array-bounds]
fpr->val##width[FPR_IDX(width, idx)] = val; \
~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
./arch/mips/include/asm/processor.h:118:1: note: in expansion of macro 'BUILD_FPR_ACCESS'
BUILD_FPR_ACCESS(64)
The only way to trigger this code to run would be for a program to set
up an artificial extended MSA context structure following a sigframe &
execute sigreturn. Whilst this doesn't allow a program to write to any
state that it couldn't already, it makes little sense to allow this
"restoration" of MSA context in a system that doesn't support MSA.
Fix this by killing a program with SIGSYS if it tries something as crazy
as "restoring" fake MSA context in this way, also fixing the build error
& allowing for most of restore_msa_extcontext to be optimised out of
kernels without support for MSA.
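The guard at the top of restore_msa_extcontext() then looks roughly
like this (a sketch; older trees spell the check config_enabled()
instead of IS_ENABLED()):

  if (!IS_ENABLED(CONFIG_CPU_HAS_MSA))
          return SIGSYS;  /* also lets the compiler drop the rest of the function */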
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reported-by: Michal Toman <michal.toman@imgtec.com>
Fixes: bf82cb30c7 ("MIPS: Save MSA extended context around signals")
Tested-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Michal Toman <michal.toman@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13164/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
->setattr() was recently implemented for socket files to sync the socket
inode's uid to the new 'sk_uid' member of struct sock. It does this by
copying over the ia_uid member of struct iattr. However, ia_uid is
actually only valid when ATTR_UID is set in ia_valid, indicating that
the uid is being changed, e.g. by chown. Other metadata operations such
as chmod or utimes leave ia_uid uninitialized. Therefore, sk_uid could
be set to a "garbage" value from the stack.
Fix this by only copying the uid over when ATTR_UID is set.
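A sketch of the corrected socket inode ->setattr(), with names
following net/socket.c (illustrative, not the exact diff):

  static int sockfs_setattr(struct dentry *dentry, struct iattr *iattr)
  {
          int err = simple_setattr(dentry, iattr);

          if (!err && (iattr->ia_valid & ATTR_UID)) {
                  struct socket *sock = SOCKET_I(d_inode(dentry));

                  /* only sync the uid when it is actually being changed */
                  sock->sk->sk_uid = iattr->ia_uid;
          }

          return err;
  }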
[cherry-pick of net e1a3a60a2ebe991605acb14cd58e39c0545e174e]
Bug: 16355602
Change-Id: I20e53848e54282b72a388ce12bfa88da5e3e9efe
Fixes: 86741ec25462 ("net: core: Add a UID field to struct sock.")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Tested-by: Lorenzo Colitti <lorenzo@google.com>
Acked-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit 4b65a5db3627 ("arm64: Introduce uaccess_{disable,enable}
functionality based on TTBR0_EL1") added conditional user access
enable/disable. Unfortunately, a typo prevents the PAN bit from being
cleared for user access functions.
Restore the PAN functionality by adding the missing '!'.
Fixes: 4b65a5db3627 ("arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1")
Reported-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Change-Id: If61cb6cc756affc7df7fa06213723a8b96eb1e80
(cherry picked from commit 75037120e62b58c536999eb23d70cfcb6d6c0bcc)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
This patch adds the Kconfig option to enable support for TTBR0 PAN
emulation. The option is off by default because of a slight performance hit
when enabled, caused by the additional TTBR0_EL1 switching during user
access operations or exception entry/exit code.
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Change-Id: I2f0b5f332e3c56ea0453ff69826525dec49f034b
(cherry picked from commit ba42822af1c287f038aa550f3578c61c212a892e)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Privcmd calls are issued by userspace. The kernel needs to enable
access to TTBR0_EL1 as the hypervisor would issue stage 1 translations
to user memory via AT instructions. Since AT instructions are not
affected by the PAN bit (ARMv8.1), we only need the explicit
uaccess_enable/disable if the TTBR0 PAN option is enabled.
Reviewed-by: Julien Grall <julien.grall@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Change-Id: I64d827923d869c1868702c8a18efa99ea91d3151
(cherry picked from commit 9cf09d68b89ae5fe0261dcc69464bcc676900af6)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
When TTBR0_EL1 is set to the reserved page, an erroneous kernel access
to user space would generate a translation fault. This patch adds the
checks for the software-set PSR_PAN_BIT to emulate a permission fault
and report it accordingly.
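Conceptually, the permission-fault test in arch/arm64/mm/fault.c
becomes something like the sketch below (helper and constant names
follow that file, but treat the exact shape as illustrative):

  /* in is_permission_fault(esr, regs): */
  if (system_uses_ttbr0_pan())
          return fsc_type == ESR_ELx_FSC_FAULT &&
                 (regs->pstate & PSR_PAN_BIT);

  return fsc_type == ESR_ELx_FSC_PERM;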
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Change-Id: I87e48f6075f84878e4d26d4fadf6eaac49d2cb4e
(cherry picked from commit 786889636ad75296c213547d1ca656af4c59f390)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
When the TTBR0 PAN feature is enabled, the kernel entry points need to
disable access to TTBR0_EL1. The PAN status of the interrupted context
is stored as part of the saved pstate, reusing the PSR_PAN_BIT (22).
Restoring access to TTBR0_EL1 is done on exception return if returning
to user or returning to a context where PAN was disabled.
Context switching via switch_mm() must defer the update of TTBR0_EL1
until a return to user or an explicit uaccess_enable() call.
Special care needs to be taken for two cases where TTBR0_EL1 is set
outside the normal kernel context switch operation: EFI run-time
services (via efi_set_pgd) and CPU suspend (via cpu_(un)install_idmap).
Code has been added to avoid deferred TTBR0_EL1 switching as in
switch_mm() and restore the reserved TTBR0_EL1 when uninstalling the
special TTBR0_EL1.
User cache maintenance (user_cache_maint_handler and
__flush_cache_user_range) needs the TTBR0_EL1 re-instated since the
operations are performed by user virtual address.
This patch also removes a stale comment on the switch_mm() function.
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Change-Id: I85a49f70e13b153b9903851edf56f6531c14e6de
(cherry picked from commit 39bc88e5e38e9b213bd7d833ce0df6ec029761ad)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
This patch adds the uaccess macros/functions to disable access to user
space by setting TTBR0_EL1 to a reserved zeroed page. Since the value
written to TTBR0_EL1 must be a physical address, for simplicity this
patch introduces a reserved_ttbr0 page at a constant offset from
swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value
adjusted by the reserved_ttbr0 offset.
Enabling access to user space is done by restoring TTBR0_EL1 with the value
from the struct thread_info ttbr0 variable. Interrupts must be disabled
during the uaccess_ttbr0_enable code to ensure the atomicity of the
thread_info.ttbr0 read and TTBR0_EL1 write. This patch also moves the
get_thread_info asm macro from entry.S to assembler.h for reuse in the
uaccess_ttbr0_* macros.
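The C half of this can be pictured roughly as follows (a sketch based
on the description above; the real patch also provides equivalent asm
macros):

  static inline void __uaccess_ttbr0_disable(void)
  {
          unsigned long ttbr;

          /* reserved_ttbr0 sits at a fixed offset from swapper_pg_dir */
          ttbr = read_sysreg(ttbr1_el1) + SWAPPER_DIR_SIZE;
          write_sysreg(ttbr, ttbr0_el1);
          isb();
  }

  static inline void __uaccess_ttbr0_enable(void)
  {
          unsigned long flags;

          /*
           * Disable interrupts so the thread_info.ttbr0 read and the
           * TTBR0_EL1 write appear as a single atomic step.
           */
          local_irq_save(flags);
          write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
          isb();
          local_irq_restore(flags);
  }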
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Change-Id: I54ada623160cb47f5762e0e39a5e84a75252dbfd
(cherry picked from commit 4b65a5db362783ab4b04ca1c1d2ad70ed9b0ba2a)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
This patch takes the errata workaround code out of cpu_do_switch_mm into
a dedicated post_ttbr0_update_workaround macro which will be reused in a
subsequent patch.
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Change-Id: I2b45b11ab7390c3545b9e162532109c1526bef14
(cherry picked from commit f33bcf03e6079668da6bf4eec4a7dcf9289131d0)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
This patch moves the directly coded alternatives for turning PAN on/off
into separate uaccess_{enable,disable} macros or functions. The asm
macros take a few arguments which will be used in subsequent patches.
Note that any (unlikely) access that the compiler might generate between
uaccess_enable() and uaccess_disable(), other than those explicitly
specified by the user access code, will not be protected by PAN.
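For the PAN-only configuration, the new functions are essentially
wrappers around the previously open-coded alternative, along these
lines (the capability name may differ between trees):

  static inline void uaccess_disable(void)
  {
          asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1),
                          ARM64_HAS_PAN, CONFIG_ARM64_PAN));
  }

  static inline void uaccess_enable(void)
  {
          asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0),
                          ARM64_HAS_PAN, CONFIG_ARM64_PAN));
  }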
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Change-Id: I75a410139d0756edab3210ee091fa5d047a22e04
(cherry picked from commit bd38967d406fb4f9fca67d612db71b5d74cfb0f5)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
In some cases, one side of an alternative sequence is simply a number of
NOPs used to balance the other side. Keeping track of this manually is
tedious, and the presence of large chains of NOPs makes the code more
painful to read than necessary.
To ameliorate matters, this patch adds a new alternative_else_nop_endif,
which automatically balances an alternative sequence with a trivial NOP
sled.
In many cases, we would like a NOP-sled in the default case, and the
real instructions patched in when a feature is present. To enable the
NOPs to be generated automatically for this case, this patch also adds
a new alternative_if, and updates alternative_else and alternative_endif
to work with either alternative_if or alternative_if_not.
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[will: use new nops macro to generate nop sequences]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 31432001
Change-Id: I28d8aae073e113048577c41cfe27c91215fb4cf3
(cherry picked from commit 792d47379f4d4c76692f1795f33d38582f8907fa)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
NOP sequences tend to get used for padding out alternative sections
and uarch-specific pipeline flushes in errata workarounds.
This patch adds macros for generating these sequences, both as inline
asm blocks and as strings suitable for embedding directly in other asm
blocks.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 31432001
Change-Id: I7f82b677a065ede302a763d39ffcc3fef83f8fbe
(cherry picked from commit f99a250cb6a3b301b101b4c0f5fcb80593bba6dc)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Fix the SCHED_WALT dependency on FAIR_GROUP_SCHED, otherwise we run
into the following build failure:
CC kernel/sched/walt.o
kernel/sched/walt.c: In function 'walt_inc_cfs_cumulative_runnable_avg':
kernel/sched/walt.c:148:8: error: 'struct cfs_rq' has no member named 'cumulative_runnable_avg'
cfs_rq->cumulative_runnable_avg += p->ravg.demand;
^
kernel/sched/walt.c: In function 'walt_dec_cfs_cumulative_runnable_avg':
kernel/sched/walt.c:154:8: error: 'struct cfs_rq' has no member named 'cumulative_runnable_avg'
cfs_rq->cumulative_runnable_avg -= p->ravg.demand;
^
Reported-at: https://bugs.linaro.org/show_bug.cgi?id=2793
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
We want to use network trace events in production
builds, to help diagnose Wifi problems. However, we
don't want to expose raw kernel pointers in such
builds.
Change the format specifier for the skbaddr field,
so that, if kptr_restrict is enabled, the pointers
will be reported as 0.
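One way to get that behaviour, assuming the usual net trace event
templates, is to print the field with %pK, which is rendered as 0 when
kptr_restrict forbids exposing the pointer:

  TP_printk("dev=%s skbaddr=%pK len=%u",
            __get_str(name), __entry->skbaddr, __entry->len)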
Bug: 30090733
Change-Id: Ic4bd583d37af6637343601feca875ee24479ddff
Signed-off-by: mukesh agrawal <quiche@google.com>
Commit e2d118a1cb5e ("net: inet: Support UID-based routing in IP
protocols.") made __build_flow_key call sock_net(sk) to determine
the network namespace of the passed-in socket. This crashes if sk
is NULL.
Fix this by getting the network namespace from the skb instead.
Bug: 16355602
Change-Id: I27161b70f448bb95adce3994a97920d54987ce4e
Fixes: e2d118a1cb5e ("net: inet: Support UID-based routing in IP protocols.")
Reported-by: Erez Shitrit <erezsh@dev.mellanox.co.il>
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
- Use the UID in routing lookups made by protocol connect() and
sendmsg() functions.
- Make sure that routing lookups triggered by incoming packets
(e.g., Path MTU discovery) take the UID of the socket into
account.
- For packets not associated with a userspace socket (e.g., ping
  replies), use UID 0 inside the user namespace corresponding to
  the network namespace the socket belongs to. This allows
  all namespaces to apply routing and iptables rules to
  kernel-originated traffic in that namespace by matching UID 0.
  This is better than using the UID of the kernel socket that is
  sending the traffic, because the UID of kernel sockets created
  at namespace creation time (e.g., the per-processor ICMP and
  TCP sockets) is the UID of the user that created the socket,
  which might not be mapped in the namespace. (See the helper
  sketched below.)
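That per-packet fallback boils down to a small helper along these
lines (the name sock_net_uid follows the upstream series; treat the
sketch as illustrative):

  static inline kuid_t sock_net_uid(const struct net *net,
                                    const struct sock *sk)
  {
          /* no socket: attribute the traffic to UID 0 in the netns's
           * user namespace, not to the kernel socket's creator
           */
          return sk ? sk->sk_uid : make_kuid(net->user_ns, 0);
  }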
Bug: 16355602
Change-Id: I910504b508948057912bc188fd1e8aca28294de3
Tested: compiles allnoconfig, allyesconfig, allmodconfig
Tested: https://android-review.googlesource.com/253302
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
- Define a new FIB rule attribute, FRA_UID_RANGE, to describe a
  range of UIDs (see the sketch below).
- Define a RTA_UID attribute for per-UID route lookups and dumps.
- Support passing these attributes to and from userspace via
rtnetlink. The value INVALID_UID indicates no UID was
specified.
- Add a UID field to the flow structures.
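The userspace-visible part amounts to a small uapi structure plus a
flow field, roughly (names as in the upstream series; abridged):

  /* include/uapi/linux/fib_rules.h */
  struct fib_rule_uid_range {
          __u32   start;
          __u32   end;
  };

  /* and struct flowi_common gains a UID, e.g. */
  kuid_t  flowic_uid;     /* accessed as flowi4_uid / flowi6_uid */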
Bug: 16355602
Change-Id: Iea98e6fedd0fd4435a1f4efa3deb3629505619ab
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Protocol sockets (struct sock) don't have UIDs, but most of the
time, they map 1:1 to userspace sockets (struct socket) which do.
Various operations such as the iptables xt_owner match need
access to the "UID of a socket", and do so by following the
backpointer to the struct socket. This involves taking
sk_callback_lock and doesn't work when there is no socket
because userspace has already called close().
Simplify this by adding a sk_uid field to struct sock whose value
matches the UID of the corresponding struct socket. The semantics
are as follows:
1. Whenever sk_socket is non-null: sk_uid is the same as the UID
in sk_socket, i.e., matches the return value of sock_i_uid.
Specifically, the UID is set when userspace calls socket(),
fchown(), or accept().
2. When sk_socket is NULL, sk_uid is defined as follows:
- For a socket that no longer has a sk_socket because
userspace has called close(): the previous UID.
- For a cloned socket (e.g., an incoming connection that is
established but on which userspace has not yet called
accept): the UID of the socket it was cloned from.
- For a socket that has never had an sk_socket: UID 0 inside
the user namespace corresponding to the network namespace
the socket belongs to.
Kernel sockets created by sock_create_kern are a special case
of #1 and sk_uid is the user that created them. For kernel
sockets created at network namespace creation time, such as the
per-processor ICMP and TCP sockets, this is the user that created
the network namespace.
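In code, case 1 and the "never had an sk_socket" default amount to
something like the following in sock_init_data() (a sketch; helper
names as upstream):

  /* in sock_init_data(struct socket *sock, struct sock *sk): */
  if (sock)
          sk->sk_uid = SOCK_INODE(sock)->i_uid;
  else
          sk->sk_uid = make_kuid(sock_net(sk)->user_ns, 0);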
Bug: 16355602
Change-Id: Idbc3e9a0cec91c4c6e01916b967b6237645ebe59
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(Cherry picked from commit 7cc8cbcf82d165dd658d89a7a287140948e76413)
Commit 4dffbfc48d65 ("arm64/efi: mark UEFI reserved regions as
MEMBLOCK_NOMAP") updated the mapping logic of both the RuntimeServices
regions as well as the kernel's copy of the UEFI memory map to set the
MEMBLOCK_NOMAP flag, which causes these regions to be omitted from the
kernel direct mapping, and from being covered by a struct page.
For the RuntimeServices regions, this is an obvious win, since the contents
of these regions have significance to the firmware executable code itself,
and are mapped in the EFI page tables using attributes that are described in
the UEFI memory map, and which may differ from the attributes we use for
mapping system RAM. It also prevents the contents from being modified
inadvertently, since the EFI page tables are only live during runtime
service invocations.
None of these concerns apply to the allocation that covers the UEFI memory
map, since it is entirely owned by the kernel. Setting the MEMBLOCK_NOMAP on
the region did allow us to use ioremap_cache() to map it both on arm64 and
on ARM, since the latter does not allow ioremap_cache() to be used on
regions that are covered by a struct page.
The ioremap_cache() restriction on ARM will be lifted in the v4.7 timeframe,
but in the meantime, it has been reported that commit 4dffbfc48d65 causes
a regression on 64k granule kernels. This is due to the fact that, given
the 64 KB page size, the region that we end up removing from the kernel
direct mapping is rounded up to 64 KB, and this 64 KB page frame may be
shared with the initrd when booting via GRUB (which does not align its
EFI_LOADER_DATA allocations to 64 KB like the stub does). This will crash
the kernel as soon as it tries to access the initrd.
Since the issue is specific to arm64, revert back to memblock_reserve()'ing
the UEFI memory map when running on arm64. This is a temporary fix for v4.5
and v4.6, and will be superseded in the v4.7 timeframe when we will be able
to move back to memblock_reserve() unconditionally.
Fixes: 4dffbfc48d65 ("arm64/efi: mark UEFI reserved regions as MEMBLOCK_NOMAP")
Reported-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Jeremy Linton <jeremy.linton@arm.com>
Cc: Mark Langsdorf <mlangsdo@redhat.com>
Cc: <stable@vger.kernel.org> # v4.5
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Fixes: Change-Id: Ia3ce78f40f8d41a9afdd42238fe9cbfd81bbff08
("UPSTREAM: arm64/efi: mark UEFI reserved regions as MEMBLOCK_NOMAP")
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
(Cherry picked from commit 0106d456c4cb1770253fefc0ab23c9ca760b43f7)
Commit 66dbd6e61a52 ("arm64: Implement ptep_set_access_flags() for
hardware AF/DBM") ensured that pte flags are updated atomically in the
face of potential concurrent, hardware-assisted updates. However, Alex
reports that:
| This patch breaks swapping for me.
| In the broken case, you'll see either systemd cpu time spike (because
| it's stuck in a page fault loop) or the system hang (because the
| application owning the screen is stuck in a page fault loop).
It turns out that this is because the 'dirty' argument to
ptep_set_access_flags is always 0 for read faults, and so we can't use
it to set PTE_RDONLY. The failing sequence is:
1. We put down a PTE_WRITE | PTE_DIRTY | PTE_AF pte
2. Memory pressure -> pte_mkold(pte) -> clear PTE_AF
3. A read faults due to the missing access flag
4. ptep_set_access_flags is called with dirty = 0, due to the read fault
5. pte is then made PTE_WRITE | PTE_DIRTY | PTE_AF | PTE_RDONLY (!)
6. A write faults, but pte_write is true so we get stuck
The solution is to check the new page table entry (as would be done by
the generic, non-atomic definition of ptep_set_access_flags that just
calls set_pte_at) to establish the dirty state.
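Conceptually, the dirty-argument check is replaced by a check on the
new entry itself, e.g. (accessor names follow
arch/arm64/include/asm/pgtable.h; a sketch, not the exact diff):

  /* in ptep_set_access_flags(), before the atomic update: */
  if (!pte_write(entry) || !pte_sw_dirty(entry))
          pte_val(entry) |= PTE_RDONLY;   /* keep read-only unless truly dirty */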
Cc: <stable@vger.kernel.org> # 4.3+
Fixes: 66dbd6e61a52 ("arm64: Implement ptep_set_access_flags() for hardware AF/DBM")
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Alexander Graf <agraf@suse.de>
Tested-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Fixes: Change-Id: Id2a0b0d8eb6e7df6325ecb48b88b8401a5dd09e5
("UPSTREAM: arm64: Implement ptep_set_access_flags() for hardware AF/DBM")
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
(Cherry picked from commit 282aa7051b0169991b34716f0f22d9c2f59c46c4)
The update to the accessed or dirty states for block mappings must be
done atomically on hardware with support for automatic AF/DBM. The
ptep_set_access_flags() function has been fixed as part of commit
66dbd6e61a52 ("arm64: Implement ptep_set_access_flags() for hardware
AF/DBM"). This patch brings pmdp_set_access_flags() in line with the pte
counterpart.
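The pmd variant can then simply reuse the (now atomic) pte
implementation, roughly:

  #define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
  static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
                                          unsigned long address, pmd_t *pmdp,
                                          pmd_t entry, int dirty)
  {
          return ptep_set_access_flags(vma, address, (pte_t *)pmdp,
                                       pmd_pte(entry), dirty);
  }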
Fixes: 2f4b829c62 ("arm64: Add support for hardware updates of the access and dirty pte bits")
Cc: <stable@vger.kernel.org> # 4.4.x: 66dbd6e61a52: arm64: Implement ptep_set_access_flags() for hardware AF/DBM
Cc: <stable@vger.kernel.org> # 4.3+
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
(Cherry picked from commit 911f56eeb87ee378f5e215469268a7a2f68a5a8a)
With hardware AF/DBM support, pmd modifications (transparent huge pages)
should be performed atomically using load/store exclusive. The initial
patches defined the get-and-clear function and __HAVE_ARCH_* macro
without the "huge" word, leaving the pmdp_huge_get_and_clear() to the
default, non-atomic implementation.
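With the correctly named macro and helper, the atomic pte path is
reused, along the lines of:

  #define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR
  static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
                                              unsigned long address,
                                              pmd_t *pmdp)
  {
          return pte_pmd(ptep_get_and_clear(mm, address, (pte_t *)pmdp));
  }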
Fixes: 2f4b829c62 ("arm64: Add support for hardware updates of the access and dirty pte bits")
Cc: <stable@vger.kernel.org> # 4.3+
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
(Cherry picked from commit 57efac2f7108e3255d0dfe512290c9896f4ed55f)
In spite of its name, CONFIG_DEBUG_RODATA is an important hardening feature
for production kernels, and distros all enable it by default in their
kernel configs. However, since enabling it used to result in more granular,
and thus less efficient kernel mappings, it is not enabled by default for
performance reasons.
However, since commit 2f39b5f91eb4 ("arm64: mm: Mark .rodata as RO"), the
various kernel segments (.text, .rodata, .init and .data) are already
mapped individually, and the only effect of setting CONFIG_DEBUG_RODATA is
that the existing .text and .rodata mappings are updated late in the boot
sequence to have their read-only attributes set, which means that any
performance concerns related to enabling CONFIG_DEBUG_RODATA are no longer
valid.
So from now on, make CONFIG_DEBUG_RODATA default to 'y'.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
The result of "__entry->walt_avg = (__entry->demand << 10)" can exceed
the range of "unsigned int"; the value is truncated, making the trace
look like this:
UnityMain-4588 [004] 6029.645672: walt_update_history: 4588(UnityMain): runtime 9928307 samples 1 event 4
demand 9928307 walt 157 pelt 870 (hist: 9928307 9604307 8440077 87392 34144328) cpu 4
UnityMain-4588 [004] 6029.653658: walt_update_history: 4588(UnityMain): runtime 10000000 samples 1 event 4
demand 10000000 walt 165 pelt 886 (hist: 10000000 9955691 6549308 64000 34144328) cpu 4
Fix this by using a u64 type instead of unsigned int, which makes the
trace look like this:
UnityMain-4617 [004] 117.613558: walt_update_history: 4617(UnityMain): runtime 5770597 samples 1 event 4
demand 7038739 walt 720 pelt 680 (hist: 5770597 7680001 8904509 65596 156) cpu 4
UnityMain-4617 [004] 117.633560: walt_update_history: 4617(UnityMain): runtime 9911238 samples 1 event 4
demand 9911238 walt 1014 pelt 769 (hist: 9911238 5770597 7680001 0 1664188058) cpu 4
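The core of the change is just widening the trace field and the
intermediate shift, roughly (a sketch; walt_avg and demand are the
names visible in the expression above):

  __field(u64, walt_avg)                          /* was: unsigned int */

  __entry->walt_avg = (u64)__entry->demand << 10; /* 64-bit intermediate */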
Signed-off-by: Ke Wang <ke.wang@spreadtrum.com>
- For devices like eMMC, reading more hash blocks at a time gives
better performance.
- For Android, set the default to 128. For other devices, set it to 1,
which matches the current behavior.
- This saved 300ms of boot-up time on the tested device.
bug: 32246564
Cc: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Keun-young Park <keunyoung@google.com>
Documentation was missing for the mono and mono_raw trace clocks; add
it, along with documentation for the boot clock introduced in this series.
Bug: b/33184060
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelaf@google.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Unlike the monotonic clock, the boot clock as a trace clock will account
for time spent in suspend, which is useful for tracing suspend/resume.
This uses the infrastructure introduced earlier in this series for the
fast boot clock.
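Registering it as a trace clock is then a one-line addition to the
trace_clocks table in kernel/trace/trace.c, roughly (abridged; exact
contents vary by kernel version):

  static struct {
          u64 (*func)(void);
          const char *name;
          int in_ns;      /* is this clock in nanoseconds? */
  } trace_clocks[] = {
          { trace_clock_local,            "local",        1 },
          { ktime_get_mono_fast_ns,       "mono",         1 },
          { ktime_get_raw_fast_ns,        "mono_raw",     1 },
          { ktime_get_boot_fast_ns,       "boot",         1 },   /* new */
  };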
Bug: b/33184060
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelaf@google.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
This boot clock can be used as a tracing clock and will account for
suspend time.
To keep it NMI safe, since we're accessing it from tracing, we're not
using a separate timekeeper with updates to the monotonic clock and
boot offset protected by seqlocks. This has the following minor side
effects:
(1) It's possible that a timestamp is taken after the boot offset is
updated but before the timekeeper is updated. If this happens, the new
boot offset is added to the old timekeeping, making the clock appear to
update slightly earlier:
    CPU 0                                        CPU 1
    timekeeping_inject_sleeptime64()
      __timekeeping_inject_sleeptime(tk, delta);
                                                 timestamp();
      timekeeping_update(tk, TK_CLEAR_NTP...);
(2) On 32-bit systems, the 64-bit boot offset (tk->offs_boot) may be
partially updated. Since the tk->offs_boot update is a rare event, this
should be a rare occurrence which postprocessing should be able to handle.
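The accessor itself is tiny; a sketch under the constraints above
(name as used later in this series):

  u64 notrace ktime_get_boot_fast_ns(void)
  {
          struct timekeeper *tk = &tk_core.timekeeper;

          /* offs_boot is read without the seqlock; see caveats (1) and (2) */
          return ktime_get_mono_fast_ns() + ktime_to_ns(tk->offs_boot);
  }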
Bug: b/33184060
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelaf@google.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
If the get_user_pages_fast() call in goldfish_pipe_read_write() failed,
it would return while still holding pipe->lock.
goldfish_pipe_read_write() later releases and tries to re-acquire
pipe->lock. If the re-acquire call failed, goldfish_pipe_read_write()
would try to unlock pipe->lock on exit anyway.
This fixes the smatch messages:
drivers/platform/goldfish/goldfish_pipe.c:392 goldfish_pipe_read_write() error: double unlock 'mutex:&pipe->lock'
drivers/platform/goldfish/goldfish_pipe.c:397 goldfish_pipe_read_write() warn: inconsistent returns 'mutex:&pipe->lock'.
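The corrected error paths then look roughly like this (a sketch; the
actual arguments to get_user_pages_fast() in goldfish_pipe_read_write()
differ, and is_write here is just a placeholder):

  ret = get_user_pages_fast(address, 1, !is_write, &page);
  if (ret < 0) {
          mutex_unlock(&pipe->lock);      /* don't return with the lock held */
          return ret;
  }

  /* later, after dropping the lock to wait: */
  if (mutex_lock_interruptible(&pipe->lock))
          return -ERESTARTSYS;            /* lock not re-held, so no unlock on exit */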
Change-Id: Ifd06a76b32027ca451a001704ade0c5440ed69c4
Signed-off-by: Greg Hackmann <ghackmann@google.com>
drivers/video/fbdev/goldfishfb.c:318:3-8: No need to set .owner here. The core will do it.
Remove .owner field if calls are used which set it automatically
Generated by: scripts/coccinelle/api/platform_no_drv_owner.cocci
CC: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Guenter Roeck <groeck@chromium.org>
Function get_free_pipe_id_locked called on line 671 inside lock on line
669 but uses GFP_KERNEL. Replace with GFP_ATOMIC.
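In other words, the allocation inside get_free_pipe_id_locked() runs
with the device lock held and must not sleep; something like this
(variable names here are placeholders):

  /* caller holds the device lock: sleeping allocations are not allowed */
  pipes = krealloc(dev->pipes, new_capacity * sizeof(*pipes), GFP_ATOMIC);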
Generated by: scripts/coccinelle/locks/call_kern.cocci
CC: Yurii Zubrytskyi <zyy@google.com>
Signed-off-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Guenter Roeck <groeck@chromium.org>