When TTBR0_EL1 is set to the reserved page, an erroneous kernel access
to user space would generate a translation fault. This patch adds the
checks for the software-set PSR_PAN_BIT to emulate a permission fault
and report it accordingly.
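A minimal sketch of the check this describes (helper name and exact placement are illustrative, not the actual fault-handler hunk):

#include <linux/kconfig.h>
#include <linux/types.h>
#include <asm/memory.h>
#include <asm/ptrace.h>

static bool is_sw_pan_fault(struct pt_regs *regs, unsigned long addr)
{
	/*
	 * With the reserved TTBR0_EL1 page installed, a stray kernel access
	 * to user space raises a translation fault. If the interrupted
	 * context carried the software PSR_PAN_BIT, report it as a
	 * permission fault, the same way a hardware PAN violation would be.
	 */
	return IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
	       !user_mode(regs) &&
	       addr < TASK_SIZE &&
	       (regs->pstate & PSR_PAN_BIT);
}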
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Change-Id: I87e48f6075f84878e4d26d4fadf6eaac49d2cb4e
(cherry picked from commit 786889636ad75296c213547d1ca656af4c59f390)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
When the TTBR0 PAN feature is enabled, the kernel entry points need to
disable access to TTBR0_EL1. The PAN status of the interrupted context
is stored as part of the saved pstate, reusing the PSR_PAN_BIT (22).
Restoring access to TTBR0_EL1 is done on exception return if returning
to user or returning to a context where PAN was disabled.
Context switching via switch_mm() must defer the update of TTBR0_EL1
until a return to user or an explicit uaccess_enable() call.
Special care needs to be taken for two cases where TTBR0_EL1 is set
outside the normal kernel context switch operation: EFI run-time
services (via efi_set_pgd) and CPU suspend (via cpu_(un)install_idmap).
Code has been added to avoid deferred TTBR0_EL1 switching as in
switch_mm() and restore the reserved TTBR0_EL1 when uninstalling the
special TTBR0_EL1.
User cache maintenance (user_cache_maint_handler and
__flush_cache_user_range) needs the TTBR0_EL1 re-instated since the
operations are performed by user virtual address.
This patch also removes a stale comment on the switch_mm() function.
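A rough sketch of the deferred-switch idea (simplified; the real helper also handles init_mm and the reserved TTBR0 value, and ttbr0 is the thread_info field added by this series):

#include <linux/mm_types.h>
#include <linux/sched.h>
#include <asm/memory.h>

static inline void sw_pan_defer_ttbr0(struct task_struct *tsk,
				      struct mm_struct *next)
{
	/*
	 * Under SW PAN, switch_mm() only records the next mm's TTBR0 value
	 * in thread_info; TTBR0_EL1 itself is written later, on return to
	 * user or from uaccess_enable().
	 */
	if (IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN))
		task_thread_info(tsk)->ttbr0 = virt_to_phys(next->pgd);
}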
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Change-Id: I85a49f70e13b153b9903851edf56f6531c14e6de
(cherry picked from commit 39bc88e5e38e9b213bd7d833ce0df6ec029761ad)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
This patch adds the uaccess macros/functions to disable access to user
space by setting TTBR0_EL1 to a reserved zeroed page. Since the value
written to TTBR0_EL1 must be a physical address, for simplicity this
patch introduces a reserved_ttbr0 page at a constant offset from
swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value
adjusted by the reserved_ttbr0 offset.
Enabling access to user is done by restoring TTBR0_EL1 with the value
from the struct thread_info ttbr0 variable. Interrupts must be disabled
during the uaccess_ttbr0_enable code to ensure the atomicity of the
thread_info.ttbr0 read and TTBR0_EL1 write. This patch also moves the
get_thread_info asm macro from entry.S to assembler.h for reuse in the
uaccess_ttbr0_* macros.
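A C-level sketch of what the asm macros do (the offset constant below is a placeholder; the real code derives the reserved_ttbr0 physical address from the TTBR1_EL1 value adjusted by a fixed offset from swapper_pg_dir):

#include <linux/irqflags.h>
#include <linux/thread_info.h>
#include <asm/barrier.h>
#include <asm/page.h>
#include <asm/sysreg.h>

/* Placeholder for the fixed distance between swapper_pg_dir and the
 * reserved_ttbr0 page; the real value is encoded in the asm macros. */
#define RESERVED_TTBR0_OFFSET	PAGE_SIZE

static inline void sw_pan_uaccess_disable(void)
{
	/* Point TTBR0_EL1 at the zeroed reserved_ttbr0 page. */
	write_sysreg(read_sysreg(ttbr1_el1) + RESERVED_TTBR0_OFFSET,
		     ttbr0_el1);
	isb();
}

static inline void sw_pan_uaccess_enable(void)
{
	unsigned long flags;

	/*
	 * Interrupts off so the thread_info.ttbr0 read and the TTBR0_EL1
	 * write cannot be split by a context switch.
	 */
	local_irq_save(flags);
	write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
	isb();
	local_irq_restore(flags);
}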
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Change-Id: I54ada623160cb47f5762e0e39a5e84a75252dbfd
(cherry picked from commit 4b65a5db362783ab4b04ca1c1d2ad70ed9b0ba2a)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
This patch takes the errata workaround code out of cpu_do_switch_mm into
a dedicated post_ttbr0_update_workaround macro which will be reused in a
subsequent patch.
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Change-Id: I2b45b11ab7390c3545b9e162532109c1526bef14
(cherry picked from commit f33bcf03e6079668da6bf4eec4a7dcf9289131d0)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
This patch moves the directly coded alternatives for turning PAN on/off
into separate uaccess_{enable,disable} macros or functions. The asm
macros take a few arguments which will be used in subsequent patches.
Note that any (unlikely) access that the compiler might generate between
uaccess_enable() and uaccess_disable(), other than those explicitly
specified by the user access code, will not be protected by PAN.
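A sketch of the C-side pair, assuming the usual ALTERNATIVE()/SET_PSTATE_PAN() helpers (close to, but not necessarily identical to, the patch's actual definitions):

#include <asm/alternative.h>
#include <asm/cpufeature.h>
#include <asm/sysreg.h>

static inline void uaccess_disable_sketch(void)
{
	/* Set PSTATE.PAN when the CPU has PAN and CONFIG_ARM64_PAN=y. */
	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
			CONFIG_ARM64_PAN));
}

static inline void uaccess_enable_sketch(void)
{
	/* Clear PSTATE.PAN around the explicit user accessors. */
	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,
			CONFIG_ARM64_PAN));
}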
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Change-Id: I75a410139d0756edab3210ee091fa5d047a22e04
(cherry picked from commit bd38967d406fb4f9fca67d612db71b5d74cfb0f5)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
In some cases, one side of an alternative sequence is simply a number of
NOPs used to balance the other side. Keeping track of this manually is
tedious, and the presence of large chains of NOPs makes the code more
painful to read than necessary.
To ameliorate matters, this patch adds a new alternative_else_nop_endif,
which automatically balances an alternative sequence with a trivial NOP
sled.
In many cases, we would like a NOP-sled in the default case, and
instructions patched in in the presence of a feature. To enable the NOPs
to be generated automatically for this case, this patch also adds a new
alternative_if, and updates alternative_else and alternative_endif to
work with either alternative_if or alternative_if_not.
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[will: use new nops macro to generate nop sequences]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 31432001
Change-Id: I28d8aae073e113048577c41cfe27c91215fb4cf3
(cherry picked from commit 792d47379f4d4c76692f1795f33d38582f8907fa)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
NOP sequences tend to get used for padding out alternative sections
and uarch-specific pipeline flushes in errata workarounds.
This patch adds macros for generating these sequences, both as inline
asm blocks and as strings suitable for embedding directly in other asm
blocks.
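A sketch of the C-side forms (believed to be close to the actual definitions, shown here for illustration):

/* String form, for embedding in other asm templates. */
#define __nops(n)	".rept " #n "\nnop\n.endr\n"

/* Inline asm block form. */
#define nops(n)		asm volatile(__nops(n))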
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 31432001
Change-Id: I7f82b677a065ede302a763d39ffcc3fef83f8fbe
(cherry picked from commit f99a250cb6a3b301b101b4c0f5fcb80593bba6dc)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Fix the SCHED_WALT dependency on FAIR_GROUP_SCHED; otherwise we run
into the following build failure:
CC kernel/sched/walt.o
kernel/sched/walt.c: In function 'walt_inc_cfs_cumulative_runnable_avg':
kernel/sched/walt.c:148:8: error: 'struct cfs_rq' has no member named 'cumulative_runnable_avg'
cfs_rq->cumulative_runnable_avg += p->ravg.demand;
^
kernel/sched/walt.c: In function 'walt_dec_cfs_cumulative_runnable_avg':
kernel/sched/walt.c:154:8: error: 'struct cfs_rq' has no member named 'cumulative_runnable_avg'
cfs_rq->cumulative_runnable_avg -= p->ravg.demand;
^
Reported-at: https://bugs.linaro.org/show_bug.cgi?id=2793
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
We want to use network trace events in production
builds, to help diagnose Wifi problems. However, we
don't want to expose raw kernel pointers in such
builds.
Change the format specifier for the skbaddr field,
so that, if kptr_restrict is enabled, the pointers
will be reported as 0.
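For illustration only (not the actual trace-event diff): %pK is the specifier that honours kptr_restrict, so restricted readers see 0 instead of the raw pointer.

#include <linux/printk.h>
#include <linux/skbuff.h>

static void show_skbaddr(const struct sk_buff *skb)
{
	/* %pK prints 0 when kptr_restrict forbids exposing the address. */
	pr_debug("skbaddr=%pK\n", skb);
}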
Bug: 30090733
Change-Id: Ic4bd583d37af6637343601feca875ee24479ddff
Signed-off-by: mukesh agrawal <quiche@google.com>
Commit e2d118a1cb5e ("net: inet: Support UID-based routing in IP
protocols.") made __build_flow_key call sock_net(sk) to determine
the network namespace of the passed-in socket. This crashes if sk
is NULL.
Fix this by getting the network namespace from the skb instead.
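A sketch of the idea (the actual fix threads a struct net * from the callers into __build_flow_key; the helper name here is hypothetical):

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/net_namespace.h>
#include <net/sock.h>

static struct net *flow_key_net(const struct sk_buff *skb,
				const struct sock *sk)
{
	/* Derive the netns from the skb's device rather than a NULL sk. */
	return sk ? sock_net(sk) : dev_net(skb->dev);
}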
Bug: 16355602
Change-Id: I27161b70f448bb95adce3994a97920d54987ce4e
Fixes: e2d118a1cb5e ("net: inet: Support UID-based routing in IP protocols.")
Reported-by: Erez Shitrit <erezsh@dev.mellanox.co.il>
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
- Use the UID in routing lookups made by protocol connect() and
sendmsg() functions.
- Make sure that routing lookups triggered by incoming packets
(e.g., Path MTU discovery) take the UID of the socket into
account.
- For packets not associated with a userspace socket (e.g., ping
replies), use UID 0 inside the user namespace corresponding to
the network namespace the socket belongs to. This allows
all namespaces to apply routing and iptables rules to
kernel-originated traffic in that namespace by matching UID 0.
This is better than using the UID of the kernel socket that is
sending the traffic, because the UID of kernel sockets created
at namespace creation time (e.g., the per-processor ICMP and
TCP sockets) is the UID of the user that created the socket,
which might not be mapped in the namespace.
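The fallback rule above boils down to a helper along these lines (a sketch of what upstream exposes as sock_net_uid(); sk_uid is the field introduced later in this series):

#include <linux/uidgid.h>
#include <net/net_namespace.h>
#include <net/sock.h>

static inline kuid_t routing_uid(const struct net *net, const struct sock *sk)
{
	/* Socketless (kernel-originated) traffic maps to UID 0 in the
	 * netns's user namespace. */
	return sk ? sk->sk_uid : make_kuid(net->user_ns, 0);
}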
Bug: 16355602
Change-Id: I910504b508948057912bc188fd1e8aca28294de3
Tested: compiles allnoconfig, allyesconfig, allmodconfig
Tested: https://android-review.googlesource.com/253302
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
- Define a new FIB rule attribute, FRA_UID_RANGE, to describe a
range of UIDs.
- Define a RTA_UID attribute for per-UID route lookups and dumps.
- Support passing these attributes to and from userspace via
rtnetlink. The value INVALID_UID indicates no UID was
specified.
- Add a UID field to the flow structures.
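A sketch of the attribute payload (field names as believed to appear in the uapi header; treat the exact layout as an assumption):

#include <linux/types.h>

/* Payload of a FRA_UID_RANGE attribute: an inclusive range of UIDs. */
struct fib_rule_uid_range {
	__u32	start;
	__u32	end;
};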
Bug: 16355602
Change-Id: Iea98e6fedd0fd4435a1f4efa3deb3629505619ab
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Protocol sockets (struct sock) don't have UIDs, but most of the
time, they map 1:1 to userspace sockets (struct socket) which do.
Various operations such as the iptables xt_owner match need
access to the "UID of a socket", and do so by following the
backpointer to the struct socket. This involves taking
sk_callback_lock and doesn't work when there is no socket
because userspace has already called close().
Simplify this by adding a sk_uid field to struct sock whose value
matches the UID of the corresponding struct socket. The semantics
are as follows:
1. Whenever sk_socket is non-null: sk_uid is the same as the UID
in sk_socket, i.e., matches the return value of sock_i_uid.
Specifically, the UID is set when userspace calls socket(),
fchown(), or accept().
2. When sk_socket is NULL, sk_uid is defined as follows:
- For a socket that no longer has a sk_socket because
userspace has called close(): the previous UID.
- For a cloned socket (e.g., an incoming connection that is
established but on which userspace has not yet called
accept): the UID of the socket it was cloned from.
- For a socket that has never had an sk_socket: UID 0 inside
the user namespace corresponding to the network namespace
the socket belongs to.
Kernel sockets created by sock_create_kern are a special case
of #1 and sk_uid is the user that created them. For kernel
sockets created at network namespace creation time, such as the
per-processor ICMP and TCP sockets, this is the user that created
the network namespace.
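A sketch of where the field gets its value under these rules (names and placement approximate, not the exact diff):

#include <linux/net.h>
#include <linux/uidgid.h>
#include <net/sock.h>

static void sk_init_uid(struct sock *sk, struct socket *sock)
{
	if (sock)	/* rule 1: mirror the owning socket inode's UID */
		sk->sk_uid = SOCK_INODE(sock)->i_uid;
	else		/* rule 2: UID 0 in the netns's user namespace */
		sk->sk_uid = make_kuid(sock_net(sk)->user_ns, 0);
}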
Bug: 16355602
Change-Id: Idbc3e9a0cec91c4c6e01916b967b6237645ebe59
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(Cherry picked from commit 7cc8cbcf82d165dd658d89a7a287140948e76413)
Commit 4dffbfc48d65 ("arm64/efi: mark UEFI reserved regions as
MEMBLOCK_NOMAP") updated the mapping logic of both the RuntimeServices
regions as well as the kernel's copy of the UEFI memory map to set the
MEMBLOCK_NOMAP flag, which causes these regions to be omitted from the
kernel direct mapping, and from being covered by a struct page.
For the RuntimeServices regions, this is an obvious win, since the contents
of these regions have significance to the firmware executable code itself,
and are mapped in the EFI page tables using attributes that are described in
the UEFI memory map, and which may differ from the attributes we use for
mapping system RAM. It also prevents the contents from being modified
inadvertently, since the EFI page tables are only live during runtime
service invocations.
None of these concerns apply to the allocation that covers the UEFI memory
map, since it is entirely owned by the kernel. Setting the MEMBLOCK_NOMAP flag
on the region did allow us to use ioremap_cache() to map it both on arm64 and
on ARM, since the latter does not allow ioremap_cache() to be used on
regions that are covered by a struct page.
The ioremap_cache() on ARM restriction will be lifted in the v4.7 timeframe,
but in the mean time, it has been reported that commit 4dffbfc48d65 causes
a regression on 64k granule kernels. This is due to the fact that, given
the 64 KB page size, the region that we end up removing from the kernel
direct mapping is rounded up to 64 KB, and this 64 KB page frame may be
shared with the initrd when booting via GRUB (which does not align its
EFI_LOADER_DATA allocations to 64 KB like the stub does). This will crash
the kernel as soon as it tries to access the initrd.
Since the issue is specific to arm64, revert back to memblock_reserve()'ing
the UEFI memory map when running on arm64. This is a temporary fix for v4.5
and v4.6, and will be superseded in the v4.7 timeframe when we will be able
to move back to memblock_reserve() unconditionally.
Fixes: 4dffbfc48d65 ("arm64/efi: mark UEFI reserved regions as MEMBLOCK_NOMAP")
Reported-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Jeremy Linton <jeremy.linton@arm.com>
Cc: Mark Langsdorf <mlangsdo@redhat.com>
Cc: <stable@vger.kernel.org> # v4.5
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Fixes: Change-Id: Ia3ce78f40f8d41a9afdd42238fe9cbfd81bbff08
("UPSTREAM: arm64/efi: mark UEFI reserved regions as MEMBLOCK_NOMAP")
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
(Cherry picked from commit 0106d456c4cb1770253fefc0ab23c9ca760b43f7)
Commit 66dbd6e61a52 ("arm64: Implement ptep_set_access_flags() for
hardware AF/DBM") ensured that pte flags are updated atomically in the
face of potential concurrent, hardware-assisted updates. However, Alex
reports that:
| This patch breaks swapping for me.
| In the broken case, you'll see either systemd cpu time spike (because
| it's stuck in a page fault loop) or the system hang (because the
| application owning the screen is stuck in a page fault loop).
It turns out that this is because the 'dirty' argument to
ptep_set_access_flags is always 0 for read faults, and so we can't use
it to set PTE_RDONLY. The failing sequence is:
1. We put down a PTE_WRITE | PTE_DIRTY | PTE_AF pte
2. Memory pressure -> pte_mkold(pte) -> clear PTE_AF
3. A read faults due to the missing access flag
4. ptep_set_access_flags is called with dirty = 0, due to the read fault
5. pte is then made PTE_WRITE | PTE_DIRTY | PTE_AF | PTE_RDONLY (!)
6. A write faults, but pte_write is true so we get stuck
The solution is to check the new page table entry (as would be done by
the generic, non-atomic definition of ptep_set_access_flags that just
calls set_pte_at) to establish the dirty state.
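A sketch of the resulting logic (condensed; not the literal upstream hunk):

#include <asm/pgtable.h>

static pteval_t sanitise_access_flags(pte_t entry)
{
	/* Keep only the access flag and write/dirty state of the new pte. */
	pteval_t val = pte_val(entry) & (PTE_AF | PTE_WRITE | PTE_DIRTY);

	/*
	 * Decide PTE_RDONLY from the new entry itself, not from the 'dirty'
	 * argument, so a read fault cannot mark a dirty, writable pte
	 * read-only.
	 */
	if (!pte_write(entry) || !pte_sw_dirty(entry))
		val |= PTE_RDONLY;

	return val;
}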
Cc: <stable@vger.kernel.org> # 4.3+
Fixes: 66dbd6e61a52 ("arm64: Implement ptep_set_access_flags() for hardware AF/DBM")
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Alexander Graf <agraf@suse.de>
Tested-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Fixes: Change-Id: Id2a0b0d8eb6e7df6325ecb48b88b8401a5dd09e5
("UPSTREAM: arm64: Implement ptep_set_access_flags() for hardware AF/DBM")
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
(Cherry picked from commit 282aa7051b0169991b34716f0f22d9c2f59c46c4)
The update to the accessed or dirty states for block mappings must be
done atomically on hardware with support for automatic AF/DBM. The
ptep_set_access_flags() function has been fixed as part of commit
66dbd6e61a52 ("arm64: Implement ptep_set_access_flags() for hardware
AF/DBM"). This patch brings pmdp_set_access_flags() in line with the pte
counterpart.
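The shape of the change is roughly as follows (a sketch, assuming the pmd variant simply reuses the atomic pte helper):

#include <linux/mm_types.h>
#include <asm/pgtable.h>

#define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
					unsigned long address, pmd_t *pmdp,
					pmd_t entry, int dirty)
{
	/* Reuse the atomic pte path so block mappings get the same
	 * load/store-exclusive update. */
	return ptep_set_access_flags(vma, address, (pte_t *)pmdp,
				     pmd_pte(entry), dirty);
}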
Fixes: 2f4b829c62 ("arm64: Add support for hardware updates of the access and dirty pte bits")
Cc: <stable@vger.kernel.org> # 4.4.x: 66dbd6e61a52: arm64: Implement ptep_set_access_flags() for hardware AF/DBM
Cc: <stable@vger.kernel.org> # 4.3+
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
(Cherry picked from commit 911f56eeb87ee378f5e215469268a7a2f68a5a8a)
With hardware AF/DBM support, pmd modifications (transparent huge pages)
should be performed atomically using load/store exclusive. The initial
patches defined the get-and-clear function and __HAVE_ARCH_* macro
without the "huge" word, leaving the pmdp_huge_get_and_clear() to the
default, non-atomic implementation.
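A sketch of the missing definition (assuming it simply wraps the atomic pte helper):

#include <linux/mm_types.h>
#include <asm/pgtable.h>

#define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR
static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
					    unsigned long address,
					    pmd_t *pmdp)
{
	/* Atomic get-and-clear via the pte path instead of the generic,
	 * non-atomic default. */
	return pte_pmd(ptep_get_and_clear(mm, address, (pte_t *)pmdp));
}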
Fixes: 2f4b829c62 ("arm64: Add support for hardware updates of the access and dirty pte bits")
Cc: <stable@vger.kernel.org> # 4.3+
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
(Cherry picked from commit 57efac2f7108e3255d0dfe512290c9896f4ed55f)
In spite of its name, CONFIG_DEBUG_RODATA is an important hardening feature
for production kernels, and distros all enable it by default in their
kernel configs. However, since enabling it used to result in more granular,
and thus less efficient kernel mappings, it is not enabled by default for
performance reasons.
However, since commit 2f39b5f91eb4 ("arm64: mm: Mark .rodata as RO"), the
various kernel segments (.text, .rodata, .init and .data) are already
mapped individually, and the only effect of setting CONFIG_DEBUG_RODATA is
that the existing .text and .rodata mappings are updated late in the boot
sequence to have their read-only attributes set, which means that any
performance concerns related to enabling CONFIG_DEBUG_RODATA are no longer
valid.
So from now on, make CONFIG_DEBUG_RODATA default to 'y'
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
The result of "__entry->walt_avg = (__entry->demand << 10)" will exceed
the range of "unsigned int", so it gets truncated and makes the trace
look like the following:
UnityMain-4588 [004] 6029.645672: walt_update_history: 4588(UnityMain): runtime 9928307 samples 1 event 4
demand 9928307 walt 157 pelt 870 (hist: 9928307 9604307 8440077 87392 34144328) cpu 4
UnityMain-4588 [004] 6029.653658: walt_update_history: 4588(UnityMain): runtime 10000000 samples 1 event 4
demand 10000000 walt 165 pelt 886 (hist: 10000000 9955691 6549308 64000 34144328) cpu 4
Fix this by using a u64 type instead of an unsigned int type, which
makes the trace look like the following:
UnityMain-4617 [004] 117.613558: walt_update_history: 4617(UnityMain): runtime 5770597 samples 1 event 4
demand 7038739 walt 720 pelt 680 (hist: 5770597 7680001 8904509 65596 156) cpu 4
UnityMain-4617 [004] 117.633560: walt_update_history: 4617(UnityMain): runtime 9911238 samples 1 event 4
demand 9911238 walt 1014 pelt 769 (hist: 9911238 5770597 7680001 0 1664188058) cpu 4
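For illustration (names hypothetical): the value has to be widened before the shift, e.g.

#include <linux/types.h>

static u64 scaled_walt_demand(u32 demand)
{
	/* demand is on the order of 10^7 ns; shifting left by 10 would
	 * overflow 32-bit arithmetic and truncate the traced value. */
	return (u64)demand << 10;
}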
Signed-off-by: Ke Wang <ke.wang@spreadtrum.com>
- For devices like eMMC, it gives better performance to read more hash
blocks at a time.
- For Android, set the default to 128. For other devices, set it to 1,
which is the same as now.
- Saved ~300 ms of boot-up time on the tested device.
bug: 32246564
Cc: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Keun-young Park <keunyoung@google.com>
Documentation was missing for mono and mono_raw; add it, along with
documentation for the boot clock introduced in this series.
Bug: b/33184060
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelaf@google.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Unlike the monotonic clock, the boot clock as a trace clock will account
for time spent in suspend, which is useful for tracing suspend/resume.
This uses the infrastructure introduced earlier in this series for the
fast boot clock.
Bug: b/33184060
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelaf@google.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
This boot clock can be used as a tracing clock and will account for
suspend time.
To keep it NMI safe since we're accessing from tracing, we're not using a
separate timekeeper with updates to monotonic clock and boot offset
protected with seqlocks. This has the following minor side effects:
(1) It's possible that a timestamp is taken after the boot offset is updated
but before the timekeeper is updated. If this happens, the new boot offset
is added to the old timekeeping making the clock appear to update slightly
earlier:
CPU 0                                        CPU 1
timekeeping_inject_sleeptime64()
  __timekeeping_inject_sleeptime(tk, delta);
                                             timestamp();
  timekeeping_update(tk, TK_CLEAR_NTP...);
(2) On 32-bit systems, the 64-bit boot offset (tk->offs_boot) may be
partially updated. Since the tk->offs_boot update is a rare event, this
should be a rare occurrence which postprocessing should be able to handle.
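A sketch of the resulting accessor (close to the upstream ktime_get_boot_fast_ns(); tk_core is timekeeping-internal state, so this lives in kernel/time/timekeeping.c next to the other fast accessors):

u64 ktime_get_boot_fast_ns_sketch(void)
{
	struct timekeeper *tk = &tk_core.timekeeper;

	/* NMI-safe: the fast monotonic clock plus the cached boot offset. */
	return ktime_get_mono_fast_ns() + ktime_to_ns(tk->offs_boot);
}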
Bug: b/33184060
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelaf@google.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
If the get_user_pages_fast() call in goldfish_pipe_read_write() failed,
it would return while still holding pipe->lock.
goldfish_pipe_read_write() later releases and tries to re-acquire
pipe->lock. If the re-acquire call failed, goldfish_pipe_read_write()
would try to unlock pipe->lock on exit anyway.
This fixes the smatch messages:
drivers/platform/goldfish/goldfish_pipe.c:392 goldfish_pipe_read_write() error: double unlock 'mutex:&pipe->lock'
drivers/platform/goldfish/goldfish_pipe.c:397 goldfish_pipe_read_write() warn: inconsistent returns 'mutex:&pipe->lock'.
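A sketch of the corrected error-path shape (hypothetical helper, not the driver's actual code):

#include <linux/mm.h>
#include <linux/mutex.h>

static int pin_user_page_locked(struct mutex *lock, unsigned long address,
				int write, struct page **page)
{
	int ret = get_user_pages_fast(address, 1, write, page);

	if (ret < 0)
		mutex_unlock(lock);	/* never return with the lock held */

	return ret;
}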
Change-Id: Ifd06a76b32027ca451a001704ade0c5440ed69c4
Signed-off-by: Greg Hackmann <ghackmann@google.com>
drivers/video/fbdev/goldfishfb.c:318:3-8: No need to set .owner here. The core will do it.
Remove .owner field if calls are used which set it automatically
Generated by: scripts/coccinelle/api/platform_no_drv_owner.cocci
CC: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Guenter Roeck <groeck@chromium.org>
The function get_free_pipe_id_locked, called on line 671 inside the lock
taken on line 669, uses GFP_KERNEL. Replace it with GFP_ATOMIC.
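For illustration of the rule being applied (not the driver's code): allocations made with a spinlock held must not sleep, so GFP_ATOMIC replaces GFP_KERNEL.

#include <linux/slab.h>
#include <linux/spinlock.h>

static void *alloc_id_entry(spinlock_t *lock, size_t size)
{
	void *p;

	spin_lock(lock);
	p = kzalloc(size, GFP_ATOMIC);	/* GFP_KERNEL may sleep here */
	spin_unlock(lock);

	return p;
}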
Generated by: scripts/coccinelle/locks/call_kern.cocci
CC: Yurii Zubrytskyi <zyy@google.com>
Signed-off-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Guenter Roeck <groeck@chromium.org>
Android toolchains enable PIC, so explicitly disable it with
-fno-pic (this is the upstream gcc default)
Signed-off-by: Greg Hackmann <ghackmann@google.com>
(cherry picked from commit 892606ece2bebfa5a1ed62e9552cc973707ae9d3)
Change-Id: I1e600363e5d18e459479fe4eb23d76855e16868d
This is the driver code for a redesigned Android pipe.
Currently it works for x86 and x64 emulators with the following
performance results:
ADB push to /dev/null,
Ubuntu,
400 MB file,
times are for 1/10/100 parallel adb commands
x86 adb push: (4.4s / 11.5s / 2m10s) -> (2.8s / 6s / 51s)
x64 adb push: (7s / 15s / too long, 6m+) -> (2.7s / 6.2s / 52s)
ADB pull and push to /data/ show the same percentage of speedup.
More importantly, I don't see any signs of slowdown when running in
parallel with the Antutu benchmark, so it is definitely doing a much
better job at multithreading.
The code features dynamic host detection: old emulator gets
the previous version of the pipe driver code.
Combine follow patch from android-goldfish-3.10
b543285 [pipe] Increase the default pipe buffers size, make it configurable
Signed-off-by: "Yurii Zubrytskyi" <zyy@google.com>
Change-Id: I140d506204cab6e78dd503e5a43abc8886e4ffff
Combine following patches from android-goldfish-3.18 branch:
c0f015a [pipe] Fix the pipe driver for x64 platform + correct pages count
48e6bf5 [pipe] Use get_use_pages_fast() which is possibly faster
fb20f13 [goldfish] More pages in goldfish pipe
f180e6d goldfish_pipe: Return from read_write on signal and EIO
3dec3b7 [pipe] Fix a minor leak in setup_access_params_addr()
Change-Id: I1041fd65d7faaec123e6cedd3dbbc5a2fbb86c4d
This is a kernel driver for controlling the Goldfish sync
device on the host. It is used to maintain ordering
in critical OpenGL state changes while using
GPU emulation.
The guest open()'s the Goldfish sync device to create
a context for possibly maintaining sync timeline and fences.
There is a 1:1 correspondence between such sync contexts
and OpenGL contexts in the guest that need synchronization
(which in turn, is anything involving swapping buffers,
SurfaceFlinger, or Hardware Composer).
The QUEUE_WORK ioctl takes a handle to a sync object
and attempts to tell the host GPU to wait on the sync object
and deal with signaling it. It possibly outputs
a fence FD that the Android components that use them
(GLConsumer, SurfaceFlinger, anything employing
EGL_ANDROID_native_fence_sync) can wait on.
Design decisions and work log:
- New approach is to have the guest issue ioctls that
trigger host wait, and then host increments timeline.
- We need the host's sync object handle and sync thread handle
as the necessary information for that.
- ioctl() from guest can work simultaneously with the
interrupt handling for commands from host.
- optimization: don't write back on timeline inc
- Change spin lock design to be much more lightweight;
do not call sw_sync functions or loop too long
anywhere.
- Send read/write commands in batches to minimize guest/host
transitions.
- robustness: BUG if we would overrun the cmd buffer.
- robustness: return fd -1 if we cannot get an unused fd.
- correctness: remove global mutex
- cleanup pass done, incl. but not limited to:
- removal of clear_upto and
- switching to devm_***
This is part of a sequential, multi-CL change:
external/qemu:
https://android-review.googlesource.com/239442 <- host-side device's
host interface
https://android-review.googlesource.com/221593
https://android-review.googlesource.com/248563
https://android-review.googlesource.com/248564
https://android-review.googlesource.com/223032
external/qemu-android:
https://android-review.googlesource.com/238790 <- host-side device
implementation
kernel/goldfish:
https://android-review.googlesource.com/232631 <- needed
https://android-review.googlesource.com/238399 <- this CL
Also squash following bug fixes from android-goldfish-3.18 branch.
b44d486 goldfish_sync: provide a signal to detect reboot
ad1f597 goldfish_sync: fix stalls by avoiding early kfree()
de208e8 [goldfish-sync] Fix possible race between kernel and user space
Change-Id: I22f8a0e824717a7e751b1b0e1b461455501502b6
The buffer_status field is updated by the interrupt handler. After every
read request, the buffer_status field should be reset so that on the
next loop iteration we don't read a stale value and read data before the
device is ready.
Signed-off-by: "Joshua Lang" <joshualang@google.com>
Change-Id: I4943d5aaada1cad9c7e59a94a87c387578dabe86
If we send SYN_REPORT on every single multitouch event, it breaks
multitouch: input becomes janky, you have to click 2-3 times to do
anything, and notification bars randomly activate without being clicked.
If we suppress these SYN_REPORTs, multitouch works fine, plus the events
follow a protocol that looks nice.
In addition, we need to register Goldfish Events
as a multitouch device by issuing
input_mt_init_slots, otherwise
input_handle_abs_event in drivers/input/input.c
will silently drop all ABS_MT_SLOT events,
making it so that touches with more than 1 finger
do not work properly.
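A sketch of the registration step (the slot count and flags here are assumptions, not the driver's actual values):

#include <linux/input.h>
#include <linux/input/mt.h>

static int events_register_mt(struct input_dev *input)
{
	/* Without this, ABS_MT_SLOT events are silently dropped by
	 * input_handle_abs_event(). */
	return input_mt_init_slots(input, 10 /* assumed finger count */,
				   INPUT_MT_DIRECT);
}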
Signed-off-by: "Lingfeng Yang" <lfy@google.com>
Change-Id: Ib2350f7d1732449d246f6f0d9b7b08f02cc7c2dd
(cherry picked from commit 6cf40d0a16330e1ef42bdf07d9aba6c16ee11fbc)
User space Android code identifies pixclock == 0 as a sign of emulation
and will set the frame rate to 60 fps when reading this value, which is
the desired outcome.
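For illustration (not the driver's actual probe code):

#include <linux/fb.h>

static void goldfish_fb_mark_emulated(struct fb_info *info)
{
	/* pixclock == 0 tells Android userspace to assume emulation
	 * and a 60 fps refresh rate. */
	info->var.pixclock = 0;
}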
Change-Id: I759bf518bf6683446bc786bf1be3cafa02dd8d42
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>