Having the system register numbers as #defines has been a pain
since day one, as the ordering is pretty fragile, and moving
things around leads to renumbering and epic conflict resolutions.
Now that we're mostly accessing the sysreg file in C, an enum is
a much better type to use, and we can clean things up a bit.
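For illustration, a sketch of the idea (not the exact kernel list):

enum vcpu_sysreg {
	__INVALID_SYSREG__,
	MPIDR_EL1,	/* MultiProcessor Affinity Register */
	CSSELR_EL1,	/* Cache Size Selection Register */
	SCTLR_EL1,	/* System Control Register */
	/* ... */
	NR_SYS_REGS	/* Nothing after this line! */
};

Entries are numbered implicitly, so inserting a register no longer
renumbers everything after it by hand.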
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit 9d8415d6c148a16b6d906a96f0596851d7e4d607)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
With VHE, the host never issues an HVC instruction to get into the
KVM code, as we can simply branch there.
Use runtime code patching to simplify things a bit.
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit b81125c791a2958cc60ae968fc1cdea82b7cd3b0)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Add a new ARM64_HAS_VIRT_HOST_EXTN feature to indicate that the
CPU has the ARMv8.1 VHE capability.
This will be used to trigger kernel patching in KVM.
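As a sketch (hedged against the exact table layout), the capability
entry just probes the current exception level:

static bool runs_at_el2(const struct arm64_cpu_capabilities *entry)
{
	/* VHE is usable iff we booted at EL2 and stayed there */
	return is_kernel_in_hyp_mode();
}

static const struct arm64_cpu_capabilities arm64_features[] = {
	/* ... */
	{
		.desc = "Virtualization Host Extensions",
		.capability = ARM64_HAS_VIRT_HOST_EXTN,
		.matches = runs_at_el2,
	},
	{},
};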
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit d88701bea3664cea99b8b7380f63a3bd0ec3ead3)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
arch/arm64/include/asm/cpufeature.h
With the ARMv8.1 VHE extension, it will be possible to run the kernel
at EL2 (aka HYP mode). In order for the kernel to easily find out
where it is running, add a new predicate that returns whether or
not the kernel is in HYP mode.
For completeness, the 32bit code also gets such a predicate (always
returning false) so that code common to both architectures (timers,
KVM) can use it transparently.
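A minimal sketch of the arm64 predicate: read CurrentEL and compare
it against EL2.

static inline bool is_kernel_in_hyp_mode(void)
{
	u64 el;

	asm("mrs %0, CurrentEL" : "=r" (el));
	return el == CurrentEL_EL2;
}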
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 82deae0fc8ba256c1061dd4de42f0ef6cb9f954f)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
arch/arm/include/asm/virt.h
This is it. We remove all of the code that has now been rewritten.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit 1ea66d27e7b01086669ff2abdc3ac89dc90eae51)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
In order to run C code in HYP, we must make sure that the kernel's
RO section is mapped into HYP (otherwise things break badly).
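In practice this amounts to one more mapping call at init time, along
these lines (a sketch; the section bounds come from the linker script):

	err = create_hyp_mappings(__start_rodata, __end_rodata);
	if (err) {
		kvm_err("Cannot map rodata section\n");
		goto out_err;
	}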
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit 910917bb7db070cc67557a6b3c8fcceaa5c398a7)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
So far, we've implemented the new world switch with a completely
different namespace, so that we could have both implementations
compiled in.
Let's take things one step further by adding weak aliases that
have the same names as the original implementation. The weak
attribute allows the new implementation to be overridden by the
old one, and everything still works.
At a later point, we'll be able to simply drop the old code, and
everything will hopefully keep working, thanks to the aliases we
have just added. This also saves us repainting all the callers.
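The mechanism, sketched on made-up names (illustrative only, not the
actual hunk):

/* the rewritten function, living in its own namespace */
void __new_impl(struct kvm_vcpu *vcpu)
{
	/* ... new C implementation ... */
}

/* export it under the old name; a strong (old) definition,
 * if still compiled in, wins at link time */
void __old_name(struct kvm_vcpu *vcpu)
	__attribute__((weak, alias("__new_impl")));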
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit 044ac37d1281fc7b59d5dce4fe979a99369e95f2)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Implement the vgic-v3 save/restore as a direct translation of
the assembly code version.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit f68d2b1b73cc3d8f6eb189c11ce79a472ed27c42)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
arch/arm64/kvm/hyp/Makefile
arch/arm64/kvm/hyp/hyp.h
Add the panic handler, together with the small bits of assembly
code to call the kernel's panic implementation.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit 53fd5b6487e4438049a5da5e36dfb8edcf1fd789)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Add the entry points for HYP mode (both for hypercalls and
exception handling).
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit 2b28162cf65a6fe1c93d172675e4f2792792f17e)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Implement the TLB handling as a direct translation of the assembly
code version.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit 5eec0a91e32a2862e86265532ae773820e0afd77)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Implement the fpsimd save/restore, keeping the lazy part in
assembler (as returning to C would be overkill).
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit c13d1683df16db16c91372177ca10c31677b5ed5)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Implement the core of the world switch in C. Not everything is there
yet, and there is nothing to re-enter the world switch either.
But this already outlines the code structure well enough.
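A condensed sketch of that structure (names follow the new hyp/ code;
vgic, timer and FP handling are elided):

static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
{
	struct kvm_cpu_context *host_ctxt;
	struct kvm_cpu_context *guest_ctxt;
	u64 exit_code;

	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
	guest_ctxt = &vcpu->arch.ctxt;

	__sysreg_save_state(host_ctxt);
	__activate_traps(vcpu);
	__activate_vm(vcpu);
	__sysreg_restore_state(guest_ctxt);

	/* jump into the guest, and come back with an exit code */
	exit_code = __guest_enter(vcpu, host_ctxt);

	/* unwind in the opposite order */
	__sysreg_save_state(guest_ctxt);
	__deactivate_traps(vcpu);
	__sysreg_restore_state(host_ctxt);

	return exit_code;
}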
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit be901e9b15cd2c8e48dc089b4655ea4a076e66fd)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
KVM so far relies on code patching, and is likely to use it more
in the future. The main issue is that our alternative system works
at the instruction level, while we'd like to have alternatives at
the function level.
In order to cope with this, add the "hyp_alternate_select" macro that
outputs a brief sequence of code that in turn can be patched, allowing
an alternative function to be selected.
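The shape of the macro (a sketch, hedged against the exact upstream
spelling): it emits a tiny selector whose body the alternative
framework can patch.

#define hyp_alternate_select(fname, orig, alt, cond)			\
typeof(orig) * __hyp_text fname(void)					\
{									\
	typeof(alt) *val = orig;					\
	asm volatile(ALTERNATIVE("nop		\n",			\
				 "mov	%0, %1	\n",			\
				 cond)					\
		     : "+r" (val) : "r" (alt));				\
	return val;							\
}

Unpatched, the nop leaves the default function pointer in place;
patched, the mov substitutes the alternative one.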
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit c1bf6e18e97e7ead77371d4251f8ef1567455584)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Contrary to the previous patch, the guest entry is fairly different
from its assembly counterpart, mostly because it is only concerned
with saving/restoring the GP registers, and nothing else.
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit b97b66c14b96ab562e4fd516d804c5cd05c0529e)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Implement the debug save/restore as a direct translation of
the assembly code version.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit 8eb992674c9e69d57af199f36b6455dbc00ac9f9)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Implement the 32bit system register save/restore as a direct
translation of the assembly code version.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit c209ec85a2a7d2fd38bca0a44b7e70abd079c178)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Implement the system register save/restore as a direct translation of
the assembly code version.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
(cherry picked from commit 6d6ec20fcf2830ca10c1b7c8efd7e2592c40e3d6)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Implement the timer save/restore as a direct translation of
the assembly code version.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 1431af367e52b08038e78d346822966d968f1694)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
arch/arm64/kvm/hyp/Makefile
arch/arm64/kvm/hyp/hyp.h
Implement the vgic-v2 save/restore (mostly) as a direct translation
of the assembly code version.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 06282fd2c2bf61619649a2b13e4a08556598a64c)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
In order to expose the various EL2 services that are private to
the hypervisor, add a new hyp.h file.
So far, it only contains mundane things such as section annotation
and VA manipulation.
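A taste of said mundane things (a sketch; the mask is assumed from the
existing HYP VA scheme):

#define __hyp_text __section(.hyp.text)

/* turn a kernel VA into a HYP VA */
#define kern_hyp_va(v) (typeof(v))((unsigned long)(v) & HYP_PAGE_OFFSET_MASK)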
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit c76a0a6695c61088c8d2e731e25305502666bf7d)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
We store GICv3 LRs in reverse order so that the CPU can save/restore
them in reverse order as well (don't ask why, the design is crazy),
and yet generate memory traffic that doesn't completely suck.
We need this macro to be available to the C version of save/restore.
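The macro is of roughly this shape (a loose sketch assuming 16 list
registers; the exact symbols may differ):

/* LR n lives at slot (15 - n): the CPU walks the LRs downwards
 * while the memory accesses remain ascending */
#define LR_OFFSET(n)	(VGIC_V3_CPU_LR + (15 - (n)) * 8)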
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 3c13b8f435acb452eac62d966148a8b6fa92151f)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Rather than crafting custom macros for reading/writing each system
register, provide generic accessors, read_sysreg and write_sysreg, for
this purpose.
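The accessors boil down to a stringified mrs/msr (sketched here,
modulo details; __stringify is the kernel's stringification helper):

#define read_sysreg(r) ({					\
	u64 __val;						\
	asm volatile("mrs %0, " __stringify(r) : "=r" (__val));	\
	__val;							\
})

#define write_sysreg(v, r) do {					\
	u64 __val = (u64)(v);					\
	asm volatile("msr " __stringify(r) ", %0"		\
		     : : "r" (__val));				\
} while (0)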
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 3600c2fdc09a43a30909743569e35a29121602ed)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
In systems with three levels of cache (PoU at L1 and PoC at L3),
PoC cache flush instructions flush the L2 and L3 caches, which could
affect performance.
For cache flushes for I and D coherency, the PoU should suffice, so
change all I and D coherency related cache flushes to operate to the
PoU.
Introduce a new __clean_dcache_area_pou API to clean the D-cache to
the PoU, and provide a common macro for __flush_dcache_area and
__clean_dcache_area_pou.
Also, in __sync_icache_dcache, I-cache invalidation for a non-aliasing
VIPT I-cache is now done only for that particular page instead of the
earlier __flush_icache_all.
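The new API mirrors __flush_dcache_area, but only cleans to the PoU
(the prototype as assumed here):

/* clean the D-cache to the Point of Unification: enough for I/D
 * coherency, and cheaper than a clean to the PoC */
extern void __clean_dcache_area_pou(void *addr, size_t len);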
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 0a28714c53fd4f7aea709be7577dfbe0095c8c3e)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
arch/arm64/mm/proc-macros.S
If page tables are modified without suitable TLB maintenance, the ARM
architecture permits multiple TLB entries to be allocated for the same
VA. When this occurs, it is permitted that TLB conflict aborts are
raised in response to synchronous data/instruction accesses, and/or an
amalgamation of the TLB entries may be used as a result of a TLB lookup.
The presence of conflicting TLB entries may result in a variety of
behaviours detrimental to the system (e.g. erroneous physical addresses
may be used by I-cache fetches and/or page table walks). Some of these
cases may result in unexpected changes of hardware state, and/or result
in the (asynchronous) delivery of SError.
To avoid these issues, we must avoid situations where conflicting
entries may be allocated into TLBs. For user and module mappings we can
follow a strict break-before-make approach, but this cannot work for
modifications to the swapper page tables that cover the kernel text and
data.
Instead, this patch adds code which is intended to be executed from the
idmap, which can safely unmap the swapper page tables as it only
requires the idmap to be active. This enables us to uninstall the active
TTBR1_EL1 entry, invalidate TLBs, then install a new TTBR1_EL1 entry
without potentially unmapping code or data required for the sequence.
This avoids the risk of conflict, but requires that updates are staged
in a copy of the swapper page tables prior to being installed.
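The resulting sequence looks roughly like this (a sketch of the
helper; the replacement routine itself runs out of the idmap):

static inline void cpu_replace_ttbr1(pgd_t *pgd)
{
	typedef void (ttbr_replace_func)(phys_addr_t);
	extern ttbr_replace_func idmap_cpu_replace_ttbr1;
	ttbr_replace_func *replace_phys;

	phys_addr_t pgd_phys = virt_to_phys(pgd);

	/* call via the physical address: only the idmap is live
	 * while the routine executes */
	replace_phys = (void *)virt_to_phys(&idmap_cpu_replace_ttbr1);

	cpu_install_idmap();
	replace_phys(pgd_phys);
	cpu_uninstall_idmap();
}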
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 50e1881ddde2a986c7d0d2150985239e5e3d7d96)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
We currently place __cpu_setup in .text.init, which ends up being part
of .text. The .text.init name is a legacy section name which has been
unused elsewhere for a long time.
The ".text.init" name is misleading if read as a synonym for
".init.text". Any CPU may execute __cpu_setup before turning the MMU on,
so it should simply live in .text.
Remove the pointless section assignment. This will leave __cpu_setup in
the .text section.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit f00083cae331e5d3eecade6b4fdc35d0825e73ef)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
In some cases (e.g. when making invasive changes to the kernel page
tables) we will need to execute code from the idmap.
Add a new helper which may be used to install the idmap, complementing
the existing cpu_uninstall_idmap.
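A sketch of the helper, mirroring cpu_uninstall_idmap in reverse:

static inline void cpu_install_idmap(void)
{
	cpu_set_reserved_ttbr0();
	local_flush_tlb_all();
	cpu_set_idmap_tcr_t0sz();

	cpu_switch_mm(idmap_pg_dir, &init_mm);
}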
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 609116d202a8c5fd3fe393eb85373cbee906df68)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
During boot we leave the idmap in place until paging_init, as we
previously had to wait for the zero page to become allocated and
accessible.
Now that we have a statically-allocated zero page, we can uninstall the
idmap much earlier in the boot process, making it far easier to spot
accidental use of physical addresses. This also brings the cold boot
path in line with the secondary boot path.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 86ccce896cb0aa800a7a6dcd29b41ffc4eeb1a75)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
We currently open-code the removal of the idmap and restoration of the
current task's MMU state in a few places.
Before introducing yet more copies of this sequence, unify these to call
a new helper, cpu_uninstall_idmap.
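The unified sequence, sketched:

static inline void cpu_uninstall_idmap(void)
{
	struct mm_struct *mm = current->active_mm;

	cpu_set_reserved_ttbr0();
	local_flush_tlb_all();
	cpu_set_default_tcr_t0sz();

	if (mm != &init_mm)
		cpu_switch_mm(mm->pgd, mm);
}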
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 9e8e865bbe294a69666a1996bda3e87825b258c0)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Currently the zero page is set up in paging_init, and thus we cannot use
the zero page earlier. We use the zero page as a reserved TTBR value
from which no TLB entries may be allocated (e.g. when uninstalling the
idmap). To enable such usage earlier (as may be required for invasive
changes to the kernel page tables), and to minimise the time that the
idmap is active, we need to be able to use the zero page before
paging_init.
This patch follows the example set by x86, by allocating the zero page
at compile time, in .bss. This means that the zero page itself is
available immediately upon entry to start_kernel (as we zero .bss before
this), and also means that the zero page takes up no space in the raw
Image binary. The associated struct page is allocated in bootmem_init,
and remains unavailable until this time.
Outside of arch code, the only users of empty_zero_page assume that the
empty_zero_page symbol refers to the zeroed memory itself, and that
ZERO_PAGE(x) must be used to acquire the associated struct page,
following the example of x86. This patch also brings arm64 in line
with these assumptions.
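Concretely (hedged against the exact annotations), the zero page
becomes a page-aligned .bss object:

/* usable as soon as .bss has been zeroed, and taking no space
 * in the raw Image */
unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
	__page_aligned_bss;
EXPORT_SYMBOL(empty_zero_page);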
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 5227cfa71f9e8574373f4d0e9e754942d76cdf67)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Currently we use an open-coded memzero to clear the BSS. As it is a
trivial implementation, it is sub-optimal.
Our optimised memset doesn't use the stack, is position-independent, and
for the memzero case can make use of DC ZVA to clear large blocks
efficiently. In __mmap_switched the MMU is on and there are no live
caller-saved registers, so we can safely call an uninstrumented memset.
This patch changes __mmap_switched to use memset when clearing the BSS.
We use the __pi_memset alias so as to avoid any instrumentation in all
kernel configurations.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 2a803c4db615d85126c5c7afd5849a3cfde71422)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
We pass a size parameter to early_alloc and late_alloc, but these are
only ever used to allocate single pages. In late_alloc we always
allocate a single page.
Both allocators provide us with zeroed pages (such that all entries are
invalid), but we have no barriers between allocating a page and adding
that page to existing (live) tables. A concurrent page table walk may
see stale data, leading to a number of issues.
This patch specialises the two allocators for page tables. The size
parameter is removed and the necessary dsb(ishst) is folded into each.
To make it clear that the functions are intended for use for page table
allocation, they are renamed to {early,late}_pgtable_alloc, with the
related function pointer renamed to pgtable_alloc.
As the dsb(ishst) is now in the allocator, the existing barrier for the
zero page is redundant and thus is removed. The previously missing
include of barrier.h is added.
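In shape, each allocator becomes something like this (a sketch;
alloc_zeroed_pgtable_page() is a hypothetical stand-in for the real
memblock/GFP call):

static phys_addr_t early_pgtable_alloc(void)
{
	/* hypothetical helper returning a zeroed page */
	phys_addr_t phys = alloc_zeroed_pgtable_page();

	/* make the zeroed entries visible to a concurrent table
	 * walker before the page is linked into a live table */
	dsb(ishst);

	return phys;
}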
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 21ab99c289d350f4ae454bc069870009db6df20e)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
As pgd_offset{,_k} shift the input address by PGDIR_SHIFT, the sub-page
bits will always be shifted out. There is no need to apply PAGE_MASK
before this.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit e2c30ee320eb96304896c7ab84499e5bc5e5fb6e)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Currently __set_fixmap_offset is a macro function which has a local
variable called 'addr'. If a caller passes a 'phys' parameter which is
derived from a variable also called 'addr', the local variable will
shadow this, and the compiler will complain about the use of an
uninitialized variable. To avoid the issue with namespace clashes,
'addr' is prefixed with a liberal sprinkling of underscores.
Turning __set_fixmap_offset into a static inline breaks the build for
several architectures. Fixing this properly requires updates to a number
of architectures to make them agree on the prototype of __set_fixmap (it
could be done as a subsequent patch series).
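The macro fix amounts to renaming the local so that no plausible
caller variable can collide with it (sketched):

#define __set_fixmap_offset(idx, phys, flags)				\
({									\
	unsigned long ________addr;					\
	__set_fixmap(idx, phys, flags);					\
	________addr = fix_to_virt(idx) + ((phys) & (PAGE_SIZE - 1));	\
	________addr;							\
})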
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
[catalin.marinas@arm.com: squashed the original function patch and macro fixup]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 3694bd76781b76c4f8d2ecd85018feeb1609f0e5)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Currently we treat the alternatives separately from other data that's
only used during initialisation, using separate .altinstructions and
.altinstr_replacement linker sections. These are freed for general
allocation separately from .init*. This is problematic as:
* We do not remove execute permissions, as we do for .init, leaving the
memory executable.
* We pad between them, making the kernel Image binary up to PAGE_SIZE
bytes larger than necessary.
This patch moves the two sections into the contiguous region used for
.init*. This saves some memory, ensures that we remove execute
permissions, and allows us to remove some code made redundant by this
reorganisation.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 9aa4ec1571da62366cfddc20f3b923609604fe63)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
ARM64 PSCI kernel interfaces that initialize idle states and implement
the suspend API to enter them are generic and can be shared with the
ARM architecture.
To achieve that goal, this patch moves ARM64 PSCI idle management
code to drivers/firmware, so that the interface to initialize and
enter idle states can actually be shared by ARM and ARM64 arches
back-ends.
The ARM generic CPUidle implementation also requires the definition of
a cpuidle_ops section entry for the kernel to initialize the CPUidle
operations at boot based on the enable-method (i.e. ARM64 has the
statically initialized cpu_ops counterparts for that purpose); therefore
this patch also adds the required section entry on CONFIG_ARM for PSCI so
that the kernel can initialize the PSCI CPUidle back-end when PSCI is
the probed enable-method.
On ARM64 this patch provides no functional change.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arch/arm64]
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jisheng Zhang <jszhang@marvell.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
(cherry picked from commit 8b6f2499ac45d5a0ab2e4b6f9613ab3f60416be1)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Switch to using a generic interface for issuing SMC/HVC based on the
ARM SMC Calling Convention. Remove the now-unused psci-call.S.
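Each PSCI invocation function reduces to a thin wrapper around the
generic call; the SMC flavour, sketched:

static unsigned long __invoke_psci_fn_smc(unsigned long function_id,
			unsigned long arg0, unsigned long arg1,
			unsigned long arg2)
{
	struct arm_smccc_res res;

	arm_smccc_smc(function_id, arg0, arg1, arg2, 0, 0, 0, 0, &res);
	return res.a0;
}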
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
(cherry picked from commit e679660dbb8347f275fe5d83a5dd59c1fb6c8e63)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Adds implementation for arm-smccc and enables CONFIG_HAVE_ARM_SMCCC.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
(cherry picked from commit 14457459f9ca2ff8521686168ea179edc3a56a44)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
arch/arm64/kernel/arm64ksyms.c
Adds implementation for arm-smccc and enables CONFIG_HAVE_ARM_SMCCC for
architectures that may support arm-smccc. It's the responsibility of the
caller to know if the SMC instruction is supported by the platform.
Reviewed-by: Lars Persson <lars.persson@axis.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
(cherry picked from commit b329f95d70f3f955093e9a2b18ac1ed3587a8f73)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Adds helpers to do SMC and HVC based on the ARM SMC Calling Convention.
CONFIG_HAVE_ARM_SMCCC is enabled for architectures that may support the
SMC or HVC instruction. It's the responsibility of the caller to know if
the SMC instruction is supported by the platform.
This patch doesn't provide an implementation of the declared functions.
Later patches will bring in implementations and set
CONFIG_HAVE_ARM_SMCCC for ARM and ARM64 respectively.
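The declared interface (as assumed here): a result structure covering
the four return registers, plus one helper per conduit:

/* results from SMC/HVC: registers x0-x3 (r0-r3 on arm) */
struct arm_smccc_res {
	unsigned long a0;
	unsigned long a1;
	unsigned long a2;
	unsigned long a3;
};

asmlinkage void arm_smccc_smc(unsigned long a0, unsigned long a1,
			unsigned long a2, unsigned long a3, unsigned long a4,
			unsigned long a5, unsigned long a6, unsigned long a7,
			struct arm_smccc_res *res);

asmlinkage void arm_smccc_hvc(unsigned long a0, unsigned long a1,
			unsigned long a2, unsigned long a3, unsigned long a4,
			unsigned long a5, unsigned long a6, unsigned long a7,
			struct arm_smccc_res *res);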
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
(cherry picked from commit 98dd64f34f47ce19b388d9015f767f48393a81eb)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
The code enabled by the ARM_CPU_SUSPEND config option is used by
kernel subsystems for purposes that go beyond system suspend, so its
config entry should be augmented to take more default options into
account and to avoid forcing its selection, which would override its
dependencies.
To achieve this goal, this patch reworks the ARM_CPU_SUSPEND config
entry and updates its default config value (by adding the BL_SWITCHER
option to it) and its dependencies (ARCH_SUSPEND_POSSIBLE), so that the
symbol is still selected by default by the subsystems requiring it and
at the same time enforcing the dependencies correctly.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
(cherry picked from commit 1b9bdf5c1661873a10e193b8cbb803a87fe5c4a1)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
It is not possible to build the bL_switcher code if the GIC
driver is disabled, because it relies on calling into some
GIC-specific interfaces, and that would result in this build
error:
arch/arm/common/built-in.o: In function `bL_switch_to':
:(.text+0x1230): undefined reference to `gic_get_sgir_physaddr'
:(.text+0x1244): undefined reference to `gic_send_sgi'
:(.text+0x1268): undefined reference to `gic_migrate_target'
arch/arm/common/built-in.o: In function `bL_switcher_enable.part.4':
:(.text.unlikely+0x2f8): undefined reference to `gic_get_cpu_id'
This adds a Kconfig dependency to ensure we only build the big-little
switcher if the GIC driver is present as well.
Almost all ARMv7 platforms come with a GIC anyway, but it is possible
to build a kernel that disables all platforms.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
(cherry picked from commit 6c044fecdf78be3fda159a5036bb33700cdd5e59)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
commit ad05711cec12131e1277ce749a99d08ecf233aa7 upstream.
Because the arm64 calling standard allows stacked function arguments to be
anywhere in the stack frame, do not attempt to duplicate the stack frame for
jprobes handler functions.
Documentation changes to describe this issue have been broken out into a
separate patch in order to simultaneously address them in other
architecture(s).
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
commit 3b7d14e9f3f1efd4c4348800e977fd1ce4ca660e upstream.
jprobe_return seems to have aged badly. Comments referring to
non-existent behaviours, and a dangerous habit of messing
with registers without telling the compiler.
This patch applies the following remedies:
- Fix the comments to describe the actual behaviour
- Tidy up the asm sequence to directly assign the
stack pointer without clobbering extra registers
- Mark the rest of the function as unreachable() so
that the compiler knows that there is no need for
an epilogue
- Stop making jprobe_return_break a global function
(you really don't want to call that guy, and it isn't
even a function).
Tested with tcp_probe.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
commit ab4c1325d4bf111a590a1f773e3d93bde7f40201 upstream.
The MIN_STACK_SIZE macro tries to evaluate how much stack space needs
to be saved in the jprobes_stack array, sized at 128 bytes.
When using the IRQ stack, said macro can happily return up to
IRQ_STACK_SIZE, which is 16kB. Mayhem follows.
This patch fixes things by getting rid of the crazy macro and
limiting the copy to be at most the size of the jprobes_stack
array, no matter which stack we're on.
[dave.long@linaro.org: Since there is no irq_stack in this kernel version
this fix is not strictly necessary, but is included for completeness.]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
commit 44bd887ce10eb8061f6a137f8a73f823957edd82 upstream.
Stepping with PSTATE.D=1 is bad news. The step won't generate a debug
exception and we'll likely walk off into random data structures. This
should never happen, but when it does, it's a PITA to debug. Add a
WARN_ON to shout if we realise this is about to take place.
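The check itself is a one-liner (sketched; its exact placement in the
single-step enable path is elided):

	/* a step with PSTATE.D set won't raise a debug exception:
	 * shout rather than walk off into random data structures */
	WARN_ON(regs->pstate & PSR_D_BIT);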
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
commit af78cede8bfc772baf424fc03f7cd3c8f9437733 upstream.
Add info prints in sample kprobe handlers for ARM64
Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
commit fcfd708b8cf86b8c1ca6ce014d50287f61c0eb88 upstream.
The pre-handler of this special 'trampoline' kprobe executes the return
probe handler functions and restores original return address in ELR_EL1.
This way the saved pt_regs still hold the original register context to be
carried back to the probed kernel function.
Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
commit da6a91252ad98d49b49e83b76c1f032cdf6e5258 upstream.
The trampoline code is used by kretprobes to capture a return from a probed
function. This is done by saving the registers, calling the handler, and
restoring the registers. The code then returns to the original saved caller
return address. It is necessary to do this directly instead of using a
software breakpoint because the code used in processing that breakpoint
could itself be kprobe'd and cause a problematic reentry into the debug
exception handler.
Signed-off-by: William Cohen <wcohen@redhat.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
[catalin.marinas@arm.com: removed unnecessary masking of the PSTATE bits]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>