Merge 7207889184 on remote branch

Change-Id: I219a5f0e8bd6ee3be3ba0d49230dde908d35dc25

commit 33dc678e76

539 changed files with 8437 additions and 4644 deletions
Changed paths:

  Documentation
  Makefile
  arch
    arc/kernel
    arm
      boot
        compressed
        dts
      include/asm
      kernel
        Makefile  bugs.c  entry-common.S  entry-header.S  head-common.S
        setup.c  signal.c  smp.c  suspend.c  sys_oabi-compat.c
      lib
      mach-imx
      mm
      vfp
    arm64
      boot/dts/amd
      kernel
      lib
    mips
      bcm47xx
      bcm63xx
      fw/sni
      include/asm
      kernel
      txx9/generic
    powerpc
    s390
    sparc/include/asm
    um/drivers
    x86
@@ -279,6 +279,8 @@ What: /sys/devices/system/cpu/vulnerabilities
 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
 		/sys/devices/system/cpu/vulnerabilities/l1tf
 		/sys/devices/system/cpu/vulnerabilities/mds
+		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
 Date:		January 2018
 Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
 Description:	Information about CPU vulnerabilities
@@ -262,8 +262,11 @@ time with the option "mds=". The valid arguments for this option are:

   ============  =============================================================

-Not specifying this option is equivalent to "mds=full".
-
+Not specifying this option is equivalent to "mds=full". For processors
+that are affected by both TAA (TSX Asynchronous Abort) and MDS,
+specifying just "mds=off" without an accompanying "tsx_async_abort=off"
+will have no effect as the same mitigation is used for both
+vulnerabilities.

 Mitigation selection guide
 --------------------------
Documentation/hw-vuln/tsx_async_abort.rst (new file, 271 lines)

@@ -0,0 +1,271 @@
.. SPDX-License-Identifier: GPL-2.0

TAA - TSX Asynchronous Abort
============================

TAA is a hardware vulnerability that allows unprivileged speculative access to
data which is available in various CPU internal buffers by using asynchronous
aborts within an Intel TSX transactional region.

Affected processors
-------------------

This vulnerability only affects Intel processors that support Intel
Transactional Synchronization Extensions (TSX) when the TAA_NO bit (bit 8)
is 0 in the IA32_ARCH_CAPABILITIES MSR. On processors where the MDS_NO bit
(bit 5) is 0 in the IA32_ARCH_CAPABILITIES MSR, the existing MDS mitigations
also mitigate against TAA.

Whether a processor is affected or not can be read out from the TAA
vulnerability file in sysfs. See :ref:`tsx_async_abort_sys_info`.

Related CVEs
------------

The following CVE entry is related to this TAA issue:

   ==============  =====  ===================================================
   CVE-2019-11135  TAA    TSX Asynchronous Abort (TAA) condition on some
                          microprocessors utilizing speculative execution may
                          allow an authenticated user to potentially enable
                          information disclosure via a side channel with
                          local access.
   ==============  =====  ===================================================

Problem
-------

When performing store, load or L1 refill operations, processors write
data into temporary microarchitectural structures (buffers). The data in
those buffers can be forwarded to load operations as an optimization.

Intel TSX is an extension to the x86 instruction set architecture that adds
hardware transactional memory support to improve performance of multi-threaded
software. TSX lets the processor expose and exploit concurrency hidden in an
application due to dynamically avoiding unnecessary synchronization.

TSX supports atomic memory transactions that are either committed (success) or
aborted. During an abort, operations that happened within the transactional
region are rolled back. An asynchronous abort takes place, among other options,
when a different thread accesses a cache line that is also used within the
transactional region when that access might lead to a data race.

Immediately after an uncompleted asynchronous abort, certain speculatively
executed loads may read data from those internal buffers and pass it to
dependent operations. This can then be used to infer the value via a cache
side channel attack.

Because the buffers are potentially shared between Hyper-Threads, cross
Hyper-Thread attacks are possible.

The victim of a malicious actor does not need to make use of TSX. Only the
attacker needs to begin a TSX transaction and raise an asynchronous abort
which in turn potentially leaks data stored in the buffers.

More detailed technical information is available in the TAA specific x86
architecture section: :ref:`Documentation/x86/tsx_async_abort.rst <tsx_async_abort>`.


Attack scenarios
----------------

Attacks against the TAA vulnerability can be implemented from unprivileged
applications running on hosts or guests.

As for MDS, the attacker has no control over the memory addresses that can
be leaked. Only the victim is responsible for bringing data to the CPU. As
a result, the malicious actor has to sample as much data as possible and
then postprocess it to try to infer any useful information from it.

A potential attacker only has read access to the data. Also, there is no direct
privilege escalation by using this technique.


.. _tsx_async_abort_sys_info:

TAA system information
----------------------

The Linux kernel provides a sysfs interface to enumerate the current TAA status
of mitigated systems. The relevant sysfs file is:

   /sys/devices/system/cpu/vulnerabilities/tsx_async_abort

The possible values in this file are:

.. list-table::

   * - 'Vulnerable'
     - The CPU is affected by this vulnerability and the microcode and kernel mitigation are not applied.
   * - 'Vulnerable: Clear CPU buffers attempted, no microcode'
     - The system tries to clear the buffers but the microcode might not support the operation.
   * - 'Mitigation: Clear CPU buffers'
     - The microcode has been updated to clear the buffers. TSX is still enabled.
   * - 'Mitigation: TSX disabled'
     - TSX is disabled.
   * - 'Not affected'
     - The CPU is not affected by this issue.
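For example, a quick check of the current state amounts to reading that file;
a minimal sketch in C (only the path above is taken from this document, the
rest is plain libc)::

   #include <stdio.h>

   int main(void)
   {
           char status[128];
           FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/tsx_async_abort", "r");

           if (!f) {
                   perror("tsx_async_abort");
                   return 1;
           }
           if (fgets(status, sizeof(status), f))
                   /* prints one of the strings from the table above */
                   printf("TAA status: %s", status);
           fclose(f);
           return 0;
   }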
.. _ucode_needed:

Best effort mitigation mode
^^^^^^^^^^^^^^^^^^^^^^^^^^^

If the processor is vulnerable, but the availability of the microcode-based
mitigation mechanism is not advertised via CPUID, the kernel selects a best
effort mitigation mode. This mode invokes the mitigation instructions
without a guarantee that they clear the CPU buffers.

This is done to address virtualization scenarios where the host has the
microcode update applied, but the hypervisor is not yet updated to expose the
CPUID to the guest. If the host has updated microcode the protection takes
effect; otherwise a few CPU cycles are wasted pointlessly.

The state in the tsx_async_abort sysfs file reflects this situation
accordingly.


Mitigation mechanism
--------------------

The kernel detects the affected CPUs and the presence of the microcode which is
required. If a CPU is affected and the microcode is available, then the kernel
enables the mitigation by default.

The mitigation can be controlled at boot time via a kernel command line option.
See :ref:`taa_mitigation_control_command_line`.

.. _virt_mechanism:

Virtualization mitigation
^^^^^^^^^^^^^^^^^^^^^^^^^

Affected systems where the host has the TAA microcode and TAA is mitigated by
having previously disabled TSX are not vulnerable regardless of the status
of the VMs.

In all other cases, if the host either does not have the TAA microcode or
the kernel is not mitigated, the system might be vulnerable.


.. _taa_mitigation_control_command_line:

Mitigation control on the kernel command line
---------------------------------------------

The kernel command line allows controlling the TAA mitigations at boot time
with the option "tsx_async_abort=". The valid arguments for this option are:

   ============  =============================================================
   off           This option disables the TAA mitigation on affected platforms.
                 If the system has TSX enabled (see next parameter) and the CPU
                 is affected, the system is vulnerable.

   full          TAA mitigation is enabled. If TSX is enabled, on an affected
                 system it will clear CPU buffers on ring transitions. On
                 systems which are MDS-affected and deploy MDS mitigation,
                 TAA is also mitigated. Specifying this option on those
                 systems will have no effect.
   ============  =============================================================

Not specifying this option is equivalent to "tsx_async_abort=full". For
processors that are affected by both TAA and MDS, specifying just
"tsx_async_abort=off" without an accompanying "mds=off" will have no
effect as the same mitigation is used for both vulnerabilities.

The kernel command line also allows controlling the TSX feature using the
parameter "tsx=" on CPUs which support TSX control. MSR_IA32_TSX_CTRL is used
to control the TSX feature and the enumeration of the TSX feature bits (RTM
and HLE) in CPUID.

The valid options are:

   ============  =============================================================
   off           Disables TSX on the system.

                 Note that this option takes effect only on newer CPUs which are
                 not vulnerable to MDS, i.e., have MSR_IA32_ARCH_CAPABILITIES.MDS_NO=1
                 and which get the new IA32_TSX_CTRL MSR through a microcode
                 update. This new MSR allows for the reliable deactivation of
                 the TSX functionality.

   on            Enables TSX.

                 Although there are mitigations for all known security
                 vulnerabilities, TSX has been known to be an accelerator for
                 several previous speculation-related CVEs, and so there may be
                 unknown security risks associated with leaving it enabled.

   auto          Disables TSX if X86_BUG_TAA is present, otherwise enables TSX
                 on the system.
   ============  =============================================================

Not specifying this option is equivalent to "tsx=off".

The following combinations of the "tsx_async_abort" and "tsx" options are
possible. For affected platforms tsx=auto is equivalent to tsx=off and the
result will be:

   =========  ==========================  =========================================
   tsx=on     tsx_async_abort=full        The system will use VERW to clear CPU
                                          buffers. Cross-thread attacks are still
                                          possible on SMT machines.
   tsx=on     tsx_async_abort=off         The system is vulnerable.
   tsx=off    tsx_async_abort=full        TSX might be disabled if microcode
                                          provides a TSX control MSR. If so,
                                          the system is not vulnerable.
   tsx=off    tsx_async_abort=off         ditto
   =========  ==========================  =========================================

For unaffected platforms "tsx=on" and "tsx_async_abort=full" do not clear CPU
buffers. On platforms without TSX control (MSR_IA32_ARCH_CAPABILITIES.MDS_NO=0)
the "tsx" command line argument has no effect.

For the affected platforms, the table below indicates the mitigation status for
the combinations of CPUID bit MD_CLEAR and IA32_ARCH_CAPABILITIES MSR bits
MDS_NO and TSX_CTRL_MSR.

   =======  =========  =============  ========================================
   MDS_NO   MD_CLEAR   TSX_CTRL_MSR   Status
   =======  =========  =============  ========================================
   0        0          0              Vulnerable (needs microcode)
   0        1          0              MDS and TAA mitigated via VERW
   1        1          0              MDS fixed, TAA vulnerable if TSX enabled
                                      because MD_CLEAR has no meaning and
                                      VERW is not guaranteed to clear buffers
   1        X          1              MDS fixed, TAA can be mitigated by
                                      VERW or TSX_CTRL_MSR
   =======  =========  =============  ========================================
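Read as code, the table's decision logic is roughly the following (a paraphrase
for illustration, not the kernel's implementation; the three parameters are the
MSR/CPUID bits named above)::

   /* Paraphrase of the status table above -- illustrative only. */
   static const char *taa_status(int mds_no, int md_clear, int tsx_ctrl_msr)
   {
           if (!mds_no)
                   return md_clear ? "MDS and TAA mitigated via VERW"
                                   : "Vulnerable (needs microcode)";
           if (tsx_ctrl_msr)
                   return "MDS fixed, TAA mitigated by VERW or TSX_CTRL_MSR";
           return "MDS fixed, TAA vulnerable if TSX enabled";
   }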
Mitigation selection guide
--------------------------

1. Trusted userspace and guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If all user space applications are from a trusted source and do not execute
untrusted code which is supplied externally, then the mitigation can be
disabled. The same applies to virtualized environments with trusted guests.


2. Untrusted userspace and guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If there are untrusted applications or guests on the system, enabling TSX
might allow a malicious actor to leak data from the host or from other
processes running on the same physical core.

If the microcode is available and TSX is disabled on the host, attacks
are prevented in a virtualized environment as well, even if the VMs do not
explicitly enable the mitigation.


.. _taa_default_mitigations:

Default mitigations
-------------------

The kernel's default action for vulnerable processors is:

  - Deploy TSX disable mitigation (tsx_async_abort=full tsx=off).
@@ -297,7 +297,10 @@ them as any other INPUT_PROP_BUTTONPAD device.

 INPUT_PROP_ACCELEROMETER
 -------------------------
 Directional axes on this device (absolute and/or relative x, y, z) represent
-accelerometer data. All other axes retain their meaning. A device must not mix
+accelerometer data. Some devices also report gyroscope data, which devices
+can report through the rotational axes (absolute and/or relative rx, ry, rz).
+
+All other axes retain their meaning. A device must not mix
 regular directional axes and accelerometer axes on the same event node.

 Guidelines:
@@ -2105,6 +2105,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			full       - Enable MDS mitigation on vulnerable CPUs
 			off        - Unconditionally disable MDS mitigation

+			On TAA-affected machines, mds=off can be prevented by
+			an active TAA mitigation as both vulnerabilities are
+			mitigated with the same mechanism so in order to disable
+			this mitigation, you need to specify tsx_async_abort=off
+			too.
+
 			Not specifying this option is equivalent to
 			mds=full.

@@ -2240,6 +2246,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 				spectre_v2_user=off [X86]
 				spec_store_bypass_disable=off [X86]
 				mds=off [X86]
+				tsx_async_abort=off [X86]

 			auto (default)
 				Mitigate all CPU vulnerabilities, but leave SMT

@@ -4131,6 +4138,72 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			platforms where RDTSC is slow and this accounting
 			can add overhead.

+	tsx=		[X86] Control Transactional Synchronization
+			Extensions (TSX) feature in Intel processors that
+			support TSX control.
+
+			This parameter controls the TSX feature. The options are:
+
+			on	- Enable TSX on the system. Although there are
+				mitigations for all known security vulnerabilities,
+				TSX has been known to be an accelerator for
+				several previous speculation-related CVEs, and
+				so there may be unknown security risks associated
+				with leaving it enabled.
+
+			off	- Disable TSX on the system. (Note that this
+				option takes effect only on newer CPUs which are
+				not vulnerable to MDS, i.e., have
+				MSR_IA32_ARCH_CAPABILITIES.MDS_NO=1 and which get
+				the new IA32_TSX_CTRL MSR through a microcode
+				update. This new MSR allows for the reliable
+				deactivation of the TSX functionality.)
+
+			auto	- Disable TSX if X86_BUG_TAA is present,
+				otherwise enable TSX on the system.
+
+			Not specifying this option is equivalent to tsx=off.
+
+			See Documentation/hw-vuln/tsx_async_abort.rst
+			for more details.
+
+	tsx_async_abort= [X86,INTEL] Control mitigation for the TSX Async
+			Abort (TAA) vulnerability.
+
+			Similar to Micro-architectural Data Sampling (MDS)
+			certain CPUs that support Transactional
+			Synchronization Extensions (TSX) are vulnerable to an
+			exploit against CPU internal buffers which can forward
+			information to a disclosure gadget under certain
+			conditions.
+
+			In vulnerable processors, the speculatively forwarded
+			data can be used in a cache side channel attack, to
+			access data to which the attacker does not have direct
+			access.
+
+			This parameter controls the TAA mitigation. The
+			options are:
+
+			full       - Enable TAA mitigation on vulnerable CPUs
+				     if TSX is enabled.
+
+			off        - Unconditionally disable TAA mitigation
+
+			On MDS-affected machines, tsx_async_abort=off can be
+			prevented by an active MDS mitigation as both vulnerabilities
+			are mitigated with the same mechanism so in order to disable
+			this mitigation, you need to specify mds=off too.
+
+			Not specifying this option is equivalent to
+			tsx_async_abort=full. On CPUs which are MDS affected
+			and deploy MDS mitigation, TAA mitigation is not
+			required and doesn't provide any additional
+			mitigation.
+
+			For details see:
+			Documentation/hw-vuln/tsx_async_abort.rst
+
 	turbografx.map[2|3]=	[HW,JOY]
 			TurboGraFX parallel port interface
 			Format:
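Taken together with the mds= entry above: on a machine affected by both issues,
actually disabling the shared mitigation requires both switches on the boot
command line, e.g. appending "mds=off tsx_async_abort=off" (a usage sketch;
whether that is advisable is covered by the selection guide in
Documentation/hw-vuln/tsx_async_abort.rst).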
@@ -370,7 +370,7 @@ static uint32_t amt_host_if_call(struct amt_host_if *acmd,
 			      unsigned int expected_sz)
 {
 	uint32_t in_buf_sz;
-	uint32_t out_buf_sz;
+	ssize_t out_buf_sz;
 	ssize_t written;
 	uint32_t status;
 	struct amt_host_if_resp_header *msg_hdr;
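The type change matters because helpers in the read()/write() family return a
negative errno, and a uint32_t silently converts that into a huge positive
size; a standalone illustration:

	#include <stdint.h>
	#include <stdio.h>
	#include <sys/types.h>

	int main(void)
	{
		ssize_t  as_signed   = -1;		/* e.g. a failed read() */
		uint32_t as_unsigned = (uint32_t)-1;	/* same value stored unsigned */

		if (as_signed < 0)
			printf("signed: error detected\n");
		/* 'as_unsigned < 0' can never be true; the failure now
		 * masquerades as an absurdly large buffer size: */
		printf("unsigned: looks like size %u\n", as_unsigned);
		return 0;
	}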
Documentation/x86/tsx_async_abort.rst (new file, 117 lines)

@@ -0,0 +1,117 @@
.. SPDX-License-Identifier: GPL-2.0

TSX Async Abort (TAA) mitigation
================================

.. _tsx_async_abort:

Overview
--------

TSX Async Abort (TAA) is a side channel attack on internal buffers in some
Intel processors similar to Microarchitectural Data Sampling (MDS). In this
case certain loads may speculatively pass invalid data to dependent operations
when an asynchronous abort condition is pending in a Transactional
Synchronization Extensions (TSX) transaction. This includes loads with no
fault or assist condition. Such loads may speculatively expose stale data from
the same uarch data structures as in MDS, with the same scope of exposure,
i.e. same-thread and cross-thread. This issue affects all current processors
that support TSX.

Mitigation strategy
-------------------

a) TSX disable - one of the mitigations is to disable TSX. A new MSR
   IA32_TSX_CTRL will be available in future and current processors after
   a microcode update which can be used to disable TSX. In addition, it
   controls the enumeration of the TSX feature bits (RTM and HLE) in CPUID.

b) Clear CPU buffers - similar to MDS, clearing the CPU buffers mitigates this
   vulnerability. More details on this approach can be found in
   :ref:`Documentation/hw-vuln/mds.rst <mds>`.

Kernel internal mitigation modes
--------------------------------

   =============    ============================================================
   off              Mitigation is disabled. Either the CPU is not affected or
                    tsx_async_abort=off is supplied on the kernel command line.

   tsx disabled     Mitigation is enabled. TSX feature is disabled by default at
                    bootup on processors that support TSX control.

   verw             Mitigation is enabled. CPU is affected and MD_CLEAR is
                    advertised in CPUID.

   ucode needed     Mitigation is enabled. CPU is affected and MD_CLEAR is not
                    advertised in CPUID. That is mainly for virtualization
                    scenarios where the host has the updated microcode but the
                    hypervisor does not expose MD_CLEAR in CPUID. It's a best
                    effort approach without guarantee.
   =============    ============================================================

If the CPU is affected and the "tsx_async_abort" kernel command line parameter
is not provided, then the kernel selects an appropriate mitigation depending on
the status of the RTM and MD_CLEAR CPUID bits.

The tables below indicate the impact of the tsx=on|off|auto cmdline options on
the state of TAA mitigation, VERW behavior and the TSX feature for various
combinations of MSR_IA32_ARCH_CAPABILITIES bits.

1. "tsx=off"

   ======  ======  ============  ============  ==============  ===================  ====================
   MSR_IA32_ARCH_CAPABILITIES bits             Result with cmdline tsx=off
   ------------------------------------------  ---------------------------------------------------------
   TAA_NO  MDS_NO  TSX_CTRL_MSR  TSX state     VERW can clear  TAA mitigation       TAA mitigation
                                 after bootup  CPU buffers     tsx_async_abort=off  tsx_async_abort=full
   ======  ======  ============  ============  ==============  ===================  ====================
   0       0       0             HW default    Yes             Same as MDS          Same as MDS
   0       0       1             Invalid case  Invalid case    Invalid case         Invalid case
   0       1       0             HW default    No              Need ucode update    Need ucode update
   0       1       1             Disabled      Yes             TSX disabled         TSX disabled
   1       X       1             Disabled      X               None needed          None needed
   ======  ======  ============  ============  ==============  ===================  ====================

2. "tsx=on"

   ======  ======  ============  ============  ==============  ===================  ====================
   MSR_IA32_ARCH_CAPABILITIES bits             Result with cmdline tsx=on
   ------------------------------------------  ---------------------------------------------------------
   TAA_NO  MDS_NO  TSX_CTRL_MSR  TSX state     VERW can clear  TAA mitigation       TAA mitigation
                                 after bootup  CPU buffers     tsx_async_abort=off  tsx_async_abort=full
   ======  ======  ============  ============  ==============  ===================  ====================
   0       0       0             HW default    Yes             Same as MDS          Same as MDS
   0       0       1             Invalid case  Invalid case    Invalid case         Invalid case
   0       1       0             HW default    No              Need ucode update    Need ucode update
   0       1       1             Enabled       Yes             None                 Same as MDS
   1       X       1             Enabled       X               None needed          None needed
   ======  ======  ============  ============  ==============  ===================  ====================

3. "tsx=auto"

   ======  ======  ============  ============  ==============  ===================  ====================
   MSR_IA32_ARCH_CAPABILITIES bits             Result with cmdline tsx=auto
   ------------------------------------------  ---------------------------------------------------------
   TAA_NO  MDS_NO  TSX_CTRL_MSR  TSX state     VERW can clear  TAA mitigation       TAA mitigation
                                 after bootup  CPU buffers     tsx_async_abort=off  tsx_async_abort=full
   ======  ======  ============  ============  ==============  ===================  ====================
   0       0       0             HW default    Yes             Same as MDS          Same as MDS
   0       0       1             Invalid case  Invalid case    Invalid case         Invalid case
   0       1       0             HW default    No              Need ucode update    Need ucode update
   0       1       1             Disabled      Yes             TSX disabled         TSX disabled
   1       X       1             Enabled       X               None needed          None needed
   ======  ======  ============  ============  ==============  ===================  ====================

In the tables, TSX_CTRL_MSR is a new bit in MSR_IA32_ARCH_CAPABILITIES that
indicates whether MSR_IA32_TSX_CTRL is supported.

There are two control bits in the IA32_TSX_CTRL MSR:

  Bit 0: When set it disables the Restricted Transactional Memory (RTM)
         sub-feature of TSX (will force all transactions to abort on the
         XBEGIN instruction).

  Bit 1: When set it disables the enumeration of the RTM and HLE feature
         (i.e. it will make CPUID(EAX=7).EBX{bit4} and
         CPUID(EAX=7).EBX{bit11} read as 0).
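As a sketch, the two bits map naturally onto C constants; the names below
follow the upstream kernel's msr-index definitions and should be treated as an
assumption for this particular stable branch::

   #define MSR_IA32_TSX_CTRL      0x00000122      /* assumed MSR number, per upstream msr-index.h */
   #define TSX_CTRL_RTM_DISABLE   (1UL << 0)      /* Bit 0: force all RTM transactions to abort */
   #define TSX_CTRL_CPUID_CLEAR   (1UL << 1)      /* Bit 1: hide RTM/HLE from CPUID enumeration */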
Makefile (8 lines changed)

@@ -1,6 +1,6 @@
 VERSION = 4
 PATCHLEVEL = 4
-SUBLEVEL = 198
+SUBLEVEL = 205
 EXTRAVERSION =
 NAME = Blurry Fish Butt

@@ -844,6 +844,12 @@ KBUILD_CFLAGS += $(call cc-option,-Werror=strict-prototypes)
 # Prohibit date/time macros, which would make the build non-deterministic
 KBUILD_CFLAGS += $(call cc-option,-Werror=date-time)

+# ensure -fcf-protection is disabled when using retpoline as it is
+# incompatible with -mindirect-branch=thunk-extern
+ifdef CONFIG_RETPOLINE
+KBUILD_CFLAGS += $(call cc-option,-fcf-protection=none)
+endif
+
 # use the deterministic mode of AR if available
 KBUILD_ARFLAGS := $(call ar-option,D)
@@ -486,8 +486,8 @@ static int arc_pmu_device_probe(struct platform_device *pdev)
 	/* loop thru all available h/w condition indexes */
 	for (j = 0; j < cc_bcr.c; j++) {
 		write_aux_reg(ARC_REG_CC_INDEX, j);
-		cc_name.indiv.word0 = read_aux_reg(ARC_REG_CC_NAME0);
-		cc_name.indiv.word1 = read_aux_reg(ARC_REG_CC_NAME1);
+		cc_name.indiv.word0 = le32_to_cpu(read_aux_reg(ARC_REG_CC_NAME0));
+		cc_name.indiv.word1 = le32_to_cpu(read_aux_reg(ARC_REG_CC_NAME1));

 		/* See if it has been mapped to a perf event_id */
 		for (i = 0; i < ARRAY_SIZE(arc_pmu_ev_hw_map); i++) {
@@ -5,6 +5,8 @@
 #include <linux/string.h>
 #include <asm/byteorder.h>

+#define INT_MAX			((int)(~0U>>1))
+
 typedef __be16 fdt16_t;
 typedef __be32 fdt32_t;
 typedef __be64 fdt64_t;
@@ -697,6 +697,7 @@
 	pinctrl-0 = <&cpsw_default>;
 	pinctrl-1 = <&cpsw_sleep>;
 	status = "okay";
+	slaves = <1>;
 };

 &davinci_mdio {
@@ -704,15 +705,14 @@
 	pinctrl-0 = <&davinci_mdio_default>;
 	pinctrl-1 = <&davinci_mdio_sleep>;
 	status = "okay";
+
+	ethphy0: ethernet-phy@0 {
+		reg = <0>;
+	};
 };

 &cpsw_emac0 {
-	phy_id = <&davinci_mdio>, <0>;
-	phy-mode = "rgmii-txid";
-};
-
-&cpsw_emac1 {
-	phy_id = <&davinci_mdio>, <1>;
+	phy-handle = <&ethphy0>;
 	phy-mode = "rgmii-txid";
 };

@@ -546,7 +546,7 @@
 			};
 		};

-		uart1 {
+		usart1 {
 			pinctrl_usart1: usart1-0 {
 				atmel,pins =
 					<AT91_PIOB 4 AT91_PERIPH_A AT91_PINCTRL_PULL_UP	/* PB4 periph A with pullup */

@@ -23,6 +23,14 @@
 		samsung,model = "Snow-I2S-MAX98090";
 		samsung,audio-codec = <&max98090>;
+
+		cpu {
+			sound-dai = <&i2s0 0>;
+		};
+
+		codec {
+			sound-dai = <&max98090 0>, <&hdmi>;
+		};
 	};
 };

@@ -34,6 +42,9 @@
 		interrupt-parent = <&gpx0>;
 		pinctrl-names = "default";
 		pinctrl-0 = <&max98090_irq>;
+		clocks = <&pmu_system_controller 0>;
+		clock-names = "mclk";
+		#sound-dai-cells = <1>;
 	};
 };

@@ -169,3 +169,7 @@
 &twl_gpio {
 	ti,use-leds;
 };
+
+&twl_keypad {
+	status = "disabled";
+};

@@ -28,6 +28,7 @@
 	aliases {
 		display0 = &lcd;
+		display1 = &tv0;
 	};

 	gpio-keys {
@@ -70,7 +71,7 @@
 		#sound-dai-cells = <0>;
 	};

-	spi_lcd {
+	spi_lcd: spi_lcd {
 		compatible = "spi-gpio";
 		#address-cells = <0x1>;
 		#size-cells = <0x0>;
@@ -459,6 +460,12 @@
 	regulator-max-microvolt = <3150000>;
 };

+/* Needed to power the DPI pins */
+
+&vpll2 {
+	regulator-always-on;
+};
+
 &dss {
 	pinctrl-names = "default";
 	pinctrl-0 = < &dss_dpi_pins >;
@@ -522,22 +529,22 @@
 		bootloaders@80000 {
 			label = "U-Boot";
-			reg = <0x80000 0x1e0000>;
+			reg = <0x80000 0x1c0000>;
 		};

-		bootloaders_env@260000 {
+		bootloaders_env@240000 {
 			label = "U-Boot Env";
-			reg = <0x260000 0x20000>;
+			reg = <0x240000 0x40000>;
 		};

 		kernel@280000 {
 			label = "Kernel";
-			reg = <0x280000 0x400000>;
+			reg = <0x280000 0x600000>;
 		};

-		filesystem@680000 {
+		filesystem@880000 {
 			label = "File System";
-			reg = <0x680000 0xf980000>;
+			reg = <0x880000 0>;	/* 0 = MTDPART_SIZ_FULL */
 		};
 	};
 };

@@ -63,7 +63,7 @@
 		clocks = <&clks CLK_PWM1>;
 	};

-	pwri2c: i2c@40f000180 {
+	pwri2c: i2c@40f00180 {
 		compatible = "mrvl,pxa-i2c";
 		reg = <0x40f00180 0x24>;
 		interrupts = <6>;

@@ -88,7 +88,7 @@
 	status = "okay";
 	speed-mode = <0>;

-	adxl345: adxl345@0 {
+	adxl345: adxl345@53 {
 		compatible = "adi,adxl345";
 		reg = <0x53>;

@@ -186,7 +186,7 @@
 			      <0xa0410100 0x100>;
 		};

-		scu@a04100000 {
+		scu@a0410000 {
 			compatible = "arm,cortex-a9-scu";
 			reg = <0xa0410000 0x100>;
 		};
@@ -894,7 +894,7 @@
 			power-domains = <&pm_domains DOMAIN_VAPE>;
 		};

-		ssp@80002000 {
+		spi@80002000 {
 			compatible = "arm,pl022", "arm,primecell";
 			reg = <0x80002000 0x1000>;
 			interrupts = <0 14 IRQ_TYPE_LEVEL_HIGH>;
@@ -908,7 +908,7 @@
 			power-domains = <&pm_domains DOMAIN_VAPE>;
 		};

-		ssp@80003000 {
+		spi@80003000 {
 			compatible = "arm,pl022", "arm,primecell";
 			reg = <0x80003000 0x1000>;
 			interrupts = <0 52 IRQ_TYPE_LEVEL_HIGH>;

@@ -607,16 +607,20 @@
 		mcde {
 			lcd_default_mode: lcd_default {
-				default_mux {
+				default_mux1 {
 					/* Mux in VSI0 and all the data lines */
 					function = "lcd";
 					groups =
 					"lcdvsi0_a_1", /* VSI0 for LCD */
 					"lcd_d0_d7_a_1", /* Data lines */
 					"lcd_d8_d11_a_1", /* TV-out */
-					"lcdaclk_b_1", /* Clock line for TV-out */
 					"lcdvsi1_a_1"; /* VSI1 for HDMI */
 				};
+				default_mux2 {
+					function = "lcda";
+					groups =
+					"lcdaclk_b_1"; /* Clock line for TV-out */
+				};
 				default_cfg1 {
 					pins =
 					"GPIO68_E1", /* VSI0 */

@@ -57,7 +57,7 @@
 		};
 	};

-	ssp@80002000 {
+	spi@80002000 {
 		/*
 		 * On the first generation boards, this SSP/SPI port was connected
 		 * to the AB8500.

@@ -311,7 +311,7 @@
 		pinctrl-1 = <&i2c3_sleep_mode>;
 	};

-	ssp@80002000 {
+	spi@80002000 {
 		pinctrl-names = "default";
 		pinctrl-0 = <&ssp0_snowball_mode>;
 	};

@@ -451,7 +451,7 @@
 			dma-names = "rx";
 		};

-		spi: ssp@c0006000 {
+		spi: spi@c0006000 {
 			compatible = "arm,pl022", "arm,primecell";
 			reg = <0xc0006000 0x1000>;
 			interrupt-parent = <&vica>;

@@ -147,14 +147,14 @@
 			/* Apalis MMC1 */
 			sdmmc3_clk_pa6 {
-				nvidia,pins = "sdmmc3_clk_pa6",
-					      "sdmmc3_cmd_pa7";
+				nvidia,pins = "sdmmc3_clk_pa6";
 				nvidia,function = "sdmmc3";
 				nvidia,pull = <TEGRA_PIN_PULL_NONE>;
 				nvidia,tristate = <TEGRA_PIN_DISABLE>;
 			};
 			sdmmc3_dat0_pb7 {
-				nvidia,pins = "sdmmc3_dat0_pb7",
+				nvidia,pins = "sdmmc3_cmd_pa7",
+					      "sdmmc3_dat0_pb7",
 					      "sdmmc3_dat1_pb6",
 					      "sdmmc3_dat2_pb5",
 					      "sdmmc3_dat3_pb4",

@@ -823,7 +823,7 @@
 			nvidia,elastic-limit = <16>;
 			nvidia,term-range-adj = <6>;
 			nvidia,xcvr-setup = <51>;
-			nvidia.xcvr-setup-use-fuses;
+			nvidia,xcvr-setup-use-fuses;
 			nvidia,xcvr-lsfslew = <1>;
 			nvidia,xcvr-lsrslew = <1>;
 			nvidia,xcvr-hsslew = <32>;
@@ -860,7 +860,7 @@
 			nvidia,elastic-limit = <16>;
 			nvidia,term-range-adj = <6>;
 			nvidia,xcvr-setup = <51>;
-			nvidia.xcvr-setup-use-fuses;
+			nvidia,xcvr-setup-use-fuses;
 			nvidia,xcvr-lsfslew = <2>;
 			nvidia,xcvr-lsrslew = <2>;
 			nvidia,xcvr-hsslew = <32>;
@@ -896,7 +896,7 @@
 			nvidia,elastic-limit = <16>;
 			nvidia,term-range-adj = <6>;
 			nvidia,xcvr-setup = <51>;
-			nvidia.xcvr-setup-use-fuses;
+			nvidia,xcvr-setup-use-fuses;
 			nvidia,xcvr-lsfslew = <2>;
 			nvidia,xcvr-lsrslew = <2>;
 			nvidia,xcvr-hsslew = <32>;
@@ -22,9 +22,7 @@
 #include <linux/io.h>
 #include <asm/barrier.h>
-
-#define __ACCESS_CP15(CRn, Op1, CRm, Op2)	p15, Op1, %0, CRn, CRm, Op2
-#define __ACCESS_CP15_64(Op1, CRm)		p15, Op1, %Q0, %R0, CRm
+#include <asm/cp15.h>

 #define ICC_EOIR1			__ACCESS_CP15(c12, 0, c12, 1)
 #define ICC_DIR				__ACCESS_CP15(c12, 0, c11, 1)
@@ -102,58 +100,55 @@

 static inline void gic_write_eoir(u32 irq)
 {
-	asm volatile("mcr " __stringify(ICC_EOIR1) : : "r" (irq));
+	write_sysreg(irq, ICC_EOIR1);
 	isb();
 }

 static inline void gic_write_dir(u32 val)
 {
-	asm volatile("mcr " __stringify(ICC_DIR) : : "r" (val));
+	write_sysreg(val, ICC_DIR);
 	isb();
 }

 static inline u32 gic_read_iar(void)
 {
-	u32 irqstat;
+	u32 irqstat = read_sysreg(ICC_IAR1);

-	asm volatile("mrc " __stringify(ICC_IAR1) : "=r" (irqstat));
 	dsb(sy);
+
 	return irqstat;
 }

 static inline void gic_write_pmr(u32 val)
 {
-	asm volatile("mcr " __stringify(ICC_PMR) : : "r" (val));
+	write_sysreg(val, ICC_PMR);
 }

 static inline void gic_write_ctlr(u32 val)
 {
-	asm volatile("mcr " __stringify(ICC_CTLR) : : "r" (val));
+	write_sysreg(val, ICC_CTLR);
 	isb();
 }

 static inline void gic_write_grpen1(u32 val)
 {
-	asm volatile("mcr " __stringify(ICC_IGRPEN1) : : "r" (val));
+	write_sysreg(val, ICC_IGRPEN1);
 	isb();
 }

 static inline void gic_write_sgi1r(u64 val)
 {
-	asm volatile("mcrr " __stringify(ICC_SGI1R) : : "r" (val));
+	write_sysreg(val, ICC_SGI1R);
 }

 static inline u32 gic_read_sre(void)
 {
-	u32 val;
-
-	asm volatile("mrc " __stringify(ICC_SRE) : "=r" (val));
-	return val;
+	return read_sysreg(ICC_SRE);
 }

 static inline void gic_write_sre(u32 val)
 {
-	asm volatile("mcr " __stringify(ICC_SRE) : : "r" (val));
+	write_sysreg(val, ICC_SRE);
 	isb();
 }
@@ -441,11 +441,34 @@ THUMB(	orr	\reg , \reg , #PSR_T_BIT	)
 	.size \name , . - \name
 	.endm

+	.macro	csdb
+#ifdef CONFIG_THUMB2_KERNEL
+	.inst.w	0xf3af8014
+#else
+	.inst	0xe320f014
+#endif
+	.endm
+
 	.macro	check_uaccess, addr:req, size:req, limit:req, tmp:req, bad:req
 #ifndef CONFIG_CPU_USE_DOMAINS
 	adds	\tmp, \addr, #\size - 1
 	sbcccs	\tmp, \tmp, \limit
 	bcs	\bad
+#ifdef CONFIG_CPU_SPECTRE
+	movcs	\addr, #0
+	csdb
+#endif
 #endif
 	.endm

+	.macro	uaccess_mask_range_ptr, addr:req, size:req, limit:req, tmp:req
+#ifdef CONFIG_CPU_SPECTRE
+	sub	\tmp, \limit, #1
+	subs	\tmp, \tmp, \addr	@ tmp = limit - 1 - addr
+	addhs	\tmp, \tmp, #1		@ if (tmp >= 0) {
+	subhss	\tmp, \tmp, \size	@ tmp = limit - (addr + size) }
+	movlo	\addr, #0		@ if (tmp < 0) addr = NULL
+	csdb
+#endif
+	.endm
@@ -18,6 +18,12 @@
 #define isb(option) __asm__ __volatile__ ("isb " #option : : : "memory")
 #define dsb(option) __asm__ __volatile__ ("dsb " #option : : : "memory")
 #define dmb(option) __asm__ __volatile__ ("dmb " #option : : : "memory")
+#ifdef CONFIG_THUMB2_KERNEL
+#define CSDB	".inst.w 0xf3af8014"
+#else
+#define CSDB	".inst	0xe320f014"
+#endif
+#define csdb() __asm__ __volatile__(CSDB : : : "memory")
 #elif defined(CONFIG_CPU_XSC3) || __LINUX_ARM_ARCH__ == 6
 #define isb(x) __asm__ __volatile__ ("mcr p15, 0, %0, c7, c5, 4" \
 				    : : "r" (0) : "memory")
@@ -38,6 +44,13 @@
 #define dmb(x) __asm__ __volatile__ ("" : : : "memory")
 #endif

+#ifndef CSDB
+#define CSDB
+#endif
+#ifndef csdb
+#define csdb()
+#endif
+
 #ifdef CONFIG_ARM_HEAVY_MB
 extern void (*soc_mb)(void);
 extern void arm_heavy_mb(void);
@@ -95,5 +108,26 @@ do {									\
 #define smp_mb__before_atomic()	smp_mb()
 #define smp_mb__after_atomic()	smp_mb()

+#ifdef CONFIG_CPU_SPECTRE
+static inline unsigned long array_index_mask_nospec(unsigned long idx,
+						    unsigned long sz)
+{
+	unsigned long mask;
+
+	asm volatile(
+		"cmp	%1, %2\n"
+	"	sbc	%0, %1, %1\n"
+	CSDB
+	: "=r" (mask)
+	: "r" (idx), "Ir" (sz)
+	: "cc");
+
+	return mask;
+}
+#define array_index_mask_nospec array_index_mask_nospec
+#endif
+
 #include <asm-generic/barrier.h>

 #endif /* !__ASSEMBLY__ */
 #endif /* __ASM_BARRIER_H */
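The mask returned by array_index_mask_nospec() is all-ones when idx < sz and
zero otherwise, so a caller clamps the index between the bounds check and the
dependent load. A usage sketch (mirroring the generic array_index_nospec()
pattern from include/linux/nospec.h):

	/* Usage sketch: clamp an index so a mispredicted bounds check
	 * cannot steer a speculative out-of-bounds load. */
	unsigned long load_entry(const unsigned long *table,
				 unsigned long idx, unsigned long size)
	{
		if (idx >= size)
			return 0;
		idx &= array_index_mask_nospec(idx, size);	/* 0 if idx >= size */
		return table[idx];
	}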
@@ -10,12 +10,14 @@
 #ifndef __ASM_BUGS_H
 #define __ASM_BUGS_H

-#ifdef CONFIG_MMU
 extern void check_writebuffer_bugs(void);

-#define check_bugs() check_writebuffer_bugs()
+#ifdef CONFIG_MMU
+extern void check_bugs(void);
+extern void check_other_bugs(void);
 #else
 #define check_bugs() do { } while (0)
+#define check_other_bugs() do { } while (0)
 #endif

 #endif
@@ -49,6 +49,24 @@

 #ifdef CONFIG_CPU_CP15

+#define __ACCESS_CP15(CRn, Op1, CRm, Op2)	\
+	"mrc", "mcr", __stringify(p15, Op1, %0, CRn, CRm, Op2), u32
+#define __ACCESS_CP15_64(Op1, CRm)		\
+	"mrrc", "mcrr", __stringify(p15, Op1, %Q0, %R0, CRm), u64
+
+#define __read_sysreg(r, w, c, t) ({				\
+	t __val;						\
+	asm volatile(r " " c : "=r" (__val));			\
+	__val;							\
+})
+#define read_sysreg(...)		__read_sysreg(__VA_ARGS__)
+
+#define __write_sysreg(v, r, w, c, t)	asm volatile(w " " c : : "r" ((t)(v)))
+#define write_sysreg(v, ...)		__write_sysreg(v, __VA_ARGS__)
+
+#define BPIALL				__ACCESS_CP15(c7, 0, c5, 6)
+#define ICIALLU				__ACCESS_CP15(c7, 0, c5, 0)
+
 extern unsigned long cr_alignment;	/* defined in entry-armv.S */

 static inline unsigned long get_cr(void)
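With these accessors a coprocessor register access becomes a single call; for
instance, invalidating the branch predictor via the BPIALL register defined
above collapses to (a sketch, matching how the Spectre-v2 hardening elsewhere
in this merge uses it):

	static inline void flush_branch_predictor(void)
	{
		write_sysreg(0, BPIALL);	/* Branch Predictor Invalidate All */
		isb();				/* synchronize before the next indirect branch */
	}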
@@ -74,8 +74,16 @@
 #define ARM_CPU_PART_CORTEX_A12		0x4100c0d0
 #define ARM_CPU_PART_CORTEX_A17		0x4100c0e0
 #define ARM_CPU_PART_CORTEX_A15		0x4100c0f0
+#define ARM_CPU_PART_CORTEX_A53		0x4100d030
+#define ARM_CPU_PART_CORTEX_A57		0x4100d070
+#define ARM_CPU_PART_CORTEX_A72		0x4100d080
+#define ARM_CPU_PART_CORTEX_A73		0x4100d090
+#define ARM_CPU_PART_CORTEX_A75		0x4100d0a0
 #define ARM_CPU_PART_MASK		0xff00fff0

+/* Broadcom cores */
+#define ARM_CPU_PART_BRAHMA_B15		0x420000f0
+
 #define ARM_CPU_XSCALE_ARCH_MASK	0xe000
 #define ARM_CPU_XSCALE_ARCH_V1		0x2000
 #define ARM_CPU_XSCALE_ARCH_V2		0x4000
@@ -85,6 +93,7 @@
 #define ARM_CPU_PART_SCORPION		0x510002d0

 extern unsigned int processor_id;
+struct proc_info_list *lookup_processor(u32 midr);

 #ifdef CONFIG_CPU_CP15
 #define read_cpuid(reg)							\
@@ -23,7 +23,7 @@ struct mm_struct;
 /*
  * Don't change this structure - ASM code relies on it.
  */
-extern struct processor {
+struct processor {
 	/* MISC
 	 * get data abort address/flags
 	 */
@@ -36,6 +36,10 @@ struct processor {
 	 * Set up any processor specifics
 	 */
 	void (*_proc_init)(void);
+	/*
+	 * Check for processor bugs
+	 */
+	void (*check_bugs)(void);
 	/*
 	 * Disable any processor specifics
 	 */
@@ -75,9 +79,13 @@ struct processor {
 	unsigned int suspend_size;
 	void (*do_suspend)(void *);
 	void (*do_resume)(void *);
-} processor;
+};

 #ifndef MULTI_CPU
+static inline void init_proc_vtable(const struct processor *p)
+{
+}
+
 extern void cpu_proc_init(void);
 extern void cpu_proc_fin(void);
 extern int cpu_do_idle(void);
@@ -94,17 +102,50 @@ extern void cpu_reset(unsigned long addr) __attribute__((noreturn));
 extern void cpu_do_suspend(void *);
 extern void cpu_do_resume(void *);
 #else
-#define cpu_proc_init			processor._proc_init
-#define cpu_proc_fin			processor._proc_fin
-#define cpu_reset			processor.reset
-#define cpu_do_idle			processor._do_idle
-#define cpu_dcache_clean_area		processor.dcache_clean_area
-#define cpu_set_pte_ext			processor.set_pte_ext
-#define cpu_do_switch_mm		processor.switch_mm
-
-/* These three are private to arch/arm/kernel/suspend.c */
-#define cpu_do_suspend			processor.do_suspend
-#define cpu_do_resume			processor.do_resume
+extern struct processor processor;
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+#include <linux/smp.h>
+/*
+ * This can't be a per-cpu variable because we need to access it before
+ * per-cpu has been initialised.  We have a couple of functions that are
+ * called in a pre-emptible context, and so can't use smp_processor_id()
+ * there, hence PROC_TABLE().  We insist in init_proc_vtable() that the
+ * function pointers for these are identical across all CPUs.
+ */
+extern struct processor *cpu_vtable[];
+#define PROC_VTABLE(f)			cpu_vtable[smp_processor_id()]->f
+#define PROC_TABLE(f)			cpu_vtable[0]->f
+static inline void init_proc_vtable(const struct processor *p)
+{
+	unsigned int cpu = smp_processor_id();
+	*cpu_vtable[cpu] = *p;
+	WARN_ON_ONCE(cpu_vtable[cpu]->dcache_clean_area !=
+		     cpu_vtable[0]->dcache_clean_area);
+	WARN_ON_ONCE(cpu_vtable[cpu]->set_pte_ext !=
+		     cpu_vtable[0]->set_pte_ext);
+}
+#else
+#define PROC_VTABLE(f)			processor.f
+#define PROC_TABLE(f)			processor.f
+static inline void init_proc_vtable(const struct processor *p)
+{
+	processor = *p;
+}
+#endif
+
+#define cpu_proc_init			PROC_VTABLE(_proc_init)
+#define cpu_check_bugs			PROC_VTABLE(check_bugs)
+#define cpu_proc_fin			PROC_VTABLE(_proc_fin)
+#define cpu_reset			PROC_VTABLE(reset)
+#define cpu_do_idle			PROC_VTABLE(_do_idle)
+#define cpu_dcache_clean_area		PROC_TABLE(dcache_clean_area)
+#define cpu_set_pte_ext			PROC_TABLE(set_pte_ext)
+#define cpu_do_switch_mm		PROC_VTABLE(switch_mm)
+
+/* These two are private to arch/arm/kernel/suspend.c */
+#define cpu_do_suspend			PROC_VTABLE(do_suspend)
+#define cpu_do_resume			PROC_VTABLE(do_resume)
 #endif

 extern void cpu_resume(void);
@@ -7,6 +7,7 @@
 #include <linux/linkage.h>
 #include <linux/irqflags.h>
 #include <linux/reboot.h>
+#include <linux/percpu.h>

 extern void cpu_init(void);

@@ -14,6 +15,20 @@ void soft_restart(unsigned long);
 extern void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
 extern void (*arm_pm_idle)(void);

+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+typedef void (*harden_branch_predictor_fn_t)(void);
+DECLARE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);
+static inline void harden_branch_predictor(void)
+{
+	harden_branch_predictor_fn_t fn = per_cpu(harden_branch_predictor_fn,
+						  smp_processor_id());
+	if (fn)
+		fn();
+}
+#else
+#define harden_branch_predictor() do { } while (0)
+#endif
+
 #define UDBG_UNDEFINED	(1 << 0)
 #define UDBG_SYSCALL	(1 << 1)
 #define UDBG_BADABORT	(1 << 2)
@@ -124,10 +124,10 @@ extern void vfp_flush_hwstate(struct thread_info *);
 struct user_vfp;
 struct user_vfp_exc;

-extern int vfp_preserve_user_clear_hwstate(struct user_vfp __user *,
-					   struct user_vfp_exc __user *);
-extern int vfp_restore_user_hwstate(struct user_vfp __user *,
-				    struct user_vfp_exc __user *);
+extern int vfp_preserve_user_clear_hwstate(struct user_vfp *,
+					   struct user_vfp_exc *);
+extern int vfp_restore_user_hwstate(struct user_vfp *,
+				    struct user_vfp_exc *);
 #endif

 /*
@@ -99,6 +99,14 @@ extern int __put_user_bad(void);
 static inline void set_fs(mm_segment_t fs)
 {
 	current_thread_info()->addr_limit = fs;
+
+	/*
+	 * Prevent a mispredicted conditional call to set_fs from forwarding
+	 * the wrong address limit to access_ok under speculation.
+	 */
+	dsb(nsh);
+	isb();
+
 	modify_domain(DOMAIN_KERNEL, fs ? DOMAIN_CLIENT : DOMAIN_MANAGER);
 }

@@ -122,6 +130,39 @@ static inline void set_fs(mm_segment_t fs)
 		: "cc"); \
 	flag; })

+/*
+ * This is a type: either unsigned long, if the argument fits into
+ * that type, or otherwise unsigned long long.
+ */
+#define __inttype(x) \
+	__typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
+
+/*
+ * Sanitise a uaccess pointer such that it becomes NULL if addr+size
+ * is above the current addr_limit.
+ */
+#define uaccess_mask_range_ptr(ptr, size)			\
+	((__typeof__(ptr))__uaccess_mask_range_ptr(ptr, size))
+static inline void __user *__uaccess_mask_range_ptr(const void __user *ptr,
+						    size_t size)
+{
+	void __user *safe_ptr = (void __user *)ptr;
+	unsigned long tmp;
+
+	asm volatile(
+	"	sub	%1, %3, #1\n"
+	"	subs	%1, %1, %0\n"
+	"	addhs	%1, %1, #1\n"
+	"	subhss	%1, %1, %2\n"
+	"	movlo	%0, #0\n"
+	: "+r" (safe_ptr), "=&r" (tmp)
+	: "r" (size), "r" (current_thread_info()->addr_limit)
+	: "cc");
+
+	csdb();
+	return safe_ptr;
+}
+
 /*
  * Single-value transfer routines.  They automatically use the right
  * size if we just have the right pointer type.  Note that the functions

@@ -191,7 +232,7 @@ extern int __get_user_64t_4(void *);
 	({								\
 		unsigned long __limit = current_thread_info()->addr_limit - 1; \
 		register const typeof(*(p)) __user *__p asm("r0") = (p);\
-		register typeof(x) __r2 asm("r2");			\
+		register __inttype(x) __r2 asm("r2");			\
 		register unsigned long __l asm("r1") = __limit;		\
 		register int __e asm("r0");				\
 		unsigned int __ua_flags = uaccess_save_and_enable();	\

@@ -238,49 +279,23 @@ extern int __put_user_2(void *, unsigned int);
 extern int __put_user_4(void *, unsigned int);
 extern int __put_user_8(void *, unsigned long long);

-#define __put_user_x(__r2, __p, __e, __l, __s)				\
-	   __asm__ __volatile__ (					\
-		__asmeq("%0", "r0") __asmeq("%2", "r2")			\
-		__asmeq("%3", "r1")					\
-		"bl	__put_user_" #__s				\
-		: "=&r" (__e)						\
-		: "0" (__p), "r" (__r2), "r" (__l)			\
-		: "ip", "lr", "cc")
-
-#define __put_user_check(x, p)						\
+#define __put_user_check(__pu_val, __ptr, __err, __s)			\
 	({								\
 		unsigned long __limit = current_thread_info()->addr_limit - 1; \
-		const typeof(*(p)) __user *__tmp_p = (p);		\
-		register typeof(*(p)) __r2 asm("r2") = (x);		\
-		register const typeof(*(p)) __user *__p asm("r0") = __tmp_p; \
+		register typeof(__pu_val) __r2 asm("r2") = __pu_val;	\
+		register const void __user *__p asm("r0") = __ptr;	\
 		register unsigned long __l asm("r1") = __limit;		\
 		register int __e asm("r0");				\
-		unsigned int __ua_flags = uaccess_save_and_enable();	\
-		switch (sizeof(*(__p))) {				\
-		case 1:							\
-			__put_user_x(__r2, __p, __e, __l, 1);		\
-			break;						\
-		case 2:							\
-			__put_user_x(__r2, __p, __e, __l, 2);		\
-			break;						\
-		case 4:							\
-			__put_user_x(__r2, __p, __e, __l, 4);		\
-			break;						\
-		case 8:							\
-			__put_user_x(__r2, __p, __e, __l, 8);		\
-			break;						\
-		default: __e = __put_user_bad(); break;			\
-		}							\
-		uaccess_restore(__ua_flags);				\
-		__e;							\
+		__asm__ __volatile__ (					\
+			__asmeq("%0", "r0") __asmeq("%2", "r2")		\
+			__asmeq("%3", "r1")				\
+			"bl	__put_user_" #__s			\
+			: "=&r" (__e)					\
+			: "0" (__p), "r" (__r2), "r" (__l)		\
+			: "ip", "lr", "cc");				\
+		__err = __e;						\
 	})

-#define put_user(x, p)							\
-	({								\
-		might_fault();						\
-		__put_user_check(x, p);					\
-	 })
-
 #else /* CONFIG_MMU */

 /*

@@ -298,7 +313,7 @@ static inline void set_fs(mm_segment_t fs)
 }

 #define get_user(x, p)	__get_user(x, p)
-#define put_user(x, p)	__put_user(x, p)
+#define __put_user_check __put_user_nocheck

 #endif /* CONFIG_MMU */

@@ -307,6 +322,16 @@ static inline void set_fs(mm_segment_t fs)
 #define user_addr_max() \
 	(segment_eq(get_fs(), KERNEL_DS) ? ~0UL : get_fs())

+#ifdef CONFIG_CPU_SPECTRE
+/*
+ * When mitigating Spectre variant 1, it is not worth fixing the non-
+ * verifying accessors, because we need to add verification of the
+ * address space there.  Force these to use the standard get_user()
+ * version instead.
+ */
+#define __get_user(x, ptr) get_user(x, ptr)
+#else
+
 /*
  * The "__xxx" versions of the user access functions do not verify the
  * address space - it must have been done previously with a separate

@@ -323,12 +348,6 @@ static inline void set_fs(mm_segment_t fs)
 		__gu_err;						\
 	})

-#define __get_user_error(x, ptr, err)					\
-	({								\
-		__get_user_err((x), (ptr), err);			\
-		(void) 0;						\
-	})
-
 #define __get_user_err(x, ptr, err)					\
 do {									\
 	unsigned long __gu_addr = (unsigned long)(ptr);			\

@@ -388,37 +407,58 @@ do {									\

 #define __get_user_asm_word(x, addr, err)			\
 	__get_user_asm(x, addr, err, ldr)
+#endif

-#define __put_user(x, ptr)						\
+#define __put_user_switch(x, ptr, __err, __fn)				\
+	do {								\
+		const __typeof__(*(ptr)) __user *__pu_ptr = (ptr);	\
+		__typeof__(*(ptr)) __pu_val = (x);			\
+		unsigned int __ua_flags;				\
+		might_fault();						\
+		__ua_flags = uaccess_save_and_enable();			\
+		switch (sizeof(*(ptr))) {				\
+		case 1: __fn(__pu_val, __pu_ptr, __err, 1); break;	\
+		case 2:	__fn(__pu_val, __pu_ptr, __err, 2); break;	\
+		case 4:	__fn(__pu_val, __pu_ptr, __err, 4); break;	\
+		case 8:	__fn(__pu_val, __pu_ptr, __err, 8); break;	\
+		default: __err = __put_user_bad(); break;		\
+		}							\
+		uaccess_restore(__ua_flags);				\
+	} while (0)
+
+#define put_user(x, ptr)						\
 	({								\
-		long __pu_err = 0;					\
-		__put_user_err((x), (ptr), __pu_err);			\
+		int __pu_err = 0;					\
+		__put_user_switch((x), (ptr), __pu_err, __put_user_check); \
 		__pu_err;						\
 	})

-#define __put_user_error(x, ptr, err)					\
+#ifdef CONFIG_CPU_SPECTRE
+/*
+ * When mitigating Spectre variant 1.1, all accessors need to include
+ * verification of the address space.
+ */
+#define __put_user(x, ptr) put_user(x, ptr)
+
+#else
+#define __put_user(x, ptr)						\
 	({								\
-		__put_user_err((x), (ptr), err);			\
-		(void) 0;						\
+		long __pu_err = 0;					\
+		__put_user_switch((x), (ptr), __pu_err, __put_user_nocheck); \
+		__pu_err;						\
 	})

-#define __put_user_err(x, ptr, err)					\
-do {									\
-	unsigned long __pu_addr = (unsigned long)(ptr);			\
-	unsigned int __ua_flags;					\
-	__typeof__(*(ptr)) __pu_val = (x);				\
-	__chk_user_ptr(ptr);						\
-	might_fault();							\
-	__ua_flags = uaccess_save_and_enable();				\
-	switch (sizeof(*(ptr))) {					\
-	case 1: __put_user_asm_byte(__pu_val, __pu_addr, err);	break;	\
-	case 2: __put_user_asm_half(__pu_val, __pu_addr, err);	break;	\
-	case 4: __put_user_asm_word(__pu_val, __pu_addr, err);	break;	\
-	case 8:	__put_user_asm_dword(__pu_val, __pu_addr, err);	break;	\
-	default: __put_user_bad();					\
-	}								\
-	uaccess_restore(__ua_flags);					\
-} while (0)
+#define __put_user_nocheck(x, __pu_ptr, __err, __size)			\
+	do {								\
+		unsigned long __pu_addr = (unsigned long)__pu_ptr;	\
+		__put_user_nocheck_##__size(x, __pu_addr, __err);	\
+	} while (0)
+
+#define __put_user_nocheck_1 __put_user_asm_byte
+#define __put_user_nocheck_2 __put_user_asm_half
+#define __put_user_nocheck_4 __put_user_asm_word
+#define __put_user_nocheck_8 __put_user_asm_dword

 #define __put_user_asm(x, __pu_addr, err, instr)		\
 	__asm__ __volatile__(					\

@@ -488,6 +528,7 @@ do {									\
 	: "r" (x), "i" (-EFAULT)				\
 	: "cc")

+#endif /* !CONFIG_CPU_SPECTRE */

 #ifdef CONFIG_MMU
 extern unsigned long __must_check
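Elsewhere in this series the new uaccess_mask_range_ptr() helper is used to
sanitise a user frame pointer before the VFP state is stored with the
non-verifying copy routines (arch/arm/kernel/signal.c). The pattern, in an
illustrative sketch with a made-up frame layout:

	/* Illustrative only: 'struct frame' is a stand-in, not a kernel type. */
	struct frame { unsigned long regs[32]; };

	int save_to_user(struct frame __user *uframe, const struct frame *kframe)
	{
		struct frame __user *safe =
			uaccess_mask_range_ptr(uframe, sizeof(*uframe));

		/* 'safe' becomes NULL when uframe+size exceeds addr_limit, so a
		 * mispredicted access_ok() cannot forward a bad pointer here. */
		return __copy_to_user(safe, kframe, sizeof(*kframe));
	}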
@@ -30,6 +30,7 @@ else
 obj-y		+= entry-armv.o
 endif

+obj-$(CONFIG_MMU)		+= bugs.o
 obj-$(CONFIG_CPU_IDLE)		+= cpuidle.o
 obj-$(CONFIG_ISA_DMA_API)	+= dma.o
 obj-$(CONFIG_FIQ)		+= fiq.o fiqasm.o
arch/arm/kernel/bugs.c (new file, 18 lines)

@@ -0,0 +1,18 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/init.h>
#include <asm/bugs.h>
#include <asm/proc-fns.h>

void check_other_bugs(void)
{
#ifdef MULTI_CPU
	if (cpu_check_bugs)
		cpu_check_bugs();
#endif
}

void __init check_bugs(void)
{
	check_writebuffer_bugs();
	check_other_bugs();
}
@@ -233,9 +233,7 @@ local_restart:
 	tst	r10, #_TIF_SYSCALL_WORK		@ are we tracing syscalls?
 	bne	__sys_trace

-	cmp	scno, #NR_syscalls		@ check upper syscall limit
-	badr	lr, ret_fast_syscall		@ return address
-	ldrcc	pc, [tbl, scno, lsl #2]		@ call sys_* routine
+	invoke_syscall tbl, scno, r10, ret_fast_syscall

 	add	r1, sp, #S_OFF
 2:	cmp	scno, #(__ARM_NR_BASE - __NR_SYSCALL_BASE)
@@ -268,27 +266,20 @@ __sys_trace:
 	mov	r1, scno
 	add	r0, sp, #S_OFF
 	bl	syscall_trace_enter
-
-	badr	lr, __sys_trace_return		@ return address
-	mov	scno, r0			@ syscall number (possibly new)
-	add	r1, sp, #S_R0 + S_OFF		@ pointer to regs
-	cmp	scno, #NR_syscalls		@ check upper syscall limit
-	ldmccia	r1, {r0 - r6}			@ have to reload r0 - r6
-	stmccia	sp, {r4, r5}			@ and update the stack args
-	ldrcc	pc, [tbl, scno, lsl #2]		@ call sys_* routine
+	mov	scno, r0
+	invoke_syscall tbl, scno, r10, __sys_trace_return, reload=1
 	cmp	scno, #-1			@ skip the syscall?
 	bne	2b
 	add	sp, sp, #S_OFF			@ restore stack
 	b	ret_slow_syscall

-__sys_trace_return:
-	str	r0, [sp, #S_R0 + S_OFF]!	@ save returned r0
+__sys_trace_return_nosave:
+	enable_irq_notrace
 	mov	r0, sp
 	bl	syscall_trace_exit
 	b	ret_slow_syscall

-__sys_trace_return_nosave:
-	enable_irq_notrace
+__sys_trace_return:
+	str	r0, [sp, #S_R0 + S_OFF]!	@ save returned r0
 	mov	r0, sp
 	bl	syscall_trace_exit
 	b	ret_slow_syscall
@@ -327,6 +318,10 @@ sys_syscall:
 		bic	scno, r0, #__NR_OABI_SYSCALL_BASE
 		cmp	scno, #__NR_syscall - __NR_SYSCALL_BASE
 		cmpne	scno, #NR_syscalls	@ check range
+#ifdef CONFIG_CPU_SPECTRE
+		movhs	scno, #0
+		csdb
+#endif
 		stmloia	sp, {r5, r6}		@ shuffle args
 		movlo	r0, r1
 		movlo	r1, r2
|
@ -373,6 +373,31 @@
#endif
.endm

.macro invoke_syscall, table, nr, tmp, ret, reload=0
#ifdef CONFIG_CPU_SPECTRE
mov \tmp, \nr
cmp \tmp, #NR_syscalls @ check upper syscall limit
movcs \tmp, #0
csdb
badr lr, \ret @ return address
.if \reload
add r1, sp, #S_R0 + S_OFF @ pointer to regs
ldmccia r1, {r0 - r6} @ reload r0-r6
stmccia sp, {r4, r5} @ update stack arguments
.endif
ldrcc pc, [\table, \tmp, lsl #2] @ call sys_* routine
#else
cmp \nr, #NR_syscalls @ check upper syscall limit
badr lr, \ret @ return address
.if \reload
add r1, sp, #S_R0 + S_OFF @ pointer to regs
ldmccia r1, {r0 - r6} @ reload r0-r6
stmccia sp, {r4, r5} @ update stack arguments
.endif
ldrcc pc, [\table, \nr, lsl #2] @ call sys_* routine
#endif
.endm
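Under CONFIG_CPU_SPECTRE the macro copies the syscall number aside, forces any
out-of-range value to zero with a conditional move, and issues csdb so a
misspeculated bounds check cannot be used to index past the table; the
architectural call is still skipped for bad numbers by the conditional ldrcc.
A rough C rendering of the clamp-then-barrier idea (illustrative only; a plain
compiler barrier merely stands in for the csdb speculation barrier here):

static inline unsigned long demo_clamp_index(unsigned long nr,
					     unsigned long limit)
{
	if (nr >= limit)
		nr = 0;				/* movcs \tmp, #0 */
	__asm__ __volatile__("" ::: "memory");	/* stand-in for csdb */
	return nr;
}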

/*
 * These are the registers used in the syscall handler, and allow us to
 * have in theory up to 7 arguments to a function - r0 to r6.

@ -122,6 +122,9 @@ __mmap_switched_data:
.long init_thread_union + THREAD_START_SP @ sp
.size __mmap_switched_data, . - __mmap_switched_data

__FINIT
.text

/*
 * This provides a C-API version of __lookup_processor_type
 */

@ -133,9 +136,6 @@ ENTRY(lookup_processor_type)
ldmfd sp!, {r4 - r6, r9, pc}
ENDPROC(lookup_processor_type)

__FINIT
.text

/*
 * Read processor ID register (CP#15, CR0), and look up in the linker-built
 * supported processor list. Note that we can't use the absolute addresses

@ -122,6 +122,11 @@ EXPORT_SYMBOL(cold_boot);

#ifdef MULTI_CPU
struct processor processor __read_mostly;
#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
struct processor *cpu_vtable[NR_CPUS] = {
[0] = &processor,
};
#endif
#endif
#ifdef MULTI_TLB
struct cpu_tlb_fns cpu_tlb __read_mostly;

@ -608,28 +613,33 @@ static void __init smp_build_mpidr_hash(void)
}
#endif

/*
 * locate processor in the list of supported processor types. The linker
 * builds this table for us from the entries in arch/arm/mm/proc-*.S
 */
struct proc_info_list *lookup_processor(u32 midr)
{
struct proc_info_list *list = lookup_processor_type(midr);

if (!list) {
pr_err("CPU%u: configuration botched (ID %08x), CPU halted\n",
smp_processor_id(), midr);
while (1)
/* can't use cpu_relax() here as it may require MMU setup */;
}

return list;
}

static void __init setup_processor(void)
{
struct proc_info_list *list;

/*
 * locate processor in the list of supported processor
 * types. The linker builds this table for us from the
 * entries in arch/arm/mm/proc-*.S
 */
list = lookup_processor_type(read_cpuid_id());
if (!list) {
pr_err("CPU configuration botched (ID %08x), unable to continue.\n",
read_cpuid_id());
while (1);
}
unsigned int midr = read_cpuid_id();
struct proc_info_list *list = lookup_processor(midr);

cpu_name = list->cpu_name;
__cpu_architecture = __get_cpu_architecture();

#ifdef MULTI_CPU
processor = *list->proc;
#endif
init_proc_vtable(list->proc);
#ifdef MULTI_TLB
cpu_tlb = *list->tlb;
#endif

@ -641,7 +651,7 @@ static void __init setup_processor(void)
#endif

pr_info("CPU: %s [%08x] revision %d (ARMv%s), cr=%08lx\n",
cpu_name, read_cpuid_id(), read_cpuid_id() & 15,
list->cpu_name, midr, midr & 15,
proc_arch[cpu_architecture()], get_cr());

snprintf(init_utsname()->machine, __NEW_UTS_LEN + 1, "%s%c",

@ -95,34 +95,34 @@ static int restore_iwmmxt_context(struct iwmmxt_sigframe *frame)

static int preserve_vfp_context(struct vfp_sigframe __user *frame)
{
const unsigned long magic = VFP_MAGIC;
const unsigned long size = VFP_STORAGE_SIZE;
struct vfp_sigframe kframe;
int err = 0;

__put_user_error(magic, &frame->magic, err);
__put_user_error(size, &frame->size, err);
memset(&kframe, 0, sizeof(kframe));
kframe.magic = VFP_MAGIC;
kframe.size = VFP_STORAGE_SIZE;

err = vfp_preserve_user_clear_hwstate(&kframe.ufp, &kframe.ufp_exc);
if (err)
return -EFAULT;
return err;

return vfp_preserve_user_clear_hwstate(&frame->ufp, &frame->ufp_exc);
return __copy_to_user(frame, &kframe, sizeof(kframe));
}

static int restore_vfp_context(struct vfp_sigframe __user *frame)
static int restore_vfp_context(struct vfp_sigframe __user *auxp)
{
unsigned long magic;
unsigned long size;
int err = 0;
struct vfp_sigframe frame;
int err;

__get_user_error(magic, &frame->magic, err);
__get_user_error(size, &frame->size, err);
err = __copy_from_user(&frame, (char __user *) auxp, sizeof(frame));

if (err)
return -EFAULT;
if (magic != VFP_MAGIC || size != VFP_STORAGE_SIZE)
return err;

if (frame.magic != VFP_MAGIC || frame.size != VFP_STORAGE_SIZE)
return -EINVAL;

return vfp_restore_user_hwstate(&frame->ufp, &frame->ufp_exc);
return vfp_restore_user_hwstate(&frame.ufp, &frame.ufp_exc);
}

#endif
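Both helpers now follow the same bounce-buffer pattern: build or parse a
kernel-local struct vfp_sigframe and touch user memory only through one bulk
__copy_to_user()/__copy_from_user() (which are hardened centrally), instead of
many discrete __put_user_error()/__get_user_error() accesses. A minimal
kernel-style sketch of the save side, assuming <linux/uaccess.h> and
<linux/string.h>; the demo names and the magic value are illustrative, and
__copy_to_user() returns the number of bytes it could not copy:

struct demo_frame {
	unsigned long magic;
	unsigned long size;
};

static int demo_preserve(struct demo_frame __user *uframe)
{
	struct demo_frame kframe;

	memset(&kframe, 0, sizeof(kframe));
	kframe.magic = 0x56465001UL;	/* illustrative magic */
	kframe.size = sizeof(kframe);

	/* one hardened user access instead of one per field */
	return __copy_to_user(uframe, &kframe, sizeof(kframe)) ? -EFAULT : 0;
}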
@ -142,6 +142,7 @@ struct rt_sigframe {

static int restore_sigframe(struct pt_regs *regs, struct sigframe __user *sf)
{
struct sigcontext context;
struct aux_sigframe __user *aux;
sigset_t set;
int err;

@ -150,23 +151,26 @@ static int restore_sigframe(struct pt_regs *regs, struct sigframe __user *sf)
if (err == 0)
set_current_blocked(&set);

__get_user_error(regs->ARM_r0, &sf->uc.uc_mcontext.arm_r0, err);
__get_user_error(regs->ARM_r1, &sf->uc.uc_mcontext.arm_r1, err);
__get_user_error(regs->ARM_r2, &sf->uc.uc_mcontext.arm_r2, err);
__get_user_error(regs->ARM_r3, &sf->uc.uc_mcontext.arm_r3, err);
__get_user_error(regs->ARM_r4, &sf->uc.uc_mcontext.arm_r4, err);
__get_user_error(regs->ARM_r5, &sf->uc.uc_mcontext.arm_r5, err);
__get_user_error(regs->ARM_r6, &sf->uc.uc_mcontext.arm_r6, err);
__get_user_error(regs->ARM_r7, &sf->uc.uc_mcontext.arm_r7, err);
__get_user_error(regs->ARM_r8, &sf->uc.uc_mcontext.arm_r8, err);
__get_user_error(regs->ARM_r9, &sf->uc.uc_mcontext.arm_r9, err);
__get_user_error(regs->ARM_r10, &sf->uc.uc_mcontext.arm_r10, err);
__get_user_error(regs->ARM_fp, &sf->uc.uc_mcontext.arm_fp, err);
__get_user_error(regs->ARM_ip, &sf->uc.uc_mcontext.arm_ip, err);
__get_user_error(regs->ARM_sp, &sf->uc.uc_mcontext.arm_sp, err);
__get_user_error(regs->ARM_lr, &sf->uc.uc_mcontext.arm_lr, err);
__get_user_error(regs->ARM_pc, &sf->uc.uc_mcontext.arm_pc, err);
__get_user_error(regs->ARM_cpsr, &sf->uc.uc_mcontext.arm_cpsr, err);
err |= __copy_from_user(&context, &sf->uc.uc_mcontext, sizeof(context));
if (err == 0) {
regs->ARM_r0 = context.arm_r0;
regs->ARM_r1 = context.arm_r1;
regs->ARM_r2 = context.arm_r2;
regs->ARM_r3 = context.arm_r3;
regs->ARM_r4 = context.arm_r4;
regs->ARM_r5 = context.arm_r5;
regs->ARM_r6 = context.arm_r6;
regs->ARM_r7 = context.arm_r7;
regs->ARM_r8 = context.arm_r8;
regs->ARM_r9 = context.arm_r9;
regs->ARM_r10 = context.arm_r10;
regs->ARM_fp = context.arm_fp;
regs->ARM_ip = context.arm_ip;
regs->ARM_sp = context.arm_sp;
regs->ARM_lr = context.arm_lr;
regs->ARM_pc = context.arm_pc;
regs->ARM_cpsr = context.arm_cpsr;
}

err |= !valid_user_regs(regs);

@ -254,30 +258,35 @@ static int
setup_sigframe(struct sigframe __user *sf, struct pt_regs *regs, sigset_t *set)
{
struct aux_sigframe __user *aux;
struct sigcontext context;
int err = 0;

__put_user_error(regs->ARM_r0, &sf->uc.uc_mcontext.arm_r0, err);
__put_user_error(regs->ARM_r1, &sf->uc.uc_mcontext.arm_r1, err);
__put_user_error(regs->ARM_r2, &sf->uc.uc_mcontext.arm_r2, err);
__put_user_error(regs->ARM_r3, &sf->uc.uc_mcontext.arm_r3, err);
__put_user_error(regs->ARM_r4, &sf->uc.uc_mcontext.arm_r4, err);
__put_user_error(regs->ARM_r5, &sf->uc.uc_mcontext.arm_r5, err);
__put_user_error(regs->ARM_r6, &sf->uc.uc_mcontext.arm_r6, err);
__put_user_error(regs->ARM_r7, &sf->uc.uc_mcontext.arm_r7, err);
__put_user_error(regs->ARM_r8, &sf->uc.uc_mcontext.arm_r8, err);
__put_user_error(regs->ARM_r9, &sf->uc.uc_mcontext.arm_r9, err);
__put_user_error(regs->ARM_r10, &sf->uc.uc_mcontext.arm_r10, err);
__put_user_error(regs->ARM_fp, &sf->uc.uc_mcontext.arm_fp, err);
__put_user_error(regs->ARM_ip, &sf->uc.uc_mcontext.arm_ip, err);
__put_user_error(regs->ARM_sp, &sf->uc.uc_mcontext.arm_sp, err);
__put_user_error(regs->ARM_lr, &sf->uc.uc_mcontext.arm_lr, err);
__put_user_error(regs->ARM_pc, &sf->uc.uc_mcontext.arm_pc, err);
__put_user_error(regs->ARM_cpsr, &sf->uc.uc_mcontext.arm_cpsr, err);
context = (struct sigcontext) {
.arm_r0 = regs->ARM_r0,
.arm_r1 = regs->ARM_r1,
.arm_r2 = regs->ARM_r2,
.arm_r3 = regs->ARM_r3,
.arm_r4 = regs->ARM_r4,
.arm_r5 = regs->ARM_r5,
.arm_r6 = regs->ARM_r6,
.arm_r7 = regs->ARM_r7,
.arm_r8 = regs->ARM_r8,
.arm_r9 = regs->ARM_r9,
.arm_r10 = regs->ARM_r10,
.arm_fp = regs->ARM_fp,
.arm_ip = regs->ARM_ip,
.arm_sp = regs->ARM_sp,
.arm_lr = regs->ARM_lr,
.arm_pc = regs->ARM_pc,
.arm_cpsr = regs->ARM_cpsr,

__put_user_error(current->thread.trap_no, &sf->uc.uc_mcontext.trap_no, err);
__put_user_error(current->thread.error_code, &sf->uc.uc_mcontext.error_code, err);
__put_user_error(current->thread.address, &sf->uc.uc_mcontext.fault_address, err);
__put_user_error(set->sig[0], &sf->uc.uc_mcontext.oldmask, err);
.trap_no = current->thread.trap_no,
.error_code = current->thread.error_code,
.fault_address = current->thread.address,
.oldmask = set->sig[0],
};

err |= __copy_to_user(&sf->uc.uc_mcontext, &context, sizeof(context));

err |= __copy_to_user(&sf->uc.uc_sigmask, set, sizeof(*set));

@ -294,7 +303,7 @@ setup_sigframe(struct sigframe __user *sf, struct pt_regs *regs, sigset_t *set)
if (err == 0)
err |= preserve_vfp_context(&aux->vfp);
#endif
__put_user_error(0, &aux->end_magic, err);
err |= __put_user(0, &aux->end_magic);

return err;
}

@ -426,7 +435,7 @@ setup_frame(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs)
/*
 * Set uc.uc_flags to a value which sc.trap_no would never have.
 */
__put_user_error(0x5ac3c35a, &frame->uc.uc_flags, err);
err = __put_user(0x5ac3c35a, &frame->uc.uc_flags);

err |= setup_sigframe(frame, regs, set);
if (err == 0)

@ -446,8 +455,8 @@ setup_rt_frame(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs)

err |= copy_siginfo_to_user(&frame->info, &ksig->info);

__put_user_error(0, &frame->sig.uc.uc_flags, err);
__put_user_error(NULL, &frame->sig.uc.uc_link, err);
err |= __put_user(0, &frame->sig.uc.uc_flags);
err |= __put_user(NULL, &frame->sig.uc.uc_link);

err |= __save_altstack(&frame->sig.uc.uc_stack, regs->ARM_sp);
err |= setup_sigframe(&frame->sig, regs, set);

@ -27,8 +27,10 @@
#include <linux/completion.h>
#include <linux/cpufreq.h>
#include <linux/irq_work.h>
#include <linux/slab.h>

#include <linux/atomic.h>
#include <asm/bugs.h>
#include <asm/smp.h>
#include <asm/cacheflush.h>
#include <asm/cpu.h>

@ -39,6 +41,7 @@
#include <asm/mmu_context.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
#include <asm/procinfo.h>
#include <asm/processor.h>
#include <asm/sections.h>
#include <asm/tlbflush.h>

@ -95,6 +98,30 @@ static unsigned long get_arch_pgd(pgd_t *pgd)
#endif
}

#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
static int secondary_biglittle_prepare(unsigned int cpu)
{
if (!cpu_vtable[cpu])
cpu_vtable[cpu] = kzalloc(sizeof(*cpu_vtable[cpu]), GFP_KERNEL);

return cpu_vtable[cpu] ? 0 : -ENOMEM;
}

static void secondary_biglittle_init(void)
{
init_proc_vtable(lookup_processor(read_cpuid_id())->proc);
}
#else
static int secondary_biglittle_prepare(unsigned int cpu)
{
return 0;
}

static void secondary_biglittle_init(void)
{
}
#endif

int __cpu_up(unsigned int cpu, struct task_struct *idle)
{
int ret;

@ -102,6 +129,10 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
if (!smp_ops.smp_boot_secondary)
return -ENOSYS;

ret = secondary_biglittle_prepare(cpu);
if (ret)
return ret;

/*
 * We need to tell the secondary core where to find
 * its stack and the page tables.

@ -353,6 +384,8 @@ asmlinkage void secondary_start_kernel(void)
struct mm_struct *mm = &init_mm;
unsigned int cpu;

secondary_biglittle_init();

/*
 * The identity mapping is uncached (strongly ordered), so
 * switch away from it before attempting any exclusive accesses.

@ -396,6 +429,9 @@ asmlinkage void secondary_start_kernel(void)
 * before we continue - which happens after __cpu_up returns.
 */
set_cpu_online(cpu, true);

check_other_bugs();

complete(&cpu_running);

local_irq_enable();

@ -1,6 +1,7 @@
#include <linux/init.h>
#include <linux/slab.h>

#include <asm/bugs.h>
#include <asm/cacheflush.h>
#include <asm/idmap.h>
#include <asm/pgalloc.h>

@ -34,6 +35,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
cpu_switch_mm(mm->pgd, mm);
local_flush_bp_all();
local_flush_tlb_all();
check_other_bugs();
}

return ret;

@ -276,6 +276,7 @@ asmlinkage long sys_oabi_epoll_wait(int epfd,
int maxevents, int timeout)
{
struct epoll_event *kbuf;
struct oabi_epoll_event e;
mm_segment_t fs;
long ret, err, i;

@ -294,8 +295,11 @@ asmlinkage long sys_oabi_epoll_wait(int epfd,
set_fs(fs);
err = 0;
for (i = 0; i < ret; i++) {
__put_user_error(kbuf[i].events, &events->events, err);
__put_user_error(kbuf[i].data, &events->data, err);
e.events = kbuf[i].events;
e.data = kbuf[i].data;
err = __copy_to_user(events, &e, sizeof(e));
if (err)
break;
events++;
}
kfree(kbuf);

@ -328,9 +332,11 @@ asmlinkage long sys_oabi_semtimedop(int semid,
return -ENOMEM;
err = 0;
for (i = 0; i < nsops; i++) {
__get_user_error(sops[i].sem_num, &tsops->sem_num, err);
__get_user_error(sops[i].sem_op, &tsops->sem_op, err);
__get_user_error(sops[i].sem_flg, &tsops->sem_flg, err);
struct oabi_sembuf osb;
err |= __copy_from_user(&osb, tsops, sizeof(osb));
sops[i].sem_num = osb.sem_num;
sops[i].sem_op = osb.sem_op;
sops[i].sem_flg = osb.sem_flg;
tsops++;
}
if (timeout) {

@ -90,6 +90,11 @@
.text

ENTRY(arm_copy_from_user)
#ifdef CONFIG_CPU_SPECTRE
get_thread_info r3
ldr r3, [r3, #TI_ADDR_LIMIT]
uaccess_mask_range_ptr r1, r2, r3, ip
#endif

#include "copy_template.S"
@ -602,6 +602,28 @@ static void __init imx6_pm_common_init(const struct imx6_pm_socdata
IMX6Q_GPR1_GINT);
}

static void imx6_pm_stby_poweroff(void)
{
imx6_set_lpm(STOP_POWER_OFF);
imx6q_suspend_finish(0);

mdelay(1000);

pr_emerg("Unable to poweroff system\n");
}

static int imx6_pm_stby_poweroff_probe(void)
{
if (pm_power_off) {
pr_warn("%s: pm_power_off already claimed %p %pf!\n",
__func__, pm_power_off, pm_power_off);
return -EBUSY;
}

pm_power_off = imx6_pm_stby_poweroff;
return 0;
}

void __init imx6_pm_ccm_init(const char *ccm_compat)
{
struct device_node *np;

@ -618,6 +640,9 @@ void __init imx6_pm_ccm_init(const char *ccm_compat)
val = readl_relaxed(ccm_base + CLPCR);
val &= ~BM_CLPCR_LPM;
writel_relaxed(val, ccm_base + CLPCR);

if (of_property_read_bool(np, "fsl,pmic-stby-poweroff"))
imx6_pm_stby_poweroff_probe();
}

void __init imx6q_pm_init(void)

@ -396,6 +396,7 @@ config CPU_V7
select CPU_CP15_MPU if !MMU
select CPU_HAS_ASID if MMU
select CPU_PABRT_V7
select CPU_SPECTRE if MMU
select CPU_TLB_V7 if MMU

# ARMv7M

@ -793,6 +794,28 @@ config CPU_BPREDICT_DISABLE
help
Say Y here to disable branch prediction. If unsure, say N.

config CPU_SPECTRE
bool

config HARDEN_BRANCH_PREDICTOR
bool "Harden the branch predictor against aliasing attacks" if EXPERT
depends on CPU_SPECTRE
default y
help
Speculation attacks against some high-performance processors rely
on being able to manipulate the branch predictor for a victim
context by executing aliasing branches in the attacker context.
Such attacks can be partially mitigated against by clearing
internal branch predictor state and limiting the prediction
logic in some situations.

This config option will take CPU-specific actions to harden
the branch predictor against aliasing attacks and may rely on
specific instruction sequences or control bits being set by
the system firmware.

If unsure, say Y.

config TLS_REG_EMUL
bool
select NEED_KUSER_HELPERS

@ -92,7 +92,7 @@ obj-$(CONFIG_CPU_MOHAWK) += proc-mohawk.o
obj-$(CONFIG_CPU_FEROCEON) += proc-feroceon.o
obj-$(CONFIG_CPU_V6) += proc-v6.o
obj-$(CONFIG_CPU_V6K) += proc-v6.o
obj-$(CONFIG_CPU_V7) += proc-v7.o
obj-$(CONFIG_CPU_V7) += proc-v7.o proc-v7-bugs.o
obj-$(CONFIG_CPU_V7M) += proc-v7m.o

AFLAGS_proc-v6.o :=-Wa,-march=armv6

@ -767,6 +767,36 @@ do_alignment_t32_to_handler(unsigned long *pinstr, struct pt_regs *regs,
return NULL;
}

static int alignment_get_arm(struct pt_regs *regs, u32 *ip, unsigned long *inst)
{
u32 instr = 0;
int fault;

if (user_mode(regs))
fault = get_user(instr, ip);
else
fault = probe_kernel_address(ip, instr);

*inst = __mem_to_opcode_arm(instr);

return fault;
}

static int alignment_get_thumb(struct pt_regs *regs, u16 *ip, u16 *inst)
{
u16 instr = 0;
int fault;

if (user_mode(regs))
fault = get_user(instr, ip);
else
fault = probe_kernel_address(ip, instr);

*inst = __mem_to_opcode_thumb16(instr);

return fault;
}

static int
do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
{

@ -774,10 +804,10 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
unsigned long instr = 0, instrptr;
int (*handler)(unsigned long addr, unsigned long instr, struct pt_regs *regs);
unsigned int type;
unsigned int fault;
u16 tinstr = 0;
int isize = 4;
int thumb2_32b = 0;
int fault;

if (interrupts_enabled(regs))
local_irq_enable();

@ -786,15 +816,14 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)

if (thumb_mode(regs)) {
u16 *ptr = (u16 *)(instrptr & ~1);
fault = probe_kernel_address(ptr, tinstr);
tinstr = __mem_to_opcode_thumb16(tinstr);

fault = alignment_get_thumb(regs, ptr, &tinstr);
if (!fault) {
if (cpu_architecture() >= CPU_ARCH_ARMv7 &&
IS_T32(tinstr)) {
/* Thumb-2 32-bit */
u16 tinst2 = 0;
fault = probe_kernel_address(ptr + 1, tinst2);
tinst2 = __mem_to_opcode_thumb16(tinst2);
u16 tinst2;
fault = alignment_get_thumb(regs, ptr + 1, &tinst2);
instr = __opcode_thumb32_compose(tinstr, tinst2);
thumb2_32b = 1;
} else {

@ -803,8 +832,7 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
}
}
} else {
fault = probe_kernel_address((void *)instrptr, instr);
instr = __mem_to_opcode_arm(instr);
fault = alignment_get_arm(regs, (void *)instrptr, &instr);
}

if (fault) {

@ -163,6 +163,9 @@ __do_user_fault(struct task_struct *tsk, unsigned long addr,
{
struct siginfo si;

if (addr > TASK_SIZE)
harden_branch_predictor();

#ifdef CONFIG_DEBUG_USER
if (((user_debug & UDBG_SEGV) && (sig == SIGSEGV)) ||
((user_debug & UDBG_BUS) && (sig == SIGBUS))) {

@ -258,13 +258,21 @@
mcr p15, 0, ip, c7, c10, 4 @ data write barrier
.endm

.macro define_processor_functions name:req, dabort:req, pabort:req, nommu=0, suspend=0
.macro define_processor_functions name:req, dabort:req, pabort:req, nommu=0, suspend=0, bugs=0
/*
 * If we are building for big.Little with branch predictor hardening,
 * we need the processor function tables to remain available after boot.
 */
#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
.section ".rodata"
#endif
.type \name\()_processor_functions, #object
.align 2
ENTRY(\name\()_processor_functions)
.word \dabort
.word \pabort
.word cpu_\name\()_proc_init
.word \bugs
.word cpu_\name\()_proc_fin
.word cpu_\name\()_reset
.word cpu_\name\()_do_idle

@ -293,6 +301,9 @@ ENTRY(\name\()_processor_functions)
.endif

.size \name\()_processor_functions, . - \name\()_processor_functions
#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
.previous
#endif
.endm

.macro define_cache_functions name:req

@ -41,11 +41,6 @@
 * even on Cortex-A8 revisions not affected by 430973.
 * If IBE is not set, the flush BTAC/BTB won't do anything.
 */
ENTRY(cpu_ca8_switch_mm)
#ifdef CONFIG_MMU
mov r2, #0
mcr p15, 0, r2, c7, c5, 6 @ flush BTAC/BTB
#endif
ENTRY(cpu_v7_switch_mm)
#ifdef CONFIG_MMU
mmid r1, r1 @ get mm->context.id

@ -66,7 +61,6 @@ ENTRY(cpu_v7_switch_mm)
#endif
bx lr
ENDPROC(cpu_v7_switch_mm)
ENDPROC(cpu_ca8_switch_mm)

/*
 * cpu_v7_set_pte_ext(ptep, pte)

161
arch/arm/mm/proc-v7-bugs.c
Normal file

@ -0,0 +1,161 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/arm-smccc.h>
#include <linux/kernel.h>
#include <linux/psci.h>
#include <linux/smp.h>

#include <asm/cp15.h>
#include <asm/cputype.h>
#include <asm/proc-fns.h>
#include <asm/system_misc.h>

#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);

extern void cpu_v7_iciallu_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
extern void cpu_v7_bpiall_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
extern void cpu_v7_smc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
extern void cpu_v7_hvc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);

static void harden_branch_predictor_bpiall(void)
{
write_sysreg(0, BPIALL);
}

static void harden_branch_predictor_iciallu(void)
{
write_sysreg(0, ICIALLU);
}

static void __maybe_unused call_smc_arch_workaround_1(void)
{
arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
}

static void __maybe_unused call_hvc_arch_workaround_1(void)
{
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
}

static void cpu_v7_spectre_init(void)
{
const char *spectre_v2_method = NULL;
int cpu = smp_processor_id();

if (per_cpu(harden_branch_predictor_fn, cpu))
return;

switch (read_cpuid_part()) {
case ARM_CPU_PART_CORTEX_A8:
case ARM_CPU_PART_CORTEX_A9:
case ARM_CPU_PART_CORTEX_A12:
case ARM_CPU_PART_CORTEX_A17:
case ARM_CPU_PART_CORTEX_A73:
case ARM_CPU_PART_CORTEX_A75:
per_cpu(harden_branch_predictor_fn, cpu) =
harden_branch_predictor_bpiall;
spectre_v2_method = "BPIALL";
break;

case ARM_CPU_PART_CORTEX_A15:
case ARM_CPU_PART_BRAHMA_B15:
per_cpu(harden_branch_predictor_fn, cpu) =
harden_branch_predictor_iciallu;
spectre_v2_method = "ICIALLU";
break;

#ifdef CONFIG_ARM_PSCI
default:
/* Other ARM CPUs require no workaround */
if (read_cpuid_implementor() == ARM_CPU_IMP_ARM)
break;
/* fallthrough */
/* Cortex A57/A72 require firmware workaround */
case ARM_CPU_PART_CORTEX_A57:
case ARM_CPU_PART_CORTEX_A72: {
struct arm_smccc_res res;

if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
break;

switch (psci_ops.conduit) {
case PSCI_CONDUIT_HVC:
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 != 0)
break;
per_cpu(harden_branch_predictor_fn, cpu) =
call_hvc_arch_workaround_1;
cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
spectre_v2_method = "hypervisor";
break;

case PSCI_CONDUIT_SMC:
arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 != 0)
break;
per_cpu(harden_branch_predictor_fn, cpu) =
call_smc_arch_workaround_1;
cpu_do_switch_mm = cpu_v7_smc_switch_mm;
spectre_v2_method = "firmware";
break;

default:
break;
}
}
#endif
}

if (spectre_v2_method)
pr_info("CPU%u: Spectre v2: using %s workaround\n",
smp_processor_id(), spectre_v2_method);
}
#else
static void cpu_v7_spectre_init(void)
{
}
#endif

static __maybe_unused bool cpu_v7_check_auxcr_set(bool *warned,
u32 mask, const char *msg)
{
u32 aux_cr;

asm("mrc p15, 0, %0, c1, c0, 1" : "=r" (aux_cr));

if ((aux_cr & mask) != mask) {
if (!*warned)
pr_err("CPU%u: %s", smp_processor_id(), msg);
*warned = true;
return false;
}
return true;
}

static DEFINE_PER_CPU(bool, spectre_warned);

static bool check_spectre_auxcr(bool *warned, u32 bit)
{
return IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR) &&
cpu_v7_check_auxcr_set(warned, bit,
"Spectre v2: firmware did not set auxiliary control register IBE bit, system vulnerable\n");
}

void cpu_v7_ca8_ibe(void)
{
if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6)))
cpu_v7_spectre_init();
}

void cpu_v7_ca15_ibe(void)
{
if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0)))
cpu_v7_spectre_init();
}

void cpu_v7_bugs_init(void)
{
cpu_v7_spectre_init();
}

@ -9,6 +9,7 @@
 *
 * This is the "shell" of the ARMv7 processor support.
 */
#include <linux/arm-smccc.h>
#include <linux/init.h>
#include <linux/linkage.h>
#include <asm/assembler.h>

@ -87,6 +88,37 @@ ENTRY(cpu_v7_dcache_clean_area)
ret lr
ENDPROC(cpu_v7_dcache_clean_area)

#ifdef CONFIG_ARM_PSCI
.arch_extension sec
ENTRY(cpu_v7_smc_switch_mm)
stmfd sp!, {r0 - r3}
movw r0, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1
movt r0, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1
smc #0
ldmfd sp!, {r0 - r3}
b cpu_v7_switch_mm
ENDPROC(cpu_v7_smc_switch_mm)
.arch_extension virt
ENTRY(cpu_v7_hvc_switch_mm)
stmfd sp!, {r0 - r3}
movw r0, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1
movt r0, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1
hvc #0
ldmfd sp!, {r0 - r3}
b cpu_v7_switch_mm
ENDPROC(cpu_v7_hvc_switch_mm)
#endif
ENTRY(cpu_v7_iciallu_switch_mm)
mov r3, #0
mcr p15, 0, r3, c7, c5, 0 @ ICIALLU
b cpu_v7_switch_mm
ENDPROC(cpu_v7_iciallu_switch_mm)
ENTRY(cpu_v7_bpiall_switch_mm)
mov r3, #0
mcr p15, 0, r3, c7, c5, 6 @ flush BTAC/BTB
b cpu_v7_switch_mm
ENDPROC(cpu_v7_bpiall_switch_mm)

string cpu_v7_name, "ARMv7 Processor"
.align

@ -152,31 +184,6 @@ ENTRY(cpu_v7_do_resume)
ENDPROC(cpu_v7_do_resume)
#endif

/*
 * Cortex-A8
 */
globl_equ cpu_ca8_proc_init, cpu_v7_proc_init
globl_equ cpu_ca8_proc_fin, cpu_v7_proc_fin
globl_equ cpu_ca8_reset, cpu_v7_reset
globl_equ cpu_ca8_do_idle, cpu_v7_do_idle
globl_equ cpu_ca8_dcache_clean_area, cpu_v7_dcache_clean_area
globl_equ cpu_ca8_set_pte_ext, cpu_v7_set_pte_ext
globl_equ cpu_ca8_suspend_size, cpu_v7_suspend_size
#ifdef CONFIG_ARM_CPU_SUSPEND
globl_equ cpu_ca8_do_suspend, cpu_v7_do_suspend
globl_equ cpu_ca8_do_resume, cpu_v7_do_resume
#endif

/*
 * Cortex-A9 processor functions
 */
globl_equ cpu_ca9mp_proc_init, cpu_v7_proc_init
globl_equ cpu_ca9mp_proc_fin, cpu_v7_proc_fin
globl_equ cpu_ca9mp_reset, cpu_v7_reset
globl_equ cpu_ca9mp_do_idle, cpu_v7_do_idle
globl_equ cpu_ca9mp_dcache_clean_area, cpu_v7_dcache_clean_area
globl_equ cpu_ca9mp_switch_mm, cpu_v7_switch_mm
globl_equ cpu_ca9mp_set_pte_ext, cpu_v7_set_pte_ext
.globl cpu_ca9mp_suspend_size
.equ cpu_ca9mp_suspend_size, cpu_v7_suspend_size + 4 * 2
#ifdef CONFIG_ARM_CPU_SUSPEND

@ -488,12 +495,79 @@ __v7_setup_stack:

__INITDATA

.weak cpu_v7_bugs_init

@ define struct processor (see <asm/proc-fns.h> and proc-macros.S)
define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
#ifndef CONFIG_ARM_LPAE
define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init

#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
@ generic v7 bpiall on context switch
globl_equ cpu_v7_bpiall_proc_init, cpu_v7_proc_init
globl_equ cpu_v7_bpiall_proc_fin, cpu_v7_proc_fin
globl_equ cpu_v7_bpiall_reset, cpu_v7_reset
globl_equ cpu_v7_bpiall_do_idle, cpu_v7_do_idle
globl_equ cpu_v7_bpiall_dcache_clean_area, cpu_v7_dcache_clean_area
globl_equ cpu_v7_bpiall_set_pte_ext, cpu_v7_set_pte_ext
globl_equ cpu_v7_bpiall_suspend_size, cpu_v7_suspend_size
#ifdef CONFIG_ARM_CPU_SUSPEND
globl_equ cpu_v7_bpiall_do_suspend, cpu_v7_do_suspend
globl_equ cpu_v7_bpiall_do_resume, cpu_v7_do_resume
#endif
define_processor_functions v7_bpiall, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init

#define HARDENED_BPIALL_PROCESSOR_FUNCTIONS v7_bpiall_processor_functions
#else
#define HARDENED_BPIALL_PROCESSOR_FUNCTIONS v7_processor_functions
#endif

#ifndef CONFIG_ARM_LPAE
@ Cortex-A8 - always needs bpiall switch_mm implementation
globl_equ cpu_ca8_proc_init, cpu_v7_proc_init
globl_equ cpu_ca8_proc_fin, cpu_v7_proc_fin
globl_equ cpu_ca8_reset, cpu_v7_reset
globl_equ cpu_ca8_do_idle, cpu_v7_do_idle
globl_equ cpu_ca8_dcache_clean_area, cpu_v7_dcache_clean_area
globl_equ cpu_ca8_set_pte_ext, cpu_v7_set_pte_ext
globl_equ cpu_ca8_switch_mm, cpu_v7_bpiall_switch_mm
globl_equ cpu_ca8_suspend_size, cpu_v7_suspend_size
#ifdef CONFIG_ARM_CPU_SUSPEND
globl_equ cpu_ca8_do_suspend, cpu_v7_do_suspend
globl_equ cpu_ca8_do_resume, cpu_v7_do_resume
#endif
define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_ca8_ibe

@ Cortex-A9 - needs more registers preserved across suspend/resume
@ and bpiall switch_mm for hardening
globl_equ cpu_ca9mp_proc_init, cpu_v7_proc_init
globl_equ cpu_ca9mp_proc_fin, cpu_v7_proc_fin
globl_equ cpu_ca9mp_reset, cpu_v7_reset
globl_equ cpu_ca9mp_do_idle, cpu_v7_do_idle
globl_equ cpu_ca9mp_dcache_clean_area, cpu_v7_dcache_clean_area
#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
globl_equ cpu_ca9mp_switch_mm, cpu_v7_bpiall_switch_mm
#else
globl_equ cpu_ca9mp_switch_mm, cpu_v7_switch_mm
#endif
globl_equ cpu_ca9mp_set_pte_ext, cpu_v7_set_pte_ext
define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init
#endif

@ Cortex-A15 - needs iciallu switch_mm for hardening
globl_equ cpu_ca15_proc_init, cpu_v7_proc_init
globl_equ cpu_ca15_proc_fin, cpu_v7_proc_fin
globl_equ cpu_ca15_reset, cpu_v7_reset
globl_equ cpu_ca15_do_idle, cpu_v7_do_idle
globl_equ cpu_ca15_dcache_clean_area, cpu_v7_dcache_clean_area
#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
globl_equ cpu_ca15_switch_mm, cpu_v7_iciallu_switch_mm
#else
globl_equ cpu_ca15_switch_mm, cpu_v7_switch_mm
#endif
globl_equ cpu_ca15_set_pte_ext, cpu_v7_set_pte_ext
globl_equ cpu_ca15_suspend_size, cpu_v7_suspend_size
globl_equ cpu_ca15_do_suspend, cpu_v7_do_suspend
globl_equ cpu_ca15_do_resume, cpu_v7_do_resume
define_processor_functions ca15, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_ca15_ibe
#ifdef CONFIG_CPU_PJ4B
define_processor_functions pj4b, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
#endif

@ -600,7 +674,7 @@ __v7_ca7mp_proc_info:
__v7_ca12mp_proc_info:
.long 0x410fc0d0
.long 0xff0ffff0
__v7_proc __v7_ca12mp_proc_info, __v7_ca12mp_setup
__v7_proc __v7_ca12mp_proc_info, __v7_ca12mp_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS
.size __v7_ca12mp_proc_info, . - __v7_ca12mp_proc_info

/*

@ -610,7 +684,7 @@ __v7_ca12mp_proc_info:
__v7_ca15mp_proc_info:
.long 0x410fc0f0
.long 0xff0ffff0
__v7_proc __v7_ca15mp_proc_info, __v7_ca15mp_setup
__v7_proc __v7_ca15mp_proc_info, __v7_ca15mp_setup, proc_fns = ca15_processor_functions
.size __v7_ca15mp_proc_info, . - __v7_ca15mp_proc_info

/*

@ -620,7 +694,7 @@ __v7_ca15mp_proc_info:
__v7_b15mp_proc_info:
.long 0x420f00f0
.long 0xff0ffff0
__v7_proc __v7_b15mp_proc_info, __v7_b15mp_setup
__v7_proc __v7_b15mp_proc_info, __v7_b15mp_setup, proc_fns = ca15_processor_functions
.size __v7_b15mp_proc_info, . - __v7_b15mp_proc_info

/*

@ -630,9 +704,25 @@ __v7_b15mp_proc_info:
__v7_ca17mp_proc_info:
.long 0x410fc0e0
.long 0xff0ffff0
__v7_proc __v7_ca17mp_proc_info, __v7_ca17mp_setup
__v7_proc __v7_ca17mp_proc_info, __v7_ca17mp_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS
.size __v7_ca17mp_proc_info, . - __v7_ca17mp_proc_info

/* ARM Ltd. Cortex A73 processor */
.type __v7_ca73_proc_info, #object
__v7_ca73_proc_info:
.long 0x410fd090
.long 0xff0ffff0
__v7_proc __v7_ca73_proc_info, __v7_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS
.size __v7_ca73_proc_info, . - __v7_ca73_proc_info

/* ARM Ltd. Cortex A75 processor */
.type __v7_ca75_proc_info, #object
__v7_ca75_proc_info:
.long 0x410fd0a0
.long 0xff0ffff0
__v7_proc __v7_ca75_proc_info, __v7_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS
.size __v7_ca75_proc_info, . - __v7_ca75_proc_info

/*
 * Qualcomm Inc. Krait processors.
 */

@ -554,12 +554,11 @@ void vfp_flush_hwstate(struct thread_info *thread)
 * Save the current VFP state into the provided structures and prepare
 * for entry into a new function (signal handler).
 */
int vfp_preserve_user_clear_hwstate(struct user_vfp __user *ufp,
struct user_vfp_exc __user *ufp_exc)
int vfp_preserve_user_clear_hwstate(struct user_vfp *ufp,
struct user_vfp_exc *ufp_exc)
{
struct thread_info *thread = current_thread_info();
struct vfp_hard_struct *hwstate = &thread->vfpstate.hard;
int err = 0;

/* Ensure that the saved hwstate is up-to-date. */
vfp_sync_hwstate(thread);

@ -568,22 +567,19 @@ int vfp_preserve_user_clear_hwstate(struct user_vfp __user *ufp,
 * Copy the floating point registers. There can be unused
 * registers see asm/hwcap.h for details.
 */
err |= __copy_to_user(&ufp->fpregs, &hwstate->fpregs,
sizeof(hwstate->fpregs));
memcpy(&ufp->fpregs, &hwstate->fpregs, sizeof(hwstate->fpregs));

/*
 * Copy the status and control register.
 */
__put_user_error(hwstate->fpscr, &ufp->fpscr, err);
ufp->fpscr = hwstate->fpscr;

/*
 * Copy the exception registers.
 */
__put_user_error(hwstate->fpexc, &ufp_exc->fpexc, err);
__put_user_error(hwstate->fpinst, &ufp_exc->fpinst, err);
__put_user_error(hwstate->fpinst2, &ufp_exc->fpinst2, err);

if (err)
return -EFAULT;
ufp_exc->fpexc = hwstate->fpexc;
ufp_exc->fpinst = hwstate->fpinst;
ufp_exc->fpinst2 = hwstate->fpinst2;

/* Ensure that VFP is disabled. */
vfp_flush_hwstate(thread);

@ -597,13 +593,11 @@ int vfp_preserve_user_clear_hwstate(struct user_vfp __user *ufp,
}

/* Sanitise and restore the current VFP state from the provided structures. */
int vfp_restore_user_hwstate(struct user_vfp __user *ufp,
struct user_vfp_exc __user *ufp_exc)
int vfp_restore_user_hwstate(struct user_vfp *ufp, struct user_vfp_exc *ufp_exc)
{
struct thread_info *thread = current_thread_info();
struct vfp_hard_struct *hwstate = &thread->vfpstate.hard;
unsigned long fpexc;
int err = 0;

/* Disable VFP to avoid corrupting the new thread state. */
vfp_flush_hwstate(thread);

@ -612,17 +606,16 @@ int vfp_restore_user_hwstate(struct user_vfp __user *ufp,
 * Copy the floating point registers. There can be unused
 * registers see asm/hwcap.h for details.
 */
err |= __copy_from_user(&hwstate->fpregs, &ufp->fpregs,
sizeof(hwstate->fpregs));
memcpy(&hwstate->fpregs, &ufp->fpregs, sizeof(hwstate->fpregs));
/*
 * Copy the status and control register.
 */
__get_user_error(hwstate->fpscr, &ufp->fpscr, err);
hwstate->fpscr = ufp->fpscr;

/*
 * Sanitise and restore the exception registers.
 */
__get_user_error(fpexc, &ufp_exc->fpexc, err);
fpexc = ufp_exc->fpexc;

/* Ensure the VFP is enabled. */
fpexc |= FPEXC_EN;

@ -631,10 +624,10 @@ int vfp_restore_user_hwstate(struct user_vfp __user *ufp,
fpexc &= ~(FPEXC_EX | FPEXC_FP2V);
hwstate->fpexc = fpexc;

__get_user_error(hwstate->fpinst, &ufp_exc->fpinst, err);
__get_user_error(hwstate->fpinst2, &ufp_exc->fpinst2, err);
hwstate->fpinst = ufp_exc->fpinst;
hwstate->fpinst2 = ufp_exc->fpinst2;

return err ? -EFAULT : 0;
return 0;
}

/*

@ -84,7 +84,7 @@
clock-names = "uartclk", "apb_pclk";
};

spi0: ssp@e1020000 {
spi0: spi@e1020000 {
status = "disabled";
compatible = "arm,pl022", "arm,primecell";
#gpio-cells = <2>;

@ -95,7 +95,7 @@
clock-names = "apb_pclk";
};

spi1: ssp@e1030000 {
spi1: spi@e1030000 {
status = "disabled";
compatible = "arm,pl022", "arm,primecell";
#gpio-cells = <2>;

@ -568,7 +568,6 @@ asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
arm64_check_cache_ecc(NULL);
}

die("Oops - bad mode", regs, 0);
local_irq_disable();
panic("bad mode");
}

@ -57,5 +57,7 @@ ENDPROC(__clear_user)
.section .fixup,"ax"
.align 2
9: mov x0, x2 // return the original size
ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
CONFIG_ARM64_PAN)
ret
.previous

@ -80,5 +80,7 @@ ENDPROC(__arch_copy_from_user)
strb wzr, [dst], #1 // zero remaining buffer space
cmp dst, end
b.lo 9999b
ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
CONFIG_ARM64_PAN)
ret
.previous

@ -76,5 +76,7 @@ ENDPROC(__copy_in_user)
.section .fixup,"ax"
.align 2
9998: sub x0, end, dst // bytes not copied
ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
CONFIG_ARM64_PAN)
ret
.previous

@ -74,5 +74,7 @@ ENDPROC(__arch_copy_to_user)
.section .fixup,"ax"
.align 2
9998: sub x0, end, dst // bytes not copied
ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
CONFIG_ARM64_PAN)
ret
.previous

@ -4,9 +4,8 @@
#include <bcm47xx_board.h>
#include <bcm47xx.h>

static void __init bcm47xx_workarounds_netgear_wnr3500l(void)
static void __init bcm47xx_workarounds_enable_usb_power(int usb_power)
{
const int usb_power = 12;
int err;

err = gpio_request_one(usb_power, GPIOF_OUT_INIT_HIGH, "usb_power");

@ -22,7 +21,10 @@ void __init bcm47xx_workarounds(void)

switch (board) {
case BCM47XX_BOARD_NETGEAR_WNR3500L:
bcm47xx_workarounds_netgear_wnr3500l();
bcm47xx_workarounds_enable_usb_power(12);
break;
case BCM47XX_BOARD_NETGEAR_WNDR3400_V3:
bcm47xx_workarounds_enable_usb_power(21);
break;
default:
/* No workaround(s) needed */

@ -84,7 +84,7 @@ void __init prom_init(void)
 * Here we will start up CPU1 in the background and ask it to
 * reconfigure itself then go back to sleep.
 */
memcpy((void *)0xa0000200, &bmips_smp_movevec, 0x20);
memcpy((void *)0xa0000200, bmips_smp_movevec, 0x20);
__sync();
set_c0_cause(C_SW0);
cpumask_set_cpu(1, &bmips_booted_mask);

@ -119,7 +119,7 @@
#define BCM6368_RESET_DSL 0
#define BCM6368_RESET_SAR SOFTRESET_6368_SAR_MASK
#define BCM6368_RESET_EPHY SOFTRESET_6368_EPHY_MASK
#define BCM6368_RESET_ENETSW 0
#define BCM6368_RESET_ENETSW SOFTRESET_6368_ENETSW_MASK
#define BCM6368_RESET_PCM SOFTRESET_6368_PCM_MASK
#define BCM6368_RESET_MPI SOFTRESET_6368_MPI_MASK
#define BCM6368_RESET_PCIE 0

@ -42,7 +42,7 @@

/* O32 stack has to be 8-byte aligned. */
static u64 o32_stk[4096];
#define O32_STK &o32_stk[sizeof(o32_stk)]
#define O32_STK (&o32_stk[ARRAY_SIZE(o32_stk)])

#define __PROM_O32(fun, arg) fun arg __asm__(#fun); \
__asm__(#fun " = call_o32")

@ -75,11 +75,11 @@ static inline int register_bmips_smp_ops(void)
#endif
}

extern char bmips_reset_nmi_vec;
extern char bmips_reset_nmi_vec_end;
extern char bmips_smp_movevec;
extern char bmips_smp_int_vec;
extern char bmips_smp_int_vec_end;
extern char bmips_reset_nmi_vec[];
extern char bmips_reset_nmi_vec_end[];
extern char bmips_smp_movevec[];
extern char bmips_smp_int_vec[];
extern char bmips_smp_int_vec_end[];

extern int bmips_smp_enabled;
extern int bmips_cpu_offset;
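Declaring the assembly-provided vectors as incomplete arrays rather than single
chars matters because the compiler may otherwise assume each symbol is a
distinct one-byte object and warn about, or miscompile, address arithmetic
across them; char name[] yields only an address with no size assumption. A
generic illustration of the pattern (the demo_* labels are hypothetical):

/* provided by assembly or a linker script, not by C */
extern char demo_vec_start[];
extern char demo_vec_end[];

static unsigned long demo_vec_len(void)
{
	/* plain address arithmetic; no object-size assumptions */
	return demo_vec_end - demo_vec_start;
}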
@ -12,11 +12,11 @@
#include <asm/stacktrace.h>

/* Maximum physical address we can use pages from */
#define KEXEC_SOURCE_MEMORY_LIMIT (0x20000000)
#define KEXEC_SOURCE_MEMORY_LIMIT (-1UL)
/* Maximum address we can reach in physical address mode */
#define KEXEC_DESTINATION_MEMORY_LIMIT (0x20000000)
#define KEXEC_DESTINATION_MEMORY_LIMIT (-1UL)
/* Maximum address we can use for the control code buffer */
#define KEXEC_CONTROL_MEMORY_LIMIT (0x20000000)
#define KEXEC_CONTROL_MEMORY_LIMIT (-1UL)
/* Reserve 3*4096 bytes for board-specific info */
#define KEXEC_CONTROL_PAGE_SIZE (4096 + 3*4096)

@ -451,10 +451,10 @@ static void bmips_wr_vec(unsigned long dst, char *start, char *end)

static inline void bmips_nmi_handler_setup(void)
{
bmips_wr_vec(BMIPS_NMI_RESET_VEC, &bmips_reset_nmi_vec,
&bmips_reset_nmi_vec_end);
bmips_wr_vec(BMIPS_WARM_RESTART_VEC, &bmips_smp_int_vec,
&bmips_smp_int_vec_end);
bmips_wr_vec(BMIPS_NMI_RESET_VEC, bmips_reset_nmi_vec,
bmips_reset_nmi_vec_end);
bmips_wr_vec(BMIPS_WARM_RESTART_VEC, bmips_smp_int_vec,
bmips_smp_int_vec_end);
}

struct reset_vec_info {

@ -961,12 +961,11 @@ void __init txx9_sramc_init(struct resource *r)
goto exit_put;
err = sysfs_create_bin_file(&dev->dev.kobj, &dev->bindata_attr);
if (err) {
device_unregister(&dev->dev);
iounmap(dev->base);
kfree(dev);
device_unregister(&dev->dev);
}
return;
exit_put:
iounmap(dev->base);
put_device(&dev->dev);
return;
}

@ -66,29 +66,35 @@ endif
UTS_MACHINE := $(OLDARCH)

ifeq ($(CONFIG_CPU_LITTLE_ENDIAN),y)
override CC += -mlittle-endian
ifneq ($(cc-name),clang)
override CC += -mno-strict-align
endif
override AS += -mlittle-endian
override LD += -EL
override CROSS32CC += -mlittle-endian
override CROSS32AS += -mlittle-endian
LDEMULATION := lppc
GNUTARGET := powerpcle
MULTIPLEWORD := -mno-multiple
KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-save-toc-indirect)
else
ifeq ($(call cc-option-yn,-mbig-endian),y)
override CC += -mbig-endian
override AS += -mbig-endian
endif
override LD += -EB
LDEMULATION := ppc
GNUTARGET := powerpc
MULTIPLEWORD := -mmultiple
endif

ifdef CONFIG_PPC64
cflags-$(CONFIG_CPU_BIG_ENDIAN) += $(call cc-option,-mabi=elfv1)
cflags-$(CONFIG_CPU_BIG_ENDIAN) += $(call cc-option,-mcall-aixdesc)
aflags-$(CONFIG_CPU_BIG_ENDIAN) += $(call cc-option,-mabi=elfv1)
aflags-$(CONFIG_CPU_LITTLE_ENDIAN) += -mabi=elfv2
endif

cflags-$(CONFIG_CPU_LITTLE_ENDIAN) += -mlittle-endian
cflags-$(CONFIG_CPU_BIG_ENDIAN) += $(call cc-option,-mbig-endian)
ifneq ($(cc-name),clang)
cflags-$(CONFIG_CPU_LITTLE_ENDIAN) += -mno-strict-align
endif

aflags-$(CONFIG_CPU_BIG_ENDIAN) += $(call cc-option,-mbig-endian)
aflags-$(CONFIG_CPU_LITTLE_ENDIAN) += -mlittle-endian

ifeq ($(HAS_BIARCH),y)
override AS += -a$(CONFIG_WORD_SIZE)
override LD += -m elf$(CONFIG_WORD_SIZE)$(LDEMULATION)

@ -121,7 +127,9 @@ ifeq ($(CONFIG_CPU_LITTLE_ENDIAN),y)
CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mabi=elfv2,$(call cc-option,-mcall-aixdesc))
AFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mabi=elfv2)
else
CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mabi=elfv1)
CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mcall-aixdesc)
AFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mabi=elfv1)
endif
CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mcmodel=medium,$(call cc-option,-mminimal-toc))
CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mno-pointers-to-nested-functions)

@ -212,6 +220,9 @@ cpu-as-$(CONFIG_E200) += -Wa,-me200
KBUILD_AFLAGS += $(cpu-as-y)
KBUILD_CFLAGS += $(cpu-as-y)

KBUILD_AFLAGS += $(aflags-y)
KBUILD_CFLAGS += $(cflags-y)

head-y := arch/powerpc/kernel/head_$(CONFIG_WORD_SIZE).o
head-$(CONFIG_8xx) := arch/powerpc/kernel/head_8xx.o
head-$(CONFIG_40x) := arch/powerpc/kernel/head_40x.o

@ -4,6 +4,8 @@
#include <types.h>
#include <string.h>

#define INT_MAX ((int)(~0U>>1))

#include "of.h"

typedef u32 uint32_t;

@ -161,6 +161,28 @@ case "$elfformat" in
elf32-powerpc) format=elf32ppc ;;
esac

ld_version()
{
# Poached from scripts/ld-version.sh, but we don't want to call that because
# this script (wrapper) is distributed separately from the kernel source.
# Extract linker version number from stdin and turn into single number.
awk '{
gsub(".*\\)", "");
gsub(".*version ", "");
gsub("-.*", "");
split($1,a, ".");
print a[1]*100000000 + a[2]*1000000 + a[3]*10000;
exit
}'
}

# Do not include PT_INTERP segment when linking pie. Non-pie linking
# just ignores this option.
LD_VERSION=$(${CROSS}ld --version | ld_version)
LD_NO_DL_MIN_VERSION=$(echo 2.26 | ld_version)
if [ "$LD_VERSION" -ge "$LD_NO_DL_MIN_VERSION" ] ; then
nodl="--no-dynamic-linker"
fi
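ld_version flattens "major.minor.patch" into one integer so a plain numeric
test can gate the flag: 2.26 becomes 2*100000000 + 26*1000000 + 0*10000 =
226000000. The same arithmetic in C, for illustration (a hypothetical helper,
not part of the wrapper script):

static long demo_ld_version(int major, int minor, int patch)
{
	/* mirrors the awk expression above */
	return major * 100000000L + minor * 1000000L + patch * 10000L;
}

/* demo_ld_version(2, 26, 0) == 226000000 */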

platformo=$object/"$platform".o
lds=$object/zImage.lds

@ -412,7 +434,7 @@ if [ "$platform" != "miboot" ]; then
if [ -n "$link_address" ] ; then
text_start="-Ttext $link_address"
fi
${CROSS}ld -m $format -T $lds $text_start $pie -o "$ofile" \
${CROSS}ld -m $format -T $lds $text_start $pie $nodl -o "$ofile" \
$platformo $tmp $object/wrapper.a
rm $tmp
fi

@ -15,7 +15,10 @@
/* Patch sites */
extern s32 patch__call_flush_count_cache;
extern s32 patch__flush_count_cache_return;
extern s32 patch__flush_link_stack_return;
extern s32 patch__call_kvm_flush_link_stack;

extern long flush_count_cache;
extern long kvm_flush_link_stack;

#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */

@ -81,6 +81,9 @@ static inline bool security_ftr_enabled(unsigned long feature)
// Software required to flush count cache on context switch
#define SEC_FTR_FLUSH_COUNT_CACHE 0x0000000000000400ull

// Software required to flush link stack on context switch
#define SEC_FTR_FLUSH_LINK_STACK 0x0000000000001000ull


// Features enabled by default
#define SEC_FTR_DEFAULT \

@ -367,7 +367,7 @@ int eeh_add_to_parent_pe(struct eeh_dev *edev)
while (parent) {
if (!(parent->type & EEH_PE_INVALID))
break;
parent->type &= ~(EEH_PE_INVALID | EEH_PE_KEEP);
parent->type &= ~EEH_PE_INVALID;
parent = parent->parent;
}

@ -477,6 +477,7 @@ flush_count_cache:
/* Save LR into r9 */
mflr r9

// Flush the link stack
.rept 64
bl .+4
.endr

@ -486,6 +487,11 @@ flush_count_cache:
.balign 32
/* Restore LR */
1: mtlr r9

// If we're just flushing the link stack, return here
3: nop
patch_site 3b patch__flush_link_stack_return

li r9,0x7fff
mtctr r9

@ -764,9 +764,9 @@ dma_addr_t iommu_map_page(struct device *dev, struct iommu_table *tbl,

vaddr = page_address(page) + offset;
uaddr = (unsigned long)vaddr;
npages = iommu_num_pages(uaddr, size, IOMMU_PAGE_SIZE(tbl));

if (tbl) {
npages = iommu_num_pages(uaddr, size, IOMMU_PAGE_SIZE(tbl));
align = 0;
if (tbl->it_page_shift < PAGE_SHIFT && size >= PAGE_SIZE &&
((unsigned long)vaddr & ~PAGE_MASK) == 0)

@ -967,6 +967,7 @@ int rtas_ibm_suspend_me(u64 handle)
goto out;
}

cpu_hotplug_disable();
stop_topology_update();

/* Call function on all CPUs. One of us will make the

@ -981,6 +982,7 @@ int rtas_ibm_suspend_me(u64 handle)
printk(KERN_ERR "Error doing global join\n");

start_topology_update();
cpu_hotplug_enable();

/* Take down CPUs not online prior to suspend */
cpuret = rtas_offline_cpus_mask(offline_mask);

arch/powerpc/kernel/security.c

@@ -25,11 +25,12 @@ enum count_cache_flush_type {
 	COUNT_CACHE_FLUSH_HW	= 0x4,
 };
 static enum count_cache_flush_type count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
+static bool link_stack_flush_enabled;
 
 bool barrier_nospec_enabled;
 static bool no_nospec;
 static bool btb_flush_enabled;
-#ifdef CONFIG_PPC_FSL_BOOK3E
+#if defined(CONFIG_PPC_FSL_BOOK3E) || defined(CONFIG_PPC_BOOK3S_64)
 static bool no_spectrev2;
 #endif
 

@@ -107,7 +108,7 @@ static __init int barrier_nospec_debugfs_init(void)
 device_initcall(barrier_nospec_debugfs_init);
 #endif /* CONFIG_DEBUG_FS */
 
-#ifdef CONFIG_PPC_FSL_BOOK3E
+#if defined(CONFIG_PPC_FSL_BOOK3E) || defined(CONFIG_PPC_BOOK3S_64)
 static int __init handle_nospectre_v2(char *p)
 {
 	no_spectrev2 = true;

@@ -115,6 +116,9 @@ static int __init handle_nospectre_v2(char *p)
 	return 0;
 }
 early_param("nospectre_v2", handle_nospectre_v2);
+#endif /* CONFIG_PPC_FSL_BOOK3E || CONFIG_PPC_BOOK3S_64 */
+
+#ifdef CONFIG_PPC_FSL_BOOK3E
 void setup_spectre_v2(void)
 {
 	if (no_spectrev2)

@@ -202,11 +206,19 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
 
 		if (ccd)
 			seq_buf_printf(&s, "Indirect branch cache disabled");
+
+		if (link_stack_flush_enabled)
+			seq_buf_printf(&s, ", Software link stack flush");
+
 	} else if (count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
 		seq_buf_printf(&s, "Mitigation: Software count cache flush");
 
 		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
 			seq_buf_printf(&s, " (hardware accelerated)");
+
+		if (link_stack_flush_enabled)
+			seq_buf_printf(&s, ", Software link stack flush");
+
 	} else if (btb_flush_enabled) {
 		seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
 	} else {

@@ -365,18 +377,49 @@ static __init int stf_barrier_debugfs_init(void)
 device_initcall(stf_barrier_debugfs_init);
 #endif /* CONFIG_DEBUG_FS */
 
+static void no_count_cache_flush(void)
+{
+	count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
+	pr_info("count-cache-flush: software flush disabled.\n");
+}
+
 static void toggle_count_cache_flush(bool enable)
 {
-	if (!enable || !security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE)) {
+	if (!security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE) &&
+	    !security_ftr_enabled(SEC_FTR_FLUSH_LINK_STACK))
+		enable = false;
+
+	if (!enable) {
 		patch_instruction_site(&patch__call_flush_count_cache, PPC_INST_NOP);
-		count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
-		pr_info("count-cache-flush: software flush disabled.\n");
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+		patch_instruction_site(&patch__call_kvm_flush_link_stack, PPC_INST_NOP);
+#endif
+		pr_info("link-stack-flush: software flush disabled.\n");
+		link_stack_flush_enabled = false;
+		no_count_cache_flush();
 		return;
 	}
 
+	// This enables the branch from _switch to flush_count_cache
 	patch_branch_site(&patch__call_flush_count_cache,
 			  (u64)&flush_count_cache, BRANCH_SET_LINK);
 
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+	// This enables the branch from guest_exit_cont to kvm_flush_link_stack
+	patch_branch_site(&patch__call_kvm_flush_link_stack,
+			  (u64)&kvm_flush_link_stack, BRANCH_SET_LINK);
+#endif
+
+	pr_info("link-stack-flush: software flush enabled.\n");
+	link_stack_flush_enabled = true;
+
+	// If we just need to flush the link stack, patch an early return
+	if (!security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE)) {
+		patch_instruction_site(&patch__flush_link_stack_return, PPC_INST_BLR);
+		no_count_cache_flush();
+		return;
+	}
+
 	if (!security_ftr_enabled(SEC_FTR_BCCTR_FLUSH_ASSIST)) {
 		count_cache_flush_type = COUNT_CACHE_FLUSH_SW;
 		pr_info("count-cache-flush: full software flush sequence enabled.\n");

@@ -390,7 +433,26 @@ static void toggle_count_cache_flush(bool enable)
 
 void setup_count_cache_flush(void)
 {
-	toggle_count_cache_flush(true);
+	bool enable = true;
+
+	if (no_spectrev2 || cpu_mitigations_off()) {
+		if (security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED) ||
+		    security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED))
+			pr_warn("Spectre v2 mitigations not fully under software control, can't disable\n");
+
+		enable = false;
+	}
+
+	/*
+	 * There's no firmware feature flag/hypervisor bit to tell us we need to
+	 * flush the link stack on context switch. So we set it here if we see
+	 * either of the Spectre v2 mitigations that aim to protect userspace.
+	 */
+	if (security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED) ||
+	    security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE))
+		security_ftr_set(SEC_FTR_FLUSH_LINK_STACK);
+
+	toggle_count_cache_flush(enable);
 }
 
 #ifdef CONFIG_DEBUG_FS

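Stripped of the runtime code patching, the security.c hunks above amount to a small decision table over the SEC_FTR_* bits. A stand-alone C sketch of that logic (flag names mirror the kernel's but this is not kernel code; patch_branch_site()/patch_instruction_site() are elided):

#include <stdbool.h>
#include <stdio.h>

struct features {
	bool flush_count_cache;    /* SEC_FTR_FLUSH_COUNT_CACHE */
	bool flush_link_stack;     /* SEC_FTR_FLUSH_LINK_STACK */
	bool count_cache_disabled; /* SEC_FTR_COUNT_CACHE_DISABLED */
};

static void toggle(const struct features *f, bool enable)
{
	if (!f->flush_count_cache && !f->flush_link_stack)
		enable = false;

	if (!enable) {
		printf("link-stack-flush: disabled, count-cache-flush: disabled\n");
		return;
	}

	printf("link-stack-flush: enabled\n");
	if (!f->flush_count_cache) {
		/* link stack only: the early return gets patched in */
		printf("count-cache-flush: disabled\n");
		return;
	}
	printf("count-cache-flush: enabled\n");
}

static void setup(struct features *f, bool no_spectrev2)
{
	bool enable = !no_spectrev2;

	/* No firmware bit exists for the link stack, so it is inferred
	 * from the Spectre v2 mitigations that protect userspace. */
	if (f->count_cache_disabled || f->flush_count_cache)
		f->flush_link_stack = true;

	toggle(f, enable);
}

int main(void)
{
	struct features f = { .flush_count_cache = true };

	setup(&f, false); /* both flushes enabled */
	setup(&f, true);  /* nospectre_v2: everything off */
	return 0;
}
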
arch/powerpc/kernel/vdso32/datapage.S

@@ -37,6 +37,7 @@ data_page_branch:
 	mtlr	r0
 	addi	r3, r3, __kernel_datapage_offset-data_page_branch
 	lwz	r0,0(r3)
+	.cfi_restore lr
 	add	r3,r0,r3
 	blr
 	.cfi_endproc

arch/powerpc/kernel/vdso32/gettimeofday.S

@@ -139,6 +139,7 @@ V_FUNCTION_BEGIN(__kernel_clock_gettime)
 	 */
99:
 	li	r0,__NR_clock_gettime
+	.cfi_restore lr
 	sc
 	blr
 	.cfi_endproc

arch/powerpc/kernel/vdso64/datapage.S

@@ -37,6 +37,7 @@ data_page_branch:
 	mtlr	r0
 	addi	r3, r3, __kernel_datapage_offset-data_page_branch
 	lwz	r0,0(r3)
+	.cfi_restore lr
 	add	r3,r0,r3
 	blr
 	.cfi_endproc

arch/powerpc/kernel/vdso64/gettimeofday.S

@@ -124,6 +124,7 @@ V_FUNCTION_BEGIN(__kernel_clock_gettime)
 	 */
99:
 	li	r0,__NR_clock_gettime
+	.cfi_restore lr
 	sc
 	blr
 	.cfi_endproc

arch/powerpc/kvm/book3s.c

@@ -70,8 +70,11 @@ void kvmppc_unfixup_split_real(struct kvm_vcpu *vcpu)
 {
 	if (vcpu->arch.hflags & BOOK3S_HFLAG_SPLIT_HACK) {
 		ulong pc = kvmppc_get_pc(vcpu);
+		ulong lr = kvmppc_get_lr(vcpu);
 		if ((pc & SPLIT_HACK_MASK) == SPLIT_HACK_OFFS)
 			kvmppc_set_pc(vcpu, pc & ~SPLIT_HACK_MASK);
+		if ((lr & SPLIT_HACK_MASK) == SPLIT_HACK_OFFS)
+			kvmppc_set_lr(vcpu, lr & ~SPLIT_HACK_MASK);
 		vcpu->arch.hflags &= ~BOOK3S_HFLAG_SPLIT_HACK;
 	}
 }

arch/powerpc/kvm/book3s_hv_rmhandlers.S

@@ -18,6 +18,7 @@
  */
 
 #include <asm/ppc_asm.h>
+#include <asm/code-patching-asm.h>
 #include <asm/kvm_asm.h>
 #include <asm/reg.h>
 #include <asm/mmu.h>

@@ -1169,6 +1170,10 @@ mc_cont:
 	bl	kvmhv_accumulate_time
 #endif
 
+	/* Possibly flush the link stack here. */
+1:	nop
+	patch_site 1b patch__call_kvm_flush_link_stack
+
 	mr	r3, r12
 	/* Increment exit count, poke other threads to exit */
 	bl	kvmhv_commence_exit

@@ -1564,6 +1569,21 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
 	mtlr	r0
 	blr
 
+.balign 32
+.global kvm_flush_link_stack
+kvm_flush_link_stack:
+	/* Save LR into r0 */
+	mflr	r0
+
+	/* Flush the link stack. On Power8 it's up to 32 entries in size. */
+	.rept 32
+	bl	.+4
+	.endr
+
+	/* Restore LR */
+	mtlr	r0
+	blr
+
 /*
  * Check whether an HDSI is an HPTE not found fault or something else.
  * If it is an HPTE not found fault that is due to the guest accessing

arch/powerpc/mm/slb.c

@@ -322,7 +322,7 @@ void slb_initialize(void)
 #endif
 	}
 
-	get_paca()->stab_rr = SLB_NUM_BOLTED;
+	get_paca()->stab_rr = SLB_NUM_BOLTED - 1;
 
 	lflags = SLB_VSID_KERNEL | linear_llp;
 	vflags = SLB_VSID_KERNEL | vmalloc_llp;

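The stab_rr change above looks like a pre-increment round-robin off-by-one: if the allocator advances the counter before using it, initializing to SLB_NUM_BOLTED makes the first allocation land one slot too far. A toy C sketch of that pattern (constants are illustrative, not the kernel's SLB layout):

#include <stdio.h>

#define NUM_BOLTED 3 /* slots 0..2 reserved ("bolted") */
#define NUM_SLOTS  8

static unsigned int rr;

static unsigned int alloc_slot(void)
{
	rr++;                    /* advance first, then use */
	if (rr >= NUM_SLOTS)
		rr = NUM_BOLTED; /* wrap past the bolted entries */
	return rr;
}

int main(void)
{
	rr = NUM_BOLTED - 1;     /* the fix: start one below */
	for (int i = 0; i < 6; i++)
		printf("got slot %u\n", alloc_slot());
	/* first allocation is slot NUM_BOLTED, not NUM_BOLTED + 1 */
	return 0;
}
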
arch/powerpc/platforms/ps3/os-area.c

@@ -664,7 +664,7 @@ static int update_flash_db(void)
 	db_set_64(db, &os_area_db_id_rtc_diff, saved_params.rtc_diff);
 
 	count = os_area_flash_write(db, sizeof(struct os_area_db), pos);
-	if (count < sizeof(struct os_area_db)) {
+	if (count < 0 || count < sizeof(struct os_area_db)) {
 		pr_debug("%s: os_area_flash_write failed %zd\n", __func__,
 			 count);
 		error = count < 0 ? count : -EIO;

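The explicit "count < 0 ||" above works around a signed/unsigned comparison: count carries a negative errno in a signed ssize_t, but sizeof() is size_t, so in "count < sizeof(...)" count is converted to unsigned and a negative value becomes huge. A small demo of the pitfall:

#include <stdio.h>
#include <sys/types.h>

static int check_buggy(ssize_t count, size_t want)
{
	return count < want; /* count is converted to size_t here */
}

static int check_fixed(ssize_t count, size_t want)
{
	return count < 0 || (size_t)count < want;
}

int main(void)
{
	ssize_t err = -5; /* e.g. a negative errno from a failed write */

	printf("buggy detects failure: %d\n", check_buggy(err, 64)); /* 0! */
	printf("fixed detects failure: %d\n", check_fixed(err, 64)); /* 1  */
	return 0;
}
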
arch/powerpc/platforms/pseries/dtl.c

@@ -150,7 +150,7 @@ static int dtl_start(struct dtl *dtl)
 
 	/* Register our dtl buffer with the hypervisor. The HV expects the
 	 * buffer size to be passed in the second word of the buffer */
-	((u32 *)dtl->buf)[1] = DISPATCH_LOG_BYTES;
+	((u32 *)dtl->buf)[1] = cpu_to_be32(DISPATCH_LOG_BYTES);
 
 	hwcpu = get_hard_smp_processor_id(dtl->cpu);
 	addr = __pa(dtl->buf);

@@ -185,7 +185,7 @@ static void dtl_stop(struct dtl *dtl)
 
 static u64 dtl_current_index(struct dtl *dtl)
 {
-	return lppaca_of(dtl->cpu).dtl_idx;
+	return be64_to_cpu(lppaca_of(dtl->cpu).dtl_idx);
 }
 #endif /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE */
 

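Both dtl.c hunks are endianness fixes: the buffer is shared with the hypervisor, which expects big-endian fields, so values must pass through cpu_to_be32()/be64_to_cpu() rather than being stored or read raw. A user-space demo of the same conversion using htonl()/ntohl(), the standard big-endian helpers:

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

int main(void)
{
	uint32_t bytes = 4096;         /* stand-in for DISPATCH_LOG_BYTES */
	uint32_t wire  = htonl(bytes); /* cpu_to_be32() equivalent */
	unsigned char *p = (unsigned char *)&wire;

	/* The big-endian byte layout is identical on every host. */
	printf("big-endian bytes: %02x %02x %02x %02x\n",
	       p[0], p[1], p[2], p[3]);
	printf("read back: %u\n", ntohl(wire)); /* be32_to_cpu() equivalent */
	return 0;
}
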
arch/s390/kernel/perf_cpum_sf.c

@@ -1616,14 +1616,17 @@ static int __init init_cpum_sampling_pmu(void)
 	}
 
 	sfdbg = debug_register(KMSG_COMPONENT, 2, 1, 80);
-	if (!sfdbg)
+	if (!sfdbg) {
+		pr_err("Registering for s390dbf failed\n");
 		return -ENOMEM;
+	}
 	debug_register_view(sfdbg, &debug_sprintf_view);
 
 	err = register_external_irq(EXT_IRQ_MEASURE_ALERT,
 				    cpumf_measurement_alert);
 	if (err) {
 		pr_cpumsf_err(RS_INIT_FAILURE_ALRT);
+		debug_unregister(sfdbg);
 		goto out;
 	}
 

@@ -1632,6 +1635,7 @@ static int __init init_cpum_sampling_pmu(void)
 		pr_cpumsf_err(RS_INIT_FAILURE_PERF);
 		unregister_external_irq(EXT_IRQ_MEASURE_ALERT,
 					cpumf_measurement_alert);
+		debug_unregister(sfdbg);
 		goto out;
 	}
 	perf_cpu_notifier(cpumf_pmu_notifier);

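The pattern these hunks enforce is the usual one for init-function error paths: every failure after a successful debug_register() must undo it, or the handle leaks. A minimal sketch of the shape (the resource functions are stand-ins, not the s390 API):

#include <stdio.h>
#include <stdlib.h>

static void *debug_register(void)     { return malloc(1); }
static void debug_unregister(void *h) { free(h); }
static int register_irq(void)         { return -1; /* simulate failure */ }

static int init(void)
{
	void *dbg = debug_register();
	int err;

	if (!dbg) {
		fprintf(stderr, "Registering debug facility failed\n");
		return -1;
	}

	err = register_irq();
	if (err) {
		debug_unregister(dbg); /* the leak the patch plugs */
		return err;
	}
	return 0;
}

int main(void)
{
	printf("init: %d\n", init());
	return 0;
}
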
arch/s390/mm/cmm.c

@@ -306,16 +306,16 @@ static int cmm_timeout_handler(struct ctl_table *ctl, int write,
 	}
 
 	if (write) {
-		len = *lenp;
-		if (copy_from_user(buf, buffer,
-				   len > sizeof(buf) ? sizeof(buf) : len))
+		len = min(*lenp, sizeof(buf));
+		if (copy_from_user(buf, buffer, len))
 			return -EFAULT;
-		buf[sizeof(buf) - 1] = '\0';
+		buf[len - 1] = '\0';
 		cmm_skip_blanks(buf, &p);
 		nr = simple_strtoul(p, &p, 0);
 		cmm_skip_blanks(p, &p);
 		seconds = simple_strtoul(p, &p, 0);
 		cmm_set_timeout(nr, seconds);
+		*ppos += *lenp;
 	} else {
 		len = sprintf(buf, "%ld %ld\n",
 			      cmm_timeout_pages, cmm_timeout_seconds);

@@ -323,9 +323,9 @@ static int cmm_timeout_handler(struct ctl_table *ctl, int write,
 		len = *lenp;
 		if (copy_to_user(buffer, buf, len))
 			return -EFAULT;
+		*lenp = len;
+		*ppos += len;
 	}
-	*lenp = len;
-	*ppos += len;
 	return 0;
 }
 

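The write-path change above is the core of the fix: clamp the copy length up front with min(), then NUL-terminate at the clamped length, so neither the copy nor the termination touches bytes outside "buf" and no uninitialized stack bytes get parsed. A user-space rendering of the same pattern, with memcpy() standing in for copy_from_user():

#include <stdio.h>
#include <string.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

static void handle_write(const char *user, size_t lenp)
{
	char buf[16];
	size_t len = MIN(lenp, sizeof(buf)); /* the fix: clamp first */

	memcpy(buf, user, len);              /* copy_from_user() stand-in */
	buf[len - 1] = '\0';                 /* terminate at actual length */
	printf("parsed: \"%s\"\n", buf);
}

int main(void)
{
	handle_write("128 30\n", 7);
	handle_write("this input is far too long to fit", 33); /* clamped */
	return 0;
}
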
arch/sparc/include/asm/cmpxchg_64.h

@@ -40,7 +40,12 @@ static inline unsigned long xchg64(__volatile__ unsigned long *m, unsigned long
 	return val;
 }
 
-#define xchg(ptr,x) ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
+#define xchg(ptr,x)							\
+({	__typeof__(*(ptr)) __ret;					\
+	__ret = (__typeof__(*(ptr)))					\
+		__xchg((unsigned long)(x), (ptr), sizeof(*(ptr)));	\
+	__ret;								\
+})
 
 void __xchg_called_with_bad_pointer(void);
 

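The rewrite above converts xchg() to a GNU C statement expression. As I read it, the practical effect is that callers which discard the result no longer trip "value computed is not used" warnings, while the macro still yields a value of the pointed-to type. A compilable demo of the same macro shape, with the GCC __sync_lock_test_and_set() builtin standing in for the kernel's __xchg():

#include <stdio.h>

#define xchg(ptr, x)							\
({	__typeof__(*(ptr)) __ret;					\
	__ret = (__typeof__(*(ptr)))					\
		__sync_lock_test_and_set((ptr), (x));			\
	__ret;								\
})

int main(void)
{
	long v = 1;

	xchg(&v, 2);            /* result ignored: no warning */
	long old = xchg(&v, 3); /* result used, correctly typed */
	printf("old=%ld now=%ld\n", old, v);
	return 0;
}
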
arch/sparc/include/asm/parport.h

@@ -20,6 +20,7 @@
  */
 #define HAS_DMA
 
+#ifdef CONFIG_PARPORT_PC_FIFO
 static DEFINE_SPINLOCK(dma_spin_lock);
 
 #define claim_dma_lock() \

@@ -30,6 +31,7 @@ static DEFINE_SPINLOCK(dma_spin_lock);
 
 #define release_dma_lock(__flags) \
 	spin_unlock_irqrestore(&dma_spin_lock, __flags);
+#endif
 
 static struct sparc_ebus_info {
 	struct ebus_dma_info info;

arch/um/drivers/line.c

@@ -260,7 +260,7 @@ static irqreturn_t line_write_interrupt(int irq, void *data)
 	if (err == 0) {
 		spin_unlock(&line->lock);
 		return IRQ_NONE;
-	} else if (err < 0) {
+	} else if ((err < 0) && (err != -EAGAIN)) {
 		line->head = line->buffer;
 		line->tail = line->buffer;
 	}

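The line.c fix stops treating -EAGAIN as a hard error: on a non-blocking descriptor it means "try again later", so the buffered data must not be discarded. A small sketch of those semantics:

#include <errno.h>
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

static int flush_buffer(int fd, const char *buf, size_t len)
{
	ssize_t n = write(fd, buf, len);

	if (n >= 0)
		return 0;  /* wrote some or all of it */
	if (errno == EAGAIN)
		return 1;  /* keep the buffer, retry later */
	return -1;         /* real error: drop the buffer */
}

int main(void)
{
	/* stdout made non-blocking just to exercise the code path */
	fcntl(STDOUT_FILENO, F_SETFL, O_NONBLOCK);
	printf("flush: %d\n", flush_buffer(STDOUT_FILENO, "hi\n", 3));
	return 0;
}
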
arch/x86/Kconfig

@@ -1718,6 +1718,51 @@ config X86_INTEL_MPX
 
 	  If unsure, say N.
 
+choice
+	prompt "TSX enable mode"
+	depends on CPU_SUP_INTEL
+	default X86_INTEL_TSX_MODE_OFF
+	help
+	  Intel's TSX (Transactional Synchronization Extensions) feature
+	  allows to optimize locking protocols through lock elision which
+	  can lead to a noticeable performance boost.
+
+	  On the other hand it has been shown that TSX can be exploited
+	  to form side channel attacks (e.g. TAA) and chances are there
+	  will be more of those attacks discovered in the future.
+
+	  Therefore TSX is not enabled by default (aka tsx=off). An admin
+	  might override this decision by tsx=on the command line parameter.
+	  Even with TSX enabled, the kernel will attempt to enable the best
+	  possible TAA mitigation setting depending on the microcode available
+	  for the particular machine.
+
+	  This option allows to set the default tsx mode between tsx=on, =off
+	  and =auto. See Documentation/kernel-parameters.txt for more
+	  details.
+
+	  Say off if not sure, auto if TSX is in use but it should be used on safe
+	  platforms or on if TSX is in use and the security aspect of tsx is not
+	  relevant.
+
+config X86_INTEL_TSX_MODE_OFF
+	bool "off"
+	help
+	  TSX is disabled if possible - equals to tsx=off command line parameter.
+
+config X86_INTEL_TSX_MODE_ON
+	bool "on"
+	help
+	  TSX is always enabled on TSX capable HW - equals the tsx=on command
+	  line parameter.
+
+config X86_INTEL_TSX_MODE_AUTO
+	bool "auto"
+	help
+	  TSX is enabled on TSX capable HW that is believed to be safe against
+	  side channel attacks- equals the tsx=auto command line parameter.
+endchoice
+
 config EFI
 	bool "EFI runtime service support"
 	depends on ACPI

@@ -2504,8 +2549,7 @@ config OLPC
 
 config OLPC_XO1_PM
 	bool "OLPC XO-1 Power Management"
-	depends on OLPC && MFD_CS5535 && PM_SLEEP
-	select MFD_CORE
+	depends on OLPC && MFD_CS5535=y && PM_SLEEP
 	---help---
 	  Add support for poweroff and suspend of the OLPC XO-1 laptop.
 

arch/x86/include/asm/atomic.h

@@ -49,7 +49,7 @@ static __always_inline void atomic_add(int i, atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "addl %1,%0"
 		     : "+m" (v->counter)
-		     : "ir" (i));
+		     : "ir" (i) : "memory");
 }
 
 /**

@@ -63,7 +63,7 @@ static __always_inline void atomic_sub(int i, atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "subl %1,%0"
 		     : "+m" (v->counter)
-		     : "ir" (i));
+		     : "ir" (i) : "memory");
 }
 
 /**

@@ -89,7 +89,7 @@ static __always_inline int atomic_sub_and_test(int i, atomic_t *v)
 static __always_inline void atomic_inc(atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "incl %0"
-		     : "+m" (v->counter));
+		     : "+m" (v->counter) :: "memory");
 }
 
 /**

@@ -101,7 +101,7 @@ static __always_inline void atomic_inc(atomic_t *v)
 static __always_inline void atomic_dec(atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "decl %0"
-		     : "+m" (v->counter));
+		     : "+m" (v->counter) :: "memory");
 }
 
 /**

arch/x86/include/asm/atomic64_64.h

@@ -44,7 +44,7 @@ static __always_inline void atomic64_add(long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "addq %1,%0"
 		     : "=m" (v->counter)
-		     : "er" (i), "m" (v->counter));
+		     : "er" (i), "m" (v->counter) : "memory");
 }
 
 /**

@@ -58,7 +58,7 @@ static inline void atomic64_sub(long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "subq %1,%0"
 		     : "=m" (v->counter)
-		     : "er" (i), "m" (v->counter));
+		     : "er" (i), "m" (v->counter) : "memory");
 }
 
 /**

@@ -85,7 +85,7 @@ static __always_inline void atomic64_inc(atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "incq %0"
 		     : "=m" (v->counter)
-		     : "m" (v->counter));
+		     : "m" (v->counter) : "memory");
 }
 
 /**

@@ -98,7 +98,7 @@ static __always_inline void atomic64_dec(atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "decq %0"
 		     : "=m" (v->counter)
-		     : "m" (v->counter));
+		     : "m" (v->counter) : "memory");
 }
 
 /**

arch/x86/include/asm/barrier.h

@@ -116,7 +116,7 @@ do { \
 #endif
 
 /* Atomic operations are already serializing on x86 */
-#define smp_mb__before_atomic()	barrier()
-#define smp_mb__after_atomic()	barrier()
+#define smp_mb__before_atomic()	do { } while (0)
+#define smp_mb__after_atomic()	do { } while (0)
 
 #endif /* _ASM_X86_BARRIER_H */

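The atomic.h, atomic64_64.h and barrier.h hunks belong together: once every LOCK-prefixed op carries a "memory" clobber, each atomic doubles as a compiler barrier, which is what lets smp_mb__before_atomic()/smp_mb__after_atomic() become no-ops. A minimal x86-only demo of what the added clobber does (the clobber prevents the compiler from caching stores in registers or reordering memory accesses across the asm):

#include <stdio.h>

static int counter;

static inline void atomic_inc_barrier(int *v)
{
	asm volatile("lock incl %0"
		     : "+m" (*v)
		     :
		     : "memory"); /* compiler barrier, as in the patch */
}

int main(void)
{
	counter = 41;
	atomic_inc_barrier(&counter);
	printf("counter=%d\n", counter);
	return 0;
}
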
arch/x86/include/asm/cpufeatures.h

@@ -340,5 +340,7 @@
 #define X86_BUG_MDS		X86_BUG(19) /* CPU is affected by Microarchitectural data sampling */
 #define X86_BUG_MSBDS_ONLY	X86_BUG(20) /* CPU is only affected by the MSDBS variant of BUG_MDS */
 #define X86_BUG_SWAPGS		X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
+#define X86_BUG_TAA		X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
+#define X86_BUG_ITLB_MULTIHIT	X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
 
 #endif /* _ASM_X86_CPUFEATURES_H */

arch/x86/include/asm/insn.h

@@ -198,4 +198,22 @@ static inline int insn_offset_immediate(struct insn *insn)
 	return insn_offset_displacement(insn) + insn->displacement.nbytes;
 }
 
+#define POP_SS_OPCODE 0x1f
+#define MOV_SREG_OPCODE 0x8e
+
+/*
+ * Intel SDM Vol.3A 6.8.3 states;
+ * "Any single-step trap that would be delivered following the MOV to SS
+ * instruction or POP to SS instruction (because EFLAGS.TF is 1) is
+ * suppressed."
+ * This function returns true if @insn is MOV SS or POP SS. On these
+ * instructions, single stepping is suppressed.
+ */
+static inline int insn_masking_exception(struct insn *insn)
+{
+	return insn->opcode.bytes[0] == POP_SS_OPCODE ||
+		(insn->opcode.bytes[0] == MOV_SREG_OPCODE &&
+		 X86_MODRM_REG(insn->modrm.bytes[0]) == 2);
+}
+
 #endif /* _ASM_X86_INSN_H */

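A stand-alone version of the classification the new helper performs: the first opcode byte is POP SS (0x1f), or MOV to a segment register (0x8e) whose ModRM reg field selects SS (value 2). The decoder struct is simplified away here; only the byte-level logic is kept:

#include <stdio.h>

#define POP_SS_OPCODE   0x1f
#define MOV_SREG_OPCODE 0x8e
#define MODRM_REG(m)    (((m) >> 3) & 0x7)

static int insn_masking_exception(unsigned char opcode, unsigned char modrm)
{
	return opcode == POP_SS_OPCODE ||
	       (opcode == MOV_SREG_OPCODE && MODRM_REG(modrm) == 2);
}

int main(void)
{
	/* "mov ss, ax" encodes as 8e d0: ModRM 0xd0 has reg field 2 (SS) */
	printf("pop ss:    %d\n", insn_masking_exception(0x1f, 0x00));
	printf("mov ss,ax: %d\n", insn_masking_exception(0x8e, 0xd0));
	printf("mov ds,ax: %d\n", insn_masking_exception(0x8e, 0xd8));
	return 0;
}
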
arch/x86/include/asm/intel-family.h

@@ -5,7 +5,7 @@
  * "Big Core" Processors (Branded as Core, Xeon, etc...)
  *
  * The "_X" parts are generally the EP and EX Xeons, or the
- * "Extreme" ones, like Broadwell-E.
+ * "Extreme" ones, like Broadwell-E, or Atom microserver.
  *
  * Things ending in "2" are usually because we have no better
  * name for them. There's no processor called "WESTMERE2".

@@ -67,6 +67,7 @@
 #define INTEL_FAM6_ATOM_GOLDMONT	0x5C /* Apollo Lake */
 #define INTEL_FAM6_ATOM_GOLDMONT_X	0x5F /* Denverton */
 #define INTEL_FAM6_ATOM_GOLDMONT_PLUS	0x7A /* Gemini Lake */
+#define INTEL_FAM6_ATOM_TREMONT_X	0x86 /* Jacobsville */
 
 /* Xeon Phi */
 

Some files were not shown because too many files have changed in this diff.