Merge "Merge android-4.4.104 (8bc4213) into msm-4.4"

Linux Build Service Account 2018-01-11 04:14:45 -08:00 committed by Gerrit - the friendly Code Review server
commit e525ef12a4
369 changed files with 9279 additions and 2112 deletions


@@ -437,6 +437,8 @@ sysrq.txt
 	- info on the magic SysRq key.
 target/
 	- directory with info on generating TCM v4 fabric .ko modules
+tee.txt
+	- info on the TEE subsystem and drivers
 this_cpu_ops.txt
 	- List rationale behind and the way to use this_cpu operations.
 thermal/


@@ -51,6 +51,18 @@ Description:
 		Controls the dirty page count condition for the in-place-update
 		policies.
 
+What:		/sys/fs/f2fs/<disk>/min_hot_blocks
+Date:		March 2017
+Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
+Description:
+		Controls the dirty page count condition for redefining hot data.
+
+What:		/sys/fs/f2fs/<disk>/min_ssr_sections
+Date:		October 2017
+Contact:	"Chao Yu" <yuchao0@huawei.com>
+Description:
+		Controls the free section threshold to trigger SSR allocation.
+
 What:		/sys/fs/f2fs/<disk>/max_small_discards
 Date:		November 2013
 Contact:	"Jaegeuk Kim" <jaegeuk.kim@samsung.com>
@@ -96,6 +108,18 @@ Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
 Description:
 		Controls the checkpoint timing.
 
+What:		/sys/fs/f2fs/<disk>/idle_interval
+Date:		January 2016
+Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
+Description:
+		Controls the idle timing.
+
+What:		/sys/fs/f2fs/<disk>/iostat_enable
+Date:		August 2017
+Contact:	"Chao Yu" <yuchao0@huawei.com>
+Description:
+		Controls to enable/disable IO stat.
+
 What:		/sys/fs/f2fs/<disk>/ra_nid_pages
 Date:		October 2015
 Contact:	"Chao Yu" <chao2.yu@samsung.com>
@@ -116,6 +140,12 @@ Contact:	"Shuoran Liu" <liushuoran@huawei.com>
 Description:
 		Shows total written kbytes issued to disk.
 
+What:		/sys/fs/f2fs/<disk>/feature
+Date:		July 2017
+Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
+Description:
+		Shows all enabled features in current device.
+
 What:		/sys/fs/f2fs/<disk>/inject_rate
 Date:		May 2016
 Contact:	"Sheng Yong" <shengyong1@huawei.com>
@@ -132,7 +162,18 @@ What:		/sys/fs/f2fs/<disk>/reserved_blocks
 Date:		June 2017
 Contact:	"Chao Yu" <yuchao0@huawei.com>
 Description:
-		Controls current reserved blocks in system.
+		Controls target reserved blocks in system, the threshold
+		is soft, it could exceed current available user space.
+
+What:		/sys/fs/f2fs/<disk>/current_reserved_blocks
+Date:		October 2017
+Contact:	"Yunlong Song" <yunlong.song@huawei.com>
+Contact:	"Chao Yu" <yuchao0@huawei.com>
+Description:
+		Shows current reserved blocks in system, it may be temporarily
+		smaller than target_reserved_blocks, but will gradually
+		increase to target_reserved_blocks when more free blocks are
+		freed by user later.
 
 What:		/sys/fs/f2fs/<disk>/gc_urgent
 Date:		August 2017
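These attributes are plain sysfs files. As a rough, hypothetical sketch of inspecting them from user space (the `sdX` device name is a placeholder, and which knobs exist depends on kernel version):

```python
# Hypothetical sketch: read the f2fs per-disk sysfs knobs documented above.
# "sdX" is a placeholder; on a real system substitute the block device
# backing your f2fs mount (e.g. /sys/fs/f2fs/sda1).
from pathlib import Path

base = Path("/sys/fs/f2fs/sdX")
knobs = ["min_hot_blocks", "min_ssr_sections", "iostat_enable",
         "reserved_blocks", "current_reserved_blocks"]

readings = {}
for knob in knobs:
    attr = base / knob
    # A missing attribute just means this kernel/mount doesn't expose it.
    readings[knob] = attr.read_text().strip() if attr.is_file() else None

for knob, value in readings.items():
    print(f"{knob} = {value if value is not None else '<not exposed>'}")

# Writable knobs take a plain integer, e.g. enabling IO statistics:
#   (base / "iostat_enable").write_text("1")
```

Writes require root and take a bare integer, mirroring how these tunables are exercised from a shell with `echo 1 > iostat_enable`.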


@@ -0,0 +1,31 @@
OP-TEE Device Tree Bindings

OP-TEE is a piece of software using hardware features to provide a Trusted
Execution Environment. The security can be provided with ARM TrustZone, but
also by virtualization or a separate chip.

We're using "linaro" as the first part of the compatible property for
the reference implementation maintained by Linaro.

* OP-TEE based on ARM TrustZone required properties:

- compatible : should contain "linaro,optee-tz"

- method : The method of calling the OP-TEE Trusted OS. Permitted
           values are:

           "smc" : SMC #0, with the register assignments specified
                   in drivers/tee/optee/optee_smc.h

           "hvc" : HVC #0, with the register assignments specified
                   in drivers/tee/optee/optee_smc.h

Example:
	firmware {
		optee {
			compatible = "linaro,optee-tz";
			method = "smc";
		};
	};


@@ -133,6 +133,7 @@ lacie	LaCie
 lantiq	Lantiq Semiconductor
 lenovo	Lenovo Group Ltd.
 lg	LG Corporation
+linaro	Linaro Limited
 linux	Linux-specific binding
 lsi	LSI Corp. (LSI Logic)
 lltc	Linear Technology Corporation


@@ -307,6 +307,7 @@ Code  Seq#(hex)	Include File		Comments
 0xA3	80-8F	Port ACL		in development:
 					<mailto:tlewis@mindspring.com>
 0xA3	90-9F	linux/dtlk.h
+0xA4	00-1F	uapi/linux/tee.h	Generic TEE subsystem
 0xAA	00-3F	linux/uapi/linux/userfaultfd.h
 0xAB	00-1F	linux/nbd.h
 0xAC	00-1F	linux/raw.h

Documentation/tee.txt (new file, 118 lines)

@@ -0,0 +1,118 @@
TEE subsystem

This document describes the TEE subsystem in Linux.

A TEE (Trusted Execution Environment) is a trusted OS running in some
secure environment, for example, TrustZone on ARM CPUs, or a separate
secure co-processor etc. A TEE driver handles the details needed to
communicate with the TEE.

This subsystem deals with:

- Registration of TEE drivers

- Managing shared memory between Linux and the TEE

- Providing a generic API to the TEE

The TEE interface
=================

include/uapi/linux/tee.h defines the generic interface to a TEE.

User space (the client) connects to the driver by opening /dev/tee[0-9]* or
/dev/teepriv[0-9]*.

- TEE_IOC_SHM_ALLOC allocates shared memory and returns a file descriptor
  which user space can mmap. When user space doesn't need the file
  descriptor any more, it should be closed. When shared memory isn't needed
  any longer it should be unmapped with munmap() to allow the reuse of
  memory.

- TEE_IOC_VERSION lets user space know which TEE this driver handles and
  its capabilities.

- TEE_IOC_OPEN_SESSION opens a new session to a Trusted Application.

- TEE_IOC_INVOKE invokes a function in a Trusted Application.

- TEE_IOC_CANCEL may cancel an ongoing TEE_IOC_OPEN_SESSION or TEE_IOC_INVOKE.

- TEE_IOC_CLOSE_SESSION closes a session to a Trusted Application.

There are two classes of clients, normal clients and supplicants. The latter is
a helper process for the TEE to access resources in Linux, for example file
system access. A normal client opens /dev/tee[0-9]* and a supplicant opens
/dev/teepriv[0-9].

Much of the communication between clients and the TEE is opaque to the
driver. The main job for the driver is to receive requests from the
clients, forward them to the TEE and send back the results. In the case of
supplicants the communication goes in the other direction, the TEE sends
requests to the supplicant which then sends back the result.

OP-TEE driver
=============

The OP-TEE driver handles OP-TEE [1] based TEEs. Currently it is only the ARM
TrustZone based OP-TEE solution that is supported.

The lowest level of communication with OP-TEE builds on the ARM SMC Calling
Convention (SMCCC) [2], which is the foundation for OP-TEE's SMC interface
[3] used internally by the driver. Stacked on top of that is the OP-TEE
Message Protocol [4].

The OP-TEE SMC interface provides the basic functions required by SMCCC and
some additional functions specific to OP-TEE. The most interesting functions
are:

- OPTEE_SMC_FUNCID_CALLS_UID (part of SMCCC) returns the version information
  which is then returned by TEE_IOC_VERSION

- OPTEE_SMC_CALL_GET_OS_UUID returns the particular OP-TEE implementation, used
  to tell, for instance, a TrustZone OP-TEE apart from an OP-TEE running on a
  separate secure co-processor.

- OPTEE_SMC_CALL_WITH_ARG drives the OP-TEE message protocol

- OPTEE_SMC_GET_SHM_CONFIG lets the driver and OP-TEE agree on which memory
  range to use for shared memory between Linux and OP-TEE.

The GlobalPlatform TEE Client API [5] is implemented on top of the generic
TEE API.

Picture of the relationship between the different components in the
OP-TEE architecture:

   User space                  Kernel                   Secure world
   ~~~~~~~~~~                  ~~~~~~                   ~~~~~~~~~~~~
   +--------+                                        +-------------+
   | Client |                                        | Trusted     |
   +--------+                                        | Application |
      /\                                             +-------------+
      ||                  +----------+                     /\
      ||                  |tee-      |                     ||
      ||                  |supplicant|                     \/
      ||                  +----------+               +-------------+
      \/                      /\                     | TEE Internal|
   +-------+                  ||                     | API         |
   + TEE   |                  ||    +--------+--------+ +-------------+
   | Client|                  ||    | TEE    | OP-TEE | | OP-TEE     |
   | API   |                  \/    | subsys | driver | | Trusted OS |
   +-------+----------------+----+-------+----+-----------+-------------+
   |      Generic TEE API        |     |      OP-TEE MSG               |
   |      IOCTL (TEE_IOC_*)      |     |   SMCCC (OPTEE_SMC_CALL_*)    |
   +-----------------------------+     +------------------------------+

RPCs (Remote Procedure Calls) are requests from secure world to the kernel
driver or tee-supplicant. An RPC is identified by a special range of SMCCC
return values from OPTEE_SMC_CALL_WITH_ARG. RPC messages which are intended
for the kernel are handled by the kernel driver. Other RPC messages will be
forwarded to tee-supplicant without further involvement of the driver, except
switching of the shared memory buffer representation.

References:

[1] https://github.com/OP-TEE/optee_os

[2] http://infocenter.arm.com/help/topic/com.arm.doc.den0028a/index.html

[3] drivers/tee/optee/optee_smc.h

[4] drivers/tee/optee/optee_msg.h

[5] http://www.globalplatform.org/specificationsdevice.asp look for
    "TEE Client API Specification v1.0" and click download.
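The TEE_IOC_* commands above pair with the 0xA4 ioctl magic registered in ioctl-number.txt. As a hedged illustration of how such request numbers are composed (mirroring the <asm-generic/ioctl.h> bit layout; the command number and payload size below are hypothetical, not copied from tee.h):

```python
# Sketch: how a Linux ioctl request number in the TEE range is composed.
# The 0xA4 magic and the 00-1F sequence range come from ioctl-number.txt;
# the direction, command number, and size here are illustrative only.
_IOC_NRBITS, _IOC_TYPEBITS, _IOC_SIZEBITS = 8, 8, 14
_IOC_NRSHIFT = 0
_IOC_TYPESHIFT = _IOC_NRSHIFT + _IOC_NRBITS        # 8
_IOC_SIZESHIFT = _IOC_TYPESHIFT + _IOC_TYPEBITS    # 16
_IOC_DIRSHIFT = _IOC_SIZESHIFT + _IOC_SIZEBITS     # 30
_IOC_NONE, _IOC_WRITE, _IOC_READ = 0, 1, 2

def _ioc(direction, magic, nr, size):
    """Pack an ioctl request number the way <asm-generic/ioctl.h> does."""
    return (direction << _IOC_DIRSHIFT) | (size << _IOC_SIZESHIFT) | \
           (magic << _IOC_TYPESHIFT) | (nr << _IOC_NRSHIFT)

TEE_IOC_MAGIC = 0xA4   # from ioctl-number.txt: 0xA4 00-1F, uapi/linux/tee.h

# Hypothetical command in the TEE range: read-only, nr 0, 16-byte payload.
request = _ioc(_IOC_READ, TEE_IOC_MAGIC, 0, 16)
print(hex(request))  # → 0x8010a400
```

User space would pass such a request number to ioctl(2) on an open /dev/tee[0-9]* file descriptor; the authoritative encodings live in include/uapi/linux/tee.h.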


@@ -7955,6 +7955,11 @@ F:	arch/*/oprofile/
 F:	drivers/oprofile/
 F:	include/linux/oprofile.h
 
+OP-TEE DRIVER
+M:	Jens Wiklander <jens.wiklander@linaro.org>
+S:	Maintained
+F:	drivers/tee/optee/
+
 ORACLE CLUSTER FILESYSTEM 2 (OCFS2)
 M:	Mark Fasheh <mfasheh@suse.com>
 M:	Joel Becker <jlbec@evilplan.org>
@@ -9382,6 +9387,14 @@ F:	drivers/hwtracing/stm/
 F:	include/linux/stm.h
 F:	include/uapi/linux/stm.h
 
+TEE SUBSYSTEM
+M:	Jens Wiklander <jens.wiklander@linaro.org>
+S:	Maintained
+F:	include/linux/tee_drv.h
+F:	include/uapi/linux/tee.h
+F:	drivers/tee/
+F:	Documentation/tee.txt
+
 THUNDERBOLT DRIVER
 M:	Andreas Noever <andreas.noever@gmail.com>
 S:	Maintained


@@ -1,6 +1,6 @@
 VERSION = 4
 PATCHLEVEL = 4
-SUBLEVEL = 97
+SUBLEVEL = 104
 EXTRAVERSION =
 NAME = Blurry Fish Butt


@@ -142,10 +142,11 @@
 	};
 
 	scm_conf: scm_conf@0 {
-		compatible = "syscon";
+		compatible = "syscon", "simple-bus";
 		reg = <0x0 0x800>;
 		#address-cells = <1>;
 		#size-cells = <1>;
+		ranges = <0 0 0x800>;
 
 		scm_clocks: clocks {
 			#address-cells = <1>;


@@ -138,7 +138,7 @@
 		};
 
 		uart1: uart@20000 {
-			compatible = "ti,omap3-uart";
+			compatible = "ti,am3352-uart", "ti,omap3-uart";
 			ti,hwmods = "uart1";
 			reg = <0x20000 0x2000>;
 			clock-frequency = <48000000>;
@@ -148,7 +148,7 @@
 		};
 
 		uart2: uart@22000 {
-			compatible = "ti,omap3-uart";
+			compatible = "ti,am3352-uart", "ti,omap3-uart";
 			ti,hwmods = "uart2";
 			reg = <0x22000 0x2000>;
 			clock-frequency = <48000000>;
@@ -158,7 +158,7 @@
 		};
 
 		uart3: uart@24000 {
-			compatible = "ti,omap3-uart";
+			compatible = "ti,am3352-uart", "ti,omap3-uart";
 			ti,hwmods = "uart3";
 			reg = <0x24000 0x2000>;
 			clock-frequency = <48000000>;
@@ -189,10 +189,11 @@
 			ranges = <0 0x160000 0x16d000>;
 
 			scm_conf: scm_conf@0 {
-				compatible = "syscon";
+				compatible = "syscon", "simple-bus";
 				reg = <0x0 0x800>;
 				#address-cells = <1>;
 				#size-cells = <1>;
+				ranges = <0 0 0x800>;
 
 				scm_clocks: clocks {
 					#address-cells = <1>;


@@ -347,7 +347,7 @@
 		};
 
 		uart1: uart@48020000 {
-			compatible = "ti,omap3-uart";
+			compatible = "ti,am3352-uart", "ti,omap3-uart";
 			ti,hwmods = "uart1";
 			reg = <0x48020000 0x2000>;
 			clock-frequency = <48000000>;
@@ -357,7 +357,7 @@
 		};
 
 		uart2: uart@48022000 {
-			compatible = "ti,omap3-uart";
+			compatible = "ti,am3352-uart", "ti,omap3-uart";
 			ti,hwmods = "uart2";
 			reg = <0x48022000 0x2000>;
 			clock-frequency = <48000000>;
@@ -367,7 +367,7 @@
 		};
 
 		uart3: uart@48024000 {
-			compatible = "ti,omap3-uart";
+			compatible = "ti,am3352-uart", "ti,omap3-uart";
 			ti,hwmods = "uart3";
 			reg = <0x48024000 0x2000>;
 			clock-frequency = <48000000>;


@@ -88,7 +88,7 @@
 	interrupts-extended = <&intc 83 &omap3_pmx_core 0x11a>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&mmc1_pins &mmc1_cd>;
-	cd-gpios = <&gpio4 31 IRQ_TYPE_LEVEL_LOW>; /* gpio127 */
+	cd-gpios = <&gpio4 31 GPIO_ACTIVE_LOW>; /* gpio127 */
 	vmmc-supply = <&vmmc1>;
 	bus-width = <4>;
 	cap-power-off-card;


@@ -221,6 +221,7 @@ CONFIG_SERIO=m
 CONFIG_SERIAL_8250=y
 CONFIG_SERIAL_8250_CONSOLE=y
 CONFIG_SERIAL_8250_NR_UARTS=32
+CONFIG_SERIAL_8250_RUNTIME_UARTS=6
 CONFIG_SERIAL_8250_EXTENDED=y
 CONFIG_SERIAL_8250_MANY_PORTS=y
 CONFIG_SERIAL_8250_SHARE_IRQ=y


@@ -357,7 +357,7 @@ static struct crypto_alg aesbs_algs[] = { {
 }, {
 	.cra_name		= "cbc(aes)",
 	.cra_driver_name	= "cbc-aes-neonbs",
-	.cra_priority		= 300,
+	.cra_priority		= 250,
 	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER|CRYPTO_ALG_ASYNC,
 	.cra_blocksize		= AES_BLOCK_SIZE,
 	.cra_ctxsize		= sizeof(struct async_helper_ctx),
@@ -377,7 +377,7 @@ static struct crypto_alg aesbs_algs[] = { {
 }, {
 	.cra_name		= "ctr(aes)",
 	.cra_driver_name	= "ctr-aes-neonbs",
-	.cra_priority		= 300,
+	.cra_priority		= 250,
 	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER|CRYPTO_ALG_ASYNC,
 	.cra_blocksize		= 1,
 	.cra_ctxsize		= sizeof(struct async_helper_ctx),
@@ -397,7 +397,7 @@ static struct crypto_alg aesbs_algs[] = { {
 }, {
 	.cra_name		= "xts(aes)",
 	.cra_driver_name	= "xts-aes-neonbs",
-	.cra_priority		= 300,
+	.cra_priority		= 250,
 	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER|CRYPTO_ALG_ASYNC,
 	.cra_blocksize		= AES_BLOCK_SIZE,
 	.cra_ctxsize		= sizeof(struct async_helper_ctx),


@@ -133,30 +133,26 @@ static void dump_mem(const char *lvl, const char *str, unsigned long bottom,
 	set_fs(fs);
 }
 
-static void dump_instr(const char *lvl, struct pt_regs *regs)
+static void __dump_instr(const char *lvl, struct pt_regs *regs)
 {
 	unsigned long addr = instruction_pointer(regs);
 	const int thumb = thumb_mode(regs);
 	const int width = thumb ? 4 : 8;
-	mm_segment_t fs;
 	char str[sizeof("00000000 ") * 5 + 2 + 1], *p = str;
 	int i;
 
 	/*
-	 * We need to switch to kernel mode so that we can use __get_user
-	 * to safely read from kernel space. Note that we now dump the
-	 * code first, just in case the backtrace kills us.
+	 * Note that we now dump the code first, just in case the backtrace
+	 * kills us.
 	 */
-	fs = get_fs();
-	set_fs(KERNEL_DS);
-
 	for (i = -4; i < 1 + !!thumb; i++) {
 		unsigned int val, bad;
 
 		if (thumb)
-			bad = __get_user(val, &((u16 *)addr)[i]);
+			bad = get_user(val, &((u16 *)addr)[i]);
 		else
-			bad = __get_user(val, &((u32 *)addr)[i]);
+			bad = get_user(val, &((u32 *)addr)[i]);
 
 		if (!bad)
 			p += sprintf(p, i == 0 ? "(%0*x) " : "%0*x ",
@@ -167,8 +163,20 @@ static void dump_instr(const char *lvl, struct pt_regs *regs)
 	}
 	printk("%sCode: %s\n", lvl, str);
+}
+
+static void dump_instr(const char *lvl, struct pt_regs *regs)
+{
+	mm_segment_t fs;
+
+	if (!user_mode(regs)) {
+		fs = get_fs();
+		set_fs(KERNEL_DS);
+		__dump_instr(lvl, regs);
 		set_fs(fs);
+	} else {
+		__dump_instr(lvl, regs);
+	}
 }
 
 #ifdef CONFIG_ARM_UNWIND
#ifdef CONFIG_ARM_UNWIND #ifdef CONFIG_ARM_UNWIND


@@ -522,7 +522,6 @@ static void pdata_quirks_check(struct pdata_init *quirks)
 		if (of_machine_is_compatible(quirks->compatible)) {
 			if (quirks->fn)
 				quirks->fn();
-			break;
 		}
 		quirks++;
 	}


@@ -126,8 +126,8 @@ static const struct prot_bits section_bits[] = {
 		.val	= PMD_SECT_USER,
 		.set	= "USR",
 	}, {
-		.mask	= L_PMD_SECT_RDONLY,
-		.val	= L_PMD_SECT_RDONLY,
+		.mask	= L_PMD_SECT_RDONLY | PMD_SECT_AP2,
+		.val	= L_PMD_SECT_RDONLY | PMD_SECT_AP2,
 		.set	= "ro",
 		.clear	= "RW",
 #elif __LINUX_ARM_ARCH__ >= 6


@@ -669,8 +669,8 @@ static struct section_perm ro_perms[] = {
 		.start	= (unsigned long)_stext,
 		.end	= (unsigned long)__init_begin,
 #ifdef CONFIG_ARM_LPAE
-		.mask	= ~L_PMD_SECT_RDONLY,
-		.prot	= L_PMD_SECT_RDONLY,
+		.mask	= ~(L_PMD_SECT_RDONLY | PMD_SECT_AP2),
+		.prot	= L_PMD_SECT_RDONLY | PMD_SECT_AP2,
 #else
 		.mask	= ~(PMD_SECT_APX | PMD_SECT_AP_WRITE),
 		.prot	= PMD_SECT_APX | PMD_SECT_AP_WRITE,


@@ -104,6 +104,7 @@ config ARM64
 	select HAVE_CONTEXT_TRACKING
 	select HAVE_ARM_SMCCC
 	select THREAD_INFO_IN_TASK
+	select HAVE_ARM_SMCCC
 	help
 	  ARM 64-bit (AArch64) Linux support.


@@ -30,6 +30,8 @@
  * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
+/memreserve/ 0x81000000 0x00200000;
+
 #include <dt-bindings/interrupt-controller/arm-gic.h>
 
 /memreserve/ 0x84b00000 0x00000008;


@@ -266,7 +266,7 @@ static inline void __kvm_flush_dcache_pud(pud_t pud)
 	kvm_flush_dcache_to_poc(page_address(page), PUD_SIZE);
 }
 
-#define kvm_virt_to_phys(x)	__virt_to_phys((unsigned long)(x))
+#define kvm_virt_to_phys(x)	__pa_symbol(x)
 
 void kvm_set_way_flush(struct kvm_vcpu *vcpu);
 void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);


@@ -188,6 +188,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
 #define virt_to_pfn(x)		__phys_to_pfn(__virt_to_phys(x))
+#define sym_to_pfn(x)		__phys_to_pfn(__pa_symbol(x))
 
 /*
  *	virt_to_page(k)	convert a _valid_ virtual address to struct page *


@@ -55,7 +55,7 @@ static inline void contextidr_thread_switch(struct task_struct *next)
  */
 static inline void cpu_set_reserved_ttbr0(void)
 {
-	unsigned long ttbr = virt_to_phys(empty_zero_page);
+	unsigned long ttbr = __pa_symbol(empty_zero_page);
 
 	asm(
 	"	msr	ttbr0_el1, %0			// set TTBR0\n"
@@ -129,7 +129,7 @@ static inline void cpu_install_idmap(void)
 	local_flush_tlb_all();
 	cpu_set_idmap_tcr_t0sz();
 
-	cpu_switch_mm(idmap_pg_dir, &init_mm);
+	cpu_switch_mm(lm_alias(idmap_pg_dir), &init_mm);
 }
 
 /*
@@ -144,7 +144,7 @@ static inline void cpu_replace_ttbr1(pgd_t *pgd)
 	phys_addr_t pgd_phys = virt_to_phys(pgd);
 
-	replace_phys = (void *)virt_to_phys(idmap_cpu_replace_ttbr1);
+	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
 
 	cpu_install_idmap();
 	replace_phys(pgd_phys);


@@ -120,7 +120,7 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
  * for zero-mapped memory areas etc..
  */
 extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
-#define ZERO_PAGE(vaddr)	virt_to_page(empty_zero_page)
+#define ZERO_PAGE(vaddr)	phys_to_page(__pa_symbol(empty_zero_page))
 
 #define pte_ERROR(pte)		__pte_error(__FILE__, __LINE__, pte_val(pte))


@@ -17,6 +17,7 @@
  * along with this program. If not, see <http://www.gnu.org/licenses/>.
  */
 #include <linux/acpi.h>
+#include <linux/mm.h>
 #include <linux/types.h>
 
 #include <asm/cpu_ops.h>
@@ -102,7 +103,7 @@ static int acpi_parking_protocol_cpu_boot(unsigned int cpu)
 	 * that read this address need to convert this address to the
 	 * Boot-Loader's endianness before jumping.
 	 */
-	writeq_relaxed(__pa(secondary_entry), &mailbox->entry_point);
+	writeq_relaxed(__pa_symbol(secondary_entry), &mailbox->entry_point);
 	writel_relaxed(cpu_entry->gic_cpu_id, &mailbox->cpu_id);
 
 	arch_send_wakeup_ipi_mask(cpumask_of(cpu));


@@ -23,6 +23,7 @@
 #include <linux/sort.h>
 #include <linux/stop_machine.h>
 #include <linux/types.h>
+#include <linux/mm.h>
 
 #include <asm/cpu.h>
 #include <asm/cpufeature.h>
 #include <asm/cpu_ops.h>


@@ -98,7 +98,7 @@ static void __kprobes *patch_map(void *addr, int fixmap)
 		page = vmalloc_to_page(addr);
 	else if (!module && (IS_ENABLED(CONFIG_DEBUG_RODATA)
 			     || IS_ENABLED(CONFIG_KERNEL_TEXT_RDONLY)))
-		page = virt_to_page(addr);
+		page = phys_to_page(__pa_symbol(addr));
 	else
 		return addr;


@@ -26,8 +26,7 @@
  */
 void __memcpy_fromio(void *to, const volatile void __iomem *from, size_t count)
 {
-	while (count && (!IS_ALIGNED((unsigned long)from, 8) ||
-			 !IS_ALIGNED((unsigned long)to, 8))) {
+	while (count && !IS_ALIGNED((unsigned long)from, 8)) {
 		*(u8 *)to = __raw_readb_no_log(from);
 		from++;
 		to++;
@@ -55,23 +54,22 @@ EXPORT_SYMBOL(__memcpy_fromio);
  */
 void __memcpy_toio(volatile void __iomem *to, const void *from, size_t count)
 {
-	while (count && (!IS_ALIGNED((unsigned long)to, 8) ||
-			 !IS_ALIGNED((unsigned long)from, 8))) {
-		__raw_writeb_no_log(*(volatile u8 *)from, to);
+	while (count && !IS_ALIGNED((unsigned long)to, 8)) {
+		__raw_writeb_no_log(*(u8 *)from, to);
 		from++;
 		to++;
 		count--;
 	}
 
 	while (count >= 8) {
-		__raw_writeq_no_log(*(volatile u64 *)from, to);
+		__raw_writeq_no_log(*(u64 *)from, to);
 		from += 8;
 		to += 8;
 		count -= 8;
 	}
 
 	while (count) {
-		__raw_writeb_no_log(*(volatile u8 *)from, to);
+		__raw_writeb_no_log(*(u8 *)from, to);
 		from++;
 		to++;
 		count--;


@@ -19,6 +19,7 @@
 #include <linux/of.h>
 #include <linux/smp.h>
 #include <linux/delay.h>
+#include <linux/mm.h>
 #include <linux/psci.h>
 
 #include <uapi/linux/psci.h>
@@ -46,7 +47,8 @@ static int __init cpu_psci_cpu_prepare(unsigned int cpu)
 
 static int cpu_psci_cpu_boot(unsigned int cpu)
 {
-	int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa(secondary_entry));
+	int err = psci_ops.cpu_on(cpu_logical_map(cpu),
+				  __pa_symbol(secondary_entry));
 	if (err)
 		pr_err("failed to boot CPU%d (%d)\n", cpu, err);


@@ -45,6 +45,7 @@
 #include <linux/efi.h>
 #include <linux/psci.h>
 #include <linux/dma-mapping.h>
+#include <linux/mm.h>
 
 #include <asm/acpi.h>
 #include <asm/fixmap.h>
@@ -212,10 +213,10 @@ static void __init request_standard_resources(void)
 	struct memblock_region *region;
 	struct resource *res;
 
-	kernel_code.start   = virt_to_phys(_text);
-	kernel_code.end     = virt_to_phys(__init_begin - 1);
-	kernel_data.start   = virt_to_phys(_sdata);
-	kernel_data.end     = virt_to_phys(_end - 1);
+	kernel_code.start   = __pa_symbol(_text);
+	kernel_code.end     = __pa_symbol(__init_begin - 1);
+	kernel_data.start   = __pa_symbol(_sdata);
+	kernel_data.end     = __pa_symbol(_end - 1);
 
 	for_each_memblock(memory, region) {
 		res = alloc_bootmem_low(sizeof(*res));
@@ -367,9 +368,9 @@ void __init setup_arch(char **cmdline_p)
 	 * thread.
 	 */
 #ifdef CONFIG_THREAD_INFO_IN_TASK
-	init_task.thread_info.ttbr0 = virt_to_phys(empty_zero_page);
+	init_task.thread_info.ttbr0 = __pa_symbol(empty_zero_page);
 #else
-	init_thread_info.ttbr0 = virt_to_phys(empty_zero_page);
+	init_thread_info.ttbr0 = __pa_symbol(empty_zero_page);
 #endif
 #endif


@@ -21,6 +21,7 @@
 #include <linux/of.h>
 #include <linux/smp.h>
 #include <linux/types.h>
+#include <linux/mm.h>
 
 #include <asm/cacheflush.h>
 #include <asm/cpu_ops.h>
@@ -96,7 +97,7 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
 	 * boot-loader's endianess before jumping. This is mandated by
 	 * the boot protocol.
 	 */
-	writeq_relaxed(__pa(secondary_holding_pen), release_addr);
+	writeq_relaxed(__pa_symbol(secondary_holding_pen), release_addr);
 	__flush_dcache_area((__force void *)release_addr,
 			    sizeof(*release_addr));


@@ -114,6 +114,7 @@ static struct vm_special_mapping vdso_spec[2];
 static int __init vdso_init(void)
 {
 	int i;
+	unsigned long pfn;
 
 	if (memcmp(&vdso_start, "\177ELF", 4)) {
 		pr_err("vDSO is not a valid ELF object!\n");
@@ -131,11 +132,14 @@ static int __init vdso_init(void)
 		return -ENOMEM;
 
 	/* Grab the vDSO data page. */
-	vdso_pagelist[0] = virt_to_page(vdso_data);
+	vdso_pagelist[0] = phys_to_page(__pa_symbol(vdso_data));
 
 	/* Grab the vDSO code pages. */
+	pfn = sym_to_pfn(&vdso_start);
+
 	for (i = 0; i < vdso_pages; i++)
-		vdso_pagelist[i + 1] = virt_to_page(&vdso_start + i * PAGE_SIZE);
+		vdso_pagelist[i + 1] = pfn_to_page(pfn + i);
 
 	/* Populate the special mapping structures */
 	vdso_spec[0] = (struct vm_special_mapping) {
@@ -214,8 +218,8 @@ void update_vsyscall(struct timekeeper *tk)
 	if (!use_syscall) {
 		/* tkr_mono.cycle_last == tkr_raw.cycle_last */
 		vdso_data->cs_cycle_last	= tk->tkr_mono.cycle_last;
-		vdso_data->raw_time_sec		= tk->raw_time.tv_sec;
-		vdso_data->raw_time_nsec	= tk->raw_time.tv_nsec;
+		vdso_data->raw_time_sec		= tk->raw_sec;
+		vdso_data->raw_time_nsec	= tk->tkr_raw.xtime_nsec;
 		vdso_data->xtime_clock_sec	= tk->xtime_sec;
 		vdso_data->xtime_clock_nsec	= tk->tkr_mono.xtime_nsec;
 		/* tkr_raw.xtime_nsec == 0 */


@@ -310,7 +310,7 @@ ENTRY(__kernel_clock_getres)
 	b.ne	4f
 	ldr	x2, 6f
 2:
-	cbz	w1, 3f
+	cbz	x1, 3f
 	stp	xzr, x2, [x1]
 3:	/* res == NULL. */


@@ -34,6 +34,7 @@
 #include <linux/dma-contiguous.h>
 #include <linux/efi.h>
 #include <linux/swiotlb.h>
+#include <linux/mm.h>
 
 #include <asm/boot.h>
 #include <asm/fixmap.h>
@@ -191,8 +192,8 @@ void __init arm64_memblock_init(void)
 	 * linear mapping. Take care not to clip the kernel which may be
 	 * high in memory.
 	 */
-	memblock_remove(max_t(u64, memstart_addr + linear_region_size, __pa(_end)),
-			ULLONG_MAX);
+	memblock_remove(max_t(u64, memstart_addr + linear_region_size,
+			__pa_symbol(_end)), ULLONG_MAX);
 	if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {
 		/* ensure that memstart_addr remains sufficiently aligned */
 		memstart_addr = round_up(memblock_end_of_DRAM() - linear_region_size,
@@ -212,7 +213,7 @@ void __init arm64_memblock_init(void)
 	 */
 		bootloader_memory_limit = memblock_end_of_DRAM();
 		memblock_enforce_memory_limit(memory_limit);
-		memblock_add(__pa(_text), (u64)(_end - _text));
+		memblock_add(__pa_symbol(_text), (u64)(_end - _text));
 	}
 
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
@@ -236,7 +237,7 @@ void __init arm64_memblock_init(void)
 	 * Register the kernel text, kernel data, initrd, and initial
 	 * pagetables with memblock.
 	 */
-	memblock_reserve(__pa(_text), _end - _text);
+	memblock_reserve(__pa_symbol(_text), _end - _text);
 #ifdef CONFIG_BLK_DEV_INITRD
 	if (initrd_start) {
 		memblock_reserve(initrd_start, initrd_end - initrd_start);


@@ -15,6 +15,7 @@
 #include <linux/kernel.h>
 #include <linux/memblock.h>
 #include <linux/start_kernel.h>
+#include <linux/mm.h>
 #include <asm/mmu_context.h>
 #include <asm/kernel-pgtable.h>
@@ -26,6 +27,13 @@
 static pgd_t tmp_pg_dir[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);
+/*
+ * The p*d_populate functions call virt_to_phys implicitly so they can't be used
+ * directly on kernel symbols (bm_p*d). All the early functions are called too
+ * early to use lm_alias so __p*d_populate functions must be used to populate
+ * with the physical address from __pa_symbol.
+ */
 static void __init kasan_early_pte_populate(pmd_t *pmd, unsigned long addr,
 					    unsigned long end)
 {
@@ -33,12 +41,13 @@ static void __init kasan_early_pte_populate(pmd_t *pmd, unsigned long addr,
 	unsigned long next;
 	if (pmd_none(*pmd))
-		pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
+		__pmd_populate(pmd, __pa_symbol(kasan_zero_pte),
+			       PMD_TYPE_TABLE);
 	pte = pte_offset_kimg(pmd, addr);
 	do {
 		next = addr + PAGE_SIZE;
-		set_pte(pte, pfn_pte(virt_to_pfn(kasan_zero_page),
+		set_pte(pte, pfn_pte(sym_to_pfn(kasan_zero_page),
 				     PAGE_KERNEL));
 	} while (pte++, addr = next, addr != end && pte_none(*pte));
 }
@@ -51,7 +60,8 @@ static void __init kasan_early_pmd_populate(pud_t *pud,
 	unsigned long next;
 	if (pud_none(*pud))
-		pud_populate(&init_mm, pud, kasan_zero_pmd);
+		__pud_populate(pud, __pa_symbol(kasan_zero_pmd),
+			       PMD_TYPE_TABLE);
 	pmd = pmd_offset_kimg(pud, addr);
 	do {
@@ -68,7 +78,8 @@ static void __init kasan_early_pud_populate(pgd_t *pgd,
 	unsigned long next;
 	if (pgd_none(*pgd))
-		pgd_populate(&init_mm, pgd, kasan_zero_pud);
+		__pgd_populate(pgd, __pa_symbol(kasan_zero_pud),
+			       PUD_TYPE_TABLE);
 	pud = pud_offset_kimg(pgd, addr);
 	do {
@@ -148,7 +159,7 @@ void __init kasan_init(void)
 	 */
 	memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(tmp_pg_dir));
 	dsb(ishst);
-	cpu_replace_ttbr1(tmp_pg_dir);
+	cpu_replace_ttbr1(lm_alias(tmp_pg_dir));
 	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
@@ -199,10 +210,10 @@ void __init kasan_init(void)
 	 */
 	for (i = 0; i < PTRS_PER_PTE; i++)
 		set_pte(&kasan_zero_pte[i],
-			pfn_pte(virt_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));
+			pfn_pte(sym_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));
 	memset(kasan_zero_page, 0, PAGE_SIZE);
-	cpu_replace_ttbr1(swapper_pg_dir);
+	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 	/* At this point kasan is fully initialized. Enable error messages */
 	init_task.kasan_depth = 0;


@@ -31,6 +31,7 @@
 #include <linux/stop_machine.h>
 #include <linux/dma-contiguous.h>
 #include <linux/cma.h>
+#include <linux/mm.h>
 #include <asm/barrier.h>
 #include <asm/cputype.h>
@@ -391,8 +392,8 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
 static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end)
 {
-	unsigned long kernel_start = __pa(_text);
-	unsigned long kernel_end = __pa(__init_begin);
+	unsigned long kernel_start = __pa_symbol(_text);
+	unsigned long kernel_end = __pa_symbol(__init_begin);
 	/*
 	 * Take care not to create a writable alias for the
@@ -456,14 +457,15 @@ void mark_rodata_ro(void)
 	unsigned long section_size;
 	section_size = (unsigned long)_etext - (unsigned long)_text;
-	create_mapping_late(__pa(_text), (unsigned long)_text,
+	create_mapping_late(__pa_symbol(_text), (unsigned long)_text,
 			    section_size, PAGE_KERNEL_ROX);
 	/*
 	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
 	 * to cover NOTES and EXCEPTION_TABLE.
 	 */
 	section_size = (unsigned long)__init_begin - (unsigned long)__start_rodata;
-	create_mapping_late(__pa(__start_rodata), (unsigned long)__start_rodata,
+	create_mapping_late(__pa_symbol(__start_rodata),
+			    (unsigned long)__start_rodata,
 			    section_size, PAGE_KERNEL_RO);
 }
@@ -480,7 +482,7 @@ void fixup_init(void)
 static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
 				      pgprot_t prot, struct vm_struct *vma)
 {
-	phys_addr_t pa_start = __pa(va_start);
+	phys_addr_t pa_start = __pa_symbol(va_start);
 	unsigned long size = va_end - va_start;
 	BUG_ON(!PAGE_ALIGNED(pa_start));
@@ -528,7 +530,7 @@ static void __init map_kernel(pgd_t *pgd)
 		 */
 		BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
 		set_pud(pud_set_fixmap_offset(pgd, FIXADDR_START),
-			__pud(__pa(bm_pmd) | PUD_TYPE_TABLE));
+			__pud(__pa_symbol(bm_pmd) | PUD_TYPE_TABLE));
 		pud_clear_fixmap();
 	} else {
 		BUG();
@@ -590,7 +592,7 @@ void __init paging_init(void)
 	 */
 	cpu_replace_ttbr1(__va(pgd_phys));
 	memcpy(swapper_pg_dir, pgd, PAGE_SIZE);
-	cpu_replace_ttbr1(swapper_pg_dir);
+	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 	pgd_clear_fixmap();
 	memblock_free(pgd_phys, PAGE_SIZE);
@@ -599,7 +601,7 @@ void __init paging_init(void)
 	 * We only reuse the PGD from the swapper_pg_dir, not the pud + pmd
 	 * allocated with it.
 	 */
-	memblock_free(__pa(swapper_pg_dir) + PAGE_SIZE,
+	memblock_free(__pa_symbol(swapper_pg_dir) + PAGE_SIZE,
 		      SWAPPER_DIR_SIZE - PAGE_SIZE);
 	bootmem_init();
@@ -1141,6 +1143,12 @@ static inline pte_t * fixmap_pte(unsigned long addr)
 	return &bm_pte[pte_index(addr)];
 }
+/*
+ * The p*d_populate functions call virt_to_phys implicitly so they can't be used
+ * directly on kernel symbols (bm_p*d). This function is called too early to use
+ * lm_alias so __p*d_populate functions must be used to populate with the
+ * physical address from __pa_symbol.
+ */
 void __init early_fixmap_init(void)
 {
 	pgd_t *pgd;
@@ -1150,7 +1158,7 @@ void __init early_fixmap_init(void)
 	pgd = pgd_offset_k(addr);
 	if (CONFIG_PGTABLE_LEVELS > 3 &&
-	    !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa(bm_pud))) {
+	    !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa_symbol(bm_pud))) {
 		/*
 		 * We only end up here if the kernel mapping and the fixmap
 		 * share the top level pgd entry, which should only happen on
@@ -1159,12 +1167,15 @@ void __init early_fixmap_init(void)
 		BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
 		pud = pud_offset_kimg(pgd, addr);
 	} else {
-		pgd_populate(&init_mm, pgd, bm_pud);
+		if (pgd_none(*pgd))
+			__pgd_populate(pgd, __pa_symbol(bm_pud),
+				       PUD_TYPE_TABLE);
 		pud = fixmap_pud(addr);
 	}
-	pud_populate(&init_mm, pud, bm_pmd);
+	if (pud_none(*pud))
+		__pud_populate(pud, __pa_symbol(bm_pmd), PMD_TYPE_TABLE);
 	pmd = fixmap_pmd(addr);
-	pmd_populate_kernel(&init_mm, pmd, bm_pte);
+	__pmd_populate(pmd, __pa_symbol(bm_pte), PMD_TYPE_TABLE);
 	/*
 	 * The boot-ioremap range spans multiple pmds, for which


@@ -576,6 +576,7 @@ static int __init ar7_register_uarts(void)
 	uart_port.type		= PORT_AR7;
 	uart_port.uartclk	= clk_get_rate(bus_clk) / 2;
 	uart_port.iotype	= UPIO_MEM32;
+	uart_port.flags		= UPF_FIXED_TYPE;
 	uart_port.regshift	= 2;
 	uart_port.line		= 0;
@@ -654,6 +655,10 @@ static int __init ar7_register_devices(void)
 	u32 val;
 	int res;
+	res = ar7_gpio_init();
+	if (res)
+		pr_warn("unable to register gpios: %d\n", res);
 	res = ar7_register_uarts();
 	if (res)
 		pr_err("unable to setup uart(s): %d\n", res);

View file

@@ -246,8 +246,6 @@ void __init prom_init(void)
 	ar7_init_cmdline(fw_arg0, (char **)fw_arg1);
 	ar7_init_env((struct env_var *)fw_arg2);
 	console_config();
-	ar7_gpio_init();
 }
 #define PORT(offset) (KSEG1ADDR(AR7_REGS_UART0 + (offset * 4)))
#define PORT(offset) (KSEG1ADDR(AR7_REGS_UART0 + (offset * 4))) #define PORT(offset) (KSEG1ADDR(AR7_REGS_UART0 + (offset * 4)))

View file

@@ -330,7 +330,7 @@ bcm47xx_leds_linksys_wrt54g3gv2[] __initconst = {
 /* Verified on: WRT54GS V1.0 */
 static const struct gpio_led
 bcm47xx_leds_linksys_wrt54g_type_0101[] __initconst = {
-	BCM47XX_GPIO_LED(0, "green", "wlan", 0, LEDS_GPIO_DEFSTATE_OFF),
+	BCM47XX_GPIO_LED(0, "green", "wlan", 1, LEDS_GPIO_DEFSTATE_OFF),
 	BCM47XX_GPIO_LED(1, "green", "power", 0, LEDS_GPIO_DEFSTATE_ON),
 	BCM47XX_GPIO_LED(7, "green", "dmz", 1, LEDS_GPIO_DEFSTATE_OFF),
 };


@@ -54,7 +54,8 @@
 		.align	2;				\
 		.type	symbol, @function;		\
 		.ent	symbol, 0;			\
-symbol:		.frame	sp, 0, ra
+symbol:		.frame	sp, 0, ra;			\
+		.insn
 /*
  * NESTED - declare nested routine entry point
@@ -64,7 +65,8 @@ symbol:		.frame	sp, 0, ra
 		.align	2;				\
 		.type	symbol, @function;		\
 		.ent	symbol, 0;			\
-symbol:		.frame	sp, framesize, rpc
+symbol:		.frame	sp, framesize, rpc;		\
+		.insn
 /*
  * END - mark end of function
@@ -86,7 +88,7 @@ symbol:
 #define FEXPORT(symbol)					\
 		.globl	symbol;				\
 		.type	symbol, @function;		\
-symbol:
+symbol:		.insn
 /*
  * ABS - export absolute symbol


@@ -238,8 +238,8 @@ BUILD_CM_Cx_R_(tcid_8_priority,	0x80)
 #define CM_GCR_BASE_GCRBASE_MSK			(_ULCAST_(0x1ffff) << 15)
 #define CM_GCR_BASE_CMDEFTGT_SHF		0
 #define CM_GCR_BASE_CMDEFTGT_MSK		(_ULCAST_(0x3) << 0)
-#define  CM_GCR_BASE_CMDEFTGT_DISABLED		0
-#define  CM_GCR_BASE_CMDEFTGT_MEM		1
+#define  CM_GCR_BASE_CMDEFTGT_MEM		0
+#define  CM_GCR_BASE_CMDEFTGT_RESERVED		1
 #define  CM_GCR_BASE_CMDEFTGT_IOCU0		2
 #define  CM_GCR_BASE_CMDEFTGT_IOCU1		3


@@ -49,8 +49,6 @@
 #ifdef CONFIG_HOTPLUG_CPU
 void arch_cpu_idle_dead(void)
 {
-	/* What the heck is this check doing ? */
-	if (!cpumask_test_cpu(smp_processor_id(), &cpu_callin_map))
 	play_dead();
 }
 #endif


@@ -650,6 +650,19 @@ static const struct user_regset_view user_mips64_view = {
 	.n		= ARRAY_SIZE(mips64_regsets),
 };
+#ifdef CONFIG_MIPS32_N32
+
+static const struct user_regset_view user_mipsn32_view = {
+	.name		= "mipsn32",
+	.e_flags	= EF_MIPS_ABI2,
+	.e_machine	= ELF_ARCH,
+	.ei_osabi	= ELF_OSABI,
+	.regsets	= mips64_regsets,
+	.n		= ARRAY_SIZE(mips64_regsets),
+};
+
+#endif /* CONFIG_MIPS32_N32 */
 #endif /* CONFIG_64BIT */
 const struct user_regset_view *task_user_regset_view(struct task_struct *task)
@@ -660,6 +673,10 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task)
 #ifdef CONFIG_MIPS32_O32
 	if (test_tsk_thread_flag(task, TIF_32BIT_REGS))
 		return &user_mips_view;
+#endif
+#ifdef CONFIG_MIPS32_N32
+	if (test_tsk_thread_flag(task, TIF_32BIT_ADDR))
+		return &user_mipsn32_view;
 #endif
 	return &user_mips64_view;
 #endif


@@ -152,6 +152,35 @@ void __init detect_memory_region(phys_addr_t start, phys_addr_t sz_min, phys_add
 	add_memory_region(start, size, BOOT_MEM_RAM);
 }
+bool __init memory_region_available(phys_addr_t start, phys_addr_t size)
+{
+	int i;
+	bool in_ram = false, free = true;
+
+	for (i = 0; i < boot_mem_map.nr_map; i++) {
+		phys_addr_t start_, end_;
+
+		start_ = boot_mem_map.map[i].addr;
+		end_ = boot_mem_map.map[i].addr + boot_mem_map.map[i].size;
+
+		switch (boot_mem_map.map[i].type) {
+		case BOOT_MEM_RAM:
+			if (start >= start_ && start + size <= end_)
+				in_ram = true;
+			break;
+		case BOOT_MEM_RESERVED:
+			if ((start >= start_ && start < end_) ||
+			    (start < start_ && start + size >= start_))
+				free = false;
+			break;
+		default:
+			continue;
+		}
+	}
+
+	return in_ram && free;
+}
 static void __init print_memory_map(void)
 {
 	int i;
@@ -300,11 +329,19 @@ static void __init bootmem_init(void)
 #else  /* !CONFIG_SGI_IP27 */
+static unsigned long __init bootmap_bytes(unsigned long pages)
+{
+	unsigned long bytes = DIV_ROUND_UP(pages, 8);
+
+	return ALIGN(bytes, sizeof(long));
+}
 static void __init bootmem_init(void)
 {
 	unsigned long reserved_end;
 	unsigned long mapstart = ~0UL;
 	unsigned long bootmap_size;
+	bool bootmap_valid = false;
 	int i;
 	/*
@@ -384,12 +421,43 @@ static void __init bootmem_init(void)
 	mapstart = max(mapstart, (unsigned long)PFN_UP(__pa(initrd_end)));
 #endif
+	/*
+	 * check that mapstart doesn't overlap with any of
+	 * memory regions that have been reserved through eg. DTB
+	 */
+	bootmap_size = bootmap_bytes(max_low_pfn - min_low_pfn);
+
+	bootmap_valid = memory_region_available(PFN_PHYS(mapstart),
+						bootmap_size);
+	for (i = 0; i < boot_mem_map.nr_map && !bootmap_valid; i++) {
+		unsigned long mapstart_addr;
+
+		switch (boot_mem_map.map[i].type) {
+		case BOOT_MEM_RESERVED:
+			mapstart_addr = PFN_ALIGN(boot_mem_map.map[i].addr +
+						boot_mem_map.map[i].size);
+			if (PHYS_PFN(mapstart_addr) < mapstart)
+				break;
+
+			bootmap_valid = memory_region_available(mapstart_addr,
+								bootmap_size);
+			if (bootmap_valid)
+				mapstart = PHYS_PFN(mapstart_addr);
+			break;
+		default:
+			break;
+		}
+	}
+
+	if (!bootmap_valid)
+		panic("No memory area to place a bootmap bitmap");
 	/*
 	 * Initialize the boot-time allocator with low memory only.
 	 */
-	bootmap_size = init_bootmem_node(NODE_DATA(0), mapstart,
-					 min_low_pfn, max_low_pfn);
+	if (bootmap_size != init_bootmem_node(NODE_DATA(0), mapstart,
+					      min_low_pfn, max_low_pfn))
+		panic("Unexpected memory size required for bootmap");
 	for (i = 0; i < boot_mem_map.nr_map; i++) {
 		unsigned long start, end;
@@ -438,6 +506,10 @@ static void __init bootmem_init(void)
 			continue;
 		default:
 			/* Not usable memory */
+			if (start > min_low_pfn && end < max_low_pfn)
+				reserve_bootmem(boot_mem_map.map[i].addr,
+						boot_mem_map.map[i].size,
+						BOOTMEM_DEFAULT);
 			continue;
 		}


@@ -64,6 +64,9 @@ EXPORT_SYMBOL(cpu_sibling_map);
 cpumask_t cpu_core_map[NR_CPUS] __read_mostly;
 EXPORT_SYMBOL(cpu_core_map);
+static DECLARE_COMPLETION(cpu_starting);
+static DECLARE_COMPLETION(cpu_running);
 /*
  * A logcal cpu mask containing only one VPE per core to
  * reduce the number of IPIs on large MT systems.
@@ -174,9 +177,12 @@ asmlinkage void start_secondary(void)
 	cpumask_set_cpu(cpu, &cpu_coherent_mask);
 	notify_cpu_starting(cpu);
-	cpumask_set_cpu(cpu, &cpu_callin_map);
+	/* Notify boot CPU that we're starting & ready to sync counters */
+	complete(&cpu_starting);
 	synchronise_count_slave(cpu);
+	/* The CPU is running and counters synchronised, now mark it online */
 	set_cpu_online(cpu, true);
 	set_cpu_sibling_map(cpu);
@@ -184,6 +190,12 @@ asmlinkage void start_secondary(void)
 	calculate_cpu_foreign_map();
+	/*
+	 * Notify boot CPU that we're up & online and it can safely return
+	 * from __cpu_up
+	 */
+	complete(&cpu_running);
 	/*
 	 * irq will be enabled in ->smp_finish(), enabling it too early
 	 * is dangerous.
@@ -242,22 +254,23 @@ void smp_prepare_boot_cpu(void)
 {
 	set_cpu_possible(0, true);
 	set_cpu_online(0, true);
-	cpumask_set_cpu(0, &cpu_callin_map);
 }
 int __cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
 	mp_ops->boot_secondary(cpu, tidle);
-	/*
-	 * Trust is futile.  We should really have timeouts ...
-	 */
-	while (!cpumask_test_cpu(cpu, &cpu_callin_map)) {
-		udelay(100);
-		schedule();
+	/* Wait for CPU to start and be ready to sync counters */
+	if (!wait_for_completion_timeout(&cpu_starting,
+					 msecs_to_jiffies(1000))) {
+		pr_crit("CPU%u: failed to start\n", cpu);
+		return -EIO;
 	}
 	synchronise_count_master(cpu);
+
+	/* Wait for CPU to finish startup & mark itself online before return */
+	wait_for_completion(&cpu_running);
 	return 0;
 }
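This hunk replaces the `cpu_callin_map` busy-wait with a two-stage completion handshake (`cpu_starting`, then `cpu_running`) between the boot CPU and the secondary, with a timeout on the first stage. The same pattern can be sketched in user space with pthreads; the `struct completion` analog, `secondary()`, and `cpu_up()` below are illustrative names, not the kernel API (and the kernel's timeout on the first wait is omitted for brevity).

```c
#include <pthread.h>
#include <stdbool.h>

/* A tiny user-space analog of the kernel's one-shot completion. */
struct completion {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    bool            done;
};

static void init_completion(struct completion *c)
{
    pthread_mutex_init(&c->lock, NULL);
    pthread_cond_init(&c->cond, NULL);
    c->done = false;
}

static void complete(struct completion *c)
{
    pthread_mutex_lock(&c->lock);
    c->done = true;                      /* latch the event */
    pthread_cond_broadcast(&c->cond);
    pthread_mutex_unlock(&c->lock);
}

static void wait_for_completion(struct completion *c)
{
    pthread_mutex_lock(&c->lock);
    while (!c->done)                     /* no lost wakeups: state is latched */
        pthread_cond_wait(&c->cond, &c->lock);
    pthread_mutex_unlock(&c->lock);
}

static struct completion cpu_starting, cpu_running;
static int state;  /* 1 = started, 2 = online; ordering made observable */

/* Plays the role of start_secondary() on the new CPU. */
static void *secondary(void *arg)
{
    (void)arg;
    state = 1;
    complete(&cpu_starting);   /* "ready to sync counters" */
    state = 2;
    complete(&cpu_running);    /* "fully online" */
    return NULL;
}

/* Plays the role of __cpu_up() on the boot CPU. */
static int cpu_up(void)
{
    pthread_t t;
    init_completion(&cpu_starting);
    init_completion(&cpu_running);
    pthread_create(&t, NULL, secondary, NULL);
    wait_for_completion(&cpu_starting);  /* stage 1: CPU has started */
    wait_for_completion(&cpu_running);   /* stage 2: CPU is online */
    pthread_join(t, NULL);
    return state;
}
```

Because the completion latches its state, the waiter cannot miss a `complete()` that fires before it starts waiting, which is exactly the race the old `udelay()`/`schedule()` polling loop papered over.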


@@ -75,7 +75,7 @@ static struct insn insn_table_MM[] = {
 	{ insn_jr, M(mm_pool32a_op, 0, 0, 0, mm_jalr_op, mm_pool32axf_op), RS },
 	{ insn_lb, M(mm_lb32_op, 0, 0, 0, 0, 0), RT | RS | SIMM },
 	{ insn_ld, 0, 0 },
-	{ insn_lh, M(mm_lh32_op, 0, 0, 0, 0, 0), RS | RS | SIMM },
+	{ insn_lh, M(mm_lh32_op, 0, 0, 0, 0, 0), RT | RS | SIMM },
 	{ insn_ll, M(mm_pool32c_op, 0, 0, (mm_ll_func << 1), 0, 0), RS | RT | SIMM },
 	{ insn_lld, 0, 0 },
 	{ insn_lui, M(mm_pool32i_op, mm_lui_op, 0, 0, 0, 0), RS | SIMM },


@@ -275,7 +275,7 @@ asmlinkage void plat_irq_dispatch(void)
 		do_IRQ(nlm_irq_to_xirq(node, i));
 }
-#ifdef CONFIG_OF
+#ifdef CONFIG_CPU_XLP
 static const struct irq_domain_ops xlp_pic_irq_domain_ops = {
 	.xlate = irq_domain_xlate_onetwocell,
 };
@@ -348,7 +348,7 @@ void __init arch_init_irq(void)
 #if defined(CONFIG_CPU_XLR)
 	nlm_setup_fmn_irq();
 #endif
-#if defined(CONFIG_OF)
+#ifdef CONFIG_CPU_XLP
 	of_irq_init(xlp_pic_irq_ids);
 #endif
 }


@@ -141,8 +141,8 @@ static struct rt2880_pmx_func i2c_grp_mt7628[] = {
 	FUNC("i2c", 0, 4, 2),
 };
-static struct rt2880_pmx_func refclk_grp_mt7628[] = { FUNC("reclk", 0, 36, 1) };
-static struct rt2880_pmx_func perst_grp_mt7628[] = { FUNC("perst", 0, 37, 1) };
+static struct rt2880_pmx_func refclk_grp_mt7628[] = { FUNC("refclk", 0, 37, 1) };
+static struct rt2880_pmx_func perst_grp_mt7628[] = { FUNC("perst", 0, 36, 1) };
 static struct rt2880_pmx_func wdt_grp_mt7628[] = { FUNC("wdt", 0, 38, 1) };
 static struct rt2880_pmx_func spi_grp_mt7628[] = { FUNC("spi", 0, 7, 4) };


@@ -688,15 +688,15 @@ cas_action:
 	/* ELF32 Process entry path */
 lws_compare_and_swap_2:
 #ifdef CONFIG_64BIT
-	/* Clip the input registers */
+	/* Clip the input registers. We don't need to clip %r23 as we
+	   only use it for word operations */
 	depdi	0, 31, 32, %r26
 	depdi	0, 31, 32, %r25
 	depdi	0, 31, 32, %r24
-	depdi	0, 31, 32, %r23
 #endif
 	/* Check the validity of the size pointer */
-	subi,>>= 4, %r23, %r0
+	subi,>>= 3, %r23, %r0
 	b,n	lws_exit_nosys
 	/* Jump to the functions which will load the old and new values into


@@ -1083,11 +1083,6 @@ source "arch/powerpc/Kconfig.debug"
 source "security/Kconfig"
-config KEYS_COMPAT
-	bool
-	depends on COMPAT && KEYS
-	default y
-
 source "crypto/Kconfig"
 config PPC_LIB_RHEAP


@@ -83,6 +83,10 @@
 		};
 	};
+	sdhc@114000 {
+		status = "disabled";
+	};
+
 	i2c@119000 {
 		status = "disabled";
 	};


@@ -102,7 +102,7 @@ static void check_syscall_restart(struct pt_regs *regs, struct k_sigaction *ka,
 static void do_signal(struct pt_regs *regs)
 {
 	sigset_t *oldset = sigmask_to_save();
-	struct ksignal ksig;
+	struct ksignal ksig = { .sig = 0 };
 	int ret;
 	int is32 = is_32bit_task();


@@ -280,6 +280,7 @@ static void icp_rm_deliver_irq(struct kvmppc_xics *xics, struct kvmppc_icp *icp,
 		 */
 		if (reject && reject != XICS_IPI) {
 			arch_spin_unlock(&ics->lock);
+			icp->n_reject++;
 			new_irq = reject;
 			goto again;
 		}
@@ -611,10 +612,8 @@ int kvmppc_rm_h_eoi(struct kvm_vcpu *vcpu, unsigned long xirr)
 	state = &ics->irq_state[src];
 	/* Still asserted, resend it */
-	if (state->asserted) {
-		icp->n_reject++;
+	if (state->asserted)
 		icp_rm_deliver_irq(xics, icp, irq);
-	}
 	if (!hlist_empty(&vcpu->kvm->irq_ack_notifier_list)) {
 		icp->rm_action |= XICS_RM_NOTIFY_EOI;


@@ -347,9 +347,6 @@ config COMPAT
 config SYSVIPC_COMPAT
 	def_bool y if COMPAT && SYSVIPC
-config KEYS_COMPAT
-	def_bool y if COMPAT && KEYS
-
 config SMP
 	def_bool y
 	prompt "Symmetric multi-processing support"


@@ -0,0 +1,8 @@
+#ifndef _ASM_S390_PROTOTYPES_H
+
+#include <linux/kvm_host.h>
+#include <linux/ftrace.h>
+#include <asm/fpu/api.h>
+#include <asm-generic/asm-prototypes.h>
+
+#endif /* _ASM_S390_PROTOTYPES_H */


@@ -34,8 +34,8 @@ static inline void restore_access_regs(unsigned int *acrs)
 		save_access_regs(&prev->thread.acrs[0]);		\
 		save_ri_cb(prev->thread.ri_cb);				\
 	}								\
-	if (next->mm) {							\
 	update_cr_regs(next);						\
+	if (next->mm) {							\
 		set_cpu_flag(CIF_FPU);					\
 		restore_access_regs(&next->thread.acrs[0]);		\
 		restore_ri_cb(next->thread.ri_cb, prev->thread.ri_cb);	\


@@ -1549,6 +1549,7 @@ static struct s390_insn opcode_e7[] = {
 	{ "vfsq", 0xce, INSTR_VRR_VV000MM },
 	{ "vfs", 0xe2, INSTR_VRR_VVV00MM },
 	{ "vftci", 0x4a, INSTR_VRI_VVIMM },
+	{ "", 0, INSTR_INVALID }
 };
 static struct s390_insn opcode_eb[] = {
@@ -1961,7 +1962,7 @@ void show_code(struct pt_regs *regs)
 {
 	char *mode = user_mode(regs) ? "User" : "Krnl";
 	unsigned char code[64];
-	char buffer[64], *ptr;
+	char buffer[128], *ptr;
 	mm_segment_t old_fs;
 	unsigned long addr;
 	int start, end, opsize, hops, i;
@@ -2024,7 +2025,7 @@ void show_code(struct pt_regs *regs)
 		start += opsize;
 		printk(buffer);
 		ptr = buffer;
-		ptr += sprintf(ptr, "\n ");
+		ptr += sprintf(ptr, "\n\t ");
 		hops++;
 	}
 	printk("\n");


@@ -325,8 +325,10 @@ static __init void detect_machine_facilities(void)
 		S390_lowcore.machine_flags |= MACHINE_FLAG_IDTE;
 	if (test_facility(40))
 		S390_lowcore.machine_flags |= MACHINE_FLAG_LPP;
-	if (test_facility(50) && test_facility(73))
+	if (test_facility(50) && test_facility(73)) {
 		S390_lowcore.machine_flags |= MACHINE_FLAG_TE;
+		__ctl_set_bit(0, 55);
+	}
 	if (test_facility(51))
 		S390_lowcore.machine_flags |= MACHINE_FLAG_TLB_LC;
 	if (test_facility(129)) {


@@ -137,6 +137,7 @@ int copy_thread(unsigned long clone_flags, unsigned long new_stackp,
 	memset(&p->thread.per_user, 0, sizeof(p->thread.per_user));
 	memset(&p->thread.per_event, 0, sizeof(p->thread.per_event));
 	clear_tsk_thread_flag(p, TIF_SINGLE_STEP);
+	p->thread.per_flags = 0;
 	/* Initialize per thread user and system timer values */
 	ti = task_thread_info(p);
 	ti->user_timer = 0;


@@ -47,11 +47,13 @@ void exit_thread_runtime_instr(void)
 {
 	struct task_struct *task = current;
+	preempt_disable();
 	if (!task->thread.ri_cb)
 		return;
 	disable_runtime_instr();
 	kfree(task->thread.ri_cb);
 	task->thread.ri_cb = NULL;
+	preempt_enable();
 }
 SYSCALL_DEFINE1(s390_runtime_instr, int, command)
@@ -62,9 +64,7 @@ SYSCALL_DEFINE1(s390_runtime_instr, int, command)
 		return -EOPNOTSUPP;
 	if (command == S390_RUNTIME_INSTR_STOP) {
-		preempt_disable();
 		exit_thread_runtime_instr();
-		preempt_enable();
 		return 0;
 	}


@@ -165,7 +165,6 @@ static struct plat_sci_port scif2_platform_data = {
 	.scscr		= SCSCR_TE | SCSCR_RE,
 	.type		= PORT_IRDA,
 	.ops		= &sh770x_sci_port_ops,
-	.regshift	= 1,
 };
 static struct resource scif2_resources[] = {


@@ -550,9 +550,6 @@ config SYSVIPC_COMPAT
 	depends on COMPAT && SYSVIPC
 	default y
-config KEYS_COMPAT
-	def_bool y if COMPAT && KEYS
-
 endmenu
 source "net/Kconfig"


@@ -2657,10 +2657,6 @@ config COMPAT_FOR_U64_ALIGNMENT
 config SYSVIPC_COMPAT
 	def_bool y
 	depends on SYSVIPC
-
-config KEYS_COMPAT
-	def_bool y
-	depends on KEYS
 endif
 endmenu


@@ -174,8 +174,8 @@ LABEL skip_ %I
 .endr
 	# Find min length
-	vmovdqa _lens+0*16(state), %xmm0
-	vmovdqa _lens+1*16(state), %xmm1
+	vmovdqu _lens+0*16(state), %xmm0
+	vmovdqu _lens+1*16(state), %xmm1
 	vpminud %xmm1, %xmm0, %xmm2	# xmm2 has {D,C,B,A}
 	vpalignr $8, %xmm2, %xmm3, %xmm3   # xmm3 has {x,x,D,C}
@@ -195,8 +195,8 @@ LABEL skip_ %I
 	vpsubd	%xmm2, %xmm0, %xmm0
 	vpsubd	%xmm2, %xmm1, %xmm1
-	vmovdqa %xmm0, _lens+0*16(state)
-	vmovdqa %xmm1, _lens+1*16(state)
+	vmovdqu %xmm0, _lens+0*16(state)
+	vmovdqu %xmm1, _lens+1*16(state)
 	# "state" and "args" are the same address, arg1
 	# len is arg2
@@ -260,8 +260,8 @@ ENTRY(sha1_mb_mgr_get_comp_job_avx2)
 	jc	.return_null
 	# Find min length
-	vmovdqa _lens(state), %xmm0
-	vmovdqa _lens+1*16(state), %xmm1
+	vmovdqu _lens(state), %xmm0
+	vmovdqu _lens+1*16(state), %xmm1
 	vpminud %xmm1, %xmm0, %xmm2	# xmm2 has {D,C,B,A}
 	vpalignr $8, %xmm2, %xmm3, %xmm3   # xmm3 has {x,x,D,C}


@@ -3,6 +3,7 @@
 #include <asm/fpu/api.h>
 #include <asm/pgtable.h>
+#include <asm/tlb.h>
 /*
  * We map the EFI regions needed for runtime services non-contiguously,
@@ -66,6 +67,17 @@ extern u64 asmlinkage efi_call(void *fp, ...);
 #define efi_call_phys(f, args...)		efi_call((f), args)
+/*
+ * Scratch space used for switching the pagetable in the EFI stub
+ */
+struct efi_scratch {
+	u64	r15;
+	u64	prev_cr3;
+	pgd_t	*efi_pgt;
+	bool	use_pgd;
+	u64	phys_stack;
+} __packed;
+
 #define efi_call_virt(f, ...)						\
 ({									\
 	efi_status_t __s;						\
@@ -73,7 +85,20 @@ extern u64 asmlinkage efi_call(void *fp, ...);
 	efi_sync_low_kernel_mappings();					\
 	preempt_disable();						\
 	__kernel_fpu_begin();						\
+									\
+	if (efi_scratch.use_pgd) {					\
+		efi_scratch.prev_cr3 = read_cr3();			\
+		write_cr3((unsigned long)efi_scratch.efi_pgt);		\
+		__flush_tlb_all();					\
+	}								\
+									\
 	__s = efi_call((void *)efi.systab->runtime->f, __VA_ARGS__);	\
+									\
+	if (efi_scratch.use_pgd) {					\
+		write_cr3(efi_scratch.prev_cr3);			\
+		__flush_tlb_all();					\
+	}								\
+									\
 	__kernel_fpu_end();						\
 	preempt_enable();						\
 	__s;								\
@@ -113,6 +138,7 @@ extern void __init efi_memory_uc(u64 addr, unsigned long size);
 extern void __init efi_map_region(efi_memory_desc_t *md);
 extern void __init efi_map_region_fixed(efi_memory_desc_t *md);
 extern void efi_sync_low_kernel_mappings(void);
+extern int __init efi_alloc_page_tables(void);
 extern int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages);
 extern void __init efi_cleanup_page_tables(unsigned long pa_memmap, unsigned num_pages);
 extern void __init old_map_region(efi_memory_desc_t *md);
View file
@@ -296,6 +296,7 @@ struct x86_emulate_ctxt {
 
 	bool perm_ok; /* do not check permissions if true */
 	bool ud;	/* inject an #UD if host doesn't support insn */
+	bool tf;	/* TF value before instruction (after for syscall/sysret) */
 
 	bool have_exception;
 	struct x86_exception exception;
View file
@@ -7,6 +7,7 @@
 #include <linux/compiler.h>
 #include <linux/thread_info.h>
 #include <linux/string.h>
+#include <linux/preempt.h>
 #include <asm/asm.h>
 #include <asm/page.h>
 #include <asm/smap.h>
@@ -66,6 +67,12 @@ static inline bool __chk_range_not_ok(unsigned long addr, unsigned long size, un
 	__chk_range_not_ok((unsigned long __force)(addr), size, limit); \
 })
 
+#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
+# define WARN_ON_IN_IRQ()	WARN_ON_ONCE(!in_task())
+#else
+# define WARN_ON_IN_IRQ()
+#endif
+
 /**
  * access_ok: - Checks if a user space pointer is valid
  * @type: Type of access: %VERIFY_READ or %VERIFY_WRITE.  Note that
@@ -87,7 +94,10 @@ static inline bool __chk_range_not_ok(unsigned long addr, unsigned long size, un
  * this function, memory access functions may still return -EFAULT.
  */
 #define access_ok(type, addr, size) \
-	likely(!__range_not_ok(addr, size, user_addr_max()))
+({									\
+	WARN_ON_IN_IRQ();						\
+	likely(!__range_not_ok(addr, size, user_addr_max()));		\
+})
 
 /*
  * The exception table consists of pairs of addresses relative to the
View file
@@ -2726,6 +2726,7 @@ static int em_syscall(struct x86_emulate_ctxt *ctxt)
 		ctxt->eflags &= ~(X86_EFLAGS_VM | X86_EFLAGS_IF);
 	}
 
+	ctxt->tf = (ctxt->eflags & X86_EFLAGS_TF) != 0;
 	return X86EMUL_CONTINUE;
 }
View file
@@ -1696,6 +1696,8 @@ static int ud_interception(struct vcpu_svm *svm)
 	int er;
 
 	er = emulate_instruction(&svm->vcpu, EMULTYPE_TRAP_UD);
+	if (er == EMULATE_USER_EXIT)
+		return 0;
 	if (er != EMULATE_DONE)
 		kvm_queue_exception(&svm->vcpu, UD_VECTOR);
 	return 1;
@@ -3114,6 +3116,13 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	u32 ecx = msr->index;
 	u64 data = msr->data;
 
 	switch (ecx) {
+	case MSR_IA32_CR_PAT:
+		if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
+			return 1;
+		vcpu->arch.pat = data;
+		svm->vmcb->save.g_pat = data;
+		mark_dirty(svm->vmcb, VMCB_NPT);
+		break;
 	case MSR_IA32_TSC:
 		kvm_write_tsc(vcpu, msr);
 		break;
View file
@@ -5267,6 +5267,8 @@ static int handle_exception(struct kvm_vcpu *vcpu)
 			return 1;
 		}
 		er = emulate_instruction(vcpu, EMULTYPE_TRAP_UD);
+		if (er == EMULATE_USER_EXIT)
+			return 0;
 		if (er != EMULATE_DONE)
 			kvm_queue_exception(vcpu, UD_VECTOR);
 		return 1;
@@ -10394,6 +10396,8 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 	vmcs_writel(GUEST_SYSENTER_EIP, vmcs12->host_ia32_sysenter_eip);
 	vmcs_writel(GUEST_IDTR_BASE, vmcs12->host_idtr_base);
 	vmcs_writel(GUEST_GDTR_BASE, vmcs12->host_gdtr_base);
+	vmcs_write32(GUEST_IDTR_LIMIT, 0xFFFF);
+	vmcs_write32(GUEST_GDTR_LIMIT, 0xFFFF);
 
 	/* If not VM_EXIT_CLEAR_BNDCFGS, the L2 value propagates to L1. */
 	if (vmcs12->vm_exit_controls & VM_EXIT_CLEAR_BNDCFGS)
View file
@@ -1812,6 +1812,9 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
 	 */
 	BUILD_BUG_ON(offsetof(struct pvclock_vcpu_time_info, version) != 0);
 
+	if (guest_hv_clock.version & 1)
+		++guest_hv_clock.version;  /* first time write, random junk */
+
 	vcpu->hv_clock.version = guest_hv_clock.version + 1;
 	kvm_write_guest_cached(v->kvm, &vcpu->pv_time,
 				&vcpu->hv_clock,
@@ -5095,6 +5098,8 @@ static void init_emulate_ctxt(struct kvm_vcpu *vcpu)
 	kvm_x86_ops->get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
 
 	ctxt->eflags = kvm_get_rflags(vcpu);
+	ctxt->tf = (ctxt->eflags & X86_EFLAGS_TF) != 0;
+
 	ctxt->eip = kvm_rip_read(vcpu);
 	ctxt->mode = (!is_protmode(vcpu))		? X86EMUL_MODE_REAL :
 		     (ctxt->eflags & X86_EFLAGS_VM)	? X86EMUL_MODE_VM86 :
@@ -5315,22 +5320,12 @@ static int kvm_vcpu_check_hw_bp(unsigned long addr, u32 type, u32 dr7,
 	return dr6;
 }
 
-static void kvm_vcpu_check_singlestep(struct kvm_vcpu *vcpu, unsigned long rflags, int *r)
+static void kvm_vcpu_do_singlestep(struct kvm_vcpu *vcpu, int *r)
 {
 	struct kvm_run *kvm_run = vcpu->run;
 
-	/*
-	 * rflags is the old, "raw" value of the flags.  The new value has
-	 * not been saved yet.
-	 *
-	 * This is correct even for TF set by the guest, because "the
-	 * processor will not generate this exception after the instruction
-	 * that sets the TF flag".
-	 */
-	if (unlikely(rflags & X86_EFLAGS_TF)) {
 		if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {
-			kvm_run->debug.arch.dr6 = DR6_BS | DR6_FIXED_1 |
-						  DR6_RTM;
+			kvm_run->debug.arch.dr6 = DR6_BS | DR6_FIXED_1 | DR6_RTM;
 			kvm_run->debug.arch.pc = vcpu->arch.singlestep_rip;
 			kvm_run->debug.arch.exception = DB_VECTOR;
 			kvm_run->exit_reason = KVM_EXIT_DEBUG;
@@ -5346,7 +5341,6 @@ static void kvm_vcpu_check_singlestep(struct kvm_vcpu *vcpu, unsigned long rflag
 			vcpu->arch.dr6 |= DR6_BS | DR6_RTM;
 			kvm_queue_exception(vcpu, DB_VECTOR);
 		}
-	}
 }
 
 static bool kvm_vcpu_check_breakpoint(struct kvm_vcpu *vcpu, int *r)
@@ -5435,6 +5429,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
 		if (reexecute_instruction(vcpu, cr2, write_fault_to_spt,
 					  emulation_type))
 			return EMULATE_DONE;
+		if (ctxt->have_exception && inject_emulated_exception(vcpu))
+			return EMULATE_DONE;
 		if (emulation_type & EMULTYPE_SKIP)
 			return EMULATE_FAIL;
 		return handle_emulation_failure(vcpu);
@@ -5500,8 +5496,9 @@ restart:
 		toggle_interruptibility(vcpu, ctxt->interruptibility);
 		vcpu->arch.emulate_regs_need_sync_to_vcpu = false;
 		kvm_rip_write(vcpu, ctxt->eip);
-		if (r == EMULATE_DONE)
-			kvm_vcpu_check_singlestep(vcpu, rflags, &r);
+		if (r == EMULATE_DONE &&
+		    (ctxt->tf || (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)))
+			kvm_vcpu_do_singlestep(vcpu, &r);
 		if (!ctxt->have_exception ||
 		    exception_type(ctxt->exception.vector) == EXCPT_TRAP)
 			__kvm_set_rflags(vcpu, ctxt->eflags);
View file
@@ -833,7 +833,7 @@ EndTable
 
 GrpTable: Grp3_1
 0: TEST Eb,Ib
-1:
+1: TEST Eb,Ib
 2: NOT Eb
 3: NEG Eb
 4: MUL AL,Eb
View file
@@ -911,15 +911,10 @@ static void populate_pte(struct cpa_data *cpa,
 	pte = pte_offset_kernel(pmd, start);
 
 	while (num_pages-- && start < end) {
-
-		/* deal with the NX bit */
-		if (!(pgprot_val(pgprot) & _PAGE_NX))
-			cpa->pfn &= ~_PAGE_NX;
-
-		set_pte(pte, pfn_pte(cpa->pfn >> PAGE_SHIFT, pgprot));
+		set_pte(pte, pfn_pte(cpa->pfn, pgprot));
 
 		start	 += PAGE_SIZE;
-		cpa->pfn += PAGE_SIZE;
+		cpa->pfn++;
 		pte++;
 	}
 }
@@ -975,11 +970,11 @@ static int populate_pmd(struct cpa_data *cpa,
 
 		pmd = pmd_offset(pud, start);
 
-		set_pmd(pmd, __pmd(cpa->pfn | _PAGE_PSE |
+		set_pmd(pmd, __pmd(cpa->pfn << PAGE_SHIFT | _PAGE_PSE |
 				   massage_pgprot(pmd_pgprot)));
 
 		start	  += PMD_SIZE;
-		cpa->pfn  += PMD_SIZE;
+		cpa->pfn  += PMD_SIZE >> PAGE_SHIFT;
 		cur_pages += PMD_SIZE >> PAGE_SHIFT;
 	}
@@ -1048,11 +1043,11 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, pgd_t *pgd,
 	 * Map everything starting from the Gb boundary, possibly with 1G pages
 	 */
 	while (end - start >= PUD_SIZE) {
-		set_pud(pud, __pud(cpa->pfn | _PAGE_PSE |
+		set_pud(pud, __pud(cpa->pfn << PAGE_SHIFT | _PAGE_PSE |
 				   massage_pgprot(pud_pgprot)));
 
 		start	  += PUD_SIZE;
-		cpa->pfn  += PUD_SIZE;
+		cpa->pfn  += PUD_SIZE >> PAGE_SHIFT;
 		cur_pages += PUD_SIZE >> PAGE_SHIFT;
 		pud++;
 	}
View file
@@ -212,8 +212,8 @@ static void arch_perfmon_setup_counters(void)
 	eax.full = cpuid_eax(0xa);
 
 	/* Workaround for BIOS bugs in 6/15. Taken from perfmon2 */
-	if (eax.split.version_id == 0 && __this_cpu_read(cpu_info.x86) == 6 &&
-		__this_cpu_read(cpu_info.x86_model) == 15) {
+	if (eax.split.version_id == 0 && boot_cpu_data.x86 == 6 &&
+		boot_cpu_data.x86_model == 15) {
 		eax.split.version_id = 2;
 		eax.split.num_counters = 2;
 		eax.split.bit_width = 40;
View file
@@ -28,8 +28,7 @@ struct bmp_header {
 void __init efi_bgrt_init(void)
 {
 	acpi_status status;
-	void __iomem *image;
-	bool ioremapped = false;
+	void *image;
 	struct bmp_header bmp_header;
 
 	if (acpi_disabled)
@@ -70,20 +69,14 @@ void __init efi_bgrt_init(void)
 		return;
 	}
 
-	image = efi_lookup_mapped_addr(bgrt_tab->image_address);
-	if (!image) {
-		image = early_ioremap(bgrt_tab->image_address,
-				       sizeof(bmp_header));
-		ioremapped = true;
-		if (!image) {
-			pr_err("Ignoring BGRT: failed to map image header memory\n");
-			return;
-		}
+	image = memremap(bgrt_tab->image_address, sizeof(bmp_header), MEMREMAP_WB);
+	if (!image) {
+		pr_err("Ignoring BGRT: failed to map image header memory\n");
+		return;
 	}
 
-	memcpy_fromio(&bmp_header, image, sizeof(bmp_header));
-	if (ioremapped)
-		early_iounmap(image, sizeof(bmp_header));
+	memcpy(&bmp_header, image, sizeof(bmp_header));
+	memunmap(image);
 	bgrt_image_size = bmp_header.size;
 
 	bgrt_image = kmalloc(bgrt_image_size, GFP_KERNEL | __GFP_NOWARN);
@@ -93,18 +86,14 @@ void __init efi_bgrt_init(void)
 		return;
 	}
 
-	if (ioremapped) {
-		image = early_ioremap(bgrt_tab->image_address,
-				       bmp_header.size);
-		if (!image) {
-			pr_err("Ignoring BGRT: failed to map image memory\n");
-			kfree(bgrt_image);
-			bgrt_image = NULL;
-			return;
-		}
+	image = memremap(bgrt_tab->image_address, bmp_header.size, MEMREMAP_WB);
+	if (!image) {
+		pr_err("Ignoring BGRT: failed to map image memory\n");
+		kfree(bgrt_image);
+		bgrt_image = NULL;
+		return;
 	}
 
-	memcpy_fromio(bgrt_image, image, bgrt_image_size);
-	if (ioremapped)
-		early_iounmap(image, bmp_header.size);
+	memcpy(bgrt_image, image, bgrt_image_size);
+	memunmap(image);
 }
View file
@@ -869,7 +869,7 @@ static void __init kexec_enter_virtual_mode(void)
  * This function will switch the EFI runtime services to virtual mode.
  * Essentially, we look through the EFI memmap and map every region that
  * has the runtime attribute bit set in its memory descriptor into the
- * ->trampoline_pgd page table using a top-down VA allocation scheme.
+ * efi_pgd page table.
  *
  * The old method which used to update that memory descriptor with the
  * virtual address obtained from ioremap() is still supported when the
@@ -879,8 +879,8 @@ static void __init kexec_enter_virtual_mode(void)
  *
  * The new method does a pagetable switch in a preemption-safe manner
  * so that we're in a different address space when calling a runtime
- * function. For function arguments passing we do copy the PGDs of the
- * kernel page table into ->trampoline_pgd prior to each call.
+ * function. For function arguments passing we do copy the PUDs of the
+ * kernel page table into efi_pgd prior to each call.
  *
  * Specially for kexec boot, efi runtime maps in previous kernel should
  * be passed in via setup_data. In that case runtime ranges will be mapped
@@ -895,6 +895,12 @@ static void __init __efi_enter_virtual_mode(void)
 
 	efi.systab = NULL;
 
+	if (efi_alloc_page_tables()) {
+		pr_err("Failed to allocate EFI page tables\n");
+		clear_bit(EFI_RUNTIME_SERVICES, &efi.flags);
+		return;
+	}
+
 	efi_merge_regions();
 	new_memmap = efi_map_regions(&count, &pg_shift);
 	if (!new_memmap) {
@@ -954,28 +960,11 @@ static void __init __efi_enter_virtual_mode(void)
 	efi_runtime_mkexec();
 
 	/*
-	 * We mapped the descriptor array into the EFI pagetable above but we're
-	 * not unmapping it here. Here's why:
-	 *
-	 * We're copying select PGDs from the kernel page table to the EFI page
-	 * table and when we do so and make changes to those PGDs like unmapping
-	 * stuff from them, those changes appear in the kernel page table and we
-	 * go boom.
-	 *
-	 * From setup_real_mode():
-	 *
-	 * ...
-	 * trampoline_pgd[0] = init_level4_pgt[pgd_index(__PAGE_OFFSET)].pgd;
-	 *
-	 * In this particular case, our allocation is in PGD 0 of the EFI page
-	 * table but we've copied that PGD from PGD[272] of the EFI page table:
-	 *
-	 *	pgd_index(__PAGE_OFFSET = 0xffff880000000000) = 272
-	 *
-	 * where the direct memory mapping in kernel space is.
-	 *
-	 * new_memmap's VA comes from that direct mapping and thus clearing it,
-	 * it would get cleared in the kernel page table too.
+	 * We mapped the descriptor array into the EFI pagetable above
+	 * but we're not unmapping it here because if we're running in
+	 * EFI mixed mode we need all of memory to be accessible when
+	 * we pass parameters to the EFI runtime services in the
+	 * thunking code.
 	 *
 	 * efi_cleanup_page_tables(__pa(new_memmap), 1 << pg_shift);
 	 */
View file
@@ -38,6 +38,11 @@
  * say 0 - 3G.
  */
 
+int __init efi_alloc_page_tables(void)
+{
+	return 0;
+}
+
 void efi_sync_low_kernel_mappings(void) {}
 void __init efi_dump_pagetable(void) {}
 int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
View file
@@ -40,6 +40,7 @@
 #include <asm/fixmap.h>
 #include <asm/realmode.h>
 #include <asm/time.h>
+#include <asm/pgalloc.h>
 
 /*
  * We allocate runtime services regions bottom-up, starting from -4G, i.e.
@@ -47,16 +48,7 @@
  */
 static u64 efi_va = EFI_VA_START;
 
-/*
- * Scratch space used for switching the pagetable in the EFI stub
- */
-struct efi_scratch {
-	u64 r15;
-	u64 prev_cr3;
-	pgd_t *efi_pgt;
-	bool use_pgd;
-	u64 phys_stack;
-} __packed;
+struct efi_scratch efi_scratch;
 
 static void __init early_code_mapping_set_exec(int executable)
 {
@@ -83,8 +75,11 @@ pgd_t * __init efi_call_phys_prolog(void)
 	int pgd;
 	int n_pgds;
 
-	if (!efi_enabled(EFI_OLD_MEMMAP))
-		return NULL;
+	if (!efi_enabled(EFI_OLD_MEMMAP)) {
+		save_pgd = (pgd_t *)read_cr3();
+		write_cr3((unsigned long)efi_scratch.efi_pgt);
+		goto out;
+	}
 
 	early_code_mapping_set_exec(1);
 
@@ -96,6 +91,7 @@ pgd_t * __init efi_call_phys_prolog(void)
 		vaddress = (unsigned long)__va(pgd * PGDIR_SIZE);
 		set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), *pgd_offset_k(vaddress));
 	}
+out:
 	__flush_tlb_all();
 
 	return save_pgd;
@@ -109,8 +105,11 @@ void __init efi_call_phys_epilog(pgd_t *save_pgd)
 	int pgd_idx;
 	int nr_pgds;
 
-	if (!save_pgd)
+	if (!efi_enabled(EFI_OLD_MEMMAP)) {
+		write_cr3((unsigned long)save_pgd);
+		__flush_tlb_all();
 		return;
+	}
 
 	nr_pgds = DIV_ROUND_UP((max_pfn << PAGE_SHIFT) , PGDIR_SIZE);
 
@@ -123,27 +122,97 @@ void __init efi_call_phys_epilog(pgd_t *save_pgd)
 	early_code_mapping_set_exec(0);
 }
 
+static pgd_t *efi_pgd;
+
+/*
+ * We need our own copy of the higher levels of the page tables
+ * because we want to avoid inserting EFI region mappings (EFI_VA_END
+ * to EFI_VA_START) into the standard kernel page tables. Everything
+ * else can be shared, see efi_sync_low_kernel_mappings().
+ */
+int __init efi_alloc_page_tables(void)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	gfp_t gfp_mask;
+
+	if (efi_enabled(EFI_OLD_MEMMAP))
+		return 0;
+
+	gfp_mask = GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO;
+	efi_pgd = (pgd_t *)__get_free_page(gfp_mask);
+	if (!efi_pgd)
+		return -ENOMEM;
+
+	pgd = efi_pgd + pgd_index(EFI_VA_END);
+
+	pud = pud_alloc_one(NULL, 0);
+	if (!pud) {
+		free_page((unsigned long)efi_pgd);
+		return -ENOMEM;
+	}
+
+	pgd_populate(NULL, pgd, pud);
+
+	return 0;
+}
+
 /*
  * Add low kernel mappings for passing arguments to EFI functions.
  */
 void efi_sync_low_kernel_mappings(void)
 {
-	unsigned num_pgds;
-	pgd_t *pgd = (pgd_t *)__va(real_mode_header->trampoline_pgd);
+	unsigned num_entries;
+	pgd_t *pgd_k, *pgd_efi;
+	pud_t *pud_k, *pud_efi;
 
 	if (efi_enabled(EFI_OLD_MEMMAP))
 		return;
 
-	num_pgds = pgd_index(MODULES_END - 1) - pgd_index(PAGE_OFFSET);
+	/*
+	 * We can share all PGD entries apart from the one entry that
+	 * covers the EFI runtime mapping space.
+	 *
+	 * Make sure the EFI runtime region mappings are guaranteed to
+	 * only span a single PGD entry and that the entry also maps
+	 * other important kernel regions.
+	 */
+	BUILD_BUG_ON(pgd_index(EFI_VA_END) != pgd_index(MODULES_END));
+	BUILD_BUG_ON((EFI_VA_START & PGDIR_MASK) !=
+			(EFI_VA_END & PGDIR_MASK));
+
+	pgd_efi = efi_pgd + pgd_index(PAGE_OFFSET);
+	pgd_k = pgd_offset_k(PAGE_OFFSET);
 
-	memcpy(pgd + pgd_index(PAGE_OFFSET),
-		init_mm.pgd + pgd_index(PAGE_OFFSET),
-		sizeof(pgd_t) * num_pgds);
+	num_entries = pgd_index(EFI_VA_END) - pgd_index(PAGE_OFFSET);
+	memcpy(pgd_efi, pgd_k, sizeof(pgd_t) * num_entries);
+
+	/*
+	 * We share all the PUD entries apart from those that map the
+	 * EFI regions. Copy around them.
+	 */
+	BUILD_BUG_ON((EFI_VA_START & ~PUD_MASK) != 0);
+	BUILD_BUG_ON((EFI_VA_END & ~PUD_MASK) != 0);
+
+	pgd_efi = efi_pgd + pgd_index(EFI_VA_END);
+	pud_efi = pud_offset(pgd_efi, 0);
+
+	pgd_k = pgd_offset_k(EFI_VA_END);
+	pud_k = pud_offset(pgd_k, 0);
+
+	num_entries = pud_index(EFI_VA_END);
+	memcpy(pud_efi, pud_k, sizeof(pud_t) * num_entries);
+
+	pud_efi = pud_offset(pgd_efi, EFI_VA_START);
+	pud_k = pud_offset(pgd_k, EFI_VA_START);
+
+	num_entries = PTRS_PER_PUD - pud_index(EFI_VA_START);
+	memcpy(pud_efi, pud_k, sizeof(pud_t) * num_entries);
 }
 
 int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
 {
-	unsigned long text;
+	unsigned long pfn, text;
 	struct page *page;
 	unsigned npages;
 	pgd_t *pgd;
@@ -151,8 +220,8 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
 	if (efi_enabled(EFI_OLD_MEMMAP))
 		return 0;
 
-	efi_scratch.efi_pgt = (pgd_t *)(unsigned long)real_mode_header->trampoline_pgd;
-	pgd = __va(efi_scratch.efi_pgt);
+	efi_scratch.efi_pgt = (pgd_t *)__pa(efi_pgd);
+	pgd = efi_pgd;
 
 	/*
 	 * It can happen that the physical address of new_memmap lands in memory
@@ -160,7 +229,8 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
 	 * and ident-map those pages containing the map before calling
 	 * phys_efi_set_virtual_address_map().
 	 */
-	if (kernel_map_pages_in_pgd(pgd, pa_memmap, pa_memmap, num_pages, _PAGE_NX)) {
+	pfn = pa_memmap >> PAGE_SHIFT;
+	if (kernel_map_pages_in_pgd(pgd, pfn, pa_memmap, num_pages, _PAGE_NX)) {
 		pr_err("Error ident-mapping new memmap (0x%lx)!\n", pa_memmap);
 		return 1;
 	}
@@ -185,8 +255,9 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
 
 	npages = (_end - _text) >> PAGE_SHIFT;
 	text = __pa(_text);
+	pfn = text >> PAGE_SHIFT;
 
-	if (kernel_map_pages_in_pgd(pgd, text >> PAGE_SHIFT, text, npages, 0)) {
+	if (kernel_map_pages_in_pgd(pgd, pfn, text, npages, 0)) {
 		pr_err("Failed to map kernel text 1:1\n");
 		return 1;
 	}
@@ -196,20 +267,20 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
 
 void __init efi_cleanup_page_tables(unsigned long pa_memmap, unsigned num_pages)
 {
-	pgd_t *pgd = (pgd_t *)__va(real_mode_header->trampoline_pgd);
-
-	kernel_unmap_pages_in_pgd(pgd, pa_memmap, num_pages);
+	kernel_unmap_pages_in_pgd(efi_pgd, pa_memmap, num_pages);
 }
 
 static void __init __map_region(efi_memory_desc_t *md, u64 va)
 {
-	pgd_t *pgd = (pgd_t *)__va(real_mode_header->trampoline_pgd);
-	unsigned long pf = 0;
+	unsigned long flags = 0;
+	unsigned long pfn;
+	pgd_t *pgd = efi_pgd;
 
 	if (!(md->attribute & EFI_MEMORY_WB))
-		pf |= _PAGE_PCD;
+		flags |= _PAGE_PCD;
 
-	if (kernel_map_pages_in_pgd(pgd, md->phys_addr, va, md->num_pages, pf))
+	pfn = md->phys_addr >> PAGE_SHIFT;
+	if (kernel_map_pages_in_pgd(pgd, pfn, va, md->num_pages, flags))
 		pr_warn("Error mapping PA 0x%llx -> VA 0x%llx!\n",
 			md->phys_addr, va);
 }
@@ -312,9 +383,7 @@ void __init efi_runtime_mkexec(void)
 void __init efi_dump_pagetable(void)
 {
 #ifdef CONFIG_EFI_PGT_DUMP
-	pgd_t *pgd = (pgd_t *)__va(real_mode_header->trampoline_pgd);
-
-	ptdump_walk_pgd_level(NULL, pgd);
+	ptdump_walk_pgd_level(NULL, efi_pgd);
 #endif
 }
View file
@@ -38,41 +38,6 @@
 	mov %rsi, %cr0;			\
 	mov (%rsp), %rsp
 
-/* stolen from gcc */
-	.macro FLUSH_TLB_ALL
-	movq %r15, efi_scratch(%rip)
-	movq %r14, efi_scratch+8(%rip)
-	movq %cr4, %r15
-	movq %r15, %r14
-	andb $0x7f, %r14b
-	movq %r14, %cr4
-	movq %r15, %cr4
-	movq efi_scratch+8(%rip), %r14
-	movq efi_scratch(%rip), %r15
-	.endm
-
-	.macro SWITCH_PGT
-	cmpb $0, efi_scratch+24(%rip)
-	je 1f
-	movq %r15, efi_scratch(%rip)		# r15
-	# save previous CR3
-	movq %cr3, %r15
-	movq %r15, efi_scratch+8(%rip)		# prev_cr3
-	movq efi_scratch+16(%rip), %r15		# EFI pgt
-	movq %r15, %cr3
-	1:
-	.endm
-
-	.macro RESTORE_PGT
-	cmpb $0, efi_scratch+24(%rip)
-	je 2f
-	movq efi_scratch+8(%rip), %r15
-	movq %r15, %cr3
-	movq efi_scratch(%rip), %r15
-	FLUSH_TLB_ALL
-	2:
-	.endm
-
 ENTRY(efi_call)
 	SAVE_XMM
 	mov (%rsp), %rax
@@ -83,16 +48,8 @@ ENTRY(efi_call)
 	mov %r8, %r9
 	mov %rcx, %r8
 	mov %rsi, %rcx
-	SWITCH_PGT
 	call *%rdi
-	RESTORE_PGT
 	addq $48, %rsp
 	RESTORE_XMM
 	ret
 ENDPROC(efi_call)
-
-	.data
-ENTRY(efi_scratch)
-	.fill 3,8,0
-	.byte 0
-	.quad 0
View file
@@ -361,7 +361,6 @@ config CRYPTO_XTS
 	select CRYPTO_BLKCIPHER
 	select CRYPTO_MANAGER
 	select CRYPTO_GF128MUL
-	select CRYPTO_ECB
 	help
 	  XTS: IEEE1619/D16 narrow block cipher use with aes-xts-plain,
 	  key size 256, 384 or 512 bits. This implementation currently
View file
@@ -87,7 +87,7 @@ EXPORT_SYMBOL_GPL(pkcs7_free_message);
 static int pkcs7_check_authattrs(struct pkcs7_message *msg)
 {
 	struct pkcs7_signed_info *sinfo;
-	bool want;
+	bool want = false;
 
 	sinfo = msg->signed_infos;
 	if (!sinfo)
View file
@@ -212,4 +212,6 @@ source "drivers/bif/Kconfig"
 
 source "drivers/sensors/Kconfig"
 
+source "drivers/tee/Kconfig"
+
 endmenu
View file
@@ -182,3 +182,4 @@ obj-$(CONFIG_BIF)		+= bif/
 obj-$(CONFIG_SENSORS_SSC)	+= sensors/
 obj-$(CONFIG_ESOC)		+= esoc/
+obj-$(CONFIG_TEE)		+= tee/
View file
@@ -833,7 +833,7 @@ binder_enqueue_work_ilocked(struct binder_work *work,
 }
 
 /**
- * binder_enqueue_thread_work_ilocked_nowake() - Add thread work
+ * binder_enqueue_deferred_thread_work_ilocked() - Add deferred thread work
  * @thread:       thread to queue work to
  * @work:         struct binder_work to add to list
  *
@@ -844,7 +844,7 @@ binder_enqueue_work_ilocked(struct binder_work *work,
  * Requires the proc->inner_lock to be held.
  */
 static void
-binder_enqueue_thread_work_ilocked_nowake(struct binder_thread *thread,
+binder_enqueue_deferred_thread_work_ilocked(struct binder_thread *thread,
 					  struct binder_work *work)
 {
 	binder_enqueue_work_ilocked(work, &thread->todo);
@@ -2468,7 +2468,7 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
 					     debug_id, (u64)fda->num_fds);
 				continue;
 			}
-			fd_array = (u32 *)(parent_buffer + fda->parent_offset);
+			fd_array = (u32 *)(parent_buffer + (uintptr_t)fda->parent_offset);
 			for (fd_index = 0; fd_index < fda->num_fds; fd_index++)
 				task_close_fd(proc, fd_array[fd_index]);
 		} break;
@@ -2692,7 +2692,7 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda,
 	 */
 	parent_buffer = parent->buffer -
 		binder_alloc_get_user_buffer_offset(&target_proc->alloc);
-	fd_array = (u32 *)(parent_buffer + fda->parent_offset);
+	fd_array = (u32 *)(parent_buffer + (uintptr_t)fda->parent_offset);
 	if (!IS_ALIGNED((unsigned long)fd_array, sizeof(u32))) {
 		binder_user_error("%d:%d parent offset not aligned correctly.\n",
 				  proc->pid, thread->pid);
@@ -2758,7 +2758,7 @@ static int binder_fixup_parent(struct binder_transaction *t,
 				  proc->pid, thread->pid);
 		return -EINVAL;
 	}
-	parent_buffer = (u8 *)(parent->buffer -
+	parent_buffer = (u8 *)((uintptr_t)parent->buffer -
 			binder_alloc_get_user_buffer_offset(
 				&target_proc->alloc));
 	*(binder_uintptr_t *)(parent_buffer + bp->parent_offset) = bp->buffer;
@@ -3348,7 +3348,14 @@ static void binder_transaction(struct binder_proc *proc,
 	} else if (!(t->flags & TF_ONE_WAY)) {
 		BUG_ON(t->buffer->async_transaction != 0);
 		binder_inner_proc_lock(proc);
-		binder_enqueue_thread_work_ilocked_nowake(thread, tcomplete);
+		/*
+		 * Defer the TRANSACTION_COMPLETE, so we don't return to
+		 * userspace immediately; this allows the target process to
+		 * immediately start processing this transaction, reducing
+		 * latency. We will then return the TRANSACTION_COMPLETE when
+		 * the target replies (or there is an error).
+		 */
+		binder_enqueue_deferred_thread_work_ilocked(thread, tcomplete);
 		t->need_reply = 1;
 		t->from_parent = thread->transaction_stack;
 		thread->transaction_stack = t;
View file
@@ -272,6 +272,7 @@ config SATA_SX4
 
 config ATA_BMDMA
 	bool "ATA BMDMA support"
+	depends on HAS_DMA
 	default y
 	help
 	  This option adds support for SFF ATA controllers with BMDMA
@@ -318,6 +319,7 @@ config SATA_DWC_VDEBUG
 
 config SATA_HIGHBANK
 	tristate "Calxeda Highbank SATA support"
+	depends on HAS_DMA
 	depends on ARCH_HIGHBANK || COMPILE_TEST
 	help
 	  This option enables support for the Calxeda Highbank SoC's
@@ -327,6 +329,7 @@ config SATA_HIGHBANK
 
 config SATA_MV
 	tristate "Marvell SATA support"
+	depends on HAS_DMA
 	depends on PCI || ARCH_DOVE || ARCH_MV78XX0 || \
 		   ARCH_MVEBU || ARCH_ORION5X || COMPILE_TEST
 	select GENERIC_PHY


@@ -2245,8 +2245,8 @@ static void ata_eh_link_autopsy(struct ata_link *link)
 		if (dev->flags & ATA_DFLAG_DUBIOUS_XFER)
 			eflags |= ATA_EFLAG_DUBIOUS_XFER;
 		ehc->i.action |= ata_eh_speed_down(dev, eflags, all_err_mask);
-	}
 		trace_ata_eh_link_autopsy(dev, ehc->i.action, all_err_mask);
+	}
 	DPRINTK("EXIT\n");
 }


@@ -1936,6 +1936,7 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
 		if (ret) {
 			dev_err(dev, "%s: Failed to add OPP, %d\n", __func__,
 				ret);
+			of_node_put(np);
 			goto free_table;
 		}
 	}


@@ -2736,7 +2736,7 @@ static int rbd_img_obj_parent_read_full(struct rbd_obj_request *obj_request)
 	 * from the parent.
 	 */
 	page_count = (u32)calc_pages_for(0, length);
-	pages = ceph_alloc_page_vector(page_count, GFP_KERNEL);
+	pages = ceph_alloc_page_vector(page_count, GFP_NOIO);
 	if (IS_ERR(pages)) {
 		result = PTR_ERR(pages);
 		pages = NULL;
@@ -2863,7 +2863,7 @@ static int rbd_img_obj_exists_submit(struct rbd_obj_request *obj_request)
 	 */
 	size = sizeof (__le64) + sizeof (__le32) + sizeof (__le32);
 	page_count = (u32)calc_pages_for(0, size);
-	pages = ceph_alloc_page_vector(page_count, GFP_KERNEL);
+	pages = ceph_alloc_page_vector(page_count, GFP_NOIO);
 	if (IS_ERR(pages))
 		return PTR_ERR(pages);


@@ -1407,33 +1407,34 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
 static void make_response(struct xen_blkif *blkif, u64 id,
 			  unsigned short op, int st)
 {
-	struct blkif_response resp;
+	struct blkif_response *resp;
 	unsigned long flags;
 	union blkif_back_rings *blk_rings = &blkif->blk_rings;
 	int notify;

-	resp.id = id;
-	resp.operation = op;
-	resp.status = st;
-
 	spin_lock_irqsave(&blkif->blk_ring_lock, flags);
 	/* Place on the response ring for the relevant domain. */
 	switch (blkif->blk_protocol) {
 	case BLKIF_PROTOCOL_NATIVE:
-		memcpy(RING_GET_RESPONSE(&blk_rings->native, blk_rings->native.rsp_prod_pvt),
-		       &resp, sizeof(resp));
+		resp = RING_GET_RESPONSE(&blk_rings->native,
+					 blk_rings->native.rsp_prod_pvt);
 		break;
 	case BLKIF_PROTOCOL_X86_32:
-		memcpy(RING_GET_RESPONSE(&blk_rings->x86_32, blk_rings->x86_32.rsp_prod_pvt),
-		       &resp, sizeof(resp));
+		resp = RING_GET_RESPONSE(&blk_rings->x86_32,
+					 blk_rings->x86_32.rsp_prod_pvt);
 		break;
 	case BLKIF_PROTOCOL_X86_64:
-		memcpy(RING_GET_RESPONSE(&blk_rings->x86_64, blk_rings->x86_64.rsp_prod_pvt),
-		       &resp, sizeof(resp));
+		resp = RING_GET_RESPONSE(&blk_rings->x86_64,
+					 blk_rings->x86_64.rsp_prod_pvt);
 		break;
 	default:
 		BUG();
 	}
+
+	resp->id = id;
+	resp->operation = op;
+	resp->status = st;
+
 	blk_rings->common.rsp_prod_pvt++;
 	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blk_rings->common, notify);
 	spin_unlock_irqrestore(&blkif->blk_ring_lock, flags);


@@ -74,9 +74,8 @@ extern unsigned int xen_blkif_max_ring_order;
 struct blkif_common_request {
 	char dummy;
 };
-struct blkif_common_response {
-	char dummy;
-};
+/* i386 protocol version */

 struct blkif_x86_32_request_rw {
 	uint8_t nr_segments;	/* number of segments */
@@ -128,14 +127,6 @@ struct blkif_x86_32_request {
 	} u;
 } __attribute__((__packed__));

-/* i386 protocol version */
-#pragma pack(push, 4)
-struct blkif_x86_32_response {
-	uint64_t	id;		/* copied from request */
-	uint8_t		operation;	/* copied from request */
-	int16_t		status;		/* BLKIF_RSP_??? */
-};
-#pragma pack(pop)

 /* x86_64 protocol version */

 struct blkif_x86_64_request_rw {
@@ -192,18 +183,12 @@ struct blkif_x86_64_request {
 	} u;
 } __attribute__((__packed__));

-struct blkif_x86_64_response {
-	uint64_t __attribute__((__aligned__(8))) id;
-	uint8_t		operation;	/* copied from request */
-	int16_t		status;		/* BLKIF_RSP_??? */
-};

 DEFINE_RING_TYPES(blkif_common, struct blkif_common_request,
-		  struct blkif_common_response);
+		  struct blkif_response);
 DEFINE_RING_TYPES(blkif_x86_32, struct blkif_x86_32_request,
-		  struct blkif_x86_32_response);
+		  struct blkif_response __packed);
 DEFINE_RING_TYPES(blkif_x86_64, struct blkif_x86_64_request,
-		  struct blkif_x86_64_response);
+		  struct blkif_response);

 union blkif_back_rings {
 	struct blkif_back_ring native;


@@ -2969,6 +2969,12 @@ static int btusb_probe(struct usb_interface *intf,
 	if (id->driver_info & BTUSB_QCA_ROME) {
 		data->setup_on_usb = btusb_setup_qca;
 		hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
+
+		/* QCA Rome devices lose their updated firmware over suspend,
+		 * but the USB hub doesn't notice any status change.
+		 * Explicitly request a device reset on resume.
+		 */
+		set_bit(BTUSB_RESET_RESUME, &data->flags);
 	}

 #ifdef CONFIG_BT_HCIBTUSB_RTL


@@ -4029,7 +4029,8 @@ smi_from_recv_msg(ipmi_smi_t intf, struct ipmi_recv_msg *recv_msg,
 }

 static void check_msg_timeout(ipmi_smi_t intf, struct seq_table *ent,
-			      struct list_head *timeouts, long timeout_period,
+			      struct list_head *timeouts,
+			      unsigned long timeout_period,
 			      int slot, unsigned long *flags,
 			      unsigned int *waiting_msgs)
 {
@@ -4042,8 +4043,8 @@ static void check_msg_timeout(ipmi_smi_t intf, struct seq_table *ent,
 	if (!ent->inuse)
 		return;

-	ent->timeout -= timeout_period;
-	if (ent->timeout > 0) {
+	if (timeout_period < ent->timeout) {
+		ent->timeout -= timeout_period;
 		(*waiting_msgs)++;
 		return;
 	}
@@ -4109,7 +4110,8 @@ static void check_msg_timeout(ipmi_smi_t intf, struct seq_table *ent,
 	}
 }

-static unsigned int ipmi_timeout_handler(ipmi_smi_t intf, long timeout_period)
+static unsigned int ipmi_timeout_handler(ipmi_smi_t intf,
+					 unsigned long timeout_period)
 {
 	struct list_head timeouts;
 	struct ipmi_recv_msg *msg, *msg2;


@@ -265,7 +265,7 @@ static int of_dra7_atl_clk_probe(struct platform_device *pdev)

 		/* Get configuration for the ATL instances */
 		snprintf(prop, sizeof(prop), "atl%u", i);
-		cfg_node = of_find_node_by_name(node, prop);
+		cfg_node = of_get_child_by_name(node, prop);
 		if (cfg_node) {
 			ret = of_property_read_u32(cfg_node, "bws",
 						   &cdesc->bws);
@@ -278,6 +278,7 @@ static int of_dra7_atl_clk_probe(struct platform_device *pdev)
 				atl_write(cinfo, DRA7_ATL_AWSMUX_REG(i),
 					  cdesc->aws);
 			}
+			of_node_put(cfg_node);
 		}

 		cdesc->probed = true;


@@ -80,11 +80,13 @@ static int p8_aes_ctr_setkey(struct crypto_tfm *tfm, const u8 *key,
 	int ret;
 	struct p8_aes_ctr_ctx *ctx = crypto_tfm_ctx(tfm);

+	preempt_disable();
 	pagefault_disable();
 	enable_kernel_altivec();
 	enable_kernel_vsx();
 	ret = aes_p8_set_encrypt_key(key, keylen * 8, &ctx->enc_key);
 	pagefault_enable();
+	preempt_enable();

 	ret += crypto_blkcipher_setkey(ctx->fallback, key, keylen);

 	return ret;
@@ -99,11 +101,13 @@ static void p8_aes_ctr_final(struct p8_aes_ctr_ctx *ctx,
 	u8 *dst = walk->dst.virt.addr;
 	unsigned int nbytes = walk->nbytes;

+	preempt_disable();
 	pagefault_disable();
 	enable_kernel_altivec();
 	enable_kernel_vsx();
 	aes_p8_encrypt(ctrblk, keystream, &ctx->enc_key);
 	pagefault_enable();
+	preempt_enable();

 	crypto_xor(keystream, src, nbytes);
 	memcpy(dst, keystream, nbytes);
@@ -132,6 +136,7 @@ static int p8_aes_ctr_crypt(struct blkcipher_desc *desc,
 		blkcipher_walk_init(&walk, dst, src, nbytes);
 		ret = blkcipher_walk_virt_block(desc, &walk, AES_BLOCK_SIZE);
 		while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
+			preempt_disable();
 			pagefault_disable();
 			enable_kernel_altivec();
 			enable_kernel_vsx();
@@ -143,6 +148,7 @@ static int p8_aes_ctr_crypt(struct blkcipher_desc *desc,
 						    &ctx->enc_key,
 						    walk.iv);
 			pagefault_enable();
+			preempt_enable();

 			/* We need to update IV mostly for last bytes/round */
 			inc = (nbytes & AES_BLOCK_MASK) / AES_BLOCK_SIZE;


@@ -634,6 +634,7 @@ static int dmatest_func(void *data)
 			 * free it this time?" dancing. For now, just
 			 * leave it dangling.
 			 */
+			WARN(1, "dmatest: Kernel stack may be corrupted!!\n");
 			dmaengine_unmap_put(um);
 			result("test timed out", total_tests, src_off, dst_off,
 			       len, 0);


@@ -813,6 +813,7 @@ static int zx_dma_probe(struct platform_device *op)
 	INIT_LIST_HEAD(&d->slave.channels);
 	dma_cap_set(DMA_SLAVE, d->slave.cap_mask);
 	dma_cap_set(DMA_MEMCPY, d->slave.cap_mask);
+	dma_cap_set(DMA_CYCLIC, d->slave.cap_mask);
 	dma_cap_set(DMA_PRIVATE, d->slave.cap_mask);
 	d->slave.dev = &op->dev;
 	d->slave.device_free_chan_resources = zx_dma_free_chan_resources;


@@ -190,6 +190,11 @@ static int palmas_usb_probe(struct platform_device *pdev)
 	struct palmas_usb *palmas_usb;
 	int status;

+	if (!palmas) {
+		dev_err(&pdev->dev, "failed to get valid parent\n");
+		return -EINVAL;
+	}
+
 	palmas_usb = devm_kzalloc(&pdev->dev, sizeof(*palmas_usb), GFP_KERNEL);
 	if (!palmas_usb)
 		return -ENOMEM;


@@ -327,38 +327,6 @@ u64 __init efi_mem_desc_end(efi_memory_desc_t *md)
 	return end;
 }

-/*
- * We can't ioremap data in EFI boot services RAM, because we've already mapped
- * it as RAM. So, look it up in the existing EFI memory map instead. Only
- * callable after efi_enter_virtual_mode and before efi_free_boot_services.
- */
-void __iomem *efi_lookup_mapped_addr(u64 phys_addr)
-{
-	struct efi_memory_map *map;
-	void *p;
-	map = efi.memmap;
-	if (!map)
-		return NULL;
-	if (WARN_ON(!map->map))
-		return NULL;
-	for (p = map->map; p < map->map_end; p += map->desc_size) {
-		efi_memory_desc_t *md = p;
-		u64 size = md->num_pages << EFI_PAGE_SHIFT;
-		u64 end = md->phys_addr + size;
-		if (!(md->attribute & EFI_MEMORY_RUNTIME) &&
-		    md->type != EFI_BOOT_SERVICES_CODE &&
-		    md->type != EFI_BOOT_SERVICES_DATA)
-			continue;
-		if (!md->virt_addr)
-			continue;
-		if (phys_addr >= md->phys_addr && phys_addr < end) {
-			phys_addr += md->virt_addr - md->phys_addr;
-			return (__force void __iomem *)(unsigned long)phys_addr;
-		}
-	}
-	return NULL;
-}

 static __initdata efi_config_table_type_t common_tables[] = {
 	{ACPI_20_TABLE_GUID, "ACPI 2.0", &efi.acpi20},
 	{ACPI_TABLE_GUID, "ACPI", &efi.acpi},


@@ -1575,34 +1575,32 @@ void amdgpu_atombios_scratch_regs_restore(struct amdgpu_device *adev)
 		WREG32(mmBIOS_SCRATCH_0 + i, adev->bios_scratch[i]);
 }

-/* Atom needs data in little endian format
- * so swap as appropriate when copying data to
- * or from atom. Note that atom operates on
- * dw units.
+/* Atom needs data in little endian format so swap as appropriate when copying
+ * data to or from atom. Note that atom operates on dw units.
+ *
+ * Use to_le=true when sending data to atom and provide at least
+ * ALIGN(num_bytes,4) bytes in the dst buffer.
+ *
+ * Use to_le=false when receiving data from atom and provide ALIGN(num_bytes,4)
+ * byes in the src buffer.
  */
 void amdgpu_atombios_copy_swap(u8 *dst, u8 *src, u8 num_bytes, bool to_le)
 {
 #ifdef __BIG_ENDIAN
-	u8 src_tmp[20], dst_tmp[20]; /* used for byteswapping */
-	u32 *dst32, *src32;
+	u32 src_tmp[5], dst_tmp[5];
 	int i;
+	u8 align_num_bytes = ALIGN(num_bytes, 4);

-	memcpy(src_tmp, src, num_bytes);
-	src32 = (u32 *)src_tmp;
-	dst32 = (u32 *)dst_tmp;
 	if (to_le) {
-		for (i = 0; i < ((num_bytes + 3) / 4); i++)
-			dst32[i] = cpu_to_le32(src32[i]);
-		memcpy(dst, dst_tmp, num_bytes);
+		memcpy(src_tmp, src, num_bytes);
+		for (i = 0; i < align_num_bytes / 4; i++)
+			dst_tmp[i] = cpu_to_le32(src_tmp[i]);
+		memcpy(dst, dst_tmp, align_num_bytes);
 	} else {
-		u8 dws = num_bytes & ~3;
-		for (i = 0; i < ((num_bytes + 3) / 4); i++)
-			dst32[i] = le32_to_cpu(src32[i]);
-		memcpy(dst, dst_tmp, dws);
-		if (num_bytes % 4) {
-			for (i = 0; i < (num_bytes % 4); i++)
-				dst[dws+i] = dst_tmp[dws+i];
-		}
+		memcpy(src_tmp, src, align_num_bytes);
+		for (i = 0; i < align_num_bytes / 4; i++)
+			dst_tmp[i] = le32_to_cpu(src_tmp[i]);
+		memcpy(dst, dst_tmp, num_bytes);
 	}
 #else
 	memcpy(dst, src, num_bytes);
View file

@@ -4,3 +4,5 @@ armada-y += armada_510.o
 armada-$(CONFIG_DEBUG_FS) += armada_debugfs.o

 obj-$(CONFIG_DRM_ARMADA) := armada.o
+
+CFLAGS_armada_trace.o := -I$(src)

Some files were not shown because too many files have changed in this diff.