Linux 4.2-rc2

-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJVouXcAAoJEHm+PkMAQRiGjWQIAJcfwb9z5U8jCR+FyvqqM//U
 gzKdODPpwuJBa4IpQoCIZlT4wbGH3W0euuVKWk2zv9f+D/i5D/ZIAj0Wt5mjs1Z2
 /uhEMnYskGHn7Q3HCQyyjPqTi6I4DnSiFS880xTQhsufXxtpDmVWETO1fyCvlt73
 NU6Y0Y/Oe+tN5/Qp3wKLUaZMigiiBD5GviGQQNHtJ6UIorOCJnw5mYSx0dWPotIL
 SYUno4WIR/48FKABUDxs13YK7JDLv3w4h8FimvY+cY6hKLZ9zUhFUCCE3OKXcpHQ
 auAKhyi9YcqJxWvVUCl1gV8NmBefETp6zfWf07NBaDnFiNsfW+mqfO0mWwnOT5s=
 =C7m8
 -----END PGP SIGNATURE-----

Merge tag 'v4.2-rc2' into patchwork

Linux 4.2-rc2

We need to backport changeset 31f0245545, which fixes a bug in
include/linux/compiler.h that breaks sparse.

* tag 'v4.2-rc2': (221 commits)
  Linux 4.2-rc2
  Revert "drm/i915: Use crtc_state->active in primary check_plane func"
  freeing unlinked file indefinitely delayed
  fix a braino in ovl_d_select_inode()
  9p: don't leave a half-initialized inode sitting around
  tick/broadcast: Prevent NULL pointer dereference
  selinux: fix mprotect PROT_EXEC regression caused by mm change
  parisc: Fix some PTE/TLB race conditions and optimize __flush_tlb_range based on timing results
  stifb: Implement hardware accelerated copyarea
  nfit: add support for NVDIMM "latch" flag
  nfit: update block I/O path to use PMEM API
  tools/testing/nvdimm: add mock acpi_nfit_flush_address entries to nfit_test
  tools/testing/nvdimm: fix return code for unimplemented commands
  tools/testing/nvdimm: mock ioremap_wt
  pmem: add maintainer for include/linux/pmem.h
  Revert "Input: synaptics - allocate 3 slots to keep stability in image sensors"
  arm64: entry32: remove pointless register assignment
  MIPS: O32: Use compat_sys_getsockopt.
  MIPS: c-r4k: Extend way_string array
  MIPS: Pistachio: Support CDMM & Fast Debug Channel
  ...
Mauro Carvalho Chehab 2015-07-17 09:12:19 -03:00
commit 675a9a9fda
245 changed files with 5777 additions and 1918 deletions


@@ -36,7 +36,7 @@ SunXi family
         + User Manual
           http://dl.linux-sunxi.org/A20/A20%20User%20Manual%202013-03-22.pdf
 
-      - Allwinner A23
+      - Allwinner A23 (sun8i)
         + Datasheet
           http://dl.linux-sunxi.org/A23/A23%20Datasheet%20V1.0%2020130830.pdf
         + User Manual
@@ -55,7 +55,23 @@ SunXi family
         + User Manual
           http://dl.linux-sunxi.org/A31/A3x_release_document/A31s/IC/A31s%20User%20Manual%20%20V1.0%2020130322.pdf
 
+      - Allwinner A33 (sun8i)
+        + Datasheet
+          http://dl.linux-sunxi.org/A33/A33%20Datasheet%20release%201.1.pdf
+        + User Manual
+          http://dl.linux-sunxi.org/A33/A33%20user%20manual%20release%201.1.pdf
+
+      - Allwinner H3 (sun8i)
+        + Datasheet
+          http://dl.linux-sunxi.org/H3/Allwinner_H3_Datasheet_V1.0.pdf
+
     * Quad ARM Cortex-A15, Quad ARM Cortex-A7 based SoCs
       - Allwinner A80
         + Datasheet
           http://dl.linux-sunxi.org/A80/A80_Datasheet_Revision_1.0_0404.pdf
+
+    * Octa ARM Cortex-A7 based SoCs
+      - Allwinner A83T
+        + Not Supported
+        + Datasheet
+          http://dl.linux-sunxi.org/A83T/A83T_datasheet_Revision_1.1.pdf


@@ -9,4 +9,6 @@ using one of the following compatible strings:
   allwinner,sun6i-a31
   allwinner,sun7i-a20
   allwinner,sun8i-a23
+  allwinner,sun8i-a33
+  allwinner,sun8i-h3
   allwinner,sun9i-a80


@@ -8,6 +8,7 @@ of the EMIF IP and memory parts attached to it.
 Required properties:
 - compatible	: Should be of the form "ti,emif-<ip-rev>" where <ip-rev>
   is the IP revision of the specific EMIF instance.
+		  For am437x should be ti,emif-am4372.
 
 - phy-type	: <u32> indicating the DDR phy type. Following are the
   allowed values


@@ -410,8 +410,17 @@ Documentation/usb/persist.txt.
 
 Q: Can I suspend-to-disk using a swap partition under LVM?
 
-A: No. You can suspend successfully, but you'll not be able to
-resume. uswsusp should be able to work with LVM. See suspend.sf.net.
+A: Yes and No. You can suspend successfully, but the kernel will not be able
+to resume on its own. You need an initramfs that can recognize the resume
+situation, activate the logical volume containing the swap volume (but not
+touch any filesystems!), and eventually call
+
+echo -n "$major:$minor" > /sys/power/resume
+
+where $major and $minor are the respective major and minor device numbers of
+the swap volume.
+
+uswsusp works with LVM, too. See http://suspend.sourceforge.net/
 
 Q: I upgraded the kernel from 2.6.15 to 2.6.16. Both kernels were
 compiled with the similar configuration files. Anyway I found that
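
The updated answer above is essentially a recipe for the initramfs. As a rough
illustration only (written in C rather than shell, for consistency with the
other sketches in this log; the /dev/mapper/vg0-swap path is a made-up example,
a real initramfs would discover the swap LV after activating the volume group):

	/* Sketch of the initramfs-side resume step described above. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/stat.h>
	#include <sys/sysmacros.h>

	int main(void)
	{
		struct stat st;
		FILE *f;

		if (stat("/dev/mapper/vg0-swap", &st) != 0 || !S_ISBLK(st.st_mode))
			return EXIT_FAILURE;

		f = fopen("/sys/power/resume", "w");
		if (!f)
			return EXIT_FAILURE;

		/* Equivalent of: echo -n "$major:$minor" > /sys/power/resume */
		fprintf(f, "%u:%u", major(st.st_rdev), minor(st.st_rdev));
		fclose(f);
		return EXIT_SUCCESS;
	}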


@@ -1614,6 +1614,7 @@ M:	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	arch/arm/boot/dts/vexpress*
+F:	arch/arm64/boot/dts/arm/vexpress*
 F:	arch/arm/mach-vexpress/
 F:	*/*/vexpress*
 F:	*/*/*/vexpress*
@@ -2562,19 +2563,31 @@ F:	arch/powerpc/include/uapi/asm/spu*.h
 F:	arch/powerpc/oprofile/*cell*
 F:	arch/powerpc/platforms/cell/
 
-CEPH DISTRIBUTED FILE SYSTEM CLIENT
+CEPH COMMON CODE (LIBCEPH)
+M:	Ilya Dryomov <idryomov@gmail.com>
 M:	"Yan, Zheng" <zyan@redhat.com>
 M:	Sage Weil <sage@redhat.com>
 L:	ceph-devel@vger.kernel.org
 W:	http://ceph.com/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
+T:	git git://github.com/ceph/ceph-client.git
 S:	Supported
-F:	Documentation/filesystems/ceph.txt
-F:	fs/ceph/
 F:	net/ceph/
 F:	include/linux/ceph/
 F:	include/linux/crush/
 
+CEPH DISTRIBUTED FILE SYSTEM CLIENT (CEPH)
+M:	"Yan, Zheng" <zyan@redhat.com>
+M:	Sage Weil <sage@redhat.com>
+M:	Ilya Dryomov <idryomov@gmail.com>
+L:	ceph-devel@vger.kernel.org
+W:	http://ceph.com/
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
+T:	git git://github.com/ceph/ceph-client.git
+S:	Supported
+F:	Documentation/filesystems/ceph.txt
+F:	fs/ceph/
+
 CERTIFIED WIRELESS USB (WUSB) SUBSYSTEM:
 L:	linux-usb@vger.kernel.org
 S:	Orphan
@@ -6147,6 +6160,7 @@ L:	linux-nvdimm@lists.01.org
 Q:	https://patchwork.kernel.org/project/linux-nvdimm/list/
 S:	Supported
 F:	drivers/nvdimm/pmem.c
+F:	include/linux/pmem.h
 
 LINUX FOR IBM pSERIES (RS/6000)
 M:	Paul Mackerras <paulus@au.ibm.com>
@@ -6161,7 +6175,7 @@ M:	Michael Ellerman <mpe@ellerman.id.au>
 W:	http://www.penguinppc.org/
 L:	linuxppc-dev@lists.ozlabs.org
 Q:	http://patchwork.ozlabs.org/project/linuxppc-dev/list/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git
 S:	Supported
 F:	Documentation/powerpc/
 F:	arch/powerpc/
@@ -8366,10 +8380,12 @@ RADOS BLOCK DEVICE (RBD)
 M:	Ilya Dryomov <idryomov@gmail.com>
 M:	Sage Weil <sage@redhat.com>
 M:	Alex Elder <elder@kernel.org>
-M:	ceph-devel@vger.kernel.org
+L:	ceph-devel@vger.kernel.org
 W:	http://ceph.com/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
+T:	git git://github.com/ceph/ceph-client.git
 S:	Supported
+F:	Documentation/ABI/testing/sysfs-bus-rbd
 F:	drivers/block/rbd.c
 F:	drivers/block/rbd_types.h


@@ -1,7 +1,7 @@
 VERSION = 4
 PATCHLEVEL = 2
 SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc2
 NAME = Hurr durr I'ma sheep
 
 # *DOCUMENTATION*


@@ -1693,6 +1693,12 @@ config HIGHMEM
 config HIGHPTE
 	bool "Allocate 2nd-level pagetables from highmem"
 	depends on HIGHMEM
+	help
+	  The VM uses one page of physical memory for each page table.
+	  For systems with a lot of processes, this can use a lot of
+	  precious low memory, eventually leading to low memory being
+	  consumed by page tables.  Setting this option will allow
+	  user-space 2nd level page tables to reside in high memory.
 
 config HW_PERF_EVENTS
 	bool "Enable hardware performance counter support for perf events"


@@ -1635,7 +1635,7 @@ config PID_IN_CONTEXTIDR
 config DEBUG_SET_MODULE_RONX
 	bool "Set loadable kernel module data as NX and text as RO"
-	depends on MODULES
+	depends on MODULES && MMU
 	---help---
 	  This option helps catch unintended modifications to loadable
 	  kernel module's text and read-only data.  It also prevents execution


@@ -80,3 +80,7 @@
 		status = "okay";
 	};
 };
+
+&rtc {
+	system-power-controller;
+};


@@ -132,6 +132,12 @@
 		};
 	};
 
+	emif: emif@4c000000 {
+		compatible = "ti,emif-am4372";
+		reg = <0x4c000000 0x1000000>;
+		ti,hwmods = "emif";
+	};
+
 	edma: edma@49000000 {
 		compatible = "ti,edma3";
 		ti,hwmods = "tpcc", "tptc0", "tptc1", "tptc2";
@@ -941,6 +947,7 @@
 			ti,hwmods = "dss_rfbi";
 			clocks = <&disp_clk>;
 			clock-names = "fck";
+			status = "disabled";
 		};
 	};


@@ -605,6 +605,10 @@
 	phy-supply = <&ldousb_reg>;
 };
 
+&usb2_phy2 {
+	phy-supply = <&ldousb_reg>;
+};
+
 &usb1 {
 	dr_mode = "host";
 	pinctrl-names = "default";

File diff suppressed because it is too large.


@@ -150,6 +150,16 @@
 			interface-type = "ace";
 			reg = <0x5000 0x1000>;
 		};
+
+		pmu@9000 {
+			compatible = "arm,cci-400-pmu,r0";
+			reg = <0x9000 0x5000>;
+			interrupts = <0 105 4>,
+				     <0 101 4>,
+				     <0 102 4>,
+				     <0 103 4>,
+				     <0 104 4>;
+		};
 	};
 
 	memory-controller@7ffd0000 {
@@ -187,11 +197,22 @@
 			     <1 10 0xf08>;
 	};
 
-	pmu {
+	pmu_a15 {
 		compatible = "arm,cortex-a15-pmu";
 		interrupts = <0 68 4>,
 			     <0 69 4>;
-		interrupt-affinity = <&cpu0>, <&cpu1>;
+		interrupt-affinity = <&cpu0>,
+				     <&cpu1>;
+	};
+
+	pmu_a7 {
+		compatible = "arm,cortex-a7-pmu";
+		interrupts = <0 128 4>,
+			     <0 129 4>,
+			     <0 130 4>;
+		interrupt-affinity = <&cpu2>,
+				     <&cpu3>,
+				     <&cpu4>;
 	};
 
 	oscclk6a: oscclk6a {


@@ -353,7 +353,6 @@ CONFIG_POWER_RESET_AS3722=y
 CONFIG_POWER_RESET_GPIO=y
 CONFIG_POWER_RESET_GPIO_RESTART=y
 CONFIG_POWER_RESET_KEYSTONE=y
-CONFIG_POWER_RESET_SUN6I=y
 CONFIG_POWER_RESET_RMOBILE=y
 CONFIG_SENSORS_LM90=y
 CONFIG_SENSORS_LM95245=y


@@ -2,6 +2,7 @@ CONFIG_NO_HZ=y
 CONFIG_HIGH_RES_TIMERS=y
 CONFIG_BLK_DEV_INITRD=y
 CONFIG_PERF_EVENTS=y
+CONFIG_MODULES=y
 CONFIG_ARCH_SUNXI=y
 CONFIG_SMP=y
 CONFIG_NR_CPUS=8
@@ -77,7 +78,6 @@ CONFIG_SPI_SUN6I=y
 CONFIG_GPIO_SYSFS=y
 CONFIG_POWER_SUPPLY=y
 CONFIG_POWER_RESET=y
-CONFIG_POWER_RESET_SUN6I=y
 CONFIG_THERMAL=y
 CONFIG_CPU_THERMAL=y
 CONFIG_WATCHDOG=y
@@ -87,6 +87,10 @@ CONFIG_REGULATOR=y
 CONFIG_REGULATOR_FIXED_VOLTAGE=y
 CONFIG_REGULATOR_AXP20X=y
 CONFIG_REGULATOR_GPIO=y
+CONFIG_FB=y
+CONFIG_FB_SIMPLE=y
+CONFIG_FRAMEBUFFER_CONSOLE=y
+CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
 CONFIG_USB=y
 CONFIG_USB_EHCI_HCD=y
 CONFIG_USB_EHCI_HCD_PLATFORM=y


@@ -140,16 +140,11 @@ static inline u32 __raw_readl(const volatile void __iomem *addr)
  * The _caller variety takes a __builtin_return_address(0) value for
  * /proc/vmalloc to use - and should only be used in non-inline functions.
  */
-extern void __iomem *__arm_ioremap_pfn_caller(unsigned long, unsigned long,
-	size_t, unsigned int, void *);
 extern void __iomem *__arm_ioremap_caller(phys_addr_t, size_t, unsigned int,
 	void *);
 extern void __iomem *__arm_ioremap_pfn(unsigned long, unsigned long, size_t, unsigned int);
-extern void __iomem *__arm_ioremap(phys_addr_t, size_t, unsigned int);
 extern void __iomem *__arm_ioremap_exec(phys_addr_t, size_t, bool cached);
 extern void __iounmap(volatile void __iomem *addr);
-extern void __arm_iounmap(volatile void __iomem *addr);
 extern void __iomem * (*arch_ioremap_caller)(phys_addr_t, size_t,
 	unsigned int, void *);
@@ -321,21 +316,24 @@ extern void _memset_io(volatile void __iomem *, int, size_t);
 static inline void memset_io(volatile void __iomem *dst, unsigned c,
 	size_t count)
 {
-	memset((void __force *)dst, c, count);
+	extern void mmioset(void *, unsigned int, size_t);
+	mmioset((void __force *)dst, c, count);
 }
 #define memset_io(dst,c,count) memset_io(dst,c,count)
 
 static inline void memcpy_fromio(void *to, const volatile void __iomem *from,
 	size_t count)
 {
-	memcpy(to, (const void __force *)from, count);
+	extern void mmiocpy(void *, const void *, size_t);
+	mmiocpy(to, (const void __force *)from, count);
 }
 #define memcpy_fromio(to,from,count) memcpy_fromio(to,from,count)
 
 static inline void memcpy_toio(volatile void __iomem *to, const void *from,
 	size_t count)
 {
-	memcpy((void __force *)to, from, count);
+	extern void mmiocpy(void *, const void *, size_t);
+	mmiocpy((void __force *)to, from, count);
 }
 #define memcpy_toio(to,from,count) memcpy_toio(to,from,count)
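
The effect of this hunk is that memset_io()/memcpy_fromio()/memcpy_toio() now
go through the mmioset/mmiocpy assembly aliases instead of the compiler's
memset/memcpy, which may inline unaligned accesses that are "unpredictable" on
device mappings. A hedged usage sketch (the device window offset is a made-up
example, not from this merge):

	#include <linux/io.h>
	#include <linux/types.h>

	#define DEV_BUF_OFFSET 0x100	/* hypothetical window inside the BAR */

	static void load_firmware(void __iomem *base, const u8 *fw, size_t len)
	{
		/* Routed through mmiocpy(), which never issues unaligned or
		 * overlapping accesses, unlike an optimized memcpy(). */
		memcpy_toio(base + DEV_BUF_OFFSET, fw, len);
	}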
@@ -348,18 +346,61 @@ static inline void memcpy_toio(volatile void __iomem *to, const void *from,
 #endif	/* readl */
 
 /*
- * ioremap and friends.
+ * ioremap() and friends.
  *
- * ioremap takes a PCI memory address, as specified in
- * Documentation/io-mapping.txt.
+ * ioremap() takes a resource address, and size.  Due to the ARM memory
+ * types, it is important to use the correct ioremap() function as each
+ * mapping has specific properties.
  *
+ * Function		Memory type	Cacheability	Cache hint
+ * ioremap()		Device		n/a		n/a
+ * ioremap_nocache()	Device		n/a		n/a
+ * ioremap_cache()	Normal		Writeback	Read allocate
+ * ioremap_wc()		Normal		Non-cacheable	n/a
+ * ioremap_wt()		Normal		Non-cacheable	n/a
+ *
+ * All device mappings have the following properties:
+ * - no access speculation
+ * - no repetition (eg, on return from an exception)
+ * - number, order and size of accesses are maintained
+ * - unaligned accesses are "unpredictable"
+ * - writes may be delayed before they hit the endpoint device
+ *
+ * ioremap_nocache() is the same as ioremap() as there are too many device
+ * drivers using this for device registers, and documentation which tells
+ * people to use it for such for this to be any different.  This is not a
+ * safe fallback for memory-like mappings, or memory regions where the
+ * compiler may generate unaligned accesses - eg, via inlining its own
+ * memcpy.
+ *
+ * All normal memory mappings have the following properties:
+ * - reads can be repeated with no side effects
+ * - repeated reads return the last value written
+ * - reads can fetch additional locations without side effects
+ * - writes can be repeated (in certain cases) with no side effects
+ * - writes can be merged before accessing the target
+ * - unaligned accesses can be supported
+ * - ordering is not guaranteed without explicit dependencies or barrier
+ *   instructions
+ * - writes may be delayed before they hit the endpoint memory
+ *
+ * The cache hint is only a performance hint: CPUs may alias these hints.
+ * Eg, a CPU not implementing read allocate but implementing write allocate
+ * will provide a write allocate mapping instead.
  */
-#define ioremap(cookie,size)		__arm_ioremap((cookie), (size), MT_DEVICE)
-#define ioremap_nocache(cookie,size)	__arm_ioremap((cookie), (size), MT_DEVICE)
-#define ioremap_cache(cookie,size)	__arm_ioremap((cookie), (size), MT_DEVICE_CACHED)
-#define ioremap_wc(cookie,size)		__arm_ioremap((cookie), (size), MT_DEVICE_WC)
-#define ioremap_wt(cookie,size)		__arm_ioremap((cookie), (size), MT_DEVICE)
-#define iounmap				__arm_iounmap
+void __iomem *ioremap(resource_size_t res_cookie, size_t size);
+#define ioremap ioremap
+#define ioremap_nocache ioremap
+
+void __iomem *ioremap_cache(resource_size_t res_cookie, size_t size);
+#define ioremap_cache ioremap_cache
+
+void __iomem *ioremap_wc(resource_size_t res_cookie, size_t size);
+#define ioremap_wc ioremap_wc
+#define ioremap_wt ioremap_wc
+
+void iounmap(volatile void __iomem *iomem_cookie);
+#define iounmap iounmap
 
 /*
  * io{read,write}{16,32}be() macros
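
A minimal sketch of what the new table implies for a driver, assuming a
made-up register block at physical 0x10000000; ioremap() yields a device
mapping, so the accesses are ordered and non-speculative:

	#include <linux/errno.h>
	#include <linux/io.h>

	static void __iomem *regs;

	static int example_map(void)
	{
		regs = ioremap(0x10000000, 0x1000);	/* device registers */
		if (!regs)
			return -ENOMEM;

		writel(0x1, regs + 0x04);	/* hypothetical enable register */
		return 0;
	}

	static void example_unmap(void)
	{
		iounmap(regs);
	}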


@@ -275,7 +275,7 @@ static inline void *phys_to_virt(phys_addr_t x)
  */
 #define __pa(x)			__virt_to_phys((unsigned long)(x))
 #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
-#define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
+#define pfn_to_kaddr(pfn)	__va((phys_addr_t)(pfn) << PAGE_SHIFT)
 
 extern phys_addr_t (*arch_virt_to_idmap)(unsigned long x);
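
The added cast matters on LPAE systems, where phys_addr_t is 64-bit but
unsigned long is still 32-bit: shifting a high pfn truncates silently. A small
illustration with assumed values (PAGE_SHIFT = 12, pfn for physical
0x8_8000_0000, which only exists with RAM above 4 GiB):

	#include <linux/types.h>

	static phys_addr_t pfn_shift_demo(void)
	{
		unsigned long pfn = 0x880000;

		/* Old macro: the shift happens in 32-bit unsigned long and
		 * truncates to 0x80000000 before __va() ever sees it. */
		unsigned long truncated = pfn << 12;
		(void)truncated;

		/* Fixed macro: widen first, then shift. */
		return (phys_addr_t)pfn << 12;
	}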


@@ -129,7 +129,36 @@
 /*
  * These are the memory types, defined to be compatible with
- * pre-ARMv6 CPUs cacheable and bufferable bits:   XXCB
+ * pre-ARMv6 CPUs cacheable and bufferable bits: n/a,n/a,C,B
+ * ARMv6+ without TEX remapping, they are a table index.
+ * ARMv6+ with TEX remapping, they correspond to n/a,TEX(0),C,B
+ *
+ * MT type		Pre-ARMv6	ARMv6+ type / cacheable status
+ * UNCACHED		Uncached	Strongly ordered
+ * BUFFERABLE		Bufferable	Normal memory / non-cacheable
+ * WRITETHROUGH		Writethrough	Normal memory / write through
+ * WRITEBACK		Writeback	Normal memory / write back, read alloc
+ * MINICACHE		Minicache	N/A
+ * WRITEALLOC		Writeback	Normal memory / write back, write alloc
+ * DEV_SHARED		Uncached	Device memory (shared)
+ * DEV_NONSHARED	Uncached	Device memory (non-shared)
+ * DEV_WC		Bufferable	Normal memory / non-cacheable
+ * DEV_CACHED		Writeback	Normal memory / write back, read alloc
+ * VECTORS		Variable	Normal memory / variable
+ *
+ * All normal memory mappings have the following properties:
+ * - reads can be repeated with no side effects
+ * - repeated reads return the last value written
+ * - reads can fetch additional locations without side effects
+ * - writes can be repeated (in certain cases) with no side effects
+ * - writes can be merged before accessing the target
+ * - unaligned accesses can be supported
+ *
+ * All device mappings have the following properties:
+ * - no access speculation
+ * - no repetition (eg, on return from an exception)
+ * - number, order and size of accesses are maintained
+ * - unaligned accesses are "unpredictable"
  */
 #define L_PTE_MT_UNCACHED	(_AT(pteval_t, 0x00) << 2)	/* 0000 */
 #define L_PTE_MT_BUFFERABLE	(_AT(pteval_t, 0x01) << 2)	/* 0001 */


@@ -50,6 +50,9 @@ extern void __aeabi_ulcmp(void);
 
 extern void fpundefinstr(void);
 
+void mmioset(void *, unsigned int, size_t);
+void mmiocpy(void *, const void *, size_t);
+
 	/* platform dependent support */
 EXPORT_SYMBOL(arm_delay_ops);
 
@@ -88,6 +91,9 @@ EXPORT_SYMBOL(memmove);
 EXPORT_SYMBOL(memchr);
 EXPORT_SYMBOL(__memzero);
 
+EXPORT_SYMBOL(mmioset);
+EXPORT_SYMBOL(mmiocpy);
+
 #ifdef CONFIG_MMU
 EXPORT_SYMBOL(copy_page);


@@ -410,7 +410,7 @@ ENDPROC(__fiq_abt)
 	zero_fp
 
 	.if	\trace
-#ifdef CONFIG_IRQSOFF_TRACER
+#ifdef CONFIG_TRACE_IRQFLAGS
 	bl	trace_hardirqs_off
 #endif
 	ct_user_exit save = 0


@@ -578,7 +578,7 @@ void handle_IPI(int ipinr, struct pt_regs *regs)
 	struct pt_regs *old_regs = set_irq_regs(regs);
 
 	if ((unsigned)ipinr < NR_IPI) {
-		trace_ipi_entry(ipi_types[ipinr]);
+		trace_ipi_entry_rcuidle(ipi_types[ipinr]);
 		__inc_irq_stat(cpu, ipi_irqs[ipinr]);
 	}
 
@@ -637,7 +637,7 @@ void handle_IPI(int ipinr, struct pt_regs *regs)
 	}
 
 	if ((unsigned)ipinr < NR_IPI)
-		trace_ipi_exit(ipi_types[ipinr]);
+		trace_ipi_exit_rcuidle(ipi_types[ipinr]);
 	set_irq_regs(old_regs);
 }


@@ -61,8 +61,10 @@
 
 /* Prototype: void *memcpy(void *dest, const void *src, size_t n); */
 
+ENTRY(mmiocpy)
 ENTRY(memcpy)
 
 #include "copy_template.S"
 
 ENDPROC(memcpy)
+ENDPROC(mmiocpy)


@@ -16,6 +16,7 @@
 	.text
 	.align	5
 
+ENTRY(mmioset)
 ENTRY(memset)
 UNWIND( .fnstart )
 	ands	r3, r0, #3	@ 1 unaligned?
@@ -133,3 +134,4 @@ UNWIND( .fnstart )
 	b	1b
 UNWIND( .fnend )
 ENDPROC(memset)
+ENDPROC(mmioset)


@@ -117,7 +117,6 @@ static void omap2_show_dma_caps(void)
 	u8 revision = dma_read(REVISION, 0) & 0xff;
 	printk(KERN_INFO "OMAP DMA hardware revision %d.%d\n",
 				revision >> 4, revision & 0xf);
-	return;
 }
 
 static unsigned configure_dma_errata(void)


@@ -4,6 +4,7 @@ menuconfig ARCH_SIRF
 	select ARCH_REQUIRE_GPIOLIB
 	select GENERIC_IRQ_CHIP
 	select NO_IOPORT_MAP
+	select REGMAP
 	select PINCTRL
 	select PINCTRL_SIRF
 	help


@@ -1,5 +1,5 @@
 /*
- * RTC I/O Bridge interfaces for CSR SiRFprimaII
+ * RTC I/O Bridge interfaces for CSR SiRFprimaII/atlas7
  * ARM access the registers of SYSRTC, GPSRTC and PWRC through this module
  *
  * Copyright (c) 2011 Cambridge Silicon Radio Limited, a CSR plc group company.
@@ -10,6 +10,7 @@
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/io.h>
+#include <linux/regmap.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
 #include <linux/of_device.h>
@@ -66,6 +67,7 @@ u32 sirfsoc_rtc_iobrg_readl(u32 addr)
 {
 	unsigned long flags, val;
 
+	/* TODO: add hwspinlock to sync with M3 */
 	spin_lock_irqsave(&rtciobrg_lock, flags);
 
 	val = __sirfsoc_rtc_iobrg_readl(addr);
@@ -90,6 +92,7 @@ void sirfsoc_rtc_iobrg_writel(u32 val, u32 addr)
 {
 	unsigned long flags;
 
+	/* TODO: add hwspinlock to sync with M3 */
 	spin_lock_irqsave(&rtciobrg_lock, flags);
 
 	sirfsoc_rtc_iobrg_pre_writel(val, addr);
@@ -102,6 +105,45 @@ void sirfsoc_rtc_iobrg_writel(u32 val, u32 addr)
 }
 EXPORT_SYMBOL_GPL(sirfsoc_rtc_iobrg_writel);
 
+static int regmap_iobg_regwrite(void *context, unsigned int reg,
+				unsigned int val)
+{
+	sirfsoc_rtc_iobrg_writel(val, reg);
+	return 0;
+}
+
+static int regmap_iobg_regread(void *context, unsigned int reg,
+			       unsigned int *val)
+{
+	*val = (u32)sirfsoc_rtc_iobrg_readl(reg);
+	return 0;
+}
+
+static struct regmap_bus regmap_iobg = {
+	.reg_write = regmap_iobg_regwrite,
+	.reg_read = regmap_iobg_regread,
+};
+
+/**
+ * devm_regmap_init_iobg(): Initialise managed register map
+ *
+ * @iobg: Device that will be interacted with
+ * @config: Configuration for register map
+ *
+ * The return value will be an ERR_PTR() on error or a valid pointer
+ * to a struct regmap.  The regmap will be automatically freed by the
+ * device management code.
+ */
+struct regmap *devm_regmap_init_iobg(struct device *dev,
+				     const struct regmap_config *config)
+{
+	const struct regmap_bus *bus = &regmap_iobg;
+
+	return devm_regmap_init(dev, bus, dev, config);
+}
+EXPORT_SYMBOL_GPL(devm_regmap_init_iobg);
+
 static const struct of_device_id rtciobrg_ids[] = {
 	{ .compatible = "sirf,prima2-rtciobg" },
 	{}
@@ -132,7 +174,7 @@ static int __init sirfsoc_rtciobrg_init(void)
 }
 postcore_initcall(sirfsoc_rtciobrg_init);
 
-MODULE_AUTHOR("Zhiwu Song <zhiwu.song@csr.com>, "
-	      "Barry Song <baohua.song@csr.com>");
+MODULE_AUTHOR("Zhiwu Song <zhiwu.song@csr.com>");
+MODULE_AUTHOR("Barry Song <baohua.song@csr.com>");
 MODULE_DESCRIPTION("CSR SiRFprimaII rtc io bridge");
 MODULE_LICENSE("GPL v2");
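
A hedged sketch of how a client driver might use the new bridge regmap; the
regmap_config values and the register offset are assumptions for illustration,
not taken from this merge:

	#include <linux/device.h>
	#include <linux/err.h>
	#include <linux/regmap.h>

	/* Exported by rtciobrg.c above; the header hookup is not shown here. */
	struct regmap *devm_regmap_init_iobg(struct device *dev,
					     const struct regmap_config *config);

	static int example_probe(struct device *dev)
	{
		static const struct regmap_config cfg = {
			.reg_bits = 32,
			.val_bits = 32,
			.reg_stride = 4,
		};
		struct regmap *map;
		unsigned int val;

		map = devm_regmap_init_iobg(dev, &cfg);
		if (IS_ERR(map))
			return PTR_ERR(map);

		/* Reads go through regmap_iobg_regread() -> iobrg_readl(). */
		return regmap_read(map, 0x0, &val);	/* hypothetical reg 0x0 */
	}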


@@ -35,7 +35,7 @@ config MACH_SUN7I
 	select SUN5I_HSTIMER
 
 config MACH_SUN8I
-	bool "Allwinner A23 (sun8i) SoCs support"
+	bool "Allwinner sun8i Family SoCs support"
 	default ARCH_SUNXI
 	select ARM_GIC
 	select MFD_SUN6I_PRCM


@@ -67,10 +67,13 @@ MACHINE_END
 
 static const char * const sun8i_board_dt_compat[] = {
 	"allwinner,sun8i-a23",
+	"allwinner,sun8i-a33",
+	"allwinner,sun8i-h3",
 	NULL,
 };
 
-DT_MACHINE_START(SUN8I_DT, "Allwinner sun8i (A23) Family")
+DT_MACHINE_START(SUN8I_DT, "Allwinner sun8i Family")
+	.init_time	= sun6i_timer_init,
 	.dt_compat	= sun8i_board_dt_compat,
 	.init_late	= sunxi_dt_cpufreq_init,
 MACHINE_END


@@ -255,7 +255,7 @@ remap_area_supersections(unsigned long virt, unsigned long pfn,
 }
 #endif
 
-void __iomem * __arm_ioremap_pfn_caller(unsigned long pfn,
+static void __iomem * __arm_ioremap_pfn_caller(unsigned long pfn,
 	unsigned long offset, size_t size, unsigned int mtype, void *caller)
 {
 	const struct mem_type *type;
@@ -363,7 +363,7 @@ __arm_ioremap_pfn(unsigned long pfn, unsigned long offset, size_t size,
 	unsigned int mtype)
 {
 	return __arm_ioremap_pfn_caller(pfn, offset, size, mtype,
-			__builtin_return_address(0));
+					__builtin_return_address(0));
 }
 EXPORT_SYMBOL(__arm_ioremap_pfn);
 
@@ -371,13 +371,26 @@ void __iomem * (*arch_ioremap_caller)(phys_addr_t, size_t,
 				      unsigned int, void *) =
 	__arm_ioremap_caller;
 
-void __iomem *
-__arm_ioremap(phys_addr_t phys_addr, size_t size, unsigned int mtype)
+void __iomem *ioremap(resource_size_t res_cookie, size_t size)
 {
-	return arch_ioremap_caller(phys_addr, size, mtype,
-		__builtin_return_address(0));
+	return arch_ioremap_caller(res_cookie, size, MT_DEVICE,
+				   __builtin_return_address(0));
 }
-EXPORT_SYMBOL(__arm_ioremap);
+EXPORT_SYMBOL(ioremap);
+
+void __iomem *ioremap_cache(resource_size_t res_cookie, size_t size)
+{
+	return arch_ioremap_caller(res_cookie, size, MT_DEVICE_CACHED,
+				   __builtin_return_address(0));
+}
+EXPORT_SYMBOL(ioremap_cache);
+
+void __iomem *ioremap_wc(resource_size_t res_cookie, size_t size)
+{
+	return arch_ioremap_caller(res_cookie, size, MT_DEVICE_WC,
+				   __builtin_return_address(0));
+}
+EXPORT_SYMBOL(ioremap_wc);
 
 /*
  * Remap an arbitrary physical address space into the kernel virtual
@@ -431,11 +444,11 @@ void __iounmap(volatile void __iomem *io_addr)
 
 void (*arch_iounmap)(volatile void __iomem *) = __iounmap;
 
-void __arm_iounmap(volatile void __iomem *io_addr)
+void iounmap(volatile void __iomem *cookie)
 {
-	arch_iounmap(io_addr);
+	arch_iounmap(cookie);
 }
-EXPORT_SYMBOL(__arm_iounmap);
+EXPORT_SYMBOL(iounmap);
 
 #ifdef CONFIG_PCI
 static int pci_ioremap_mem_type = MT_DEVICE;


@@ -1072,6 +1072,7 @@ void __init sanity_check_meminfo(void)
 	int highmem = 0;
 	phys_addr_t vmalloc_limit = __pa(vmalloc_min - 1) + 1;
 	struct memblock_region *reg;
+	bool should_use_highmem = false;
 
 	for_each_memblock(memory, reg) {
 		phys_addr_t block_start = reg->base;
@@ -1090,6 +1091,7 @@ void __init sanity_check_meminfo(void)
 			pr_notice("Ignoring RAM at %pa-%pa (!CONFIG_HIGHMEM)\n",
 				  &block_start, &block_end);
 			memblock_remove(reg->base, reg->size);
+			should_use_highmem = true;
 			continue;
 		}
 
@@ -1100,6 +1102,7 @@ void __init sanity_check_meminfo(void)
 				  &block_start, &block_end, &vmalloc_limit);
 			memblock_remove(vmalloc_limit, overlap_size);
 			block_end = vmalloc_limit;
+			should_use_highmem = true;
 		}
 	}
 
@@ -1134,6 +1137,9 @@ void __init sanity_check_meminfo(void)
 		}
 	}
 
+	if (should_use_highmem)
+		pr_notice("Consider using a HIGHMEM enabled kernel.\n");
+
 	high_memory = __va(arm_lowmem_limit - 1) + 1;
 
 	/*
@@ -1494,6 +1500,7 @@ void __init paging_init(const struct machine_desc *mdesc)
 	build_mem_type_table();
 	prepare_page_table();
 	map_lowmem();
+	memblock_set_current_limit(arm_lowmem_limit);
 	dma_contiguous_remap();
 	devicemaps_init(mdesc);
 	kmap_init();


@@ -351,30 +351,43 @@ void __iomem *__arm_ioremap_pfn(unsigned long pfn, unsigned long offset,
 }
 EXPORT_SYMBOL(__arm_ioremap_pfn);
 
-void __iomem *__arm_ioremap_pfn_caller(unsigned long pfn, unsigned long offset,
-	size_t size, unsigned int mtype, void *caller)
-{
-	return __arm_ioremap_pfn(pfn, offset, size, mtype);
-}
-
-void __iomem *__arm_ioremap(phys_addr_t phys_addr, size_t size,
-	unsigned int mtype)
-{
-	return (void __iomem *)phys_addr;
-}
-EXPORT_SYMBOL(__arm_ioremap);
-
-void __iomem * (*arch_ioremap_caller)(phys_addr_t, size_t, unsigned int, void *);
-
 void __iomem *__arm_ioremap_caller(phys_addr_t phys_addr, size_t size,
 	unsigned int mtype, void *caller)
 {
-	return __arm_ioremap(phys_addr, size, mtype);
+	return (void __iomem *)phys_addr;
 }
 
+void __iomem * (*arch_ioremap_caller)(phys_addr_t, size_t, unsigned int, void *);
+
+void __iomem *ioremap(resource_size_t res_cookie, size_t size)
+{
+	return __arm_ioremap_caller(res_cookie, size, MT_DEVICE,
+				    __builtin_return_address(0));
+}
+EXPORT_SYMBOL(ioremap);
+
+void __iomem *ioremap_cache(resource_size_t res_cookie, size_t size)
+{
+	return __arm_ioremap_caller(res_cookie, size, MT_DEVICE_CACHED,
+				    __builtin_return_address(0));
+}
+EXPORT_SYMBOL(ioremap_cache);
+
+void __iomem *ioremap_wc(resource_size_t res_cookie, size_t size)
+{
+	return __arm_ioremap_caller(res_cookie, size, MT_DEVICE_WC,
+				    __builtin_return_address(0));
+}
+EXPORT_SYMBOL(ioremap_wc);
+
+void __iounmap(volatile void __iomem *addr)
+{
+}
+EXPORT_SYMBOL(__iounmap);
+
 void (*arch_iounmap)(volatile void __iomem *);
 
-void __arm_iounmap(volatile void __iomem *addr)
+void iounmap(volatile void __iomem *addr)
 {
 }
-EXPORT_SYMBOL(__arm_iounmap);
+EXPORT_SYMBOL(iounmap);


@@ -45,13 +45,11 @@
  * it does.
  */
 
-#define _GNU_SOURCE
-
 #include <byteswap.h>
 #include <elf.h>
 #include <errno.h>
-#include <error.h>
 #include <fcntl.h>
+#include <stdarg.h>
 #include <stdbool.h>
 #include <stdio.h>
 #include <stdlib.h>
@@ -82,11 +80,25 @@
 #define EF_ARM_ABI_FLOAT_HARD 0x400
 #endif
 
+static int failed;
+static const char *argv0;
 static const char *outfile;
 
+static void fail(const char *fmt, ...)
+{
+	va_list ap;
+
+	failed = 1;
+	fprintf(stderr, "%s: ", argv0);
+	va_start(ap, fmt);
+	vfprintf(stderr, fmt, ap);
+	va_end(ap);
+	exit(EXIT_FAILURE);
+}
+
 static void cleanup(void)
 {
-	if (error_message_count > 0 && outfile != NULL)
+	if (failed && outfile != NULL)
 		unlink(outfile);
 }
@@ -119,68 +131,66 @@ int main(int argc, char **argv)
 	int infd;
 
 	atexit(cleanup);
+	argv0 = argv[0];
 
 	if (argc != 3)
-		error(EXIT_FAILURE, 0, "Usage: %s [infile] [outfile]", argv[0]);
+		fail("Usage: %s [infile] [outfile]\n", argv[0]);
 
 	infile = argv[1];
 	outfile = argv[2];
 
 	infd = open(infile, O_RDONLY);
 	if (infd < 0)
-		error(EXIT_FAILURE, errno, "Cannot open %s", infile);
+		fail("Cannot open %s: %s\n", infile, strerror(errno));
 
 	if (fstat(infd, &stat) != 0)
-		error(EXIT_FAILURE, errno, "Failed stat for %s", infile);
+		fail("Failed stat for %s: %s\n", infile, strerror(errno));
 
 	inbuf = mmap(NULL, stat.st_size, PROT_READ, MAP_PRIVATE, infd, 0);
 	if (inbuf == MAP_FAILED)
-		error(EXIT_FAILURE, errno, "Failed to map %s", infile);
+		fail("Failed to map %s: %s\n", infile, strerror(errno));
 
 	close(infd);
 
 	inhdr = inbuf;
 
 	if (memcmp(&inhdr->e_ident, ELFMAG, SELFMAG) != 0)
-		error(EXIT_FAILURE, 0, "Not an ELF file");
+		fail("Not an ELF file\n");
 
 	if (inhdr->e_ident[EI_CLASS] != ELFCLASS32)
-		error(EXIT_FAILURE, 0, "Unsupported ELF class");
+		fail("Unsupported ELF class\n");
 
 	swap = inhdr->e_ident[EI_DATA] != HOST_ORDER;
 
 	if (read_elf_half(inhdr->e_type, swap) != ET_DYN)
-		error(EXIT_FAILURE, 0, "Not a shared object");
+		fail("Not a shared object\n");
 
-	if (read_elf_half(inhdr->e_machine, swap) != EM_ARM) {
-		error(EXIT_FAILURE, 0, "Unsupported architecture %#x",
-			inhdr->e_machine);
-	}
+	if (read_elf_half(inhdr->e_machine, swap) != EM_ARM)
+		fail("Unsupported architecture %#x\n", inhdr->e_machine);
 
 	e_flags = read_elf_word(inhdr->e_flags, swap);
 
 	if (EF_ARM_EABI_VERSION(e_flags) != EF_ARM_EABI_VER5) {
-		error(EXIT_FAILURE, 0, "Unsupported EABI version %#x",
-			EF_ARM_EABI_VERSION(e_flags));
+		fail("Unsupported EABI version %#x\n",
+		     EF_ARM_EABI_VERSION(e_flags));
 	}
 
 	if (e_flags & EF_ARM_ABI_FLOAT_HARD)
-		error(EXIT_FAILURE, 0,
-			"Unexpected hard-float flag set in e_flags");
+		fail("Unexpected hard-float flag set in e_flags\n");
 
 	clear_soft_float = !!(e_flags & EF_ARM_ABI_FLOAT_SOFT);
 
 	outfd = open(outfile, O_RDWR | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
 	if (outfd < 0)
-		error(EXIT_FAILURE, errno, "Cannot open %s", outfile);
+		fail("Cannot open %s: %s\n", outfile, strerror(errno));
 
 	if (ftruncate(outfd, stat.st_size) != 0)
-		error(EXIT_FAILURE, errno, "Cannot truncate %s", outfile);
+		fail("Cannot truncate %s: %s\n", outfile, strerror(errno));
 
 	outbuf = mmap(NULL, stat.st_size, PROT_READ | PROT_WRITE, MAP_SHARED,
 		      outfd, 0);
 	if (outbuf == MAP_FAILED)
-		error(EXIT_FAILURE, errno, "Failed to map %s", outfile);
+		fail("Failed to map %s: %s\n", outfile, strerror(errno));
 
 	close(outfd);
@@ -195,7 +205,7 @@ int main(int argc, char **argv)
 	}
 
 	if (msync(outbuf, stat.st_size, MS_SYNC) != 0)
-		error(EXIT_FAILURE, errno, "Failed to sync %s", outfile);
+		fail("Failed to sync %s: %s\n", outfile, strerror(errno));
 
 	return EXIT_SUCCESS;
 }


@@ -23,9 +23,9 @@ config ARM64
 	select BUILDTIME_EXTABLE_SORT
 	select CLONE_BACKWARDS
 	select COMMON_CLK
-	select EDAC_SUPPORT
 	select CPU_PM if (SUSPEND || CPU_IDLE)
 	select DCACHE_WORD_ACCESS
+	select EDAC_SUPPORT
 	select GENERIC_ALLOCATOR
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_CLOCKEVENTS_BROADCAST if SMP


@@ -23,6 +23,16 @@
 		device_type = "memory";
 		reg = < 0x1 0x00000000 0x0 0x80000000 >; /* Updated by bootloader */
 	};
+
+	gpio-keys {
+		compatible = "gpio-keys";
+		button@1 {
+			label = "POWER";
+			linux,code = <116>;
+			linux,input-type = <0x1>;
+			interrupts = <0x0 0x2d 0x1>;
+		};
+	};
 };
 
 &pcie0clk {


@@ -1,6 +1,7 @@
 dtb-$(CONFIG_ARCH_VEXPRESS) += foundation-v8.dtb
 dtb-$(CONFIG_ARCH_VEXPRESS) += juno.dtb juno-r1.dtb
 dtb-$(CONFIG_ARCH_VEXPRESS) += rtsm_ve-aemv8a.dtb
+dtb-$(CONFIG_ARCH_VEXPRESS) += vexpress-v2f-1xv7-ca53x2.dtb
 
 always		:= $(dtb-y)
 subdir-y	:= $(dts-dirs)


@@ -0,0 +1,191 @@
/*
* ARM Ltd. Versatile Express
*
* LogicTile Express 20MG
* V2F-1XV7
*
* Cortex-A53 (2 cores) Soft Macrocell Model
*
* HBI-0247C
*/
/dts-v1/;
#include <dt-bindings/interrupt-controller/arm-gic.h>
/ {
model = "V2F-1XV7 Cortex-A53x2 SMM";
arm,hbi = <0x247>;
arm,vexpress,site = <0xf>;
compatible = "arm,vexpress,v2f-1xv7,ca53x2", "arm,vexpress,v2f-1xv7", "arm,vexpress";
interrupt-parent = <&gic>;
#address-cells = <2>;
#size-cells = <2>;
chosen {
stdout-path = "serial0:38400n8";
};
aliases {
serial0 = &v2m_serial0;
serial1 = &v2m_serial1;
serial2 = &v2m_serial2;
serial3 = &v2m_serial3;
i2c0 = &v2m_i2c_dvi;
i2c1 = &v2m_i2c_pcie;
};
cpus {
#address-cells = <2>;
#size-cells = <0>;
cpu@0 {
device_type = "cpu";
compatible = "arm,cortex-a53", "arm,armv8";
reg = <0 0>;
next-level-cache = <&L2_0>;
};
cpu@1 {
device_type = "cpu";
compatible = "arm,cortex-a53", "arm,armv8";
reg = <0 1>;
next-level-cache = <&L2_0>;
};
L2_0: l2-cache0 {
compatible = "cache";
};
};
memory@80000000 {
device_type = "memory";
reg = <0 0x80000000 0 0x80000000>; /* 2GB @ 2GB */
};
gic: interrupt-controller@2c001000 {
compatible = "arm,gic-400";
#interrupt-cells = <3>;
#address-cells = <0>;
interrupt-controller;
reg = <0 0x2c001000 0 0x1000>,
<0 0x2c002000 0 0x2000>,
<0 0x2c004000 0 0x2000>,
<0 0x2c006000 0 0x2000>;
interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_HIGH)>;
};
timer {
compatible = "arm,armv8-timer";
interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
<GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
<GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
<GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>;
};
pmu {
compatible = "arm,armv8-pmuv3";
interrupts = <GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>;
};
dcc {
compatible = "arm,vexpress,config-bus";
arm,vexpress,config-bridge = <&v2m_sysreg>;
smbclk: osc@4 {
/* SMC clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 4>;
freq-range = <40000000 40000000>;
#clock-cells = <0>;
clock-output-names = "smclk";
};
volt@0 {
/* VIO to expansion board above */
compatible = "arm,vexpress-volt";
arm,vexpress-sysreg,func = <2 0>;
regulator-name = "VIO_UP";
regulator-min-microvolt = <800000>;
regulator-max-microvolt = <1800000>;
regulator-always-on;
};
volt@1 {
/* 12V from power connector J6 */
compatible = "arm,vexpress-volt";
arm,vexpress-sysreg,func = <2 1>;
regulator-name = "12";
regulator-always-on;
};
temp@0 {
/* FPGA temperature */
compatible = "arm,vexpress-temp";
arm,vexpress-sysreg,func = <4 0>;
label = "FPGA";
};
};
smb {
compatible = "simple-bus";
#address-cells = <2>;
#size-cells = <1>;
ranges = <0 0 0 0x08000000 0x04000000>,
<1 0 0 0x14000000 0x04000000>,
<2 0 0 0x18000000 0x04000000>,
<3 0 0 0x1c000000 0x04000000>,
<4 0 0 0x0c000000 0x04000000>,
<5 0 0 0x10000000 0x04000000>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 63>;
interrupt-map = <0 0 0 &gic GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
<0 0 1 &gic GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
<0 0 2 &gic GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
<0 0 3 &gic GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
<0 0 4 &gic GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
<0 0 5 &gic GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
<0 0 6 &gic GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>,
<0 0 7 &gic GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
<0 0 8 &gic GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
<0 0 9 &gic GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
<0 0 10 &gic GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>,
<0 0 11 &gic GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>,
<0 0 12 &gic GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>,
<0 0 13 &gic GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>,
<0 0 14 &gic GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>,
<0 0 15 &gic GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>,
<0 0 16 &gic GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>,
<0 0 17 &gic GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>,
<0 0 18 &gic GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>,
<0 0 19 &gic GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>,
<0 0 20 &gic GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>,
<0 0 21 &gic GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>,
<0 0 22 &gic GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>,
<0 0 23 &gic GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>,
<0 0 24 &gic GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>,
<0 0 25 &gic GIC_SPI 25 IRQ_TYPE_LEVEL_HIGH>,
<0 0 26 &gic GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>,
<0 0 27 &gic GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>,
<0 0 28 &gic GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>,
<0 0 29 &gic GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>,
<0 0 30 &gic GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
<0 0 31 &gic GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
<0 0 32 &gic GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
<0 0 33 &gic GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>,
<0 0 34 &gic GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
<0 0 35 &gic GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
<0 0 36 &gic GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
<0 0 37 &gic GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>,
<0 0 38 &gic GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>,
<0 0 39 &gic GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>,
<0 0 40 &gic GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>,
<0 0 41 &gic GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>,
<0 0 42 &gic GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>;
/include/ "../../../../arm/boot/dts/vexpress-v2m-rs1.dtsi"
};
};


@@ -376,10 +376,19 @@
 		gic0: interrupt-controller@8010,00000000 {
 			compatible = "arm,gic-v3";
 			#interrupt-cells = <3>;
+			#address-cells = <2>;
+			#size-cells = <2>;
+			ranges;
 			interrupt-controller;
 			reg = <0x8010 0x00000000 0x0 0x010000>, /* GICD */
 			      <0x8010 0x80000000 0x0 0x600000>; /* GICR */
 			interrupts = <1 9 0xf04>;
+
+			its: gic-its@8010,00020000 {
+				compatible = "arm,gic-v3-its";
+				msi-controller;
+				reg = <0x8010 0x20000 0x0 0x200000>;
+			};
 		};
 
 		uaa0: serial@87e0,24000000 {

@@ -83,6 +83,7 @@ CONFIG_BLK_DEV_SD=y
 CONFIG_ATA=y
 CONFIG_SATA_AHCI=y
 CONFIG_SATA_AHCI_PLATFORM=y
+CONFIG_AHCI_CEVA=y
 CONFIG_AHCI_XGENE=y
 CONFIG_PATA_PLATFORM=y
 CONFIG_PATA_OF_PLATFORM=y


@@ -19,6 +19,14 @@
 #include <asm/psci.h>
 #include <asm/smp_plat.h>
 
+/* Macros for consistency checks of the GICC subtable of MADT */
+#define ACPI_MADT_GICC_LENGTH	\
+	(acpi_gbl_FADT.header.revision < 6 ? 76 : 80)
+
+#define BAD_MADT_GICC_ENTRY(entry, end)					\
+	(!(entry) || (unsigned long)(entry) + sizeof(*(entry)) > (end) ||	\
+	 (entry)->header.length != ACPI_MADT_GICC_LENGTH)
+
 /* Basic configuration for ACPI */
 #ifdef	CONFIG_ACPI
 /* ACPI table mapping after acpi_gbl_permanent_mmap is set */
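
A sketch of the intended use, mirroring the arch/arm64/kernel/smp.c hunk later
in this merge. The point of the macro is that a plain sizeof()-based
BAD_MADT_ENTRY check cannot work here, because the GICC subtable length differs
between ACPI 5.1 (76 bytes) and ACPI 6.0 (80 bytes):

	#include <linux/acpi.h>

	static int parse_gicc(struct acpi_subtable_header *header,
			      const unsigned long end)
	{
		struct acpi_madt_generic_interrupt *gicc =
			(struct acpi_madt_generic_interrupt *)header;

		/* Rejects NULL, table overruns, and a length matching
		 * neither ACPI revision. */
		if (BAD_MADT_GICC_ENTRY(gicc, end))
			return -EINVAL;

		return 0;
	}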


@@ -352,8 +352,8 @@ el1_inv:
 	// TODO: add support for undefined instructions in kernel mode
 	enable_dbg
 	mov	x0, sp
+	mov	x2, x1
 	mov	x1, #BAD_SYNC
-	mrs	x2, esr_el1
 	b	bad_mode
 ENDPROC(el1_sync)
 
@@ -553,7 +553,7 @@ el0_inv:
 	ct_user_exit
 	mov	x0, sp
 	mov	x1, #BAD_SYNC
-	mrs	x2, esr_el1
+	mov	x2, x25
 	bl	bad_mode
 	b	ret_to_user
 ENDPROC(el0_sync)


@@ -32,13 +32,11 @@
 
 ENTRY(compat_sys_sigreturn_wrapper)
 	mov	x0, sp
-	mov	x27, #0		// prevent syscall restart handling (why)
 	b	compat_sys_sigreturn
 ENDPROC(compat_sys_sigreturn_wrapper)
 
 ENTRY(compat_sys_rt_sigreturn_wrapper)
 	mov	x0, sp
-	mov	x27, #0		// prevent syscall restart handling (why)
 	b	compat_sys_rt_sigreturn
 ENDPROC(compat_sys_rt_sigreturn_wrapper)


@@ -438,7 +438,7 @@ acpi_parse_gic_cpu_interface(struct acpi_subtable_header *header,
 	struct acpi_madt_generic_interrupt *processor;
 
 	processor = (struct acpi_madt_generic_interrupt *)header;
-	if (BAD_MADT_ENTRY(processor, end))
+	if (BAD_MADT_GICC_ENTRY(processor, end))
 		return -EINVAL;
 
 	acpi_table_print_madt_entry(header);


@@ -4,5 +4,3 @@ obj-y				:= dma-mapping.o extable.o fault.o init.o \
 				   context.o proc.o pageattr.o
 obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
 obj-$(CONFIG_ARM64_PTDUMP)	+= dump.o
-
-CFLAGS_mmu.o			:= -I$(srctree)/scripts/dtc/libfdt/


@@ -1464,7 +1464,7 @@ static inline void handle_rx_packet(struct sync_port *port)
 		if (port->write_ts_idx == NBR_IN_DESCR)
 			port->write_ts_idx = 0;
 		idx = port->write_ts_idx++;
-		do_posix_clock_monotonic_gettime(&port->timestamp[idx]);
+		ktime_get_ts(&port->timestamp[idx]);
 		port->in_buffer_len += port->inbufchunk;
 	}
 	spin_unlock_irqrestore(&port->lock, flags);


@@ -2231,7 +2231,7 @@ config MIPS_CMP
 
 config MIPS_CPS
 	bool "MIPS Coherent Processing System support"
-	depends on SYS_SUPPORTS_MIPS_CPS && !64BIT
+	depends on SYS_SUPPORTS_MIPS_CPS
 	select MIPS_CM
 	select MIPS_CPC
 	select MIPS_CPS_PM if HOTPLUG_CPU


@@ -1,6 +1,6 @@
 /*
  * Copyright (C) 2010 Loongson Inc. & Lemote Inc. &
- *                    Insititute of Computing Technology
+ *                    Institute of Computing Technology
  * Author:  Xiang Gao, gaoxiang@ict.ac.cn
  *          Huacai Chen, chenhc@lemote.com
  *          Xiaofu Meng, Shuangshuang Zhang


@@ -23,6 +23,7 @@
 extern int smp_num_siblings;
 extern cpumask_t cpu_sibling_map[];
 extern cpumask_t cpu_core_map[];
+extern cpumask_t cpu_foreign_map;
 
 #define raw_smp_processor_id() (current_thread_info()->cpu)


@@ -600,7 +600,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 		break;
 
 	case blezl_op: /* not really i_format */
-		if (NO_R6EMU)
+		if (!insn.i_format.rt && NO_R6EMU)
 			goto sigill_r6;
 	case blez_op:
 		/*
@@ -635,7 +635,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 		break;
 
 	case bgtzl_op:
-		if (NO_R6EMU)
+		if (!insn.i_format.rt && NO_R6EMU)
 			goto sigill_r6;
 	case bgtz_op:
 		/*


@ -60,7 +60,7 @@ LEAF(mips_cps_core_entry)
nop nop
/* This is an NMI */ /* This is an NMI */
la k0, nmi_handler PTR_LA k0, nmi_handler
jr k0 jr k0
nop nop
@ -107,10 +107,10 @@ not_nmi:
mul t1, t1, t0 mul t1, t1, t0
mul t1, t1, t2 mul t1, t1, t2
li a0, KSEG0 li a0, CKSEG0
add a1, a0, t1 PTR_ADD a1, a0, t1
1: cache Index_Store_Tag_I, 0(a0) 1: cache Index_Store_Tag_I, 0(a0)
add a0, a0, t0 PTR_ADD a0, a0, t0
bne a0, a1, 1b bne a0, a1, 1b
nop nop
icache_done: icache_done:
@ -134,12 +134,12 @@ icache_done:
mul t1, t1, t0 mul t1, t1, t0
mul t1, t1, t2 mul t1, t1, t2
li a0, KSEG0 li a0, CKSEG0
addu a1, a0, t1 PTR_ADDU a1, a0, t1
subu a1, a1, t0 PTR_SUBU a1, a1, t0
1: cache Index_Store_Tag_D, 0(a0) 1: cache Index_Store_Tag_D, 0(a0)
bne a0, a1, 1b bne a0, a1, 1b
add a0, a0, t0 PTR_ADD a0, a0, t0
dcache_done: dcache_done:
/* Set Kseg0 CCA to that in s0 */ /* Set Kseg0 CCA to that in s0 */
@ -152,11 +152,11 @@ dcache_done:
/* Enter the coherent domain */ /* Enter the coherent domain */
li t0, 0xff li t0, 0xff
sw t0, GCR_CL_COHERENCE_OFS(v1) PTR_S t0, GCR_CL_COHERENCE_OFS(v1)
ehb ehb
/* Jump to kseg0 */ /* Jump to kseg0 */
la t0, 1f PTR_LA t0, 1f
jr t0 jr t0
nop nop
@ -178,9 +178,9 @@ dcache_done:
nop nop
/* Off we go! */ /* Off we go! */
lw t1, VPEBOOTCFG_PC(v0) PTR_L t1, VPEBOOTCFG_PC(v0)
lw gp, VPEBOOTCFG_GP(v0) PTR_L gp, VPEBOOTCFG_GP(v0)
lw sp, VPEBOOTCFG_SP(v0) PTR_L sp, VPEBOOTCFG_SP(v0)
jr t1 jr t1
nop nop
 END(mips_cps_core_entry)
@@ -217,7 +217,7 @@ LEAF(excep_intex)
 .org 0x480
 LEAF(excep_ejtag)
-la k0, ejtag_debug_handler
+PTR_LA k0, ejtag_debug_handler
 jr k0
 nop
 END(excep_ejtag)
@@ -229,7 +229,7 @@ LEAF(mips_cps_core_init)
 nop
 .set push
-.set mips32r2
+.set mips64r2
 .set mt
 /* Only allow 1 TC per VPE to execute... */
@@ -237,7 +237,7 @@ LEAF(mips_cps_core_init)
 /* ...and for the moment only 1 VPE */
 dvpe
-la t1, 1f
+PTR_LA t1, 1f
 jr.hb t1
 nop
@@ -250,25 +250,25 @@ LEAF(mips_cps_core_init)
 mfc0 t0, CP0_MVPCONF0
 srl t0, t0, MVPCONF0_PVPE_SHIFT
 andi t0, t0, (MVPCONF0_PVPE >> MVPCONF0_PVPE_SHIFT)
-addiu t7, t0, 1
+addiu ta3, t0, 1
 /* If there's only 1, we're done */
 beqz t0, 2f
 nop
 /* Loop through each VPE within this core */
-li t5, 1
+li ta1, 1
 1: /* Operate on the appropriate TC */
-mtc0 t5, CP0_VPECONTROL
+mtc0 ta1, CP0_VPECONTROL
 ehb
 /* Bind TC to VPE (1:1 TC:VPE mapping) */
-mttc0 t5, CP0_TCBIND
+mttc0 ta1, CP0_TCBIND
 /* Set exclusive TC, non-active, master */
 li t0, VPECONF0_MVP
-sll t1, t5, VPECONF0_XTC_SHIFT
+sll t1, ta1, VPECONF0_XTC_SHIFT
 or t0, t0, t1
 mttc0 t0, CP0_VPECONF0
@@ -280,8 +280,8 @@ LEAF(mips_cps_core_init)
 mttc0 t0, CP0_TCHALT
 /* Next VPE */
-addiu t5, t5, 1
-slt t0, t5, t7
+addiu ta1, ta1, 1
+slt t0, ta1, ta3
 bnez t0, 1b
 nop
@@ -298,19 +298,19 @@ LEAF(mips_cps_core_init)
 LEAF(mips_cps_boot_vpes)
 /* Retrieve CM base address */
-la t0, mips_cm_base
-lw t0, 0(t0)
+PTR_LA t0, mips_cm_base
+PTR_L t0, 0(t0)
 /* Calculate a pointer to this cores struct core_boot_config */
-lw t0, GCR_CL_ID_OFS(t0)
+PTR_L t0, GCR_CL_ID_OFS(t0)
 li t1, COREBOOTCFG_SIZE
 mul t0, t0, t1
-la t1, mips_cps_core_bootcfg
-lw t1, 0(t1)
-addu t0, t0, t1
+PTR_LA t1, mips_cps_core_bootcfg
+PTR_L t1, 0(t1)
+PTR_ADDU t0, t0, t1
 /* Calculate this VPEs ID. If the core doesn't support MT use 0 */
-has_mt t6, 1f
+has_mt ta2, 1f
 li t9, 0
 /* Find the number of VPEs present in the core */
@@ -334,24 +334,24 @@ LEAF(mips_cps_boot_vpes)
 1: /* Calculate a pointer to this VPEs struct vpe_boot_config */
 li t1, VPEBOOTCFG_SIZE
 mul v0, t9, t1
-lw t7, COREBOOTCFG_VPECONFIG(t0)
-addu v0, v0, t7
+PTR_L ta3, COREBOOTCFG_VPECONFIG(t0)
+PTR_ADDU v0, v0, ta3
 #ifdef CONFIG_MIPS_MT
 /* If the core doesn't support MT then return */
-bnez t6, 1f
+bnez ta2, 1f
 nop
 jr ra
 nop
 .set push
-.set mips32r2
+.set mips64r2
 .set mt
 1: /* Enter VPE configuration state */
 dvpe
-la t1, 1f
+PTR_LA t1, 1f
 jr.hb t1
 nop
 1: mfc0 t1, CP0_MVPCONTROL
@@ -360,12 +360,12 @@ LEAF(mips_cps_boot_vpes)
 ehb
 /* Loop through each VPE */
-lw t6, COREBOOTCFG_VPEMASK(t0)
-move t8, t6
-li t5, 0
+PTR_L ta2, COREBOOTCFG_VPEMASK(t0)
+move t8, ta2
+li ta1, 0
 /* Check whether the VPE should be running. If not, skip it */
-1: andi t0, t6, 1
+1: andi t0, ta2, 1
 beqz t0, 2f
 nop
@@ -373,7 +373,7 @@ LEAF(mips_cps_boot_vpes)
 mfc0 t0, CP0_VPECONTROL
 ori t0, t0, VPECONTROL_TARGTC
 xori t0, t0, VPECONTROL_TARGTC
-or t0, t0, t5
+or t0, t0, ta1
 mtc0 t0, CP0_VPECONTROL
 ehb
@@ -384,8 +384,8 @@ LEAF(mips_cps_boot_vpes)
 /* Calculate a pointer to the VPEs struct vpe_boot_config */
 li t0, VPEBOOTCFG_SIZE
-mul t0, t0, t5
-addu t0, t0, t7
+mul t0, t0, ta1
+addu t0, t0, ta3
 /* Set the TC restart PC */
 lw t1, VPEBOOTCFG_PC(t0)
@@ -423,9 +423,9 @@ LEAF(mips_cps_boot_vpes)
 mttc0 t0, CP0_VPECONF0
 /* Next VPE */
-2: srl t6, t6, 1
-addiu t5, t5, 1
-bnez t6, 1b
+2: srl ta2, ta2, 1
+addiu ta1, ta1, 1
+bnez ta2, 1b
 nop
 /* Leave VPE configuration state */
@@ -445,7 +445,7 @@ LEAF(mips_cps_boot_vpes)
 /* This VPE should be offline, halt the TC */
 li t0, TCHALT_H
 mtc0 t0, CP0_TCHALT
-la t0, 1f
+PTR_LA t0, 1f
 1: jr.hb t0
 nop
@@ -466,10 +466,10 @@ LEAF(mips_cps_boot_vpes)
 .set noat
 lw $1, TI_CPU(gp)
 sll $1, $1, LONGLOG
-la \dest, __per_cpu_offset
+PTR_LA \dest, __per_cpu_offset
 addu $1, $1, \dest
 lw $1, 0($1)
-la \dest, cps_cpu_state
+PTR_LA \dest, cps_cpu_state
 addu \dest, \dest, $1
 .set pop
 .endm
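The la/lw/addu to PTR_LA/PTR_L/PTR_ADDU substitutions above are what let this CPS boot code build for 64-bit kernels: the plain forms are fixed at 32 bits, while the PTR_* macros expand to the width the kernel was built for. The t4-t7 to ta1/ta3 renames matter for the same reason, since the 64-bit ABIs reuse those registers as a4-a7 and only the ta* aliases name the same physical registers under both ABIs. A minimal userspace C sketch of the select-width-at-build-time idea (ptr_t and PTR_FMT are illustrative names of ours, not kernel ones):

    #include <stdio.h>
    #include <inttypes.h>

    #if UINTPTR_MAX > 0xffffffffu
    typedef uint64_t ptr_t;          /* 64-bit build: PTR_LA would be dla */
    #define PTR_FMT PRIx64
    #else
    typedef uint32_t ptr_t;          /* 32-bit build: PTR_LA would be la */
    #define PTR_FMT PRIx32
    #endif

    int main(void)
    {
        int target = 42;
        ptr_t p = (ptr_t)(uintptr_t)&target;   /* the "load address" step */

        /* the format width tracks the build, just as PTR_L tracks it */
        printf("address: 0x%" PTR_FMT "\n", p);
        return 0;
    }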


@@ -73,10 +73,11 @@ NESTED(handle_sys, PT_SIZE, sp)
 .set noreorder
 .set nomacro
-1: user_lw(t5, 16(t0)) # argument #5 from usp
-4: user_lw(t6, 20(t0)) # argument #6 from usp
-3: user_lw(t7, 24(t0)) # argument #7 from usp
-2: user_lw(t8, 28(t0)) # argument #8 from usp
+load_a4: user_lw(t5, 16(t0)) # argument #5 from usp
+load_a5: user_lw(t6, 20(t0)) # argument #6 from usp
+load_a6: user_lw(t7, 24(t0)) # argument #7 from usp
+load_a7: user_lw(t8, 28(t0)) # argument #8 from usp
+loads_done:
 sw t5, 16(sp) # argument #5 to ksp
 sw t6, 20(sp) # argument #6 to ksp
@@ -85,10 +86,10 @@ NESTED(handle_sys, PT_SIZE, sp)
 .set pop
 .section __ex_table,"a"
-PTR 1b, bad_stack
-PTR 2b, bad_stack
-PTR 3b, bad_stack
-PTR 4b, bad_stack
+PTR load_a4, bad_stack_a4
+PTR load_a5, bad_stack_a5
+PTR load_a6, bad_stack_a6
+PTR load_a7, bad_stack_a7
 .previous
 lw t0, TI_FLAGS($28) # syscall tracing enabled?
@@ -153,8 +154,8 @@ syscall_trace_entry:
 /* ------------------------------------------------------------------------ */
 /*
- * The stackpointer for a call with more than 4 arguments is bad.
- * We probably should handle this case a bit more drastic.
+ * Our open-coded access area sanity test for the stack pointer
+ * failed. We probably should handle this case a bit more drastic.
 */
 bad_stack:
 li v0, EFAULT
@@ -163,6 +164,22 @@ bad_stack:
 sw t0, PT_R7(sp)
 j o32_syscall_exit
+bad_stack_a4:
+li t5, 0
+b load_a5
+bad_stack_a5:
+li t6, 0
+b load_a6
+bad_stack_a6:
+li t7, 0
+b load_a7
+bad_stack_a7:
+li t8, 0
+b loads_done
 /*
 * The system call does not exist in this kernel
 */
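The new labels give each user-stack load its own exception-table fixup: instead of one shared bad_stack target that fails the whole syscall with EFAULT, a fault on argument N now zeroes just that register and falls through to the next load. A hedged C sketch of that control flow (fetch_user is a made-up stand-in for user_lw, not a real kernel helper):

    #include <stdio.h>

    /* Stand-in for user_lw(): returns nonzero when the access "faults". */
    static int fetch_user(const long *usp, int idx, long *val)
    {
        if (!usp)                       /* simulate an unreadable stack */
            return -1;
        *val = usp[idx];
        return 0;
    }

    /* Mirror of load_a4..load_a7 / bad_stack_a4..a7: a faulting slot
     * yields 0 for that argument and execution continues with the next
     * load, rather than aborting the whole syscall with EFAULT. */
    static void load_extra_args(const long *usp, long arg[4])
    {
        for (int i = 0; i < 4; i++)
            if (fetch_user(usp, 4 + i, &arg[i]))
                arg[i] = 0;             /* bad_stack_aN: li tN, 0 */
    }

    int main(void)
    {
        long stack[8] = {0, 1, 2, 3, 44, 55, 66, 77}, arg[4];

        load_extra_args(stack, arg);    /* prints 44 55 66 77 */
        printf("%ld %ld %ld %ld\n", arg[0], arg[1], arg[2], arg[3]);
        load_extra_args(NULL, arg);     /* every slot faults: all zeros */
        printf("%ld %ld %ld %ld\n", arg[0], arg[1], arg[2], arg[3]);
        return 0;
    }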


@@ -69,16 +69,17 @@ NESTED(handle_sys, PT_SIZE, sp)
 daddu t1, t0, 32
 bltz t1, bad_stack
-1: lw a4, 16(t0) # argument #5 from usp
-2: lw a5, 20(t0) # argument #6 from usp
-3: lw a6, 24(t0) # argument #7 from usp
-4: lw a7, 28(t0) # argument #8 from usp (for indirect syscalls)
+load_a4: lw a4, 16(t0) # argument #5 from usp
+load_a5: lw a5, 20(t0) # argument #6 from usp
+load_a6: lw a6, 24(t0) # argument #7 from usp
+load_a7: lw a7, 28(t0) # argument #8 from usp
+loads_done:
 .section __ex_table,"a"
-PTR 1b, bad_stack
-PTR 2b, bad_stack
-PTR 3b, bad_stack
-PTR 4b, bad_stack
+PTR load_a4, bad_stack_a4
+PTR load_a5, bad_stack_a5
+PTR load_a6, bad_stack_a6
+PTR load_a7, bad_stack_a7
 .previous
 li t1, _TIF_WORK_SYSCALL_ENTRY
@@ -167,6 +168,22 @@ bad_stack:
 sd t0, PT_R7(sp)
 j o32_syscall_exit
+bad_stack_a4:
+li a4, 0
+b load_a5
+bad_stack_a5:
+li a5, 0
+b load_a6
+bad_stack_a6:
+li a6, 0
+b load_a7
+bad_stack_a7:
+li a7, 0
+b loads_done
 not_o32_scall:
 /*
 * This is not an o32 compatibility syscall, pass it on
@@ -383,7 +400,7 @@ EXPORT(sys32_call_table)
 PTR sys_connect /* 4170 */
 PTR sys_getpeername
 PTR sys_getsockname
-PTR sys_getsockopt
+PTR compat_sys_getsockopt
 PTR sys_listen
 PTR compat_sys_recv /* 4175 */
 PTR compat_sys_recvfrom


@@ -337,6 +337,11 @@ static void __init bootmem_init(void)
 min_low_pfn = start;
 if (end <= reserved_end)
 continue;
+#ifdef CONFIG_BLK_DEV_INITRD
+/* mapstart should be after initrd_end */
+if (initrd_end && end <= (unsigned long)PFN_UP(__pa(initrd_end)))
+continue;
+#endif
 if (start >= mapstart)
 continue;
 mapstart = max(reserved_end, start);
@@ -366,14 +371,6 @@ static void __init bootmem_init(void)
 max_low_pfn = PFN_DOWN(HIGHMEM_START);
 }
-#ifdef CONFIG_BLK_DEV_INITRD
-/*
- * mapstart should be after initrd_end
- */
-if (initrd_end)
-mapstart = max(mapstart, (unsigned long)PFN_UP(__pa(initrd_end)));
-#endif
 /*
 * Initialize the boot-time allocator with low memory only.
 */
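Moving the initrd check into the region loop means candidate regions that end at or below the initrd are rejected up front, rather than bumping mapstart after the fact; the net effect is the same lower bound computed in one pass. A small sketch of the selection logic, with made-up region data:

    #include <stdio.h>

    struct region { unsigned long start, end; };   /* in PFNs, invented */

    /* Pick the lowest usable start PFN that clears both the reserved
     * area and (when present) the initrd, mirroring the reordered
     * checks in bootmem_init() above. */
    static unsigned long pick_mapstart(const struct region *r, int n,
                                       unsigned long reserved_end,
                                       unsigned long initrd_end_pfn)
    {
        unsigned long mapstart = ~0UL;

        for (int i = 0; i < n; i++) {
            if (r[i].end <= reserved_end)
                continue;               /* wholly inside reserved area */
            if (initrd_end_pfn && r[i].end <= initrd_end_pfn)
                continue;               /* would land inside the initrd */
            if (r[i].start >= mapstart)
                continue;
            mapstart = r[i].start > reserved_end ? r[i].start : reserved_end;
        }
        return mapstart;
    }

    int main(void)
    {
        struct region mem[] = { {16, 128}, {256, 512} };

        /* first region ends below the initrd, so 256 wins */
        printf("mapstart=%lu\n", pick_mapstart(mem, 2, 32, 200));
        return 0;
    }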


@@ -133,7 +133,7 @@ static void __init cps_prepare_cpus(unsigned int max_cpus)
 /*
 * Patch the start of mips_cps_core_entry to provide:
 *
- *   v0 = CM base address
+ *   v1 = CM base address
 *   s0 = kseg0 CCA
 */
 entry_code = (u32 *)&mips_cps_core_entry;
@@ -369,7 +369,7 @@ void play_dead(void)
 static void wait_for_sibling_halt(void *ptr_cpu)
 {
-unsigned cpu = (unsigned)ptr_cpu;
+unsigned cpu = (unsigned long)ptr_cpu;
 unsigned vpe_id = cpu_vpe_id(&cpu_data[cpu]);
 unsigned halted;
 unsigned long flags;
@@ -430,7 +430,7 @@ static void cps_cpu_die(unsigned int cpu)
 */
 err = smp_call_function_single(cpu_death_sibling,
 wait_for_sibling_halt,
-(void *)cpu, 1);
+(void *)(unsigned long)cpu, 1);
 if (err)
 panic("Failed to call remote sibling CPU\n");
 }


@@ -63,6 +63,13 @@ EXPORT_SYMBOL(cpu_sibling_map);
 cpumask_t cpu_core_map[NR_CPUS] __read_mostly;
 EXPORT_SYMBOL(cpu_core_map);
+/*
+ * A logical cpu mask containing only one VPE per core to
+ * reduce the number of IPIs on large MT systems.
+ */
+cpumask_t cpu_foreign_map __read_mostly;
+EXPORT_SYMBOL(cpu_foreign_map);
 /* representing cpus for which sibling maps can be computed */
 static cpumask_t cpu_sibling_setup_map;
@@ -103,6 +110,29 @@ static inline void set_cpu_core_map(int cpu)
 }
 }
+/*
+ * Calculate a new cpu_foreign_map mask whenever a
+ * new cpu appears or disappears.
+ */
+static inline void calculate_cpu_foreign_map(void)
+{
+int i, k, core_present;
+cpumask_t temp_foreign_map;
+
+/* Re-calculate the mask */
+for_each_online_cpu(i) {
+core_present = 0;
+for_each_cpu(k, &temp_foreign_map)
+if (cpu_data[i].package == cpu_data[k].package &&
+    cpu_data[i].core == cpu_data[k].core)
+core_present = 1;
+if (!core_present)
+cpumask_set_cpu(i, &temp_foreign_map);
+}
+
+cpumask_copy(&cpu_foreign_map, &temp_foreign_map);
+}
 struct plat_smp_ops *mp_ops;
 EXPORT_SYMBOL(mp_ops);
@@ -146,6 +176,8 @@ asmlinkage void start_secondary(void)
 set_cpu_sibling_map(cpu);
 set_cpu_core_map(cpu);
+calculate_cpu_foreign_map();
 cpumask_set_cpu(cpu, &cpu_callin_map);
 synchronise_count_slave(cpu);
@@ -173,9 +205,18 @@ void __irq_entry smp_call_function_interrupt(void)
 static void stop_this_cpu(void *dummy)
 {
 /*
- * Remove this CPU:
+ * Remove this CPU. Be a bit slow here and
+ * set the bits for every online CPU so we don't miss
+ * any IPI whilst taking this VPE down.
 */
+cpumask_copy(&cpu_foreign_map, cpu_online_mask);
+
+/* Make it visible to every other CPU */
+smp_mb();
 set_cpu_online(smp_processor_id(), false);
+calculate_cpu_foreign_map();
 local_irq_disable();
 while (1);
 }
@@ -197,6 +238,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 mp_ops->prepare_cpus(max_cpus);
 set_cpu_sibling_map(0);
 set_cpu_core_map(0);
+calculate_cpu_foreign_map();
 #ifndef CONFIG_HOTPLUG_CPU
 init_cpu_present(cpu_possible_mask);
 #endif
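calculate_cpu_foreign_map() keeps exactly one VPE per physical core in cpu_foreign_map, so a broadcast over that mask interrupts each core once instead of once per hardware thread. The same mask construction with plain bitmasks (the topology arrays here are invented for illustration):

    #include <stdio.h>

    #define NCPUS 8

    /* One entry per cpu: which package/core it lives on (made up). */
    static const int package[NCPUS] = {0, 0, 0, 0, 0, 0, 0, 0};
    static const int core[NCPUS]    = {0, 0, 1, 1, 2, 2, 3, 3};

    /* Keep one representative cpu per core, as calculate_cpu_foreign_map()
     * does with cpumasks: later siblings of an already-picked core are
     * left out, so a cross-core broadcast IPIs each core only once. */
    static unsigned build_foreign_map(unsigned online)
    {
        unsigned map = 0;

        for (int i = 0; i < NCPUS; i++) {
            if (!(online & (1u << i)))
                continue;
            int seen = 0;
            for (int k = 0; k < NCPUS; k++)
                if ((map & (1u << k)) &&
                    package[i] == package[k] && core[i] == core[k])
                    seen = 1;
            if (!seen)
                map |= 1u << i;
        }
        return map;
    }

    int main(void)
    {
        /* two VPEs per core, all online: expect 0x55 */
        printf("foreign map: 0x%02x\n", build_foreign_map(0xff));
        return 0;
    }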


@@ -2130,10 +2130,10 @@ void per_cpu_trap_init(bool is_boot_cpu)
 BUG_ON(current->mm);
 enter_lazy_tlb(&init_mm, current);
-        /* Boot CPU's cache setup in setup_arch(). */
-        if (!is_boot_cpu)
-                cpu_cache_init();
-        tlb_init();
+	/* Boot CPU's cache setup in setup_arch(). */
+	if (!is_boot_cpu)
+		cpu_cache_init();
+	tlb_init();
 TLBMISS_HANDLER_SETUP();
 }


@@ -3,7 +3,7 @@
 * Author: Jun Sun, jsun@mvista.com or jsun@junsun.net
 * Copyright (C) 2000, 2001 Ralf Baechle (ralf@gnu.org)
 *
- * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology
+ * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology
 * Author: Fuxin Zhang, zhangfx@lemote.com
 *
 * This program is free software; you can redistribute it and/or modify it


@@ -6,7 +6,7 @@
 * Copyright 2003 ICT CAS
 * Author: Michael Guo <guoyi@ict.ac.cn>
 *
- * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology
+ * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology
 * Author: Fuxin Zhang, zhangfx@lemote.com
 *
 * Copyright (C) 2009 Lemote Inc.


@@ -1,7 +1,7 @@
 /*
 * CS5536 General timer functions
 *
- * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology
+ * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology
 * Author: Yanhua, yanh@lemote.com
 *
 * Copyright (C) 2009 Lemote Inc.


@@ -6,7 +6,7 @@
 * Copyright 2003 ICT CAS
 * Author: Michael Guo <guoyi@ict.ac.cn>
 *
- * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology
+ * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology
 * Author: Fuxin Zhang, zhangfx@lemote.com
 *
 * Copyright (C) 2009 Lemote Inc.


@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology
+ * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology
 * Author: Fuxin Zhang, zhangfx@lemote.com
 *
 * This program is free software; you can redistribute it and/or modify it


@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology
+ * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology
 * Author: Fuxin Zhang, zhangfx@lemote.com
 *
 * This program is free software; you can redistribute it and/or modify it


@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology
+ * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology
 * Author: Fuxin Zhang, zhangfx@lemote.com
 *
 * This program is free software; you can redistribute it and/or modify it


@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2006 - 2008 Lemote Inc. & Insititute of Computing Technology
+ * Copyright (C) 2006 - 2008 Lemote Inc. & Institute of Computing Technology
 * Author: Yanhua, yanh@lemote.com
 *
 * This file is subject to the terms and conditions of the GNU General Public
@@ -15,7 +15,7 @@
 #include <linux/spinlock.h>
 #include <asm/clock.h>
-#include <asm/mach-loongson/loongson.h>
+#include <asm/mach-loongson64/loongson.h>
 static LIST_HEAD(clock_list);
 static DEFINE_SPINLOCK(clock_lock);


@@ -1,6 +1,6 @@
 /*
 * Copyright (C) 2010 Loongson Inc. & Lemote Inc. &
- *                    Insititute of Computing Technology
+ *                    Institute of Computing Technology
 * Author:  Xiang Gao, gaoxiang@ict.ac.cn
 *          Huacai Chen, chenhc@lemote.com
 *          Xiaofu Meng, Shuangshuang Zhang


@@ -451,7 +451,7 @@ static int isBranchInstr(struct pt_regs *regs, struct mm_decoded_insn dec_insn,
 /* Fall through */
 case jr_op:
 /* For R6, JR already emulated in jalr_op */
-if (NO_R6EMU && insn.r_format.opcode == jr_op)
+if (NO_R6EMU && insn.r_format.func == jr_op)
 break;
 *contpc = regs->regs[insn.r_format.rs];
 return 1;
@@ -551,7 +551,7 @@ static int isBranchInstr(struct pt_regs *regs, struct mm_decoded_insn dec_insn,
 dec_insn.next_pc_inc;
 return 1;
 case blezl_op:
-if (NO_R6EMU)
+if (!insn.i_format.rt && NO_R6EMU)
 break;
 case blez_op:
@@ -588,7 +588,7 @@ static int isBranchInstr(struct pt_regs *regs, struct mm_decoded_insn dec_insn,
 dec_insn.next_pc_inc;
 return 1;
 case bgtzl_op:
-if (NO_R6EMU)
+if (!insn.i_format.rt && NO_R6EMU)
 break;
 case bgtz_op:
 /*
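Two decoding fixes meet here: JR must be identified by the function field of an R-format word (the opcode field there is always SPECIAL), and BLEZL/BGTZL are only the branch-likely instructions when rt is zero, because MIPS R6 reuses those encodings with rt != 0 for compact branches. A sketch of the rt test on a hand-built BLEZL word (the bit-field layout shown assumes a little-endian host, as the kernel's union mips_instruction does):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* MIPS I-type fields: opcode | rs | rt | immediate */
    struct i_format { unsigned imm:16, rt:5, rs:5, opcode:6; };

    int main(void)
    {
        /* blezl $3, +4 -> opcode 0x16, rs=3, rt=0, imm=1 */
        uint32_t word = (0x16u << 26) | (3u << 21) | (0u << 16) | 1u;
        struct i_format insn;

        memcpy(&insn, &word, sizeof(insn));   /* reinterpret the word */

        /* Genuine BLEZL only when rt == 0; other rt values are R6
         * compact branches and must not take the NO_R6EMU bail-out. */
        printf("rt=%u -> %s\n", insn.rt,
               insn.rt == 0 ? "treat as BLEZL" : "R6 compact branch");
        return 0;
    }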


@@ -37,6 +37,7 @@
 #include <asm/cacheflush.h> /* for run_uncached() */
 #include <asm/traps.h>
 #include <asm/dma-coherence.h>
+#include <asm/mips-cm.h>
 /*
 * Special Variant of smp_call_function for use by cache functions:
@@ -51,9 +52,16 @@ static inline void r4k_on_each_cpu(void (*func) (void *info), void *info)
 {
 preempt_disable();
-#ifndef CONFIG_MIPS_MT_SMP
-smp_call_function(func, info, 1);
-#endif
+/*
+ * The Coherent Manager propagates address-based cache ops to other
+ * cores but not index-based ops. However, r4k_on_each_cpu is used
+ * in both cases so there is no easy way to tell what kind of op is
+ * executed to the other cores. The best we can probably do is
+ * to restrict that call when a CM is not present because both
+ * CM-based SMP protocols (CMP & CPS) restrict index-based cache ops.
+ */
+if (!mips_cm_present())
+smp_call_function_many(&cpu_foreign_map, func, info, 1);
 func(info);
 preempt_enable();
 }
@@ -937,7 +945,9 @@ static void b5k_instruction_hazard(void)
 }
 static char *way_string[] = { NULL, "direct mapped", "2-way",
-"3-way", "4-way", "5-way", "6-way", "7-way", "8-way"
+"3-way", "4-way", "5-way", "6-way", "7-way", "8-way",
+"9-way", "10-way", "11-way", "12-way",
+"13-way", "14-way", "15-way", "16-way",
 };
 static void probe_pcache(void)


@@ -119,18 +119,24 @@ void read_persistent_clock(struct timespec *ts)
 int get_c0_fdc_int(void)
 {
-int mips_cpu_fdc_irq;
+/*
+ * Some cores claim the FDC is routable through the GIC, but it doesn't
+ * actually seem to be connected for those Malta bitstreams.
+ */
+switch (current_cpu_type()) {
+case CPU_INTERAPTIV:
+case CPU_PROAPTIV:
+return -1;
+};
 if (cpu_has_veic)
-mips_cpu_fdc_irq = -1;
+return -1;
 else if (gic_present)
-mips_cpu_fdc_irq = gic_get_c0_fdc_int();
+return gic_get_c0_fdc_int();
 else if (cp0_fdc_irq >= 0)
-mips_cpu_fdc_irq = MIPS_CPU_IRQ_BASE + cp0_fdc_irq;
+return MIPS_CPU_IRQ_BASE + cp0_fdc_irq;
 else
-mips_cpu_fdc_irq = -1;
-
-return mips_cpu_fdc_irq;
+return -1;
 }
 int get_c0_perfcount_int(void)


@@ -63,13 +63,19 @@ void __init plat_mem_setup(void)
 plat_setup_iocoherency();
 }
 #define DEFAULT_CPC_BASE_ADDR 0x1bde0000
+#define DEFAULT_CDMM_BASE_ADDR 0x1bdd0000
 phys_addr_t mips_cpc_default_phys_base(void)
 {
 return DEFAULT_CPC_BASE_ADDR;
 }
+phys_addr_t mips_cdmm_phys_base(void)
+{
+return DEFAULT_CDMM_BASE_ADDR;
+}
 static void __init mips_nmi_setup(void)
 {
 void *base;


@@ -27,6 +27,11 @@ int get_c0_perfcount_int(void)
 return gic_get_c0_perfcount_int();
 }
+int get_c0_fdc_int(void)
+{
+return gic_get_c0_fdc_int();
+}
 void __init plat_time_init(void)
 {
 struct device_node *np;


@@ -16,7 +16,7 @@
 #include <asm/processor.h>
 #include <asm/cache.h>
-extern spinlock_t pa_dbit_lock;
+extern spinlock_t pa_tlb_lock;
 /*
 * kern_addr_valid(ADDR) tests if ADDR is pointing to valid kernel
@@ -33,6 +33,19 @@ extern spinlock_t pa_tlb_lock;
 */
 #define kern_addr_valid(addr) (1)
+/* Purge data and instruction TLB entries. Must be called holding
+ * the pa_tlb_lock. The TLB purge instructions are slow on SMP
+ * machines since the purge must be broadcast to all CPUs.
+ */
+static inline void purge_tlb_entries(struct mm_struct *mm, unsigned long addr)
+{
+mtsp(mm->context, 1);
+pdtlb(addr);
+if (unlikely(split_tlb))
+pitlb(addr);
+}
 /* Certain architectures need to do special things when PTEs
 * within a page table are directly modified. Thus, the following
 * hook is made available.
@@ -42,15 +55,20 @@
 *(pteptr) = (pteval); \
 } while(0)
-extern void purge_tlb_entries(struct mm_struct *, unsigned long);
+#define pte_inserted(x) \
+((pte_val(x) & (_PAGE_PRESENT|_PAGE_ACCESSED)) \
+== (_PAGE_PRESENT|_PAGE_ACCESSED))
 #define set_pte_at(mm, addr, ptep, pteval) \
 do { \
+pte_t old_pte; \
 unsigned long flags; \
-spin_lock_irqsave(&pa_dbit_lock, flags); \
-set_pte(ptep, pteval); \
-purge_tlb_entries(mm, addr); \
-spin_unlock_irqrestore(&pa_dbit_lock, flags); \
+spin_lock_irqsave(&pa_tlb_lock, flags); \
+old_pte = *ptep; \
+set_pte(ptep, pteval); \
+if (pte_inserted(old_pte)) \
+purge_tlb_entries(mm, addr); \
+spin_unlock_irqrestore(&pa_tlb_lock, flags); \
 } while (0)
 #endif /* !__ASSEMBLY__ */
@@ -268,7 +286,7 @@ extern unsigned long *empty_zero_page;
 #define pte_none(x) (pte_val(x) == 0)
 #define pte_present(x) (pte_val(x) & _PAGE_PRESENT)
-#define pte_clear(mm,addr,xp) do { pte_val(*(xp)) = 0; } while (0)
+#define pte_clear(mm, addr, xp) set_pte_at(mm, addr, xp, __pte(0))
 #define pmd_flag(x) (pmd_val(x) & PxD_FLAG_MASK)
 #define pmd_address(x) ((unsigned long)(pmd_val(x) &~ PxD_FLAG_MASK) << PxD_VALUE_SHIFT)
@@ -435,15 +453,15 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned
 if (!pte_young(*ptep))
 return 0;
-spin_lock_irqsave(&pa_dbit_lock, flags);
+spin_lock_irqsave(&pa_tlb_lock, flags);
 pte = *ptep;
 if (!pte_young(pte)) {
-spin_unlock_irqrestore(&pa_dbit_lock, flags);
+spin_unlock_irqrestore(&pa_tlb_lock, flags);
 return 0;
 }
 set_pte(ptep, pte_mkold(pte));
 purge_tlb_entries(vma->vm_mm, addr);
-spin_unlock_irqrestore(&pa_dbit_lock, flags);
+spin_unlock_irqrestore(&pa_tlb_lock, flags);
 return 1;
 }
@@ -453,11 +471,12 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
 pte_t old_pte;
 unsigned long flags;
-spin_lock_irqsave(&pa_dbit_lock, flags);
+spin_lock_irqsave(&pa_tlb_lock, flags);
 old_pte = *ptep;
-pte_clear(mm,addr,ptep);
-purge_tlb_entries(mm, addr);
-spin_unlock_irqrestore(&pa_dbit_lock, flags);
+set_pte(ptep, __pte(0));
+if (pte_inserted(old_pte))
+purge_tlb_entries(mm, addr);
+spin_unlock_irqrestore(&pa_tlb_lock, flags);
 return old_pte;
 }
@@ -465,10 +484,10 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
 static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 {
 unsigned long flags;
-spin_lock_irqsave(&pa_dbit_lock, flags);
+spin_lock_irqsave(&pa_tlb_lock, flags);
 set_pte(ptep, pte_wrprotect(*ptep));
 purge_tlb_entries(mm, addr);
-spin_unlock_irqrestore(&pa_dbit_lock, flags);
+spin_unlock_irqrestore(&pa_tlb_lock, flags);
 }
 #define pte_same(A,B) (pte_val(A) == pte_val(B))
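The renamed pa_tlb_lock now guards the PTE update and the TLB purge as one unit, and pte_inserted() skips the expensive broadcast purge whenever the old PTE could never have been loaded into a TLB (it was never simultaneously present and accessed). A userspace sketch of that fast path, using a pthread mutex in place of the spinlock and a single fake PTE:

    #include <stdio.h>
    #include <pthread.h>

    #define PRESENT  0x1u
    #define ACCESSED 0x2u

    static pthread_mutex_t tlb_lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned pte;            /* one fake PTE */
    static int purges;              /* count of (slow) TLB purges */

    /* A PTE can only have been pulled into the TLB if the miss handler
     * saw it PRESENT and marked it ACCESSED -- the pte_inserted() test. */
    static int pte_inserted(unsigned v)
    {
        return (v & (PRESENT | ACCESSED)) == (PRESENT | ACCESSED);
    }

    static void set_pte_at_sketch(unsigned newval)
    {
        pthread_mutex_lock(&tlb_lock);
        unsigned old = pte;
        pte = newval;
        if (pte_inserted(old))
            purges++;               /* purge_tlb_entries() */
        pthread_mutex_unlock(&tlb_lock);
    }

    int main(void)
    {
        set_pte_at_sketch(PRESENT);             /* never inserted: no purge */
        set_pte_at_sketch(PRESENT | ACCESSED);  /* old not inserted either */
        set_pte_at_sketch(0);                   /* old was inserted: purge */
        printf("purges: %d (expect 1)\n", purges);
        return 0;
    }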


@@ -13,6 +13,9 @@
 * active at any one time on the Merced bus. This tlb purge
 * synchronisation is fairly lightweight and harmless so we activate
 * it on all systems not just the N class.
+ *
+ * It is also used to ensure PTE updates are atomic and consistent
+ * with the TLB.
 */
 extern spinlock_t pa_tlb_lock;
@@ -24,20 +27,24 @@ extern void flush_tlb_all_local(void *);
 #define smp_flush_tlb_all() flush_tlb_all()
+int __flush_tlb_range(unsigned long sid,
+unsigned long start, unsigned long end);
+
+#define flush_tlb_range(vma, start, end) \
+__flush_tlb_range((vma)->vm_mm->context, start, end)
+
+#define flush_tlb_kernel_range(start, end) \
+__flush_tlb_range(0, start, end)
 /*
 * flush_tlb_mm()
 *
- * XXX This code is NOT valid for HP-UX compatibility processes,
- * (although it will probably work 99% of the time). HP-UX
- * processes are free to play with the space id's and save them
- * over long periods of time, etc. so we have to preserve the
- * space and just flush the entire tlb. We need to check the
- * personality in order to do that, but the personality is not
- * currently being set correctly.
- *
- * Of course, Linux processes could do the same thing, but
- * we don't support that (and the compilers, dynamic linker,
- * etc. do not do that).
+ * The code to switch to a new context is NOT valid for processes
+ * which play with the space id's. Thus, we have to preserve the
+ * space and just flush the entire tlb. However, the compilers,
+ * dynamic linker, etc, do not manipulate space id's, so there
+ * could be a significant performance benefit in switching contexts
+ * and not flushing the whole tlb.
 */
 static inline void flush_tlb_mm(struct mm_struct *mm)
@@ -45,10 +52,18 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 BUG_ON(mm == &init_mm); /* Should never happen */
 #if 1 || defined(CONFIG_SMP)
+/* Except for very small threads, flushing the whole TLB is
+ * faster than using __flush_tlb_range. The pdtlb and pitlb
+ * instructions are very slow because of the TLB broadcast.
+ * It might be faster to do local range flushes on all CPUs
+ * on PA 2.0 systems.
+ */
 flush_tlb_all();
 #else
 /* FIXME: currently broken, causing space id and protection ids
 * to go out of sync, resulting in faults on userspace accesses.
+ * This approach needs further investigation since running many
+ * small applications (e.g., GCC testsuite) is faster on HP-UX.
 */
 if (mm) {
 if (mm->context != 0)
@@ -65,22 +80,12 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
 {
 unsigned long flags, sid;
-/* For one page, it's not worth testing the split_tlb variable */
-
-mb();
 sid = vma->vm_mm->context;
 purge_tlb_start(flags);
 mtsp(sid, 1);
 pdtlb(addr);
-pitlb(addr);
+if (unlikely(split_tlb))
+pitlb(addr);
 purge_tlb_end(flags);
 }
-
-void __flush_tlb_range(unsigned long sid,
-unsigned long start, unsigned long end);
-
-#define flush_tlb_range(vma,start,end) __flush_tlb_range((vma)->vm_mm->context,start,end)
-
-#define flush_tlb_kernel_range(start, end) __flush_tlb_range(0,start,end)
 #endif


@@ -342,12 +342,15 @@ EXPORT_SYMBOL(flush_data_cache_local);
 EXPORT_SYMBOL(flush_kernel_icache_range_asm);
 #define FLUSH_THRESHOLD 0x80000 /* 0.5MB */
-int parisc_cache_flush_threshold __read_mostly = FLUSH_THRESHOLD;
+static unsigned long parisc_cache_flush_threshold __read_mostly = FLUSH_THRESHOLD;
+
+#define FLUSH_TLB_THRESHOLD (2*1024*1024) /* 2MB initial TLB threshold */
+static unsigned long parisc_tlb_flush_threshold __read_mostly = FLUSH_TLB_THRESHOLD;
 void __init parisc_setup_cache_timing(void)
 {
 unsigned long rangetime, alltime;
-unsigned long size;
+unsigned long size, start;
 alltime = mfctl(16);
 flush_data_cache();
@@ -364,14 +367,43 @@ void __init parisc_setup_cache_timing(void)
 /* Racy, but if we see an intermediate value, it's ok too... */
 parisc_cache_flush_threshold = size * alltime / rangetime;
-parisc_cache_flush_threshold = (parisc_cache_flush_threshold + L1_CACHE_BYTES - 1) &~ (L1_CACHE_BYTES - 1);
+parisc_cache_flush_threshold = L1_CACHE_ALIGN(parisc_cache_flush_threshold);
 if (!parisc_cache_flush_threshold)
 parisc_cache_flush_threshold = FLUSH_THRESHOLD;
 if (parisc_cache_flush_threshold > cache_info.dc_size)
 parisc_cache_flush_threshold = cache_info.dc_size;
-printk(KERN_INFO "Setting cache flush threshold to %x (%d CPUs online)\n", parisc_cache_flush_threshold, num_online_cpus());
+printk(KERN_INFO "Setting cache flush threshold to %lu kB\n",
+parisc_cache_flush_threshold/1024);
+
+/* calculate TLB flush threshold */
+
+alltime = mfctl(16);
+flush_tlb_all();
+alltime = mfctl(16) - alltime;
+
+size = PAGE_SIZE;
+start = (unsigned long) _text;
+rangetime = mfctl(16);
+while (start < (unsigned long) _end) {
+flush_tlb_kernel_range(start, start + PAGE_SIZE);
+start += PAGE_SIZE;
+size += PAGE_SIZE;
+}
+rangetime = mfctl(16) - rangetime;
+
+printk(KERN_DEBUG "Whole TLB flush %lu cycles, flushing %lu bytes %lu cycles\n",
+alltime, size, rangetime);
+
+parisc_tlb_flush_threshold = size * alltime / rangetime;
+parisc_tlb_flush_threshold *= num_online_cpus();
+parisc_tlb_flush_threshold = PAGE_ALIGN(parisc_tlb_flush_threshold);
+if (!parisc_tlb_flush_threshold)
+parisc_tlb_flush_threshold = FLUSH_TLB_THRESHOLD;
+
+printk(KERN_INFO "Setting TLB flush threshold to %lu kB\n",
+parisc_tlb_flush_threshold/1024);
 }
 extern void purge_kernel_dcache_page_asm(unsigned long);
@@ -403,48 +435,45 @@ void copy_user_page(void *vto, void *vfrom, unsigned long vaddr,
 }
 EXPORT_SYMBOL(copy_user_page);
-void purge_tlb_entries(struct mm_struct *mm, unsigned long addr)
-{
-unsigned long flags;
-
-/* Note: purge_tlb_entries can be called at startup with
-   no context. */
-
-purge_tlb_start(flags);
-mtsp(mm->context, 1);
-pdtlb(addr);
-pitlb(addr);
-purge_tlb_end(flags);
-}
-EXPORT_SYMBOL(purge_tlb_entries);
-
-void __flush_tlb_range(unsigned long sid, unsigned long start,
-unsigned long end)
+/* __flush_tlb_range()
+ *
+ * returns 1 if all TLBs were flushed.
+ */
+int __flush_tlb_range(unsigned long sid, unsigned long start,
+unsigned long end)
 {
-unsigned long npages;
-
-npages = ((end - (start & PAGE_MASK)) + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
-if (npages >= 512)  /* 2MB of space: arbitrary, should be tuned */
+unsigned long flags, size;
+
+size = (end - start);
+if (size >= parisc_tlb_flush_threshold) {
 flush_tlb_all();
-else {
-unsigned long flags;
+return 1;
+}
+
+/* Purge TLB entries for small ranges using the pdtlb and
+   pitlb instructions. These instructions execute locally
+   but cause a purge request to be broadcast to other TLBs. */
+if (likely(!split_tlb)) {
+while (start < end) {
+purge_tlb_start(flags);
+mtsp(sid, 1);
+pdtlb(start);
+purge_tlb_end(flags);
+start += PAGE_SIZE;
+}
+return 0;
+}
+
+/* split TLB case */
+while (start < end) {
 purge_tlb_start(flags);
 mtsp(sid, 1);
-if (split_tlb) {
-while (npages--) {
-pdtlb(start);
-pitlb(start);
-start += PAGE_SIZE;
-}
-} else {
-while (npages--) {
-pdtlb(start);
-start += PAGE_SIZE;
-}
-}
+pdtlb(start);
+pitlb(start);
 purge_tlb_end(flags);
+start += PAGE_SIZE;
 }
+return 0;
 }
 static void cacheflush_h_tmp_function(void *dummy)
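The boot-time calibration above times one full TLB flush against a per-page sweep over the kernel text, then sets the crossover so __flush_tlb_range() falls back to flush_tlb_all() once a range gets big enough. The same size * alltime / rangetime arithmetic, sketched with wall-clock timing and stub flush routines (everything here is illustrative, not the kernel's code):

    #include <stdio.h>
    #include <time.h>

    static unsigned long long cycles(void)   /* stand-in for mfctl(16) */
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (unsigned long long)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }

    static void whole_flush(void) { /* pretend: fixed-cost full flush */ }
    static void page_flush(void)  { /* pretend: per-page broadcast purge */ }

    int main(void)
    {
        unsigned long long t0, alltime, rangetime, size = 0, page = 4096;

        t0 = cycles();
        whole_flush();
        alltime = cycles() - t0 + 1;     /* +1 avoids div-by-zero here */

        t0 = cycles();
        for (int i = 0; i < 1024; i++) { /* sweep a known range */
            page_flush();
            size += page;
        }
        rangetime = cycles() - t0 + 1;

        /* crossover: ranges above this are cheaper as one full flush */
        unsigned long long threshold = size * alltime / rangetime;
        printf("flush whole TLB above ~%llu bytes\n", threshold);
        return 0;
    }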


@@ -45,7 +45,7 @@
 .level 2.0
 #endif
-.import pa_dbit_lock,data
+.import pa_tlb_lock,data
 /* space_to_prot macro creates a prot id from a space id */
@@ -420,8 +420,8 @@
 SHLREG %r9,PxD_VALUE_SHIFT,\pmd
 extru \va,31-PAGE_SHIFT,ASM_BITS_PER_PTE,\index
 dep %r0,31,PAGE_SHIFT,\pmd /* clear offset */
-shladd \index,BITS_PER_PTE_ENTRY,\pmd,\pmd
-LDREG %r0(\pmd),\pte /* pmd is now pte */
+shladd \index,BITS_PER_PTE_ENTRY,\pmd,\pmd /* pmd is now pte */
+LDREG %r0(\pmd),\pte
 bb,>=,n \pte,_PAGE_PRESENT_BIT,\fault
 .endm
@@ -453,57 +453,53 @@
 L2_ptep \pgd,\pte,\index,\va,\fault
 .endm
-/* Acquire pa_dbit_lock lock. */
-.macro dbit_lock spc,tmp,tmp1
+/* Acquire pa_tlb_lock lock and recheck page is still present. */
+.macro tlb_lock spc,ptp,pte,tmp,tmp1,fault
 #ifdef CONFIG_SMP
 cmpib,COND(=),n 0,\spc,2f
-load32 PA(pa_dbit_lock),\tmp
+load32 PA(pa_tlb_lock),\tmp
 1: LDCW 0(\tmp),\tmp1
 cmpib,COND(=) 0,\tmp1,1b
 nop
+LDREG 0(\ptp),\pte
+bb,<,n \pte,_PAGE_PRESENT_BIT,2f
+b \fault
+stw \spc,0(\tmp)
 2:
 #endif
 .endm
-/* Release pa_dbit_lock lock without reloading lock address. */
-.macro dbit_unlock0 spc,tmp
+/* Release pa_tlb_lock lock without reloading lock address. */
+.macro tlb_unlock0 spc,tmp
 #ifdef CONFIG_SMP
 or,COND(=) %r0,\spc,%r0
 stw \spc,0(\tmp)
 #endif
 .endm
-/* Release pa_dbit_lock lock. */
-.macro dbit_unlock1 spc,tmp
+/* Release pa_tlb_lock lock. */
+.macro tlb_unlock1 spc,tmp
 #ifdef CONFIG_SMP
-load32 PA(pa_dbit_lock),\tmp
-dbit_unlock0 \spc,\tmp
+load32 PA(pa_tlb_lock),\tmp
+tlb_unlock0 \spc,\tmp
 #endif
 .endm
 /* Set the _PAGE_ACCESSED bit of the PTE. Be clever and
 * don't needlessly dirty the cache line if it was already set */
-.macro update_ptep spc,ptep,pte,tmp,tmp1
-#ifdef CONFIG_SMP
-or,COND(=) %r0,\spc,%r0
-LDREG 0(\ptep),\pte
-#endif
+.macro update_accessed ptp,pte,tmp,tmp1
 ldi _PAGE_ACCESSED,\tmp1
 or \tmp1,\pte,\tmp
 and,COND(<>) \tmp1,\pte,%r0
-STREG \tmp,0(\ptep)
+STREG \tmp,0(\ptp)
 .endm
 /* Set the dirty bit (and accessed bit). No need to be
 * clever, this is only used from the dirty fault */
-.macro update_dirty spc,ptep,pte,tmp
-#ifdef CONFIG_SMP
-or,COND(=) %r0,\spc,%r0
-LDREG 0(\ptep),\pte
-#endif
+.macro update_dirty ptp,pte,tmp
 ldi _PAGE_ACCESSED|_PAGE_DIRTY,\tmp
 or \tmp,\pte,\pte
-STREG \pte,0(\ptep)
+STREG \pte,0(\ptp)
 .endm
 /* bitshift difference between a PFN (based on kernel's PAGE_SIZE)
@@ -1148,14 +1144,14 @@ dtlb_miss_20w:
 L3_ptep ptp,pte,t0,va,dtlb_check_alias_20w
-dbit_lock spc,t0,t1
-update_ptep spc,ptp,pte,t0,t1
+tlb_lock spc,ptp,pte,t0,t1,dtlb_check_alias_20w
+update_accessed ptp,pte,t0,t1
 make_insert_tlb spc,pte,prot
 idtlbt pte,prot
-dbit_unlock1 spc,t0
+tlb_unlock1 spc,t0
 rfir
 nop
@@ -1174,14 +1170,14 @@ nadtlb_miss_20w:
 L3_ptep ptp,pte,t0,va,nadtlb_check_alias_20w
-dbit_lock spc,t0,t1
-update_ptep spc,ptp,pte,t0,t1
+tlb_lock spc,ptp,pte,t0,t1,nadtlb_check_alias_20w
+update_accessed ptp,pte,t0,t1
 make_insert_tlb spc,pte,prot
 idtlbt pte,prot
-dbit_unlock1 spc,t0
+tlb_unlock1 spc,t0
 rfir
 nop
@@ -1202,20 +1198,20 @@ dtlb_miss_11:
 L2_ptep ptp,pte,t0,va,dtlb_check_alias_11
-dbit_lock spc,t0,t1
-update_ptep spc,ptp,pte,t0,t1
+tlb_lock spc,ptp,pte,t0,t1,dtlb_check_alias_11
+update_accessed ptp,pte,t0,t1
 make_insert_tlb_11 spc,pte,prot
-mfsp %sr1,t0 /* Save sr1 so we can use it in tlb inserts */
+mfsp %sr1,t1 /* Save sr1 so we can use it in tlb inserts */
 mtsp spc,%sr1
 idtlba pte,(%sr1,va)
 idtlbp prot,(%sr1,va)
-mtsp t0, %sr1 /* Restore sr1 */
-dbit_unlock1 spc,t0
+mtsp t1, %sr1 /* Restore sr1 */
+tlb_unlock1 spc,t0
 rfir
 nop
@@ -1235,21 +1231,20 @@ nadtlb_miss_11:
 L2_ptep ptp,pte,t0,va,nadtlb_check_alias_11
-dbit_lock spc,t0,t1
-update_ptep spc,ptp,pte,t0,t1
+tlb_lock spc,ptp,pte,t0,t1,nadtlb_check_alias_11
+update_accessed ptp,pte,t0,t1
 make_insert_tlb_11 spc,pte,prot
-mfsp %sr1,t0 /* Save sr1 so we can use it in tlb inserts */
+mfsp %sr1,t1 /* Save sr1 so we can use it in tlb inserts */
 mtsp spc,%sr1
 idtlba pte,(%sr1,va)
 idtlbp prot,(%sr1,va)
-mtsp t0, %sr1 /* Restore sr1 */
-dbit_unlock1 spc,t0
+mtsp t1, %sr1 /* Restore sr1 */
+tlb_unlock1 spc,t0
 rfir
 nop
@@ -1269,16 +1264,16 @@ dtlb_miss_20:
 L2_ptep ptp,pte,t0,va,dtlb_check_alias_20
-dbit_lock spc,t0,t1
-update_ptep spc,ptp,pte,t0,t1
+tlb_lock spc,ptp,pte,t0,t1,dtlb_check_alias_20
+update_accessed ptp,pte,t0,t1
 make_insert_tlb spc,pte,prot
-f_extend pte,t0
+f_extend pte,t1
 idtlbt pte,prot
-dbit_unlock1 spc,t0
+tlb_unlock1 spc,t0
 rfir
 nop
@@ -1297,16 +1292,16 @@ nadtlb_miss_20:
 L2_ptep ptp,pte,t0,va,nadtlb_check_alias_20
-dbit_lock spc,t0,t1
-update_ptep spc,ptp,pte,t0,t1
+tlb_lock spc,ptp,pte,t0,t1,nadtlb_check_alias_20
+update_accessed ptp,pte,t0,t1
 make_insert_tlb spc,pte,prot
-f_extend pte,t0
+f_extend pte,t1
 idtlbt pte,prot
-dbit_unlock1 spc,t0
+tlb_unlock1 spc,t0
 rfir
 nop
@@ -1406,14 +1401,14 @@ itlb_miss_20w:
 L3_ptep ptp,pte,t0,va,itlb_fault
-dbit_lock spc,t0,t1
-update_ptep spc,ptp,pte,t0,t1
+tlb_lock spc,ptp,pte,t0,t1,itlb_fault
+update_accessed ptp,pte,t0,t1
 make_insert_tlb spc,pte,prot
 iitlbt pte,prot
-dbit_unlock1 spc,t0
+tlb_unlock1 spc,t0
 rfir
 nop
@@ -1430,14 +1425,14 @@ naitlb_miss_20w:
 L3_ptep ptp,pte,t0,va,naitlb_check_alias_20w
-dbit_lock spc,t0,t1
-update_ptep spc,ptp,pte,t0,t1
+tlb_lock spc,ptp,pte,t0,t1,naitlb_check_alias_20w
+update_accessed ptp,pte,t0,t1
 make_insert_tlb spc,pte,prot
 iitlbt pte,prot
-dbit_unlock1 spc,t0
+tlb_unlock1 spc,t0
 rfir
 nop
@@ -1458,20 +1453,20 @@ itlb_miss_11:
 L2_ptep ptp,pte,t0,va,itlb_fault
-dbit_lock spc,t0,t1
-update_ptep spc,ptp,pte,t0,t1
+tlb_lock spc,ptp,pte,t0,t1,itlb_fault
+update_accessed ptp,pte,t0,t1
 make_insert_tlb_11 spc,pte,prot
-mfsp %sr1,t0 /* Save sr1 so we can use it in tlb inserts */
+mfsp %sr1,t1 /* Save sr1 so we can use it in tlb inserts */
 mtsp spc,%sr1
 iitlba pte,(%sr1,va)
 iitlbp prot,(%sr1,va)
-mtsp t0, %sr1 /* Restore sr1 */
-dbit_unlock1 spc,t0
+mtsp t1, %sr1 /* Restore sr1 */
+tlb_unlock1 spc,t0
 rfir
 nop
@@ -1482,20 +1477,20 @@ naitlb_miss_11:
 L2_ptep ptp,pte,t0,va,naitlb_check_alias_11
-dbit_lock spc,t0,t1
-update_ptep spc,ptp,pte,t0,t1
+tlb_lock spc,ptp,pte,t0,t1,naitlb_check_alias_11
+update_accessed ptp,pte,t0,t1
 make_insert_tlb_11 spc,pte,prot
-mfsp %sr1,t0 /* Save sr1 so we can use it in tlb inserts */
+mfsp %sr1,t1 /* Save sr1 so we can use it in tlb inserts */
 mtsp spc,%sr1
 iitlba pte,(%sr1,va)
 iitlbp prot,(%sr1,va)
-mtsp t0, %sr1 /* Restore sr1 */
-dbit_unlock1 spc,t0
+mtsp t1, %sr1 /* Restore sr1 */
+tlb_unlock1 spc,t0
 rfir
 nop
@@ -1516,16 +1511,16 @@ itlb_miss_20:
 L2_ptep ptp,pte,t0,va,itlb_fault
-dbit_lock spc,t0,t1
-update_ptep spc,ptp,pte,t0,t1
+tlb_lock spc,ptp,pte,t0,t1,itlb_fault
+update_accessed ptp,pte,t0,t1
 make_insert_tlb spc,pte,prot
-f_extend pte,t0
+f_extend pte,t1
 iitlbt pte,prot
-dbit_unlock1 spc,t0
+tlb_unlock1 spc,t0
 rfir
 nop
@@ -1536,16 +1531,16 @@ naitlb_miss_20:
 L2_ptep ptp,pte,t0,va,naitlb_check_alias_20
-dbit_lock spc,t0,t1
-update_ptep spc,ptp,pte,t0,t1
+tlb_lock spc,ptp,pte,t0,t1,naitlb_check_alias_20
+update_accessed ptp,pte,t0,t1
 make_insert_tlb spc,pte,prot
-f_extend pte,t0
+f_extend pte,t1
 iitlbt pte,prot
-dbit_unlock1 spc,t0
+tlb_unlock1 spc,t0
 rfir
 nop
@@ -1568,14 +1563,14 @@ dbit_trap_20w:
 L3_ptep ptp,pte,t0,va,dbit_fault
-dbit_lock spc,t0,t1
-update_dirty spc,ptp,pte,t1
+tlb_lock spc,ptp,pte,t0,t1,dbit_fault
+update_dirty ptp,pte,t1
 make_insert_tlb spc,pte,prot
 idtlbt pte,prot
-dbit_unlock0 spc,t0
+tlb_unlock0 spc,t0
 rfir
 nop
 #else
@@ -1588,8 +1583,8 @@ dbit_trap_11:
 L2_ptep ptp,pte,t0,va,dbit_fault
-dbit_lock spc,t0,t1
-update_dirty spc,ptp,pte,t1
+tlb_lock spc,ptp,pte,t0,t1,dbit_fault
+update_dirty ptp,pte,t1
 make_insert_tlb_11 spc,pte,prot
@@ -1600,8 +1595,8 @@ dbit_trap_11:
 idtlbp prot,(%sr1,va)
 mtsp t1, %sr1 /* Restore sr1 */
-dbit_unlock0 spc,t0
+tlb_unlock0 spc,t0
 rfir
 nop
@@ -1612,16 +1607,16 @@ dbit_trap_20:
 L2_ptep ptp,pte,t0,va,dbit_fault
-dbit_lock spc,t0,t1
-update_dirty spc,ptp,pte,t1
+tlb_lock spc,ptp,pte,t0,t1,dbit_fault
+update_dirty ptp,pte,t1
 make_insert_tlb spc,pte,prot
 f_extend pte,t1
 idtlbt pte,prot
-dbit_unlock0 spc,t0
+tlb_unlock0 spc,t0
 rfir
 nop
 #endif
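The important new behavior of the tlb_lock macro is the recheck: after taking pa_tlb_lock it reloads the PTE and branches to the fault handler if another CPU cleared _PAGE_PRESENT in the window since the lockless page-table walk. A C sketch of that lock-then-recheck shape (the caller is expected to drop the lock after the insert, as tlb_unlock0/1 do):

    #include <stdio.h>
    #include <pthread.h>

    #define PRESENT 0x1u

    static pthread_mutex_t tlb_lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned pte = PRESENT;

    /* tlb_lock's shape: take the lock, then *re-read* the PTE and bail
     * to the fault path if it went non-present in between. On success
     * the lock is still held, as in the assembly. */
    static int lock_and_recheck(unsigned *out)
    {
        pthread_mutex_lock(&tlb_lock);
        unsigned v = pte;               /* LDREG 0(\ptp),\pte */
        if (!(v & PRESENT)) {           /* present-bit test failed */
            pthread_mutex_unlock(&tlb_lock);
            return -1;                  /* b \fault */
        }
        *out = v;
        return 0;                       /* caller inserts the TLB entry */
    }

    int main(void)
    {
        unsigned v;

        if (lock_and_recheck(&v) == 0) {
            printf("insert TLB entry for pte=%#x\n", v);
            pthread_mutex_unlock(&tlb_lock);   /* tlb_unlock0/1 */
        } else {
            printf("fault\n");
        }
        return 0;
    }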


@@ -43,10 +43,6 @@
 #include "../math-emu/math-emu.h" /* for handle_fpe() */
-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
-DEFINE_SPINLOCK(pa_dbit_lock);
-#endif
 static void parisc_show_stack(struct task_struct *task, unsigned long *sp,
 struct pt_regs *regs);


@@ -51,6 +51,22 @@
 .text
+/*
+ * Used by threads when the lock bit of core_idle_state is set.
+ * Threads will spin in HMT_LOW until the lock bit is cleared.
+ * r14 - pointer to core_idle_state
+ * r15 - used to load contents of core_idle_state
+ */
+
+core_idle_lock_held:
+HMT_LOW
+3: lwz r15,0(r14)
+andi. r15,r15,PNV_CORE_IDLE_LOCK_BIT
+bne 3b
+HMT_MEDIUM
+lwarx r15,0,r14
+blr
 /*
 * Pass requested state in r3:
 * r3 - PNV_THREAD_NAP/SLEEP/WINKLE
@@ -150,6 +166,10 @@ power7_enter_nap_mode:
 ld r14,PACA_CORE_IDLE_STATE_PTR(r13)
 lwarx_loop1:
 lwarx r15,0,r14
+
+andi. r9,r15,PNV_CORE_IDLE_LOCK_BIT
+bnel core_idle_lock_held
+
 andc r15,r15,r7 /* Clear thread bit */
 andi. r15,r15,PNV_CORE_IDLE_THREAD_BITS
@@ -294,7 +314,7 @@ lwarx_loop2:
 * workaround undo code or resyncing timebase or restoring context
 * In either case loop until the lock bit is cleared.
 */
-bne core_idle_lock_held
+bnel core_idle_lock_held
 cmpwi cr2,r15,0
 lbz r4,PACA_SUBCORE_SIBLING_MASK(r13)
@@ -319,15 +339,6 @@ lwarx_loop2:
 isync
 b common_exit
-core_idle_lock_held:
-HMT_LOW
-core_idle_lock_loop:
-lwz r15,0(14)
-andi. r9,r15,PNV_CORE_IDLE_LOCK_BIT
-bne core_idle_lock_loop
-HMT_MEDIUM
-b lwarx_loop2
 first_thread_in_subcore:
 /* First thread in subcore to wakeup */
 ori r15,r15,PNV_CORE_IDLE_LOCK_BIT
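Two fixes meet in this hunk: the old spin loop read 0(14), an offset off register 14 rather than r14, and the helper is now entered with bnel so the link register returns the caller to its own lwarx retry instead of hard-coding a jump back to lwarx_loop2. The wait-for-lock-bit shape in C11 atomics (the LOCK_BIT value is arbitrary here, and HMT_LOW has no userspace equivalent, so it is only a comment):

    #include <stdio.h>
    #include <stdatomic.h>

    #define LOCK_BIT 0x100

    static atomic_uint core_idle_state;

    /* Analogue of core_idle_lock_held: spin until the lock bit clears,
     * then reload the word so the caller can retry its update. */
    static unsigned wait_for_lock_clear(void)
    {
        unsigned v;

        do {
            /* HMT_LOW: yield execution resources while spinning */
            v = atomic_load(&core_idle_state);
        } while (v & LOCK_BIT);
        /* HMT_MEDIUM, then the caller's lwarx-style reload */
        return atomic_load(&core_idle_state);
    }

    int main(void)
    {
        atomic_store(&core_idle_state, 0x3);   /* lock bit already clear */
        printf("state=0x%x\n", wait_for_lock_clear());
        return 0;
    }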


@@ -297,6 +297,8 @@ long machine_check_early(struct pt_regs *regs)
 __this_cpu_inc(irq_stat.mce_exceptions);
+add_taint(TAINT_MACHINE_CHECK, LOCKDEP_NOW_UNRELIABLE);
+
 if (cur_cpu_spec && cur_cpu_spec->machine_check_early)
 handled = cur_cpu_spec->machine_check_early(regs);
 return handled;


@@ -529,6 +529,10 @@ void bad_page_fault(struct pt_regs *regs, unsigned long address, int sig)
 printk(KERN_ALERT "Unable to handle kernel paging request for "
 "instruction fetch\n");
 break;
+case 0x600:
+printk(KERN_ALERT "Unable to handle kernel paging request for "
+"unaligned access at address 0x%08lx\n", regs->dar);
+break;
 default:
 printk(KERN_ALERT "Unable to handle kernel paging request for "
 "unknown fault\n");


@@ -320,6 +320,8 @@ static struct attribute *device_str_attr_create_(char *name, char *str)
 if (!attr)
 return NULL;
+sysfs_attr_init(&attr->attr.attr);
+
 attr->var = str;
 attr->attr.attr.name = name;
 attr->attr.attr.mode = 0444;


@@ -237,7 +237,7 @@ static struct elog_obj *create_elog_obj(uint64_t id, size_t size, uint64_t type)
 return elog;
 }
-static void elog_work_fn(struct work_struct *work)
+static irqreturn_t elog_event(int irq, void *data)
 {
 __be64 size;
 __be64 id;
@@ -251,7 +251,7 @@ static void elog_work_fn(struct work_struct *work)
 rc = opal_get_elog_size(&id, &size, &type);
 if (rc != OPAL_SUCCESS) {
 pr_err("ELOG: OPAL log info read failed\n");
-return;
+return IRQ_HANDLED;
 }
 elog_size = be64_to_cpu(size);
@@ -270,16 +270,10 @@ static void elog_work_fn(struct work_struct *work)
 * entries.
 */
 if (kset_find_obj(elog_kset, name))
-return;
+return IRQ_HANDLED;
 create_elog_obj(log_id, elog_size, elog_type);
-}
-
-static DECLARE_WORK(elog_work, elog_work_fn);
-
-static irqreturn_t elog_event(int irq, void *data)
-{
-schedule_work(&elog_work);
+
 return IRQ_HANDLED;
 }
@@ -304,8 +298,8 @@ int __init opal_elog_init(void)
 return irq;
 }
-rc = request_irq(irq, elog_event,
-IRQ_TYPE_LEVEL_HIGH, "opal-elog", NULL);
+rc = request_threaded_irq(irq, NULL, elog_event,
+IRQF_TRIGGER_HIGH | IRQF_ONESHOT, "opal-elog", NULL);
 if (rc) {
 pr_err("%s: Can't request OPAL event irq (%d)\n",
 __func__, rc);
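request_threaded_irq() with a NULL hard handler and IRQF_ONESHOT runs elog_event() in its own kernel thread, where the OPAL calls are free to sleep; that is why the schedule_work() trampoline could be deleted. A pthread analogue of the dedicated-handler-thread pattern, with the interrupt faked by a semaphore (none of this is the kernel API, only the shape of it):

    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    static sem_t event;                 /* stands in for the OPAL irq */

    /* The "threaded handler": consumes events in thread context and may
     * block freely, unlike a hard interrupt handler. */
    static void *irq_thread(void *arg)
    {
        (void)arg;
        sem_wait(&event);               /* "interrupt" arrives */
        printf("handled event in thread context (may sleep here)\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        sem_init(&event, 0, 0);
        pthread_create(&t, NULL, irq_thread, NULL);
        sem_post(&event);               /* fire the "interrupt" */
        pthread_join(t, NULL);
        return 0;
    }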


@@ -112,6 +112,7 @@ static int opal_prd_open(struct inode *inode, struct file *file)
 static int opal_prd_mmap(struct file *file, struct vm_area_struct *vma)
 {
 size_t addr, size;
+pgprot_t page_prot;
 int rc;
 pr_devel("opal_prd_mmap(0x%016lx, 0x%016lx, 0x%lx, 0x%lx)\n",
@@ -125,13 +126,11 @@ static int opal_prd_mmap(struct file *file, struct vm_area_struct *vma)
 if (!opal_prd_range_is_valid(addr, size))
 return -EINVAL;
-vma->vm_page_prot = __pgprot(pgprot_val(phys_mem_access_prot(file,
-vma->vm_pgoff,
-size, vma->vm_page_prot))
-| _PAGE_SPECIAL);
+page_prot = phys_mem_access_prot(file, vma->vm_pgoff,
+size, vma->vm_page_prot);
 rc = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff, size,
-vma->vm_page_prot);
+page_prot);
 return rc;
 }


@@ -18,6 +18,7 @@
 #include <linux/pci.h>
 #include <linux/semaphore.h>
 #include <asm/msi_bitmap.h>
+#include <asm/ppc-pci.h>
 struct ppc4xx_hsta_msi {
 struct device *dev;


@@ -28,7 +28,7 @@
 #define _ST(p, inst, v) \
 ({ \
 asm("1: " #inst " %0, %1;" \
-".pushsection .coldtext.memcpy,\"ax\";" \
+".pushsection .coldtext,\"ax\";" \
 "2: { move r0, %2; jrp lr };" \
 ".section __ex_table,\"a\";" \
 ".align 8;" \
@@ -41,7 +41,7 @@
 ({ \
 unsigned long __v; \
 asm("1: " #inst " %0, %1;" \
-".pushsection .coldtext.memcpy,\"ax\";" \
+".pushsection .coldtext,\"ax\";" \
 "2: { move r0, %2; jrp lr };" \
 ".section __ex_table,\"a\";" \
 ".align 8;" \


@@ -254,6 +254,11 @@ config ARCH_SUPPORTS_OPTIMIZED_INLINING
 config ARCH_SUPPORTS_DEBUG_PAGEALLOC
 def_bool y
+config KASAN_SHADOW_OFFSET
+	hex
+	depends on KASAN
+	default 0xdffffc0000000000
+
 config HAVE_INTEL_TXT
 def_bool y
 depends on INTEL_IOMMU && ACPI
@@ -2015,7 +2020,7 @@ config CMDLINE_BOOL
 To compile command line arguments into the kernel,
 set this option to 'Y', then fill in the
-	  the boot arguments in CONFIG_CMDLINE.
+	  boot arguments in CONFIG_CMDLINE.
 Systems with fully functional boot loaders (i.e. non-embedded)
 should leave this option set to 'N'.
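KASAN_SHADOW_OFFSET pins down the shadow mapping the KASAN changes in this merge rely on: every 8 bytes of address space are described by one shadow byte at (addr >> 3) + offset, with the offset chosen so the kernel range lands in an otherwise unused region. The arithmetic, on an example address:

    #include <stdio.h>

    #define KASAN_SHADOW_OFFSET 0xdffffc0000000000ull

    int main(void)
    {
        /* example kernel address; any address maps the same way */
        unsigned long long addr = 0xffff880000000000ull;

        /* 8 bytes of memory -> 1 shadow byte, then relocate by offset */
        unsigned long long shadow = (addr >> 3) + KASAN_SHADOW_OFFSET;

        printf("shadow(%#llx) = %#llx\n", addr, shadow);
        return 0;
    }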


@@ -9,7 +9,7 @@ DECLARE_PER_CPU_READ_MOSTLY(unsigned long, espfix_stack);
 DECLARE_PER_CPU_READ_MOSTLY(unsigned long, espfix_waddr);
 extern void init_espfix_bsp(void);
-extern void init_espfix_ap(void);
+extern void init_espfix_ap(int cpu);
 #endif /* CONFIG_X86_64 */


@@ -14,15 +14,11 @@
 #ifndef __ASSEMBLY__
-extern pte_t kasan_zero_pte[];
-extern pte_t kasan_zero_pmd[];
-extern pte_t kasan_zero_pud[];
-
 #ifdef CONFIG_KASAN
-void __init kasan_map_early_shadow(pgd_t *pgd);
+void __init kasan_early_init(void);
 void __init kasan_init(void);
 #else
-static inline void kasan_map_early_shadow(pgd_t *pgd) { }
+static inline void kasan_early_init(void) { }
 static inline void kasan_init(void) { }
 #endif


@@ -409,12 +409,6 @@ static void __setup_vector_irq(int cpu)
 int irq, vector;
 struct apic_chip_data *data;
-/*
- * vector_lock will make sure that we don't run into irq vector
- * assignments that might be happening on another cpu in parallel,
- * while we setup our initial vector to irq mappings.
- */
-raw_spin_lock(&vector_lock);
 /* Mark the inuse vectors */
 for_each_active_irq(irq) {
 data = apic_chip_data(irq_get_irq_data(irq));
@@ -436,16 +430,16 @@ static void __setup_vector_irq(int cpu)
 if (!cpumask_test_cpu(cpu, data->domain))
 per_cpu(vector_irq, cpu)[vector] = VECTOR_UNDEFINED;
 }
-raw_spin_unlock(&vector_lock);
 }
 /*
- * Setup the vector to irq mappings.
+ * Setup the vector to irq mappings. Must be called with vector_lock held.
 */
 void setup_vector_irq(int cpu)
 {
 int irq;
+lockdep_assert_held(&vector_lock);
+
 /*
 * On most of the platforms, legacy PIC delivers the interrupts on the
 * boot cpu. But there are certain platforms where PIC interrupts are


@@ -175,7 +175,9 @@ static __init void early_serial_init(char *s)
 }
 if (*s) {
-if (kstrtoul(s, 0, &baud) < 0 || baud == 0)
+baud = simple_strtoull(s, &e, 0);
+
+if (baud == 0 || s == e)
 baud = DEFAULT_BAUD;
 }
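The switch to simple_strtoull() is about tolerance, not precision: an early console option such as 115200n8 carries trailing flag characters that the strict kstrtoul() rejects outright, zeroing the baud rate. The equivalent userspace parse with strtoul() and an end-pointer check:

    #include <stdio.h>
    #include <stdlib.h>

    #define DEFAULT_BAUD 9600

    /* Parse the leading digits and ignore whatever follows; fall back to
     * the default only when there were no digits at all. */
    static unsigned long parse_baud(const char *s)
    {
        char *e;
        unsigned long baud = strtoul(s, &e, 0);

        if (baud == 0 || s == e)
            baud = DEFAULT_BAUD;
        return baud;
    }

    int main(void)
    {
        printf("%lu\n", parse_baud("115200n8"));  /* 115200 */
        printf("%lu\n", parse_baud("nonsense"));  /* 9600 */
        return 0;
    }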


@@ -131,25 +131,24 @@ void __init init_espfix_bsp(void)
 init_espfix_random();
 /* The rest is the same as for any other processor */
-init_espfix_ap();
+init_espfix_ap(0);
 }
-void init_espfix_ap(void)
+void init_espfix_ap(int cpu)
 {
-unsigned int cpu, page;
+unsigned int page;
 unsigned long addr;
 pud_t pud, *pud_p;
 pmd_t pmd, *pmd_p;
 pte_t pte, *pte_p;
-int n;
+int n, node;
 void *stack_page;
 pteval_t ptemask;
 /* We only have to do this once... */
-if (likely(this_cpu_read(espfix_stack)))
+if (likely(per_cpu(espfix_stack, cpu)))
 return; /* Already initialized */
-cpu = smp_processor_id();
 addr = espfix_base_addr(cpu);
 page = cpu/ESPFIX_STACKS_PER_PAGE;
@@ -165,12 +164,15 @@ void init_espfix_ap(int cpu)
 if (stack_page)
 goto unlock_done;
+node = cpu_to_node(cpu);
 ptemask = __supported_pte_mask;
 pud_p = &espfix_pud_page[pud_index(addr)];
 pud = *pud_p;
 if (!pud_present(pud)) {
-pmd_p = (pmd_t *)__get_free_page(PGALLOC_GFP);
+struct page *page = alloc_pages_node(node, PGALLOC_GFP, 0);
+
+pmd_p = (pmd_t *)page_address(page);
 pud = __pud(__pa(pmd_p) | (PGTABLE_PROT & ptemask));
 paravirt_alloc_pmd(&init_mm, __pa(pmd_p) >> PAGE_SHIFT);
 for (n = 0; n < ESPFIX_PUD_CLONES; n++)
@@ -180,7 +182,9 @@ void init_espfix_ap(int cpu)
 pmd_p = pmd_offset(&pud, addr);
 pmd = *pmd_p;
 if (!pmd_present(pmd)) {
-pte_p = (pte_t *)__get_free_page(PGALLOC_GFP);
+struct page *page = alloc_pages_node(node, PGALLOC_GFP, 0);
+
+pte_p = (pte_t *)page_address(page);
 pmd = __pmd(__pa(pte_p) | (PGTABLE_PROT & ptemask));
 paravirt_alloc_pte(&init_mm, __pa(pte_p) >> PAGE_SHIFT);
 for (n = 0; n < ESPFIX_PMD_CLONES; n++)
@@ -188,7 +192,7 @@ void init_espfix_ap(int cpu)
 }
 pte_p = pte_offset_kernel(&pmd, addr);
-stack_page = (void *)__get_free_page(GFP_KERNEL);
+stack_page = page_address(alloc_pages_node(node, GFP_KERNEL, 0));
 pte = __pte(__pa(stack_page) | (__PAGE_KERNEL_RO & ptemask));
 for (n = 0; n < ESPFIX_PTE_CLONES; n++)
 set_pte(&pte_p[n*PTE_STRIDE], pte);
@@ -199,7 +203,7 @@ void init_espfix_ap(int cpu)
 unlock_done:
 mutex_unlock(&espfix_init_mutex);
 done:
-this_cpu_write(espfix_stack, addr);
-this_cpu_write(espfix_waddr, (unsigned long)stack_page
-+ (addr & ~PAGE_MASK));
+per_cpu(espfix_stack, cpu) = addr;
+per_cpu(espfix_waddr, cpu) = (unsigned long)stack_page
++ (addr & ~PAGE_MASK);
 }
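init_espfix_ap() now takes the target CPU explicitly because the CPU doing the bringup initializes the AP's espfix slot before the AP runs, so this_cpu indexing no longer works and the page can be allocated on the target's NUMA node instead. A toy sketch of that indexing change (NCPU, the slot array and malloc stand in for the real per-cpu and page allocators):

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define NCPU 4

    /* Slots are filled in by the bringup path, not by the target cpu
     * itself, which is why the index must be an explicit argument. */
    static unsigned long espfix_stack_slot[NCPU];

    static void init_espfix_ap_sketch(int cpu)
    {
        if (espfix_stack_slot[cpu])
            return;                     /* already initialized */
        /* kernel analogue: alloc_pages_node(cpu_to_node(cpu), ...) so
         * the stack page is local to the target cpu's NUMA node */
        espfix_stack_slot[cpu] = (unsigned long)(uintptr_t)malloc(4096);
    }

    int main(void)
    {
        for (int cpu = 1; cpu < NCPU; cpu++)   /* boot cpu brings up APs */
            init_espfix_ap_sketch(cpu);
        printf("slot1=%#lx\n", espfix_stack_slot[1]);
        return 0;
    }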


@@ -161,11 +161,12 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 /* Kill off the identity-map trampoline */
 reset_early_page_tables();
-kasan_map_early_shadow(early_level4_pgt);
-
-/* clear bss before set_intr_gate with early_idt_handler */
 clear_bss();
+clear_page(init_level4_pgt);
+
+kasan_early_init();
+
 for (i = 0; i < NUM_EXCEPTION_VECTORS; i++)
 set_intr_gate(i, early_idt_handler_array[i]);
 load_idt((const struct desc_ptr *)&idt_descr);
@@ -177,12 +178,9 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 */
 load_ucode_bsp();
-clear_page(init_level4_pgt);
-
 /* set init_level4_pgt kernel high mapping*/
 init_level4_pgt[511] = early_level4_pgt[511];
-kasan_map_early_shadow(init_level4_pgt);
-
 x86_64_start_reservations(real_mode_data);
 }

View file

@@ -516,38 +516,9 @@ ENTRY(phys_base)
 	/* This must match the first entry in level2_kernel_pgt */
 	.quad   0x0000000000000000
 
-#ifdef CONFIG_KASAN
-#define FILL(VAL, COUNT)				\
-	.rept	(COUNT) ;				\
-	.quad	(VAL) ;					\
-	.endr
-
-NEXT_PAGE(kasan_zero_pte)
-	FILL(kasan_zero_page - __START_KERNEL_map + _KERNPG_TABLE, 512)
-NEXT_PAGE(kasan_zero_pmd)
-	FILL(kasan_zero_pte - __START_KERNEL_map + _KERNPG_TABLE, 512)
-NEXT_PAGE(kasan_zero_pud)
-	FILL(kasan_zero_pmd - __START_KERNEL_map + _KERNPG_TABLE, 512)
-
-#undef FILL
-#endif
-
 #include "../../x86/xen/xen-head.S"
 
 	__PAGE_ALIGNED_BSS
 NEXT_PAGE(empty_zero_page)
 	.skip PAGE_SIZE
-
-#ifdef CONFIG_KASAN
-/*
- * This page used as early shadow. We don't use empty_zero_page
- * at early stages, stack instrumentation could write some garbage
- * to this page.
- * Latter we reuse it as zero shadow for large ranges of memory
- * that allowed to access, but not instrumented by kasan
- * (vmalloc/vmemmap ...).
- */
-NEXT_PAGE(kasan_zero_page)
-	.skip PAGE_SIZE
-#endif

View file

@@ -347,14 +347,22 @@ int check_irq_vectors_for_cpu_disable(void)
 		if (!desc)
 			continue;
 
+		/*
+		 * Protect against concurrent action removal,
+		 * affinity changes etc.
+		 */
+		raw_spin_lock(&desc->lock);
 		data = irq_desc_get_irq_data(desc);
 		cpumask_copy(&affinity_new, data->affinity);
 		cpumask_clear_cpu(this_cpu, &affinity_new);
 
 		/* Do not count inactive or per-cpu irqs. */
-		if (!irq_has_action(irq) || irqd_is_per_cpu(data))
+		if (!irq_has_action(irq) || irqd_is_per_cpu(data)) {
+			raw_spin_unlock(&desc->lock);
 			continue;
+		}
 
+		raw_spin_unlock(&desc->lock);
 		/*
 		 * A single irq may be mapped to multiple
 		 * cpu's vector_irq[] (for example IOAPIC cluster
@@ -385,6 +393,9 @@ int check_irq_vectors_for_cpu_disable(void)
 	 * vector. If the vector is marked in the used vectors
 	 * bitmap or an irq is assigned to it, we don't count
 	 * it as available.
+	 *
+	 * As this is an inaccurate snapshot anyway, we can do
+	 * this w/o holding vector_lock.
 	 */
 	for (vector = FIRST_EXTERNAL_VECTOR;
 	     vector < first_system_vector; vector++) {
@@ -486,6 +497,11 @@ void fixup_irqs(void)
 	 */
 	mdelay(1);
 
+	/*
+	 * We can walk the vector array of this cpu without holding
+	 * vector_lock because the cpu is already marked !online, so
+	 * nothing else will touch it.
+	 */
 	for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
 		unsigned int irr;
 
@@ -497,9 +513,9 @@ void fixup_irqs(void)
 		irq = __this_cpu_read(vector_irq[vector]);
 
 		desc = irq_to_desc(irq);
+		raw_spin_lock(&desc->lock);
 		data = irq_desc_get_irq_data(desc);
 		chip = irq_data_get_irq_chip(data);
-		raw_spin_lock(&desc->lock);
 		if (chip->irq_retrigger) {
 			chip->irq_retrigger(data);
 			__this_cpu_write(vector_irq[vector], VECTOR_RETRIGGERED);

View file

@@ -170,11 +170,6 @@ static void smp_callin(void)
 	 */
 	apic_ap_setup();
 
-	/*
-	 * Need to setup vector mappings before we enable interrupts.
-	 */
-	setup_vector_irq(smp_processor_id());
-
 	/*
 	 * Save our processor parameters. Note: this information
 	 * is needed for clock calibration.
@@ -239,18 +234,13 @@ static void notrace start_secondary(void *unused)
 	check_tsc_sync_target();
 
 	/*
-	 * Enable the espfix hack for this CPU
-	 */
-#ifdef CONFIG_X86_ESPFIX64
-	init_espfix_ap();
-#endif
-
-	/*
-	 * We need to hold vector_lock so there the set of online cpus
-	 * does not change while we are assigning vectors to cpus.  Holding
-	 * this lock ensures we don't half assign or remove an irq from a cpu.
+	 * Lock vector_lock and initialize the vectors on this cpu
+	 * before setting the cpu online. We must set it online with
+	 * vector_lock held to prevent a concurrent setup/teardown
+	 * from seeing a half valid vector space.
 	 */
 	lock_vector_lock();
+	setup_vector_irq(smp_processor_id());
 	set_cpu_online(smp_processor_id(), true);
 	unlock_vector_lock();
 	cpu_set_state_online(smp_processor_id());
@@ -854,6 +844,13 @@ static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
 	initial_code = (unsigned long)start_secondary;
 	stack_start  = idle->thread.sp;
 
+	/*
+	 * Enable the espfix hack for this CPU
+	 */
+#ifdef CONFIG_X86_ESPFIX64
+	init_espfix_ap(cpu);
+#endif
+
 	/* So we see what's up */
 	announce_cpu(cpu, apicid);

View file

@@ -598,10 +598,19 @@ static unsigned long quick_pit_calibrate(void)
 		if (!pit_expect_msb(0xff-i, &delta, &d2))
 			break;
 
+		delta -= tsc;
+
+		/*
+		 * Extrapolate the error and fail fast if the error will
+		 * never be below 500 ppm.
+		 */
+		if (i == 1 &&
+		    d1 + d2 >= (delta * MAX_QUICK_PIT_ITERATIONS) >> 11)
+			return 0;
+
 		/*
 		 * Iterate until the error is less than 500 ppm
 		 */
-		delta -= tsc;
 		if (d1+d2 >= delta >> 11)
 			continue;

View file

@@ -20,7 +20,7 @@ copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
 	unsigned long ret;
 
 	if (__range_not_ok(from, n, TASK_SIZE))
-		return 0;
+		return n;
 
 	/*
 	 * Even though this function is typically called from NMI/IRQ context
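
The one-liner matters because the copy_*_user() family returns the number of bytes not copied, with 0 meaning complete success; returning 0 for a rejected range told callers the copy had succeeded when nothing was copied at all. A minimal userspace analogue of the convention (checked_copy() and its bounds test are invented for illustration):

	#include <stdio.h>
	#include <string.h>

	/*
	 * Returns the number of bytes NOT copied: 0 on success, n if the
	 * source range is rejected outright -- mirroring the convention
	 * the patch restores for copy_from_user_nmi().
	 */
	static unsigned long checked_copy(void *to, const void *from,
					  unsigned long n, const void *limit)
	{
		if ((const char *)from + n > (const char *)limit)
			return n;		/* nothing copied */
		memcpy(to, from, n);
		return 0;			/* everything copied */
	}

	int main(void)
	{
		char src[8] = "abcdefg", dst[8];
		unsigned long left = checked_copy(dst, src, sizeof(src),
						  src + sizeof(src));

		if (left)
			printf("%lu bytes were not copied\n", left);
		else
			printf("copied: %s\n", dst);
		return 0;
	}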

View file

@@ -1,3 +1,4 @@
+#define pr_fmt(fmt) "kasan: " fmt
 #include <linux/bootmem.h>
 #include <linux/kasan.h>
 #include <linux/kdebug.h>
@@ -11,7 +12,19 @@
 extern pgd_t early_level4_pgt[PTRS_PER_PGD];
 extern struct range pfn_mapped[E820_X_MAX];
 
-extern unsigned char kasan_zero_page[PAGE_SIZE];
+static pud_t kasan_zero_pud[PTRS_PER_PUD] __page_aligned_bss;
+static pmd_t kasan_zero_pmd[PTRS_PER_PMD] __page_aligned_bss;
+static pte_t kasan_zero_pte[PTRS_PER_PTE] __page_aligned_bss;
+
+/*
+ * This page used as early shadow. We don't use empty_zero_page
+ * at early stages, stack instrumentation could write some garbage
+ * to this page.
+ * Latter we reuse it as zero shadow for large ranges of memory
+ * that allowed to access, but not instrumented by kasan
+ * (vmalloc/vmemmap ...).
+ */
+static unsigned char kasan_zero_page[PAGE_SIZE] __page_aligned_bss;
 
 static int __init map_range(struct range *range)
 {
@@ -36,7 +49,7 @@ static void __init clear_pgds(unsigned long start,
 		pgd_clear(pgd_offset_k(start));
 }
 
-void __init kasan_map_early_shadow(pgd_t *pgd)
+static void __init kasan_map_early_shadow(pgd_t *pgd)
 {
 	int i;
 	unsigned long start = KASAN_SHADOW_START;
@@ -73,7 +86,7 @@ static int __init zero_pmd_populate(pud_t *pud, unsigned long addr,
 	while (IS_ALIGNED(addr, PMD_SIZE) && addr + PMD_SIZE <= end) {
 		WARN_ON(!pmd_none(*pmd));
 		set_pmd(pmd, __pmd(__pa_nodebug(kasan_zero_pte)
-					| __PAGE_KERNEL_RO));
+					| _KERNPG_TABLE));
 		addr += PMD_SIZE;
 		pmd = pmd_offset(pud, addr);
 	}
@@ -99,7 +112,7 @@ static int __init zero_pud_populate(pgd_t *pgd, unsigned long addr,
 	while (IS_ALIGNED(addr, PUD_SIZE) && addr + PUD_SIZE <= end) {
 		WARN_ON(!pud_none(*pud));
 		set_pud(pud, __pud(__pa_nodebug(kasan_zero_pmd)
-					| __PAGE_KERNEL_RO));
+					| _KERNPG_TABLE));
 		addr += PUD_SIZE;
 		pud = pud_offset(pgd, addr);
 	}
@@ -124,7 +137,7 @@ static int __init zero_pgd_populate(unsigned long addr, unsigned long end)
 	while (IS_ALIGNED(addr, PGDIR_SIZE) && addr + PGDIR_SIZE <= end) {
 		WARN_ON(!pgd_none(*pgd));
 		set_pgd(pgd, __pgd(__pa_nodebug(kasan_zero_pud)
-					| __PAGE_KERNEL_RO));
+					| _KERNPG_TABLE));
 		addr += PGDIR_SIZE;
 		pgd = pgd_offset_k(addr);
 	}
@@ -166,6 +179,26 @@ static struct notifier_block kasan_die_notifier = {
 };
 #endif
 
+void __init kasan_early_init(void)
+{
+	int i;
+	pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL;
+	pmdval_t pmd_val = __pa_nodebug(kasan_zero_pte) | _KERNPG_TABLE;
+	pudval_t pud_val = __pa_nodebug(kasan_zero_pmd) | _KERNPG_TABLE;
+
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		kasan_zero_pte[i] = __pte(pte_val);
+
+	for (i = 0; i < PTRS_PER_PMD; i++)
+		kasan_zero_pmd[i] = __pmd(pmd_val);
+
+	for (i = 0; i < PTRS_PER_PUD; i++)
+		kasan_zero_pud[i] = __pud(pud_val);
+
+	kasan_map_early_shadow(early_level4_pgt);
+	kasan_map_early_shadow(init_level4_pgt);
+}
+
 void __init kasan_init(void)
 {
 	int i;
@@ -176,6 +209,7 @@ void __init kasan_init(void)
 
 	memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt));
 	load_cr3(early_level4_pgt);
+	__flush_tlb_all();
 
 	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
@@ -202,5 +236,8 @@ void __init kasan_init(void)
 	memset(kasan_zero_page, 0, PAGE_SIZE);
 
 	load_cr3(init_level4_pgt);
+	__flush_tlb_all();
 	init_task.kasan_depth = 0;
+
+	pr_info("Kernel address sanitizer initialized\n");
 }
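
The zero-shadow trick above is cheap because every level aliases one lower-level table: all 512 PTEs point at the same zero page, all 512 PMD entries at the same PTE table, and so on up the hierarchy. A back-of-the-envelope check of the reach, assuming 4 KiB pages, 512 entries per table, and KASAN's usual 8-to-1 byte scaling (a sketch, not kernel code):

	#include <stdio.h>

	int main(void)
	{
		unsigned long long page = 4096, entries = 512;
		unsigned long long shadow = page * entries * entries * entries;

		/* KASAN maps 8 bytes of kernel memory per shadow byte */
		printf("zero shadow spans %llu GiB, backing %llu GiB of memory\n",
		       shadow >> 30, (shadow * 8) >> 30);
		return 0;
	}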

View file

@@ -352,13 +352,16 @@ static int acpi_lpss_create_device(struct acpi_device *adev,
 			pdata->mmio_size = resource_size(rentry->res);
 			pdata->mmio_base = ioremap(rentry->res->start,
 						   pdata->mmio_size);
-			if (!pdata->mmio_base)
-				goto err_out;
 			break;
 		}
 
 	acpi_dev_free_resource_list(&resource_list);
 
+	if (!pdata->mmio_base) {
+		ret = -ENOMEM;
+		goto err_out;
+	}
+
 	pdata->dev_desc = dev_desc;
 
 	if (dev_desc->setup)
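
The reordering fixes two things at once: the temporary resource list is now freed on every path, and a device with no usable MMIO resource at all (not just a failed ioremap()) is rejected with an explicit -ENOMEM. A minimal sketch of the cleanup-then-check shape, with invented names:

	#include <stdlib.h>

	struct pdata { void *mmio_base; };

	static int create_device(struct pdata *pdata)
	{
		void *resource_list = malloc(64);	/* stands in for the ACPI resource list */
		int ret = 0;

		/* ... walk the list; a matching entry may set pdata->mmio_base ... */

		free(resource_list);			/* freed on every path, exactly once */

		if (!pdata->mmio_base) {
			ret = -12;			/* -ENOMEM, as in the patch */
			goto err_out;
		}
		return 0;

	err_out:
		/* undo any partial setup here */
		return ret;
	}

	int main(void)
	{
		struct pdata p = { 0 };

		return create_device(&p) == -12 ? 0 : 1;
	}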

View file

@@ -18,6 +18,7 @@
 #include <linux/list.h>
 #include <linux/acpi.h>
 #include <linux/sort.h>
+#include <linux/pmem.h>
 #include <linux/io.h>
 #include "nfit.h"
 
@@ -305,6 +306,23 @@ static bool add_idt(struct acpi_nfit_desc *acpi_desc,
 	return true;
 }
 
+static bool add_flush(struct acpi_nfit_desc *acpi_desc,
+		struct acpi_nfit_flush_address *flush)
+{
+	struct device *dev = acpi_desc->dev;
+	struct nfit_flush *nfit_flush = devm_kzalloc(dev, sizeof(*nfit_flush),
+			GFP_KERNEL);
+
+	if (!nfit_flush)
+		return false;
+	INIT_LIST_HEAD(&nfit_flush->list);
+	nfit_flush->flush = flush;
+	list_add_tail(&nfit_flush->list, &acpi_desc->flushes);
+	dev_dbg(dev, "%s: nfit_flush handle: %d hint_count: %d\n", __func__,
+			flush->device_handle, flush->hint_count);
+	return true;
+}
+
 static void *add_table(struct acpi_nfit_desc *acpi_desc, void *table,
 		const void *end)
 {
@@ -338,7 +356,8 @@ static void *add_table(struct acpi_nfit_desc *acpi_desc, void *table,
 			return err;
 		break;
 	case ACPI_NFIT_TYPE_FLUSH_ADDRESS:
-		dev_dbg(dev, "%s: flush\n", __func__);
+		if (!add_flush(acpi_desc, table))
+			return err;
 		break;
 	case ACPI_NFIT_TYPE_SMBIOS:
 		dev_dbg(dev, "%s: smbios\n", __func__);
@@ -389,6 +408,7 @@ static int nfit_mem_add(struct acpi_nfit_desc *acpi_desc,
 {
 	u16 dcr = __to_nfit_memdev(nfit_mem)->region_index;
 	struct nfit_memdev *nfit_memdev;
+	struct nfit_flush *nfit_flush;
 	struct nfit_dcr *nfit_dcr;
 	struct nfit_bdw *nfit_bdw;
 	struct nfit_idt *nfit_idt;
@@ -442,6 +462,14 @@ static int nfit_mem_add(struct acpi_nfit_desc *acpi_desc,
 			nfit_mem->idt_bdw = nfit_idt->idt;
 			break;
 		}
+
+		list_for_each_entry(nfit_flush, &acpi_desc->flushes, list) {
+			if (nfit_flush->flush->device_handle !=
+					nfit_memdev->memdev->device_handle)
+				continue;
+			nfit_mem->nfit_flush = nfit_flush;
+			break;
+		}
 		break;
 	}
 
@@ -978,6 +1006,24 @@ static u64 to_interleave_offset(u64 offset, struct nfit_blk_mmio *mmio)
 	return mmio->base_offset + line_offset + table_offset + sub_line_offset;
 }
 
+static void wmb_blk(struct nfit_blk *nfit_blk)
+{
+	if (nfit_blk->nvdimm_flush) {
+		/*
+		 * The first wmb() is needed to 'sfence' all previous writes
+		 * such that they are architecturally visible for the platform
+		 * buffer flush. Note that we've already arranged for pmem
+		 * writes to avoid the cache via arch_memcpy_to_pmem(). The
+		 * final wmb() ensures ordering for the NVDIMM flush write.
+		 */
+		wmb();
+		writeq(1, nfit_blk->nvdimm_flush);
+		wmb();
+	} else
+		wmb_pmem();
+}
+
 static u64 read_blk_stat(struct nfit_blk *nfit_blk, unsigned int bw)
 {
 	struct nfit_blk_mmio *mmio = &nfit_blk->mmio[DCR];
@@ -1012,7 +1058,10 @@ static void write_blk_ctl(struct nfit_blk *nfit_blk, unsigned int bw,
 		offset = to_interleave_offset(offset, mmio);
 
 	writeq(cmd, mmio->base + offset);
-	/* FIXME: conditionally perform read-back if mandated by firmware */
+	wmb_blk(nfit_blk);
+
+	if (nfit_blk->dimm_flags & ND_BLK_DCR_LATCH)
+		readq(mmio->base + offset);
 }
 
 static int acpi_nfit_blk_single_io(struct nfit_blk *nfit_blk,
@@ -1026,7 +1075,6 @@ static int acpi_nfit_blk_single_io(struct nfit_blk *nfit_blk,
 	base_offset = nfit_blk->bdw_offset + dpa % L1_CACHE_BYTES
 		+ lane * mmio->size;
-	/* TODO: non-temporal access, flush hints, cache management etc... */
 	write_blk_ctl(nfit_blk, lane, dpa, len, rw);
 	while (len) {
 		unsigned int c;
@@ -1045,13 +1093,19 @@ static int acpi_nfit_blk_single_io(struct nfit_blk *nfit_blk,
 		}
 
 		if (rw)
-			memcpy(mmio->aperture + offset, iobuf + copied, c);
+			memcpy_to_pmem(mmio->aperture + offset,
+					iobuf + copied, c);
 		else
-			memcpy(iobuf + copied, mmio->aperture + offset, c);
+			memcpy_from_pmem(iobuf + copied,
+					mmio->aperture + offset, c);
 
 		copied += c;
 		len -= c;
 	}
+
+	if (rw)
+		wmb_blk(nfit_blk);
+
 	rc = read_blk_stat(nfit_blk, lane) ? -EIO : 0;
 	return rc;
 }
@@ -1124,7 +1178,7 @@ static void nfit_spa_unmap(struct acpi_nfit_desc *acpi_desc,
 }
 
 static void __iomem *__nfit_spa_map(struct acpi_nfit_desc *acpi_desc,
-		struct acpi_nfit_system_address *spa)
+		struct acpi_nfit_system_address *spa, enum spa_map_type type)
 {
 	resource_size_t start = spa->address;
 	resource_size_t n = spa->length;
@@ -1152,8 +1206,15 @@ static void __iomem *__nfit_spa_map(struct acpi_nfit_desc *acpi_desc,
 	if (!res)
 		goto err_mem;
 
-	/* TODO: cacheability based on the spa type */
-	spa_map->iomem = ioremap_nocache(start, n);
+	if (type == SPA_MAP_APERTURE) {
+		/*
+		 * TODO: memremap_pmem() support, but that requires cache
+		 * flushing when the aperture is moved.
+		 */
+		spa_map->iomem = ioremap_wc(start, n);
+	} else
+		spa_map->iomem = ioremap_nocache(start, n);
+
 	if (!spa_map->iomem)
 		goto err_map;
 
@@ -1171,6 +1232,7 @@ static void __iomem *__nfit_spa_map(struct acpi_nfit_desc *acpi_desc,
  * nfit_spa_map - interleave-aware managed-mappings of acpi_nfit_system_address ranges
  * @nvdimm_bus: NFIT-bus that provided the spa table entry
  * @nfit_spa: spa table to map
+ * @type: aperture or control region
  *
  * In the case where block-data-window apertures and
 * dimm-control-regions are interleaved they will end up sharing a
@@ -1180,12 +1242,12 @@ static void __iomem *__nfit_spa_map(struct acpi_nfit_desc *acpi_desc,
 * unbound.
 */
 static void __iomem *nfit_spa_map(struct acpi_nfit_desc *acpi_desc,
-		struct acpi_nfit_system_address *spa)
+		struct acpi_nfit_system_address *spa, enum spa_map_type type)
 {
 	void __iomem *iomem;
 
 	mutex_lock(&acpi_desc->spa_map_mutex);
-	iomem = __nfit_spa_map(acpi_desc, spa);
+	iomem = __nfit_spa_map(acpi_desc, spa, type);
 	mutex_unlock(&acpi_desc->spa_map_mutex);
 
 	return iomem;
@@ -1206,12 +1268,35 @@ static int nfit_blk_init_interleave(struct nfit_blk_mmio *mmio,
 	return 0;
 }
 
+static int acpi_nfit_blk_get_flags(struct nvdimm_bus_descriptor *nd_desc,
+		struct nvdimm *nvdimm, struct nfit_blk *nfit_blk)
+{
+	struct nd_cmd_dimm_flags flags;
+	int rc;
+
+	memset(&flags, 0, sizeof(flags));
+	rc = nd_desc->ndctl(nd_desc, nvdimm, ND_CMD_DIMM_FLAGS, &flags,
+			sizeof(flags));
+
+	if (rc >= 0 && flags.status == 0)
+		nfit_blk->dimm_flags = flags.flags;
+	else if (rc == -ENOTTY) {
+		/* fall back to a conservative default */
+		nfit_blk->dimm_flags = ND_BLK_DCR_LATCH;
+		rc = 0;
+	} else
+		rc = -ENXIO;
+
+	return rc;
+}
+
 static int acpi_nfit_blk_region_enable(struct nvdimm_bus *nvdimm_bus,
 		struct device *dev)
 {
 	struct nvdimm_bus_descriptor *nd_desc = to_nd_desc(nvdimm_bus);
 	struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
 	struct nd_blk_region *ndbr = to_nd_blk_region(dev);
+	struct nfit_flush *nfit_flush;
 	struct nfit_blk_mmio *mmio;
 	struct nfit_blk *nfit_blk;
 	struct nfit_mem *nfit_mem;
@@ -1223,8 +1308,8 @@ static int acpi_nfit_blk_region_enable(struct nvdimm_bus *nvdimm_bus,
 	if (!nfit_mem || !nfit_mem->dcr || !nfit_mem->bdw) {
 		dev_dbg(dev, "%s: missing%s%s%s\n", __func__,
 				nfit_mem ? "" : " nfit_mem",
-				nfit_mem->dcr ? "" : " dcr",
-				nfit_mem->bdw ? "" : " bdw");
+				(nfit_mem && nfit_mem->dcr) ? "" : " dcr",
+				(nfit_mem && nfit_mem->bdw) ? "" : " bdw");
 		return -ENXIO;
 	}
 
@@ -1237,7 +1322,8 @@ static int acpi_nfit_blk_region_enable(struct nvdimm_bus *nvdimm_bus,
 	/* map block aperture memory */
 	nfit_blk->bdw_offset = nfit_mem->bdw->offset;
 	mmio = &nfit_blk->mmio[BDW];
-	mmio->base = nfit_spa_map(acpi_desc, nfit_mem->spa_bdw);
+	mmio->base = nfit_spa_map(acpi_desc, nfit_mem->spa_bdw,
+			SPA_MAP_APERTURE);
 	if (!mmio->base) {
 		dev_dbg(dev, "%s: %s failed to map bdw\n", __func__,
 				nvdimm_name(nvdimm));
@@ -1259,7 +1345,8 @@ static int acpi_nfit_blk_region_enable(struct nvdimm_bus *nvdimm_bus,
 	nfit_blk->cmd_offset = nfit_mem->dcr->command_offset;
 	nfit_blk->stat_offset = nfit_mem->dcr->status_offset;
 	mmio = &nfit_blk->mmio[DCR];
-	mmio->base = nfit_spa_map(acpi_desc, nfit_mem->spa_dcr);
+	mmio->base = nfit_spa_map(acpi_desc, nfit_mem->spa_dcr,
+			SPA_MAP_CONTROL);
 	if (!mmio->base) {
 		dev_dbg(dev, "%s: %s failed to map dcr\n", __func__,
 				nvdimm_name(nvdimm));
@@ -1277,6 +1364,24 @@ static int acpi_nfit_blk_region_enable(struct nvdimm_bus *nvdimm_bus,
 		return rc;
 	}
 
+	rc = acpi_nfit_blk_get_flags(nd_desc, nvdimm, nfit_blk);
+	if (rc < 0) {
+		dev_dbg(dev, "%s: %s failed get DIMM flags\n",
+				__func__, nvdimm_name(nvdimm));
+		return rc;
+	}
+
+	nfit_flush = nfit_mem->nfit_flush;
+	if (nfit_flush && nfit_flush->flush->hint_count != 0) {
+		nfit_blk->nvdimm_flush = devm_ioremap_nocache(dev,
+				nfit_flush->flush->hint_address[0], 8);
+		if (!nfit_blk->nvdimm_flush)
+			return -ENOMEM;
+	}
+
+	if (!arch_has_pmem_api() && !nfit_blk->nvdimm_flush)
+		dev_warn(dev, "unable to guarantee persistence of writes\n");
+
 	if (mmio->line_size == 0)
 		return 0;
 
@@ -1459,6 +1564,7 @@ int acpi_nfit_init(struct acpi_nfit_desc *acpi_desc, acpi_size sz)
 	INIT_LIST_HEAD(&acpi_desc->dcrs);
 	INIT_LIST_HEAD(&acpi_desc->bdws);
 	INIT_LIST_HEAD(&acpi_desc->idts);
+	INIT_LIST_HEAD(&acpi_desc->flushes);
 	INIT_LIST_HEAD(&acpi_desc->memdevs);
 	INIT_LIST_HEAD(&acpi_desc->dimms);
 	mutex_init(&acpi_desc->spa_map_mutex);

View file

@@ -40,6 +40,10 @@ enum nfit_uuids {
 	NFIT_UUID_MAX,
 };
 
+enum {
+	ND_BLK_DCR_LATCH = 2,
+};
+
 struct nfit_spa {
 	struct acpi_nfit_system_address *spa;
 	struct list_head list;
@@ -60,6 +64,11 @@ struct nfit_idt {
 	struct list_head list;
 };
 
+struct nfit_flush {
+	struct acpi_nfit_flush_address *flush;
+	struct list_head list;
+};
+
 struct nfit_memdev {
 	struct acpi_nfit_memory_map *memdev;
 	struct list_head list;
@@ -77,6 +86,7 @@ struct nfit_mem {
 	struct acpi_nfit_system_address *spa_bdw;
 	struct acpi_nfit_interleave *idt_dcr;
 	struct acpi_nfit_interleave *idt_bdw;
+	struct nfit_flush *nfit_flush;
 	struct list_head list;
 	struct acpi_device *adev;
 	unsigned long dsm_mask;
@@ -88,6 +98,7 @@ struct acpi_nfit_desc {
 	struct mutex spa_map_mutex;
 	struct list_head spa_maps;
 	struct list_head memdevs;
+	struct list_head flushes;
 	struct list_head dimms;
 	struct list_head spas;
 	struct list_head dcrs;
@@ -109,7 +120,7 @@ struct nfit_blk {
 	struct nfit_blk_mmio {
 		union {
 			void __iomem *base;
-			void *aperture;
+			void __pmem *aperture;
 		};
 		u64 size;
 		u64 base_offset;
@@ -123,6 +134,13 @@ struct nfit_blk {
 	u64 bdw_offset; /* post interleave offset */
 	u64 stat_offset;
 	u64 cmd_offset;
+	void __iomem *nvdimm_flush;
+	u32 dimm_flags;
+};
+
+enum spa_map_type {
+	SPA_MAP_CONTROL,
+	SPA_MAP_APERTURE,
 };
 
 struct nfit_spa_mapping {

View file

@@ -175,10 +175,14 @@ static void __init acpi_request_region (struct acpi_generic_address *gas,
 	if (!addr || !length)
 		return;
 
-	acpi_reserve_region(addr, length, gas->space_id, 0, desc);
+	/* Resources are never freed */
+	if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_IO)
+		request_region(addr, length, desc);
+	else if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
+		request_mem_region(addr, length, desc);
 }
 
-static void __init acpi_reserve_resources(void)
+static int __init acpi_reserve_resources(void)
 {
 	acpi_request_region(&acpi_gbl_FADT.xpm1a_event_block, acpi_gbl_FADT.pm1_event_length,
 		"ACPI PM1a_EVT_BLK");
@@ -207,7 +211,10 @@ static void __init acpi_reserve_resources(void)
 	if (!(acpi_gbl_FADT.gpe1_block_length & 0x1))
 		acpi_request_region(&acpi_gbl_FADT.xgpe1_block,
 			       acpi_gbl_FADT.gpe1_block_length, "ACPI GPE1_BLK");
+
+	return 0;
 }
+fs_initcall_sync(acpi_reserve_resources);
 
 void acpi_os_printf(const char *fmt, ...)
 {
@@ -1862,7 +1869,6 @@ acpi_status __init acpi_os_initialize(void)
 
 acpi_status __init acpi_os_initialize1(void)
 {
-	acpi_reserve_resources();
 	kacpid_wq = alloc_workqueue("kacpid", 0, 1);
 	kacpi_notify_wq = alloc_workqueue("kacpi_notify", 0, 1);
 	kacpi_hotplug_wq = alloc_ordered_workqueue("kacpi_hotplug", 0);

View file

@@ -26,7 +26,6 @@
 #include <linux/device.h>
 #include <linux/export.h>
 #include <linux/ioport.h>
-#include <linux/list.h>
 #include <linux/slab.h>
 
 #ifdef CONFIG_X86
@@ -622,164 +621,3 @@ int acpi_dev_filter_resource_type(struct acpi_resource *ares,
 	return (type & types) ? 0 : 1;
 }
 EXPORT_SYMBOL_GPL(acpi_dev_filter_resource_type);
-
-struct reserved_region {
-	struct list_head node;
-	u64 start;
-	u64 end;
-};
-
-static LIST_HEAD(reserved_io_regions);
-static LIST_HEAD(reserved_mem_regions);
-
-static int request_range(u64 start, u64 end, u8 space_id, unsigned long flags,
-			 char *desc)
-{
-	unsigned int length = end - start + 1;
-	struct resource *res;
-
-	res = space_id == ACPI_ADR_SPACE_SYSTEM_IO ?
-		request_region(start, length, desc) :
-		request_mem_region(start, length, desc);
-	if (!res)
-		return -EIO;
-
-	res->flags &= ~flags;
-	return 0;
-}
-
-static int add_region_before(u64 start, u64 end, u8 space_id,
-			     unsigned long flags, char *desc,
-			     struct list_head *head)
-{
-	struct reserved_region *reg;
-	int error;
-
-	reg = kmalloc(sizeof(*reg), GFP_KERNEL);
-	if (!reg)
-		return -ENOMEM;
-
-	error = request_range(start, end, space_id, flags, desc);
-	if (error) {
-		kfree(reg);
-		return error;
-	}
-
-	reg->start = start;
-	reg->end = end;
-	list_add_tail(&reg->node, head);
-	return 0;
-}
-
-/**
- * acpi_reserve_region - Reserve an I/O or memory region as a system resource.
- * @start: Starting address of the region.
- * @length: Length of the region.
- * @space_id: Identifier of address space to reserve the region from.
- * @flags: Resource flags to clear for the region after requesting it.
- * @desc: Region description (for messages).
- *
- * Reserve an I/O or memory region as a system resource to prevent others from
- * using it. If the new region overlaps with one of the regions (in the given
- * address space) already reserved by this routine, only the non-overlapping
- * parts of it will be reserved.
- *
- * Returned is either 0 (success) or a negative error code indicating a resource
- * reservation problem. It is the code of the first encountered error, but the
- * routine doesn't abort until it has attempted to request all of the parts of
- * the new region that don't overlap with other regions reserved previously.
- *
- * The resources requested by this routine are never released.
- */
-int acpi_reserve_region(u64 start, unsigned int length, u8 space_id,
-			unsigned long flags, char *desc)
-{
-	struct list_head *regions;
-	struct reserved_region *reg;
-	u64 end = start + length - 1;
-	int ret = 0, error = 0;
-
-	if (space_id == ACPI_ADR_SPACE_SYSTEM_IO)
-		regions = &reserved_io_regions;
-	else if (space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
-		regions = &reserved_mem_regions;
-	else
-		return -EINVAL;
-
-	if (list_empty(regions))
-		return add_region_before(start, end, space_id, flags, desc, regions);
-
-	list_for_each_entry(reg, regions, node)
-		if (reg->start == end + 1) {
-			/* The new region can be prepended to this one. */
-			ret = request_range(start, end, space_id, flags, desc);
-			if (!ret)
-				reg->start = start;
-
-			return ret;
-		} else if (reg->start > end) {
-			/* No overlap. Add the new region here and get out. */
-			return add_region_before(start, end, space_id, flags,
-						 desc, &reg->node);
-		} else if (reg->end == start - 1) {
-			goto combine;
-		} else if (reg->end >= start) {
-			goto overlap;
-		}
-
-	/* The new region goes after the last existing one. */
-	return add_region_before(start, end, space_id, flags, desc, regions);
-
-overlap:
-	/*
-	 * The new region overlaps an existing one.
-	 *
-	 * The head part of the new region immediately preceding the existing
-	 * overlapping one can be combined with it right away.
-	 */
-	if (reg->start > start) {
-		error = request_range(start, reg->start - 1, space_id, flags, desc);
-		if (error)
-			ret = error;
-		else
-			reg->start = start;
-	}
-
-combine:
-	/*
-	 * The new region is adjacent to an existing one. If it extends beyond
-	 * that region all the way to the next one, it is possible to combine
-	 * all three of them.
-	 */
-	while (reg->end < end) {
-		struct reserved_region *next = NULL;
-		u64 a = reg->end + 1, b = end;
-
-		if (!list_is_last(&reg->node, regions)) {
-			next = list_next_entry(reg, node);
-			if (next->start <= end)
-				b = next->start - 1;
-		}
-		error = request_range(a, b, space_id, flags, desc);
-		if (!error) {
-			if (next && next->start == b + 1) {
-				reg->end = next->end;
-				list_del(&next->node);
-				kfree(next);
-			} else {
-				reg->end = end;
-				break;
-			}
-		} else if (next) {
-			if (!ret)
-				ret = error;
-
-			reg = next;
-		} else {
-			break;
-		}
-	}
-
-	return ret ? ret : error;
-}
-EXPORT_SYMBOL_GPL(acpi_reserve_region);

Some files were not shown because too many files have changed in this diff.