Commit graph

601979 commits

Author SHA1 Message Date
Linux Build Service Account
697b92b1d2 Merge "ARM: dts: msm: Add a reset gpio for ethernet on msm8996 CV2X boards" 2018-08-27 18:28:17 -07:00
Linux Build Service Account
2119f6469d Merge "cfg80211: never ignore user regulatory hint" 2018-08-27 18:28:16 -07:00
Linux Build Service Account
cfcc5dbf73 Merge "Merge android-4.4.148 (f057ff9) into msm-4.4" 2018-08-27 18:28:15 -07:00
Linux Build Service Account
f63b4db1ff Merge "icnss: Clear ICNSS_MSA0_ASSIGNED flag in cap failure case" 2018-08-27 18:28:15 -07:00
Linux Build Service Account
80f732dc0f Merge "msm: ais: change csid to avoid overflow" 2018-08-27 18:28:13 -07:00
Aditya Mathur
ed45afa1c5 ARM: dts: msm: Add a reset gpio for ethernet on msm8996 CV2X boards
Enable the reset gpio for Neutrino ethernet on msm8996
CV2X boards.

Change-Id: I6b00a76640184d34feee382cd1c6de1427464719
Signed-off-by: Aditya Mathur <aditmath@codeaurora.org>
2018-08-24 14:09:28 -07:00
Hardik Kantilal Patel
60e1b7e682 icnss: Clear ICNSS_MSA0_ASSIGNED flag in cap failure case
When the capability QMI message fails, the ICNSS_MSA0_ASSIGNED
flag is not cleared. Because of this, after the next PDR/SSR the
driver does not reconfigure the MSA0 permission for the Q6, which
results in a NOC error since the Q6 has no access permission.

To address the above issue, clear the ICNSS_MSA0_ASSIGNED bit in
the failure case.

CRs-Fixed: 2300877
Change-Id: I6aeaedb5a394b843c4f1c8ef1e0be47a6947b331
Signed-off-by: Hardik Kantilal Patel <hkpatel@codeaurora.org>
2018-08-24 14:19:26 +05:30
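
A minimal sketch of the fix described above; icnss_priv, icnss_send_cap_msg
and the surrounding flow are hypothetical stand-ins for the actual icnss
driver symbols:

    /* Clear the MSA0 flag when the capability QMI request fails, so
     * the next PDR/SSR re-assigns the MSA0 permission to the Q6.
     * Names here are illustrative, not the actual icnss API.
     */
    static int icnss_request_capabilities(struct icnss_priv *priv)
    {
            int ret = icnss_send_cap_msg(priv);     /* hypothetical QMI call */

            if (ret < 0) {
                    /* Without this, MSA0 is never re-assigned after
                     * PDR/SSR and the Q6 access triggers a NOC error. */
                    clear_bit(ICNSS_MSA0_ASSIGNED, &priv->state);
            }
            return ret;
    }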
Linux Build Service Account
87d4444575 Merge "soc: qcom: hab: fix the incompatible pointer initialization warning" 2018-08-23 23:15:46 -07:00
Linux Build Service Account
710f9d4cd8 Merge "defconfig: gvm: enable TCPMSS and RPFILTER" 2018-08-23 23:15:43 -07:00
Linux Build Service Account
f7561e5ac4 Merge "ARM: dts: msm: Enable upscaling on Sharp Dual DSI panel" 2018-08-23 23:15:39 -07:00
Amar Singhal
f091e9f98b cfg80211: never ignore user regulatory hint
Currently a user regulatory hint is ignored if all wiphys in the
system are self-managed, yet it is honored when there is no wiphy
in the system at all. This inconsistency affects the global
regulatory setting, which must be maintained so that it can be
applied to any new wiphy entering the system. Therefore, do not
ignore the user regulatory hint even if all wiphys in the system
are self-managed.

Signed-off-by: Amar Singhal <asinghal@codeaurora.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Change-Id: I468fcd3403259b03369e011fa41b003e8ff33d3c
CRs-Fixed: 2276224
Git-commit: e31f6456c01c76f154e1b25cd54df97809a49edb
Git-repo: https://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git
Signed-off-by: Amar Singhal <asinghal@codeaurora.org>
2018-08-23 13:26:19 -07:00
Srinivasarao P
79de04d806 Merge android-4.4.148 (f057ff9) into msm-4.4
* refs/heads/tmp-f057ff9
  Linux 4.4.148
  x86/speculation/l1tf: Unbreak !__HAVE_ARCH_PFN_MODIFY_ALLOWED architectures
  x86/init: fix build with CONFIG_SWAP=n
  x86/speculation/l1tf: Fix up CPU feature flags
  x86/mm/kmmio: Make the tracer robust against L1TF
  x86/mm/pat: Make set_memory_np() L1TF safe
  x86/speculation/l1tf: Make pmd/pud_mknotpresent() invert
  x86/speculation/l1tf: Invert all not present mappings
  x86/speculation/l1tf: Fix up pte->pfn conversion for PAE
  x86/speculation/l1tf: Protect PAE swap entries against L1TF
  x86/cpufeatures: Add detection of L1D cache flush support.
  x86/speculation/l1tf: Extend 64bit swap file size limit
  x86/bugs: Move the l1tf function and define pr_fmt properly
  x86/speculation/l1tf: Limit swap file size to MAX_PA/2
  x86/speculation/l1tf: Disallow non privileged high MMIO PROT_NONE mappings
  mm: fix cache mode tracking in vm_insert_mixed()
  mm: Add vm_insert_pfn_prot()
  x86/speculation/l1tf: Add sysfs reporting for l1tf
  x86/speculation/l1tf: Make sure the first page is always reserved
  x86/speculation/l1tf: Protect PROT_NONE PTEs against speculation
  x86/speculation/l1tf: Protect swap entries against L1TF
  x86/speculation/l1tf: Change order of offset/type in swap entry
  mm: x86: move _PAGE_SWP_SOFT_DIRTY from bit 7 to bit 1
  x86/mm: Fix swap entry comment and macro
  x86/mm: Move swap offset/type up in PTE to work around erratum
  x86/speculation/l1tf: Increase 32bit PAE __PHYSICAL_PAGE_SHIFT
  x86/irqflags: Provide a declaration for native_save_fl
  kprobes/x86: Fix %p uses in error messages
  x86/speculation: Protect against userspace-userspace spectreRSB
  x86/paravirt: Fix spectre-v2 mitigations for paravirt guests
  ARM: dts: imx6sx: fix irq for pcie bridge
  IB/ocrdma: fix out of bounds access to local buffer
  IB/mlx4: Mark user MR as writable if actual virtual memory is writable
  IB/core: Make testing MR flags for writability a static inline function
  fix __legitimize_mnt()/mntput() race
  fix mntput/mntput race
  root dentries need RCU-delayed freeing
  scsi: sr: Avoid that opening a CD-ROM hangs with runtime power management enabled
  ACPI / LPSS: Add missing prv_offset setting for byt/cht PWM devices
  xen/netfront: don't cache skb_shinfo()
  parisc: Define mb() and add memory barriers to assembler unlock sequences
  parisc: Enable CONFIG_MLONGCALLS by default
  fork: unconditionally clear stack on fork
  ipv4+ipv6: Make INET*_ESP select CRYPTO_ECHAINIV
  tpm: fix race condition in tpm_common_write()
  ext4: fix check to prevent initializing reserved inodes
  Linux 4.4.147
  jfs: Fix inconsistency between memory allocation and ea_buf->max_size
  i2c: imx: Fix reinit_completion() use
  ring_buffer: tracing: Inherit the tracing setting to next ring buffer
  ACPI / PCI: Bail early in acpi_pci_add_bus() if there is no ACPI handle
  ext4: fix false negatives *and* false positives in ext4_check_descriptors()
  netlink: Don't shift on 64 for ngroups
  netlink: Don't shift with UB on nlk->ngroups
  netlink: Do not subscribe to non-existent groups
  nohz: Fix local_timer_softirq_pending()
  genirq: Make force irq threading setup more robust
  scsi: qla2xxx: Return error when TMF returns
  scsi: qla2xxx: Fix ISP recovery on unload

Conflicts:
	include/linux/swapfile.h

Removed CONFIG_CRYPTO_ECHAINIV from the defconfig files since this
upmerge adds the config to the Kconfig file.

Change-Id: Ide96c29f919d76590c2bdccf356d1d464a892fd7
Signed-off-by: Srinivasarao P <spathi@codeaurora.org>
2018-08-24 00:07:01 +05:30
Yong Ding
8bdbc287ee soc: qcom: hab: fix the incompatible pointer initialization warning
Such warning of "initialization from incompatible pointer type"
is found in the build time, and it's good to fix it.

Change-Id: Iaf820ae7ec4a7851185febbdebaaab3706fb2402
Signed-off-by: Yong Ding <yongding@codeaurora.org>
2018-08-23 13:22:32 +08:00
Nijun Gong
b8932c685d defconfig: gvm: enable TCPMSS and RPFILTER
The WLAN tethering function depends on these options.

Change-Id: Ia00c752b46b23e9e4955e09bb9d69231a3b6cabc
Signed-off-by: Nijun Gong <ngong@codeaurora.org>
2018-08-22 20:14:20 +08:00
Linux Build Service Account
c25026979f Merge "iommu/arm-smmu: Add Hibernation support" 2018-08-21 13:10:24 -07:00
Siddhartha Agrawal
ace0b79c2f iommu/arm-smmu: Add Hibernation support
This adds support for saving the arm-smmu client's context
just before going into hibernation. The context is restored
on the subsequent hibernate restore.
Also, invalidate the TLB during the restore phase to avoid
wrong translations post-resume.

Change-Id: Idd8d12bb4d13f8a62bd51e0adaad82bd92f658ee
Signed-off-by: vkakani <vkakani@codeaurora.org>
Signed-off-by: Arun KS <arunks@codeaurora.org>
Signed-off-by: Atul Raut <araut@codeaurora.org>
Signed-off-by: Siddhartha Agrawal <agrawals@codeaurora.org>
2018-08-21 11:16:37 -07:00
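
A hedged sketch of hibernation hooks along the lines the commit describes;
the callbacks and helpers (arm_smmu_save_ctx, arm_smmu_restore_ctx,
arm_smmu_tlb_inv_all) are illustrative, not the actual driver symbols:

    static int arm_smmu_pm_freeze(struct device *dev)
    {
            struct arm_smmu_device *smmu = dev_get_drvdata(dev);

            arm_smmu_save_ctx(smmu);        /* hypothetical: snapshot client context */
            return 0;
    }

    static int arm_smmu_pm_restore(struct device *dev)
    {
            struct arm_smmu_device *smmu = dev_get_drvdata(dev);

            arm_smmu_restore_ctx(smmu);     /* hypothetical: replay the saved context */
            arm_smmu_tlb_inv_all(smmu);     /* flush stale translations post-resume */
            return 0;
    }

    static const struct dev_pm_ops arm_smmu_pm_ops = {
            .freeze  = arm_smmu_pm_freeze,
            .restore = arm_smmu_pm_restore,
    };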
Chunhuan Zhan
5373262ff2 msm: ais: change csid to avoid overflow
Check that the cid number is less than MAX_CID in the csid driver.

Change-Id: I16777dc8e8c72e01dc10490cd4c205c939adb7b5
Signed-off-by: Chunhuan Zhan <zhanc@codeaurora.org>
Signed-off-by: Rahul Sharma <rahsha@codeaurora.org>
2018-08-21 05:20:05 -07:00
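
A minimal sketch of the bounds check described above; the helper name is
hypothetical and MAX_CID stands for the driver's table size:

    /* Reject out-of-range cids before they are used to index
     * csid tables. */
    static int msm_csid_validate_cid(uint32_t cid)
    {
            if (cid >= MAX_CID)
                    return -EINVAL;
            return 0;
    }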
Jack Pham
4dabf448ae Revert "usb: phy: dual-role: update sysfs attrs when changed"
This reverts commit 563b2f7a6b.

The previous approach of dynamically updating the writeable
permission bits of the power/data_role attributes only works
if the userspace application has root permission since the
call to sysfs_update_group() removes and re-adds the files. If
they had previously been chown/chgrp'ed, the ownership would be
reset. On the other hand, if there was a ueventd rule to
dynamically update the ownership, then the mode would always
be overridden with the static umask given in the ueventd rule,
contradicting the driver's determination of writability.

Hence, the more comprehensive fix should be done in userspace
to not rely solely on writability. Still, this change needs
to be reverted since it can still cause a race between ueventd
and the userspace service trying to check writability.

Change-Id: Ic667a97f2bae41e5a86ee45565518b06db959b36
Signed-off-by: Jack Pham <jackp@codeaurora.org>
2018-08-21 00:54:08 -07:00
Linux Build Service Account
7454980b83 Merge "platform: msm: resolve NULL pointer dereference issue" 2018-08-20 08:15:52 -07:00
Linux Build Service Account
207be80505 Merge "msm: adsprpc: DSP device node to provide restricted access to ADSP/SLPI" 2018-08-20 00:38:29 -07:00
Linux Build Service Account
15288f73bd Merge "drm: msm: update dsi state context when splash is on" 2018-08-17 07:09:09 -07:00
Ankit Jain
57bafa4361 ARM: dts: msm: set qcom,guard-memory property for rmtfs on sdm660
This is needed to address the XPU limitation, so that the
shared memory is not contiguous with other memory allocations
that may happen from other clients in the system.

Change-Id: Ibc9961245f32ecc63892007a3d12b7956cf63e67
Signed-off-by: Ankit Jain <jankit@codeaurora.org>
2018-08-16 18:49:14 -07:00
Sahitya Tummala
7b270ed198 uio: msm_sharedmem: add guard page around shared memory
If the guard_memory dtsi property is set, then the shared memory
region will be guarded by SZ_4K of padding at the start and at the
end. This is needed to overcome an XPU limitation on some MSM
hardware, so as to make this memory not contiguous with other
allocations that may happen from other clients in the system.

Change-Id: I57637619cea8fe7f0f7254624e07177ea4a4fce0
Signed-off-by: Sahitya Tummala <stummala@codeaurora.org>
2018-08-16 08:00:12 -07:00
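
A hedged sketch of the padding described above; the helper name
(sharedmem_padded_size) and the flag wiring are illustrative, not the
actual msm_sharedmem code:

    /* With guard pages enabled, pad the region by SZ_4K at each end
     * so client data is never contiguous with a neighbouring
     * allocation. */
    static size_t sharedmem_padded_size(size_t size, bool guard_memory)
    {
            if (guard_memory)
                    return PAGE_ALIGN(size) + 2 * SZ_4K;
            return PAGE_ALIGN(size);
    }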
Yao Jiang
ac44aac0b2 platform: msm: resolve NULL pointer dereference issue
Fix some NULL pointer dereference flaws and uninitialized-parameter issues.

Change-Id: I0ed5f3f62c3794775bf97d353c4e50dd8ceb32da
Signed-off-by: Yao Jiang <yaojia@codeaurora.org>
2018-08-16 11:45:37 +08:00
Greg Kroah-Hartman
f057ff9377 This is the 4.4.148 stable release
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAlt0SdMACgkQONu9yGCS
 aT4NLxAAovDVqFFejBk8M1nxAtQSqRzB2PMboc+l62clKa6BAJtWAsgPFjECgzEp
 edlDeUttliQoTB6S3GYYM82oj50myUKlGvlJRptRE3Gr1iYubdB/U2RDmwEzCxbC
 AEzu4tEv+Z23jaLGsuAIOg66faBTqqgVoKtp/TlKwl+Y/b6WzkI0gRzxWTBFnAlj
 AKuhmoc1JoS9JF/MQ4q02gYSQ0g1eTpr1gIU2GMow9pK9Rahk4Jdl4yRjNLUFDxd
 ojrBYCoElf90R3q+NvmZBbzxwanm2OgzeEBffhh647aB5kHEUd5h4z9w+sIoXmSq
 50uD59q62Umdpp2O125HH5KHeHbcTUCXXp3g1VY6A/+d9dTs9GZqo//vf6aJsxEb
 gixoYyNbIcqw1k0jhEEW2ah3F3j+ZHvNmLKPyV4U8h2Tw2K5QKzFu/fVnQw7Xfv6
 Gv0z1TQ4Y+w2bqpzDiDBO4sRgKOXVr3hzWa0jggW5AoKWTco/oIVkE+Rqmj65AfK
 DROqCMQq75K+pymrM8I3wTXRSxtSH9bO/iqCu2LiiaG+JAkvr0OIHEHgizxLtAFO
 ivpREPDsWhVAYUmnoCgJa8Za1GdJk1I9uvxoJY1TBL8gbcYc61yjjeJDYqLghuNT
 EhrvFvJ4r/fQ6BJ76+rO7FSJIl+Kov2Uf7CWql3Lzxps6/u5GNQ=
 =73dO
 -----END PGP SIGNATURE-----

Merge 4.4.148 into android-4.4

Changes in 4.4.148
	ext4: fix check to prevent initializing reserved inodes
	tpm: fix race condition in tpm_common_write()
	ipv4+ipv6: Make INET*_ESP select CRYPTO_ECHAINIV
	fork: unconditionally clear stack on fork
	parisc: Enable CONFIG_MLONGCALLS by default
	parisc: Define mb() and add memory barriers to assembler unlock sequences
	xen/netfront: don't cache skb_shinfo()
	ACPI / LPSS: Add missing prv_offset setting for byt/cht PWM devices
	scsi: sr: Avoid that opening a CD-ROM hangs with runtime power management enabled
	root dentries need RCU-delayed freeing
	fix mntput/mntput race
	fix __legitimize_mnt()/mntput() race
	IB/core: Make testing MR flags for writability a static inline function
	IB/mlx4: Mark user MR as writable if actual virtual memory is writable
	IB/ocrdma: fix out of bounds access to local buffer
	ARM: dts: imx6sx: fix irq for pcie bridge
	x86/paravirt: Fix spectre-v2 mitigations for paravirt guests
	x86/speculation: Protect against userspace-userspace spectreRSB
	kprobes/x86: Fix %p uses in error messages
	x86/irqflags: Provide a declaration for native_save_fl
	x86/speculation/l1tf: Increase 32bit PAE __PHYSICAL_PAGE_SHIFT
	x86/mm: Move swap offset/type up in PTE to work around erratum
	x86/mm: Fix swap entry comment and macro
	mm: x86: move _PAGE_SWP_SOFT_DIRTY from bit 7 to bit 1
	x86/speculation/l1tf: Change order of offset/type in swap entry
	x86/speculation/l1tf: Protect swap entries against L1TF
	x86/speculation/l1tf: Protect PROT_NONE PTEs against speculation
	x86/speculation/l1tf: Make sure the first page is always reserved
	x86/speculation/l1tf: Add sysfs reporting for l1tf
	mm: Add vm_insert_pfn_prot()
	mm: fix cache mode tracking in vm_insert_mixed()
	x86/speculation/l1tf: Disallow non privileged high MMIO PROT_NONE mappings
	x86/speculation/l1tf: Limit swap file size to MAX_PA/2
	x86/bugs: Move the l1tf function and define pr_fmt properly
	x86/speculation/l1tf: Extend 64bit swap file size limit
	x86/cpufeatures: Add detection of L1D cache flush support.
	x86/speculation/l1tf: Protect PAE swap entries against L1TF
	x86/speculation/l1tf: Fix up pte->pfn conversion for PAE
	x86/speculation/l1tf: Invert all not present mappings
	x86/speculation/l1tf: Make pmd/pud_mknotpresent() invert
	x86/mm/pat: Make set_memory_np() L1TF safe
	x86/mm/kmmio: Make the tracer robust against L1TF
	x86/speculation/l1tf: Fix up CPU feature flags
	x86/init: fix build with CONFIG_SWAP=n
	x86/speculation/l1tf: Unbreak !__HAVE_ARCH_PFN_MODIFY_ALLOWED architectures
	Linux 4.4.148

Change-Id: I83c857d9d9d74ee47e61d15eb411f276f057ba3d
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
2018-08-15 18:20:41 +02:00
Greg Kroah-Hartman
30a97c1e2d Linux 4.4.148 2018-08-15 17:42:11 +02:00
Jiri Kosina
8f2adf3d21 x86/speculation/l1tf: Unbreak !__HAVE_ARCH_PFN_MODIFY_ALLOWED architectures
commit 6c26fcd2abfe0a56bbd95271fce02df2896cfd24 upstream.

pfn_modify_allowed() and arch_has_pfn_modify_check() are outside of the
!__ASSEMBLY__ section in include/asm-generic/pgtable.h, which confuses
assembler on archs that don't have __HAVE_ARCH_PFN_MODIFY_ALLOWED (e.g.
ia64) and breaks build:

    include/asm-generic/pgtable.h: Assembler messages:
    include/asm-generic/pgtable.h:538: Error: Unknown opcode `static inline bool pfn_modify_allowed(unsigned long pfn,pgprot_t prot)'
    include/asm-generic/pgtable.h:540: Error: Unknown opcode `return true'
    include/asm-generic/pgtable.h:543: Error: Unknown opcode `static inline bool arch_has_pfn_modify_check(void)'
    include/asm-generic/pgtable.h:545: Error: Unknown opcode `return false'
    arch/ia64/kernel/entry.S:69: Error: `mov' does not fit into bundle

Move those two static inlines into the !__ASSEMBLY__ section so that they
don't confuse the asm build pass.

Fixes: 42e4089c7890 ("x86/speculation/l1tf: Disallow non privileged high MMIO PROT_NONE mappings")
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[groeck: Context changes]
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
2018-08-15 17:42:11 +02:00
Vlastimil Babka
4b90ff885c x86/init: fix build with CONFIG_SWAP=n
commit 792adb90fa724ce07c0171cbc96b9215af4b1045 upstream.

The introduction of generic_max_swapfile_size and arch-specific versions has
broken linking on x86 with CONFIG_SWAP=n due to undefined reference to
'generic_max_swapfile_size'. Fix it by compiling the x86-specific
max_swapfile_size() only with CONFIG_SWAP=y.

Reported-by: Tomas Pruzina <pruzinat@gmail.com>
Fixes: 377eeaa8e11f ("x86/speculation/l1tf: Limit swap file size to MAX_PA/2")
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:11 +02:00
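
A hedged sketch of the shape of the fix, assuming the x86 override is
named max_swapfile_size() as in the limit patch below:

    #ifdef CONFIG_SWAP  /* added guard: the generic symbol only exists with swap */
    unsigned long max_swapfile_size(void)
    {
            /* x86 clamps the generic limit for L1TF; see the
             * "Limit swap file size to MAX_PA/2" patch below. */
            return generic_max_swapfile_size();
    }
    #endif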
Guenter Roeck
eb993211b9 x86/speculation/l1tf: Fix up CPU feature flags
In linux-4.4.y, the definition of X86_FEATURE_RETPOLINE and
X86_FEATURE_RETPOLINE_AMD is different from the upstream
definition. Result is an overlap with the newly introduced
X86_FEATURE_L1TF_PTEINV. Update RETPOLINE definitions to match
upstream definitions to improve alignment with upstream code.

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:11 +02:00
Andi Kleen
6b06f36f07 x86/mm/kmmio: Make the tracer robust against L1TF
commit 1063711b57393c1999248cccb57bebfaf16739e7 upstream

The mmio tracer sets io mapping PTEs and PMDs to non present when enabled
without inverting the address bits, which leaves the PTE entry vulnerable
to L1TF.

Make it use the right low level macros to actually invert the address bits
to protect against L1TF.

In principle this could be avoided because MMIO tracing is not likely to be
enabled on production machines, but the fix is straightforward and, for
consistency's sake, it's better to get rid of the open coded PTE manipulation.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:11 +02:00
Andi Kleen
02ff2769ed x86/mm/pat: Make set_memory_np() L1TF safe
commit 958f79b9ee55dfaf00c8106ed1c22a2919e0028b upstream

set_memory_np() is used to mark kernel mappings not present, but it has
its own open coded mechanism which does not have the L1TF protection of
inverting the address bits.

Replace the open coded PTE manipulation with the L1TF protecting low level
PTE routines.

Passes the CPA self test.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[ dwmw2: Pull in pud_mkhuge() from commit a00cc7d9dd, and pfn_pud() ]
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
[groeck: port to 4.4]
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:11 +02:00
Andi Kleen
9feecdb6cb x86/speculation/l1tf: Make pmd/pud_mknotpresent() invert
commit 0768f91530ff46683e0b372df14fd79fe8d156e5 upstream

Some cases in THP like:
  - MADV_FREE
  - mprotect
  - split

temporarily mark the PMD non present to prevent races. The window for
an L1TF attack in these contexts is very small, but it should be fixed
for correctness' sake.

Use the proper low level functions for pmd/pud_mknotpresent() to address
this.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:10 +02:00
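
The change boils down to routing the helper through pfn_pmd(), which
applies the L1TF inversion; a hedged sketch modeled on the upstream diff:

    static inline pmd_t pmd_mknotpresent(pmd_t pmd)
    {
            /* Rebuild the pmd via pfn_pmd() so the PFN gets the L1TF
             * inversion applied, instead of clearing bits in place on
             * the raw value. */
            return pfn_pmd(pmd_pfn(pmd),
                           __pgprot(pmd_flags(pmd) & ~(_PAGE_PRESENT | _PAGE_PROTNONE)));
    }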
Andi Kleen
0aae5fe841 x86/speculation/l1tf: Invert all not present mappings
commit f22cc87f6c1f771b57c407555cfefd811cdd9507 upstream

For kernel mappings PAGE_PROTNONE is not necessarily set for a non present
mapping, but the inversion logic explicitly checks for !PRESENT and
PROT_NONE.

Remove the PROT_NONE check and make the inversion unconditional for all not
present mappings.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:10 +02:00
Michal Hocko
09049f022a x86/speculation/l1tf: Fix up pte->pfn conversion for PAE
commit e14d7dfb41f5807a0c1c26a13f2b8ef16af24935 upstream

Jan has noticed that pte_pfn and co., and pfn_pte respectively, are
incorrect for CONFIG_PAE because phys_addr_t is wider than unsigned long,
and so the pte_val result, resp. the shift left, would get truncated. Fix
this up by using proper types.

[dwmw2: Backport to 4.9]

Fixes: 6b28baca9b1f ("x86/speculation/l1tf: Protect PROT_NONE PTEs against speculation")
Reported-by: Jan Beulich <JBeulich@suse.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:10 +02:00
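
A hedged sketch of the type fix, modeled on the upstream diff
(protnone_mask() is the inversion helper from the PROT_NONE patch below):

    static inline unsigned long pte_pfn(pte_t pte)
    {
            phys_addr_t pfn = pte_val(pte);  /* was: unsigned long, truncates on PAE */

            pfn ^= protnone_mask(pte_val(pte));  /* undo the L1TF inversion if set */
            return (unsigned long)((pfn & PTE_PFN_MASK) >> PAGE_SHIFT);
    }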
Vlastimil Babka
b55b06bd3b x86/speculation/l1tf: Protect PAE swap entries against L1TF
commit 0d0f6249058834ffe1ceaad0bb31464af66f6e7a upstream

The PAE 3-level paging code currently doesn't mitigate L1TF by flipping the
offset bits, and uses the high PTE word, thus bits 32-36 for type, 37-63 for
offset. The lower word is zeroed, thus systems with less than 4GB memory are
safe. With 4GB to 128GB the swap type selects the memory locations vulnerable
to L1TF; with even more memory, the swap offset also influences the address.
This might be a problem with 32bit PAE guests running on large 64bit hosts.

By continuing to keep the whole swap entry in either high or low 32bit word of
PTE we would limit the swap size too much. Thus this patch uses the whole PAE
PTE with the same layout as the 64bit version does. The macros just become a
bit tricky since they assume the arch-dependent swp_entry_t to be 32bit.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:10 +02:00
Konrad Rzeszutek Wilk
dc48c1a2f4 x86/cpufeatures: Add detection of L1D cache flush support.
commit 11e34e64e4103955fc4568750914c75d65ea87ee upstream

336996-Speculative-Execution-Side-Channel-Mitigations.pdf defines a new MSR
(IA32_FLUSH_CMD) which is detected by CPUID.7.EDX[28]=1 bit being set.

This new MSR "gives software a way to invalidate structures with finer
granularity than other architectual methods like WBINVD."

A copy of this document is available at
  https://bugzilla.kernel.org/show_bug.cgi?id=199511

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:10 +02:00
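
A hedged sketch of the detection; in the kernel this becomes a
cpufeatures.h bit rather than a raw CPUID query:

    #define L1D_FLUSH_EDX_BIT 28   /* CPUID.(EAX=7,ECX=0):EDX[28] */

    static bool cpu_has_l1d_flush(void)
    {
            unsigned int eax, ebx, ecx, edx;

            cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
            return edx & (1u << L1D_FLUSH_EDX_BIT);
    }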
Vlastimil Babka
df7fd6ccb3 x86/speculation/l1tf: Extend 64bit swap file size limit
commit 1a7ed1ba4bba6c075d5ad61bb75e3fbc870840d6 upstream

The previous patch has limited swap file size so that large offsets cannot
clear bits above MAX_PA/2 in the pte and interfere with L1TF mitigation.

It assumed that offsets are encoded starting with bit 12, same as pfn. But
on x86_64, offsets are encoded starting with bit 9.

Thus the limit can be raised by 3 bits. That means 16TB with 42bit MAX_PA
and 256TB with 46bit MAX_PA.

Fixes: 377eeaa8e11f ("x86/speculation/l1tf: Limit swap file size to MAX_PA/2")
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:10 +02:00
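
The gained headroom, as a hedged sketch; the helper name is hypothetical
and SWP_OFFSET_FIRST_BIT comes from the x86_64 swap PTE layout patches
below:

    /* Swap offsets are encoded from bit 9, not bit 12 like a pfn, so
     * the pfn-derived L1TF limit may be shifted up by 3 bits:
     *   42-bit MAX_PA: 2^29 pages (2 TB)  -> 2^32 pages (16 TB)
     *   46-bit MAX_PA: 2^33 pages (32 TB) -> 2^36 pages (256 TB)
     */
    static unsigned long long l1tf_swap_limit(unsigned long long pfn_limit)
    {
            return pfn_limit << (PAGE_SHIFT - SWP_OFFSET_FIRST_BIT); /* 12 - 9 */
    }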
Konrad Rzeszutek Wilk
fa86c208d2 x86/bugs: Move the l1tf function and define pr_fmt properly
commit 56563f53d3066afa9e63d6c997bf67e76a8b05c0 upstream

The pr_warn in l1tf_select_mitigation would have used the prior pr_fmt
which was defined as "Spectre V2 : ".

Move the function to be past SSBD and also define the pr_fmt.

Fixes: 17dbca119312 ("x86/speculation/l1tf: Add sysfs reporting for l1tf")
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:10 +02:00
Andi Kleen
685b44483f x86/speculation/l1tf: Limit swap file size to MAX_PA/2
commit 377eeaa8e11fe815b1d07c81c4a0e2843a8c15eb upstream

For the L1TF workaround it's necessary to limit the swap file size to below
MAX_PA/2, so that the inverted higher bits of the swap offset never point
to valid memory.

Add a mechanism for the architecture to override the swap file size check
in swapfile.c and add an x86-specific max swapfile check function that
enforces that limit.

The check is only enabled if the CPU is vulnerable to L1TF.

In VMs with 42bit MAX_PA the typical limit is 2TB now, on a native system
with 46bit PA it is 32TB. The limit is only per individual swap file, so
it's always possible to exceed these limits with multiple swap files or
partitions.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:10 +02:00
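
A hedged sketch of both halves described above: an overridable limit and
the x86 clamp at MAX_PA/2 (close to, but not verbatim, the upstream code):

    /* x86 side (sketch): the highest pfn below MAX_PA/2. With a 42-bit
     * MAX_PA this is 2^29 pages, i.e. a 2 TB per-swapfile limit. */
    static unsigned long long l1tf_pfn_limit(void)
    {
            return (1ULL << (boot_cpu_data.x86_phys_bits - 1 - PAGE_SHIFT)) - 1;
    }

    unsigned long max_swapfile_size(void)
    {
            unsigned long pages = generic_max_swapfile_size();

            if (boot_cpu_has_bug(X86_BUG_L1TF))  /* only clamp vulnerable CPUs */
                    pages = min_t(unsigned long, l1tf_pfn_limit() + 1, pages);
            return pages;
    }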
Andi Kleen
d71af2dbac x86/speculation/l1tf: Disallow non privileged high MMIO PROT_NONE mappings
commit 42e4089c7890725fcd329999252dc489b72f2921 upstream

For L1TF, PROT_NONE mappings are protected by inverting the PFN in the page
table entry. This sets the high bits in the CPU's address space, thus
making sure an unmapped entry does not point to valid cached memory.

Some server system BIOSes put the MMIO mappings high up in the physical
address space. If such a high mapping was mapped to unprivileged users,
they could attack low memory by setting such a mapping to PROT_NONE. This
could happen through a special device driver which is not access
protected. Normal /dev/mem is of course access protected.

To avoid this forbid PROT_NONE mappings or mprotect for high MMIO mappings.

Valid page mappings are allowed because the system is then unsafe anyways.

It's not expected that users commonly use PROT_NONE on MMIO. But to
minimize any impact this is only enforced if the mapping actually refers to
a high MMIO address (defined as the MAX_PA-1 bit being set), and also skip
the check for root.

For mmaps this is straightforward and can be handled in vm_insert_pfn and
in remap_pfn_range().

For mprotect it's a bit trickier. At the point where the actual PTEs are
accessed, a lot of state has been changed and it would be difficult to undo
on an error. Since this is an uncommon case, use a separate early page
table walk pass for MMIO PROT_NONE mappings that checks for this condition
early. For non MMIO and non PROT_NONE there are no changes.

[dwmw2: Backport to 4.9]
[groeck: Backport to 4.4]

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:10 +02:00
Dan Williams
9ac0dc7d94 mm: fix cache mode tracking in vm_insert_mixed()
commit 87744ab3832b83ba71b931f86f9cfdb000d07da5 upstream

vm_insert_mixed() unlike vm_insert_pfn_prot() and vmf_insert_pfn_pmd(),
fails to check the pgprot_t it uses for the mapping against the one
recorded in the memtype tracking tree.  Add the missing call to
track_pfn_insert() to preclude cases where incompatible aliased mappings
are established for a given physical address range.

[groeck: Backport to v4.4.y]

Link: http://lkml.kernel.org/r/147328717909.35069.14256589123570653697.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:10 +02:00
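
A hedged sketch of the one-line nature of the fix; the pfn_valid and
refcount handling of the real vm_insert_mixed() is elided:

    int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
                        unsigned long pfn)
    {
            pgprot_t pgprot = vma->vm_page_prot;

            if (addr < vma->vm_start || addr >= vma->vm_end)
                    return -EFAULT;

            /* Added: consult the PAT memtype tree so the cache mode
             * matches what was recorded for this pfn range, as
             * vm_insert_pfn_prot() already does. */
            track_pfn_insert(vma, &pgprot, pfn);

            return insert_pfn(vma, addr, pfn, pgprot);
    }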
Andy Lutomirski
0371d9c4c8 mm: Add vm_insert_pfn_prot()
commit 1745cbc5d0dee0749a6bc0ea8e872c5db0074061 upstream

The x86 vvar vma contains pages with differing cacheability
flags.  x86 currently implements this by manually inserting all
the ptes using (io_)remap_pfn_range when the vma is set up.

x86 wants to move to using .fault with VM_FAULT_NOPAGE to set up
the mappings as needed.  The correct API to use to insert a pfn
in .fault is vm_insert_pfn(), but vm_insert_pfn() can't override the
vma's cache mode, and the HPET page in particular needs to be
uncached despite the fact that the rest of the VMA is cached.

Add vm_insert_pfn_prot() to support varying cacheability within
the same non-COW VMA in a more sane manner.

x86 could alternatively use multiple VMAs, but that's messy,
would break CRIU, and would create unnecessary VMAs that would
waste memory.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/d2938d1eb37be7a5e4f86182db646551f11e45aa.1451446564.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:09 +02:00
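
A hedged usage sketch of the new API from a .fault handler, following the
vvar/HPET example above; the page index and pfn source are hypothetical:

    static unsigned long hpet_pfn;    /* hypothetical: pfn of the uncached HPET page */

    static int vvar_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
    {
            unsigned long addr = (unsigned long)vmf->virtual_address;

            if (vmf->pgoff == 0) {    /* hypothetical: the HPET slot */
                    /* Override the VMA's (cached) protection just for
                     * this one pfn. */
                    if (vm_insert_pfn_prot(vma, addr, hpet_pfn,
                                           pgprot_noncached(vma->vm_page_prot)))
                            return VM_FAULT_SIGBUS;
                    return VM_FAULT_NOPAGE;
            }
            return VM_FAULT_SIGBUS;
    }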
Andi Kleen
bf0cca01b8 x86/speculation/l1tf: Add sysfs reporting for l1tf
commit 17dbca119312b4e8173d4e25ff64262119fcef38 upstream

L1TF core kernel workarounds are cheap and normally always enabled. However,
they should still be reported in sysfs if the system is vulnerable or
mitigated. Add the necessary CPU feature/bug bits.

- Extend the existing checks for Meltdowns to determine if the system is
  vulnerable. All CPUs which are not vulnerable to Meltdown are also not
  vulnerable to L1TF

- Check for 32bit non PAE and emit a warning as there is no practical way
  for mitigation due to the limited physical address bits

- If the system has more than MAX_PA/2 physical memory the invert page
  workarounds don't protect the system against the L1TF attack anymore,
  because an inverted physical address will also point to valid
  memory. Print a warning in this case and report that the system is
  vulnerable.

Add a function which returns the PFN limit for the L1TF mitigation, which
will be used in follow up patches for sanity and range checks.

[ tglx: Renamed the CPU feature bit to L1TF_PTEINV ]
[ dwmw2: Backport to 4.9 (cpufeatures.h, E820) ]

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:09 +02:00
Andi Kleen
52dc5c9f8e x86/speculation/l1tf: Make sure the first page is always reserved
commit 10a70416e1f067f6c4efda6ffd8ea96002ac4223 upstream

The L1TF workaround doesn't make any attempt to mitigate speculative
accesses to the first physical page for zeroed PTEs. Normally it only
contains some data from the early real mode BIOS.

It's not entirely clear that the first page is reserved in all
configurations, so add an extra reservation call to make sure it is really
reserved. In most configurations (e.g.  with the standard reservations)
it's likely a nop.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:09 +02:00
Andi Kleen
9ee2d2da67 x86/speculation/l1tf: Protect PROT_NONE PTEs against speculation
commit 6b28baca9b1f0d4a42b865da7a05b1c81424bd5c upstream

When PTEs are set to PROT_NONE the kernel just clears the Present bit and
preserves the PFN, which creates attack surface for L1TF speculation
attacks.

This is important inside guests, because L1TF speculation bypasses physical
page remapping. While the host has its own mitigations preventing leaking
data from other VMs into the guest, this would still risk leaking the wrong
page inside the current guest.

This uses the same technique as Linus' swap entry patch: while an entry
is in PROTNONE state, invert the complete PFN part of it. This ensures
that the highest bit will point to non-existent memory.

The invert is done by pte/pmd_modify and pfn/pmd/pud_pte for PROTNONE and
pte/pmd/pud_pfn undo it.

This assumes that no code path touches the PFN part of a PTE directly
without using these primitives.

This doesn't handle the case where MMIO is at the top of the CPU physical
memory. If such an MMIO region was exposed by an unprivileged driver for
mmap, it would be possible to attack some real memory. However this
situation is rather unlikely.

For 32bit non PAE the inversion is not done because there are really not
enough bits to protect anything.

Q: Why does the guest need to be protected when the HyperVisor already has
   L1TF mitigations?

A: Here's an example:

   Physical pages 1 2 get mapped into a guest as
   GPA 1 -> PA 2
   GPA 2 -> PA 1
   through EPT.

   The L1TF speculation ignores the EPT remapping.

   Now the guest kernel maps GPA 1 to process A and GPA 2 to process B, and
   they belong to different users and should be isolated.

   A sets the GPA 1 PA 2 PTE to PROT_NONE to bypass the EPT remapping and
   gets read access to the underlying physical page. Which in this case
   points to PA 2, so it can read process B's data, if it happened to be in
   L1, so isolation inside the guest is broken.

   There's nothing the hypervisor can do about this. This mitigation has to
   be done in the guest itself.

[ tglx: Massaged changelog ]
[ dwmw2: backported to 4.9 ]

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:09 +02:00
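
The mechanics reduce to two small helpers used by the pte/pmd conversion
paths; a hedged sketch modeled on the upstream pgtable-invert.h (the
!PRESENT form reflects the "Invert all not present mappings" patch above):

    static inline bool __pte_needs_invert(u64 val)
    {
            /* Any not-present entry gets its PFN bits inverted */
            return !(val & _PAGE_PRESENT);
    }

    /* XOR mask applied over the PFN bits when building/reading a pte */
    static inline u64 protnone_mask(u64 val)
    {
            return __pte_needs_invert(val) ? ~0ull : 0;
    }

    static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask)
    {
            /* Re-invert on transitions between present and not-present
             * so pte_modify() preserves the real PFN. */
            if (__pte_needs_invert(oldval) != __pte_needs_invert(val))
                    val = (val & ~mask) | (~val & mask);
            return val;
    }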
Linus Torvalds
9bbdab847f x86/speculation/l1tf: Protect swap entries against L1TF
commit 2f22b4cd45b67b3496f4aa4c7180a1271c6452f6 upstream

With L1 terminal fault the CPU speculates into unmapped PTEs, and the
resulting side effects allow reading the memory the PTE is pointing to,
if its value is still in the L1 cache.

For swapped out pages Linux uses unmapped PTEs and stores a swap entry into
them.

To protect against L1TF it must be ensured that the swap entry is not
pointing to valid memory, which requires setting higher bits (between bit
36 and bit 45) that are inside the CPUs physical address space, but outside
any real memory.

To do this invert the offset to make sure the higher bits are always set,
as long as the swap file is not too big.

Note there is no workaround for 32bit !PAE, or on systems which have more
than MAX_PA/2 worth of memory. The latter case is very unlikely to happen
on real systems.

[ AK: Updated description and minor tweaks. Split out from the original
     patch ]

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:09 +02:00
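
A hedged sketch of the resulting x86_64 swap macros (close to, but not
verbatim, the upstream pgtable_64.h):

    #define SWP_TYPE_BITS        5
    #define SWP_OFFSET_FIRST_BIT (_PAGE_BIT_PROTNONE + 1)
    /* Shift the offset all the way up and back down again so it is
     * stored ones-complemented, with the high bits guaranteed set. */
    #define SWP_OFFSET_SHIFT     (SWP_OFFSET_FIRST_BIT + SWP_TYPE_BITS)

    #define __swp_type(x)   ((x).val >> (64 - SWP_TYPE_BITS))
    #define __swp_offset(x) (~(x).val << SWP_TYPE_BITS >> SWP_OFFSET_SHIFT)
    #define __swp_entry(type, offset) ((swp_entry_t) { \
            (~(unsigned long)(offset) << SWP_OFFSET_SHIFT >> SWP_TYPE_BITS) \
            | ((unsigned long)(type) << (64 - SWP_TYPE_BITS)) })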
Linus Torvalds
614f5e8464 x86/speculation/l1tf: Change order of offset/type in swap entry
commit bcd11afa7adad8d720e7ba5ef58bdcd9775cf45f upstream

If pages are swapped out, the swap entry is stored in the corresponding
PTE, which has the Present bit cleared. CPUs vulnerable to L1TF speculate
on PTE entries which have the present bit set and would treat the swap
entry as a physical address (PFN). To mitigate that, the upper bits of the
PTE must be set so the PTE points to non existent memory.

The swap entry stores the type and the offset of a swapped out page in the
PTE. The type is stored in bits 9-13 and the offset in bits 14-63. The
hardware ignores the bits beyond the physical address space limit, so to
make the mitigation effective it's required to start 'offset' at the
lowest possible bit so that even large swap offsets do not reach into the
physical address space limit bits.

Move offset to bit 9-58 and type to bit 59-63 which are the bits that
hardware generally doesn't care about.

That, in turn, means that if you are on a desktop chip with only 40 bits
of physical addressing, now that the offset starts at bit 9, there needs
to be 30 bits of offset actually *in use* until bit 39 ends up being set,
which means when inverted it will again point into existing memory.

So that's 4 terabyte of swap space (because the offset is counted in pages,
so 30 bits of offset is 42 bits of actual coverage). With bigger physical
addressing, that obviously grows further, until the limit of the offset is
hit (at 50 bits of offset - 62 bits of actual swap file coverage).

This is a preparatory change for the actual swap entry inversion to protect
against L1TF.

[ AK: Updated description and minor tweaks. Split into two parts ]
[ tglx: Massaged changelog ]

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:07 +02:00
Naoya Horiguchi
86b0948d7c mm: x86: move _PAGE_SWP_SOFT_DIRTY from bit 7 to bit 1
commit eee4818baac0f2b37848fdf90e4b16430dc536ac upstream

_PAGE_PSE is used to distinguish between a truly non-present
(_PAGE_PRESENT=0) PMD, and a PMD which is undergoing a THP split and
should be treated as present.

But _PAGE_SWP_SOFT_DIRTY currently uses the _PAGE_PSE bit, which would
cause confusion between one of those PMDs undergoing a THP split, and a
soft-dirty PMD.  Dropping the _PAGE_PSE check in pmd_present() does not
work well, because it can hurt the optimization of TLB handling in THP
split.

Thus, we need to move the bit.

In the current kernel, bits 1-4 are not used in non-present format since
commit 00839ee3b299 ("x86/mm: Move swap offset/type up in PTE to work
around erratum").  So let's move _PAGE_SWP_SOFT_DIRTY to bit 1.  Bit 7
is used as reserved (always clear), so please don't use it for other
purposes.

[dwmw2: Pulled in to 4.9 backport to support L1TF changes]

Link: http://lkml.kernel.org/r/20170717193955.20207-3-zi.yan@sent.com
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Zi Yan <zi.yan@cs.rutgers.edu>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: David Nellans <dnellans@nvidia.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:07 +02:00
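
In pgtable_types.h terms the move is a one-liner; a hedged sketch:

    /* Soft-dirty marker for swap ptes: bit 1 (_PAGE_RW's position),
     * unused in the non-present format, instead of bit 7 (_PAGE_PSE),
     * which THP-split pmds must keep meaning "PSE". */
    #ifdef CONFIG_MEM_SOFT_DIRTY
    #define _PAGE_SWP_SOFT_DIRTY  _PAGE_RW     /* was: _PAGE_PSE */
    #else
    #define _PAGE_SWP_SOFT_DIRTY  (_AT(pteval_t, 0))
    #endif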
Dave Hansen
f487cf69cf x86/mm: Fix swap entry comment and macro
commit ace7fab7a6cdd363a615ec537f2aa94dbc761ee2 upstream

A recent patch changed the format of a swap PTE.

The comment explaining the format of the swap PTE is wrong about
the bits used for the swap type field.  Amusingly, the ASCII art
and the patch description are correct, but the comment itself
is wrong.

As I was looking at this, I also noticed that the
SWP_OFFSET_FIRST_BIT has an off-by-one error.  This does not
really hurt anything.  It just wasted a bit of space in the PTE,
giving us 2^59 bytes of addressable space in our swapfiles
instead of 2^60.  But it doesn't match the comments, and it
wastes a bit of space, so fix it.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave@sr71.net>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luis R. Rodriguez <mcgrof@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Toshi Kani <toshi.kani@hp.com>
Fixes: 00839ee3b299 ("x86/mm: Move swap offset/type up in PTE to work around erratum")
Link: http://lkml.kernel.org/r/20160810172325.E56AD7DA@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:07 +02:00
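
The macro fix itself, as a hedged sketch of the upstream diff:

    /* The first offset bit sits immediately above the PROTNONE bit;
     * "+ 2" skipped a usable bit and halved the addressable swap. */
    #define SWP_OFFSET_FIRST_BIT  (_PAGE_BIT_PROTNONE + 1)   /* was: + 2 */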
Dave Hansen
0a5deacaac x86/mm: Move swap offset/type up in PTE to work around erratum
commit 00839ee3b299303c6a5e26a0a2485427a3afcbbf upstream

This erratum can result in Accessed/Dirty getting set by the hardware
when we do not expect them to be (on !Present PTEs).

Instead of trying to fix them up after this happens, we just
allow the bits to get set and try to ignore them.  We do this by
shifting the layout of the bits we use for swap offset/type in
our 64-bit PTEs.

It looks like this:

 bitnrs: |     ...            | 11| 10|  9|8|7|6|5| 4| 3|2|1|0|
 names:  |     ...            |SW3|SW2|SW1|G|L|D|A|CD|WT|U|W|P|
 before: |         OFFSET (9-63)          |0|X|X| TYPE(1-5) |0|
  after: | OFFSET (14-63)  |  TYPE (9-13) |0|X|X|X| X| X|X|X|0|

Note that D was already a don't care (X) even before.  We just
move TYPE up and turn its old spot (which could be hit by the
A bit) into all don't cares.

We take 5 bits away from the offset, but that still leaves us
with 50 bits which lets us index into a 62-bit swapfile (4 EiB).
I think that's probably fine for the moment.  We could
theoretically reclaim 5 of the bits (1, 2, 3, 4, 7) but it
doesn't gain us anything.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave@sr71.net>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luis R. Rodriguez <mcgrof@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Toshi Kani <toshi.kani@hp.com>
Cc: dave.hansen@intel.com
Cc: linux-mm@kvack.org
Cc: mhocko@suse.com
Link: http://lkml.kernel.org/r/20160708001911.9A3FD2B6@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-15 17:42:07 +02:00