When the capability QMI message fails, the ICNSS_MSA0_ASSIGNED
flag is not cleared. Because of this, after the next PDR/SSR the
driver does not reconfigure the MSA0 permission for the Q6, which
results in a NOC error since the Q6 has no access permission.
To address this issue, clear the ICNSS_MSA0_ASSIGNED bit in the
failure case.
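A minimal sketch of the failure path; icnss_send_cap_msg() and the priv->state field are illustrative names, not the driver's exact API, only the ICNSS_MSA0_ASSIGNED bit is named by this change:
ret = icnss_send_cap_msg(priv);         /* capability QMI message */
if (ret < 0) {
        /* previously the flag stayed set here, breaking the next PDR/SSR */
        clear_bit(ICNSS_MSA0_ASSIGNED, &priv->state);
        return ret;
}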
CRs-Fixed: 2300877
Change-Id: I6aeaedb5a394b843c4f1c8ef1e0be47a6947b331
Signed-off-by: Hardik Kantilal Patel <hkpatel@codeaurora.org>
Currently a user regulatory hint is ignored if all wiphys
in the system are self-managed, but it is not ignored if there
is no wiphy in the system at all. This affects the global
regulatory setting. The global regulatory setting needs to be
maintained so that it can be applied to a new wiphy entering
the system. Therefore, do not ignore the user regulatory hint
even if all wiphys in the system are self-managed.
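A sketch of the intended decision (the helper name is illustrative; the rdev list and REGULATORY_WIPHY_SELF_MANAGED flag are the existing cfg80211 structures):
static bool reg_ignore_user_hint(void)
{
        struct cfg80211_registered_device *rdev;
        bool found_wiphy = false;

        list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
                found_wiphy = true;
                if (!(rdev->wiphy.regulatory_flags & REGULATORY_WIPHY_SELF_MANAGED))
                        return false;   /* at least one normally managed wiphy */
        }
        /* ignore only when wiphys exist and every one is self-managed */
        return found_wiphy;
}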
Signed-off-by: Amar Singhal <asinghal@codeaurora.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Change-Id: I468fcd3403259b03369e011fa41b003e8ff33d3c
CRs-Fixed: 2276224
Git-commit: e31f6456c01c76f154e1b25cd54df97809a49edb
Git-repo: https://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git
Signed-off-by: Amar Singhal <asinghal@codeaurora.org>
* refs/heads/tmp-f057ff9
Linux 4.4.148
x86/speculation/l1tf: Unbreak !__HAVE_ARCH_PFN_MODIFY_ALLOWED architectures
x86/init: fix build with CONFIG_SWAP=n
x86/speculation/l1tf: Fix up CPU feature flags
x86/mm/kmmio: Make the tracer robust against L1TF
x86/mm/pat: Make set_memory_np() L1TF safe
x86/speculation/l1tf: Make pmd/pud_mknotpresent() invert
x86/speculation/l1tf: Invert all not present mappings
x86/speculation/l1tf: Fix up pte->pfn conversion for PAE
x86/speculation/l1tf: Protect PAE swap entries against L1TF
x86/cpufeatures: Add detection of L1D cache flush support.
x86/speculation/l1tf: Extend 64bit swap file size limit
x86/bugs: Move the l1tf function and define pr_fmt properly
x86/speculation/l1tf: Limit swap file size to MAX_PA/2
x86/speculation/l1tf: Disallow non privileged high MMIO PROT_NONE mappings
mm: fix cache mode tracking in vm_insert_mixed()
mm: Add vm_insert_pfn_prot()
x86/speculation/l1tf: Add sysfs reporting for l1tf
x86/speculation/l1tf: Make sure the first page is always reserved
x86/speculation/l1tf: Protect PROT_NONE PTEs against speculation
x86/speculation/l1tf: Protect swap entries against L1TF
x86/speculation/l1tf: Change order of offset/type in swap entry
mm: x86: move _PAGE_SWP_SOFT_DIRTY from bit 7 to bit 1
x86/mm: Fix swap entry comment and macro
x86/mm: Move swap offset/type up in PTE to work around erratum
x86/speculation/l1tf: Increase 32bit PAE __PHYSICAL_PAGE_SHIFT
x86/irqflags: Provide a declaration for native_save_fl
kprobes/x86: Fix %p uses in error messages
x86/speculation: Protect against userspace-userspace spectreRSB
x86/paravirt: Fix spectre-v2 mitigations for paravirt guests
ARM: dts: imx6sx: fix irq for pcie bridge
IB/ocrdma: fix out of bounds access to local buffer
IB/mlx4: Mark user MR as writable if actual virtual memory is writable
IB/core: Make testing MR flags for writability a static inline function
fix __legitimize_mnt()/mntput() race
fix mntput/mntput race
root dentries need RCU-delayed freeing
scsi: sr: Avoid that opening a CD-ROM hangs with runtime power management enabled
ACPI / LPSS: Add missing prv_offset setting for byt/cht PWM devices
xen/netfront: don't cache skb_shinfo()
parisc: Define mb() and add memory barriers to assembler unlock sequences
parisc: Enable CONFIG_MLONGCALLS by default
fork: unconditionally clear stack on fork
ipv4+ipv6: Make INET*_ESP select CRYPTO_ECHAINIV
tpm: fix race condition in tpm_common_write()
ext4: fix check to prevent initializing reserved inodes
Linux 4.4.147
jfs: Fix inconsistency between memory allocation and ea_buf->max_size
i2c: imx: Fix reinit_completion() use
ring_buffer: tracing: Inherit the tracing setting to next ring buffer
ACPI / PCI: Bail early in acpi_pci_add_bus() if there is no ACPI handle
ext4: fix false negatives *and* false positives in ext4_check_descriptors()
netlink: Don't shift on 64 for ngroups
netlink: Don't shift with UB on nlk->ngroups
netlink: Do not subscribe to non-existent groups
nohz: Fix local_timer_softirq_pending()
genirq: Make force irq threading setup more robust
scsi: qla2xxx: Return error when TMF returns
scsi: qla2xxx: Fix ISP recovery on unload
Conflicts:
include/linux/swapfile.h
Removed CONFIG_CRYPTO_ECHAINIV from the defconfig files since this upmerge
adds this config to the Kconfig file.
Change-Id: Ide96c29f919d76590c2bdccf356d1d464a892fd7
Signed-off-by: Srinivasarao P <spathi@codeaurora.org>
A warning about "initialization from incompatible pointer type"
is seen at build time, and it is good to fix it.
Change-Id: Iaf820ae7ec4a7851185febbdebaaab3706fb2402
Signed-off-by: Yong Ding <yongding@codeaurora.org>
Currently, the EDID parser adds formats based on the
parsing of the Video Data Blocks and other CTA blocks.
However, there is no input validation against the
HDMI HFVSDB block to check whether a mode advertised
by the sink actually falls within the TMDS character rate limits.
Add this check to the EDID parser to make sure invalid
formats are not added to the list.
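A simplified sketch of the validation, assuming an 8bpc RGB signal so the TMDS character rate equals the pixel clock; the function and parameter names are illustrative, not the actual parser API:
static bool edid_mode_within_tmds_limit(const struct drm_display_mode *mode,
                                        u32 max_tmds_rate_khz)
{
        /* drm_display_mode::clock is in kHz */
        return (u32)mode->clock <= max_tmds_rate_khz;
}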
Change-Id: I9a8e8f023924421710cf27402be98150554d0271
Signed-off-by: Abhinav Kumar <abhinavk@codeaurora.org>
This adds support for saving the arm-smmu client's context
just before going into hibernation. This context is restored
on the subsequent hibernate restore.
Also, invalidate the TLB during the restore phase to avoid
wrong translations post-resume.
Change-Id: Idd8d12bb4d13f8a62bd51e0adaad82bd92f658ee
Signed-off-by: vkakani <vkakani@codeaurora.org>
Signed-off-by: Arun KS <arunks@codeaurora.org>
Signed-off-by: Atul Raut <araut@codeaurora.org>
Signed-off-by: Siddhartha Agrawal <agrawals@codeaurora.org>
Check that the cid number is less than MAX_CID in csid.
Change-Id: I16777dc8e8c72e01dc10490cd4c205c939adb7b5
Signed-off-by: Chunhuan Zhan <zhanc@codeaurora.org>
Signed-off-by: Rahul Sharma <rahsha@codeaurora.org>
The jpeg driver calls class_create() with a stack variable, which
can be overwritten by other stack variables.
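A sketch of the pattern being fixed (names are illustrative): class_create() stores the name pointer rather than copying the string, so the name must not live on the probe function's stack:
static char msm_jpeg_devname[32];       /* outlives probe(), unlike a stack buffer */

static int msm_jpeg_register_class(int idx)
{
        struct class *cls;

        snprintf(msm_jpeg_devname, sizeof(msm_jpeg_devname), "msm_jpeg%d", idx);
        cls = class_create(THIS_MODULE, msm_jpeg_devname);
        return PTR_ERR_OR_ZERO(cls);
}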
Change-Id: I92ccd4629cef8a06b7715b8483cf53a9607bd22f
Signed-off-by: Deepak Shankar <dees@codeaurora.org>
Signed-off-by: Rahul Sharma <rahsha@codeaurora.org>
This reverts commit 563b2f7a6b.
The previous approach of dynamically updating the writable
permission bits of the power/data_role attributes only works
if the userspace application has root permission since the
call to sysfs_update_group() removes and re-adds the files. If
they had previously been chown/chgrp'ed, the ownership would be
reset. On the other hand, if there was a ueventd rule to
dynamically update the ownership, then the mode would always
be overridden with the static umask given in the ueventd rule,
contradicting the driver's determination of writeability.
Hence, the more comprehensive fix should be done in userspace
to not rely solely on writeability. Still, this change needs
to be reverted since it can still cause a race between ueventd
and the userspace service trying to check writability.
Change-Id: Ic667a97f2bae41e5a86ee45565518b06db959b36
Signed-off-by: Jack Pham <jackp@codeaurora.org>
commit f19f5c49bbc3ffcc9126cc245fc1b24cc29f4a37 upstream.
It turns out that we should *not* invert all not-present mappings,
because the all zeroes case is obviously special.
clear_page() does not undergo the XOR logic to invert the address bits,
i.e. PTE, PMD and PUD entries that have not been individually written
will have val=0 and so will trigger __pte_needs_invert(). As a result,
{pte,pmd,pud}_pfn() will return the wrong PFN value, i.e. all ones
(adjusted by the max PFN mask) instead of zero. A zeroed entry is ok
because the page at physical address 0 is reserved early in boot
specifically to mitigate L1TF, so explicitly exempt them from the
inversion when reading the PFN.
Manifested as an unexpected mprotect(..., PROT_NONE) failure when called
on a VMA that has VM_PFNMAP and was mmap'd to as something other than
PROT_NONE but never used. mprotect() sends the PROT_NONE request down
prot_none_walk(), which walks the PTEs to check the PFNs.
prot_none_pte_entry() gets the bogus PFN from pte_pfn() and returns
-EACCES because it thinks mprotect() is trying to adjust a high MMIO
address.
[ This is a very modified version of Sean's original patch, but all
credit goes to Sean for doing this and also pointing out that
sometimes the __pte_needs_invert() function only gets the protection
bits, not the full eventual pte. But zero remains special even in
just protection bits, so that's ok. - Linus ]
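Upstream, the resulting check in arch/x86/include/asm/pgtable-invert.h ends up roughly as:
static inline bool __pte_needs_invert(u64 val)
{
        /* a fully zeroed (never written) entry must not be inverted */
        return val && !(val & _PAGE_PRESENT);
}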
Fixes: f22cc87f6c1f ("x86/speculation/l1tf: Invert all not present mappings")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Acked-by: Andi Kleen <ak@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5e0fb5df2ee871b841f96f9cb6a7f2784e96aa4e upstream.
ioremap() calls pud_free_pmd_page() / pmd_free_pte_page() when it creates
a pud / pmd map. The following preconditions are met at their entry.
- All pte entries for a target pud/pmd address range have been cleared.
- System-wide TLB purges have been performed for a target pud/pmd address
range.
The preconditions assure that there is no stale TLB entry for the range.
Speculation may not cache TLB entries since it requires all levels of page
entries, including ptes, to have P & A-bits set for an associated address.
However, speculation may cache pud/pmd entries (paging-structure caches)
when they have P-bit set.
Add a system-wide TLB purge (INVLPG) to a single page after clearing
pud/pmd entry's P-bit.
SDM 4.10.4.1, Operations that Invalidate TLBs and Paging-Structure Caches,
states that:
INVLPG invalidates all paging-structure caches associated with the
current PCID regardless of the linear addresses to which they correspond.
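The pmd-level helper ends up roughly as below (the pud-level variant is analogous); on x86, flush_tlb_kernel_range() on a single page boils down to INVLPG:
int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
{
        pte_t *pte;

        if (pmd_none(*pmd))
                return 1;

        pte = (pte_t *)pmd_page_vaddr(*pmd);
        pmd_clear(pmd);

        /* INVLPG the page so the paging-structure caches are purged too */
        flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

        free_page((unsigned long)pte);

        return 1;
}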
Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: mhocko@suse.com
Cc: akpm@linux-foundation.org
Cc: hpa@zytor.com
Cc: cpandya@codeaurora.org
Cc: linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: Joerg Roedel <joro@8bytes.org>
Cc: stable@vger.kernel.org
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20180627141348.21777-4-toshi.kani@hpe.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 785a19f9d1dd8a4ab2d0633be4656653bd3de1fc upstream.
The following kernel panic was observed on ARM64 platform due to a stale
TLB entry.
1. ioremap with 4K size, a valid pte page table is set.
2. iounmap it, its pte entry is set to 0.
3. ioremap the same address with 2M size, update its pmd entry with
a new value.
4. CPU may hit an exception because the old pmd entry is still in TLB,
which leads to a kernel panic.
Commit b6bdb7517c3d ("mm/vmalloc: add interfaces to free unmapped page
table") has addressed this panic by falling to pte mappings in the above
case on ARM64.
To support pmd mappings in all cases, TLB purge needs to be performed
in this case on ARM64.
Add a new arg, 'addr', to pud_free_pmd_page() and pmd_free_pte_page()
so that TLB purge can be added later in separate patches.
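The updated prototypes, as described above:
int pud_free_pmd_page(pud_t *pud, unsigned long addr);
int pmd_free_pte_page(pmd_t *pmd, unsigned long addr);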
[toshi.kani@hpe.com: merge changes, rewrite patch description]
Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: mhocko@suse.com
Cc: akpm@linux-foundation.org
Cc: hpa@zytor.com
Cc: linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: Will Deacon <will.deacon@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: stable@vger.kernel.org
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20180627141348.21777-3-toshi.kani@hpe.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7992c18810e568b95c869b227137a2215702a805 upstream.
CVE-2018-9363
The buffer length is unsigned at all layers, but gets cast to int and
checked in hidp_process_report(), which can lead to a buffer overflow.
Switch the len parameter to unsigned int to resolve the issue.
This affects 3.18 and newer kernels.
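The fixed function is roughly as follows (treat the body as a sketch of net/bluetooth/hidp/core.c after this change):
static void hidp_process_report(struct hidp_session *session, int type,
                                const u8 *data, unsigned int len)
{
        /* len stays unsigned, so a huge value can no longer wrap negative */
        if (len > HID_MAX_BUFFER_SIZE)
                len = HID_MAX_BUFFER_SIZE;

        memcpy(session->input_buf, data, len);
        hid_input_report(session->hid, type, session->input_buf, len, 1);
}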
Signed-off-by: Mark Salyzyn <salyzyn@android.com>
Fixes: a4b1b5877b ("HID: Bluetooth: hidp: make sure input buffers are big enough")
Cc: Marcel Holtmann <marcel@holtmann.org>
Cc: Johan Hedberg <johan.hedberg@gmail.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Kees Cook <keescook@chromium.org>
Cc: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Cc: linux-bluetooth@vger.kernel.org
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: security@kernel.org
Cc: kernel-team@android.com
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3bbda5a38601f7675a214be2044e41d7749e6c7b upstream.
If the ts3a227e audio accessory detection hardware is present and its
driver probed, the jack needs to be created before enabling jack
detection in the ts3a227e driver. With this patch, the jack is
instantiated in the max98090 headset init function if the ts3a227e is
present. This fixes a null pointer dereference as the jack detection
enabling function in the ts3a227e driver was called before the jack was
created.
[minor correction to keep error handling on jack creation the same
as before by Pierre Bossart]
Signed-off-by: Thierry Escande <thierry.escande@collabora.com>
Signed-off-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Acked-By: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 318abdfbe708aaaa652c79fb500e9bd60521f9dc upstream.
Like the skcipher_walk and blkcipher_walk cases:
scatterwalk_done() is only meant to be called after a nonzero number of
bytes have been processed, since scatterwalk_pagedone() will flush the
dcache of the *previous* page. But in the error case of
ablkcipher_walk_done(), e.g. if the input wasn't an integer number of
blocks, scatterwalk_done() was actually called after advancing 0 bytes.
This caused a crash ("BUG: unable to handle kernel paging request")
during '!PageSlab(page)' on architectures like arm and arm64 that define
ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
page-aligned as in that case walk->offset == 0.
Fix it by reorganizing ablkcipher_walk_done() to skip the
scatterwalk_advance() and scatterwalk_done() if an error has occurred.
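Illustrative pattern only (not the literal crypto/ablkcipher.c diff): the scatterwalk bookkeeping runs only on success, since scatterwalk_done() flushes the dcache of the previous page and is valid only after a nonzero number of bytes has been processed:
if (likely(err >= 0)) {
        unsigned int n = walk->nbytes - err;    /* bytes processed this step */

        scatterwalk_advance(&walk->out, n);
        scatterwalk_done(&walk->out, 1, walk->total - n);
}
/* on error: skip straight to releasing the walk buffers and returning err */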
Reported-by: Liu Chao <liuchao741@huawei.com>
Fixes: bf06099db1 ("crypto: skcipher - Add ablkcipher_walk interfaces")
Cc: <stable@vger.kernel.org> # v2.6.35+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0868def3e4100591e7a1fdbf3eed1439cc8f7ca3 upstream.
Like the skcipher_walk case:
scatterwalk_done() is only meant to be called after a nonzero number of
bytes have been processed, since scatterwalk_pagedone() will flush the
dcache of the *previous* page. But in the error case of
blkcipher_walk_done(), e.g. if the input wasn't an integer number of
blocks, scatterwalk_done() was actually called after advancing 0 bytes.
This caused a crash ("BUG: unable to handle kernel paging request")
during '!PageSlab(page)' on architectures like arm and arm64 that define
ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
page-aligned as in that case walk->offset == 0.
Fix it by reorganizing blkcipher_walk_done() to skip the
scatterwalk_advance() and scatterwalk_done() if an error has occurred.
This bug was found by syzkaller fuzzing.
Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:
#include <linux/if_alg.h>
#include <sys/socket.h>
#include <unistd.h>
int main()
{
    struct sockaddr_alg addr = {
        .salg_type = "skcipher",
        .salg_name = "ecb(aes-generic)",
    };
    char buffer[4096] __attribute__((aligned(4096))) = { 0 };
    int fd;

    fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
    bind(fd, (void *)&addr, sizeof(addr));
    setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
    fd = accept(fd, NULL, NULL);
    write(fd, buffer, 15);
    read(fd, buffer, 15);
}
Reported-by: Liu Chao <liuchao741@huawei.com>
Fixes: 5cde0af2a9 ("[CRYPTO] cipher: Added block cipher type")
Cc: <stable@vger.kernel.org> # v2.6.19+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit bb29648102335586e9a66289a1d98a0cb392b6e5 upstream.
syzbot reported a crash in vmac_final() when multiple threads
concurrently use the same "vmac(aes)" transform through AF_ALG. The bug
is pretty fundamental: the VMAC template doesn't separate per-request
state from per-tfm (per-key) state like the other hash algorithms do,
but rather stores it all in the tfm context. That's wrong.
Also, vmac_final() incorrectly zeroes most of the state including the
derived keys and cached pseudorandom pad. Therefore, only the first
VMAC invocation with a given key calculates the correct digest.
Fix these bugs by splitting the per-tfm state from the per-request state
and using the proper init/update/final sequencing for requests.
Reproducer for the crash:
#include <linux/if_alg.h>
#include <sys/socket.h>
#include <unistd.h>
int main()
{
    int fd;
    struct sockaddr_alg addr = {
        .salg_type = "hash",
        .salg_name = "vmac(aes)",
    };
    char buf[256] = { 0 };

    fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
    bind(fd, (void *)&addr, sizeof(addr));
    setsockopt(fd, SOL_ALG, ALG_SET_KEY, buf, 16);
    fork();
    fd = accept(fd, NULL, NULL);
    for (;;)
        write(fd, buf, 256);
}
The immediate cause of the crash is that vmac_ctx_t.partial_size exceeds
VMAC_NHBYTES, causing vmac_final() to memset() a negative length.
Reported-by: syzbot+264bca3a6e8d645550d3@syzkaller.appspotmail.com
Fixes: f1939f7c56 ("crypto: vmac - New hash algorithm for intel_txt support")
Cc: <stable@vger.kernel.org> # v2.6.32+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 73bf20ef3df262026c3470241ae4ac8196943ffa upstream.
The VMAC template assumes the block cipher has a 128-bit block size, but
it failed to check for that. Thus it was possible to instantiate it
using a 64-bit block size cipher, e.g. "vmac(cast5)", causing
uninitialized memory to be used.
Add the needed check when instantiating the template.
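An illustrative form of the check done when instantiating the template (wrapped in a helper here for clarity; not the literal diff):
static int vmac_check_blocksize(const struct crypto_alg *alg)
{
        /* VMAC's construction assumes a 128-bit (16-byte) block cipher */
        return alg->cra_blocksize == 16 ? 0 : -EINVAL;
}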
Fixes: f1939f7c56 ("crypto: vmac - New hash algorithm for intel_txt support")
Cc: <stable@vger.kernel.org> # v2.6.32+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 934193a654c1f4d0643ddbf4b2529b508cae926e upstream.
Verify that 'depmod' ($DEPMOD) is installed.
This is a partial revert of commit 620c231c7a
("kbuild: do not check for ancient modutils tools").
Also update Documentation/process/changes.rst to refer to
kmod instead of module-init-tools.
Fixes kernel bugzilla #198965:
https://bugzilla.kernel.org/show_bug.cgi?id=198965
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Lucas De Marchi <lucas.demarchi@profusion.mobi>
Cc: Lucas De Marchi <lucas.de.marchi@gmail.com>
Cc: Michal Marek <michal.lkml@markovi.net>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Chih-Wei Huang <cwhuang@linux.org.tw>
Cc: stable@vger.kernel.org # any kernel since 2012
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0e410e158e5baa1300bdf678cea4f4e0cf9d8b94 upstream.
With KASAN enabled the kernel has two different memset() functions, one
with KASAN checks (memset) and one without (__memset). KASAN uses some
macro tricks to use the proper version where required. For example
memset() calls in mm/slub.c are without KASAN checks, since they operate
on poisoned slab object metadata.
The issue is that clang emits memset() calls even when there is no
memset() in the source code. They get linked against the improper memset()
implementation, and the kernel fails to boot due to a huge number of KASAN
reports during early boot stages.
The solution is to add -fno-builtin flag for files with KASAN_SANITIZE :=
n marker.
Link: http://lkml.kernel.org/r/8ffecfffe04088c52c42b92739c2bd8a0bcb3f5e.1516384594.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Michal Marek <michal.lkml@markovi.net>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[ Nick : Backported to 4.4 avoiding KUBSAN ]
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The 4.4.y stable backport dc6ae4dffd for the upstream commit
3d4bf93ac120 ("tcp: detect malicious patterns in
tcp_collapse_ofo_queue()") missed a line that enlarges the
range_truesize value, which broke the whole check.
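The missed accumulation is of this form (inside the loop in tcp_collapse_ofo_queue() that extends the current range); without it, range_truesize never grows and the malicious-pattern check cannot trigger:
range_truesize += skb->truesize;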
Fixes: dc6ae4dffd ("tcp: detect malicious patterns in tcp_collapse_ofo_queue()")
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Cc: Michal Kubecek <mkubecek@suse.cz>
commit f967db0b9ed44ec3057a28f3b28efc51df51b835 upstream.
ioremap() supports pmd mappings on x86-PAE. However, the kernel's pmd
tables are not shared among processes on x86-PAE. Therefore, any
update to sync'd pmd entries needs re-syncing. Freeing a pte page
also leads to a vmalloc fault and hits the BUG_ON in vmalloc_sync_one().
Disable free page handling on x86-PAE. pud_free_pmd_page() and
pmd_free_pte_page() simply return 0 if a given pud/pmd entry is present.
This assures that ioremap() does not update sync'd pmd entries at the
cost of falling back to pte mappings.
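The x86-PAE stubs end up roughly as below (shown without the 'addr' argument that a separate patch in this series adds):
int pud_free_pmd_page(pud_t *pud)
{
        return pud_none(*pud);  /* 0 (failure) while the entry is present */
}

int pmd_free_pte_page(pmd_t *pmd)
{
        return pmd_none(*pmd);
}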
Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
Reported-by: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: mhocko@suse.com
Cc: akpm@linux-foundation.org
Cc: hpa@zytor.com
Cc: cpandya@codeaurora.org
Cc: linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: stable@vger.kernel.org
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20180627141348.21777-2-toshi.kani@hpe.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This is needed to address the XPU limitation, so that the
shared memory is not contiguous with other memory allocations
that may happen from other clients in the system.
Change-Id: Ibc9961245f32ecc63892007a3d12b7956cf63e67
Signed-off-by: Ankit Jain <jankit@codeaurora.org>
If the guard_memory dtsi property is set, then the shared memory
region will be guarded by SZ_4K regions at the start and at the end.
This is needed to overcome the XPU limitation on a few MSM HW variants,
so as to make this memory non-contiguous with other allocations
that may possibly happen from other clients in the system.
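A rough sketch of the effect, with illustrative names (the actual smem setup code differs): when guarding is enabled, the usable window shrinks by one 4K page at each end of the carve-out:
static void __iomem *smem_map_guarded(phys_addr_t base, size_t size, bool guard)
{
        if (guard) {
                base += SZ_4K;          /* leading guard page */
                size -= 2 * SZ_4K;      /* plus a trailing guard page */
        }
        return ioremap(base, size);
}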
Change-Id: I57637619cea8fe7f0f7254624e07177ea4a4fce0
Signed-off-by: Sahitya Tummala <stummala@codeaurora.org>
Fix some null pointer dereference flaws and uninitialized parameter issues.
Change-Id: I0ed5f3f62c3794775bf97d353c4e50dd8ceb32da
Signed-off-by: Yao Jiang <yaojia@codeaurora.org>
Merge 4.4.148 into android-4.4
Changes in 4.4.148
ext4: fix check to prevent initializing reserved inodes
tpm: fix race condition in tpm_common_write()
ipv4+ipv6: Make INET*_ESP select CRYPTO_ECHAINIV
fork: unconditionally clear stack on fork
parisc: Enable CONFIG_MLONGCALLS by default
parisc: Define mb() and add memory barriers to assembler unlock sequences
xen/netfront: don't cache skb_shinfo()
ACPI / LPSS: Add missing prv_offset setting for byt/cht PWM devices
scsi: sr: Avoid that opening a CD-ROM hangs with runtime power management enabled
root dentries need RCU-delayed freeing
fix mntput/mntput race
fix __legitimize_mnt()/mntput() race
IB/core: Make testing MR flags for writability a static inline function
IB/mlx4: Mark user MR as writable if actual virtual memory is writable
IB/ocrdma: fix out of bounds access to local buffer
ARM: dts: imx6sx: fix irq for pcie bridge
x86/paravirt: Fix spectre-v2 mitigations for paravirt guests
x86/speculation: Protect against userspace-userspace spectreRSB
kprobes/x86: Fix %p uses in error messages
x86/irqflags: Provide a declaration for native_save_fl
x86/speculation/l1tf: Increase 32bit PAE __PHYSICAL_PAGE_SHIFT
x86/mm: Move swap offset/type up in PTE to work around erratum
x86/mm: Fix swap entry comment and macro
mm: x86: move _PAGE_SWP_SOFT_DIRTY from bit 7 to bit 1
x86/speculation/l1tf: Change order of offset/type in swap entry
x86/speculation/l1tf: Protect swap entries against L1TF
x86/speculation/l1tf: Protect PROT_NONE PTEs against speculation
x86/speculation/l1tf: Make sure the first page is always reserved
x86/speculation/l1tf: Add sysfs reporting for l1tf
mm: Add vm_insert_pfn_prot()
mm: fix cache mode tracking in vm_insert_mixed()
x86/speculation/l1tf: Disallow non privileged high MMIO PROT_NONE mappings
x86/speculation/l1tf: Limit swap file size to MAX_PA/2
x86/bugs: Move the l1tf function and define pr_fmt properly
x86/speculation/l1tf: Extend 64bit swap file size limit
x86/cpufeatures: Add detection of L1D cache flush support.
x86/speculation/l1tf: Protect PAE swap entries against L1TF
x86/speculation/l1tf: Fix up pte->pfn conversion for PAE
x86/speculation/l1tf: Invert all not present mappings
x86/speculation/l1tf: Make pmd/pud_mknotpresent() invert
x86/mm/pat: Make set_memory_np() L1TF safe
x86/mm/kmmio: Make the tracer robust against L1TF
x86/speculation/l1tf: Fix up CPU feature flags
x86/init: fix build with CONFIG_SWAP=n
x86/speculation/l1tf: Unbreak !__HAVE_ARCH_PFN_MODIFY_ALLOWED architectures
Linux 4.4.148
Change-Id: I83c857d9d9d74ee47e61d15eb411f276f057ba3d
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit 6c26fcd2abfe0a56bbd95271fce02df2896cfd24 upstream.
pfn_modify_allowed() and arch_has_pfn_modify_check() are outside of the
!__ASSEMBLY__ section in include/asm-generic/pgtable.h, which confuses
the assembler on archs that don't have __HAVE_ARCH_PFN_MODIFY_ALLOWED (e.g.
ia64) and breaks the build:
include/asm-generic/pgtable.h: Assembler messages:
include/asm-generic/pgtable.h:538: Error: Unknown opcode `static inline bool pfn_modify_allowed(unsigned long pfn,pgprot_t prot)'
include/asm-generic/pgtable.h:540: Error: Unknown opcode `return true'
include/asm-generic/pgtable.h:543: Error: Unknown opcode `static inline bool arch_has_pfn_modify_check(void)'
include/asm-generic/pgtable.h:545: Error: Unknown opcode `return false'
arch/ia64/kernel/entry.S:69: Error: `mov' does not fit into bundle
Move those two static inlines into the !__ASSEMBLY__ section so that they
don't confuse the asm build pass.
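The effective placement after the move is along these lines:
#ifndef __ASSEMBLY__
#ifndef __HAVE_ARCH_PFN_MODIFY_ALLOWED
static inline bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
{
        return true;
}

static inline bool arch_has_pfn_modify_check(void)
{
        return false;
}
#endif /* !__HAVE_ARCH_PFN_MODIFY_ALLOWED */
#endif /* !__ASSEMBLY__ */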
Fixes: 42e4089c7890 ("x86/speculation/l1tf: Disallow non privileged high MMIO PROT_NONE mappings")
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[groeck: Context changes]
Signed-off-by: Guenter Roeck <linux@roeck-us.net>