android_kernel_oneplus_msm8998/arch/powerpc/mm
Hugh Dickins 4b35943067 mm: larger stack guard gap, between vmas
commit 1be7107fbe18eed3e319a6c3e83c78254b693acb upstream.

The stack guard page is a useful feature that reduces the risk of the
stack smashing into a different mapping. We have been using a single
page gap, which is sufficient to prevent the stack from sitting
directly adjacent to a different mapping. But this turns out to be
insufficient given actual stack usage in userspace. E.g. glibc
performs alloca() allocations as large as 64kB in many commonly used
functions. Other code uses constructs like gid_t buffer[NGROUPS_MAX],
which is 256kB, or stack strings sized by MAX_ARG_STRLEN.
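
To see the scale of the problem, here is a hypothetical userspace
illustration (not taken from the patch): a single array of this kind
spans 64 pages on a 4k-page system, so touching only its far end can
step clean over a one-page guard gap.

    /* Hypothetical illustration, not from the patch: on Linux,
     * NGROUPS_MAX is 65536 and gid_t is 4 bytes, so this local array
     * alone occupies 256kB of stack -- 64 pages on a 4k-page system.
     * An access near its far end lands a long way below the previous
     * stack page, potentially skipping a single-page guard gap. */
    #include <limits.h>     /* NGROUPS_MAX */
    #include <sys/types.h>  /* gid_t */
    #include <unistd.h>     /* getgroups() */

    int count_groups(void)
    {
        gid_t buffer[NGROUPS_MAX];              /* 256kB stack frame */
        return getgroups(NGROUPS_MAX, buffer);  /* touches the buffer */
    }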

This is especially dangerous for suid binaries when the stack size
limit defaults to unlimited, because such applications can be tricked
into consuming a large portion of the stack, after which a single
glibc call can jump clear over the guard page. Unfortunately, these
attacks are not theoretical.

Make those attacks less probable by increasing the stack guard gap
to 1MB (on systems with 4k pages; but make it depend on the page size,
because systems with larger base pages might cap stack allocations in
PAGE_SIZE units), which should cover larger alloca() and VLA stack
allocations. It is obviously not a complete fix, because the problem
is somewhat inherent, but it should greatly reduce the attack surface.

One could argue that the gap size should be configurable from
userspace, but that can be added later if somebody finds the new 1MB
wrong for some special-case application. For now, add a kernel command
line option (stack_guard_gap) to specify the gap size in page units.
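
The tunable boils down to a global gap size in bytes plus an early
__setup() hook; roughly (a sketch following the description above,
with the stated default of 256 pages, i.e. 1MB with 4k pages):

    /* 256 pages of guard gap by default: 1MB on 4k-page systems. */
    unsigned long stack_guard_gap = 256UL << PAGE_SHIFT;

    /* Parse "stack_guard_gap=N" from the command line, N in pages. */
    static int __init cmdline_parse_stack_guard_gap(char *p)
    {
        unsigned long val;
        char *endptr;

        val = simple_strtoul(p, &endptr, 10);
        if (!*endptr)
            stack_guard_gap = val << PAGE_SHIFT;

        return 0;
    }
    __setup("stack_guard_gap=", cmdline_parse_stack_guard_gap);

Booting with stack_guard_gap=1 thus restores the old single-page
behaviour.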

Implementation-wise, first delete all the old stack guard page code:
although we could get away with accounting one extra page in a stack
vma, accounting a larger gap can break userspace. Case in point, a
program run with "ulimit -S -v 20000" failed once the 1MB gap was
counted towards RLIMIT_AS; similar problems could come with
RLIMIT_MLOCK and strict non-overcommit mode.

Instead of keeping the gap inside the stack vma, maintain the stack
guard gap as a gap between vmas: use vm_start_gap() in place of
vm_start (or vm_end_gap() in place of vm_end for VM_GROWSUP) in just
those few places which need to respect the gap, mainly
arch_get_unmapped_area(), and the vma tree's subtree_gap support for
it.
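
The helpers are essentially the vma bounds widened by the gap on the
growing side, with wraparound clamped; roughly:

    /* Effective start of a downward-growing stack vma: vm_start
     * pushed down by the guard gap, clamped to 0 on underflow. */
    static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
    {
        unsigned long vm_start = vma->vm_start;

        if (vma->vm_flags & VM_GROWSDOWN) {
            vm_start -= stack_guard_gap;
            if (vm_start > vma->vm_start)   /* underflow wrapped */
                vm_start = 0;
        }
        return vm_start;
    }

    /* Mirror image for upward-growing (VM_GROWSUP) stacks. */
    static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
    {
        unsigned long vm_end = vma->vm_end;

        if (vma->vm_flags & VM_GROWSUP) {
            vm_end += stack_guard_gap;
            if (vm_end < vma->vm_end)       /* overflow wrapped */
                vm_end = -PAGE_SIZE;
        }
        return vm_end;
    }

Callers that size or place mappings compare against these widened
bounds, so the gap is respected without ever being accounted to the
stack vma itself.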

Original-patch-by: Oleg Nesterov <oleg@redhat.com>
Original-patch-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Tested-by: Helge Deller <deller@gmx.de> # parisc
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[wt: backport to 4.11: adjust context]
[wt: backport to 4.9: adjust context ; kernel doc was not in admin-guide]
[wt: backport to 4.4: adjust context ; drop ppc hugetlb_radix changes]
Signed-off-by: Willy Tarreau <w@1wt.eu>
[gkh: minor build fixes for 4.4]
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-06-26 07:13:11 +02:00
40x_mmu.c
44x_mmu.c
copro_fault.c powerpc/mm: Prevent unlikely crash in copro_calculate_slb() 2016-10-28 03:01:35 -04:00
dma-noncoherent.c powerpc: Fix compile errors with STRICT_MM_TYPECHECKS enabled 2015-04-10 20:02:47 +10:00
fault.c powerpc: Add plain English description for alignment exception oopses 2015-07-06 20:24:35 +10:00
fsl_booke_mmu.c powerpc/fsl-booke-64: Don't limit ppc64_rma_size to one TLB entry 2015-10-27 18:13:22 -05:00
hash_low_32.S
hash_low_64.S powerpc/mm: Drop CONFIG_PPC_HAS_HASH_64K 2015-08-18 19:32:10 +10:00
hash_native_64.c powerpc/mm: Add missing global TLB invalidate if cxl is active 2017-04-12 12:38:34 +02:00
hash_utils_64.c powerpc/mm: Differentiate between hugetlb and THP during page walk 2015-10-12 15:30:09 +11:00
highmem.c sched/preempt, mm/kmap: Explicitly disable/enable preemption in kmap_atomic_* 2015-05-19 08:39:14 +02:00
hugepage-hash64.c powerpc/mm: Recompute hash value after a failed update 2015-09-16 22:06:03 +10:00
hugetlbpage-book3e.c
hugetlbpage-hash64.c
hugetlbpage.c powerpc/mm: Fixup preempt underflow with huge pages 2016-04-20 15:41:53 +09:00
icswx.c
icswx.h
icswx_pid.c
init_32.c
init_64.c powerpc/mm: Free string after creating kmem cache 2015-03-26 15:23:17 +11:00
Makefile powerpc/mmu: Add userspace-to-physical addresses translation cache 2015-06-11 15:16:54 +10:00
mem.c libnvdimm for 4.3: 2015-09-08 14:35:59 -07:00
mmap.c mm: expose arch_mmap_rnd when available 2015-04-14 16:49:05 -07:00
mmu_context_hash32.c
mmu_context_hash64.c powerpc/mmu: Add userspace-to-physical addresses translation cache 2015-06-11 15:16:54 +10:00
mmu_context_iommu.c powerpc/mmu: Add userspace-to-physical addresses translation cache 2015-06-11 15:16:54 +10:00
mmu_context_nohash.c powerpc/8xx: reduce pressure on TLB due to context switches 2015-01-29 21:51:06 -06:00
mmu_decl.h powerpc/fsl-booke-64: Don't limit ppc64_rma_size to one TLB entry 2015-10-27 18:13:22 -05:00
numa.c powerpc updates for 4.4 2015-11-05 23:38:43 -08:00
pgtable.c mm: convert p[te|md]_numa users to p[te|md]_protnone_numa 2015-02-12 18:54:08 -08:00
pgtable_32.c powerpc: Replace mem_init_done with slab_is_available() 2015-04-10 20:02:48 +10:00
pgtable_64.c powerpc/booke64: Move mb() to __set_pte_at() with kernel-addr test 2015-08-07 23:00:01 -05:00
ppc_mmu_32.c powerpc/mm: Change setbat() to take a pgprot_t rather than flags 2015-04-07 17:15:13 +10:00
slb.c powerpc/slb: Use a local to avoid multiple calls to get_slb_shadow() 2015-10-01 16:52:01 +10:00
slb_low.S powerpc/mm: Don't alias user region to other regions below PAGE_OFFSET 2016-09-24 10:07:35 +02:00
slice.c mm: larger stack guard gap, between vmas 2017-06-26 07:13:11 +02:00
subpage-prot.c arch/powerpc/mm/subpage-prot.c: use walk->vma and walk_page_vma() 2015-02-11 17:06:06 -08:00
tlb_hash32.c
tlb_hash64.c powerpc/mm: Differentiate between hugetlb and THP during page walk 2015-10-12 15:30:09 +11:00
tlb_low_64e.S powerpc/e6500: hw tablewalk: make sure we invalidate and write to the same tlb entry 2015-10-27 18:14:40 -05:00
tlb_nohash.c powerpc/fsl-booke-64: Don't limit ppc64_rma_size to one TLB entry 2015-10-27 18:13:22 -05:00
tlb_nohash_low.S powerpc/85xx: Load all early TLB entries at once 2015-10-22 22:50:46 -05:00
vphn.c powerpc/vphn: parsing code rewrite 2015-03-18 10:48:59 +11:00
vphn.h powerpc/vphn: parsing code rewrite 2015-03-18 10:48:59 +11:00