Commit graph

563574 commits

Rohit Vaswani
c2564a7fb8 dma_removed: Substitute __GFP_WAIT with upstream gfpflags_allow_blocking()
Signed-off-by: Rohit Vaswani <rvaswani@codeaurora.org>
2016-03-22 11:04:01 -07:00
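
For reference, the upstream helper this commit substitutes in is a
one-line test of the direct-reclaim bit, so call sites that used to
check "flags & __GFP_WAIT" become gfpflags_allow_blocking(flags):

    static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
    {
            return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
    }
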
Laura Abbott
67097c7224 arm: Add option to skip buffer zeroing
The DMA framework currently zeros all buffers because it (rightfully
so) assumes that drivers will soon need to pass the memory to a
device. Some devices/use cases may not require zeroed memory and
there can be an increase in performance if we skip the zeroing. Add
a DMA_ATTR to allow skipping of DMA zeroing.

Change-Id: Id9ccab355554b3163d8e7eae1caa82460e171e34
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[mitchelh: dropped changes to arm32]
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2016-03-22 11:04:00 -07:00
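
A minimal sketch of how a caller might request an unzeroed buffer
with the pre-4.8 dma_attrs API; the attribute name
DMA_ATTR_SKIP_ZEROING is assumed from the description, and the
caller must fully initialize the buffer itself:

    #include <linux/dma-mapping.h>

    /* dev, size and dma_handle as usual for the caller */
    DEFINE_DMA_ATTRS(attrs);
    void *buf;

    dma_set_attr(DMA_ATTR_SKIP_ZEROING, &attrs);
    buf = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL, &attrs);
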
Susheel Khiani
793d80f988 mm: Update is_vmalloc_addr to account for vmalloc savings
is_vmalloc_addr currently assumes that all vmalloc addresses
exist between VMALLOC_START and VMALLOC_END. This may not be
the case when interleaving vmalloc and lowmem. Update
is_vmalloc_addr to properly check for this.

Correspondingly we need to ensure that VMALLOC_TOTAL accounts
for all the vmalloc regions when CONFIG_ENABLE_VMALLOC_SAVING
is enabled.

Change-Id: I5def3d6ae1a4de59ea36f095b8c73649a37b1f36
Signed-off-by: Susheel Khiani <skhiani@codeaurora.org>
2016-03-22 11:03:59 -07:00
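
For reference, the upstream check that becomes insufficient once
lowmem and vmalloc interleave is a plain range test:

    static inline int is_vmalloc_addr(const void *x)
    {
            unsigned long addr = (unsigned long)x;

            return addr >= VMALLOC_START && addr < VMALLOC_END;
    }

With CONFIG_ENABLE_VMALLOC_SAVING an address outside
[VMALLOC_START, VMALLOC_END) can still be vmalloc space, so the
updated check has to walk the actual vmalloc regions instead.
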
Susheel Khiani
c064333eac msm: Allow lowmem to be non contiguous and mixed
Currently on 32 bit systems, virtual space above PAGE_OFFSET
is reserved for direct mapped lowmem and part of the virtual
address space is reserved for vmalloc. We want to have as
much direct mapped memory as possible since there is a
penalty for mapping/unmapping highmem. Now, we may have an
image that is expected to live for the lifetime of the
system and is reserved in a physical region that would
otherwise be part of direct mapped lowmem. The physical
memory which is thus reserved is never used by Linux. This
means that even though the system never actually accesses
the virtual memory corresponding to the reserved physical
memory, we still lose that portion of direct mapped lowmem
space.

So by allowing lowmem to be non-contiguous we can give the
unused virtual address space of the reserved region back for
use in vmalloc.

Change-Id: I980b3dfafac71884dcdcb8cd2e4a6363cde5746a
Signed-off-by: Susheel Khiani <skhiani@codeaurora.org>
2016-03-22 11:03:58 -07:00
Shiraz Hashim
4af3c048cf arm: keep address range pmd aligned while remapping
During early init, all DMA areas are remapped to PAGE_SIZE
granularity. Since full pmd regions are cleared before being
remapped into PAGE_SIZE mappings, ensure that the address
range is pmd-size aligned while not crossing memory
boundaries.

This ensures that even if the address region is not pmd
aligned, its mapping is not cleared but is instead factored
into PAGE_SIZE regions.

Change-Id: Iad4ad7fd6169cdc693d532821aba453465addb7c
Signed-off-by: Shiraz Hashim <shashim@codeaurora.org>
2016-03-22 11:03:57 -07:00
Susheel Khiani
7215a1cfee msm: Increase the kernel virtual area to include lowmem
Even though lowmem is accounted for in vmalloc space,
allocation comes only from the region bounded by
VMALLOC_START and VMALLOC_END. The kernel virtual area
can now allocate from any unmapped region starting
from PAGE_OFFSET.

Change-Id: I291b9eb443d3f7445fd979bd7b09e9241ff22ba3
Signed-off-by: Neeti Desai <neetid@codeaurora.org>
Signed-off-by: Susheel Khiani <skhiani@codeaurora.org>
2016-03-22 11:03:56 -07:00
Vinayak Menon
8dd433c495 mm: showmem: make the notifiers atomic
There are places in the kernel, like the lowmemorykiller, which
invoke show_mem_call_notifiers from an atomic context. So move
from a blocking notifier to an atomic one. At present the
notifier callbacks do not call sleeping functions, but it must
be ensured that this does not happen in the future either.

Change-Id: I9668e67463ab8a6a60be55dbc86b88f45be8b041
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
2016-03-22 11:03:55 -07:00
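
The conversion is mechanical with the kernel's notifier API; a
sketch, with the chain and registration names assumed:

    static ATOMIC_NOTIFIER_HEAD(show_mem_nh); /* was BLOCKING_NOTIFIER_HEAD */

    int show_mem_notifier_register(struct notifier_block *nb)
    {
            return atomic_notifier_chain_register(&show_mem_nh, nb);
    }

    void show_mem_call_notifiers(void)
    {
            /* safe from atomic context; callbacks must not sleep */
            atomic_notifier_call_chain(&show_mem_nh, 0, NULL);
    }
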
Vinayak Menon
3086328d5f mm: page-writeback: fix page state calculation in throttle_vm_writeout
It was found that a number of tasks were blocked in the reclaim path
(throttle_vm_writeout) for seconds, because of vmstat_diff not being
synced in time. Fix that by adding a new function
global_page_state_snapshot.

Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
Change-Id: Iec167635ad724a55c27bdbd49eb8686e7857216c
2016-03-22 11:03:54 -07:00
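
A sketch of such a function, modelled on the upstream
zone_page_state_snapshot(): fold the per-cpu deltas that vmstat
has not yet synced into the global counter so callers see a
fresh value:

    static unsigned long global_page_state_snapshot(enum zone_stat_item item)
    {
            long x = atomic_long_read(&vm_stat[item]);
    #ifdef CONFIG_SMP
            struct zone *zone;
            int cpu;

            for_each_populated_zone(zone)
                    for_each_online_cpu(cpu)
                            x += per_cpu_ptr(zone->pageset,
                                             cpu)->vm_stat_diff[item];

            if (x < 0)
                    x = 0;
    #endif
            return x;
    }
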
Vinayak Menon
d487a9f1f7 mm: compaction: fix the page state calculation in too_many_isolated
Commit "mm: vmscan: fix the page state calculation in too_many_isolated"
fixed an issue where a number of tasks were blocked in the reclaim
path for seconds, because of vmstat_diff not being synced in time.
A similar problem can happen in isolate_migratepages_block, where
a similar calculation is performed. This patch fixes that.

Change-Id: Ie74f108ef770da688017b515fe37faea6f384589
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
2016-03-22 11:03:54 -07:00
Vinayak Menon
910a8bd108 mm: vmpressure: account allocstalls only on higher pressures
At present any vmpressure value is scaled up if the pages are
reclaimed through direct reclaim. This can result in false
vmpressure values. Consider a case where a device is booted up
and most of the memory is occupied by file pages. kswapd will
make sure that the high watermark is maintained. Now when a
sudden huge allocation request comes in, the system will
definitely have to get into direct reclaim. The vmpressure
values can be very low, but because of the allocstall accounting
logic even these low values will be scaled to values nearing
100. This can result in unnecessary LMK kills, for example. So
define a tunable threshold for vmpressure above which the
allocstalls will be accounted.

CRs-fixed: 893699
Change-Id: Idd7c6724264ac89f1f68f2e9d70a32390ffca3e5
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
2016-03-22 11:03:53 -07:00
Vinayak Menon
e5ce54a9cb mm: swap: don't delay swap free for fast swap devices
There are a couple of issues with swapcache usage when ZRAM is
used as the swap device.
1) The kernel does a swap readahead which can be around 6 to 8
pages depending on total RAM; this is not required for zram
since accesses are fast.
2) The kernel delays the freeing of swapcache expecting a later
hit, which again is useless in the case of zram.
3) This is not related to swapcache, but to zram usage itself.
As mentioned in (2) the kernel delays freeing of swapcache, but
along with that it also delays freeing the zram compressed page,
i.e. there can be 2 copies, though one is compressed.

This patch addresses these issues using two new flags,
QUEUE_FLAG_FAST and SWP_FAST, to indicate that accesses to the
device will be fast and cheap, and instructs the swap layer to
free up swap space aggressively, and not to do readahead.

Change-Id: I5d2d5176a5f9420300bb2f843f6ecbdb25ea80e4
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
2016-03-22 11:03:52 -07:00
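
A rough sketch of the two halves, using the flag names from the
description (the plumbing between them is assumed):

    /* zram (driver side): advertise fast, seek-free access */
    queue_flag_set_unlocked(QUEUE_FLAG_FAST, zram->disk->queue);

    /* swapon (swap layer side), p is the swap_info_struct:
     * remember the property on the swap device */
    if (test_bit(QUEUE_FLAG_FAST, &bdev_get_queue(p->bdev)->queue_flags))
            p->flags |= SWP_FAST;

The readahead and swap-cache paths can then test SWP_FAST to
skip readahead and free swap slots eagerly.
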
Vinayak Menon
0a8bf43567 mm: vmpressure: scale pressure based on reclaim context
The existing calculation of vmpressure takes into account only
the ratio of reclaimed to scanned pages, but not the time spent
or the difficulty in reclaiming those pages. For example, when
there are quite a number of file pages in the system, an
allocation request can be satisfied by reclaiming the file pages
alone. If such a reclaim is successful, the vmpressure value
will remain low irrespective of the time spent by the reclaim
code to free up the file pages. With a feature like
lowmemorykiller, killing a task can be faster than reclaiming
the file pages alone. So if the vmpressure values reflect the
reclaim difficulty level, clients can make a decision based on
that, for example to kill a task early.

This patch monitors the number of pages scanned in the direct
reclaim path and scales the vmpressure level according to that.

Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
Change-Id: I6e643d29a9a1aa0814309253a8b690ad86ec0b13
2016-03-22 11:03:51 -07:00
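
For context, the ratio that stock vmpressure reduces everything
to is:

    unsigned long scale = scanned + reclaimed;
    unsigned long pressure;

    pressure = scale - (reclaimed * scale / scanned);
    pressure = pressure * 100 / scale;

The patch additionally tracks how many pages direct reclaim had
to scan and scales the resulting level up when that count is
large, so a reclaim that succeeded only after heavy scanning no
longer reads as zero pressure.
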
Vinayak Menon
fb880fe5d1 mm: vmpressure: allow in-kernel clients to subscribe for events
Currently, vmpressure is tied to memcg and its events are
available only to userspace clients. This patch removes
the dependency on CONFIG_MEMCG and adds a mechanism for
in-kernel clients to subscribe for vmpressure events (in
fact raw vmpressure values are delivered instead of vmpressure
levels, to provide clients more flexibility to take actions
on custom pressure levels which are not currently defined
by vmpressure module).

Change-Id: I38010f166546e8d7f12f5f355b5dbfd6ba04d587
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
2016-03-22 11:03:50 -07:00
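
A sketch of what an in-kernel subscriber could look like,
assuming a notifier-based registration call is exposed (names
assumed):

    static int my_vmpressure_cb(struct notifier_block *nb,
                                unsigned long pressure, void *data)
    {
            /* 'pressure' is the raw 0-100 value, not a level */
            if (pressure >= 95)
                    pr_info("vmpressure critical: %lu\n", pressure);
            return NOTIFY_OK;
    }

    static struct notifier_block my_vmpressure_nb = {
            .notifier_call = my_vmpressure_cb,
    };

    /* from the client's init code: */
    vmpressure_notifier_register(&my_vmpressure_nb);
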
Liam Mark
a886f65ded mm/Kconfig: support forcing allocators to return ZONE_DMA memory
Add a new config item, CONFIG_FORCE_ALLOC_FROM_DMA_ZONE, which
can be used to optionally force certain allocators to always
return memory from ZONE_DMA.

This option helps ensure that clients who require ZONE_DMA
memory are always using ZONE_DMA memory.

Change-Id: Id2d36214307789f27aa775c2bef2dab5047c4ff0
Signed-off-by: Liam Mark <lmark@codeaurora.org>
2016-03-22 11:03:49 -07:00
Vignesh Radhakrishnan
07ca3d98c6 kmemleak: Make kmemleak_stack_scan optional using config
Currently we have kmemleak_stack_scan enabled by default.
This can hog the CPU with preemption disabled for a long
time, starving other tasks.

Make this optional at compile time, since if required we
can always write to the sysfs entry and enable this option.

Change-Id: Ie30447861c942337c7ff25ac269b6025a527e8eb
Signed-off-by: Vignesh Radhakrishnan <vigneshr@codeaurora.org>
Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
2016-03-22 11:03:48 -07:00
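
The compile-time default reduces to something like this in
mm/kmemleak.c (the config symbol name is assumed); the runtime
control already exists through the kmemleak debugfs file:

    #ifdef CONFIG_KMEMLEAK_SKIP_STACK_SCAN
    static int kmemleak_stack_scan;
    #else
    static int kmemleak_stack_scan = 1;   /* upstream default */
    #endif

    /* re-enable at runtime, as the message notes:
     *   echo stack=on > /sys/kernel/debug/kmemleak
     */
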
Vignesh Radhakrishnan
12471ac6f8 kmemleak: Make kmemleak_stack_scan optional using config
Currently we have kmemleak_stack_scan enabled by default.
This can hog the CPU with preemption disabled for a long
time, starving other tasks.

Make this optional at compile time, since if required we
can always write to the sysfs entry and enable this option.

Change-Id: Ie30447861c942337c7ff25ac269b6025a527e8eb
Signed-off-by: Vignesh Radhakrishnan <vigneshr@codeaurora.org>
Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
2016-03-22 11:03:47 -07:00
Se Wang (Patrick) Oh
ab5d4ae2e4 mm: switch KASan hook calling order in page alloc/free path
When CONFIG_PAGE_POISONING is enabled, the pages are poisoned
after being marked free in the KASan shadow memory, and KASan
reports a read-after-free warning. The same thing happens in the
allocation path. So change the order of calling the KASan
alloc/free API so that page poisoning happens while the pages
are in the allocated state in the KASan shadow memory. The
following KASan report is included for reference.
==================================================================
BUG: KASan: use after free in memset+0x24/0x44 at addr ffffffc000000000
Write of size 4096 by task swapper/0
page:ffffffbac5000000 count:0 mapcount:0 mapping:          (null) index:0x0
flags: 0x0()
page dumped because: kasan: bad access detected
CPU: 0 PID: 0 Comm: swapper Not tainted 3.18.0-g5a4a5d5-07242-g6938a8b-dirty #1
Hardware name: Qualcomm Technologies, Inc. MSM 8996 v2 + PMI8994 MTP (DT)
Call trace:
[<ffffffc000089ea4>] dump_backtrace+0x0/0x1c4
[<ffffffc00008a078>] show_stack+0x10/0x1c
[<ffffffc0010ecfd8>] dump_stack+0x74/0xc8
[<ffffffc00020faec>] kasan_report_error+0x2b0/0x408
[<ffffffc00020fd20>] kasan_report+0x34/0x40
[<ffffffc00020f138>] __asan_storeN+0x15c/0x168
[<ffffffc00020f374>] memset+0x20/0x44
[<ffffffc0002086e0>] kernel_map_pages+0x238/0x2a8
[<ffffffc0001ba738>] free_pages_prepare+0x21c/0x25c
[<ffffffc0001bc7e4>] __free_pages_ok+0x20/0xf0
[<ffffffc0001bd3bc>] __free_pages+0x34/0x44
[<ffffffc0001bd5d8>] __free_pages_bootmem+0xf4/0x110
[<ffffffc001ca9050>] free_all_bootmem+0x160/0x1f4
[<ffffffc001c97b30>] mem_init+0x70/0x1ec
[<ffffffc001c909f8>] start_kernel+0x2b8/0x4e4
[<ffffffc001c987dc>] kasan_early_init+0x154/0x160

Change-Id: Idbd3dc629be57ed55a383b069a735ae3ee7b9f05
Signed-off-by: Se Wang (Patrick) Oh <sewango@codeaurora.org>
2016-03-22 11:03:46 -07:00
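
In the free path the reordering amounts to poisoning first and
updating the shadow last; a sketch with the 3.18-era hooks:

    /* before: shadow already says "freed" when poisoning writes */
    kasan_free_pages(page, order);
    kernel_map_pages(page, 1 << order, 0);  /* poison -> KASan splat */

    /* after: poison while still "allocated", then mark freed */
    kernel_map_pages(page, 1 << order, 0);
    kasan_free_pages(page, order);
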
Shiraz Hashim
917d464f88 drivers: dma-removed: introduce no-map-fixup
For some use cases it is not known beforehand how large the
removed (carve-out) region must be. Hence the reserved region
size might need to be adjusted to support varying use cases.
In such cases maintaining different device tree configurations
to support varying carve-out region sizes is difficult.

Introduce an optional device tree property for reserved-memory,
"no-map-fixup", which works in tandem with the
"removed-dma-pool" compatible and tries to shrink and adjust
the removed area on the very first successful allocation. At
the end of this it returns the additional (unused) pages from
the region back to the system.

Note that this adjustment is made on the very first allocation
and never again after that, so thereafter the region is only
big enough to support allocations no larger than that first
request. Clients can allocate and free from this region as from
any other dma region.

As the description suggests, this type of region is specific to
certain special needs and is not to be used for common use
cases.

Change-Id: I31f49d6bd957814bc2ef3a94910425b820ccc739
Signed-off-by: Shiraz Hashim <shashim@codeaurora.org>
2016-03-22 11:03:46 -07:00
Shiraz Hashim
f38c9b6d7b drivers: dma-removed: use memset_io for ioremap region
Using memset generates an unaligned access exception for
device type memory on armv8, hence use memset_io for the
ioremapped region.

Change-Id: I26c82d4bed20f1c163953680aa200c95842d3f21
Signed-off-by: Shiraz Hashim <shashim@codeaurora.org>
2016-03-22 11:03:45 -07:00
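
The pattern, for reference:

    void __iomem *va = ioremap(base, size);

    if (va) {
            /*
             * memset() may emit unaligned or multi-register
             * stores that fault on ARMv8 Device memory;
             * memset_io() sticks to safe accessors.
             */
            memset_io(va, 0, size);
            iounmap(va);
    }
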
Shiraz Hashim
671e5d6181 drivers: dma-removed: fix data type to hold base address
removed_region->base should be of type phys_addr_t as it
directly holds a physical memory address. Fix it.

Change-Id: I80d49d209cf0f319b7a468697387d23e6bcb1b98
Signed-off-by: Shiraz Hashim <shashim@codeaurora.org>
2016-03-22 11:03:44 -07:00
Laura Abbott
868aea7832 common: dma-mapping: Store page array in vm_struct
Commit 54329ac (common: dma-mapping: introduce common remapping functions)
factored out common code for remapping arrays of pages. The code before
the refactor relied on setting area->pages with the array of mapped
pages for easy access later. The refactor dropped this, breaking
parts of the ARM DMA API. Fix this by setting the page array in the same
place.

Change-Id: Ie4d085132f350db29eb2aca67156c25b5e842903
Reported-by: Mitchel Humpherys <mitchelh@codeaurora.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2016-03-22 11:03:43 -07:00
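
A sketch of the fix inside dma_common_pages_remap(), against the
3.18-era vmalloc API:

    area = get_vm_area_caller(size, vm_flags, caller);
    if (!area)
            return NULL;

    area->pages = pages;    /* restored: the ARM DMA API reads this */

    if (map_vm_area(area, prot, pages)) {
            vunmap(area->addr);
            return NULL;
    }

    return area->addr;
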
Liam Mark
ed71b13963 common: dma-mapping: make dma_common_contiguous_remap more robust
Large allocations can result in the dma_common_contiguous_remap
call failing because it can't find enough contiguous memory to
set up the mapping.
Make dma_common_contiguous_remap more robust by using vmalloc
as a fallback.

Change-Id: I12ca710b4c24f4ef24bc33a0d1d4922196fb7492
Signed-off-by: Liam Mark <lmark@codeaurora.org>
2016-03-22 11:03:42 -07:00
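
The likely shape of the fallback: for large buffers the
page-pointer array itself needs a big contiguous allocation, so
fall back to vmalloc when kmalloc cannot satisfy it (a sketch):

    struct page **pages;
    int count = PAGE_ALIGN(size) >> PAGE_SHIFT;

    /* the pointer array itself can be huge for big buffers */
    pages = kmalloc(count * sizeof(struct page *), GFP_KERNEL);
    if (!pages)
            pages = vmalloc(count * sizeof(struct page *));
    if (!pages)
            return NULL;

    /* ... build the array and remap, then release it with
     * kvfree(), which handles both allocation paths ... */
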
Shiraz Hashim
dd0ab21370 drivers: dma-removed: align size first
Align the size first and then derive the required number of
bits and the order from it.

Change-Id: I9b12fb45e5c1ff79e24fe7584cd23923b1a88c87
Signed-off-by: Shiraz Hashim <shashim@codeaurora.org>
2016-03-22 11:03:41 -07:00
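
I.e., let the aligned size feed both derived values (variable
names illustrative):

    size  = ALIGN(size, PAGE_SIZE);
    nbits = size >> PAGE_SHIFT;     /* bits needed in the bitmap */
    order = get_order(size);
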
Laura Abbott
ce7a052ffc drivers: Add dma removed ops
The current DMA coherent pool assumes that there is a kernel
mapping for the entire pool at all times. This may not always
be what we want. Add the dma_removed ops to support this use
case.

Change-Id: Ie4f1e9bdf57b79699fa8fa7e7a6087e6d88ebbfa
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2016-03-22 11:03:40 -07:00
Lee Susman
34144c841b mm: change initial readahead window size calculation
Change the logic which determines the initial readahead window
size such that for small requests (one page) the initial window
size will be 4x the size of the original request, regardless of
the VM_MAX_READAHEAD value. This prevents the rapid ramp-up
that could otherwise be caused by increasing VM_MAX_READAHEAD.

Change-Id: I93d59c515d7e6c6d62348790980ff7bd4f434997
Signed-off-by: Lee Susman <lsusman@codeaurora.org>
2016-03-22 11:03:39 -07:00
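
For context, the stock ramp-up in get_init_ra_size() scales with
the maximum window, which is what made a larger VM_MAX_READAHEAD
inflate small requests:

    static unsigned long get_init_ra_size(unsigned long size,
                                          unsigned long max)
    {
            unsigned long newsize = roundup_pow_of_two(size);

            if (newsize <= max / 32)
                    newsize = newsize * 4;
            else if (newsize <= max / 4)
                    newsize = newsize * 2;
            else
                    newsize = max;

            return newsize;
    }

The change pins the one-page case to four pages up front instead
of letting it take the max-relative branches.
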
Liam Mark
78ec19c5f9 mm: split_free_page ignore memory watermarks for CMA
Memory watermarks were sometimes preventing CMA allocations
in low memory.

Change-Id: I550ec987cbd6bc6dadd72b4a764df20cd0758479
Signed-off-by: Liam Mark <lmark@codeaurora.org>
2016-03-22 11:03:38 -07:00
Laura Abbott
81f6201534 mm: Don't put CMA pages on per cpu lists
CMA allocations rely on being able to migrate pages out
quickly to fulfill the allocations. Most use cases for
movable allocations meet this requirement. File system
allocations may take an unacceptably long time to migrate,
which creates delays for CMA. Prevent CMA pages from ending
up on the per-cpu lists to avoid code paths grabbing CMA
pages on the fast path. CMA pages can still be allocated as
a fallback under tight memory pressure.

CRs-Fixed: 452508
Change-Id: I79a28f697275a2a1870caabae53c8ea345b4b47d
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2016-03-22 11:03:38 -07:00
Laura Abbott
e48a20a27c mm: Add is_cma_pageblock definition
Bring back the is_cma_pageblock definition for determining if a
page is CMA or not.

Change-Id: I39fd546e22e240b752244832c79514f109c8e84b
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2016-03-22 11:03:37 -07:00
Liam Mark
8918861861 mm: vmscan: support setting of kswapd cpu affinity
Allow the kswapd cpu affinity to be configured.
There can be power benefits on certain targets when limiting kswapd
to run only on certain cores.

CRs-fixed: 752344
Change-Id: I8a83337ff313a7e0324361140398226a09f8be0f
Signed-off-by: Liam Mark <lmark@codeaurora.org>
[imaund@codeaurora.org: Resolved trivial context conflicts.]
Signed-off-by: Ian Maund <imaund@codeaurora.org>
2016-03-22 11:03:36 -07:00
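
Mechanically this only needs the scheduler's affinity call; a
sketch with an assumed mask source:

    /* populated from a sysctl or kernel parameter (assumed) */
    static struct cpumask kswapd_cpumask;

    static void kswapd_set_affinity(pg_data_t *pgdat)
    {
            if (!cpumask_empty(&kswapd_cpumask))
                    set_cpus_allowed_ptr(pgdat->kswapd, &kswapd_cpumask);
    }
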
Vinayak Menon
90863369e5 mm: vmscan: lock page on swap error in pageout
A workaround was added earlier to move a page to the active
list if swapping to devices like zram fails. But this can
result in try_to_free_swap being called from shrink_page_list
without a properly locked page. Lock the page when we indicate
to activate a page in pageout().
Add a check to ensure that the error is on swap, and clear the
error flag before moving the page to the active list.

CRs-fixed: 760049
Change-Id: I77a8bbd6ed13efdec943298fe9448412feeac176
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
2016-03-22 11:03:35 -07:00
Matt Wagantall
6999257f3c arm64: mark split_pmd() with __init to avoid section mismatch warnings
split_pmd() calls early_alloc(), which is marked with __init. Mark
split_pmd() similarly. The only current caller of split_pmd() is
remap_pages(), which is already __init, so there was no real danger
here in the first place.

Change-Id: I3bbc4c66f1ced8fe772366b7e5287be5f474f314
Signed-off-by: Matt Wagantall <mattw@codeaurora.org>
2016-03-22 11:03:34 -07:00
Liam Mark
8a94faffd0 mm: vmscan: support complete shrinker reclaim
Ensure that shrinkers are given the option to completely drop
their caches even when their caches are smaller than the batch
size.

This change helps improve memory headroom by ensuring that under
significant memory pressure shrinkers can drop all of their
caches.

This change only attempts to more aggressively call the
shrinkers during background memory reclaim, in order to avoid
hurting the performance of direct memory reclaim.

Change-Id: I8dbc29c054add639e4810e36fd2c8a063e5c52f3
Signed-off-by: Liam Mark <lmark@codeaurora.org>
2016-03-22 11:03:33 -07:00
David Keitel
747b0dceae mm: slub: panic for object and slab errors
If the SLUB_DEBUG_PANIC_ON Kconfig option is
selected, also panic for object and slab
errors to allow capturing relevant debug
data.

Change-Id: Idc582ef48d3c0d866fa89cf8660ff0a5402f7e15
Signed-off-by: David Keitel <dkeitel@codeaurora.org>
2016-03-22 11:03:32 -07:00
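
A sketch of the object-error half, hung off the existing report
path in mm/slub.c:

    static void object_err(struct kmem_cache *s, struct page *page,
                           u8 *object, char *reason)
    {
            slab_bug(s, "%s", reason);
            print_trailer(s, page, object);
    #ifdef CONFIG_SLUB_DEBUG_PANIC_ON
            /* die with the corruption still in memory for ramdumps */
            panic("SLUB object error: %s", reason);
    #endif
    }
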
David Keitel
a31c7a448b defconfig: 8994: enable CONFIG_DEBUG_SLUB_PANIC_ON
Add the DEBUG_SLUB_PANIC_ON option to Kconfig, preventing the
existing defconfig option from being overwritten by make
config.

This will induce a panic if slab debug catches corruptions
within the padding of a given object.

The intention here is to induce collection of data
immediately after the corruption is detected with
the goal to catch the possible source of the corruption.

Change-Id: Ide0102d0761022c643a761989360ae5c853870a8
Signed-off-by: David Keitel <dkeitel@codeaurora.org>
[imaund@codeaurora.org: Resolved trivial merge conflicts.]
Signed-off-by: Ian Maund <imaund@codeaurora.org>
[lmark@codeaurora.org: ensure change does not create
 arch/arm64/configs/msm8994_defconfig file]
Signed-off-by: Liam Mark <lmark@codeaurora.org>
2016-03-22 11:03:31 -07:00
Abhimanyu Garg
4f98dec419 KSM: Start KSM by default
Start running KSM by default at device bootup.

Change-Id: I7926c529ea42675f4279bffaf149a0cf1080d61b
Signed-off-by: Abhimanyu Garg <agarg@codeaurora.org>
2016-03-22 11:03:30 -07:00
Liam Mark
71edb92d82 mm, oom: make dump_tasks public
Allow other functions to dump the list of tasks.
Useful when debugging memory leaks.

Change-Id: I76c33a118a9765b4c2276e8c76de36399c78dbf6
Signed-off-by: Liam Mark <lmark@codeaurora.org>
2016-03-22 11:03:30 -07:00
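
The change amounts to dropping the static qualifier and
exporting a declaration; sketched against the 3.18-era
signature:

    /* include/linux/oom.h */
    extern void dump_tasks(const struct mem_cgroup *memcg,
                           const nodemask_t *nodemask);
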
Laura Abbott
1fe8bfe88a ksm: Add showmem notifier
KSM is yet another framework which may obfuscate some memory
problems. Use the showmem notifier to show how KSM is being
used to give some insight into potential issues or non-issues.

Change-Id: If82405dc33f212d085e6847f7c511fd4d0a32a10
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2016-03-22 11:03:28 -07:00
David Keitel
c5eff6c321 mm: slub: Panic instead of restoring corrupted bytes
Resiliency of slub was added for production systems in an
attempt to restore corruptions and allow production
environments to continue to run.

In debug setups, this may not be desirable. Thus, rather than
attempting to restore corrupted bytes in poisoned zones, panic
to capture more context of what was going on in the system at
the time.

Add the CONFIG_SLUB_DEBUG_PANIC_ON defconfig option to allow
debug builds to turn on this panic option.

Change-Id: I01763e8eea40a4544e9b7e48c4e4d40840b6c82d
Signed-off-by: David Keitel <dkeitel@codeaurora.org>
2016-03-22 11:03:27 -07:00
Chintan Pandya
9ec1d6c8e5 ksm: Provide support to use deferred timers for scanner thread
KSM thread to scan pages is getting schedule on definite timeout.
That wakes up CPU from idle state and hence may affect the power
consumption. Provide an optional support to use deferred timer
which suites low-power use-cases.

To enable deferred timers,
$ echo 1 > /sys/kernel/mm/ksm/deferred_timer

Change-Id: I07fe199f97fe1f72f9a9e1b0b757a3ac533719e8
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
2016-03-22 11:03:27 -07:00
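
A sketch of the sleep step in ksmd with a deferrable timer (the
use_deferred_timer flag being the sysfs-controlled knob), so an
idle CPU is not woken just to rescan:

    static struct timer_list ksm_timer;

    static void ksm_wake_fn(unsigned long data)
    {
            wake_up_interruptible(&ksm_thread_wait);
    }

    /* in ksmd's loop, replacing schedule_timeout_interruptible() */
    if (use_deferred_timer) {
            init_timer_deferrable(&ksm_timer);
            ksm_timer.function = ksm_wake_fn;
            mod_timer(&ksm_timer, jiffies +
                      msecs_to_jiffies(ksm_thread_sleep_millisecs));
            wait_event_interruptible(ksm_thread_wait,
                                     !timer_pending(&ksm_timer) ||
                                     kthread_should_stop());
    } else {
            schedule_timeout_interruptible(
                    msecs_to_jiffies(ksm_thread_sleep_millisecs));
    }
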
Liam Mark
585690954e mm: vmscan: support equal reclaim for anon and file pages
When performing memory reclaim, support treating anonymous and
file-backed pages equally.

Swapping anonymous pages out to memory can be efficient enough
to justify treating anonymous and file-backed pages equally.

CRs-Fixed: 648984
Change-Id: I6315b8557020d1e27a34225bb9cefbef1fb43266
Signed-off-by: Liam Mark <lmark@codeaurora.org>
2016-03-22 11:03:26 -07:00
Olav Haugan
80370b5f59 mm: vmscan: Move pages that fail swapout to LRU active list
Move pages that fail swapout to the LRU active list to reduce
pressure on swap device when swapping out is already failing.
This helps when using a pseudo swap device such as zram which
starts failing when memory is low.

Change-Id: Ib136cd0a744378aa93d837a24b9143ee818c80b3
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2016-03-22 11:03:25 -07:00
Rohit Vaswani
f4fbcaf9f7 arm64: Add support for KERNEL_TEXT_RDONLY
When using FORCE_PAGES to allocate the kernel memory into
pages, provide an option to mark the kernel text section as
read only. Since the kernel text pages are always mapped in
the kernel, anyone can write to a page if they have the
address. Enable this option to mark the kernel text pages as
read only so that a fault is triggered if any code attempts to
write to a page in the kernel text section.

Change-Id: I2a9e105a3340686b4314bb10cc2a6c7bfa19ce8e
Signed-off-by: Rohit Vaswani <rvaswani@codeaurora.org>
2016-03-22 11:03:24 -07:00
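
Once FORCE_PAGES has the text mapped at page granularity, the
protection flip itself is small; a sketch (helper placement
assumed):

    #include <asm/cacheflush.h>     /* set_memory_ro() on arm64 */
    #include <asm/sections.h>

    static void __init mark_kernel_text_ro(void)
    {
            unsigned long start = (unsigned long)_stext;
            unsigned long end = (unsigned long)_etext;

            /* requires the text to be mapped with 4K pages */
            set_memory_ro(start, (end - start) >> PAGE_SHIFT);
    }
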
Laura Abbott
a7470fb452 mm: Remove __init annotations from free_bootmem_late
free_bootmem_late is currently set up to only be used in init
functions. Some clients need to use this function past
initcalls. The function itself has no restrictions on being
used later apart from the __init annotations, so remove the
annotations.

Change-Id: I7c7e15cf2780a8843ebb4610da5b633c9abb0b3d
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[abhimany@codeaurora.org: resolve minor conflict
and remove __init from nobootmem.c]
Signed-off-by: Abhimanyu Kapur <abhimany@codeaurora.org>
2016-03-22 11:03:23 -07:00
Abhimanyu Kapur
753315050c ARM64: mm: init: bring back poison_init_mem
Strict RWX requires poison_init_mem, bring it back
from the dead.

Change-Id: I09b88a12a47c8694e2ba178caad4415981f4f7e3
Signed-off-by: Abhimanyu Kapur <abhimany@codeaurora.org>
2016-03-22 11:03:22 -07:00
Rohit Vaswani
638f367e04 arm64: mm: Do not create 1GB mappings if FORCE_PAGES is enabled
With CONFIG_FORCE_PAGES enabled, we break down the section mappings
into 4K page mappings. For 1GB mappings, remapping the pages into 4K
chunks becomes unnecessarily complicated. Skip creating the 1GB mapping
if we know it's going to be separated into 4K mappings.

Change-Id: I991768210ed6e1c1e19faf0d5d851d550e51a8c6
Signed-off-by: Rohit Vaswani <rvaswani@codeaurora.org>
(cherry picked from commit 2528a04cba82ce3d655dabf78bc49c5b64c89647)
2016-03-22 11:03:21 -07:00
Laura Abbott
8f13b60413 mm: Mark free pages as read only
Drivers have a tendency to scribble on everything, including
free pages. Make life easier by marking free pages as read
only while on the buddy list and re-marking them as read/write
when allocating.

Change-Id: I978ed2921394919917307b9c99217fdc22f82c59
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
(cherry picked from commit 752f5aecb0511c4d661dce2538c723675c1e6449)
2016-03-22 11:03:20 -07:00
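
The mechanism mirrors the DEBUG_PAGEALLOC hook: flip permissions
as pages cross the buddy-list boundary. A sketch:

    void kernel_map_pages(struct page *page, int numpages, int enable)
    {
            unsigned long addr = (unsigned long)page_address(page);

            if (enable)
                    set_memory_rw(addr, numpages); /* leaving buddy */
            else
                    set_memory_ro(addr, numpages); /* entering buddy */
    }
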
Neeti Desai
ab7a339160 arm64: Add support for FREE_PAGES_RDONLY
Add a config option to trigger a page fault if any code
attempts to write to a page on the buddy list.
Change-Id: Ic5ab791c4117606519c7b9eb4c2876f246d23320
Signed-off-by: Neeti Desai <neetid@codeaurora.org>
2016-03-22 11:03:20 -07:00
Neeti Desai
6992ec5181 arm64: Allow remapping lowmem as 4K pages
For certain debug features the lowmem needs to be mapped as
pages instead of sections. Add a config option to allow
remapping of lowmem as 4K pages.

Change-Id: I50179311facd91b97ecde720da38ec7e47512e95
Signed-off-by: Neeti Desai <neetid@codeaurora.org>
2016-03-22 11:03:19 -07:00
Laura Abbott
52c49cdc5e common: DMA-mapping: Add strongly ordered memory attribute
Strongly ordered memory is occasionally needed for some DMA
allocations for specialized use cases. Add the corresponding
DMA attribute.

Change-Id: Idd9e756c242ef57d6fa6700e51cc38d0863b760d
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2016-03-22 11:03:18 -07:00
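
Usage mirrors any other DMA attribute under the pre-4.8 API,
with the attribute name taken from this commit's description:

    /* dev, size and dma_handle as usual for the caller */
    DEFINE_DMA_ATTRS(attrs);
    void *buf;

    dma_set_attr(DMA_ATTR_STRONGLY_ORDERED, &attrs);
    buf = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL, &attrs);
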
Laura Abbott
8d90eb7058 arm64: Support early fixup for CMA
Although it isn't architecturally required, CMA regions may need
to have attributes changed at runtime. Remap the CMA regions as
pages to allow this to happen.

Change-Id: I7dd7fa150ce69fdf05f8bf6f76a5ae26dd67ff1b
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[lmark@codeaurora.org: resolve merge conflicts]
Signed-off-by: Liam Mark <lmark@codeaurora.org>
2016-03-22 11:03:17 -07:00