android_kernel_oneplus_msm8998/mm
Joonsoo Kim b1cb0982bd slab: change the management method of free objects of the slab
The current method of managing a slab's free objects is awkward: to get
a free object, we touch a random position in the kmem_bufctl_t array.
See the following example.

struct slab's free = 6
kmem_bufctl_t array: 1 END 5 7 0 4 3 2

To get free objects, we access this array in the following pattern.
6 -> 3 -> 7 -> 2 -> 5 -> 4 -> 0 -> 1 -> END

If the slab has many objects, this array becomes large and no longer fits
in a single cache line, which hurts performance.
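
For illustration only, here is a tiny userspace sketch of the old scheme
(hypothetical names such as bufctl[] and old_alloc(), not the kernel code):
the free list is a chain of indices threaded through the array, so each
allocation jumps to whatever slot the previous entry names.

#include <stdio.h>

#define NR_OBJS	8
#define END	((unsigned int)~0U)

static unsigned int slab_free = 6;			/* struct slab's free */
static unsigned int bufctl[NR_OBJS] = { 1, END, 5, 7, 0, 4, 3, 2 };

static int old_alloc(void)
{
	unsigned int objnr = slab_free;

	if (objnr == END)
		return -1;				/* no free objects left */
	slab_free = bufctl[objnr];			/* random-position read */
	return (int)objnr;
}

int main(void)
{
	int objnr;

	/* Prints 6 3 7 2 5 4 0 1, matching the access pattern above. */
	while ((objnr = old_alloc()) >= 0)
		printf("%d ", objnr);
	printf("\n");
	return 0;
}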

We can do the same thing in a simpler way, by using the array as a stack.
All we have to do is maintain a stack top that points to the next free
object; the free field of struct slab is used for this purpose. Then, when
we need an object, we take it from the stack top and adjust the top
pointer. That's all. This method is already used in the array_cache
management. The following is the access pattern when we use this method.

struct slab's free = 0
kmem_bufctl_t array: 6 3 7 2 5 4 0 1

To get free objects, we access this array in the following pattern.
0 -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7

This should improve the cache-line footprint when a slab has many objects,
and, in addition, it makes the code much simpler.
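
For illustration only, a matching userspace sketch of the stack-based
scheme (again with made-up names like freelist[] and stack_alloc(), not the
kernel implementation): the array simply lists free object indices and the
free field is the stack top, so allocations walk the array sequentially.

#include <stdio.h>

#define NR_OBJS	8

static unsigned int slab_free = 0;			/* struct slab's free (stack top) */
static unsigned int freelist[NR_OBJS] = { 6, 3, 7, 2, 5, 4, 0, 1 };

static int stack_alloc(void)
{
	if (slab_free == NR_OBJS)
		return -1;				/* no free objects left */
	return (int)freelist[slab_free++];		/* sequential read */
}

static void stack_free(unsigned int objnr)
{
	freelist[--slab_free] = objnr;			/* push the index back on the stack */
}

int main(void)
{
	int objnr;

	/* Prints 6 3 7 2 5 4 0 1: the same objects, but array slots 0..7 in order. */
	while ((objnr = stack_alloc()) >= 0)
		printf("%d ", objnr);
	printf("\n");

	stack_free(3);					/* object 3 becomes the new stack top */
	printf("reallocated: %d\n", stack_alloc());	/* prints 3 */
	return 0;
}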

Acked-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
2013-10-24 20:17:33 +03:00
backing-dev.c sysfs.h: add __ATTR_RW() macro 2013-07-16 10:57:36 -07:00
balloon_compaction.c
bootmem.c mm: kill free_all_bootmem_node() 2013-07-03 16:07:39 -07:00
bounce.c Merge branch 'for-3.10/core' of git://git.kernel.dk/linux-block 2013-05-08 10:13:35 -07:00
cleancache.c mm: cleancache: clean up cleancache_enabled 2013-04-30 17:04:01 -07:00
compaction.c
debug-pagealloc.c
dmapool.c
fadvise.c
failslab.c
filemap.c mm: remove unused VM_<READfoo> macros and expand other in-place 2013-07-09 10:33:23 -07:00
filemap_xip.c
fremap.c mm: save soft-dirty bits on file pages 2013-08-13 17:57:48 -07:00
frontswap.c frontswap: fix incorrect zeroing and allocation size for frontswap_map 2013-06-12 16:29:46 -07:00
highmem.c
huge_memory.c thp, mm: avoid PageUnevictable on active/inactive lru lists 2013-07-31 14:41:03 -07:00
hugetlb.c Fix TLB gather virtual address range invalidation corner cases 2013-08-16 08:52:46 -07:00
hugetlb_cgroup.c
hwpoison-inject.c
init-mm.c
internal.h mm: remove unused __put_page() 2013-07-09 10:33:22 -07:00
interval_tree.c
Kconfig zswap: add to mm/ 2013-07-10 18:11:34 -07:00
Kconfig.debug
kmemcheck.c
kmemleak-test.c
kmemleak.c
ksm.c
maccess.c
madvise.c
Makefile zswap: add to mm/ 2013-07-10 18:11:34 -07:00
memblock.c mm/memblock.c: fix wrong comment in __next_free_mem_range() 2013-07-09 10:33:23 -07:00
memcontrol.c memcg: get rid of swapaccount leftovers 2013-08-23 09:51:22 -07:00
memory-failure.c mm/memory-failure.c: fix memory leak in successful soft offlining 2013-07-03 16:07:31 -07:00
memory.c Fix TLB gather virtual address range invalidation corner cases 2013-08-16 08:52:46 -07:00
memory_hotplug.c mm/memory_hotplug.c: fix return value of online_pages() 2013-07-09 10:33:25 -07:00
mempolicy.c mm: mempolicy: fix mbind_range() && vma_adjust() interaction 2013-07-31 14:41:02 -07:00
mempool.c
migrate.c mm: migration: add migrate_entry_wait_huge() 2013-06-12 16:29:46 -07:00
mincore.c
mlock.c
mm_init.c mm: tune vm_committed_as percpu_counter batching size 2013-07-03 16:07:32 -07:00
mmap.c Fix TLB gather virtual address range invalidation corner cases 2013-08-16 08:52:46 -07:00
mmu_context.c mm: remove old aio use_mm() comment 2013-05-07 18:38:27 -07:00
mmu_notifier.c treewide: relase -> release 2013-06-28 14:34:33 +02:00
mmzone.c
mprotect.c
mremap.c mm: move_ptes -- Set soft dirty bit depending on pte type 2013-08-27 09:36:17 -07:00
msync.c
nobootmem.c mm: concentrate modification of totalram_pages into the mm core 2013-07-03 16:07:33 -07:00
nommu.c mm: remove free_area_cache 2013-07-10 18:11:34 -07:00
oom_kill.c
page-writeback.c kernel: delete __cpuinit usage from all core kernel files 2013-07-14 19:36:59 -04:00
page_alloc.c mm: honor min_free_kbytes set by user 2013-07-09 10:33:25 -07:00
page_cgroup.c
page_io.c mm: remove compressed copy from zram in-memory 2013-07-03 16:07:26 -07:00
page_isolation.c
pagewalk.c mm/pagewalk.c: walk_page_range should avoid VM_PFNMAP areas 2013-05-24 16:22:53 -07:00
percpu-km.c
percpu-vm.c
percpu.c
pgtable-generic.c mm/THP: add pmd args to pgtable deposit and withdraw APIs 2013-06-20 16:55:07 +10:00
process_vm_access.c
quicklist.c
readahead.c mm: change invalidatepage prototype to accept length 2013-05-21 23:17:23 -04:00
rmap.c mm: save soft-dirty bits on file pages 2013-08-13 17:57:48 -07:00
shmem.c cope with potentially long ->d_dname() output for shmem/hugetlb 2013-08-24 12:10:17 -04:00
slab.c slab: change the management method of free objects of the slab 2013-10-24 20:17:33 +03:00
slab.h memcg: check that kmem_cache has memcg_params before accessing it 2013-08-28 19:26:38 -07:00
slab_common.c Merge branch 'slab/for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/linux 2013-07-14 15:14:29 -07:00
slob.c Merge branch 'slab/for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/linux 2013-07-14 15:14:29 -07:00
slub.c Revert "slub: do not put a slab to cpu partial list when cpu_partial is 0" 2013-08-08 09:06:37 -07:00
sparse-vmemmap.c
sparse.c mm/sparse.c: put clear_hwpoisoned_pages within CONFIG_MEMORY_HOTREMOVE 2013-07-09 10:33:22 -07:00
swap.c thp, mm: avoid PageUnevictable on active/inactive lru lists 2013-07-31 14:41:03 -07:00
swap_state.c swap: avoid read_swap_cache_async() race to deadlock while waiting on discard I/O completion 2013-06-12 16:29:45 -07:00
swapfile.c mm: save soft-dirty bits on swapped pages 2013-08-13 17:57:47 -07:00
truncate.c mm: teach truncate_inode_pages_range() to handle non page aligned ranges 2013-05-27 23:32:35 -04:00
util.c mm: remove free_area_cache 2013-07-10 18:11:34 -07:00
vmalloc.c mm/vmalloc.c: fix an overflow bug in alloc_vmap_area() 2013-07-09 10:33:23 -07:00
vmpressure.c vmpressure: make sure there are no events queued after memcg is offlined 2013-07-31 14:41:04 -07:00
vmscan.c mm: vmscan: do not scale writeback pages when deciding whether to set ZONE_WRITEBACK 2013-07-09 10:33:23 -07:00
vmstat.c kernel: delete __cpuinit usage from all core kernel files 2013-07-14 19:36:59 -04:00
zbud.c mm: zbud: fix condition check on allocation size 2013-07-31 14:41:03 -07:00
zswap.c zswap: add to mm/ 2013-07-10 18:11:34 -07:00