android_kernel_oneplus_msm8998/mm
Dave Chinner 29f9d8d958 mm/page-writeback.c: fix range_cyclic writeback vs writepages deadlock
[ Upstream commit 64081362e8ff4587b4554087f3cfc73d3e0a4cd7 ]

We've recently seen a workload on XFS filesystems with a repeatable
deadlock between background writeback and a multi-process application
doing concurrent writes and fsyncs to a small range of a file.

range_cyclic
writeback		Process 1		Process 2

xfs_vm_writepages
  write_cache_pages
    writeback_index = 2
    cycled = 0
    ....
    find page 2 dirty
    lock Page 2
    ->writepage
      page 2 writeback
      page 2 clean
      page 2 added to bio
    no more pages
			write()
			locks page 1
			dirties page 1
			locks page 2
			dirties page 2
			fsync()
			....
			xfs_vm_writepages
			write_cache_pages
			  start index 0
			  find page 1 towrite
			  lock Page 1
			  ->writepage
			    page 1 writeback
			    page 1 clean
			    page 1 added to bio
			  find page 2 towrite
			  lock Page 2
			  page 2 is writeback
			  <blocks>
						write()
						locks page 1
						dirties page 1
						fsync()
						....
						xfs_vm_writepages
						write_cache_pages
						  start index 0

    !done && !cycled
      sets index to 0, restarts lookup
    find page 1 dirty
						  find page 1 towrite
						  lock Page 1
						  page 1 is writeback
						  <blocks>

    lock Page 1
    <blocks>

DEADLOCK because:

	- process 1 needs page 2 writeback to complete to make
	  enough progress to issue IO pending for page 1
	- writeback needs page 1 writeback to complete so process 2
	  can progress and unlock the page it is blocked on, then it
	  can issue the IO pending for page 2
	- process 2 can't make progress until process 1 issues IO
	  for page 1

The underlying cause of the problem here is that range_cyclic writeback
ends up processing pages in descending index order: when it wraps back to
index 0 it locks and waits on lower-index pages while higher-index pages
are still held in a structure controlled from above write_cache_pages().
The write_cache_pages() (wcp) caller needs to be able to submit those
pages for IO before wcp restarts writeback at mapping index 0, otherwise
wcp inverts the page lock/writeback wait order.
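
For reference, the pre-fix range_cyclic handling in write_cache_pages()
looks roughly like the simplified sketch below (page lookup, batching and
error handling elided; this is an illustrative sketch, not the verbatim
kernel source):

/* Simplified sketch; the real code does batched pagevec lookups, tagging,
 * congestion handling, etc. */
int write_cache_pages(struct address_space *mapping,
		      struct writeback_control *wbc,
		      writepage_t writepage, void *data)
{
	pgoff_t writeback_index = 0, index, end, done_index = 0;
	int cycled, done = 0, ret = 0;

	if (wbc->range_cyclic) {
		writeback_index = mapping->writeback_index; /* prev offset */
		index = writeback_index;
		cycled = (index == 0);
		end = -1;
	} else {
		index = wbc->range_start >> PAGE_SHIFT;
		end = wbc->range_end >> PAGE_SHIFT;
		cycled = 1;	/* ignore range_cyclic tests */
	}
retry:
	while (!done && index <= end) {
		/*
		 * Find the next batch of dirty pages, lock_page() each one
		 * and call writepage(page, wbc, data), advancing done_index
		 * as pages are processed.  The caller's private context in
		 * @data may keep these pages under writeback until after
		 * write_cache_pages() has returned.
		 */
		done_index = index;
		break;		/* lookup loop elided in this sketch */
	}
	if (!cycled && !done) {
		/*
		 * range_cyclic hit the end of the file with more work to do:
		 * wrap straight back to index 0 within this call.  This is
		 * the restart that inverts the page lock/writeback wait
		 * order while the caller still holds higher-index pages
		 * under writeback.
		 */
		cycled = 1;
		index = 0;
		end = writeback_index - 1;
		goto retry;
	}
	if (wbc->range_cyclic)
		mapping->writeback_index = done_index;
	return ret;
}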

generic_writepages() is not susceptible to this bug as it has no private
context held across write_cache_pages() - filesystems using this
infrastructure always submit pages in ->writepage immediately and so there
is no problem with range_cyclic going back to mapping index 0.

However:
	mpage_writepages() has a private bio context
	exofs_writepages() has page_collect
	fuse_writepages() has fuse_fill_wb_data
	nfs_writepages() has nfs_pageio_descriptor
	xfs_vm_writepages() has xfs_writepage_ctx

All of these ->writepages implementations can hold pages under writeback
in their private structures until write_cache_pages() returns, and hence
they are all susceptible to this deadlock.
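
As an illustration of the pattern, xfs_vm_writepages() has roughly the
shape below: the xfs_writepage_ctx lives on the caller's stack, and any
ioend accumulated while writing pages is only submitted after
write_cache_pages() returns.  This is a simplified sketch, not the
verbatim XFS source; the helper names (xfs_do_writepage, xfs_submit_ioend,
the wpc.ioend field) follow the upstream XFS code of that era and are
shown here for illustration only:

/* Simplified sketch of the upstream XFS ->writepages pattern. */
static int
xfs_vm_writepages(
	struct address_space	*mapping,
	struct writeback_control *wbc)
{
	struct xfs_writepage_ctx wpc = { };
	int			ret;

	/*
	 * Pages added to the ioend being built in wpc stay under writeback
	 * until that ioend is submitted below - i.e. after
	 * write_cache_pages() has returned.
	 */
	ret = write_cache_pages(mapping, wbc, xfs_do_writepage, &wpc);
	if (wpc.ioend)
		ret = xfs_submit_ioend(wbc, wpc.ioend, ret);
	return ret;
}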

Also worth noting is that ext4 has its own bastardised version of
write_cache_pages() and so it /may/ have an equivalent deadlock.  I looked
at the code long enough to understand that it has a similar retry loop for
range_cyclic writeback reaching the end of the file and then promptly ran
away before my eyes bled too much.  I'll leave it to the ext4 developers
to determine whether their code actually has this deadlock and how to fix
it if it does.

There are a few ways I can see to avoid this deadlock.  There are probably
more, but these are the first I've thought of:

1. get rid of range_cyclic altogether

2. range_cyclic always stops at EOF, and we start again from
writeback index 0 on the next call into write_cache_pages()

2a. wcp also returns EAGAIN to ->writepages implementations to
indicate range_cyclic has hit EOF. ->writepages implementations can
then flush the current context and call wcp again to continue, i.e.
lift the retry into the ->writepages implementation

3. range_cyclic uses trylock_page() rather than lock_page(), and it
skips pages it can't lock without blocking. It will already do this
for pages under writeback, so this seems like a no-brainer

3a. all non-WB_SYNC_ALL writeback uses trylock_page() to avoid
blocking as per pages under writeback.

I don't think #1 is an option - range_cyclic prevents frequently
dirtied lower file offsets from starving background writeback of
rarely touched higher file offsets.

#2 is simple, and I don't think it will have any impact on
performance as going back to the start of the file implies an
immediate seek. We'll have exactly the same number of seeks if we
switch writeback to another inode, and then come back to this one
later and restart from index 0.

#2a is pretty much "status quo without the deadlock". Moving the
retry loop up into the wcp caller means we can issue IO on the
pending pages before calling wcp again, and so avoid locking or
waiting on pages in the wrong order. I'm not convinced we need to do
this given that we get the same thing from #2 on the next writeback
call from the writeback infrastructure.

#3 is really just a band-aid - it doesn't fix the access/wait
inversion problem, just prevents it from becoming a deadlock
situation. I'd prefer we fix the inversion, not sweep it under the
carpet like this.

#3a is really an optimisation that just so happens to include the
band-aid fix of #3.

So it seems that the simplest way to fix this issue is to implement
solution #2.
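
Roughly, solution #2 amounts to the fragment sketched below, picking up
from the simplified write_cache_pages() sketch earlier (a sketch of the
approach, not the literal patch): the in-function "!cycled && !done" retry
is removed, and if a range_cyclic pass reached the end of the file with
work still pending, the saved writeback index is reset to 0 so the next
->writepages call restarts the scan from the beginning of the file.

	/*
	 * Sketch of the #2 approach: instead of "goto retry" back to
	 * index 0 within this call, just remember that the next call
	 * should start from the beginning of the file.  The caller gets
	 * to submit the pages it is holding in between.
	 */
	if (wbc->range_cyclic && !done)
		done_index = 0;
	if (wbc->range_cyclic)
		mapping->writeback_index = done_index;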

Link: http://lkml.kernel.org/r/20181005054526.21507-1-david@fromorbit.com
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Jan Kara <jack@suse.de>
Cc: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-11-28 18:25:52 +01:00
kasan kasan: fix shadow_size calculation error in kasan_module_alloc 2018-08-24 13:26:58 +02:00
backing-dev.c writeback: synchronize sync(2) against cgroup writeback membership switches 2019-06-11 12:23:41 +02:00
balloon_compaction.c virtio_balloon: fix race between migration and ballooning 2016-03-03 15:07:18 -08:00
bootmem.c bootmem: avoid freeing to bootmem after bootmem is done 2015-09-08 15:35:28 -07:00
cleancache.c cleancache: remove limit on the number of cleancache enabled filesystems 2015-04-14 16:49:03 -07:00
cma.c mm/cma.c: fail if fixed declaration can't be honored 2019-08-06 18:28:27 +02:00
cma.h mm: cma: mark cma_bitmap_maxno() inline in header 2015-08-14 15:56:32 -07:00
cma_debug.c mm/cma_debug.c: fix the break condition in cma_maxchunk_get() 2019-06-22 08:18:18 +02:00
compaction.c mm/compaction: pass only pageblock aligned range to pageblock_pfn_to_page 2018-01-17 09:35:26 +01:00
debug-pagealloc.c mm, hwpoison: fixup "mm: check the return value of lookup_page_ext for all call sites" 2017-11-24 11:26:29 +01:00
debug.c mm: get rid of vmacache_flush_all() entirely 2018-09-19 22:49:00 +02:00
dmapool.c mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd 2015-11-06 17:50:42 -08:00
early_ioremap.c mm/early_ioremap: Fix boot hang with earlyprintk=efi,keep 2018-02-25 11:03:41 +01:00
fadvise.c mm/fadvise.c: fix signed overflow UBSAN complaint 2018-09-15 09:40:38 +02:00
failslab.c mm, page_alloc: rename __GFP_WAIT to __GFP_RECLAIM 2015-11-06 17:50:42 -08:00
filemap.c mm/filemap.c: don't initiate writeback if mapping has no dirty pages 2019-11-12 19:13:30 +01:00
frame_vector.c mm: replace get_vaddr_frames() write/force parameters with gup_flags 2018-12-17 21:55:16 +01:00
frontswap.c frontswap: allow multiple backends 2015-06-24 17:49:45 -07:00
gup.c proc: do not access cmdline nor environ from file-backed areas 2018-12-17 21:55:17 +01:00
highmem.c mm/highmem: make kmap cache coloring aware 2014-08-06 18:01:22 -07:00
huge_memory.c mremap: properly flush TLB before releasing the page 2018-11-10 07:41:42 -08:00
hugetlb.c hugetlbfs: on restore reserve error path retain subpool reservation 2019-06-22 08:18:17 +02:00
hugetlb_cgroup.c mm: hugetlb: switch to css_tryget() in hugetlb_cgroup_charge_cgroup() 2019-11-25 15:53:43 +01:00
hwpoison-inject.c hwpoison: use page_cgroup_ino for filtering by memcg 2015-09-10 13:29:01 -07:00
init-mm.c mm: Add a user_ns owner to mm_struct and fix ptrace permission checks 2017-01-06 11:16:11 +01:00
internal.h mm, mprotect: flush TLB if potentially racing with a parallel reclaim leaving stale TLB entries 2017-08-11 09:08:50 -07:00
interval_tree.c mm: replace vma->sharead.linear with vma->shared 2015-02-10 14:30:31 -08:00
Kconfig mm: don't allow deferred pages with NEED_PER_CPU_KM 2018-05-26 08:48:55 +02:00
Kconfig.debug mm/debug_pagealloc: remove obsolete Kconfig options 2015-01-08 15:10:52 -08:00
kmemcheck.c mm/slab_common: move kmem_cache definition to internal header 2014-10-09 22:25:50 -04:00
kmemleak-test.c
kmemleak.c mm/kmemleak.c: fix check for softirq context 2019-08-04 09:34:59 +02:00
ksm.c mm/ksm.c: don't WARN if page is still mapped in remove_stable_node() 2019-11-28 18:25:27 +01:00
list_lru.c mm/list_lru.c: fix memory leak in __memcg_init_list_lru_node 2019-06-22 08:18:22 +02:00
maccess.c mm/maccess.c: actually return -EFAULT from strncpy_from_unsafe 2015-11-05 19:34:48 -08:00
madvise.c mm: madvise(MADV_DODUMP): allow hugetlbfs pages 2018-10-10 08:52:11 +02:00
Makefile media updates for v4.3-rc1 2015-09-11 16:42:39 -07:00
memblock.c mm: consider memblock reservations for deferred memory initialization sizing 2017-06-14 13:16:26 +02:00
memcontrol.c mm: memcg: switch to css_tryget() in get_mem_cgroup_from_mm() 2019-11-25 15:53:43 +01:00
memory-failure.c hwpoison, memcg: forcibly uncharge LRU pages 2018-01-31 12:06:09 +01:00
memory.c mm: replace access_remote_vm() write parameter with gup_flags 2018-12-17 21:55:17 +01:00
memory_hotplug.c mm, memory_hotplug: test_pages_in_a_zone do not pass the end of zone 2019-03-23 08:44:26 +01:00
mempolicy.c mm: mempolicy: make mbind() return -EIO when MPOL_MF_STRICT is specified 2019-04-27 09:33:47 +02:00
mempool.c mm/mempool: avoid KASAN marking mempool poison checks as use-after-free 2017-08-12 19:29:09 -07:00
memtest.c memtest: remove unused header files 2015-09-08 15:35:28 -07:00
migrate.c hugetlbfs: fix races and page leaks during migration 2019-03-23 08:44:23 +01:00
mincore.c mm/mincore.c: make mincore() more conservative 2019-06-11 12:23:36 +02:00
mlock.c mm: mlock: avoid increase mm->locked_vm on mlock() when already mlock2(,MLOCK_ONFAULT) 2018-12-13 09:21:33 +01:00
mm_init.c mm: meminit: remove mminit_verify_page_links 2015-06-30 19:44:56 -07:00
mmap.c coredump: fix race condition between mmget_not_zero()/get_task_mm() and core dumping 2019-06-22 08:18:27 +02:00
mmu_context.c mm/mmu_context, sched/core: Fix mmu_context.h assumption 2017-12-25 14:22:09 +01:00
mmu_notifier.c mm/mmu_notifier: use hlist_add_head_rcu() 2019-08-04 09:34:59 +02:00
mmzone.c mm: microoptimize zonelist operations 2015-02-11 17:06:02 -08:00
mprotect.c x86/speculation/l1tf: Disallow non privileged high MMIO PROT_NONE mappings 2018-08-15 17:42:10 +02:00
mremap.c mremap: properly flush TLB before releasing the page 2018-11-10 07:41:42 -08:00
msync.c mm/msync: use offset_in_page macro 2015-11-05 19:34:48 -08:00
nobootmem.c mm: page_alloc: pass PFN to __free_pages_bootmem 2015-06-30 19:44:55 -07:00
nommu.c mm: replace access_remote_vm() write parameter with gup_flags 2018-12-17 21:55:17 +01:00
oom_kill.c mm, oom: fix use-after-free in oom_kill_process 2019-02-06 19:43:07 +01:00
page-writeback.c mm/page-writeback.c: fix range_cyclic writeback vs writepages deadlock 2019-11-28 18:25:52 +01:00
page_alloc.c mm, page_alloc: do not break __GFP_THISNODE by zonelist reset 2018-07-11 16:03:51 +02:00
page_counter.c mm: page_counter: let page_counter_try_charge() return bool 2015-11-05 19:34:48 -08:00
page_ext.c mm/page_ext.c: fix an imbalance with kmemleak 2019-04-27 09:33:48 +02:00
page_idle.c mm/page_idle.c: fix oops because end_pfn is larger than max_pfn 2019-07-10 09:56:30 +02:00
page_io.c fs: use helper bio_add_page() instead of open coding on bi_io_vec 2015-08-13 12:32:00 -06:00
page_isolation.c mm: fix invalid node in alloc_migrate_target() 2016-04-20 15:41:53 +09:00
page_owner.c mm: check the return value of lookup_page_ext for all call sites 2017-11-24 08:32:25 +01:00
pagewalk.c mm/pagewalk.c: report holes in hugetlb ranges 2017-11-24 08:32:25 +01:00
percpu-km.c percpu: implmeent pcpu_nr_empty_pop_pages and chunk->nr_populated 2014-09-02 14:46:05 -04:00
percpu-vm.c percpu: move region iterations out of pcpu_[de]populate_chunk() 2014-09-02 14:46:02 -04:00
percpu.c percpu: include linux/sched.h for cond_resched() 2018-05-16 10:06:46 +02:00
pgtable-generic.c mm,thp: khugepaged: call pte flush at the time of collapse 2016-02-25 12:01:23 -08:00
process_vm_access.c mm: replace get_user_pages_unlocked() write/force parameters with gup_flags 2018-12-17 21:55:16 +01:00
quicklist.c
readahead.c mm, fs: introduce mapping_gfp_constraint() 2015-11-06 17:50:42 -08:00
rmap.c mm/rmap: replace BUG_ON(anon_vma->degree) with VM_WARN_ON 2019-04-03 06:23:18 +02:00
shmem.c memfd: Use radix_tree_deref_slot_protected to avoid the warning. 2019-11-25 15:54:19 +01:00
slab.c mm/slab.c: kmemleak no scan alien caches 2019-04-27 09:33:48 +02:00
slab.h slab/slub: adjust kmem_cache_alloc_bulk API 2015-11-22 11:58:44 -08:00
slab_common.c slub: do not merge cache if slub_debug contains a never-merge flag 2017-10-21 17:09:05 +02:00
slob.c slab/slub: adjust kmem_cache_alloc_bulk API 2015-11-22 11:58:44 -08:00
slub.c mm/slub: fix a deadlock in show_slab_objects() 2019-10-29 09:13:28 +01:00
sparse-vmemmap.c
sparse.c
swap.c mm: make compound_head() robust 2015-11-06 17:50:42 -08:00
swap_cgroup.c mm, swap_cgroup: reschedule when neeed in swap_cgroup_swapoff() 2017-07-05 14:37:15 +02:00
swap_state.c mm: swap: zswap: maybe_preload & refactoring 2015-09-08 15:35:28 -07:00
swapfile.c x86/speculation/l1tf: Limit swap file size to MAX_PA/2 2018-08-15 17:42:10 +02:00
truncate.c mm: cleancache: fix corruption on missed inode invalidation 2018-12-13 09:21:33 +01:00
userfaultfd.c userfaultfd: avoid mmap_sem read recursion in mcopy_atomic 2015-09-04 16:54:41 -07:00
util.c mm: replace get_user_pages_unlocked() write/force parameters with gup_flags 2018-12-17 21:55:16 +01:00
vmacache.c mm: get rid of vmacache_flush_all() entirely 2018-09-19 22:49:00 +02:00
vmalloc.c mm/vmalloc: Sync unmappings in __purge_vmap_area_lazy() 2019-08-25 10:52:44 +02:00
vmpressure.c mm: vmpressure: fix sending wrong events on underflow 2017-03-12 06:37:25 +01:00
vmscan.c mm: fix the NULL mapping case in __isolate_lru_page() 2018-06-06 16:46:23 +02:00
vmstat.c mm, vmstat: hide /proc/pagetypeinfo from normal users 2019-11-12 19:13:18 +01:00
workingset.c mm: workingset: fix crash in shadow node shrinker caused by replace_page_cache_page() 2016-10-28 03:01:34 -04:00
zbud.c mm: zsmalloc: constify struct zs_pool name 2015-11-06 17:50:42 -08:00
zpool.c mm: zsmalloc: constify struct zs_pool name 2015-11-06 17:50:42 -08:00
zsmalloc.c zsmalloc: fix zs_can_compact() integer overflow 2016-05-18 17:06:44 -07:00
zswap.c zswap: re-check zswap_is_full() after do zswap_shrink() 2018-09-05 09:18:36 +02:00