binder: fix race between munmap() and direct reclaim

An munmap() on a binder device causes binder_vma_close() to be called,
which clears the alloc->vma pointer.
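
For context, the teardown path looks roughly like this; a simplified sketch, not the literal driver source, with helper and field names as they appear in the binder allocator of this kernel generation:

/*
 * Sketch: how munmap() ends up clearing the cached vma pointer.
 * binder_vma_close() is the .close callback of the binder mapping's
 * vm_operations_struct.
 */
static void binder_vma_close(struct vm_area_struct *vma)
{
	struct binder_proc *proc = vma->vm_private_data;

	binder_alloc_vma_close(&proc->alloc);
}

void binder_alloc_vma_close(struct binder_alloc *alloc)
{
	/* Any alloc->vma value sampled before this point is now stale. */
	WRITE_ONCE(alloc->vma, NULL);
}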

If direct reclaim causes binder_alloc_free_page() to be called, there
is a race where alloc->vma is read into a local vma pointer and then
used later after the mm->mmap_sem is acquired. This can result in
calling zap_page_range() with an invalid vma, which manifests as a
use-after-free in zap_page_range().
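
Spelled out as a timeline (illustrative only, not literal code), the race between the shrinker callback and munmap() is:

/*
 *   CPU 0: direct reclaim / shrinker        CPU 1: munmap()
 *
 *   binder_alloc_free_page()
 *     vma = alloc->vma;      <- stale copy
 *                                            binder_vma_close()
 *                                              alloc->vma = NULL;
 *                                            the vma is torn down and freed
 *     down_write(&mm->mmap_sem)
 *     zap_page_range(vma, ...)  <- dereferences the freed vma: use-after-free
 */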

The fix is to check alloc->vma after acquiring the mmap_sem (which we
were acquiring anyway) and bail out of binder_alloc_free_page() if it
has changed to NULL.
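
Condensed, the sequence in binder_alloc_free_page() after this patch becomes the following sketch (it mirrors the hunks below; when the vma is already gone only the user-space zap is skipped, and the kernel-side unmap still runs):

	mm = alloc->vma_vm_mm;
	/* Same as mmget_not_zero() in later kernel versions */
	if (!atomic_inc_not_zero(&mm->mm_users))
		goto err_mmget;
	if (!down_write_trylock(&mm->mmap_sem))
		goto err_down_write_mmap_sem_failed;
	vma = alloc->vma;	/* re-read under mmap_sem, stable against munmap() */

	list_lru_isolate(lru, item);
	spin_unlock(lock);

	if (vma) {
		trace_binder_unmap_user_start(alloc, index);
		zap_page_range(vma, page_addr, PAGE_SIZE, NULL);
		trace_binder_unmap_user_end(alloc, index);
	}
	up_write(&mm->mmap_sem);
	mmput(mm);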

Change-Id: I9ea0558a57635a747d7a48ed35991d39b860abf6
Signed-off-by: Todd Kjos <tkjos@google.com>
(cherry picked from commit 7257eac9401f989a62503b6c12a47af1b10591d1)
Todd Kjos authored 2018-11-26 17:24:15 -08:00, committed by codeworkx
parent c7498034be
commit d3a52e0ec2


@@ -924,15 +924,13 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	index = page - alloc->pages;
 	page_addr = (uintptr_t)alloc->buffer + index * PAGE_SIZE;
+	mm = alloc->vma_vm_mm;
+	if (!atomic_inc_not_zero(&mm->mm_users))
+		goto err_mmget;
+	if (!down_write_trylock(&mm->mmap_sem))
+		goto err_down_write_mmap_sem_failed;
 	vma = alloc->vma;
-	if (vma) {
-		/* Same as mmget_not_zero() in later kernel versions */
-		if (!atomic_inc_not_zero(&alloc->vma_vm_mm->mm_users))
-			goto err_mmget;
-		mm = alloc->vma_vm_mm;
-		if (!down_write_trylock(&mm->mmap_sem))
-			goto err_down_write_mmap_sem_failed;
-	}
 
 	list_lru_isolate(lru, item);
 	spin_unlock(lock);
@@ -946,10 +944,9 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 			       PAGE_SIZE, NULL);
 
 		trace_binder_unmap_user_end(alloc, index);
 
-		up_write(&mm->mmap_sem);
-		mmput(mm);
 	}
+	up_write(&mm->mmap_sem);
+	mmput(mm);
 
 	trace_binder_unmap_kernel_start(alloc, index);