From 75ade26d1e24e51299e3b5e9836c42234448e56c Mon Sep 17 00:00:00 2001
From: Mitchel Humpherys
Date: Mon, 17 Nov 2014 15:38:58 -0800
Subject: [PATCH] ion: msm: fix cache maintenance on unmapped buffers

We currently pass a NULL device when doing cache maintenance in
msm_ion_heap_buffer_zero. The dma ops used when a NULL device is given
rely on s->dma_address being the physical address of the underlying
memory. However, msm_ion_heap_buffer_zero can be called on buffers that
have previously been mapped by a different DMA mapper (such as the
IOMMU DMA mapper), which might have set dma_address to something other
than the physical address of the underlying memory. The result is that
we perform cache maintenance on the wrong memory. Fix this by putting
the physical address of the underlying memory back into dma_address
just before doing cache maintenance.

Change-Id: Ic5df328f5aeac09f7c9280ced887d2ba6098eb88
Signed-off-by: Mitchel Humpherys
---
 drivers/staging/android/ion/msm/msm_ion.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/staging/android/ion/msm/msm_ion.c b/drivers/staging/android/ion/msm/msm_ion.c
index cd3f78c6ec28..4322ce5110c6 100644
--- a/drivers/staging/android/ion/msm/msm_ion.c
+++ b/drivers/staging/android/ion/msm/msm_ion.c
@@ -785,6 +785,8 @@ int msm_ion_heap_buffer_zero(struct ion_buffer *buffer)
 	for_each_sg(table->sgl, sg, table->nents, i) {
 		struct page *page = sg_page(sg);
 		unsigned long len = sg->length;
+		/* needed to make dma_sync_sg_for_device work: */
+		sg->dma_address = sg_phys(sg);
 
 		for (j = 0; j < len / PAGE_SIZE; j++)
 			pages_mem.pages[npages++] = page + j;
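
For reference, below is a simplified, self-contained sketch of the path this
patch is protecting. It is illustrative only, not the actual msm_ion.c source:
the function name zero_and_sync_sketch and the elided zeroing step are
assumptions; only the sg->dma_address = sg_phys(sg) assignment corresponds to
the change above.

    #include <linux/scatterlist.h>
    #include <linux/dma-mapping.h>

    static void zero_and_sync_sketch(struct sg_table *table)
    {
    	struct scatterlist *sg;
    	int i;

    	for_each_sg(table->sgl, sg, table->nents, i) {
    		/*
    		 * A previous mapping (e.g. through an IOMMU) may have
    		 * left an IOVA in dma_address; the dma ops used with a
    		 * NULL device assume it holds the physical address, so
    		 * restore it before the sync below.
    		 */
    		sg->dma_address = sg_phys(sg);
    	}

    	/* ... zero the buffer contents here (elided) ... */

    	/*
    	 * With a NULL device, this sync reads sg->dma_address directly,
    	 * so the restoration above makes it operate on the right memory.
    	 */
    	dma_sync_sg_for_device(NULL, table->sgl, table->nents,
    			       DMA_BIDIRECTIONAL);
    }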