ion: msm: fix cache maintenance on unmapped buffers

We currently pass a NULL device when doing cache maintenance in
msm_ion_heap_buffer_zero.  The dma ops that are used when a NULL device
is given rely on s->dma_address being the physical address of the
underlying memory.  However, msm_ion_heap_buffer_zero can be called on
buffers that have previously been mapped by a different DMA mapper (like
the IOMMU DMA mapper) which might have set dma_address to something
other than the physical address of the underlying memory.  This results
in us doing cache maintenance on memory that we shouldn't be touching.
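
As a hedged illustration (the device pointer and helper name are
assumptions, not driver code), a prior mapping through an IOMMU-backed
device might look roughly like this; it overwrites each entry's
dma_address with an I/O virtual address rather than a physical address:

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/scatterlist.h>

/*
 * Sketch only: after a mapping like this, sg->dma_address holds an
 * IOVA, so it no longer equals sg_phys(sg).
 */
static int map_buffer_through_iommu(struct device *iommu_dev,
				    struct sg_table *table)
{
	return dma_map_sg(iommu_dev, table->sgl, table->nents,
			  DMA_BIDIRECTIONAL) ? 0 : -ENOMEM;
}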

Fix this by putting the physical address for the underlying memory back
into dma_address just before doing cache maintenance.
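
As a rough sketch of the resulting flow (the helper name is an
assumption; only the scatterlist walk and the NULL-device sync are
meant to mirror what the change enables):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Sketch only, not the driver's exact code: put the physical address
 * of each chunk back into dma_address so the NULL-device DMA ops
 * operate on the right memory, then perform the sync.
 */
static void sync_table_by_phys(struct sg_table *table)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(table->sgl, sg, table->nents, i)
		sg->dma_address = sg_phys(sg);

	dma_sync_sg_for_device(NULL, table->sgl, table->nents,
			       DMA_BIDIRECTIONAL);
}

The actual patch only adds the dma_address assignment inside the
existing page loop, as shown in the diff below.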

Change-Id: Ic5df328f5aeac09f7c9280ced887d2ba6098eb88
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>

commit 75ade26d1e (parent 2507a77632)
Author:    Mitchel Humpherys, 2014-11-17 15:38:58 -08:00
Committer: David Keitel

@@ -785,6 +785,8 @@ int msm_ion_heap_buffer_zero(struct ion_buffer *buffer)
 	for_each_sg(table->sgl, sg, table->nents, i) {
 		struct page *page = sg_page(sg);
 		unsigned long len = sg->length;
+		/* needed to make dma_sync_sg_for_device work: */
+		sg->dma_address = sg_phys(sg);
 		for (j = 0; j < len / PAGE_SIZE; j++)
 			pages_mem.pages[npages++] = page + j;