iommu_map() calls trace_map() with iova and size. trace_map()
should report the original iova and size rather than the values
left over after mapping: size is always zero by the end of the
mapping loop, which is useless to report, and the incremented
iova is not as useful as the original one. Change iommu_map()
to call trace_map() with the original iova and original size.
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
Reported-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
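A condensed sketch of the fix, following the upstream iommu_map() structure
(sanity checks elided): keep the caller's iova and size in local variables
before the loop advances them, and hand those to the tracepoint.

    int iommu_map(struct iommu_domain *domain, unsigned long iova,
                  phys_addr_t paddr, size_t size, int prot)
    {
            unsigned long orig_iova = iova; /* caller's iova */
            size_t orig_size = size;        /* caller's size */
            int ret = 0;

            while (size) {
                    size_t pgsize = iommu_pgsize(domain, iova | paddr, size);

                    ret = domain->ops->map(domain, iova, paddr, pgsize, prot);
                    if (ret)
                            break;

                    iova  += pgsize;
                    paddr += pgsize;
                    size  -= pgsize;
            }

            /* roll back on error, otherwise trace the original request */
            if (ret)
                    iommu_unmap(domain, orig_iova, orig_size - size);
            else
                    trace_map(orig_iova, paddr, orig_size);

            return ret;
    }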
Currently map and unmap are implemented as events under a
common trace class declaration. The common class forces
trace_unmap() to require a bogus physical address argument
that it doesn't use. Changing unmap to report the unmapped size
will provide useful information for debugging. Remove the common
map_unmap trace class and make map and unmap separate events,
rather than events under the same class, to allow for
differences in the reported information. In addition, change
map and unmap to handle the size value as size_t instead of int
to match the passed size value and avoid overflow.
Change-Id: Ic231e28c2784fbfe41a6749175cc390920165e81
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
Suggested-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
[pdaly@codeaurora.org Keep map_unmap class]
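Upstream, unmap ends up as a standalone TRACE_EVENT roughly as below (a
sketch reconstructed from the upstream header; per the backport note above,
this tree keeps the map_unmap class instead):

    TRACE_EVENT(unmap,

            TP_PROTO(unsigned long iova, size_t size, size_t unmapped_size),

            TP_ARGS(iova, size, unmapped_size),

            TP_STRUCT__entry(
                    __field(u64, iova)
                    __field(size_t, size)
                    __field(size_t, unmapped_size)
            ),

            TP_fast_assign(
                    __entry->iova = iova;
                    __entry->size = size;
                    __entry->unmapped_size = unmapped_size;
            ),

            TP_printk("IOMMU: iova=0x%016llx size=%zu unmapped_size=%zu",
                      __entry->iova, __entry->size, __entry->unmapped_size)
    );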
iommu_unmap() calls trace_unmap() with the changed iova and the
original size. trace_unmap() should report the original iova
instead. Change iommu_unmap() to call trace_unmap() with the
original iova.
Change-Id: Iedbaf6ddef566626f5c3dd656a8c1b8cf8c15663
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
Reported-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
[pdaly@codeaurora.org Resolve minor conflicts]
If the IOMMU supports pages smaller than the CPU page size, segments
which lie at offsets within the CPU page may be mapped based on the
finer-grained IOMMU page boundaries. This minimises the amount of
non-buffer memory between the CPU page boundary and the start of the
segment which must be mapped and therefore exposed to the device, and
brings the default iommu_map_sg implementation in line with
iommu_map/unmap with respect to alignment.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
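A sketch of the resulting generic fallback (simplified from the upstream
code this series converges on): the segment offset only has to be aligned
to the IOMMU's minimum page size, and that offset is folded into the
physical address being mapped.

    size_t default_iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
                                struct scatterlist *sg, unsigned int nents,
                                int prot)
    {
            unsigned int i, min_pagesz = 1 << __ffs(domain->ops->pgsize_bitmap);
            struct scatterlist *s;
            size_t mapped = 0;
            int ret;

            for_each_sg(sg, s, nents, i) {
                    /* map from the IOMMU-page-aligned offset in the CPU page */
                    phys_addr_t phys = page_to_phys(sg_page(s)) + s->offset;

                    if (!IS_ALIGNED(s->offset, min_pagesz))
                            goto out_err;

                    ret = iommu_map(domain, iova + mapped, phys, s->length, prot);
                    if (ret)
                            goto out_err;

                    mapped += s->length;
            }

            return mapped;

    out_err:
            /* undo the mappings already done */
            iommu_unmap(domain, iova, mapped);
            return 0;
    }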
IOMMU drivers can be initialized from of_iommu helpers. Such drivers don't
need to provide an add_device callback to operate properly, so there is no
need to fail initialization if the callback is missing.
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When some part of bus_set_iommu fails it should undo any changes made
and not simply leave everything as is.
This includes unregistering the bus notifier in iommu_bus_init when
add_iommu_group fails and also setting the bus->iommu_ops back to NULL.
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
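A condensed sketch of the unwinding as it ends up after this and the related
cleanup patches in this series (add_iommu_group, remove_iommu_group,
iommu_bus_notifier and the callback-data struct are the surrounding iommu.c
helpers, shown here only in outline):

    static int iommu_bus_init(struct bus_type *bus, const struct iommu_ops *ops)
    {
            struct iommu_callback_data cb = { .ops = ops };
            struct notifier_block *nb;
            int err;

            nb = kzalloc(sizeof(*nb), GFP_KERNEL);
            if (!nb)
                    return -ENOMEM;

            nb->notifier_call = iommu_bus_notifier;
            err = bus_register_notifier(bus, nb);
            if (err)
                    goto out_free;

            err = bus_for_each_dev(bus, NULL, &cb, add_iommu_group);
            if (err)
                    goto out_err;

            return 0;

    out_err:
            /* undo what add_iommu_group already did */
            bus_for_each_dev(bus, NULL, &cb, remove_iommu_group);
            bus_unregister_notifier(bus, nb);
    out_free:
            kfree(nb);
            return err;
    }

    int bus_set_iommu(struct bus_type *bus, const struct iommu_ops *ops)
    {
            int err;

            if (bus->iommu_ops != NULL)
                    return -EBUSY;

            bus->iommu_ops = ops;
            err = iommu_bus_init(bus, ops);
            if (err)
                    bus->iommu_ops = NULL;  /* don't leave a half-set-up bus */

            return err;
    }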
The IOMMU-API works on page boundaries, unlike the DMA-API
which can work with sub-page buffers. The sg->offset
field does not make sense on the IOMMU level, so force it to
be 0. Do some error-path consolidation while at it.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
It might be useful for IOMMU clients to peek and poke at their IOMMU's
registers, but knowing how to access those registers is really the job
of the IOMMU driver (it might need to enable specific clocks and
regulators, for example). Provide an API to read and write IOMMU
registers that can be implemented by the driver.
Change-Id: I5b2f19225f8bd258278780ff24b4ea96460857aa
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
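One plausible shape for such an interface (the wrapper and op names below are
assumptions, not taken from the text): thin framework wrappers that simply
dispatch to the driver, which can take care of clocks and regulators
internally.

    unsigned long iommu_reg_read(struct iommu_domain *domain, unsigned long offset)
    {
            if (domain->ops->reg_read)
                    return domain->ops->reg_read(domain, offset);

            return 0;
    }

    void iommu_reg_write(struct iommu_domain *domain, unsigned long offset,
                         unsigned long val)
    {
            if (domain->ops->reg_write)
                    domain->ops->reg_write(domain, offset, val);
    }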
IOMMU drivers might want more control over the types of faults being
triggered with iommu_trigger_fault. Add a flags parameter that can be
used to provide more control over the types of faults that can be
triggered.
Change-Id: I2f21b383437430e957ab52070d3575e8cb3dee90
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
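A sketch of the resulting call (argument semantics assumed; the flags are
simply passed through for the driver to interpret):

    void iommu_trigger_fault(struct iommu_domain *domain, unsigned long flags)
    {
            if (domain->ops->trigger_fault)
                    domain->ops->trigger_fault(domain, flags);
    }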
Currently we're creating an "iommu" debugfs directory from the
iommu-debug code. Other IOMMU modules might want to make use of this
same directory, so create it from the IOMMU framework code itself.
Change-Id: I679fdfc34ba5fcbd927dc5981438c6fabcfa3639
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
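A minimal sketch of hoisting the directory into the framework (the variable
and init-function names are illustrative): create the top-level "iommu"
directory once and let other modules place their entries beneath it.

    struct dentry *iommu_debugfs_top;       /* shared "iommu" debugfs directory */

    static int __init iommu_debugfs_init(void)
    {
            iommu_debugfs_top = debugfs_create_dir("iommu", NULL);
            if (!iommu_debugfs_top)
                    pr_err("Couldn't create iommu debugfs directory\n");

            return 0;
    }
    core_initcall(iommu_debugfs_init);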
It can be useful to trigger an IOMMU fault during development and
debugging. Add support to the IOMMU framework to do so.
Change-Id: I908c9f5b52c6abe937f031de546d290027ba64b5
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
Some IOMMU hardware implementations provide hardware translation
operations that can be useful during debugging and development. Add a
function for this purpose along with an associated op in the iommu_ops
structure so that drivers can implement it.
Change-Id: I54ad5df526cdce05f8e04206a4f01253b3976b48
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
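A sketch assuming the new op is called iova_to_phys_hard (as in related
downstream trees): the framework falls back to returning 0 when the driver
doesn't implement it.

    phys_addr_t iommu_iova_to_phys_hard(struct iommu_domain *domain,
                                        dma_addr_t iova)
    {
            if (unlikely(domain->ops->iova_to_phys_hard == NULL))
                    return 0;

            return domain->ops->iova_to_phys_hard(domain, iova);
    }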
Currently, debugging IOMMU issues is done via manual instrumentation of
the code or via low-level techniques like using a JTAG debugger.
Introduce a set of library functions and debugfs files to facilitate
interactive debugging and testing.
This patch introduces the basic infrastructure as well as an initial
set of debugfs files for:
- viewing IOMMU attachments (domain->dev mappings created by
iommu_attach_device) and resulting attributes (like the base address
of the page tables)
- basic performance profiling
Example usage:
# cd /sys/kernel/debug/iommu/attachments
# cat b40000.qcom,kgsl-iommu:iommu_kgsl_cb2
Domain: 0xffffffc0cb983f00
PT_BASE_ADDR: virt=0xffffffc057eca000 phys=0x00000000d7eca000
# cd /sys/kernel/debug/iommu/tests
# cat soc:qcom,cam_smmu:msm_cam_smmu_cb1/profiling
size iommu_map iommu_unmap
4K 47 us 909 us
64K 97 us 594 us
2M 1536 us 605 us
12M 8737 us 1193 us
20M 26517 us 1121 us
size iommu_map_sg iommu_unmap
64K 31 us 656 us
2M 885 us 600 us
12M 2674 us 687 us
20M 4352 us 1096 us
Change-Id: I1c301eec6e64688831cad80ffd0380743f7f0df6
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
Add ftrace start and end logging for map, iommu_map_sg and unmap
in order to facilitate performance testing.
Change-Id: I9ddf241ffa6cf519f6abece7b0820640f5ce1975
Signed-off-by: Liam Mark <lmark@codeaurora.org>
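Illustrative only (the start/end tracepoint names are assumptions about this
downstream change): paired events bracket each operation so a trace
post-processor can compute per-call latency.

    int iommu_map(struct iommu_domain *domain, unsigned long iova,
                  phys_addr_t paddr, size_t size, int prot)
    {
            int ret;

            trace_map_start(iova, paddr, size);     /* hypothetical start event */
            ret = do_iommu_map(domain, iova, paddr, size, prot); /* existing logic,
                                                                    factored out */
            trace_map_end(iova, paddr, size);       /* hypothetical end event */

            return ret;
    }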
The .dma_supported() op checks whether the device can support DMA to the
memory described by the mask. Add support for this operation in the
iommu layer.
Change-Id: Icf37b9540aa68c2be3fd603a48402d6fcccd8208
Signed-off-by: Neeti Desai <neetid@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
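A sketch of one way this can look (the wrapper and op names are assumptions):
defer the decision to the IOMMU driver and treat a missing op as "no
restriction".

    bool iommu_dma_supported(struct iommu_domain *domain, struct device *dev,
                             u64 mask)
    {
            if (domain->ops->dma_supported == NULL)
                    return true;    /* assume no restriction if the op is absent */

            return domain->ops->dma_supported(domain, dev, mask);
    }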
We're currently BUG()'ing when we can't find a valid IOMMU page size
without printing any other information. Add some more information about
the parameters passed to the function to aid in debugging.
Change-Id: I1797bdfa2ef9d899ef4ffcb36fea769b67a1f991
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
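A sketch of the kind of diagnostic meant here, assuming the BUG_ON sits in
the upstream iommu_pgsize() helper: print the inputs that led to the failure
before dying.

    /* inside iommu_pgsize(), instead of a bare BUG_ON(!pgsize): */
    if (unlikely(!pgsize)) {
            pr_err("no valid IOMMU page size: addr_merge=%#lx size=%#zx pgsize_bitmap=%#lx\n",
                   addr_merge, size, domain->ops->pgsize_bitmap);
            BUG();
    }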
Currently we walk each last-level leaf pte during unmap and zero them
out individually. Since these last-level ptes are all contiguous (up to
512 entries), optimize the unmapping process by simply zeroing them all
out at once rather than operating on them individually.
Change-Id: I21d490e8a94355df4d4caecab33774b5f8ecf3ca
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
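Illustrative contrast (not the driver's actual code): because the leaf PTEs
being cleared are contiguous in memory, one memset() replaces the per-entry
loop.

    /* before: clear each leaf PTE individually */
    for (i = 0; i < nr_entries; i++)
            ptep[i] = 0;

    /* after: clear the whole contiguous run (up to 512 entries) at once */
    memset(ptep, 0, nr_entries * sizeof(*ptep));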
It can be useful in IOMMU drivers. Export it.
Change-Id: I4c423d256312250f1e33ca8d64dfe1626f008b5e
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
Mapping and unmapping are more often than not in the critical path.
map_sg allows IOMMU driver implementations to optimize the process
of mapping buffers into the IOMMU page tables.
Instead of mapping a buffer one page at a time and requiring potentially
expensive TLB operations for each page, this function allows the driver
to map all pages in one go and defer TLB maintenance until after all
pages have been mapped.
Additionally, the mapping operation would be faster in general since
clients do not have to keep calling the map API over and over again for
each physically contiguous chunk of memory that needs to be mapped to a
virtually contiguous region.
Change-Id: I1f3dd2c3cf67b3db40ee1793580d6af5fec1247d
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Git-commit: 315786ebbf
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
[mitchelh: fix existing callers and implementations of
iommu_{map,unmap}_range to match the new function names and APIs,
maintaining stubs for the old API so that out-of-tree modules can
continue to compile]
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
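The shape of the addition, sketched from the referenced upstream commit (the
generic fallback and the out-of-tree compatibility stubs mentioned above are
not shown): iommu_ops gains a map_sg call-back and the API exposes a thin
wrapper around it.

    /* new member in struct iommu_ops (other members elided): */
    size_t (*map_sg)(struct iommu_domain *domain, unsigned long iova,
                     struct scatterlist *sg, unsigned int nents, int prot);

    /* API wrapper: the driver maps the whole scatterlist in one go and can
     * defer TLB maintenance until everything has been mapped */
    static inline size_t iommu_map_sg(struct iommu_domain *domain,
                                      unsigned long iova, struct scatterlist *sg,
                                      unsigned int nents, int prot)
    {
            return domain->ops->map_sg(domain, iova, sg, nents, prot);
    }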
Reset the following files:
arm-smmu.c
iommu.c
include/linux/iommu.h
include/trace/events/iommu.h
to the git merge base between 3.18 and 4.4:
b2776bf714
Change-Id: I9aa58e08125e3a72d1e9742e26d4a6fab34b0ed5
Signed-off-by: Patrick Daly <pdaly@codeaurora.org>
commit db0fa0cb01 "scatterlist: use sg_phys()" did replacements of
the form:
    phys_addr_t phys = page_to_phys(sg_page(s));
    phys_addr_t phys = sg_phys(s) & PAGE_MASK;
However, this breaks platforms where sizeof(phys_addr_t) >
sizeof(unsigned long). Revert for 4.3 and 4.4 to make room for a
combined helper in 4.5.
Cc: <stable@vger.kernel.org>
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Fixes: db0fa0cb01 ("scatterlist: use sg_phys()")
Suggested-by: Joerg Roedel <joro@8bytes.org>
Reported-by: Vitaly Lavrov <vel21ripn@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
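An illustration of the breakage (a standalone user-space example, not kernel
code): when unsigned long is 32-bit but phys_addr_t is 64-bit, the 32-bit
PAGE_MASK zero-extends and the AND silently clears the upper
physical-address bits.

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t phys_addr_t;                   /* e.g. 32-bit ARM with LPAE */
    #define PAGE_MASK (~((uint32_t)4096 - 1))       /* 32-bit mask: 0xfffff000 */

    int main(void)
    {
            phys_addr_t phys = 0x1fffff234ULL;      /* lives above 4 GiB */

            /* PAGE_MASK zero-extends to 0x00000000fffff000, losing bits 32+ */
            printf("%#llx\n", (unsigned long long)(phys & PAGE_MASK));

            return 0;       /* prints 0xfffff000 instead of 0x1fffff000 */
    }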
Now that the iommu core's support for iommu groups is no longer
PCI-centric, we can move default domain allocation to the
bus-independent iommu_group_get_for_dev() function.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
All callers of iommu_group_get_for_dev() provide a
device_group call-back now, so this fall-back is no longer
needed.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Rename that function to pci_device_group() and export it, so
that IOMMU drivers can use it as their device_group
call-back.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
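For example, a PCI-based IOMMU driver can now point its device_group
call-back at the exported helper (the driver name below is illustrative):

    static const struct iommu_ops example_iommu_ops = {
            /* ... other call-backs ... */
            .device_group   = pci_device_group,     /* reuse the generic PCI grouping */
    };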
That call-back is currently unused; change it into a
call-back function for finding the right IOMMU group for a
device.
This is a first step to remove the hard-coded PCI dependency
in the iommu-group code.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The -ENODEV error just means that the device is not
translated by an IOMMU. We shouldn't bail out of iommu
driver initialization when that happens, as this is a common
scenario on ARM.
Not returning -ENODEV in the drivers would be a bad idea, as
the IOMMU core would have no indication whether a device is
translated or not. This indication is not used at the
moment, but will probably be in the future.
Fixes: 19762d7 ("iommu: Propagate error in add_iommu_group")
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Eric Auger <eric.auger@linaro.org>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
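The gist of the fix, as a sketch of add_iommu_group(): treat -ENODEV from
->add_device as "this device is simply not translated" instead of failing
bus initialization.

    static int add_iommu_group(struct device *dev, void *data)
    {
            struct iommu_callback_data *cb = data;
            const struct iommu_ops *ops = cb->ops;
            int ret;

            if (!ops->add_device)
                    return 0;

            WARN_ON(dev->iommu_group);

            ret = ops->add_device(dev);

            /* -ENODEV only means the device is not translated by this IOMMU */
            return (ret != -ENODEV) ? ret : 0;
    }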
The iommu_group_alloc() and iommu_group_get_for_dev()
functions return error pointers, they never return NULL.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
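So the callers' checks take this form (sketch):

    group = iommu_group_get_for_dev(dev);
    if (IS_ERR(group))              /* never NULL, so a !group check is wrong */
            return PTR_ERR(group);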
This function can be called by an IOMMU driver to request
that a device's default domain be direct mapped.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Use the information exported by the IOMMU drivers to create
direct mapped regions in the default domains.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Add two new functions to the IOMMU-API to allow the IOMMU
drivers to export the requirements for direct mapped regions
per device.
This is useful for exporting the information in Intel VT-d's
RMRR entries or AMD-Vi's unity mappings.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
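A sketch of the interface (the names follow the "direct mapped region"
wording above; the exact fields are a reconstruction): drivers report a list
of regions that must stay identity-mapped, and the core maps them into the
default domain.

    struct iommu_dm_region {
            struct list_head        list;
            phys_addr_t             start;  /* IOVA == physical (direct mapped) */
            size_t                  length;
            int                     prot;   /* IOMMU_READ, IOMMU_WRITE, ... */
    };

    void iommu_get_dm_regions(struct device *dev, struct list_head *list);
    void iommu_put_dm_regions(struct device *dev, struct list_head *list);

    /* typical use by the core when setting up a default domain: */
    static void create_direct_mappings(struct iommu_domain *domain,
                                       struct device *dev)
    {
            struct iommu_dm_region *entry;
            LIST_HEAD(mappings);

            iommu_get_dm_regions(dev, &mappings);
            list_for_each_entry(entry, &mappings, list)
                    iommu_map(domain, entry->start, entry->start,
                              entry->length, entry->prot);
            iommu_put_dm_regions(dev, &mappings);
    }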
Make use of the default domain and re-attach a device to it
when it is detached from another domain. Also enforce that a
device has to be in the default domain before it can be
attached to a different domain.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
This patch changes the behavior of the iommu_attach_device
and iommu_detach_device functions. With this change these
functions only work on devices that have their own group.
For all other devices the iommu_attach_group() and
iommu_detach_group() functions must be used.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The default domain will be used (if supported by the iommu
driver) when the devices in the iommu group are not attached
to any other domain.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Make sure we call the ->remove_device call-back on all
devices already initialized with ->add_device when the bus
initialization fails.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Write a message to the kernel log when a device is added to or
removed from a group, and add debug messages to the group
allocation and release routines.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Including the function name is only useful for debugging
messages; it doesn't belong in other messages from the
iommu core.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
iommu_group_alloc might be called very early in the case of iommu
controllers activated from of_iommu, so ensure that this part of the
subsystem is ready when devices are being populated from the device tree
(core_initcall seems to be okay for this case).
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
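A minimal sketch, assuming the group bookkeeping is created from an initcall
in drivers/iommu/iommu.c: registering it at core_initcall level makes it
available before devices are populated from the device tree.

    static int __init iommu_init(void)
    {
            iommu_group_kset = kset_create_and_add("iommu_groups",
                                                   NULL, kernel_kobj);
            BUG_ON(!iommu_group_kset);

            return 0;
    }
    core_initcall(iommu_init);      /* previously registered at a later level */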
All drivers have been converted to the new domain_alloc and
domain_free iommu-ops. So remove the old ones and get rid of
iommu_domain->priv too, as this is no longer needed when the
struct iommu_domain is embedded in the private structures of
the iommu drivers.
Tested-by: Thierry Reding <treding@nvidia.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
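For illustration (the driver names are hypothetical), this is the pattern the
conversion enables: the driver embeds struct iommu_domain and recovers its
private structure with container_of(), so no ->priv pointer is needed.

    struct my_iommu_domain {
            spinlock_t              lock;
            /* driver-private page-table state, etc. */
            struct iommu_domain     domain;         /* embedded core domain */
    };

    static struct my_iommu_domain *to_my_domain(struct iommu_domain *dom)
    {
            return container_of(dom, struct my_iommu_domain, domain);
    }

    static struct iommu_domain *my_domain_alloc(unsigned type)
    {
            struct my_iommu_domain *priv;

            if (type != IOMMU_DOMAIN_UNMANAGED)
                    return NULL;

            priv = kzalloc(sizeof(*priv), GFP_KERNEL);
            if (!priv)
                    return NULL;

            spin_lock_init(&priv->lock);
            return &priv->domain;
    }

    static void my_domain_free(struct iommu_domain *dom)
    {
            kfree(to_my_domain(dom));
    }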
Check for the new __IOMMU_DOMAIN_PAGING flag before calling
into the iommu driver's ->map and ->unmap call-backs.
Tested-by: Thierry Reding <treding@nvidia.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
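A sketch of the guard, assuming the flag is carried in iommu_domain->type as
introduced by the domain-type patch below:

    /* at the top of iommu_map() (and analogously in iommu_unmap()): */
    if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
            return -EINVAL;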
This allows domains to be handled differently based on their
type in the future. An IOMMU driver can implement certain
optimizations for DMA-API domains, for example.
The domain types can be extended later and some of the
existing domain attributes can be migrated to become domain
flags.
Tested-by: Thierry Reding <treding@nvidia.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
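For reference, the domain types roughly as they are defined upstream (a
reconstruction, so treat it as a sketch): a small set of flags composed into
the user-visible types.

    /* low-level feature flags */
    #define __IOMMU_DOMAIN_PAGING   (1U << 0)  /* domain supports iommu_map/unmap */
    #define __IOMMU_DOMAIN_DMA_API  (1U << 1)  /* domain is used for the DMA-API  */
    #define __IOMMU_DOMAIN_PT       (1U << 2)  /* domain is identity mapped       */

    /* domain types exposed through the IOMMU-API */
    #define IOMMU_DOMAIN_BLOCKED    (0U)
    #define IOMMU_DOMAIN_IDENTITY   (__IOMMU_DOMAIN_PT)
    #define IOMMU_DOMAIN_UNMANAGED  (__IOMMU_DOMAIN_PAGING)
    #define IOMMU_DOMAIN_DMA        (__IOMMU_DOMAIN_PAGING | __IOMMU_DOMAIN_DMA_API)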
These new call-backs defer the allocation and destruction of
'struct iommu_domain' to the iommu driver. This allows
drivers to embed this struct into their private domain
structures and to get rid of the domain_init and
domain_destroy call-backs when all drivers have been
converted.
Tested-by: Thierry Reding <treding@nvidia.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
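The new iommu_ops entry points and their use by the core look roughly like
this (a sketch of the upstream API):

    /* new members in struct iommu_ops: */
    struct iommu_domain *(*domain_alloc)(unsigned iommu_domain_type);
    void (*domain_free)(struct iommu_domain *domain);

    /* core side: the domain now comes from the driver, which may embed it
     * in a larger private structure */
    domain = bus->iommu_ops->domain_alloc(IOMMU_DOMAIN_UNMANAGED);
    if (!domain)
            return NULL;

    domain->ops = bus->iommu_ops;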