[PATCH v4 05/16] iommu/dma: rename iommu_dma_*map_page to iommu_dma_*map_phys
Jason Gunthorpe
jgg at nvidia.com
Thu Aug 28 23:38:57 AEST 2025
On Tue, Aug 19, 2025 at 08:36:49PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro at nvidia.com>
>
> Rename the IOMMU DMA mapping functions to better reflect their actual
> calling convention. The functions iommu_dma_map_page() and
> iommu_dma_unmap_page() are renamed to iommu_dma_map_phys() and
> iommu_dma_unmap_phys() respectively, as they already operate on physical
> addresses rather than page structures.
>
> The calling convention changes from accepting (struct page *page,
> unsigned long offset) to (phys_addr_t phys), which eliminates the need
> for page-to-physical address conversion within the functions. This
> renaming prepares for the broader DMA API conversion from page-based
> to physical address-based mapping throughout the kernel.
>
> All callers are updated to pass physical addresses directly, including
> dma_map_page_attrs(), scatterlist mapping functions, and DMA page
> allocation helpers. The change simplifies the code by removing the
> page_to_phys() + offset calculation that was previously done inside
> the IOMMU functions.
>
> Signed-off-by: Leon Romanovsky <leonro at nvidia.com>
> ---
> drivers/iommu/dma-iommu.c | 14 ++++++--------
> include/linux/iommu-dma.h | 7 +++----
> kernel/dma/mapping.c | 4 ++--
> kernel/dma/ops_helpers.c | 6 +++---
> 4 files changed, 14 insertions(+), 17 deletions(-)
This looks fine
Reviewed-by: Jason Gunthorpe <jgg at nvidia.com>
But a note related to the other patches:
iommu_dma_map_phys() ends up like this:
	if (dev_use_swiotlb(dev, size, dir) &&
	    iova_unaligned(iovad, phys, size)) {
		if (attrs & DMA_ATTR_MMIO)
			return DMA_MAPPING_ERROR;

		phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs);
But attrs is passed all the way down to swiotlb_tbl_map_single(),
maybe the DMA_ATTR_MMIO check should be moved there?
There are a few call chains with this redundancy:
  dma_iova_link()
   -> iommu_dma_iova_link_swiotlb
    -> iommu_dma_iova_bounce_and_link
     -> iommu_dma_map_swiotlb
      -> swiotlb_tbl_map_single()

  iommu_dma_map_phys()
   -> iommu_dma_map_swiotlb
    -> swiotlb_tbl_map_single()

  dma_direct_map_phys()
   -> swiotlb_map()
    -> swiotlb_tbl_map_single()
It makes a lot of sense to put the check for MMIO at the point where
slots[].orig_addr is stored, because that is the point where we start
to require a pfn_to_page().
Jason