[PATCH v6 00/16] dma-mapping: migrate to physical address-based API
Jason Gunthorpe
jgg at nvidia.com
Wed Sep 24 03:09:36 AEST 2025
On Sat, Sep 20, 2025 at 06:47:27PM -0600, Keith Busch wrote:
> On Sat, Sep 20, 2025 at 06:53:52PM +0300, Leon Romanovsky wrote:
> > On Fri, Sep 19, 2025 at 10:08:21AM -0600, Keith Busch wrote:
> > > On Fri, Sep 12, 2025 at 12:03:27PM +0300, Leon Romanovsky wrote:
> > > > On Fri, Sep 12, 2025 at 12:25:38AM +0200, Marek Szyprowski wrote:
> > > > > >
> > > > > > This series does the core code and modern flows. A followup series
> > > > > > will give the same treatment to the legacy dma_ops implementation.
> > > > >
> > > > > Applied patches 1-13 into dma-mapping-for-next branch. Let's check if it
> > > > > works fine in linux-next.
> > > >
> > > > Thanks a lot.
> > >
> > > Just fyi, when dma debug is enabled, we're seeing this new warning
> > > below. I have not had a chance to look into it yet, so I'm just
> > > reporting the observation.
> >
> > Did you apply all patches or only Marek's branch?
> > I don't get this warning when I run my NVMe tests on current dmabuf-vfio branch.
>
> This was the snapshot of linux-next from the 20250918 tag. It doesn't
> have the full patchset applied.
>
> One other thing to note, this was running on an arm64 platform using smmu
> configured with 64k pages. If your iommu granule is 4k instead, we
> wouldn't use the blk_dma_map_direct path.
I spent some time looking to see if I could guess what this is and
came up empty. The most likely explanation is that we are leaking a
dma mapping tracking entry somehow? The DMA API side is pretty simple
here though..
I'm not sure the 64k/4k difference is itself the cause, but triggering
the non-iova flow is probably what exposes the issue.
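To make that concrete, the shape of bug I am guessing at is just an
unbalanced map: dma-debug adds a tracking entry on every map and only
drops it on the matching unmap, so any path that returns without the
unmap accumulates entries even though nothing visibly fails. A made-up
sketch of that pattern (example_do_io()/example_submit() are
hypothetical, not the real nvme/blk code):

/*
 * Hypothetical sketch of the suspected imbalance -- not real nvme/blk
 * code.  dma-debug records a tracking entry for every dma_map_page()
 * and only drops it in the matching dma_unmap_page(), so any path
 * that returns without the unmap leaves entries behind for
 * dump_show() to list, even though the I/O itself looks fine.
 */
#include <linux/dma-mapping.h>

/* hypothetical driver submit hook, stands in for the real I/O path */
static int example_submit(struct device *dev, dma_addr_t dma,
			  unsigned int len);

static int example_do_io(struct device *dev, struct page *page,
			 unsigned int len)
{
	dma_addr_t dma;

	dma = dma_map_page(dev, page, 0, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	if (example_submit(dev, dma, len)) {
		/* forgetting this unmap on the error path leaks an entry */
		dma_unmap_page(dev, dma, len, DMA_TO_DEVICE);
		return -EIO;
	}

	/* the completion path must run the matching dma_unmap_page() */
	return 0;
}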
Can you check the output of this debugfs file:
/*
* Dump mappings entries on user space via debugfs
*/
static int dump_show(struct seq_file *seq, void *v)
? If the system is idle and it still has lots of entries, that is
probably confirmation of the theory.
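For reference, that is exposed as the "dump" file under the dma-api
debugfs directory. A throwaway userspace sketch to count the live
entries, assuming debugfs is mounted at /sys/kernel/debug, the kernel
has CONFIG_DMA_API_DEBUG enabled, and dump_show() emits one line per
tracked mapping (a plain wc -l on the file does the same job):

/*
 * Userspace sketch: count live dma-debug entries via the dma-api
 * debugfs dump file.  Each line is assumed to be one tracked mapping.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/kernel/debug/dma-api/dump", "r");
	char line[256];
	unsigned long entries = 0;

	if (!f) {
		perror("dma-api/dump");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		entries++;
	fclose(f);
	printf("%lu live dma-debug entries\n", entries);
	return 0;
}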
Jason