[PATCH 12/13] kvm/powerpc: Accelerate H_PUT_TCE by implementing it in real mode

Alexander Graf agraf at suse.de
Tue May 17 19:39:01 EST 2011


On 17.05.2011, at 11:35, Benjamin Herrenschmidt wrote:

> On Tue, 2011-05-17 at 11:31 +0200, Alexander Graf wrote:
>> On 17.05.2011, at 11:11, Benjamin Herrenschmidt wrote:
>> 
>>> On Tue, 2011-05-17 at 10:01 +0200, Alexander Graf wrote:
>>>> I'm not sure I fully understand how this is supposed to work. If the
>>>> tables are kept inside the kernel, how does userspace get to know
>>>> where to DMA to?
>>> 
>>> The guest gets a dma range from the device-tree which is the range of
>>> device-side dma addresses it can use that correspond to the table.
>>> 
>>> The guest kernel uses the normal linux iommu space allocator to allocate
>>> space in that region and uses H_PUT_TCE to populate the corresponding
>>> table entries.
>>> 
>>> This is the same interface that is used for "real" iommu's with PCI
>>> devices btw.
>> 
>> I'm still slightly puzzled here :). IIUC the main point of an IOMMU is for the kernel
>> to change where device accesses actually go: the device DMAs to address A, the access
>> goes through the IOMMU, and in reality it hits address B.
> 
> Right :-)
> 
>> Now, how do we tell the devices implemented in qemu that they're supposed to DMA to
>> address B instead of A if the mapping table is kept in-kernel?
> 
> Oh, because qemu mmaps the table :-)

That's the piece of the puzzle I was missing. Please document that interface properly - it needs to be rock stable :)


Alex


