[PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.
Sethi Varun-B16395
B16395 at freescale.com
Fri Apr 5 11:01:38 EST 2013
> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson at redhat.com]
> Sent: Thursday, April 04, 2013 10:14 PM
> To: Sethi Varun-B16395
> Cc: Joerg Roedel; Yoder Stuart-B08248; Wood Scott-B07421;
> iommu at lists.linux-foundation.org; linuxppc-dev at lists.ozlabs.org;
> linux-kernel at vger.kernel.org; galak at kernel.crashing.org;
> benh at kernel.crashing.org
> Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu
> implementation.
>
> On Thu, 2013-04-04 at 16:35 +0000, Sethi Varun-B16395 wrote:
> >
> > > -----Original Message-----
> > > From: Alex Williamson [mailto:alex.williamson at redhat.com]
> > > Sent: Thursday, April 04, 2013 8:52 PM
> > > To: Sethi Varun-B16395
> > > Cc: Joerg Roedel; Yoder Stuart-B08248; Wood Scott-B07421;
> > > iommu at lists.linux-foundation.org; linuxppc-dev at lists.ozlabs.org;
> > > linux-kernel at vger.kernel.org; galak at kernel.crashing.org;
> > > benh at kernel.crashing.org
> > > Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and
> > > iommu implementation.
> > >
> > > On Thu, 2013-04-04 at 13:00 +0000, Sethi Varun-B16395 wrote:
> > > >
> > > > > -----Original Message-----
> > > > > From: Alex Williamson [mailto:alex.williamson at redhat.com]
> > > > > Sent: Wednesday, April 03, 2013 11:32 PM
> > > > > To: Joerg Roedel
> > > > > Cc: Sethi Varun-B16395; Yoder Stuart-B08248; Wood Scott-B07421;
> > > > > iommu at lists.linux-foundation.org; linuxppc-dev at lists.ozlabs.org;
> > > > > linux-kernel at vger.kernel.org; galak at kernel.crashing.org;
> > > > > benh at kernel.crashing.org
> > > > > Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver
> > > > > and iommu implementation.
> > > > >
> > > > > On Tue, 2013-04-02 at 18:18 +0200, Joerg Roedel wrote:
> > > > > > Cc'ing Alex Williamson
> > > > > >
> > > > > > Alex, can you please review the iommu-group part of this patch?
> > > > >
> > > > > Sure, it looks pretty reasonable. AIUI, all PCI devices are
> > > > > below some kind of host bridge that is either new and supports
> > > > > partitioning or old and doesn't. I don't know if that's a
> > > > > visibility or isolation requirement, perhaps PCI ACS-ish. In
> > > > > the new host bridge case, each device gets a group. This seems
> > > > > not to have any quirks for multifunction devices though. On AMD
> > > > > and Intel IOMMUs we test multifunction device ACS support to
> > > > > determine whether all the functions should be in the same group.
> > > > > Is there any reason to trust multifunction devices on PAMU?
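> > > > > As a concrete illustration of that test, here is a rough,
> > > > > untested sketch (pci_acs_enabled() is the existing kernel
> > > > > helper; the function and the fallback policy around it are
> > > > > hypothetical, not the actual x86 or PAMU code):
> > > > >
> > > > >   #include <linux/pci.h>
> > > > >   #include <linux/iommu.h>
> > > > >
> > > > >   #define REQ_ACS_FLAGS (PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CF | PCI_ACS_UF)
> > > > >
> > > > >   static struct iommu_group *mf_device_group(struct pci_dev *pdev)
> > > > >   {
> > > > >           struct pci_dev *tmp = NULL;
> > > > >           struct iommu_group *group;
> > > > >
> > > > >           /* Isolated (or single-function) devices get their own group. */
> > > > >           if (!pdev->multifunction || pci_acs_enabled(pdev, REQ_ACS_FLAGS))
> > > > >                   return iommu_group_alloc();
> > > > >
> > > > >           /* No per-function ACS: reuse a sibling function's group. */
> > > > >           while ((tmp = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, tmp))) {
> > > > >                   if (tmp == pdev || tmp->bus != pdev->bus ||
> > > > >                       PCI_SLOT(tmp->devfn) != PCI_SLOT(pdev->devfn))
> > > > >                           continue;
> > > > >                   group = iommu_group_get(&tmp->dev);
> > > > >                   if (group) {
> > > > >                           pci_dev_put(tmp);
> > > > >                           return group;
> > > > >                   }
> > > > >           }
> > > > >           return iommu_group_alloc();
> > > > >   }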
> > > > >
> > > > [Sethi Varun-B16395] In the case where we can partition endpoints,
> > > > we can distinguish transactions based on the bus/device/function
> > > > number combination. This support is available in the PCIe
> > > > controller (host bridge).
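> > > > To illustrate (sketch only), the requester ID the controller
> > > > matches on is simply the bus/device/function triplet:
> > > >
> > > >   /* 16-bit requester ID: bus[15:8], device[7:3], function[2:0]. */
> > > >   u16 rid = (pdev->bus->number << 8) | pdev->devfn;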
> > >
> > > So can x86 IOMMUs; that's the visibility aspect of IOMMU groups.
> > > Visibility alone doesn't necessarily imply that a device is isolated
> > > though. A multifunction PCI device that doesn't expose ACS support
> > > may not isolate functions from each other. For example, a
> > > peer-to-peer DMA between functions may not be translated by the
> > > upstream IOMMU. IOMMU groups should encompass both visibility and
> > > isolation.
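> > > To make that concrete, a sketch using the existing helper that
> > > walks the whole path up to the root (the flag choice here is
> > > illustrative):
> > >
> > >   /* Isolation means every hop between the device and the IOMMU
> > >    * redirects peer-to-peer traffic upstream, not merely that the
> > >    * IOMMU can see the requester ID. */
> > >   bool isolated = pci_acs_path_enabled(pdev, NULL,
> > >                                        PCI_ACS_RR | PCI_ACS_CF);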
> > [Sethi Varun-B16395] We can isolate DMA access to the host based on
> > the PCI bus/device/function number.
>
> The IOMMU can only isolate DMA that it can see. A multifunction device
> may never expose peer-to-peer DMA to the upstream device; it's
> implementation specific. The ACS flags allow that possibility to be
> controlled and prevented.
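> Roughly, testing whether a single function enforces that looks like
> the sketch below (standard config-space offsets; the helper name is
> made up):
>
>   /* Hypothetical helper: does this function advertise *and* enable
>    * the ACS redirect controls that force DMA up to the IOMMU? */
>   static bool func_redirects_p2p(struct pci_dev *pdev)
>   {
>           int pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ACS);
>           u16 cap, ctrl;
>
>           if (!pos)
>                   return false;
>           pci_read_config_word(pdev, pos + PCI_ACS_CAP, &cap);
>           pci_read_config_word(pdev, pos + PCI_ACS_CTRL, &ctrl);
>           return (cap & ctrl & (PCI_ACS_RR | PCI_ACS_CF)) ==
>                  (PCI_ACS_RR | PCI_ACS_CF);
>   }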
>
> > I thought that was enough to put devices into separate iommu groups.
> > This is a PCIe controller property which allows us to partition PCIe
> > devices. But what I understand from your point is that we also need
> > to consider isolation at the PCIe device level. I will check the case
> > of multifunction devices.
> >
> > >
> > > > > I also find it curious what happens to the iommu group of the
> > > > > host bridge. In the partitionable case the host bridge group is
> > > > > removed; in the non-partitionable case the host bridge group
> > > > > becomes the group for the children, removing the host bridge.
> > > > > It's unique to PAMU so far that these host bridges are even in
> > > > > an iommu group (x86 only adds pci devices), but I don't see it
> > > > > as necessarily wrong leaving it in either scenario. Does it
> > > > > solve some problem to remove them from the groups?
> > > > > Thanks,
> > > > [Sethi Varun-B16395] The PCIe controller isn't a partitionable
> > > > entity, it would always be owned by the host.
> > >
> > > Ownership of a device shouldn't play into the group context. An
> > > IOMMU group should be defined by its visibility and isolation from
> > > other devices. Whether the PCIe controller is allowed to be handed
> > > to userspace is a question for VFIO.
> > [Sethi Varun-B16395] The problem is in the case where we can't
> > partition PCIe devices. The PCIe devices then share the same iommu
> > group as the PCIe controller. This becomes a problem while assigning
> > devices to a guest, as you are required to unbind all the PCIe
> > devices, including the controller, from the host. The PCIe controller
> > can't be unbound from the host, so we simply delete the controller's
> > iommu_group.
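> > Concretely, the deletion amounts to the generic group API calls
> > below (a sketch; the wrapper name is made up):
> >
> >   /* Drop the host bridge out of the group machinery entirely. */
> >   static void remove_bridge_group(struct device *bridge)
> >   {
> >           struct iommu_group *group = iommu_group_get(bridge);
> >
> >           if (group) {
> >                   iommu_group_remove_device(bridge);
> >                   iommu_group_put(group);  /* drop our reference */
> >           }
> >   }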
>
> Unbinding devices is a VFIO implementation detail; it shouldn't leak
> into IOMMU groups. Also note that VFIO has a driver whitelist where we
> can have exceptions to the rule. I recently added pciehp to that list
> because the host driver provides functionality. Being attached to the
> host driver means the device is not accessible to the user through
> VFIO, but other devices in the group are. Thanks,
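> (For reference, the whitelist check is just a driver-name match; this
> is an illustrative sketch, not the exact vfio.c code, and the entries
> shown are examples:)
>
>   static const char * const whitelist[] = { "pci-stub", "pcieport" };
>
>   static bool driver_whitelisted(struct device *dev)
>   {
>           int i;
>
>           if (!dev->driver)
>                   return false;
>           for (i = 0; i < ARRAY_SIZE(whitelist); i++)
>                   if (!strcmp(dev->driver->name, whitelist[i]))
>                           return true;
>           return false;
>   }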
>
Also, as Stuart pointed out, the PCIe controllers aren't the actual DMA devices (the endpoints are). So we remove the device groups allocated for the PCIe controllers.
-Varun