[PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

Bharat Bhushan Bharat.Bhushan at freescale.com
Fri Dec 6 15:17:15 EST 2013



> -----Original Message-----
> From: Wood Scott-B07421
> Sent: Friday, December 06, 2013 5:31 AM
> To: Bhushan Bharat-R65777
> Cc: Alex Williamson; linux-pci at vger.kernel.org; agraf at suse.de; Yoder Stuart-
> B08248; iommu at lists.linux-foundation.org; bhelgaas at google.com; linuxppc-
> dev at lists.ozlabs.org; linux-kernel at vger.kernel.org
> Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> 
> On Sun, 2013-11-24 at 23:33 -0600, Bharat Bhushan wrote:
> >
> > > -----Original Message-----
> > > From: Alex Williamson [mailto:alex.williamson at redhat.com]
> > > Sent: Friday, November 22, 2013 2:31 AM
> > > To: Wood Scott-B07421
> > > Cc: Bhushan Bharat-R65777; linux-pci at vger.kernel.org; agraf at suse.de;
> > > Yoder Stuart-B08248; iommu at lists.linux-foundation.org;
> > > bhelgaas at google.com; linuxppc- dev at lists.ozlabs.org;
> > > linux-kernel at vger.kernel.org
> > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > IOMMU (PAMU)
> > >
> > > On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
> > > > On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
> > > > > On Thu, 2013-11-21 at 11:20 +0000, Bharat Bhushan wrote:
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Alex Williamson [mailto:alex.williamson at redhat.com]
> > > > > > > Sent: Thursday, November 21, 2013 12:17 AM
> > > > > > > To: Bhushan Bharat-R65777
> > > > > > > Cc: joro at 8bytes.org; bhelgaas at google.com; agraf at suse.de;
> > > > > > > Wood Scott-B07421; Yoder Stuart-B08248;
> > > > > > > iommu at lists.linux-foundation.org; linux-
> > > > > > > pci at vger.kernel.org; linuxppc-dev at lists.ozlabs.org; linux-
> > > > > > > kernel at vger.kernel.org; Bhushan Bharat-R65777
> > > > > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for
> > > > > > > Freescale IOMMU (PAMU)
> > > > > > >
> > > > > > > Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each
> > > > > > > vfio user has $COUNT regions at their disposal exclusively)?
> > > > > >
> > > > > > The msi-bank count is system wide and not per aperture, but we
> > > > > > will be setting windows for banks in the device's aperture.
> > > > > > So say we are direct-assigning 2 PCI devices (both have
> > > > > > different iommu groups, so 2 apertures in the iommu) to a VM.
> > > > > > Now qemu can make only one call to learn how many msi-banks
> > > > > > there are, but it must set sub-windows for all banks for both
> > > > > > PCI devices in their respective apertures.
> > > > >
> > > > > I'm still confused.  What I want to make sure of is that the
> > > > > banks are independent per aperture.  For instance, if we have
> > > > > two separate userspace processes operating independently and
> > > > > they both chose to use msi bank zero for their device, that's
> > > > > bank zero within each aperture and doesn't interfere.  Or
> > > > > another way to ask is: can a malicious user interfere with other
> > > > > users by using the wrong bank?
> > > > > Thanks,
> > > >
> > > > They can interfere.
> >
> > Can you explain how they can interfere?
> 
> If more than one VFIO user shares the same MSI group, one of the users can send
> MSIs to another user, by using the wrong interrupt within the bank.  Unexpected
> MSIs could cause misbehavior or denial of service.
> 
> > > > With this hardware, the only way to prevent that
> > > > is to make sure that a bank is not shared by multiple protection contexts.
> > > > For some of our users, though, I believe preventing this is less
> > > > important than the performance benefit.
> >
> > So should we let this patch series in without protection?
> 
> No, there should be some sort of opt-in mechanism similar to IOMMU-less VFIO --
> but not the same exact one, since one is a much more serious loss of isolation
> than the other.

Can you please elaborate on the "opt-in mechanism"?
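For reference, the existing precedent for such an opt-in in VFIO is the
allow_unsafe_interrupts module parameter on vfio_iommu_type1; a PAMU backend
could expose something similar. Below is a purely illustrative sketch: the
parameter name and the vfio_pamu_* / fsl_pamu_* helpers are hypothetical and
are not part of this series.

#include <linux/module.h>
#include <linux/errno.h>

/* Hypothetical PAMU helpers, standing in for whatever the real series
 * provides; these are not actual kernel APIs. */
struct vfio_pamu_container;
extern bool fsl_pamu_msi_bank_busy(int bank);
extern int fsl_pamu_msi_bank_assign(int bank, struct vfio_pamu_container *c);

/* Off by default, so isolation is only given up when the admin asks for it. */
static bool allow_shared_msi_banks;
module_param(allow_shared_msi_banks, bool, 0644);
MODULE_PARM_DESC(allow_shared_msi_banks,
		 "Allow MSI banks to be shared across VFIO users (reduces isolation)");

static int vfio_pamu_msi_bank_claim(struct vfio_pamu_container *c, int bank)
{
	/* Sharing a bank lets one user inject MSIs into another, so
	 * refuse unless the administrator explicitly opted in. */
	if (fsl_pamu_msi_bank_busy(bank) && !allow_shared_msi_banks)
		return -EBUSY;

	return fsl_pamu_msi_bank_assign(bank, c);
}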

> 
> > > I think we need some sort of ownership model around the msi banks then.
> > > Otherwise there's nothing preventing another userspace from
> > > attempting an MSI based attack on other users, or perhaps even on
> > > the host.  VFIO can't allow that.  Thanks,
> >
> > We have very few MSI banks (3 on most of our chips), so we cannot
> > assign one to each userspace.
> 
> That depends on how many users there are.

What I think we can do is:
 - Reserve one MSI region for the host. The host will not share its MSI region with guests.
 - For up to 2 guests (MAX MSI banks minus the host's one), give each a separate MSI sub-region.
 - Additional guests will share MSI regions with other guests.

Any better suggestions are most welcome.
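
To make the policy concrete, here is a rough C sketch of the bank selection,
assuming the bank count was obtained once (e.g. via the series'
VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT ioctl mentioned earlier in the thread).
The function name and the round-robin fallback for extra guests are
hypothetical, not something defined by this series.

/* Bank 0 is reserved for the host and never handed to a guest. */
#define HOST_MSI_BANK	0

/*
 * Return the MSI bank for the nth guest, given the system-wide bank count.
 * The first (count - 1) guests get a dedicated bank; any further guests
 * share the guest banks round-robin.
 */
static int msi_bank_for_guest(unsigned int guest_index,
			      unsigned int msi_bank_count)
{
	unsigned int guest_banks = msi_bank_count - 1;	/* minus the host's bank */

	if (guest_banks == 0)
		return -1;	/* nothing left to hand out safely */

	/* Banks 1..(count-1) are guest banks; wrap once they run out,
	 * which is the point where isolation between guests is lost. */
	return HOST_MSI_BANK + 1 + (guest_index % guest_banks);
}

With 3 banks, the host keeps bank 0, the first two guests get banks 1 and 2,
and a third guest would end up sharing bank 1 with the first guest (and
therefore losing isolation from it).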

Thanks
-Bharat
> 
> -Scott
> 


