[PATCH kernel v2 2/2] KVM: PPC: Check if IOMMU page is contained in the pinned physical page

David Gibson david at gibson.dropbear.id.au
Tue Jul 3 11:36:32 AEST 2018


On Mon, Jul 02, 2018 at 04:32:27PM +1000, Alexey Kardashevskiy wrote:
> On Mon, 2 Jul 2018 14:52:43 +1000
> David Gibson <david at gibson.dropbear.id.au> wrote:
> 
> > On Mon, Jul 02, 2018 at 02:33:30PM +1000, Alexey Kardashevskiy wrote:
> > > On Mon, 2 Jul 2018 14:08:52 +1000
> > > David Gibson <david at gibson.dropbear.id.au> wrote:
> > >   
> > > > On Fri, Jun 29, 2018 at 05:07:47PM +1000, Alexey Kardashevskiy wrote:  
> > > > > On Fri, 29 Jun 2018 15:18:20 +1000
> > > > > Alexey Kardashevskiy <aik at ozlabs.ru> wrote:
> > > > >     
> > > > > > On Fri, 29 Jun 2018 14:57:02 +1000
> > > > > > David Gibson <david at gibson.dropbear.id.au> wrote:
> > > > > >     
> > > > > > > On Fri, Jun 29, 2018 at 02:51:21PM +1000, Alexey Kardashevskiy wrote:      
> > > > > > > > On Fri, 29 Jun 2018 14:12:41 +1000
> > > > > > > > David Gibson <david at gibson.dropbear.id.au> wrote:
> > > > > > > >         
> > > > > > > > > On Tue, Jun 26, 2018 at 03:59:26PM +1000, Alexey Kardashevskiy wrote:        
> > > > > > > > > > We already have a check in drivers/vfio/vfio_iommu_spapr_tce.c that
> > > > > > > > > > an IOMMU page is contained in the physical page so the PCI hardware won't
> > > > > > > > > > get access to unassigned host memory.
> > > > > > > > > > 
> > > > > > > > > > However we do not have this check in the KVM fastpath (H_PUT_TCE accelerated
> > > > > > > > > > code), so userspace can pin memory backed with 64k pages and create
> > > > > > > > > > a hardware TCE table with a bigger page size. We have been lucky so far and
> > > > > > > > > > have not hit this yet, as the very first time the mapping happens
> > > > > > > > > > we do not have tbl::it_userspace allocated yet and fall back to
> > > > > > > > > > userspace, which in turn calls the VFIO IOMMU driver; that fails because
> > > > > > > > > > of the check in vfio_iommu_spapr_tce.c, but relying on that fallback is
> > > > > > > > > > not really a sustainable solution.
> > > > > > > > > > 
> > > > > > > > > > This stores the smallest preregistered page size in the preregistered
> > > > > > > > > > region descriptor and changes the mm_iommu_xxx API to check this against
> > > > > > > > > > the IOMMU page size.
> > > > > > > > > > 
> > > > > > > > > > Signed-off-by: Alexey Kardashevskiy <aik at ozlabs.ru>
> > > > > > > > > > ---
> > > > > > > > > > Changes:
> > > > > > > > > > v2:
> > > > > > > > > > * explicitly check for compound pages before calling compound_order()
> > > > > > > > > > 
> > > > > > > > > > ---
> > > > > > > > > > The bug is: run QEMU _without_ hugepages (no -mem-path) and tell it to
> > > > > > > > > > advertise 16MB pages to the guest; a typical pseries guest will use 16MB
> > > > > > > > > > for IOMMU pages without checking the MMU page size, and this will fail
> > > > > > > > > > at https://git.qemu.org/?p=qemu.git;a=blob;f=hw/vfio/common.c;h=fb396cf00ac40eb35967a04c9cc798ca896eed57;hb=refs/heads/master#l256
> > > > > > > > > > 
> > > > > > > > > > With the change, mapping will fail in KVM and the guest will print:
> > > > > > > > > > 
> > > > > > > > > > mlx5_core 0000:00:00.0: ibm,create-pe-dma-window(2027) 0 8000000 20000000 18 1f returned 0 (liobn = 0x80000001 starting addr = 8000000 0)
> > > > > > > > > > mlx5_core 0000:00:00.0: created tce table LIOBN 0x80000001 for /pci@800000020000000/ethernet@0
> > > > > > > > > > mlx5_core 0000:00:00.0: failed to map direct window for
> > > > > > > > > > /pci@800000020000000/ethernet@0: -1
> > > > > > > > > 
> > > > > > > > > [snip]        
> > > > > > > > > > @@ -124,7 +125,7 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
> > > > > > > > > >  		struct mm_iommu_table_group_mem_t **pmem)
> > > > > > > > > >  {
> > > > > > > > > >  	struct mm_iommu_table_group_mem_t *mem;
> > > > > > > > > > -	long i, j, ret = 0, locked_entries = 0;
> > > > > > > > > > +	long i, j, ret = 0, locked_entries = 0, pageshift;
> > > > > > > > > >  	struct page *page = NULL;
> > > > > > > > > >  
> > > > > > > > > >  	mutex_lock(&mem_list_mutex);
> > > > > > > > > > @@ -166,6 +167,8 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
> > > > > > > > > >  		goto unlock_exit;
> > > > > > > > > >  	}
> > > > > > > > > >  
> > > > > > > > > > +	mem->pageshift = 30; /* start from 1G pages - the biggest we have */
> > > > > > > > > 
> > > > > > > > > What about 16G pages on an HPT system?        
> > > > > > > > 
> > > > > > > > 
> > > > > > > > Below, in the loop, mem->pageshift is reduced to the biggest page size
> > > > > > > > actually backing the region, which will be 16MB/64K/4K, or it remains 1GB
> > > > > > > > if no memory is actually pinned; no loss there.
> > > > > > > 
> > > > > > > Are you saying that 16G IOMMU pages aren't supported?  Or that there's
> > > > > > > some reason a guest can never use them?      
> > > > > > 
> > > > > > 
> > > > > > ah, 16_G_, not _M_. My bad. I have just never tried such huge pages; I will
> > > > > > lift the limit up to a shift of 64 then, easier this way.
> > > > > 
> > > > > 
> > > > > Ah, no, rather this as the upper limit:
> > > > > 
> > > > > mem->pageshift = ilog2(entries) + PAGE_SHIFT;    
> > > > 
> > > > I can't make sense of this comment in context.  I see how you're
> > > > computing the minimum page size in the reserved region.
> > > > 
> > > > My question is about what the initial value is - the starting
> > > > value from which you calculate.  Currently it's 1G, but I can't
> > > > immediately see a reason that 16G is impossible here.  
> > > 
> > > 
> > > 16GB is impossible if the chunk we are preregistering here is smaller
> > > than that, for example if the entire guest RAM is 4GB.  
> > 
> > Of course.  Just like it was for 1GiB if you had a 512MiB guest, for
> > example.  I'm talking about a case where you have a guest that's
> > >=16GiB and you *have* allocated 16GiB hugepages to back it.  
> 
> 
> Then, assuming we are preregistering the entire RAM as a single chunk, the
> "maximum minimum" will be initialized as ">=16GiB" (but floor-aligned
> to a power of two) before the pinning loop and then reduced to the actual
> page size inside the loop. I feel like I am missing something in the
> question - what is it?

I wish I knew...

By the "maximum minimum" I'm talking about the shift value you start
with before you start looking at the actual pages.  Currently that is
exactly 30 (1 GiB).  AFAICT that prevents using 16GiB pages, and I
can't see a good reason for that.
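
To make that concrete, here is a rough, untested sketch of the scheme as I
understand it (the struct and helper names are illustrative stand-ins, not
the actual mm_iommu_get() code quoted above): start the shift at an upper
bound derived from the size of the chunk being preregistered rather than a
hard-coded 30, shrink it in the pinning loop to the smallest backing page
actually found, and later refuse any IOMMU page shift larger than that.

/*
 * Untested sketch only; simplified names, not the real
 * arch/powerpc/mm code.
 */
#include <linux/kernel.h>
#include <linux/log2.h>
#include <linux/mm.h>

struct sketch_mem_region {
	unsigned long entries;		/* number of system pages pinned */
	unsigned int pageshift;		/* smallest backing page shift seen */
};

static void sketch_compute_pageshift(struct sketch_mem_region *mem,
				     struct page **pages)
{
	unsigned long i;

	/*
	 * Upper bound: a chunk of "entries" system pages cannot be backed
	 * by a page larger than the chunk itself, so there is no need to
	 * hard-code 30 (1GiB); 16GiB pages would pass naturally.
	 */
	mem->pageshift = ilog2(mem->entries) + PAGE_SHIFT;

	for (i = 0; i < mem->entries; ++i) {
		unsigned int pageshift = PAGE_SHIFT;

		/* compound_order() is only meaningful for compound pages */
		if (PageCompound(pages[i]))
			pageshift += compound_order(compound_head(pages[i]));

		mem->pageshift = min(mem->pageshift, pageshift);
	}
}

/*
 * Both VFIO and the accelerated H_PUT_TCE path would then refuse an IOMMU
 * page bigger than the smallest pinned backing page.
 */
static bool sketch_pageshift_ok(const struct sketch_mem_region *mem,
				unsigned int iommu_pageshift)
{
	return iommu_pageshift <= mem->pageshift;
}

With that, a 16GiB-hugepage-backed region ends up with pageshift = 34, so a
16MB IOMMU page (shift 24) is still accepted, while a 64K-backed region
(pageshift = 16) stops a 16MB TCE in the fastpath instead of relying on the
userspace fallback.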

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson