[PATCH kernel v4 19/19] vfio_pci: Add NVIDIA GV100GL [Tesla V100 SXM2] subdriver

Alex Williamson alex.williamson at redhat.com
Tue Dec 11 12:27:32 AEDT 2018


On Tue, 11 Dec 2018 11:57:20 +1100
Alexey Kardashevskiy <aik at ozlabs.ru> wrote:

> On 11/12/2018 11:08, Alex Williamson wrote:
> > On Fri, 23 Nov 2018 16:53:04 +1100
> > Alexey Kardashevskiy <aik at ozlabs.ru> wrote:
> >   
> >> POWER9 Witherspoon machines come with 4 or 6 V100 GPUs which are not
> >> pluggable PCIe devices but still have PCIe links which are used
> >> for config space and MMIO. In addition to that, the GPUs have 6 NVLinks
> >> each, connected to other GPUs and to the POWER9 CPU. POWER9 chips
> >> have a special unit on the die called an NPU which is an NVLink2 host bus
> >> adapter with p2p connections to 2 or 3 GPUs, with 3 or 2 NVLinks to each.
> >> These systems also support ATS (address translation services) which is
> >> a part of the NVLink2 protocol. Such GPUs also expose their on-board RAM
> >> (16GB or 32GB) to the system via the same NVLink2, so the CPU has
> >> cache-coherent access to the GPU RAM.
> >>
> >> This exports GPU RAM to userspace as a new VFIO device region and
> >> preregisters the new memory as device memory as it might be used for
> >> DMA. The pfns are inserted from the fault handler because the GPU
> >> memory is not onlined until the vendor driver has loaded and trained
> >> the NVLinks; doing this earlier causes low level errors which we fence
> >> in the firmware so they do not hurt the host system, but it is still
> >> better avoided.
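> >>
> >> For illustration, the fault handler boils down to turning the VMA
> >> offset into a GPU RAM pfn (a simplified sketch, not the exact code;
> >> the real handler is in vfio_pci_nvlink2.c in the full patch):
> >>
> >> 	static vm_fault_t vfio_pci_nvgpu_mmap_fault(struct vm_fault *vmf)
> >> 	{
> >> 		struct vm_area_struct *vma = vmf->vma;
> >> 		struct vfio_pci_region *region = vma->vm_private_data;
> >> 		struct vfio_pci_nvgpu_data *data = region->data;
> >> 		unsigned long off = vmf->address - vma->vm_start;
> >>
> >> 		/* GPU RAM has no struct page, so insert the raw pfn */
> >> 		return vmf_insert_pfn(vma, vmf->address,
> >> 				(data->gpu_hpa + off) >> PAGE_SHIFT);
> >> 	}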
> >>
> >> This also exports an ATSD (Address Translation Shootdown) register of
> >> the NPU which allows an operating system to do TLB invalidations
> >> inside the GPU. The register conveniently occupies a single 64k page.
> >> It is also presented to userspace as a new VFIO device region.
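> >>
> >> From userspace the ATSD page is mapped like any other VFIO region; a
> >> sketch (atsd_region_index is assumed to have been discovered already):
> >>
> >> 	struct vfio_region_info reg = { .argsz = sizeof(reg),
> >> 			.index = atsd_region_index };
> >> 	void *atsd_map;
> >>
> >> 	ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &reg);
> >> 	atsd_map = mmap(NULL, reg.size, PROT_READ | PROT_WRITE,
> >> 			MAP_SHARED, device_fd, reg.offset);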
> >>
> >> In order to provide userspace with the information about GPU-to-NVLink
> >> connections, this exports an additional capability called "tgt"
> >> (which is an abbreviated host system bus address). The "tgt" property
> >> tells the GPU its own system address and allows the guest driver to
> >> assemble the routing information so that each GPU knows how to get
> >> directly to the other GPUs.
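> >>
> >> Userspace would find the capability by walking the region info
> >> capability chain, roughly as below (info points to a vfio_region_info
> >> buffer sized large enough to hold the capabilities):
> >>
> >> 	__u32 off = (info->flags & VFIO_REGION_INFO_FLAG_CAPS) ?
> >> 			info->cap_offset : 0;
> >>
> >> 	while (off) {
> >> 		struct vfio_info_cap_header *hdr = (void *) info + off;
> >>
> >> 		if (hdr->id == VFIO_REGION_INFO_CAP_NPU2) {
> >> 			struct vfio_region_info_cap_npu2 *cap = (void *) hdr;
> >> 			/* use cap->tgt and cap->link_speed */
> >> 		}
> >> 		off = hdr->next;	/* 0 terminates the chain */
> >> 	}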
> >>
> >> For ATS to work, the nest MMU (an NVIDIA block in a P9 CPU) needs to
> >> know the LPID (a logical partition ID, or in other words a KVM guest
> >> hardware ID) and the PID (a memory context ID of a userspace process,
> >> not to be confused with a Linux pid). This assigns a GPU to an LPID in
> >> the NPU, which is why this adds a listener for KVM on an IOMMU group.
> >> A PID comes via NVLink from a GPU and the NPU uses a PID wildcard to
> >> pass it through.
> >>
> >> This requires coherent memory and ATSD to be available on the host as
> >> the GPU vendor only supports configurations with both features enabled;
> >> other configurations are known not to work. Because of this, and
> >> because of the way the features are advertised to the host system
> >> (via a device tree with very platform specific properties), this
> >> requires the POWERNV platform to be enabled.
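> >>
> >> In Kconfig terms this is roughly the following (a sketch; the actual
> >> hunk is in the full patch):
> >>
> >> 	config VFIO_PCI_NVLINK2
> >> 		def_bool y
> >> 		depends on VFIO_PCI && PPC_POWERNV
> >> 		help
> >> 		  VFIO PCI support for P9 Witherspoon with NVIDIA V100 GPUs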
> >>
> >> The V100 GPUs do not advertise none of these capabilities via the config  
> > 
> > s/none/any/
> >   
> >> space, and there is more than one device ID, so this relies on
> >> the platform to tell whether these GPUs have special abilities such as
> >> NVLinks.
> >>
> >> Signed-off-by: Alexey Kardashevskiy <aik at ozlabs.ru>
> >> ---
> >> Changes:
> >> v4:
> >> * added nvlink-speed to the NPU bridge capability as this turned out
> >> not to be a constant value
> >> * instead of looking at the exact device ID (which also changes from
> >> system to system), this now (indirectly) looks at the device tree to
> >> know whether the GPU and NPU support NVLink
> >>
> >> v3:
> >> * reworded the commit log about tgt
> >> * added tracepoints (do we want them enabled for the entire vfio-pci?)
> >> * added code comments
> >> * added write|mmap flags to the new regions
> >> * auto enabled VFIO_PCI_NVLINK2 config option
> >> * added 'tgt' capability to a GPU so QEMU can recreate ibm,npu and ibm,gpu
> >> references; these are required by the NVIDIA driver
> >> * keep the notifier registered only for a short time
> >> ---
> >>  drivers/vfio/pci/Makefile           |   1 +
> >>  drivers/vfio/pci/trace.h            | 102 +++++++
> >>  drivers/vfio/pci/vfio_pci_private.h |   2 +
> >>  include/uapi/linux/vfio.h           |  27 ++
> >>  drivers/vfio/pci/vfio_pci.c         |  37 ++-
> >>  drivers/vfio/pci/vfio_pci_nvlink2.c | 448 ++++++++++++++++++++++++++++
> >>  drivers/vfio/pci/Kconfig            |   6 +
> >>  7 files changed, 621 insertions(+), 2 deletions(-)
> >>  create mode 100644 drivers/vfio/pci/trace.h
> >>  create mode 100644 drivers/vfio/pci/vfio_pci_nvlink2.c
> >>
> >> diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
> >> index 76d8ec0..9662c06 100644
> >> --- a/drivers/vfio/pci/Makefile
> >> +++ b/drivers/vfio/pci/Makefile
> >> @@ -1,5 +1,6 @@
> >>  
> >>  vfio-pci-y := vfio_pci.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o
> >>  vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o
> >> +vfio-pci-$(CONFIG_VFIO_PCI_NVLINK2) += vfio_pci_nvlink2.o
> >>  
> >>  obj-$(CONFIG_VFIO_PCI) += vfio-pci.o  
> > ...  
> >> diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
> >> index 93c1738..7639241 100644
> >> --- a/drivers/vfio/pci/vfio_pci_private.h
> >> +++ b/drivers/vfio/pci/vfio_pci_private.h
> >> @@ -163,4 +163,6 @@ static inline int vfio_pci_igd_init(struct vfio_pci_device *vdev)
> >>  	return -ENODEV;
> >>  }
> >>  #endif
> >> +extern int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev);
> >> +extern int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev);
> >>  #endif /* VFIO_PCI_PRIVATE_H */
> >> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> >> index 8131028..547e71e 100644
> >> --- a/include/uapi/linux/vfio.h
> >> +++ b/include/uapi/linux/vfio.h
> >> @@ -353,6 +353,20 @@ struct vfio_region_gfx_edid {
> >>  #define VFIO_DEVICE_GFX_LINK_STATE_DOWN  2
> >>  };
> >>  
> >> +/* 10de vendor sub-type
> >> + *
> >> + * NVIDIA GPU NVlink2 RAM is coherent RAM mapped onto the host address space.
> >> + */  
> > 
> > nit, prefer the comment style below, leaving the first line of a
> > multi-line comment empty, per coding style.
> >   
> >> +#define VFIO_REGION_SUBTYPE_NVIDIA_NVLINK2_RAM	(1)
> >> +
> >> +/*
> >> + * 1014 vendor sub-type
> >> + *
> >> + * IBM NPU NVlink2 ATSD (Address Translation Shootdown) register of NPU
> >> + * to do TLB invalidation on a GPU.
> >> + */
> >> +#define VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD	(1)
> >> +
> >>  /*
> >>   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
> >>   * which allows direct access to non-MSIX registers which happened to be within
> >> @@ -363,6 +377,19 @@ struct vfio_region_gfx_edid {
> >>   */
> >>  #define VFIO_REGION_INFO_CAP_MSIX_MAPPABLE	3
> >>  
> >> +/*
> >> + * Capability with compressed real address (aka SSA - small system address)
> >> + * where GPU RAM is mapped on a system bus. Used by a GPU for DMA routing.
> >> + */
> >> +#define VFIO_REGION_INFO_CAP_NPU2		4
> >> +
> >> +struct vfio_region_info_cap_npu2 {
> >> +	struct vfio_info_cap_header header;
> >> +	__u64 tgt;
> >> +	__u32 link_speed;
> >> +	__u32 __pad;
> >> +};
> >> +
> >>  /**
> >>   * VFIO_DEVICE_GET_IRQ_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 9,
> >>   *				    struct vfio_irq_info)
> >> diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
> >> index 6cb70cf..b8a53f9 100644
> >> --- a/drivers/vfio/pci/vfio_pci.c
> >> +++ b/drivers/vfio/pci/vfio_pci.c
> >> @@ -224,6 +224,16 @@ static bool vfio_pci_nointx(struct pci_dev *pdev)
> >>  	return false;
> >>  }
> >>  
> >> +int __weak vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev)
> >> +{
> >> +	return -ENODEV;
> >> +}
> >> +
> >> +int __weak vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev)
> >> +{
> >> +	return -ENODEV;
> >> +}
> >> +  
> > 
> > Why not static inlines in vfio_pci_private.h like we do for igd hooks?
> > 
> > ...  
> 
> 
> Because the earlier review suggested doing a "weak definition" and I
> took it literally :) I'll make it inline.

Oops, that was from me, huh.  Functionally equivalent, but we know
deterministically that we only need this code on ppc, it's not like
some module might provide it externally, and it's more consistent with
igd.  Sorry for the runaround.
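Ie. mirroring the igd stubs in vfio_pci_private.h, something like
(untested):

#ifdef CONFIG_VFIO_PCI_NVLINK2
extern int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev);
extern int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev);
#else
static inline int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev)
{
	return -ENODEV;
}

static inline int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev)
{
	return -ENODEV;
}
#endif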

> >>  static void vfio_pci_disable(struct vfio_pci_device *vdev)
> >> diff --git a/drivers/vfio/pci/vfio_pci_nvlink2.c b/drivers/vfio/pci/vfio_pci_nvlink2.c
> >> new file mode 100644
> >> index 0000000..e8e06c3
> >> --- /dev/null
> >> +++ b/drivers/vfio/pci/vfio_pci_nvlink2.c  
> > ...  
> >> +static int vfio_pci_nvgpu_mmap(struct vfio_pci_device *vdev,
> >> +		struct vfio_pci_region *region, struct vm_area_struct *vma)
> >> +{
> >> +	long ret;
> >> +	struct vfio_pci_nvgpu_data *data = region->data;
> >> +
> >> +	if (data->useraddr)
> >> +		return -EPERM;
> >> +
> >> +	if (vma->vm_end - vma->vm_start > data->size)
> >> +		return -EINVAL;
> >> +
> >> +	vma->vm_private_data = region;
> >> +	vma->vm_flags |= VM_PFNMAP;
> >> +	vma->vm_ops = &vfio_pci_nvgpu_mmap_vmops;
> >> +
> >> +	/*
> >> +	 * Call mm_iommu_newdev() here once as the region is not
> >> +	 * registered yet and therefore the right initialization will
> >> +	 * happen now. Other places will use mm_iommu_find() which
> >> +	 * returns the registered @mem and does not call gup().
> >> +	 */
> >> +	data->useraddr = vma->vm_start;
> >> +	data->mm = current->mm;
> >> +
> >> +	atomic_inc(&data->mm->mm_count);
> >> +	ret = mm_iommu_newdev(data->mm, data->useraddr,
> >> +			(vma->vm_end - vma->vm_start) >> PAGE_SHIFT,
> >> +			data->gpu_hpa, &data->mem);
> >> +
> >> +	trace_vfio_pci_nvgpu_mmap(vdev->pdev, data->gpu_hpa, data->useraddr,
> >> +			vma->vm_end - vma->vm_start, ret);
> >> +
> >> +	return ret;  
> > 
> > It's unfortunate that all these mm_iommu_foo functions return long
> > while this function returns int, which made me go down the rabbit hole
> > to see what mm_iommu_newdev() and therefore mmio_iommu_do_alloc() can
> > return.  Can you do a translation somewhere so this doesn't look like
> > a possible overflow?  Thanks,
> 
> 
> This is not a new thing - gcc produces less assembly for ppc64 if long
> is used, which is why I stick to longs. So I have 2 options here:
> change all mm_iommu_xxxx to return int (I'd rather not) or change the
> @ret type here from long to int - will the latter be ok?

I guess I'd do the latter, use int for ret and cast the return of
mm_iommu_newdev(), ie. something like (untested sketch):
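
	int ret;
	...
	ret = (int) mm_iommu_newdev(data->mm, data->useraddr,
			(vma->vm_end - vma->vm_start) >> PAGE_SHIFT,
			data->gpu_hpa, &data->mem);

Thanks,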

Alex

