[RFC PATCH 0/7] A General Accelerator Framework, WarpDrive

Kenneth Lee liguozhu at hisilicon.com
Fri Aug 10 11:37:40 AEST 2018


On Thu, Aug 09, 2018 at 08:31:31AM +0000, Tian, Kevin wrote:
> Date: Thu, 9 Aug 2018 08:31:31 +0000
> From: "Tian, Kevin" <kevin.tian at intel.com>
> To: Kenneth Lee <liguozhu at hisilicon.com>, Jerome Glisse <jglisse at redhat.com>
> CC: Kenneth Lee <nek.in.cn at gmail.com>, Alex Williamson
>  <alex.williamson at redhat.com>, Herbert Xu <herbert at gondor.apana.org.au>,
>  "kvm at vger.kernel.org" <kvm at vger.kernel.org>, Jonathan Corbet
>  <corbet at lwn.net>, Greg Kroah-Hartman <gregkh at linuxfoundation.org>, Zaibo
>  Xu <xuzaibo at huawei.com>, "linux-doc at vger.kernel.org"
>  <linux-doc at vger.kernel.org>, "Kumar, Sanjay K" <sanjay.k.kumar at intel.com>,
>  Hao Fang <fanghao11 at huawei.com>, "linux-kernel at vger.kernel.org"
>  <linux-kernel at vger.kernel.org>, "linuxarm at huawei.com"
>  <linuxarm at huawei.com>, "iommu at lists.linux-foundation.org"
>  <iommu at lists.linux-foundation.org>, "linux-crypto at vger.kernel.org"
>  <linux-crypto at vger.kernel.org>, Philippe Ombredanne
>  <pombredanne at nexb.com>, Thomas Gleixner <tglx at linutronix.de>, "David S .
>  Miller" <davem at davemloft.net>, "linux-accelerators at lists.ozlabs.org"
>  <linux-accelerators at lists.ozlabs.org>
> Subject: RE: [RFC PATCH 0/7] A General Accelerator Framework, WarpDrive
> Message-ID: <AADFC41AFE54684AB9EE6CBC0274A5D1912B39B3 at SHSMSX101.ccr.corp.intel.com>
> 
> > From: Kenneth Lee [mailto:liguozhu at hisilicon.com]
> > Sent: Thursday, August 9, 2018 4:04 PM
> > 
> > But we have another requirement, which is to combine some devices together
> > to share the same address space. This is a little like these kinds of
> > solutions:
> > 
> > http://tce.technion.ac.il/wp-content/uploads/sites/8/2015/06/SC-7.2-M.-Silberstein.pdf
> > 
> > With that, the application can directly pass the NIC packet pointer to the
> > decryption accelerator and get the bare data in place. This is the feature
> > that the VFIO container can provide.
> 
> The above is not a good argument, at least in the context of your discussion.
> If each device has its own interface (similar to a GPU) for a process to bind
> to, then by binding the process to multiple devices one by one you still get
> the same address space shared across them...

If we consider this from the VFIO container perspective: with a container, you
can set up a DMA mapping once against the container and it applies to all of
its devices, even a device added after the DMA mapping was made.

So your argument holds only when SVM is enabled and the whole process address
space is devoted to the devices.

Yes, the process can do the same all by itself. But if we accept that, it
makes no sense to keep the container concept in VFIO ;)

> 
> Thanks
> Kevin

-- 
			-Kenneth(Hisilicon)

================================================================================
This e-mail and its attachments contain confidential information from HUAWEI,
which is intended only for the person or entity whose address is listed above.
Any use of the information contained herein in any way (including, but not
limited to, total or partial disclosure, reproduction, or dissemination) by
persons other than the intended recipient(s) is prohibited. If you receive
this e-mail in error, please notify the sender by phone or email immediately
and delete it!


