[RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

Leon Romanovsky leon at kernel.org
Tue Nov 20 16:17:44 AEDT 2018

On Tue, Nov 20, 2018 at 11:07:02AM +0800, Kenneth Lee wrote:
> On Mon, Nov 19, 2018 at 11:49:54AM -0700, Jason Gunthorpe wrote:
> > Date: Mon, 19 Nov 2018 11:49:54 -0700
> > From: Jason Gunthorpe <jgg at ziepe.ca>
> > To: Kenneth Lee <liguozhu at hisilicon.com>
> > CC: Leon Romanovsky <leon at kernel.org>, Kenneth Lee <nek.in.cn at gmail.com>,
> >  Tim Sell <timothy.sell at unisys.com>, linux-doc at vger.kernel.org, Alexander
> >  Shishkin <alexander.shishkin at linux.intel.com>, Zaibo Xu
> >  <xuzaibo at huawei.com>, zhangfei.gao at foxmail.com, linuxarm at huawei.com,
> >  haojian.zhuang at linaro.org, Christoph Lameter <cl at linux.com>, Hao Fang
> >  <fanghao11 at huawei.com>, Gavin Schenk <g.schenk at eckelmann.de>, RDMA mailing
> >  list <linux-rdma at vger.kernel.org>, Zhou Wang <wangzhou1 at hisilicon.com>,
> >  Doug Ledford <dledford at redhat.com>, Uwe Kleine-König
> >  <u.kleine-koenig at pengutronix.de>, David Kershner
> >  <david.kershner at unisys.com>, Johan Hovold <johan at kernel.org>, Cyrille
> >  Pitchen <cyrille.pitchen at free-electrons.com>, Sagar Dharia
> >  <sdharia at codeaurora.org>, Jens Axboe <axboe at kernel.dk>,
> >  guodong.xu at linaro.org, linux-netdev <netdev at vger.kernel.org>, Randy Dunlap
> >  <rdunlap at infradead.org>, linux-kernel at vger.kernel.org, Vinod Koul
> >  <vkoul at kernel.org>, linux-crypto at vger.kernel.org, Philippe Ombredanne
> >  <pombredanne at nexb.com>, Sanyog Kale <sanyog.r.kale at intel.com>, "David S.
> >  Miller" <davem at davemloft.net>, linux-accelerators at lists.ozlabs.org
> > Subject: Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce
> > Message-ID: <20181119184954.GB4890 at ziepe.ca>
> >
> > On Mon, Nov 19, 2018 at 05:14:05PM +0800, Kenneth Lee wrote:
> >
> > > If the hardware cannot share a page table with the CPU, we then need some
> > > way to update the device page table. This is what happens in ODP: it
> > > invalidates the device page table upon an mmu_notifier callback. But this
> > > cannot solve the COW problem: if user process A shares a page P with the
> > > device, then A forks a new process B and continues to write to the page,
> > > COW means process B keeps page P while A gets a new page P'. But there is
> > > no way to let the device know it should use P' rather than P.
> >
> > Is this true? I thought mmu_notifiers covered all these cases.
> >
> > The mmu_notifier for A should fire if B causes the physical address of
> > A's pages to change via COW.
> >
> > And this causes the device page tables to re-synchronize.
> I don't see such code. The current do_cow_fault() implementation has nothing to
> do with mmu_notifier.
> >
> > > In WarpDrive/uacce, we make this simple. If you have an IOMMU and it supports
> > > SVM/SVA, everything works just like ODP implicit mode, and you don't need to
> > > write any code for that, because it has been done by the IOMMU framework. If it
> >
> > Looks like the IOMMU code uses mmu_notifier, so it is identical to
> > IB's ODP. The only difference is that IB tends to have the IOMMU page
> > table in the device, not in the CPU.
> >
> > The only case I know of that is different is the new-fangled CAPI
> > stuff, where the IOMMU can directly use the CPU's page table and the
> > separate IOMMU page table (in the device or CPU) is eliminated.
> >
> Yes. We are not focusing on the current implementation. As mentioned in the
> cover letter, we are expecting Jean-Philippe's SVA patches:
> git://linux-arm.org/linux-jpb.
> > Anyhow, I don't think a single instance of hardware should justify an
> > entire new subsystem. Subsystems are hard to make and without multiple
> > hardware examples there is no way to expect that it would cover any
> > future use cases.
> Yes. That's our first expectation. We can keep it with our driver. But there is
> no user-space driver support for any accelerator in the mainline kernel; even
> the well-known QuickAssist driver has to be maintained out of tree. So we are
> trying to see if people are interested in working together to solve the problem.
> >
> > If all your driver needs is to mmap some PCI bar space, route
> > interrupts and do DMA mapping then mediated VFIO is probably a good
> > choice.
> Yes. That is what we did in our RFCv1/v2. But we accepted Jerome's opinion and
> are trying not to add complexity to the mm subsystem.
> >
> > If it needs to do a bunch of other stuff, not related to PCI bar
> > space, interrupts and DMA mapping (ie special code for compression,
> > crypto, AI, whatever) then you should probably do what Jerome said and
> > make a drivers/char/hisilicon_foo_bar.c that exposes just what your
> > hardware does.
> Yes. If no other accelerator driver writer is interested, that is what we
> expect. :)
> But we would really like to have a common solution here. Consider this scenario:
> you create some connections (queues) to the NIC, the RSA engine, and the AI
> engine. You receive data directly from the NIC and pass a pointer to the RSA
> engine for decryption. The CPU then does some processing and passes the pointer
> on to the AI engine for CNN calculation... This requires some means of
> maintaining the same address space across all of them.

You are using NIC terminology. In the documentation, you wrote that it is needed
for DPDK use, and I don't really understand why we need another shiny new
interface for DPDK.

> It is not complex, but it is helpful.
> >
> > If you have networking involved in here then consider RDMA,
> > particularly if this functionality is already part of the same
> > hardware that the hns infiniband driver is servicing.
> >
> > 'computational MRs' are a reasonable approach to a side-car offload of
> > already existing RDMA support.
> OK. Thanks. I will spend some time on it. But personally, I really don't like
> RDMA's complexity. I cannot even try a single function without some expensive
> hardware and a complex connection setup in the lab. This is not the open-source
> way.

It is not very accurate. We have the RXE driver, a virtual RDMA device
implemented purely in software. It suffers from poor performance and sporadic
failures, but it is enough to try RDMA on your laptop in a VM.
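For reference, bringing up a Soft-RoCE (rxe) device needs only a recent kernel
with the rdma_rxe module and the iproute2 `rdma` tool; `rxe0` and `eth0` below
are placeholder names, and the commands assume root:

```shell
# Load the software RoCE driver and attach it to an ordinary Ethernet device.
modprobe rdma_rxe
rdma link add rxe0 type rxe netdev eth0

# Verify the virtual RDMA device is visible to userspace (libibverbs).
ibv_devices
```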


> >
> > Jason
