[PATCH kernel v2 0/7] powerpc/powernv/ioda: Allow huge DMA window at 4GB

Oliver O'Halloran oohall at gmail.com
Wed Apr 22 19:11:46 AEST 2020


On Wed, Apr 22, 2020 at 4:49 PM Alexey Kardashevskiy <aik at ozlabs.ru> wrote:
>
> 32bit MMIO is what puzzles me in this picture, how does it work?

For devices with no m64 we allocate a PE number as described above. In
the 32bit MMIO window we have a segment-to-PE remapping table, so any
m32 segment can be assigned to any PE. As a result the slave PE concept
isn't really needed. If the BARs of a device span multiple m32
segments then we can set up the remapping table so that all the
segments point to the same PE.
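To make that concrete, here is a rough, self-contained sketch of the
remapping-table idea. It is plain illustrative C, not the actual
pci-ioda.c code; the names and sizes (M32_SEGMENTS, pe_of_segment[],
map_segments_to_pe()) are made up for this example.

/*
 * Hypothetical sketch of the m32 segment-to-PE remapping table.
 * Every m32 segment covered by a device's 32-bit BARs is pointed at
 * one PE, so no slave PE is needed for the 32-bit window.
 */
#include <stdio.h>

#define M32_SEGMENTS	256	/* assume the 32-bit MMIO window has 256 segments */
#define PE_INVALID	0xff

/* software remapping table: which PE owns each m32 segment */
static unsigned char pe_of_segment[M32_SEGMENTS];

static void map_segments_to_pe(unsigned int first_seg, unsigned int nr_segs,
			       unsigned char pe)
{
	/* all segments spanned by the device's BARs map to the same PE */
	for (unsigned int seg = first_seg; seg < first_seg + nr_segs; seg++)
		pe_of_segment[seg] = pe;
}

int main(void)
{
	for (unsigned int i = 0; i < M32_SEGMENTS; i++)
		pe_of_segment[i] = PE_INVALID;

	/* a device whose 32-bit BARs cover segments 10..13 gets PE 4 */
	map_segments_to_pe(10, 4, 4);

	printf("segment 12 -> PE %d\n", pe_of_segment[12]);
	return 0;
}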

> > I was thinking we should try minimise the number of DMA-only PEs since
> > it complicates the EEH freeze handling. When MMIO and DMA are mapped
> > to the same PE an error on either will cause the hardware to stop
> > both. When separate PEs are used for DMA and MMIO you lose that
> > atomicity. It's not a big deal if DMA is stopped and MMIO allowed
> > since PAPR (sort-of) allows that, but having MMIO frozen with DMA
> > unfrozen is a bit sketch.
>
> You suggested using slave PEs for crippled functions - won't we have the
> same problem then?

Yes, but I think it's probably worth doing in that case. You get
slightly janky EEH in exchange for better DMA performance.

> And is this "slave PE" something the hardware supports or it is a
> software concept?

It's all in software. The hardware does have the PELT-V, which allows
you to specify a group of PEs to additionally freeze when a PE is
frozen, but the PELT-V is only consulted when handling AER messages. All
other error sources (DMAs, MMIOs, etc.) will only freeze one PE (or all
of them in very rare cases).
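Roughly, the behaviour looks like the sketch below. This is purely
illustrative C, not how the PHB or skiboot actually implements it; the
names (peltv[], freeze_for_error(), the err_source enum) are invented
for the example.

/*
 * Hypothetical model of PE freezing: each PE has a bitmap of dependent
 * PEs (the PELT-V). Only AER-sourced errors fan out via that bitmap;
 * DMA and MMIO errors freeze just the offending PE.
 */
#include <stdint.h>
#include <stdio.h>

#define NR_PES 64

enum err_source { ERR_AER, ERR_DMA, ERR_MMIO };

static uint64_t peltv[NR_PES];	/* peltv[pe] = extra PEs to freeze with pe */
static uint64_t frozen;		/* bitmap of currently frozen PEs */

static void freeze_for_error(unsigned int pe, enum err_source src)
{
	frozen |= 1ull << pe;		/* the offending PE always freezes */
	if (src == ERR_AER)
		frozen |= peltv[pe];	/* only AER consults the PELT-V */
}

int main(void)
{
	peltv[3] = (1ull << 4) | (1ull << 5);	/* PEs 4 and 5 depend on PE 3 */

	freeze_for_error(3, ERR_DMA);		/* DMA error: only PE 3 freezes */
	printf("after DMA error: frozen=%#llx\n", (unsigned long long)frozen);

	freeze_for_error(3, ERR_AER);		/* AER: PEs 3, 4 and 5 freeze */
	printf("after AER error: frozen=%#llx\n", (unsigned long long)frozen);
	return 0;
}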

> > There's been no official FW releases with a skiboot that supports the
> > phb get/set option opal calls so the only systems that can actually
> > take advantage of it are our lab systems. It might still be useful for
> > future systems, but I'd rather something that doesn't depend on FW
> > support.
>
> Pensando folks use it ;)

the what folks

Oliver
