[PATCH 1/4] dt/bindings: Introduce the FSL QorIQ DPAA BMan

Scott Wood scottwood at freescale.com
Fri Oct 31 03:29:50 AEDT 2014


On Thu, 2014-10-30 at 11:19 -0500, Emil Medve wrote:
> Hello Scott,
> 
> 
> On 10/30/2014 09:51 AM, Scott Wood wrote:
> > On Wed, 2014-10-29 at 23:32 -0500, Emil Medve wrote:
> >> Hello Scott,
> >>
> >>
> >> On 10/29/2014 05:16 PM, Scott Wood wrote:
> >>> On Wed, 2014-10-29 at 16:40 -0500, Emil Medve wrote:
> >>>> Hello Scott,
> >>>>
> >>>>
> >>>> On 10/28/2014 01:08 PM, Scott Wood wrote:
> >>>>> On Tue, 2014-10-28 at 09:36 -0500, Kumar Gala wrote:
> >>>>>> On Oct 22, 2014, at 9:09 AM, Emil Medve <Emilian.Medve at freescale.com> wrote:
> >>>>>>
> >>>>>>> The Buffer Manager is part of the Data-Path Acceleration Architecture (DPAA).
> >>>>>>> BMan supports hardware allocation and deallocation of buffers belonging to
> >>>>>>> pools originally created by software with configurable depletion thresholds.
> >>>>>>> This binding covers the CCSR space programming model.
> >>>>>>>
> >>>>>>> Signed-off-by: Emil Medve <Emilian.Medve at Freescale.com>
> >>>>>>> Change-Id: I3ec479bfb3c91951e96902f091f5d7d2adbef3b2
> >>>>>>> ---
> >>>>>>> .../devicetree/bindings/powerpc/fsl/bman.txt       | 98 ++++++++++++++++++++++
> >>>>>>> 1 file changed, 98 insertions(+)
> >>>>>>> create mode 100644 Documentation/devicetree/bindings/powerpc/fsl/bman.txt
> >>>>>>
> >>>>>> Should these really be in bindings/powerpc/fsl? Aren't you guys using this on ARM SoCs as well?
> >>>>>
> >>>>> The hardware on the ARM SoCs is different enough that I'm not sure the
> >>>>> same binding will cover it.  That said, putting things under <arch>
> >>>>> should be a last resort if nowhere else fits.
> >>>>
> >>>> OTC started porting the driver to the ARM SoC and the feedback has
> >>>> been that the driver needed minimal changes. The IOMMU has been the
> >>>> only area of concern, and a small change to the binding has been
> >>>> suggested.
> >>>
> >>> Do we need something in the binding to indicate device endianness?
> >>
> >> As I said, I didn't have enough exposure to the ARM SoC, so I can't
> >> answer that.
> >>
> >>> If this binding is going to continue to be relevant to future DPAA
> >>> generations, I think we really ought to deal with the possibility that
> >>> there is more than one datapath instance
> >>
> >> I'm unsure how relevant this will be going forward. In LS2, B/QMan
> >> is abstracted/hidden away behind the MC (firmware).
> > 
> > This is why I was wondering whether the binding would be at all the
> > same...
> > 
> >> I wouldn't over-engineer this without a clear picture of what
> >> multiple data-paths per SoC even means at this point.
> > 
> > I don't think it's over-engineering.  Assuming only one instance of
> > something is generally sloppy engineering.  Linux doesn't need to
> > actually pay attention to it until and unless it becomes necessary, but
> > it's good to have the information in the device tree up front.
> 
> I asked around, and the "multiple data-path SoC" seems, at this point,
> to be speculation. It's unclear how it would work, what
> requirements/problems it would address/solve, and what programming
> interface it would have. I'm not sure what you suggest we do.
> 
> In order to reduce the sloppiness of this binding, I'll add a
> memory-region phandle to connect each B/QMan node to its
> reserved-memory node.

Thanks, that's the sort of thing I was looking for.  There should also
be a connection from the portals to the relevant bqman node, though we
need to deal with the possibility that the bqman node may not be present
(e.g. in a VM guest).
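
Something like the sketch below is what I have in mind.  To be clear,
this is only an illustration: the fsl,bman phandle on the portal node
is a suggestion rather than part of the posted binding, and the reg,
interrupt and size values are placeholders.

	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		bman_fbpr: bman-fbpr {
			compatible = "fsl,bman-fbpr";
			size = <0 0x1000000>;
			alignment = <0 0x1000000>;
		};
	};

	bman: bman@31a000 {
		compatible = "fsl,bman";
		reg = <0x31a000 0x1000>;
		interrupts = <16 2 1 2>;
		memory-region = <&bman_fbpr>;
	};

	bman-portal@0 {
		compatible = "fsl,bman-portal";
		reg = <0x0 0x4000>, <0x100000 0x1000>;
		interrupts = <105 2 0 0>;
		/* Suggested (not yet in the binding): link to the
		 * managing BMan.  A portal-only device tree, e.g. for
		 * a VM guest, would simply omit this property. */
		fsl,bman = <&bman>;
	};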

> >>> by having phandles and/or a parent container to connect the related
> >>> components.
> >>
> >> Connecting the related components is beyond the scope of this binding.
> >> It will soon hit the e-mail list(s) as part of upstreaming the
> >> Ethernet driver.
> > 
> > So you want us to merge this binding without being told how this works?
> 
> This binding stands on its own, and each block (B/QMan) can be used for
> some useful purpose by itself. All other blocks/applications that use
> the B/QMan use the same basic interface: acquire/release a "buffer" and
> enqueue/dequeue a "packet". I'm not sure what you feel I didn't share.

So there's no hardware connection between the bman and qman themselves?

-Scott



