sbefifo userspace api
Brad Bishop
bradleyb at fuzziesquirrel.com
Sat Feb 25 15:25:15 AEDT 2017
[snip]
>
> To me, one key thing a kernel driver should be providing is resource
> sharing. Without finding the sbei spec, I have heard the description
> is one outstanding op at a time: send a request and the sbe replies
> with a response; asynchronous notifications are done by issuing a poll
> request.
That is my understanding as well.
>
> For this interface to be shared by the kernel and user programs, it
> would seem that the write data would have to be submitted in one
> system call and this file descriptor would be the only one that
> could satisfy a read, and the data should be read in one system call.
>
>
> If the data is just a byte stream then how can multiple users or
> sources share the fifo?
By treating it as fifo-like and not storage-like. If a write hasn’t
been done to an fd and a read occurs on that fd, the read would block
until a write happens (on that same fd).
> Each agent would get a random response
> and even with exclusive open the occ driver would have to be
> stopped or paused or unbound for the user operation to occur.
>
> Am I missing something fundamental?
>
> milton
>
>>>
>>> Below are not assertions but relevant items yet to be resolved, and
>>> should not prevent moving forward with implementation of the above.
>>>
>>> - Whether or not an ‘sbescom’ DD is valuable. If it is, it will
>>> assemble requests, parse responses for whatever chip-ops it
>> requires
>>> and use the sbefifo in-kernel api in the same manner as the
>> occ-hwmon DD.
>>> imho, this feels like convenience code since we have a real hw scom
>>> CFAM engine with UAPI. A wrapper that hides this distinction could
>>> just as easily be done in userspace without another DD.
>>
>> Thanks for taking the time to write this up! I don't have any
>> concerns
>> with the above summary. I agree the sbescom driver seems unnecessary
>> as whether to use chip-ops or real HW scom is something that
>> userspace
>> (rather than the kernel) should decide anyway, so I'm not sure what
>> value having that wrapper in the kernel would have.
>>
>> - Alistair
>>
>>> Thanks to everyone participating in the discussion.
>>>
>>> -brad
>>>
>>>> On Feb 22, 2017, at 12:59 PM, Alistair Popple
>> <alistair at popple.id.au> wrote:
>>>>
>>>> On Wed, 22 Feb 2017 10:53:12 PM Venkatesh Sainath wrote:
>>>>
>>>>> In general, I agree with this approach to contain only those
>> interfaces
>>>>> required by hwmon within the kernel and leave the rest to the
>>>>> user-space. But, we need to interlock with the hwmon to
>> understand
>>>>> exactly which interfaces are required.
>>>>> IMO, we need (a) Get/Put Memory for tunneling data to OCC and
>> back (b)
>>>>> Get/Put Scom? (c) Get/Put SRAM (?) for getting nominal voltages.
>> If
>>>>> these three are the right ones, then we need to contain them
>> within the
>>>>> kernel.
>>>>
>>>> Where are the OCC interfaces documented? We need to get a better
>>>> understanding of these.
>>>>
>>>> Regardless I think you need a way for userspace to do chip-ops
>> (for
>>>> debug/testing/development/etc). So I guess the question is then
>> which
>>>> ones need a kernel implementation, and if there is a kernel
>>>> implementation should userspace use that or its own
>> implementation?
>>>>
>>>> I think it's easiest to just have the userspace library contain
>> all
>>>> the necessary chip-ops and use those. Formatting of chip-ops
>> seems
>>>> like it should be a trivial amount of straightforward code, so the
>>>> minor duplication shouldn't be a concern (especially if we leave
>>>> framing up to the kernel). It's simpler than maintaining a long
>> list
>>>> of ioctls which would still need a userspace library imho.
>>>>
>>>>>>> I don't think it is a problem having get/putscom as an
>> exceptional
>>>>>>> case for the reasons described above. My question about the
>>>>>>> consistency argument is what does it get us?
>>>>>>>
>>>>>>> I think Venkatesh was suggesting it would reduce duplication
>> of
>>>>>>> protocol driver code, however perhaps this isn't the case. We
>> could
>>>>>>> create a kernel chip-ops driver that deals with sending a
>> buffer and
>>>>>>> getting a response but doesn't have to know anything about the
>> chip-op
>>>>>>> itself. It could also implement the kernel sbe get/putscom.
>>>>>>>
>>>>>>> What I am suggesting is the chip-op driver deal with 1.1.2.4 &
>> 1.1.2.5
>>>>>>> of the sbe interface spec. Userspace would submit
>> command-class,
>>>>>>> command-code & data words and the chip-ops driver would
>> forward that
>>>>>>> to the SBE and send the response back to userspace without
>> having to
>>>>>>> know what any of that data means.
>>>>>> I like it!
>>>>>>
>>>>>> Would this ‘chip-op’ driver be distinct from the sbefifo driver
>> or
>>>>>> are you proposing a possible sbefifo driver API with a tad bit
>> more
>>>>>> abstraction?
>>>>>>
>>>>>> If we did something like this, does a standalone sbescom driver
>> still
>>>>>> have any value?
>>>>> I think the proposal is to have a combined sbefifo+sbechipop
>> driver that
>>>>> provides a user-space api to submit operations to sbefifo and a
>> user-space
>>>>> api for those chipops contained within (scom, memory and sram).
>> The
>>>>> user space library will have the other sbe interfaces and will
>> call the
>>>>> submit operations of the chip-op driver. Is this correct? We
>> could have
>>>>> the user-space library provide a wrapper interface for even the
>> chip-ops
>>>>> contained within the kernel so that the caller doesn't have to
>> know
>>>>> whether to call the library or the driver apis.
>>>>>>> In any case I think you will end up with a userspace
>> implementation of
>>>>>>> chip-ops anyway as it is much easier to test and develop new
>> chip-ops
>>>>>>> without having to also understand how to build and flash a new
>> kernel.
>>>>>>> Inevitably someone will want to add some kind of "debug"
>> chip-op or
>>>>>>> other private chip-op that they don't want published in kernel
>> code,
>>>>>>> and as soon as you have one userspace chip-op you may as well
>> do all
>>>>>>> of the ones you can from there.
>>>>>>>
>>>>>>>>> Of course we can implement just the get/put scom chip ops in
>> kernel and the others in user space. I just want to make sure
>> everyone understands exactly what we’d be doing there and would be OK
>> with that approach.
>>>>>>>>>
>>>>>>>>>> There's the OCC hwmon driver - what chip-ops does that
>>>>>>>>>> need? Just get/putscom?
>>>>>>>>> Correct, just get/putscom. But sbe scom, not direct scom.
>>>>>>>> occ hwmon driver should use get/put memory to communicate
>> with OCC. One
>>>>>>> What memory is the OCC using to store these buffers? Does it
>> have
>>>>>>> memory mapped onto the PowerBus into an MMIO space or
>> something?
>>>>>>>> other option ( I am not too happy about this ) is to embed
>> the get/put
>>>>>>>> memory protocol construction inside occ hwmon driver and make
>> it call
>>>>>>>> sbe fifo driver interface directly. The sbei protocol driver
>> can then be
>>>>>>>> in the user-space as a library and call the sbe fifo driver
>> interface
>>>>>>>> via ioctl. This creates duplication of the protocol driver
>> code and we
>>>>>>>> have to fix it in multiple places.
>>>>>>>>>> If that's the case I thought we were already
>>>>>>>>>> exposing scom chip-op operations via an OpenFSI master?
>>>>>>>>> I have the same consistency question here. Why is it
>> preferable to have a UAPI for these chip-ops but not the others?
>>>>>>>> openfsi driver is not exposing chipops. The sbe fifo driver
>> must be
>>>>>>>> calling the openfsi driver interfaces to write to the fifo.
>> The openfsi
>>>>>>>> driver must also be providing a UAPI for non-sbe scom or
>> cfam-write
>>>>>>>> operations via cronus, pdbg or rest api.
>>>>>>>>
>>>>>>>> Net: the openfsi driver needs both intra-kernel and
>> user-space APIs.
>>>>>>>> The sbe fifo driver also needs both intra-kernel and
>> user-space APIs
>> if the sbei protocol driver goes to user-space.
>>>>>>>>>> Regards,
>>>>>>>>>>
>>>>>>>>>> Alistair
>>>>>>>>>>
>>>>>>>>>>> thx - brad
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> On Feb 20, 2017, at 12:05 AM, Venkatesh Sainath
>> <vsainath at linux.vnet.ibm.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Hi Brad,
>>>>>>>>>>>>
>>>>>>>>>>>> I am not sure why we need a user-space api for SBE FIFO
>> driver. Even the pdbg and cronus apps have to go through the SBEI
>> protocol driver in order to construct the packets according to the
>> SBEI interface spec, which only the SBEI protocol driver does. I
>> think only an in-kernel api is sufficient for the SBE FIFO driver.
>>>>>>>>>>>>
>>>>>>>>>>>> However, for the SBEI protocol driver, we would need both
>> in-kernel ( for use by OCC hwmon ) and user-space api ( for use by
>> pdbg, cronus and rest apis ).
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks
>>>>>>>>>>>>
>>>>>>>>>>>> With regards
>>>>>>>>>>>> Venkatesh
>>>>>>>>>>>>
>>>>>>>>>>>> On 20/02/17 8:17 AM, Brad Bishop wrote:
>>>>>>>>>>>>> Thanks Jeremy for the reply. I’ve added participants
>> from this thread:
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>> https://lists.ozlabs.org/pipermail/openbmc/2017-February/006563.html
>>>>>>>>>>>>>
>>>>>>>>>>>>> in an attempt to consolidate the whole sbefifo design
>> discussion to a single thread.
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Brad,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Looking to start a discussion around possible user
>> space and kernel
>>>>>>>>>>>>>>> APIs for the POWER9 sbefifo driver.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> There exists today an “alternate" sbefifo driver :-)
>> that provides a
>>>>>>>>>>>>>>> single submit ioctl. Applications submit a request
>> and get a reply in
>>>>>>>>>>>>>>> a single system call.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Is something like that the best approach for an
>> upstream driver? Or
>>>>>>>>>>>>>>> should we try something more "pipe like" with
>> read/write interfaces?
>>>>>>>>>>>>>> It probably depends on the functionality there; ioctl()
>> is useful in
>>>>>>>>>>>>>> that (as you say) we can handle request and response in
>> a single
>>>>>>>>>>>>>> syscall, read() / write() may be more appropriate if
>> ordering can be
>>>>>>>>>>>>>> handled in userspace.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Can you add a little description about the
>> functionality we're exposing?
>>>>>>>>>>>>>> That may suggest a particular API.
>>>>>>>>>>>>> In terms of hardware it's pretty simple. sbefifo is just
>> two 8-word queues for sending/receiving messages to/from the SBE.
>> Each queue entry has a single ‘end of transfer’ flag to let the other
>> side know the message is done.
>>>>>>>>>>>>>
>>>>>>>>>>>>> In terms of data flowing through it, there is an SBEI
>> protocol that covers encoding operations (like getscom, getmem, etc..
>> aka chip-ops) and the SBE response.
>>>>>>>>>>>>>
>>>>>>>>>>>>> For users there seem to be two classes:
>>>>>>>>>>>>>
>>>>>>>>>>>>> 1 - user space wanting to do chip-ops (pdbg, cronus).
>>>>>>>>>>>>> 2 - device drivers wanting to do chip-ops (occ-hwmon).
>>>>>>>>>>>>>
>>>>>>>>>>>>> If it weren’t for occ-hwmon, it doesn’t seem like there
>> would be any need for the kernel to have any knowledge of the data
>> flowing through the fifo (at the moment anyway). An sbe-scom driver
>> has been suggested but I wonder what the point of that driver would
>> be, if userspace could simply encode a get/putscom chip-op and use
>> the fifo directly.
>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Would the in-kernel API be the same as the user space
>> API?
>>>>>>>>>>>>>> Probably not :)
>>>>>>>>>>>>> I realize they wouldn’t be _exactly_ the same if that’s
>> why I got the smiley face :-) ...
>>>>>>>>>>>>>
>>>>>>>>>>>>> But I would have figured they’d at least be similar -
>> meaning if we were to go the ‘submit’ route for a UAPI...the kernel
>>>>>>>>>>>>> would probably not have a split read/write API or vice
>> versa.
>>>>>>>>>>>>>
>>>>>>>>>>>>> So to rephrase the question - would the chardev fops
>> implementation simply be something like this:
>>>>>>>>>>>>>
>>>>>>>>>>>>> sbe-uapi-fops(chardev)
>>>>>>>>>>>>> data = copy to/from user space;
>>>>>>>>>>>>> sbedev = from_chardev(chardev);
>>>>>>>>>>>>> kernel-api(sbedev, data);
>>>>>>>>>>>>>
>>>>>>>>>>>>> Or are there other things to consider here?
>>>>>>>>>>>>>
>>>>>>>>>>>>> -thx
>>>>>>>>>>>>>
>>>>>>>>>>>>> brad
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Cheers,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jeremy