[v5, 1/2] cxl: Add mechanism for delivering AFU driver specific events
Matthew R. Ochs
mrochs at linux.vnet.ibm.com
Wed Jun 15 00:47:51 AEST 2016
Vaibhav/Philippe,
Finally getting back around to looking at this.
-matt
> On May 25, 2016, at 2:22 AM, Vaibhav Jain <vaibhav at linux.vnet.ibm.com> wrote:
>
> Hi Matt,
>
> "Matthew R. Ochs" <mrochs at linux.vnet.ibm.com> writes:
>> The purpose of the count is so that the AFU driver is only called when it
>> has something to send. Otherwise we don't want to be called.
>
> Agreed, but this opens up a possible boundary condition wherein we have a
> non-zero event count but the deliver_event callback returns a NULL/empty
> struct (this isn't handled correctly at the moment). IMO the condition
> event-count == number-of-calls-to-deliver_event is a bit too rigid.
>
> Instead, a more relaxed condition could be number-of-calls-to-deliver_event ==
> count-until-deliver_event-returns-NULL. This could be implemented as a
> boolean flag inside the context indicating that the AFU driver has some
> events queued. The flag would be set when cxl_context_events_pending gets
> called. The cxl code could then simply call deliver_event on each read call
> until it returns NULL, at which point the flag is reset.
We're fine with being called until we return NULL. We just don't want to
always be called. =)
I believe the earlier discussions we had with Ian indicated that our returning
NULL (effectively 'failing') could be problematic for the read handler. Perhaps
this is no longer the case with the updated patch.
Regardless of your internal implementation, we would still like the API we
call to indicate the number of events we've enqueued and want to send to the
user. This will allow for flexibility in the future should your internal
implementation change.
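
To make the contract concrete, here is a rough sketch of what we have in mind.
The structs and the deliver_event callback below are illustrative stand-ins,
not the real cxl or cxlflash code; only the cxl_context_events_pending name
comes from this discussion, and its call is left commented out since its final
signature is still being settled.

#include <stddef.h>

struct afu_event {
	struct afu_event *next;
	int type;
};

struct afu_ctx {
	struct afu_event *head;		/* events queued for the user */
};

/* AFU driver: queue one event and tell cxl that one more is pending. */
static void afu_post_event(struct afu_ctx *ctx, struct afu_event *ev)
{
	ev->next = ctx->head;
	ctx->head = ev;
	/* cxl_context_events_pending(cxl_ctx, 1);  wakes cxl's read handler */
}

/*
 * Callback cxl would invoke from its read handler, once per advertised
 * event.  Returning NULL covers the boundary case where the advertised
 * count and the queue disagree, rather than handing back an empty struct.
 */
static struct afu_event *deliver_event(struct afu_ctx *ctx)
{
	struct afu_event *ev = ctx->head;

	if (!ev)
		return NULL;		/* stale count: nothing left to send */
	ctx->head = ev->next;
	return ev;
}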
>
> This should slightly simplify the code flow at the AFU driver end, as
> enqueuing an event need not be paired with a call to
> cxl_context_events_pending. The driver could quite possibly enqueue a bunch
> of events and then make a single call to cxl_context_events_pending. In this
> case the function cxl_context_events_pending essentially works more like a
> function named cxl_context_events_flush.
>
> ~ Vaibhav
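
For comparison, here is a rough sketch of the batched flow Vaibhav describes
above, reusing the illustrative types from the earlier sketch. The single
trailing notification is what gives cxl_context_events_pending its
'flush'-like behavior; again, the actual call is commented out since its
signature is an assumption here.

/* Batch several events, then notify cxl once with the total count. */
static void afu_post_batch(struct afu_ctx *ctx, struct afu_event evs[],
			   unsigned int n)
{
	unsigned int i;

	for (i = 0; i < n; i++) {
		evs[i].next = ctx->head;	/* queue only, no per-event call */
		ctx->head = &evs[i];
	}
	/* cxl_context_events_pending(cxl_ctx, n);  single "flush"-like call */
}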