Proposal for caching/buffering POST codes list for one boot process.

Wang, Kuiying kuiying.wang at intel.com
Thu Aug 30 17:23:40 AEST 2018


Hi Brad,
1. I accept your suggestion.
     Define an interface PostCodeList.yaml under "/xyz/openbmc_project/State/Boot".
2. I accept your suggestion.
     Define a property "List" w/ type "array[uint64]".
3. I accept your suggestion.
     Create a new dbus object for each boot cycle instead of using an end flag (a rough sketch follows below).
4. Compared with "phosphor-host-postd", it acts as a client; but for other modules like the IPMI command tool, it is kind of a server.
     How about "phosphor-post-code-manager"? Do you think it is ok?
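
As a rough sketch of points 1-3 only (using sdbusplus' asio object_server; the service name, interface name, and object-path layout below are placeholders for illustration, not the final names we have agreed on):

#include <sdbusplus/asio/connection.hpp>
#include <sdbusplus/asio/object_server.hpp>

#include <boost/asio/io_context.hpp>

#include <cstddef>
#include <cstdint>
#include <memory>
#include <string>
#include <vector>

int main()
{
    boost::asio::io_context io;
    auto conn = std::make_shared<sdbusplus::asio::connection>(io);
    conn->request_name("xyz.openbmc_project.PostCodeManager"); // placeholder

    sdbusplus::asio::object_server server(conn);

    // One object per boot cycle, e.g. .../1, .../2, ..., instead of packing
    // every cycle into one array separated by 0xffff end flags.
    std::size_t bootCycle = 1;
    std::string path = "/xyz/openbmc_project/State/Boot/PostCode/" +
                       std::to_string(bootCycle);

    // Placeholder interface name standing in for PostCodeList.yaml.
    auto iface = server.add_interface(
        path, "xyz.openbmc_project.State.Boot.PostCodeList");

    // "List" property holding the POST codes seen during this boot cycle.
    iface->register_property("List", std::vector<uint64_t>{});
    iface->initialize();

    io.run();
    return 0;
}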


Thanks,
Kuiying.


-----Original Message-----
From: Brad Bishop [mailto:bradleyb at fuzziesquirrel.com] 
Sent: Wednesday, August 29, 2018 8:20 PM
To: Wang, Kuiying <kuiying.wang at intel.com>
Cc: Tanous, Ed <ed.tanous at intel.com>; kunyi731 at gmail.com; Xu, Qiang <qiang.xu at intel.com>; Mihm, James <james.mihm at intel.com>; Nguyen, Hai V <hai.v.nguyen at intel.com>; Feist, James <james.feist at intel.com>; Jia, Chunhui <chunhui.jia at intel.com>; venture at google.com; openbmc at lists.ozlabs.org; chunhui.jia at linux.intel.com; Yang, Cheng C <cheng.c.yang at intel.com>; Li, Yong B <yong.b.li at intel.com>; geissonator at yahoo.com
Subject: Re: Proposal for caching/buffering POST codes list for one boot process.



> On Aug 29, 2018, at 12:44 AM, Wang, Kuiying <kuiying.wang at intel.com> wrote:
> 
> Thanks a lot for all your comments and suggestions.
> Let me summarize and update my solution now.
> 1. Define an interface for the post code list, "CodeList.yaml", in the "phosphor-dbus-interfaces" repo,
>     under folder "xyz/openbmc_project/Post/".

I suggest /xyz/openbmc_project/State/Boot

This is where Patrick put the existing post code interface (Raw).

> 2. Define a property "PostCodeList" w/ type "array[uint64]" 
> (std::vector<uint64_t>)

How about just List?  or History?  Something without “Post” in it please.
We have a similar concept on POWER and I can implement this interface but we don’t call them post codes.

> 3. Develop a post code client to monitor and collect all the post codes.
>        a). define a "MAX_BOOT_CYCLE_NUM" to limit how many boot cycles' worth of post codes can be buffered.
>        b). when a post code comes in, push it onto the end of the CodeList array.
>        c). when a boot cycle ends, push an end flag like "0xffff *** ffff " onto the end of the CodeList array.

Can we just create a new dbus object in this situation, rather than shoving multiple boots into a single object with ffff delimiters?

>        d). when the max boot cycle number is hit, delete the oldest cycle's post code set at the beginning of the CodeList array.
>        e). save the CodeList property into the file system at "/var/lib/phosphor-state-manager/postCodeList"

This would be whatever we call this new application.  /var/lib/phosphor-state-manager is for the phosphor-state-manager program's state data.  A rough sketch of the buffering and persistence described in (a)-(e) follows below.
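
For illustration only, with one container per boot cycle rather than the 0xffff end markers; MAX_BOOT_CYCLE_NUM and the persistence path are placeholders, not agreed-upon values:

#include <cstddef>
#include <cstdint>
#include <deque>
#include <fstream>
#include <string>
#include <vector>

constexpr std::size_t MAX_BOOT_CYCLE_NUM = 100; // placeholder limit

// Oldest boot cycle at the front, current cycle at the back.
std::deque<std::vector<uint64_t>> bootCycles;

void startNewBootCycle()
{
    // (d) when the max number of cycles is hit, drop the oldest one.
    if (bootCycles.size() >= MAX_BOOT_CYCLE_NUM)
    {
        bootCycles.pop_front();
    }
    bootCycles.emplace_back();
}

void addPostCode(uint64_t code)
{
    // (b) append the new code to the current cycle's list.
    if (bootCycles.empty())
    {
        startNewBootCycle();
    }
    bootCycles.back().push_back(code);
}

void persist(const std::string& path)
{
    // (e) save the lists to the filesystem; the real daemon would use a
    // directory owned by the new application, not phosphor-state-manager.
    std::ofstream out(path, std::ios::trunc);
    for (const auto& cycle : bootCycles)
    {
        for (uint64_t code : cycle)
        {
            out << code << ' ';
        }
        out << '\n';
    }
}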

> 4. Create a new repo for the post code client, "phosphor-host-post-code-client".

Why is it a client?  Is it also a server?

> 
> Thanks,
> Kuiying.

Thank you!

> 
> 
> -----Original Message-----

This is called top posting; please try to avoid it when using the mailing list.
It makes threaded conversations hard to follow and respond to.  thx.

> From: Tanous, Ed
> Sent: Tuesday, August 28, 2018 2:10 PM
> To: kunyi731 at gmail.com
> Cc: chunhui.jia at linux.intel.com; venture at google.com; Wang, Kuiying 
> <kuiying.wang at intel.com>; Mihm, James <james.mihm at intel.com>; Nguyen, 
> Hai V <hai.v.nguyen at intel.com>; Feist, James <james.feist at intel.com>; 
> Jia, Chunhui <chunhui.jia at intel.com>; openbmc at lists.ozlabs.org; Li, 
> Yong B <yong.b.li at intel.com>; Yang, Cheng C <cheng.c.yang at intel.com>; 
> bradleyb at fuzziesquirrel.com; Xu, Qiang <qiang.xu at intel.com>; 
> geissonator at yahoo.com; Kun Yi <kunyi at google.com>
> Subject: RE: RE: Proposal for caching/buffering POST codes list for one boot process.
> 
>> Obviously, another thing we would need to consider is performance. A 
>> host booting session could produce dozens or hundreds of POST codes depending on how verbose the BIOS is, and we should be careful not to design something that creates too much DBus traffic. These embedded processors are not performance
>> beasts by any means.
> 
> I would be really surprised if POST codes were ever a performance bottleneck, even with the worst implementation possible.  Hundreds of post codes in a minute is still orders of magnitude less data than the sensors are already pushing over DBus.
> 
>> On Mon, Aug 27, 2018 at 4:52 PM Kun Yi <kun.yi.731 at gmail.com> wrote:
>> I think the choice of *where* to put such buffering warrants some thought and design. Going through what I have considered:
> 
>> 1. It's possible to implement host state detection and host POST code 
>> buffering all in a client daemon, which is a long-lived process that
>> - keeps listening for the POST codes being published
>> - keeps polling the host state
>> - when the host power state toggles, writes the POST codes received to
>> a file on disk
> This would mean that partial boots, or boots that have over-current issues, wouldn't be persisted at all.  It's a bit of an implementation detail at this point, but I suspect we're going to want to persist post codes more often than just every boot.  (A minimal sketch of such a listener follows below.)
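
For illustration only, a minimal sketch of such a long-lived listener, assuming the Raw interface that phosphor-host-postd publishes at /xyz/openbmc_project/state/boot/raw and the phosphor-state-manager host object at /xyz/openbmc_project/state/host0 (these paths, property names, and value types are assumptions here, not verified):

#include <sdbusplus/bus.hpp>
#include <sdbusplus/bus/match.hpp>
#include <sdbusplus/message.hpp>

#include <cstdint>
#include <map>
#include <string>
#include <variant>
#include <vector>

int main()
{
    auto bus = sdbusplus::bus::new_default();
    std::vector<uint64_t> currentBootCodes;

    // Collect each POST code as the postd server publishes it.
    sdbusplus::bus::match::match postCodeMatch(
        bus,
        sdbusplus::bus::match::rules::propertiesChanged(
            "/xyz/openbmc_project/state/boot/raw",
            "xyz.openbmc_project.State.Boot.Raw"),
        [&currentBootCodes](sdbusplus::message::message& msg) {
            std::string iface;
            std::map<std::string, std::variant<uint64_t>> props;
            msg.read(iface, props);
            auto it = props.find("Value");
            if (it != props.end())
            {
                currentBootCodes.push_back(std::get<uint64_t>(it->second));
            }
        });

    // Watch the host power state and flush the buffer when it toggles.
    sdbusplus::bus::match::match hostStateMatch(
        bus,
        sdbusplus::bus::match::rules::propertiesChanged(
            "/xyz/openbmc_project/state/host0",
            "xyz.openbmc_project.State.Host"),
        [&currentBootCodes](sdbusplus::message::message& msg) {
            std::string iface;
            std::map<std::string, std::variant<std::string>> props;
            msg.read(iface, props);
            if (props.count("CurrentHostState") != 0)
            {
                // Persist currentBootCodes here (see the buffering sketch
                // earlier in the thread) and start a new boot cycle.
                currentBootCodes.clear();
            }
        });

    while (true)
    {
        bus.process_discard();
        bus.wait();
    }
    return 0;
}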
> 
>> The pro of this approach is that server daemons are kept simple. The POST code server doesn't need to talk to the host state daemon or even assume its existence.
>> The pro of buffering on the server side: potentially there will be more than
>> one identity needing the list of POST codes. IPMI? Logging? It would really help if we could identify some concrete use cases.
> I think we also need to consider that the POST code saving mechanism should be up as soon as possible after boot, to make sure that in the case where the power restore policy is set to ON, we can capture as many post codes as possible from the host boot.  In previous implementations, this meant buffering in the kernel driver and making the application a lightweight system for persisting POST codes, rather than actually capturing them itself.
> 

