[Skiboot] [PATCH v7 18/22] fadump: Add documentation

Oliver oohall at gmail.com
Mon May 20 16:20:24 AEST 2019


On Mon, May 20, 2019 at 12:30 PM Nicholas Piggin <npiggin at gmail.com> wrote:
>
> Vasant Hegde's on May 18, 2019 9:12 pm:
> > On 05/16/2019 11:05 AM, Nicholas Piggin wrote:
> >> Vasant Hegde's on May 14, 2019 9:23 pm:
> >>> On 05/09/2019 10:28 AM, Nicholas Piggin wrote:
> >>>> Vasant Hegde's on April 13, 2019 7:15 pm:
> >>>>> diff --git a/doc/opal-api/opal-fadump-manage-173.rst b/doc/opal-api/opal-fadump-manage-173.rst
> >>>>> new file mode 100644
> >>>>> index 000000000..916167503
> >>>>> --- /dev/null
> >>>>> +++ b/doc/opal-api/opal-fadump-manage-173.rst
> >>>>> @@ -0,0 +1,73 @@
> >>>>> +.. _opal-api-fadump-manage:
> >>>>> +
> >>>>> +OPAL fadump manage call
> >>>>> +=======================
> >>>>> +::
> >>>>> +
> >>>>> +   #define OPAL_FADUMP_MANAGE                      173
> >>>>> +
> >>>>> +This call is used to manage FADUMP (aka MPIPL) on the OPAL platform.
> >>>>> +The Linux kernel will use this call to register/unregister FADUMP.
> >>>>> +
> >>>>> +Parameters
> >>>>> +----------
> >>>>> +::
> >>>>> +
> >>>>> +   uint64_t     command
> >>>>> +   void         *data
> >>>>> +   uint64_t     dsize
> >>>>> +
> >>>>> +``command``
> >>>>> +   The ``command`` parameter supports the following values:
> >>>>> +
> >>>>> +::
> >>>>> +
> >>>>> +      0x01 - Register for fadump
> >>>>> +      0x02 - Unregister fadump
> >>>>> +      0x03 - Invalidate existing fadump
> >>>>> +
> >>>>> +``data``
> >>>>> +   ``data`` is valid when ``command`` is 0x01 (registration).
> >>>>> +   We use the fadump structure (see below) to pass Linux kernel
> >>>>> +   memory reservation details.
> >>>>> +
> >>>>> +::
> >>>>> +
> >>>>> +   struct fadump_section {
> >>>>> +           u8      source_type;
> >>>>> +           u8      reserved[7];
> >>>>> +           u64     source_addr;
> >>>>> +           u64     source_size;
> >>>>> +           u64     dest_addr;
> >>>>> +           u64     dest_size;
> >>>>> +   } __packed;
> >>>>> +
> >>>>> +   struct fadump {
> >>>>> +           u16     fadump_section_size;
> >>>>> +           u16     section_count;
> >>>>> +           u32     crashing_cpu;
> >>>>> +           u64     reserved;
> >>>>> +           struct  fadump_section section[];
> >>>>> +   };
> >>>>
> >>>> This API seems quite complicated. The kernel wants to tell firmware to
> >>>> preserve some ranges of memory in case of reboot, and to have those
> >>>> ranges advertised to the reboot kernel.
> >>>
> >>> The kernel informs OPAL about the ranges of memory to be preserved
> >>> during MPIPL (source, destination, size).
> >>
> >> Well it also contains crashing_cpu, type, and comes in this clunky
> >> structure.
> >
> > crashing_cpu : This information is passed by OPAL to the kernel during
> > MPIPL boot, so that the kernel can generate a proper backtrace for the
> > OPAL dump. It is not needed for registration. This is *OPAL*-generated
> > information; the kernel won't pass it. (For a kernel-initiated crash,
> > the kernel will keep track of the crashing CPU's pt_regs data and will
> > use that to generate the vmcore.)
> >
> >
> > Type : Identifies the memory content type (OPAL, kernel, etc.). During
> > MPIPL registration we pass this data to HDAT, and hostboot will just
> > copy it back to the result table inside HDAT. During MPIPL boot, OPAL
> > passes this information to the kernel so that the kernel can generate
> > proper dumps.
>
> Right. But it's all metadata that "MPIPL" does not need to know. We want
> a service that preserves memory over reboot. Then Linux can create its
> own metadata to use that for fadump crashes, for example.
>
> >>
> >>> After reboot, we will get the result range from hostboot. We pass
> >>> that to the kernel via the device tree.
> >>>
> >>>>
> >>>> Why not just an API which can add a range, and delete a range, and
> >>>> that's it? Range would just be physical start, end, plus an arbitrary
> >>>> tag (which caller can use to retrieve metadata that is used to
> >>>> decipher the dump).
> >>>
> >>> We want a one-to-one mapping between source and destination.
> >>
> >> Ah yes, sure that too. So two calls, one which adds or removes
> >> (source, dest, length) entries, and another which sets a tag.
> >
> > Sorry. I'm still not getting what we gain by multiple calls here.
>
> No ugly structure that's tied to some internal dump metadata.
>
> >
> > - With a structure we can pass all the information in one call, so the
> > kernel can make a single call for registration.
>
> We don't gain much there AFAICS.

I added some code to count which OPAL calls we make to get into petitboot
on ozrom2 and got this:

+---------------------------------+-------+
|            OPAL Call            | Count |
+---------------------------------+-------+
| OPAL_READ_NVRAM                 |  4612 |
| OPAL_PCI_EEH_FREEZE_STATUS      |  2832 |
| OPAL_PCI_CONFIG_READ_HALF_WORD  |  1345 |
| OPAL_PCI_CONFIG_READ_WORD       |  1232 |
| OPAL_PCI_CONFIG_READ_BYTE       |   680 |
| OPAL_WRITE_TPO                  |   648 |
| OPAL_HANDLE_INTERRUPT           |   585 |
| OPAL_CONSOLE_WRITE_BUFFER_SPACE |   467 |
| OPAL_CONSOLE_WRITE              |   467 |
| OPAL_POLL_EVENTS                |   465 |
| OPAL_XIVE_GET_QUEUE_INFO        |   356 |
| OPAL_PCI_CONFIG_WRITE_WORD      |   350 |
| OPAL_XIVE_DONATE_PAGE           |   349 |
| OPAL_PCI_CONFIG_WRITE_HALF_WORD |   296 |
| OPAL_PCI_SET_XIVE_PE            |   180 |
| OPAL_GET_MSI_                   |   180 |
| OPAL_XIVE_SYNC                  |   144 |
| OPAL_XIVE_FREE_VP_BLOCK         |   144 |
| OPAL_XIVE_FREE_IRQ              |   144 |
| OPAL_XIVE_DUMP                  |   144 |
| OPAL_XIVE_ALLOCATE_VP_BLOCK     |   144 |
| OPAL_START_CPU                  |   143 |
| OPAL_QUERY_CPU_STATUS           |   143 |
| OPAL_CONSOLE_READ               |   113 |
| OPAL_SENSOR_READ                |    53 |
| OPAL_PCI_EEH_FREEZE_CLEAR       |    47 |
| OPAL_GET_POWERCAP               |    36 |
| OPAL_FLASH_ERASE                |    33 |
| OPAL_FLASH_WRITE                |    31 |
| OPAL_FLASH_READ                 |    31 |
| OPAL_PCI_SET_PELTV              |    23 |
| OPAL_PCI_SET_PE                 |    16 |
| OPAL_REGISTER_DUMP_REGION       |    15 |
| OPAL_PCI_MAP_PE_MMIO_WINDOW     |    14 |
| OPAL_PCI_SET_PHB_MEM_WINDOW     |     9 |
| OPAL_PCI_PHB_MMIO_ENABLE        |     9 |
| OPAL_PCI_FENCE_PHB              |     9 |
| OPAL_XIVE_GET_IRQ_INFO          |     8 |
| OPAL_PCI_MAP_PE_DMA_WINDOW_REAL |     6 |
| OPAL_PCI_MAP_PE_DMA_WINDOW      |     6 |
| OPAL_DUMP_READ                  |     6 |
| OPAL_PRD_MSG                    |     5 |
| OPAL_RTC_READ                   |     3 |
| OPAL_GET_POWER_SHIFT_RATIO      |     2 |
| OPAL_XIVE_SET_VP_INFO           |     1 |
| OPAL_XIVE_SET_IRQ_CONFIG        |     1 |
| OPAL_XIVE_GET_IRQ_CONFIG        |     1 |
| OPAL_PCI_GET_POWER_STATE        |     1 |
| OPAL_ELOG_ACK                   |     1 |
+---------------------------------+-------+

I'd say we gain nothing from doing one OPAL call.

> > - It's controlled by a version field (we realized the need for a version
> > field during review and I will add that in v8), which makes it easy to
> > handle compatibility issues.
>
> But you only have backward compatibility concerns because you're
> exposing the structure and putting crash metadata into it in the first
> place.
>
>
> > - It's easy to extend/modify later without breaking the API. If we just
> > pass source, destination, and length, then for any change in future we
> > have to add a new API.
> >
> > The only thing I mixed up in the structure is the `crashing_cpu`
> > information. This is not needed for registration; it is needed during
> > MPIPL boot for the OPAL core. Maybe this is creating confusion. Maybe
> > we can remove this field from the structure and put it in the device
> > tree.
>
> No, just make it an arbitrary tag. Then the caller can use that as a
> pointer and use that to find its own metadata.
>
> >>> Also we have
> >>> to update this information in HDAT so that hostboot can access it.
> >>
> >> That's okay though, isn't it? You can return failure if you don't
> >> have enough room.
> >
> > Yes. that's fine.
> >
> >
> >>
> >>> Also, having a structure allows us to pass all this information nicely to OPAL.
> >>
> >> I don't think OPAL needs to know about the kernel crash metadata, and
> >> it could get its own by looking at addresses and tags that come up.
> >
> > As explained above, the kernel won't pass metadata to OPAL. The kernel
> > keeps track of the crashing CPU information and uses it during vmcore
> > generation.
>
> You still have the type field.
>
> >> Although I'm not really convinced it's a good idea to have a
> >> cooperative system where you have kernel and OPAL both managing crash
> >> dumps at the same time...
> >
> > OPAL is not going to manage dumps. During registration it updates HDAT
> > with the information needed to capture the dump. During MPIPL boot, it
> > just passes that information to the kernel.
> >
> > Kernel will generate both vmcore and opalcore based on the information provided
> > by OPAL.
>
> Okay good, in that case there is only ever need for a single tag to
> be preserved.
>
> >> I really think OPAL crash information, especially when the host is
> >> running, could benefit from more thought.
> >
> > I think OPAL core is really useful to debug OPAL issues.
>
> Sorry, yes the general OPAL core idea is fine. It's the OPAL boot crash
> facility I'm more skeptical of.
>
> (I have no problem with improving OPAL boot debuggability btw, it's great
> you're looking into it and I would like to continue to debate it, I'm
> just thinking it would help getting the Linux part of the series merged
> faster if OPAL boot is deferred)
>
> >>> Finally, this is a similar concept to what we have in PowerVM LPARs.
> >>> Hence I have added the structure.
> >>
> >> Is that a point for or against this structure? :)
> >
> > In this case, I'm in favor of structure :-)
>
> I'm still pretty set on no structure and no metadata (except one tag
> that has no predefined semantics to the MPIPL layer).
>
> That's the minimum necessary and sufficient for a "preserve memory
> across reboot" facility to support Linux crash dumps, right?

I think we can (and probably need to) do better than the minimum. For
the current design the "good" path is something like:

Old kernel does a bad -> MPIPL request -> *magic occurs* -> hostboot
-> skiboot -> petitboot -> ???

I'm wondering what we can safely do once we hit the final step. As far
as I can tell the intention is to boot into the same kernel that we
crashed from so that it can run makedumpsterfire to produce a
crashdump, invalidate the dump, and continue to boot into a
functioning OS. However, I don't see how we'd actually guarantee that
happens. I realise it's *probably* going to work most of the time, since
we'll probably be running the same kernel that's the default boot
option, but surely we can come up with something less janky.

By contrast, the kdump approach allows the crashing kernel to specify
what the crash environment is going to look like. If I were an OS
vendor I'd say that's a pretty compelling reason to use kdump instead
of this. If the main benefit of fadump is that we can reliably reset
and reinitialise hardware devices, then maybe we should look at trying
to use MPIPL as an alternative kdump entry path. Rather than having
skiboot load petitboot from flash, we could have skiboot enter the
preloaded crash kernel and go from there.

Mahesh, Hari, what are your thoughts?

Oliver

> Thanks,
> Nick
>

