new port seeing ipmid exiting with seg fault

Patton, Schuyler spatton@ti.com
Tue Aug 30 01:15:48 AEST 2022


Hi all,

In our port, ipmid is exiting with a segmentation fault. Does anyone have suggestions on what to look at or what the problem might be? I have included some information I collected from systemctl and journalctl below. Thanks in advance for any pointers or suggestions.

root@evb-am62xx:~# systemctl status phosphor-ipmi-host
x phosphor-ipmi-host.service - Phosphor Inband IPMI
     Loaded: loaded (/lib/systemd/system/phosphor-ipmi-host.service; enabled; vendor preset: enabled)
    Drop-In: /lib/systemd/system/phosphor-ipmi-host.service.d
             `-10-override.conf
     Active: failed (Result: core-dump) since Mon 2022-08-29 15:01:40 UTC; 3min 8s ago
   Duration: 1.163s
    Process: 368 ExecStart=/usr/bin/env ipmid (code=dumped, signal=SEGV)
   Main PID: 368 (code=dumped, signal=SEGV)

Aug 29 15:01:40 evb-am62xx systemd[1]: phosphor-ipmi-host.service: Scheduled restart job, restart counter is at 2.
Aug 29 15:01:40 evb-am62xx systemd[1]: Stopped Phosphor Inband IPMI.
Aug 29 15:01:40 evb-am62xx systemd[1]: phosphor-ipmi-host.service: Start request repeated too quickly.
Aug 29 15:01:40 evb-am62xx systemd[1]: phosphor-ipmi-host.service: Failed with result 'core-dump'.
Aug 29 15:01:40 evb-am62xx systemd[1]: Failed to start Phosphor Inband IPMI.


root@evb-am62xx:~# journalctl | grep ipmi
Jan 01 00:00:04 evb-am62xx systemd[1]: /lib/systemd/system/phosphor-ipmi-net@.socket:3: Invalid interface name, ignoring: sys-subsystem-net-devices-%i.device
Jan 01 00:00:04 evb-am62xx systemd[1]: Created slice Slice /system/phosphor-ipmi-net.
Aug 29 15:01:19 evb-am62xx systemd[1]: Listening on phosphor-ipmi-net@eth0.socket.
Aug 29 15:01:21 evb-am62xx ipmid[329]: JSON file not found
Aug 29 15:01:22 evb-am62xx systemd-coredump[339]: Process 334 (netipmid) of user 0 dumped core.
Aug 29 15:01:22 evb-am62xx systemd[1]: phosphor-ipmi-net@eth0.service: Main process exited, code=dumped, status=11/SEGV
Aug 29 15:01:22 evb-am62xx systemd[1]: phosphor-ipmi-net@eth0.service: Failed with result 'core-dump'.
Aug 29 15:01:23 evb-am62xx systemd-coredump[338]: Process 329 (ipmid) of user 0 dumped core.
Aug 29 15:01:23 evb-am62xx systemd[1]: phosphor-ipmi-host.service: Main process exited, code=dumped, status=11/SEGV
Aug 29 15:01:23 evb-am62xx systemd[1]: phosphor-ipmi-host.service: Failed with result 'core-dump'.
Aug 29 15:01:38 evb-am62xx systemd[1]: phosphor-ipmi-net@eth0.service: Scheduled restart job, restart counter is at 1.
Aug 29 15:01:38 evb-am62xx systemd[1]: phosphor-ipmi-host.service: Scheduled restart job, restart counter is at 1.
Aug 29 15:01:39 evb-am62xx systemd-coredump[373]: Process 370 (netipmid) of user 0 dumped core.
Aug 29 15:01:39 evb-am62xx systemd[1]: phosphor-ipmi-net@eth0.service: Main process exited, code=dumped, status=11/SEGV
Aug 29 15:01:39 evb-am62xx systemd[1]: phosphor-ipmi-net@eth0.service: Failed with result 'core-dump'.
Aug 29 15:01:39 evb-am62xx systemd-coredump[371]: Process 368 (ipmid) of user 0 dumped core.
Aug 29 15:01:39 evb-am62xx systemd[1]: phosphor-ipmi-host.service: Main process exited, code=dumped, status=11/SEGV
Aug 29 15:01:39 evb-am62xx systemd[1]: phosphor-ipmi-host.service: Failed with result 'core-dump'.
Aug 29 15:01:40 evb-am62xx systemd[1]: phosphor-ipmi-net@eth0.service: Scheduled restart job, restart counter is at 2.
Aug 29 15:01:40 evb-am62xx systemd[1]: phosphor-ipmi-host.service: Scheduled restart job, restart counter is at 2.
Aug 29 15:01:40 evb-am62xx systemd[1]: phosphor-ipmi-host.service: Start request repeated too quickly.
Aug 29 15:01:40 evb-am62xx systemd[1]: phosphor-ipmi-host.service: Failed with result 'core-dump'.
Aug 29 15:01:40 evb-am62xx systemd[1]: phosphor-ipmi-net@eth0.service: Job phosphor-ipmi-net@eth0.service/start failed with result 'dependency'.
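One note on the logs above: after two failed starts systemd hits the unit's start rate limit ("Start request repeated too quickly") and stops retrying. While debugging a crash like this it can help to lift that limit so every attempt is captured; a sketch of a drop-in that does this (the file name is illustrative — any drop-in alongside the existing 10-override.conf works):

```ini
# /etc/systemd/system/phosphor-ipmi-host.service.d/90-debug.conf  (illustrative name)
[Unit]
# Disable the start rate limit while debugging so systemd keeps restarting ipmid
StartLimitIntervalSec=0
```

After `systemctl daemon-reload` and `systemctl reset-failed phosphor-ipmi-host`, the service can be started again. Since systemd-coredump is already catching the crashes, `coredumpctl info ipmid` and `coredumpctl gdb ipmid` (with debug symbols installed on the target) should give a backtrace pointing at the faulting code, and the earlier "JSON file not found" message from ipmid may be worth chasing first.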

Regards,
Schuyler Patton
Sitara MPU System Applications
Texas Instruments
