KVM on POWER8 host lock up since 10d91611f426 ("powerpc/64s: Reimplement book3s idle code in C")

Ruediger Oertel ro at suse.de
Mon Aug 31 18:48:22 AEST 2020


On 31.08.20 at 03:14, Nicholas Piggin wrote:
> Excerpts from Michal Suchánek's message of August 31, 2020 6:11 am:
>> Hello,
>>
>> on POWER8, KVM hosts lock up since commit 10d91611f426 ("powerpc/64s:
>> Reimplement book3s idle code in C").
>>
>> The symptom is the host locking up completely after some hours of KVM
>> workload, with messages like
>>
>> 2020-08-30T10:51:31+00:00 obs-power8-01 kernel: KVM: couldn't grab cpu 47
>> 2020-08-30T10:51:31+00:00 obs-power8-01 kernel: KVM: couldn't grab cpu 71
>> 2020-08-30T10:51:31+00:00 obs-power8-01 kernel: KVM: couldn't grab cpu 47
>> 2020-08-30T10:51:31+00:00 obs-power8-01 kernel: KVM: couldn't grab cpu 71
>> 2020-08-30T10:51:31+00:00 obs-power8-01 kernel: KVM: couldn't grab cpu 47
>>
>> printed before the host locks up.
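>>
>> For reference, this message comes from kvmppc_grab_hwthread() in
>> arch/powerpc/kvm/book3s_hv.c, where the primary thread asks an offline,
>> napping secondary thread to come into KVM and waits a bounded time for
>> it to respond. Roughly (a simplified sketch from memory, not the
>> verbatim kernel code):
>>
>>     /* simplified: primary thread trying to grab secondary hwthread "cpu" */
>>     struct paca_struct *tpaca = paca_ptrs[cpu];
>>     long timeout = 10000;
>>
>>     tpaca->kvm_hstate.hwthread_req = 1;     /* "please come to KVM" */
>>     smp_mb();                               /* order req vs. state check */
>>     while (tpaca->kvm_hstate.hwthread_state == KVM_HWTHREAD_IN_KERNEL) {
>>         if (--timeout <= 0) {
>>             pr_err("KVM: couldn't grab cpu %d\n", cpu);
>>             return -EBUSY;
>>         }
>>         udelay(1);
>>     }
>>
>> So the message means the secondary thread stayed in
>> KVM_HWTHREAD_IN_KERNEL and never advertised the idle state
>> (KVM_HWTHREAD_IN_IDLE) within the timeout.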
>>
>> The machines run sandboxed builds, which is a mixed workload resulting in
>> IO/single-core/multiple-core load over time, and there are periods of no
>> activity with no VMs running as well. The VMs are short-lived, so VM
>> setup/teardown is exercised to some extent as well.
>>
>> POWER9 with the new guest entry fast path does not seem to be affected.
>>
>> I reverted the patch and the follow-up idle fixes on top of 5.2.14 and
>> re-applied commit a3f3072db6ca ("powerpc/powernv/idle: Restore IAMR
>> after idle"), which gives the same idle code as 5.1.16, and that kernel
>> seems stable.
>>
>> Config is attached.
>>
>> I cannot easily revert this commit, especially if I want to use the same
>> kernel on POWER8 and POWER9 - many of the POWER9 fixes are applicable
>> only to the new idle code.
>>
>> Any idea what can be the problem?
> 
> So hwthread_state is never getting back to HWTHREAD_IN_IDLE on
> those threads. I wonder what they are doing. POWER8 doesn't have a good
> NMI IPI, and I don't know if it supports pdbg dumping registers from the
> BMC, unfortunately. Do the messages always come in pairs of CPUs?
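> 
> (For reference, the hwthread_state values, roughly as defined in
> arch/powerpc/include/asm/kvm_book3s_asm.h, quoted from memory:
> 
>     #define KVM_HWTHREAD_IN_KERNEL  0  /* running in the host kernel */
>     #define KVM_HWTHREAD_IN_IDLE    1  /* napping in the idle code */
>     #define KVM_HWTHREAD_IN_KVM     2  /* running guest context for KVM */
> 
> The idle entry path is supposed to advertise IN_IDLE before napping and
> the wakeup path to set IN_KERNEL again, so a thread that never gets back
> to IN_IDLE presumably missed that transition somewhere in the new C idle
> code.)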
> 
> I'm not sure where to start with reproducing, I'll have to try. How many
> vCPUs in the guests? Do you have several guests running at once?

Hello all,

some details on the setup...
These machines are build service workers (build.opensuse.org), and all they
do is spawn new VMs and run a package-building job inside them (rpmbuild,
debbuild, ...).

The machines are running in OPAL/PowerNV mode, with "ppc64_cpu --smt=off".
The number of VMs varies across the machines:
obs-power8-01: 18 instances, "-smp 16,threads=8"
obs-power8-02: 20 instances, "-smp 8,threads=8"
obs-power8-03: 30 instances, "-smp 8,threads=8"
obs-power8-04: 20 instances, "-smp 8,threads=8"
obs-power8-05: 36 instances, "-smp 4,threads=2" (this one with "ppc64_cpu --subcores-per-core=4")

But anyway, the stalls can be seen on all of them, sometimes after 4 hours,
sometimes only after about a day. obs-power8-01, with more CPU overcommit,
seems to reproduce the problem a little faster, but that's more gut feeling
than anything backed by real numbers.


-- 
with kind regards (with a friendly grin),
  Ruediger Oertel (ro at suse.com,ro at suse.de,bugfinder at t-online.de)
--------Do-Not-Accept-Binary-Blobs.----Ever.----From-Anyone.------------
Key fingerprint   =   17DC 6553 86A7 384B 53C5  CA5C 3CE4 F2E7 23F2 B417
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg,
  Germany, (HRB 36809, AG Nürnberg), Geschäftsführer: Felix Imendörffer
