KVM on POWER8 host lock up since 10d91611f426 ("powerpc/64s: Reimplement book3s idle code in C")

Nicholas Piggin npiggin at gmail.com
Fri Nov 5 12:47:26 AEDT 2021


Excerpts from Michal Suchánek's message of November 3, 2021 1:48 am:
> On Thu, Jan 14, 2021 at 11:08:03PM +1000, Nicholas Piggin wrote:
>> Excerpts from Michal Suchánek's message of January 14, 2021 10:40 pm:
>> > On Mon, Oct 19, 2020 at 02:50:51PM +1000, Nicholas Piggin wrote:
>> >> Excerpts from Nicholas Piggin's message of October 19, 2020 11:00 am:
>> >> > Excerpts from Michal Suchánek's message of October 17, 2020 6:14 am:
>> >> >> On Mon, Sep 07, 2020 at 11:13:47PM +1000, Nicholas Piggin wrote:
>> >> >>> Excerpts from Michael Ellerman's message of August 31, 2020 8:50 pm:
>> >> >>> > Michal Suchánek <msuchanek at suse.de> writes:
>> >> >>> >> On Mon, Aug 31, 2020 at 11:14:18AM +1000, Nicholas Piggin wrote:
>> >> >>> >>> Excerpts from Michal Suchánek's message of August 31, 2020 6:11 am:
>> >> >>> >>> > Hello,
>> >> >>> >>> > 
>> >> >>> >>> > on POWER8 KVM hosts lock up since commit 10d91611f426 ("powerpc/64s:
>> >> >>> >>> > Reimplement book3s idle code in C").
>> >> >>> >>> > 
>> >> >>> >>> > The symptom is host locking up completely after some hours of KVM
>> >> >>> >>> > workload with messages like
>> >> >>> >>> > 
>> >> >>> >>> > 2020-08-30T10:51:31+00:00 obs-power8-01 kernel: KVM: couldn't grab cpu 47
>> >> >>> >>> > 2020-08-30T10:51:31+00:00 obs-power8-01 kernel: KVM: couldn't grab cpu 71
>> >> >>> >>> > 2020-08-30T10:51:31+00:00 obs-power8-01 kernel: KVM: couldn't grab cpu 47
>> >> >>> >>> > 2020-08-30T10:51:31+00:00 obs-power8-01 kernel: KVM: couldn't grab cpu 71
>> >> >>> >>> > 2020-08-30T10:51:31+00:00 obs-power8-01 kernel: KVM: couldn't grab cpu 47
>> >> >>> >>> > 
>> >> >>> >>> > printed before the host locks up.
>> >> >>> >>> > 
>> >> >>> >>> > The machines run sandboxed builds, which is a mixed workload resulting in
>> >> >>> >>> > IO/single-core/multiple-core load over time, and there are periods of no
>> >> >>> >>> > activity and no VMs running as well. The VMs are short-lived, so VM
>> >> >>> >>> > setup/teardown is somewhat exercised as well.
>> >> >>> >>> > 
>> >> >>> >>> > POWER9 with the new guest entry fast path does not seem to be affected.
>> >> >>> >>> > 
>> >> >>> >>> > Reverting the patch and the follow-up idle fixes on top of 5.2.14 and
>> >> >>> >>> > re-applying commit a3f3072db6ca ("powerpc/powernv/idle: Restore IAMR
>> >> >>> >>> > after idle") gives the same idle code as 5.1.16, and that kernel seems
>> >> >>> >>> > stable.
>> >> >>> >>> > 
>> >> >>> >>> > Config is attached.
>> >> >>> >>> > 
>> >> >>> >>> > I cannot easily revert this commit, especially if I want to use the same
>> >> >>> >>> > kernel on POWER8 and POWER9 - many of the POWER9 fixes are applicable
>> >> >>> >>> > only to the new idle code.
>> >> >>> >>> > 
>> >> >>> >>> > Any idea what can be the problem?
>> >> >>> >>> 
>> >> >>> >>> So hwthread_state is never getting back to HWTHREAD_IN_IDLE on
>> >> >>> >>> those threads. I wonder what they are doing. POWER8 doesn't have a good
>> >> >>> >>> NMI IPI and I don't know if it supports pdbg dumping registers from the
>> >> >>> >>> BMC unfortunately.
>> >> >>> >>
>> >> >>> >> It may be possible to set up fadump with a later kernel version that
>> >> >>> >> supports it on powernv and dump the whole kernel.
>> >> >>> > 
>> >> >>> > Your firmware won't support it AFAIK.
>> >> >>> > 
>> >> >>> > You could try kdump, but if we have CPUs stuck in KVM then there's a
>> >> >>> > good chance it won't work :/
>> >> >>> 
>> >> >>> I still haven't had any luck reproducing this. I've been testing with
>> >> >>> sub-cores in various different combinations, etc. I'll keep trying though.
>> >> >> 
>> >> >> Hello,
>> >> >> 
>> >> >> I tried running some KVM guests to simulate the workload and what I get
>> >> >> is guests failing to start with a rcu stall. Tried both 5.3 and 5.9
>> >> >> kernel and qemu 4.2.1 and 5.1.0
>> >> >> 
>> >> >> To start some guests I run
>> >> >> 
>> >> >> for i in $(seq 0 9) ; do
>> >> >>     /opt/qemu/bin/qemu-system-ppc64 -m 2048 -accel kvm -smp 8 \
>> >> >>         -kernel /boot/vmlinux -initrd /boot/initrd \
>> >> >>         -nodefaults -nographic \
>> >> >>         -serial mon:telnet::444$i,server,wait &
>> >> >> done
>> >> >> 
>> >> >> To simulate some workload I run
>> >> >> 
>> >> >> xz -zc9T0 < /dev/zero > /dev/null &
>> >> >> while true; do
>> >> >>     killall -STOP xz; sleep 1; killall -CONT xz; sleep 1;
>> >> >> done &
>> >> >> 
>> >> >> on the host, and add a job to the guest ramdisk that executes the same
>> >> >> workload. However, most guests never get to the point where the job is
>> >> >> executed.
>> >> >> 
>> >> >> Any idea what might be the problem?
>> >> > 
>> >> > I would say try without pv queued spin locks (but if the same thing is 
>> >> > happening with 5.3 then it must be something else I guess). 
>> >> > 
>> >> > I'll try to test a similar setup on a POWER8 here.
>> >> 
>> >> Couldn't reproduce the guest hang, they seem to run fine even with 
>> >> queued spinlocks. Might have a different .config.
>> >> 
>> >> I might have got a lockup in the host (although different symptoms than 
>> >> the original report). I'll look into that a bit further.
>> > 
>> > Hello,
>> > 
>> > any progress on this?
>> 
>> No progress, I still wasn't able to reproduce, and it fell off the 
>> radar sorry.
>> 
>> I expect hwthread_state must be getting corrupted somewhere, or a
>> secondary thread is getting stuck, but I couldn't see where. I'll try to
>> pick it up again, thanks for the reminder.
> 
> Hello,
> 
> the fixes pointed out in
> https://lore.kernel.org/linuxppc-dev/87pmrtbbdt.fsf@mpe.ellerman.id.au/T/#u
> resolve the problem.
> 
> Thanks
> 
> Michal

Hey Michal, great, thanks for testing. Sorry I couldn't fix it, but a
good result in the end.

Thanks,
Nick
