Crash in kvmppc_xive_release()
Cédric Le Goater
clg at kaod.org
Fri Jul 19 22:05:18 AEST 2019
On 19/07/2019 13:20, Michael Ellerman wrote:
> Cédric Le Goater <clg at kaod.org> writes:
>> On 18/07/2019 14:49, Michael Ellerman wrote:
>>> Anyone else seen this?
>>>
>>> This is running ~176 VMs on a Power9 (1 per thread), host crashes:
>>
>> This is beyond the underlying limits of XIVE.
>>
>> As we allocate 2K vCPUs per VM, that is 16K EQs for interrupt events. The overall
>> EQ count is 1M. I'll let you calculate our max number of VMs ...
>
> We need to fix it somehow, people will expect to be able to run a VM per
> thread.
We are limited by two spaces: the VP space (1 << 19, system overall) and
the EQ space (1 << 20 per chip; this one we could increase). But one of
the big issues is the way we allocate the XIVE VPs in the XIVE devices.
As we have no idea how many vCPUs we should provision for, we take
the max: 2048 ...
If we had the maxcpus of the VM (from QEMU), or at least a hint at
a rough figure, let's say a power of 2 in the [ 32 - 4096 ] CPU range, we would
fragment the VP space less and increase our #VMs per system a lot.
It could be a kernel global (sysfs or whatever), a new KVM PPC control
on the VM to tune maxcpus, or a KVM device creation parameter. We
could also register multiple KVM devices, each with its own maximum:
tiny (5), small (6), normal (8), big (11, the default legacy), huge (12),
and create from QEMU the one we think fits best.
I have to think this over.
Nevertheless, I am trying to increase the XIVE spaces by 2 or 4 for
POWER10.
C.