[PATCH 0/2] powerpc/kvm: Enable running guests on RT Linux
Purcareata Bogdan
b43198 at freescale.com
Mon Apr 20 20:53:24 AEST 2015
On 10.04.2015 02:53, Scott Wood wrote:
> On Thu, 2015-04-09 at 10:44 +0300, Purcareata Bogdan wrote:
>> So at this point I was getting kinda frustrated, so I decided to measure
>> the time spent in kvm_mpic_write and kvm_mpic_read. I assumed these were
>> the main entry points in the in-kernel MPIC and were basically executed
>> while holding the spinlock. The scenario was the same - 24 VCPUs guest,
>> with 24 virtio+vhost interfaces, only this time I ran 24 ping flood
>> threads to another board instead of netperf. I assumed this would impose
>> heavier stress.
>>
>> The latencies look pretty ok, around 1-2 us on average, with the max
>> shown below:
>>
>> .kvm_mpic_read 14.560
>> .kvm_mpic_write 12.608
>>
>> Those are also microseconds. This was run for about 15 mins.
>
> What about other entry points such as kvm_set_msi() and
> kvmppc_mpic_set_epr()?
Thanks for the pointers! I redid the measurements, this time for the functions
run while holding the openpic lock:
.kvm_mpic_read_internal  (.kvm_mpic_read)    1.664
.kvmppc_mpic_set_epr                         6.880
.kvm_mpic_write_internal (.kvm_mpic_write)   7.840
.openpic_msi_write       (.kvm_set_msi)     10.560
Same scenario, 15 mins, numbers are microseconds.
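For reference, all four paths share roughly the same shape in
arch/powerpc/kvm/mpic.c: a thin entry point takes opp->lock and calls the
inner function, so the numbers above cover exactly the locked region. A
simplified sketch of the write path (argument list and return convention
trimmed, so treat the exact signature as approximate):

static int kvm_mpic_write(struct kvm_io_device *this, gpa_t addr,
                          int len, const void *ptr)
{
        struct openpic *opp = container_of(this, struct openpic, mmio);
        int ret;

        /* becomes raw_spin_lock_irq() with the RT patch set */
        spin_lock_irq(&opp->lock);
        ret = kvm_mpic_write_internal(opp, addr - opp->reg_base,
                                      *(const u32 *)ptr);
        spin_unlock_irq(&opp->lock);

        return ret;
}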
There was a weird situation for .kvmppc_mpic_set_epr - its corresponding inner
function is kvmppc_set_epr, which is a static inline. Removing the static inline
yields a compiler crash (Segmentation fault (core dumped) -
scripts/Makefile.build:441: recipe for target 'arch/powerpc/kvm/kvm.o' failed),
but that's a different story, so I just let it be for now. The point is that the
measured time may include other work done after the lock has been released but
before the function actually returns. I noticed this was the case for
.kvm_set_msi, which could spend up to 90 ms working outside the lock. This is
what made me switch to measuring the inner functions listed above.
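As a side note, one way to bound the measurement strictly to the locked region
is to timestamp inside the critical section itself; a minimal sketch, where
demo_lock, demo_max_ns and body are made-up names for illustration, not
anything from mpic.c:

#include <linux/ktime.h>
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(demo_lock);
static u64 demo_max_ns;

static void timed_critical_section(void (*body)(void *), void *arg)
{
        unsigned long flags;
        ktime_t start;
        u64 delta;

        raw_spin_lock_irqsave(&demo_lock, flags);
        start = ktime_get();
        body(arg);                      /* work done while the lock is held */
        delta = ktime_to_ns(ktime_sub(ktime_get(), start));
        if (delta > demo_max_ns)        /* worst case, updated under the lock */
                demo_max_ns = delta;
        raw_spin_unlock_irqrestore(&demo_lock, flags);
}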
So far it looks pretty decent. Are there any other MPIC entry points worthy of
investigation? Or perhaps a different stress scenario involving a lot of VCPUs
and external interrupts?
Thanks,
Bogdan P.