[PATCH 0/2] powerpc/kvm: Enable running guests on RT Linux

Scott Wood scottwood at freescale.com
Tue Feb 24 10:27:31 AEDT 2015


On Fri, 2015-02-20 at 15:54 +0100, Sebastian Andrzej Siewior wrote:
> On 02/20/2015 03:12 PM, Paolo Bonzini wrote:
> >> Thomas, what is the usual approach for patches like this? Do you take
> >> them into your rt tree or should they get integrated to upstream?
> > 
> > Patch 1 is definitely suitable for upstream, that's the reason why we
> > have raw_spin_lock vs. raw_spin_unlock.
> 
> raw_spin_lock was introduced in c2f21ce2e31286a0a32 ("locking:
> Implement new raw_spinlock"). It is used in contexts which run with
> IRQs off - especially on -RT. This usually includes interrupt
> controllers and related core-code pieces.
> 
> Usually you see "scheduling while atomic" splats on -RT and convert
> the offending locks to raw locks if it is appropriate.
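
Such a conversion is mostly a type change plus a switch to the raw_
lock API; a minimal sketch (the struct and field names here are made
up, this is not from the patch):

	-	spinlock_t lock;
	+	raw_spinlock_t lock;

	-	spin_lock_irqsave(&pic->lock, flags);
	+	raw_spin_lock_irqsave(&pic->lock, flags);
		/* short, bounded critical section */
	-	spin_unlock_irqrestore(&pic->lock, flags);
	+	raw_spin_unlock_irqrestore(&pic->lock, flags);

The distinction matters on -RT because spinlock_t becomes a sleeping
lock there, while raw_spinlock_t really disables IRQs and spins.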
> 
> Bogdan wrote in 2/2 that he needs to limit the number of CPUs in order
> not to cause a DoS and large latencies in the host. I haven't seen an
> answer to my "why" question. If the conversion leads to large
> latencies in the host then it does not look right.
> 
> Each host PIC has a raw lock and does mostly just mask/unmask, and the
> raw lock makes sure the value written is not mixed up due to
> preemption.
> This hardly increases latencies because the "locked" path is very short.
> If this conversion leads to higher latencies then the locked path is
> too long and hardly suitable to become a raw lock.
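
In a host PIC driver, the locked path Sebastian describes is typically
just a register read-modify-write, roughly like this (simplified; the
register and struct names are made up):

	static void pic_mask_irq(struct irq_data *d)
	{
		struct pic *p = irq_data_get_irq_chip_data(d);
		unsigned long flags;

		raw_spin_lock_irqsave(&p->lock, flags);
		/* bounded work: one read, one write */
		writel(readl(p->base + PIC_MASK) | BIT(d->hwirq),
		       p->base + PIC_MASK);
		raw_spin_unlock_irqrestore(&p->lock, flags);
	}

With a critical section that small, the raw lock adds essentially no
latency.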

This isn't a host PIC driver.  It's guest PIC emulation, some of which
is indeed not suitable for a raw lock (in particular,
openpic_update_irq, which loops over the number of vcpus with a loop
body that calls IRQ_check(), which in turn loops over all pending
IRQs).  The vcpu limits are a temporary band-aid to avoid the worst
latencies, but I'm still skeptical about this being upstream material.
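
The shape of the problem, roughly (paraphrased, not the literal
arch/powerpc/kvm/mpic.c code; check_pending() stands in for the real
per-source work):

	raw_spin_lock_irqsave(&opp->lock, flags);
	for (i = 0; i < nr_vcpus; i++) {
		/* IRQ_check() scans the whole pending-IRQ queue */
		for (irq = 0; irq < MAX_IRQ; irq++)
			check_pending(opp, i, irq);
	}
	raw_spin_unlock_irqrestore(&opp->lock, flags);

so the time spent with the lock held and IRQs off scales with
nr_vcpus * MAX_IRQ instead of being short and bounded.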

-Scott



