[PATCH 2/2] KVM: PPC: Book3E: Emulate MCSRR0/1 SPR and rfmci instruction

Scott Wood scottwood at freescale.com
Thu Jul 11 04:24:58 EST 2013


On 07/10/2013 05:23:36 AM, Alexander Graf wrote:
> 
> On 10.07.2013, at 00:26, Scott Wood wrote:
> 
> > On 07/09/2013 05:00:26 PM, Alexander Graf wrote:
> >> It'll also be more flexible at the same time. You could take the  
> >> logs and actually check what's going on to debug issues that you're  
> >> encountering for example.
> >> We could even go as far as sharing the same tool with other  
> >> architectures, so that we only have to learn how to debug things once.
> >
> > Have you encountered an actual need for this flexibility, or is it  
> > theoretical?
> 
> Yeah, first thing I did back then to actually debug kvm failures was  
> to add trace points.

I meant specifically for handling exit timings this way.

> > Is there common infrastructure for dealing with measuring intervals  
> > and tracking statistics thereof, rather than just tracking points and  
> > letting userspace connect the dots (though it could still do that as  
> > an option)?  Even if it must be done in userspace, it doesn't seem  
> > like something that should be KVM-specific.
> 
> Would you like to have different ways of measuring mm subsystem  
> overhead? I don't :). The same goes for KVM really. If we could  
> converge towards a single user space interface to get exit timings,  
> it'd make debugging a lot easier.

I agree -- that's why I said it doesn't seem like something that should  
be KVM-specific.  But that's orthogonal to whether it's done in kernel  
space or user space.  The ability for userspace to get begin/end events  
would be nice when it specifically asks for them, but it would also be  
nice if the kernel could track some basic statistics so we wouldn't  
have to ship so much data around to arrive at the same result.
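
To make it concrete, the sort of thing I have in mind is what the
current exit-timing code already does per exit type -- keep a count, a
running sum, and a min/max -- which is cheap to maintain in the kernel
and means userspace only has to read a handful of numbers instead of
reconstructing them from a full event stream.  Roughly (just a sketch;
the struct and function names here are made up, not the actual fields):

#include <linux/types.h>

/* Illustrative only: running statistics for one exit type. */
struct exit_timing_stat {
	u64 count;	/* number of exits of this type */
	u64 sum;	/* total duration, in timebase ticks */
	u64 min;	/* shortest single exit */
	u64 max;	/* longest single exit */
};

/* Fold one measured exit duration into the running statistics. */
static void account_exit(struct exit_timing_stat *stat, u64 duration)
{
	stat->count++;
	stat->sum += duration;
	if (stat->count == 1 || duration < stat->min)
		stat->min = duration;
	if (duration > stat->max)
		stat->max = duration;
}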

At the very least, I'd like such a tool/infrastructure to exist before  
we start complaining about doing minor maintenance of the current  
mechanism.

> We already have this for the debugfs counters btw. And the timing  
> framework does break kvm_stat today already, as it emits textual  
> stats rather than numbers which all of the other debugfs stats do.  
> But at least I can take the x86 kvm_stat tool and run it on ppc just  
> fine to see exit stats.

We already have what?  The last two sentences seem contradictory -- can  
you or can't you use kvm_stat as is?  I'm not familiar with kvm_stat.

What does x86 KVM expose in debugfs?
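
If it's just one numeric counter per file, I can see why a file full of
formatted text would trip up a generic tool.  The contrast I'm
picturing is roughly the following (illustrative sketch only -- I
haven't checked this against the actual x86 code, and the counter name
is made up):

#include <linux/debugfs.h>
#include <linux/seq_file.h>

static u64 dtlb_miss_exits;	/* hypothetical counter */

/* Numeric style: one counter per debugfs file, trivial to parse. */
static void expose_numeric(struct dentry *kvm_dir)
{
	debugfs_create_u64("dtlb_miss_exits", 0444, kvm_dir,
			   &dtlb_miss_exits);
}

/* Textual style: a single file containing a formatted table, which a
 * tool that expects one number per file can't consume as-is. */
static int timing_show(struct seq_file *m, void *v)
{
	seq_printf(m, "%-20s %10llu\n", "DTLB_MISS",
		   (unsigned long long)dtlb_miss_exits);
	return 0;
}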

> >> > Lots of debug options are enabled at build time; why must this  
> >> > be different?
> >> Because I think it's valuable as a debug tool for cases where  
> >> compile-time switches are not the best way of debugging things. It's  
> >> not a high-profile thing to tackle for me tbh, but I don't really  
> >> think working heavily on the timing stat thing is the correct path to  
> >> walk along.
> >
> > Adding new exit types isn't "working heavily" on it.
> 
> No, but the fact that the first patch is a fix to add exit stats for  
> exits that we missed before doesn't give me a lot of confidence  
> that lots of people use timing stats. And I am always very wary of  
> #ifdef'ed code, as it blows up the test matrix heavily.

I used it quite a lot when I was doing KVM performance work.  It's just  
been a while since I last did that.

-Scott

