[PATCH 1/2] PPC: powernv: remove redundant cpuidle_idle_call()

Nicolas Pitre nicolas.pitre at linaro.org
Fri Feb 7 21:48:31 EST 2014


On Fri, 7 Feb 2014, Preeti U Murthy wrote:

> Hi Nicolas,
> 
> On 02/07/2014 06:47 AM, Nicolas Pitre wrote:
> > 
> > What about creating arch_cpu_idle_enter() and arch_cpu_idle_exit() in 
> > arch/powerpc/kernel/idle.c and calling ppc64_runlatch_off() and 
> > ppc64_runlatch_on() respectively from there instead?  Would that work?  
> > That would make the idle consolidation much easier afterwards.
> 
> I would not suggest doing this. The ppc64_runlatch_*() routines need
> to be called only when we are sure that the CPU is about to enter or
> has just exited an idle state. Moving ppc64_runlatch_off() into
> arch_cpu_idle_enter(), for instance, is not a good idea because there
> are places where the CPU can decide not to enter any idle state before
> cpuidle_idle_call() is even reached. In that case we would be
> communicating prematurely that we are in an idle state.
> 
> So IMO it's best to add the ppc64_runlatch_*() calls in the powernv
> cpuidle driver. We could however create idle_loop_prologue/epilogue()
> variants inside it so that, in addition to the runlatch routines, we
> could potentially add more such powernv-specific routines. If there is
> work to be done before and after entry into an idle state that is
> common to both pseries and powernv, we will probably put it in
> arch_cpu_idle_enter/exit(). But as far as I can see, the runlatch
> routines are not suitable to be moved there.

OK.
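
For the record, what I had in mind was roughly the sketch below
(illustrative only; as you point out, the runlatch calls would not
actually belong there):

/* arch/powerpc/kernel/idle.c: sketch of the hooks I was suggesting */
void arch_cpu_idle_enter(void)
{
	ppc64_runlatch_off();
}

void arch_cpu_idle_exit(void)
{
	ppc64_runlatch_on();
}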

However, one thing we need to do as much as possible is to remove those 
loops based on need_resched() from idle backend drivers.  A somewhat 
common pattern is:

static void my_idle(void)
{
	/* interrupts disabled on entry */
	while (!need_resched()) {
		lowpower_wait_for_interrupts();
		local_irq_enable();
		/* IRQ serviced from here */
		local_irq_disable();
	}
	local_irq_enable();
	/* interrupts enabled on exit */
}

To be able to keep statistics on the actual idleness of the CPU, we'd 
need all idle backends to always return to the generic code on every 
interrupt, similar to this:

static void my_idle(void)
{
	/* interrupts disabled on entry */
	lowpower_wait_for_interrupts();
	local_irq_enable();
	/* interrupts enabled on exit */
}

The generic code would be responsible for dealing with need_resched() 
and for calling back into the backend right away if necessary, after 
updating some stats.
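
As a rough sketch of what I mean (the stats_idle_enter/exit() names are
made up for illustration, not existing kernel functions):

/* Sketch of the generic idle loop; stats_idle_enter/exit() are
 * hypothetical accounting hooks. */
static void cpu_idle_loop(void)
{
	while (1) {
		while (!need_resched()) {
			local_irq_disable();
			stats_idle_enter();
			my_idle();		/* returns after each interrupt */
			stats_idle_exit();
		}
		schedule_preempt_disabled();
	}
}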

Do you see a problem with the runlatch calls happening around each 
interrupt from such a simplified idle backend?
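
Concretely, something like this in the powernv backend (sketch only;
the function name and lowpower_wait_for_interrupts() are placeholders):

static void powernv_simple_idle(void)
{
	/* interrupts disabled on entry */
	ppc64_runlatch_off();
	lowpower_wait_for_interrupts();
	ppc64_runlatch_on();
	local_irq_enable();
	/* pending interrupt serviced here, then we return to the core loop */
}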


Nicolas

