[PATCH 18/18] powerpc/64s: move power4 idle entirely to C

Nicholas Piggin npiggin at gmail.com
Tue Nov 10 19:43:48 AEDT 2020


Excerpts from Christophe Leroy's message of November 7, 2020 7:43 pm:
> 
> 
> On 05/11/2020 at 15:34, Nicholas Piggin wrote:
>> Christophe asked about doing this; most of the code is still in
>> asm, but maybe it's slightly nicer? I don't know if it's worthwhile.
> 
> Er... I don't think I was asking for that, but why not; see my later comments.
> 
> At first, I was only asking for the following to be written in C:
> 
> +
> +	.globl power4_idle_nap_return
> +power4_idle_nap_return:
> +	blr
> 
> 
> In other words, instead of the above, put something like this somewhere:
> 
> void power4_idle_nap_return(void)
> {
> }

Ah! Well, either one was a good question. I don't mind attempting it :)

>> ---
>>   arch/powerpc/kernel/idle.c        | 25 ++++++++++++++++++++-----
>>   arch/powerpc/kernel/idle_book3s.S | 22 ----------------------
>>   2 files changed, 20 insertions(+), 27 deletions(-)
>> 
>> diff --git a/arch/powerpc/kernel/idle.c b/arch/powerpc/kernel/idle.c
>> index ae0e2632393d..849e77a45915 100644
>> --- a/arch/powerpc/kernel/idle.c
>> +++ b/arch/powerpc/kernel/idle.c
>> @@ -72,6 +72,9 @@ int powersave_nap;
>>   #ifdef CONFIG_PPC_970_NAP
>>   void power4_idle(void)
>>   {
>> +	unsigned long msr_idle = MSR_KERNEL|MSR_EE|MSR_POW;
>> +	unsigned long tmp1, tmp2;
>> +
>>   	if (!cpu_has_feature(CPU_FTR_CAN_NAP))
>>   		return;
>>   
>> @@ -84,13 +87,25 @@ void power4_idle(void)
>>   	if (cpu_has_feature(CPU_FTR_ALTIVEC))
>>   		asm volatile("DSSALL ; sync" ::: "memory");
>>   
>> -	power4_idle_nap();
>> -
>> +	asm volatile(
>> +"	ld	%0,PACA_THREAD_INFO(r13)		\n"
>> +"	ld	%1,TI_LOCAL_FLAGS(%0)			\n"
>> +"	ori	%1,%1,_TLF_NAPPING			\n"
>> +"	std	%1,TI_LOCAL_FLAGS(%0)			\n"
> 
> Can't this just be:
> 
> 	current_thread_info()->local_flags |= _TLF_NAPPING;
> 
>>   	/*
>> -	 * power4_idle_nap returns with interrupts enabled (soft and hard).
>> -	 * to our caller with interrupts enabled (soft and hard). Our caller
>> -	 * can cope with either interrupts disabled or enabled upon return.
>> +	 * NAPPING bit is set, from this point onward nap_adjust_return()
>> +	 * will cause interrupts to return to power4_idle_nap_return.
>>   	 */
>> +"1:	sync						\n"
>> +"	isync						\n"
>> +"	mtmsrd	%2					\n"
>> +"	isync						\n"
>> +"	b	1b					\n"
> 
> And this:
> 
> 	for (;;) {
> 		mb();
> 		isync();
> 		mtmsr(MSR_KERNEL|MSR_EE|MSR_POW);
> 		isync();
> 	}

I was hoping for something nicer like this, but I think not: as soon as
we set _TLF_NAPPING, we might take an interrupt whose return gets
redirected to power4_idle_nap_return instead of back to the interrupted
instruction, and you aren't allowed to do that to C code (mainly because
the stack and register state at that point would be unknown to the
compiler). Even returning immediately via blr or the end of the function
might miss restoring NVGPRs etc.
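
To make that concrete, here is a made-up sketch of the hazard (not
proposed code; the SPRG3 access is just a stand-in for any value the
compiler might keep live across the loop):

void power4_idle(void)
{
	/* the compiler is free to keep this in a non-volatile GPR, say r31 */
	unsigned long saved = mfspr(SPRN_SPRG3);

	current_thread_info()->local_flags |= _TLF_NAPPING;

	for (;;) {
		mb();
		isync();
		mtmsr(MSR_KERNEL | MSR_EE | MSR_POW);	/* nap until interrupt */
		isync();
	}

	/*
	 * nap_adjust_return() would make the next interrupt return here
	 * rather than into the loop, but the interrupt exit path is not
	 * obliged to restore non-volatile GPRs, so r31 (and with it
	 * `saved') may now hold the handler's value; and as far as the
	 * compiler is concerned this point is unreachable anyway, so it
	 * may not even emit the code below.
	 */
	mtspr(SPRN_SPRG3, saved);
}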

There might be some tricks we could play with soft-masking interrupts,
using MSR[EE]=0, then doing all this and returning to just after the
mtmsr POW with a flag set... but that's a fair bit of tricky churn for
an old CPU that works okay as it is.
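
If anyone were keen, the rough shape might be something like the
following (completely untested; the paca nap_return flag is invented,
and how MSR[EE] and the soft mask would interact across the nap is
exactly the tricky part being hand-waved):

void power4_idle(void)
{
	/* soft-mask interrupts: handlers bail out and get replayed later */
	irq_soft_mask_set(IRQS_DISABLED);
	get_paca()->nap_return = 1;		/* hypothetical flag */

	mb();
	isync();
	mtmsr(MSR_KERNEL | MSR_EE | MSR_POW);	/* EE set so nap can wake */
	isync();

	/*
	 * A masked interrupt that finds the flag set would return right
	 * here, immediately after the mtmsr, so the compiler's stack
	 * and register assumptions stay intact.
	 */
	get_paca()->nap_return = 0;
	irq_soft_mask_set(IRQS_ENABLED);
}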

Thanks,
Nick

> 
> 
>> +"	.globl power4_idle_nap_return			\n"
>> +"power4_idle_nap_return:				\n"
>> +	: "=r"(tmp1), "=r"(tmp2)
>> +	: "r"(msr_idle)
>> +	);
>>   }
>>   #endif
>>   
> 
> Christophe
> 

