[PATCH v02] powerpc/pseries: Check for ceded CPUs during LPAR migration

Tyrel Datwyler tyreld at linux.vnet.ibm.com
Fri Feb 1 09:34:05 AEDT 2019


On 01/31/2019 02:21 PM, Tyrel Datwyler wrote:
> On 01/31/2019 01:53 PM, Michael Bringmann wrote:
>> On 1/30/19 11:38 PM, Michael Ellerman wrote:
>>> Michael Bringmann <mwb at linux.vnet.ibm.com> writes:
>>>> This patch is to check for ceded CPUs during LPM.  Some extreme
>>>> tests encountered a problem where Linux had put some threads to
>>>> sleep (possibly to save energy or something), LPM was attempted,
>>>> and the Linux kernel didn't awaken the sleeping threads, but issued
>>>> the H_JOIN for the active threads.  Since the sleeping threads
>>>> are not awake, they cannot issue the expected H_JOIN, and the
>>>> partition would never suspend.  This patch wakes the sleeping
>>>> threads back up.
>>>
>>> I don't think this is the right solution.
>>>
>>> Just after your for loop we do an on_each_cpu() call, which sends an IPI
>>> to every CPU, and that should wake all CPUs up from CEDE.
>>>
>>> If that's not happening then there is a bug somewhere, and we need to
>>> work out where.
>>
>> Let me explain the scenario of the LPM case that Pete Heyrman found, and
>> that Nathan F. was previously working on.
>>
>> In the scenario, the partition has 5 dedicated processors each with 8 threads
>> running.
> 
> Do we CEDE processors when running dedicated? I thought H_CEDE was part of the
> Shared Processor LPAR option.

Looks like the cpuidle-pseries driver uses CEDE with dedicated processors as
long as firmware supports the SPLPAR option.
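
For reference, the probe logic in drivers/cpuidle/cpuidle-pseries.c looks
roughly like this (paraphrased from the v5.0-era driver, so details may
differ):

static int pseries_idle_probe(void)
{
        /* CEDE-based idle states are registered whenever firmware
         * reports SPLPAR; the shared vs. dedicated check only selects
         * which state table is used, so a dedicated-processor
         * partition still gets a CEDE idle state. */
        if (firmware_has_feature(FW_FEATURE_SPLPAR)) {
                if (lppaca_shared_proc(local_paca->lppaca_ptr)) {
                        cpuidle_state_table = shared_states;
                        max_idle_state = ARRAY_SIZE(shared_states);
                } else {
                        /* dedicated_states ends up calling
                         * cede_processor() via dedicated_cede_loop(). */
                        cpuidle_state_table = dedicated_states;
                        max_idle_state = ARRAY_SIZE(dedicated_states);
                }
        } else
                return -ENODEV;

        return 0;
}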

> 
>>
>> From the PHYP data we can see that on VP 0, threads 3, 4, 5, 6 and 7 issued
>> an H_CEDE requesting to save energy by putting the requesting thread into
>> sleep mode.  In this state, the thread will only be awakened by H_PROD from
>> another running thread or from an external user action (power off, reboot
>> and such).  Timers and external interrupts are disabled in this mode.
> 
> Not according to PAPR. A CEDE'd processor should also awaken if signaled by
> an external interrupt such as the decrementer or an IPI.

This statement should still apply though. From PAPR:

14.11.3.3 H_CEDE
The architectural intent of this hcall() is to have the virtual processor, which
has no useful work to do, enter a wait state ceding its processor capacity to
other virtual processors until some useful work appears, signaled either through
an interrupt or a prod hcall(). To help the caller reduce race conditions, this
call may be made with interrupts disabled but the semantics of the hcall()
enable the virtual processor’s interrupts so that it may always receive wake up
interrupt signals.
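
That also matches how the pseries idle path uses CEDE: the caller cedes with
interrupts hard-disabled and relies on H_CEDE itself enabling MSR[EE] so the
wakeup can be taken. Roughly, paraphrasing pseries_lpar_idle() in
arch/powerpc/platforms/pseries/setup.c:

static void pseries_lpar_idle(void)
{
        /* Hard-disable interrupts; if one is already pending, bail
         * out so it gets serviced instead of ceding. */
        if (!prep_irq_for_idle())
                return;

        get_lppaca()->idle = 1;         /* tell the hypervisor we are idle */

        /* Returns when an interrupt (decrementer, IPI) or an H_PROD
         * from another thread wakes us; per the PAPR text above,
         * H_CEDE enables the virtual processor's interrupts even
         * though we entered with them disabled. */
        cede_processor();

        get_lppaca()->idle = 0;
}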

-Tyrel

> 
> -Tyrel
> 
>>
>> About 3 seconds later, as part of the LPM operation, the other 35 threads
>> have all issued an H_JOIN request.  Join is the step of the LPM process
>> where the threads suspend themselves so the partition can be migrated to
>> the target server.
>>
>> So, the current state is that the OS has suspended the execution of all the
>> threads in the partition without successfully suspending all threads as part
>> of LPM.
>>
>> Net: the OS has an issue where it has suspended every processor thread, so
>> nothing can run.
>>
>> This appears to be slightly different from the previous LPM stalls we have
>> seen, where the migration stalled because of CPUs being taken offline and
>> not making the H_JOIN call.
>>
>> In this scenario we appear to have CPUs that have done an H_CEDE prior to
>> the LPM.  For these CPUs we would need to do an H_PROD to wake them back up
>> so they can do an H_JOIN and allow the LPM to continue.
>>
>> The problem is that Linux has put some threads to sleep (probably to save
>> energy or something), LPM was attempted, and Linux didn't awaken the
>> sleeping threads but issued the H_JOIN for the active threads.  Since the
>> sleeping threads don't issue the H_JOIN, the partition will never suspend.
>>
>> I am checking again with Pete regarding your concerns.
>>
>> Thanks.
>>
>>>
>>>
>>>> diff --git a/arch/powerpc/include/asm/plpar_wrappers.h b/arch/powerpc/include/asm/plpar_wrappers.h
>>>> index cff5a41..8292eff 100644
>>>> --- a/arch/powerpc/include/asm/plpar_wrappers.h
>>>> +++ b/arch/powerpc/include/asm/plpar_wrappers.h
>>>> @@ -26,10 +26,8 @@ static inline void set_cede_latency_hint(u8 latency_hint)
>>>>  	get_lppaca()->cede_latency_hint = latency_hint;
>>>>  }
>>>>  
>>>> -static inline long cede_processor(void)
>>>> -{
>>>> -	return plpar_hcall_norets(H_CEDE);
>>>> -}
>>>> +int cpu_is_ceded(int cpu);
>>>> +long cede_processor(void);
>>>>  
>>>>  static inline long extended_cede_processor(unsigned long latency_hint)
>>>>  {
>>>> diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
>>>> index de35bd8f..fea3d21 100644
>>>> --- a/arch/powerpc/kernel/rtas.c
>>>> +++ b/arch/powerpc/kernel/rtas.c
>>>> @@ -44,6 +44,7 @@
>>>>  #include <asm/time.h>
>>>>  #include <asm/mmu.h>
>>>>  #include <asm/topology.h>
>>>> +#include <asm/plpar_wrappers.h>
>>>>  
>>>>  /* This is here deliberately so it's only used in this file */
>>>>  void enter_rtas(unsigned long);
>>>> @@ -942,7 +943,7 @@ int rtas_ibm_suspend_me(u64 handle)
>>>>  	struct rtas_suspend_me_data data;
>>>>  	DECLARE_COMPLETION_ONSTACK(done);
>>>>  	cpumask_var_t offline_mask;
>>>> -	int cpuret;
>>>> +	int cpuret, cpu;
>>>>  
>>>>  	if (!rtas_service_present("ibm,suspend-me"))
>>>>  		return -ENOSYS;
>>>> @@ -991,6 +992,11 @@ int rtas_ibm_suspend_me(u64 handle)
>>>>  		goto out_hotplug_enable;
>>>>  	}
>>>>  
>>>> +	for_each_present_cpu(cpu) {
>>>> +		if (cpu_is_ceded(cpu))
>>>> +			plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
>>>> +	}
>>>
>>> There's a race condition here: there's nothing to prevent the CPUs you
>>> just PROD'ed from going back into CEDE before you do the on_each_cpu()
>>> call below.
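
To spell out the window (an illustrative interleaving, not code from the
patch):

/*
 * CPU A (rtas_ibm_suspend_me)        CPU B (idle)
 * ----------------------------       ----------------------------
 * cpu_is_ceded(B) returns 1
 * H_PROD -> B                        returns from H_CEDE
 *                                    finds no work, re-enters idle
 *                                    H_CEDE          <-- asleep again
 * on_each_cpu() IPIs B               IPI wakes B out of CEDE
 *
 * The explicit prod pass only matters if an IPI cannot wake a
 * CEDE'd CPU. But if that were true, nothing would stop B from going
 * back to sleep between the prod and the IPI anyway, so the extra
 * pass cannot close the window it is meant to close.
 */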
>>>>  	/* Call function on all CPUs.  One of us will make the
>>>>  	 * rtas call
>>>>  	 */
>>>> diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
>>>> index 41f62ca2..48ae6d4 100644
>>>> --- a/arch/powerpc/platforms/pseries/setup.c
>>>> +++ b/arch/powerpc/platforms/pseries/setup.c
>>>> @@ -331,6 +331,24 @@ static int alloc_dispatch_log_kmem_cache(void)
>>>>  }
>>>>  machine_early_initcall(pseries, alloc_dispatch_log_kmem_cache);
>>>>  
>>>> +static DEFINE_PER_CPU(int, cpu_ceded);
>>>> +
>>>> +int cpu_is_ceded(int cpu)
>>>> +{
>>>> +	return per_cpu(cpu_ceded, cpu);
>>>> +}
>>>> +
>>>> +long cede_processor(void)
>>>> +{
>>>> +	long rc;
>>>> +
>>>> +	per_cpu(cpu_ceded, raw_smp_processor_id()) = 1;
>>>
>>> And there's also a race condition here. From the other CPU's perspective
>>> the store to cpu_ceded is not necessarily ordered vs the hcall below.
>>> Which means the other CPU can see cpu_ceded = 0, and therefore not prod
>>> us, but this CPU has already called H_CEDE.
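
Concretely, even a full barrier in the proposed helper would not make the
flag a reliable handshake (a hypothetical variant, not proposed code):

long cede_processor(void)
{
        long rc;

        per_cpu(cpu_ceded, raw_smp_processor_id()) = 1;
        smp_mb();       /* hypothetical: publish the store before ceding */
        rc = plpar_hcall_norets(H_CEDE);
        per_cpu(cpu_ceded, raw_smp_processor_id()) = 0;

        return rc;
}

/*
 * The barrier orders this CPU's store against its own hcall, but the
 * prodding CPU can still load cpu_ceded just before the store becomes
 * visible, skip the H_PROD, and leave this CPU sitting in H_CEDE.
 * A check-then-act handshake like this needs the wakeup itself (the
 * on_each_cpu() IPI) to be effective against CEDE, at which point the
 * flag is unnecessary.
 */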
>>>
>>>> +	rc = plpar_hcall_norets(H_CEDE);
>>>> +	per_cpu(cpu_ceded, raw_smp_processor_id()) = 0;
>>>> +
>>>> +	return rc;
>>>> +}
>>>> +
>>>>  static void pseries_lpar_idle(void)
>>>>  {
>>>>  	/*
>>>
>>> cheers
>>>
>>>
>>
> 
