[PATCH v2 1/4] powerpc/64: handle MSR EE and RI in interrupt entry wrapper

Nicholas Piggin <npiggin at gmail.com>
Mon Aug 30 17:32:36 AEST 2021


Excerpts from Daniel Axtens's message of August 27, 2021 5:31 pm:
> Hi,
> 
>> Similarly to the system call change in the previous patch, the mtmsrd to
> 
> I don't actually know what patch this was - I assume it's from a series
> that has since been applied?

Good catch, yes, that used to be in the same series. It's now merged as
dd152f; I'll update the changelog.

>> enable RI can be combined with the mtmsrd to enable EE for interrupts
>> which enable the latter, which tends to be the important synchronous
>> interrupts (i.e., page faults).
>>
>> Do this by enabling EE and RI together at the beginning of the entry
>> wrapper if PACA_IRQ_HARD_DIS is clear, and just enabling RI if it is set
>> (which means something wanted EE=0).
> 
> 
>> diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
>> index 6b800d3e2681..e3228a911b35 100644
>> --- a/arch/powerpc/include/asm/interrupt.h
>> +++ b/arch/powerpc/include/asm/interrupt.h
>> @@ -148,9 +148,21 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
>>  #endif
>>  
>>  #ifdef CONFIG_PPC64
>> -	if (irq_soft_mask_set_return(IRQS_ALL_DISABLED) == IRQS_ENABLED)
>> +	bool trace_enable = false;
>> +
>> +	if (IS_ENABLED(CONFIG_TRACE_IRQFLAGS)) {
>> +		if (irq_soft_mask_set_return(IRQS_DISABLED) == IRQS_ENABLED)
> 
> In the previous code we had IRQS_ALL_DISABLED, now we just have
> IRQS_DISABLED. Is that intentional?

Hmm, no, it should be IRQS_ALL_DISABLED. I don't think it matters much
in practice because MSR[EE] is disabled, but it obviously shouldn't be
changed by this patch (and IMO shouldn't be changed at all; it should
stay ALL_DISABLED).
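
For v3 that hunk would look something like this (untested sketch, same
as the quoted code but with ALL_DISABLED restored):

	bool trace_enable = false;

	if (IS_ENABLED(CONFIG_TRACE_IRQFLAGS)) {
		/* trace_hardirqs_off() is only wanted if irqs were soft-enabled */
		if (irq_soft_mask_set_return(IRQS_ALL_DISABLED) == IRQS_ENABLED)
			trace_enable = true;
	} else {
		irq_soft_mask_set(IRQS_ALL_DISABLED);
	}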

> 
>> +			trace_enable = true;
>> +	} else {
>> +		irq_soft_mask_set(IRQS_DISABLED);
>> +	}
>> +	/* If the interrupt was taken with HARD_DIS set, don't enable MSR[EE] */
>> +	if (local_paca->irq_happened & PACA_IRQ_HARD_DIS)
>> +		__hard_RI_enable();
>> +	else
>> +		__hard_irq_enable();
>> +	if (trace_enable)
>>  		trace_hardirqs_off();
>> -	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
>>  
>>  	if (user_mode(regs)) {
>>  		CT_WARN_ON(ct_state() != CONTEXT_USER);
> 
> 
>> @@ -901,11 +892,6 @@ INT_DEFINE_BEGIN(system_reset)
>>  	IVEC=0x100
>>  	IAREA=PACA_EXNMI
>>  	IVIRT=0 /* no virt entry point */
>> -	/*
>> -	 * MSR_RI is not enabled, because PACA_EXNMI and nmi stack is
>> -	 * being used, so a nested NMI exception would corrupt it.
>> -	 */
>> -	ISET_RI=0
>>  	ISTACK=0
>>  	IKVM_REAL=1
>>  INT_DEFINE_END(system_reset)
> 
>> @@ -986,8 +972,6 @@ EXC_COMMON_BEGIN(system_reset_common)
> 
> Right before this change, there's a comment that reads:
> 
> 	/*
> 	 * Increment paca->in_nmi then enable MSR_RI. SLB or MCE will be able
> 	 * to recover, but nested NMI will notice in_nmi and not recover
>     ...
> 
> You've taken out the bit that enables MSR_RI, which means the comment is
> no longer accurate.

Ah yep, that should be okay because we moved the RI enable to the NMI 
entry wrapper. The comment just needs a tweak.
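
Something like this, perhaps (just a sketch of how the first part of
that comment could read; the rest of it stays as-is):

	/*
	 * Increment paca->in_nmi. MSR_RI is now enabled in the NMI entry
	 * wrapper rather than here, so SLB or MCE will be able to recover,
	 * but nested NMI will notice in_nmi and not recover
	 * ...
	 */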

> 
> Beyond that, I'm still trying to follow all the various changes in code
> flows. It seems to make at least vague sense: we move the setting of
> MSR_RI from the early asm to interrupt*enter_prepare. I'm just
> struggling to convince myself that, when we hit the underlying handler,
> the RI states all line up.

Thanks,
Nick

