[RFC PATCH 7/9] powerpc: Add support to mask perf interrupts
maddy at linux.vnet.ibm.com
Tue Jul 26 16:25:51 AEST 2016
On Tuesday 26 July 2016 11:16 AM, Nicholas Piggin wrote:
> On Mon, 25 Jul 2016 20:22:20 +0530
> Madhavan Srinivasan <maddy at linux.vnet.ibm.com> wrote:
>> To support masking of the PMI interrupts, a couple of new interrupt
>> handler macros are added: MASKABLE_EXCEPTION_PSERIES_OOL and
>> MASKABLE_RELON_EXCEPTION_PSERIES_OOL. These are needed to include the
>> SOFTEN_TEST and implement the support in both host and guest kernels.
>> A couple of new irq #defines, "PACA_IRQ_PMI" and "SOFTEN_VALUE_0xf0*",
>> are added for use in the exception code to check for PMI interrupts.
>> The __SOFTEN_TEST macro is modified to support the PMI interrupt.
>> The present __SOFTEN_TEST code loads soft_enabled from the paca and
>> checks it to decide whether to call the masked_interrupt handler code.
>> To support both the current behaviour and PMI masking, these changes
>> are made:
>> 1) The current LR register content is saved in R11.
>> 2) The "bge" branch operation is changed to "bgel".
>> 3) R11 is restored to LR after the branch.
>> To retain PMI-as-NMI behaviour for a flag state of 1, we save the LR
>> register value in R11 and branch to the "masked_interrupt" handler
>> with LR updated. In the "masked_interrupt" handler, we check the
>> "SOFTEN_VALUE_*" value in R10 for PMI and branch back with "blr" if
>> it is a PMI.
>> To mask PMI for a flag value > 1, masked_interrupt avoids the above
>> check, continues to execute the masked_interrupt code, disables
>> MSR[EE] and updates irq_happened with the PMI info.
>> Finally, the saving of R11 is moved before calling SOFTEN_TEST in the
>> __EXCEPTION_PROLOG_1 macro to support saving of the LR value in R11.
>> Signed-off-by: Madhavan Srinivasan <maddy at linux.vnet.ibm.com>
>>  arch/powerpc/include/asm/exception-64s.h | 22 ++++++++++++++++++++--
>>  arch/powerpc/include/asm/hw_irq.h        |  1 +
>>  arch/powerpc/kernel/exceptions-64s.S     | 27 ++++++++++++++++++++++++---
>>  3 files changed, 45 insertions(+), 5 deletions(-)
>> diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
>> index 44d3f539d8a5..c951b7ab5108 100644
>> --- a/arch/powerpc/include/asm/exception-64s.h
>> +++ b/arch/powerpc/include/asm/exception-64s.h
>> @@ -166,8 +166,8 @@
>>  	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
>>  	SAVE_CTR(r10, area);						\
>>  	mfcr	r9;							\
>> -	extra(vec);							\
>>  	std	r11,area+EX_R11(r13);					\
>> +	extra(vec);							\
>>  	std	r12,area+EX_R12(r13);					\
>>  	GET_SCRATCH0(r10);						\
>>  	std	r10,area+EX_R13(r13)
>> @@ -403,12 +403,17 @@ label##_relon_hv:					\
>>  #define SOFTEN_VALUE_0xe82	PACA_IRQ_DBELL
>>  #define SOFTEN_VALUE_0xe60	PACA_IRQ_HMI
>>  #define SOFTEN_VALUE_0xe62	PACA_IRQ_HMI
>> +#define SOFTEN_VALUE_0xf01	PACA_IRQ_PMI
>> +#define SOFTEN_VALUE_0xf00	PACA_IRQ_PMI
>>  
>>  #define __SOFTEN_TEST(h, vec)						\
>>  	lbz	r10,PACASOFTIRQEN(r13);					\
>>  	cmpwi	r10,LAZY_INTERRUPT_DISABLED;				\
>>  	li	r10,SOFTEN_VALUE_##vec;					\
>> -	bge	masked_##h##interrupt
> At which point, can't we pass in the interrupt level we want to mask
> for to SOFTEN_TEST, and avoid all these extra code changes?
IIUC, we do pass the interrupt info to SOFTEN_TEST. In case of a
PMU interrupt, we will have the value as PACA_IRQ_PMI.
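To make the intended behaviour concrete, here is a rough C model of the
decision that __SOFTEN_TEST plus masked_interrupt end up making in this
patch. This is an illustration only, not kernel code; the PACA_IRQ_* bit
values below are placeholders, and only the LAZY_INTERRUPT_DISABLED name
follows the patch.

```c
#include <assert.h>
#include <stdbool.h>

#define LAZY_INTERRUPT_DISABLED	1

/* Placeholder bit values, for illustration only */
#define PACA_IRQ_EE	0x04
#define PACA_IRQ_PMI	0x20

/*
 * Rough model of __SOFTEN_TEST + masked_interrupt: returns true when
 * the interrupt is masked (MSR[EE] cleared, recorded in irq_happened
 * for later replay), false when it is delivered immediately.
 */
static bool interrupt_is_masked(int soft_enabled, int soften_value)
{
	if (soft_enabled < LAZY_INTERRUPT_DISABLED)
		return false;	/* irqs soft-enabled: deliver now */
	if (soften_value == PACA_IRQ_PMI &&
	    soft_enabled == LAZY_INTERRUPT_DISABLED)
		return false;	/* PMI keeps its NMI behaviour at level 1 */
	return true;		/* mask: disable MSR[EE], update irq_happened */
}
```

So at the common disable level (1) an external interrupt is masked while
a PMI is still delivered; only at a level above 1 is the PMI masked too.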
> PMU masked interrupt will compare with SOFTEN_LEVEL_PMU, existing
> interrupts will compare with SOFTEN_LEVEL_EE (or whatever suitable
> names there are).
>> +	mflr	r11;							\
>> +	bgel	masked_##h##interrupt;					\
>> +	mtlr	r11;
> This might corrupt return prediction when masked_interrupt does not
> return.
Hmm, this is a valid point.
> I guess that's uncommon case though.
No, it is the common case. The kernel mostly uses irq disable with
level (1) today, and only in a few places do we disable all the
interrupts. So we are going to return almost always when irqs are
soft-disabled.
Since we need to support the PMIs as NMIs when the irq disable level
is 1, we need to skip masked_interrupt.
As you mentioned, if we have a separate macro (SOFTEN_TEST_PMU),
this can be avoided, but then it is code replication and we may need
to change some more macros. But this is interesting; let me work on it.
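For comparison, the alternative Nick suggests reduces the whole test to a
single compare of the current soft-mask level against a per-interrupt
masking level, with no LR save/branch-and-link/restore dance. A sketch,
using the SOFTEN_LEVEL_* names from his mail with assumed values:

```c
#include <assert.h>
#include <stdbool.h>

/* Names from Nick's suggestion; the values here are assumptions. */
#define SOFTEN_LEVEL_EE		1	/* ordinary irqs masked at level >= 1 */
#define SOFTEN_LEVEL_PMU	2	/* PMI masked only at level >= 2 */

/*
 * Sketch of the suggested test: each interrupt vector carries the level
 * at which it becomes maskable, and SOFTEN_TEST is a single compare
 * against the current soft-mask level.
 */
static bool soften_test(int soft_enabled, int irq_mask_level)
{
	return soft_enabled >= irq_mask_level;
}
```

With this shape, the existing interrupts compare against SOFTEN_LEVEL_EE
and behave exactly as today, while the PMI stays unmasked (NMI-like) at
level 1 without any conditional branch-and-link into masked_interrupt.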
> But I think we can avoid this if we do the above, no?