[PATCH] powerpc/mce: Fix SLB rebolting during MCE recovery path.

Nicholas Piggin npiggin at gmail.com
Thu Aug 23 14:36:31 AEST 2018


On Thu, 23 Aug 2018 09:58:31 +0530
Mahesh Jagannath Salgaonkar <mahesh at linux.vnet.ibm.com> wrote:

> On 08/21/2018 03:57 PM, Nicholas Piggin wrote:
> > On Fri, 17 Aug 2018 14:51:47 +0530
> > Mahesh J Salgaonkar <mahesh at linux.vnet.ibm.com> wrote:
> >   
> >> From: Mahesh Salgaonkar <mahesh at linux.vnet.ibm.com>
> >>
> >> With the powerpc next commit e7e81847478 ("powerpc/64s: move machine
> >> check SLB flushing to mm/slb.c"), the SLB error recovery is broken. The
> >> commit missed a crucial change: OR-ing the index value into RB[52-63],
> >> which selects the SLB entry while rebolting. This patch fixes that.
> >>
> >> Signed-off-by: Mahesh Salgaonkar <mahesh at linux.vnet.ibm.com>
> >> Reviewed-by: Nicholas Piggin <npiggin at gmail.com>
> >> ---
> >>  arch/powerpc/mm/slb.c |    5 ++++-
> >>  1 file changed, 4 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> >> index 0b095fa54049..6dd9913425bc 100644
> >> --- a/arch/powerpc/mm/slb.c
> >> +++ b/arch/powerpc/mm/slb.c
> >> @@ -101,9 +101,12 @@ void __slb_restore_bolted_realmode(void)
> >>  
> >>  	 /* No isync needed because realmode. */
> >>  	for (index = 0; index < SLB_NUM_BOLTED; index++) {
> >> +		unsigned long rb = be64_to_cpu(p->save_area[index].esid);
> >> +
> >> +		rb = (rb & ~0xFFFul) | index;
> >>  		asm volatile("slbmte  %0,%1" :
> >>  		     : "r" (be64_to_cpu(p->save_area[index].vsid)),
> >> -		       "r" (be64_to_cpu(p->save_area[index].esid)));
> >> +		       "r" (rb));
> >>  	}
> >>  }
> >>  
> >>  
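To make the missed step concrete, here is a minimal stand-alone sketch of
the RB composition the hunk above adds. The esid value and index are made
up for illustration; only the mask-and-OR mirrors the diff:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Made-up saved esid for a bolted entry; the low 12 bits of the
	 * slbmte RB operand must carry the SLB entry index instead. */
	uint64_t esid = 0xc000000000000400ULL;
	unsigned long index = 2;	/* bolted slot being restored */

	/* Same operation as the patch: clear RB[52-63], OR in index. */
	uint64_t rb = (esid & ~0xFFFULL) | index;

	printf("esid = 0x%016llx -> rb = 0x%016llx\n",
	       (unsigned long long)esid, (unsigned long long)rb);
	return 0;
}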
> > 
> > I'm just looking at this again. The bolted save areas do have the
> > index field set. So for the OS, your patch should be equivalent to
> > this, right?
> > 
> >  static inline void slb_shadow_clear(enum slb_index index)
> >  {
> > -       WRITE_ONCE(get_slb_shadow()->save_area[index].esid, 0);
> > +       WRITE_ONCE(get_slb_shadow()->save_area[index].esid, index);
> >  }
> > 
> > Which seems like a better fix.  
> 
> Yeah this also fixes the issue. The only additional change required is
> cpu_to_be64(index).

Ah yep.

> As long as we maintain the index in the bolted save areas
> (for both valid and invalid entries) we should be ok. Will respin v2
> with this change.

Cool, Reviewed-by: Nicholas Piggin <npiggin at gmail.com> in that case :)
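
For reference, a sketch of what the respun helper discussed above would
presumably look like, combining the two mails (assuming the shadow save
area stays big-endian as Mahesh notes):

static inline void slb_shadow_clear(enum slb_index index)
{
	/*
	 * Keep the entry index in the (invalidated) esid field, stored
	 * big-endian like the rest of the save area, so that rebolting
	 * in __slb_restore_bolted_realmode() targets the right slot.
	 */
	WRITE_ONCE(get_slb_shadow()->save_area[index].esid,
		   cpu_to_be64(index));
}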

Thanks,
Nick

