[PATCH] powerpc/mce: Fix SLB rebolting during MCE recovery path.

Nicholas Piggin npiggin at gmail.com
Tue Aug 21 20:27:02 AEST 2018


On Fri, 17 Aug 2018 14:51:47 +0530
Mahesh J Salgaonkar <mahesh at linux.vnet.ibm.com> wrote:

> From: Mahesh Salgaonkar <mahesh at linux.vnet.ibm.com>
> 
> With the powerpc next commit e7e81847478 ("powerpc/64s: move machine
> check SLB flushing to mm/slb.c"), the SLB error recovery is broken. That
> commit missed a crucial step: OR-ing the entry index into RB[52-63],
> which selects the SLB entry being rebolted. This patch fixes that.
> 
> Signed-off-by: Mahesh Salgaonkar <mahesh at linux.vnet.ibm.com>
> Reviewed-by: Nicholas Piggin <npiggin at gmail.com>
> ---
>  arch/powerpc/mm/slb.c |    5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> index 0b095fa54049..6dd9913425bc 100644
> --- a/arch/powerpc/mm/slb.c
> +++ b/arch/powerpc/mm/slb.c
> @@ -101,9 +101,12 @@ void __slb_restore_bolted_realmode(void)
>  
>  	 /* No isync needed because realmode. */
>  	for (index = 0; index < SLB_NUM_BOLTED; index++) {
> +		unsigned long rb = be64_to_cpu(p->save_area[index].esid);
> +
> +		rb = (rb & ~0xFFFul) | index;
>  		asm volatile("slbmte  %0,%1" :
>  		     : "r" (be64_to_cpu(p->save_area[index].vsid)),
> -		       "r" (be64_to_cpu(p->save_area[index].esid)));
> +		       "r" (rb));
>  	}
>  }
>  
> 
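
For context, the RB operand of slbmte encodes the effective segment ID
in bits 0-35, the valid bit in bit 36, and the SLB entry index in bits
52-63 (the low 12 bits of the doubleword), which is why the hunk above
clears the low 12 bits and ORs in the index. A minimal sketch of the
composition (constant names as in the kernel's book3s mmu-hash header):

	/* RB for slbmte: ESID | valid bit | entry index in bits 52-63 */
	unsigned long rb = (ea & ESID_MASK) | SLB_ESID_V | index;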

I'm just looking at this again. The bolted save areas do have the
index field set. So for the OS, your patch should be equivalent to
this, right?

 static inline void slb_shadow_clear(enum slb_index index)
 {
-       WRITE_ONCE(get_slb_shadow()->save_area[index].esid, 0);
+       WRITE_ONCE(get_slb_shadow()->save_area[index].esid, cpu_to_be64(index));
 }

Which seems like a better fix.
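
(The esid field is __be64, hence the cpu_to_be64() on the index.) For
reference, the update path already folds the index into the esid when
it writes the shadow; mk_esid_data() in arch/powerpc/mm/slb.c is
roughly:

 static inline unsigned long mk_esid_data(unsigned long ea, int ssize,
					  enum slb_index index)
 {
	return (ea & slb_esid_mask(ssize)) | SLB_ESID_V | index;
 }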

PAPR says:

  Note: SLB is filled sequentially starting at index 0
  from the shadow buffer ignoring the contents of
  RB field bits 52-63

So that shouldn't be an issue.
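
And since the shadow esid then always carries the index, the realmode
rebolt loop needs no masking at all; it can stay as it was before your
patch, i.e. roughly:

	/* No isync needed because realmode. */
	for (index = 0; index < SLB_NUM_BOLTED; index++)
		asm volatile("slbmte  %0,%1" :
		     : "r" (be64_to_cpu(p->save_area[index].vsid)),
		       "r" (be64_to_cpu(p->save_area[index].esid)));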

Thanks,
Nick

