[PATCH 5/8] powerpc: Restore FPU/VEC/VSX if previously used

Michael Ellerman mpe at ellerman.id.au
Mon Nov 23 10:07:13 AEDT 2015


On Mon, 2015-11-23 at 09:18 +1100, Cyril Bur wrote:
> On Fri, 20 Nov 2015 22:01:04 +1100
> Michael Ellerman <mpe at ellerman.id.au> wrote:
> > On Wed, 2015-11-18 at 14:26 +1100, Cyril Bur wrote:
> > > diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> > > index c8b4225..46e9869 100644
> > > --- a/arch/powerpc/kernel/entry_64.S
> > > +++ b/arch/powerpc/kernel/entry_64.S
> > > @@ -210,7 +210,54 @@ system_call:			/* label this so stack traces look sane */
> > >  	li	r11,-MAX_ERRNO
> > >  	andi.	r0,r9,(_TIF_SYSCALL_DOTRACE|_TIF_SINGLESTEP|_TIF_USER_WORK_MASK|_TIF_PERSYSCALL_MASK)
> > >  	bne-	syscall_exit_work
> > > -	cmpld	r3,r11
> > > +
> > > +	/*
> > > +	 * This is an assembly version of checks performed in restore_math()
> > > +	 * to avoid calling C unless absolutely necessary.
> > > +	 * Note: In order to simplify the assembly, if the FP or VEC registers
> > > +	 * are hot (and therefore restore_math() isn't called) the
> > > +	 * LOAD_{FP,VEC} thread counter doesn't get incremented.
> > > +	 * This is likely the best thing to do anyway because hot regs indicate
> > > +	 * that the workload is doing a lot of syscalls that can be handled
> > > +	 * quickly and without the need to touch FP or VEC regs (by the kernel).
> > > +	 * a) If this workload is long running then this is exactly what the
> > > +	 * kernel should be doing.
> > > +	 * b) If this workload isn't long running then we'll soon fall back to
> > > +	 * calling into C and the counter will be incremented regularly again
> > > +	 * anyway.
> > > +	 */
> > > +	ld	r9,PACACURRENT(r13)
> > > +	andi.	r0,r8,MSR_FP
> > > +	addi	r9,r9,THREAD
> > > +	lbz	r5,THREAD_LOAD_FP(r9)
> > > +	/*
> > > +	 * Goto 2 if !r0 && r5.
> > > +	 * The cmpb works because r5 can only have bits set in its lowest
> > > +	 * byte, while r0 may or may not have bit 13 set (a different byte)
> > > +	 * but always has a zero low byte. Therefore the low bytes must
> > > +	 * differ if r5 is set, and the bit 13 byte must be equal if !r0.
> > > +	 */
> > > +	cmpb	r7,r0,r5
> > 
> > cmpb is new since Power6, which means it doesn't exist on Cell -> Program Check :)
> > 
> Oops, sorry.

That's fine, there's almost no way for you to know that from reading the
documentation.

> > I'm testing a patch using crandc, but I don't like it.
> > 
> > I'm not a big fan of the logic here, it's unpleasantly complicated. Did you
> > benchmark going to C to do the checks? Or I wonder if we could just check
> > THREAD_LOAD_FP || THREAD_LOAD_VEC and if either is set we go to restore_math().
> > 
> 
> I didn't benchmark going to C mostly because you wanted to avoid calling C
> unless necessary in that path. Based off the results I got benchmarking this
> series I expect calling C will also be in the noise of removing the
> exception.

Yeah I figured it was probably me that said "avoid C at all costs". But I've
changed my mind ;)

> > Or on the other hand we check !MSR_FP && !MSR_VEC and if so we go to
> > restore_math()?
> 
> That seems like the best check to leave in the assembly if you want to avoid
> complicated assembly in there.

Cool. If you can benchmark that that'd be great, mmkay.

cheers
