[PATCH] powerpc32: use stmw/lmw for non volatile registers save/restore

Segher Boessenkool segher at kernel.crashing.org
Tue May 24 06:17:47 AEST 2016


On Mon, May 23, 2016 at 10:46:36AM +0200, Christophe Leroy wrote:
> lmw/stmw take 1 extra cycle (2 cycles for lmw on some ppc) and imply
> serialising, but they reduce the number of instructions and hence the
> amount of instruction fetch compared to the equivalent sequence of
> several lwz/stw. That means less pressure on the cache and fewer
> fetch delays on slow memory.

lmw/stmw do not work at all in LE mode, on most processors.  This is a
supported configuration.  NAK.
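For reference, the size argument being made is that one store-multiple
replaces a whole run of individual stores. Roughly, the patch's stmw is
shorthand for something like this (an illustrative expansion, not code
from the patch):

	# stmw rS, D(rA) stores rS..r31 at consecutive word offsets,
	# so the single instruction
	stmw	r12, 12(r3)	# r12 at 12, r13 at 16, ... r31 at 88
	# stands in for twenty individual stores:
	stw	r12, 12(r3)
	stw	r13, 16(r3)
	# ... r14 through r30 at offsets 20..84 ...
	stw	r31, 88(r3)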

> When we transfer 20 registers, it is worth it.
> gcc uses stmw/lmw at function entry/exit to save/restore the
> non-volatile registers, so let's also do it that way.

No, C code is compiled with -mno-multiple for LE configs.  Saving a few
bytes of code is not "worth it", anyway.
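In other words, on little-endian configurations the compiler is told not
to emit these instructions at all, so prologue/epilogue saves fall back
to individual stores. A rough sketch of the difference (hypothetical
output for a function saving r29..r31, not taken from any actual build):

	# with -mmultiple (usable on big-endian), gcc may emit:
	stmw	r29, -12(r1)	# save r29..r31 in one instruction
	# with -mno-multiple (as used for LE configs), the same save is:
	stw	r29, -12(r1)
	stw	r30, -8(r1)
	stw	r31, -4(r1)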

> --- a/arch/powerpc/kernel/misc_32.S
> +++ b/arch/powerpc/kernel/misc_32.S
> @@ -1086,3 +1086,25 @@ relocate_new_kernel_end:
>  relocate_new_kernel_size:
>  	.long relocate_new_kernel_end - relocate_new_kernel
>  #endif
> +
> +_GLOBAL(setjmp)
> +	mflr	r0
> +	li	r3, 0
> +	stw	r0, 0(r3)
> +	stw	r1, 4(r3)
> +	stw	r2, 8(r3)
> +	mfcr	r12
> +	stmw	r12, 12(r3)
> +	blr

This code has been tested?  I very much doubt it.
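(The `li r3, 0` above zeroes the jmp_buf pointer before any of the
stores, so they all target address 0. Leaving aside the stmw objection,
a plausible corrected sequence -- a sketch assuming the same buffer
layout, with the zero return value set only at the end -- would be:

	_GLOBAL(setjmp)
		mflr	r0
		stw	r0, 0(r3)	# saved LR
		stw	r1, 4(r3)	# saved stack pointer
		stw	r2, 8(r3)	# saved r2
		mfcr	r12
		stmw	r12, 12(r3)	# CR, then r13..r31
		li	r3, 0		# setjmp returns 0 on the direct call
		blr
)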


Segher

