[PATCH -V3 09/11] arch/powerpc: Use 50 bits of VSID in slbmte

Aneesh Kumar K.V aneesh.kumar at linux.vnet.ibm.com
Mon Jul 23 20:22:55 EST 2012


Paul Mackerras <paulus at samba.org> writes:

> On Mon, Jul 23, 2012 at 01:51:49PM +0530, Aneesh Kumar K.V wrote:
>> Paul Mackerras <paulus at samba.org> writes:
>> 
>> > On Mon, Jul 09, 2012 at 06:43:39PM +0530, Aneesh Kumar K.V wrote:
>> >> From: "Aneesh Kumar K.V" <aneesh.kumar at linux.vnet.ibm.com>
>> >> 
>> >> Increase the number of valid VSID bits accepted by the slbmte
>> >> instruction. The additional bits will be used when we later widen
>> >> the valid VSID range.
>> >> 
>> >> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar at linux.vnet.ibm.com>
>> >> ---
>> >>  arch/powerpc/mm/slb_low.S |    4 ++--
>> >>  1 file changed, 2 insertions(+), 2 deletions(-)
>> >> 
>> >> diff --git a/arch/powerpc/mm/slb_low.S b/arch/powerpc/mm/slb_low.S
>> >> index c355af6..c1fc81c 100644
>> >> --- a/arch/powerpc/mm/slb_low.S
>> >> +++ b/arch/powerpc/mm/slb_low.S
>> >> @@ -226,7 +226,7 @@ _GLOBAL(slb_allocate_user)
>> >>   */
>> >>  slb_finish_load:
>> >>  	ASM_VSID_SCRAMBLE(r10,r9,256M)
>> >> -	rldimi	r11,r10,SLB_VSID_SHIFT,16	/* combine VSID and flags */
>> >> +	rldimi	r11,r10,SLB_VSID_SHIFT,2	/* combine VSID and flags */
>> >
>> > You can't do that without either changing ASM_VSID_SCRAMBLE or masking
>> > the VSID it generates to 36 bits, since the logic in ASM_VSID_SCRAMBLE
>> > can leave non-zero bits in the high 28 bits of the result.  Similarly
>> > for the 1T case.
>> >
>> 
>> How about changing ASM_VSID_SCRAMBLE to clear the high bits? That
>> would also make it closer to vsid_scramble().
>
> One more instruction in a hot path - I'd rather not.  How about
> changing the rldimi instruction to:
> 	rldimi	r11,r10,SLB_VSID_SHIFT,(64-SLB_VSID_SHIFT-VSID_BITS_256M)
>
> and similarly for the 1T case.  That will give the proper masking
> when you change VSID_BITS_256M.
>

This is better. I have made this change.
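For anyone following along, the effect of Paul's suggested encoding can be
sketched outside the kernel. The model below is an illustrative Python
rendering of the Power `rldimi` (rotate-left-doubleword-immediate-then-insert)
semantics, not kernel code; the constants use the pre-series values
(SLB_VSID_SHIFT = 12, VSID_BITS_256M = 36), and the flag/VSID values are
made up for the example. It shows why the MB operand
64 - SLB_VSID_SHIFT - VSID_BITS_256M masks off any junk that
ASM_VSID_SCRAMBLE leaves in the high bits:

```python
def rldimi(ra, rs, sh, mb):
    """Model of the 64-bit Power rldimi instruction: rotate rs left
    by sh, then insert it into ra under a mask. mb uses IBM bit
    numbering (bit 0 = MSB); the mask covers IBM bits mb..63-sh,
    which is LSB bits sh..63-mb."""
    rot = ((rs << sh) | (rs >> (64 - sh))) & 0xFFFFFFFFFFFFFFFF
    mask = ((1 << (64 - mb)) - 1) & ~((1 << sh) - 1)
    return (rot & mask) | (ra & ~mask)

SLB_VSID_SHIFT = 12   # pre-series value
VSID_BITS_256M = 36   # pre-series value

flags = 0x190                          # arbitrary example SLB flag bits
vsid = (0xDEAD << 40) | 0x123456789    # scramble output with junk high bits

# Paul's suggested MB operand: keep exactly VSID_BITS_256M bits of VSID.
mb = 64 - SLB_VSID_SHIFT - VSID_BITS_256M
r11 = rldimi(flags, vsid, SLB_VSID_SHIFT, mb)

# Only the low VSID_BITS_256M bits of the VSID survive, shifted into
# place above the flags; the 0xDEAD junk is masked away.
assert r11 == (((vsid & ((1 << VSID_BITS_256M) - 1)) << SLB_VSID_SHIFT)
               | flags)
```

Because MB is derived from VSID_BITS_256M, bumping the VSID width in a
later patch automatically widens the insert mask, which is why no extra
masking instruction is needed in the hot path.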

-aneesh
