[PATCH -V3 09/11] arch/powerpc: Use 50 bits of VSID in slbmte
Aneesh Kumar K.V
aneesh.kumar at linux.vnet.ibm.com
Mon Jul 23 18:21:49 EST 2012
Paul Mackerras <paulus at samba.org> writes:
> On Mon, Jul 09, 2012 at 06:43:39PM +0530, Aneesh Kumar K.V wrote:
>> From: "Aneesh Kumar K.V" <aneesh.kumar at linux.vnet.ibm.com>
>>
>> Increase the number of valid VSID bits in the slbmte instruction.
>> We will use the new bits when we increase the number of valid VSID bits.
>>
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar at linux.vnet.ibm.com>
>> ---
>> arch/powerpc/mm/slb_low.S | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/powerpc/mm/slb_low.S b/arch/powerpc/mm/slb_low.S
>> index c355af6..c1fc81c 100644
>> --- a/arch/powerpc/mm/slb_low.S
>> +++ b/arch/powerpc/mm/slb_low.S
>> @@ -226,7 +226,7 @@ _GLOBAL(slb_allocate_user)
>>   */
>>  slb_finish_load:
>>  	ASM_VSID_SCRAMBLE(r10,r9,256M)
>> -	rldimi	r11,r10,SLB_VSID_SHIFT,16	/* combine VSID and flags */
>> +	rldimi	r11,r10,SLB_VSID_SHIFT,2	/* combine VSID and flags */
>
> You can't do that without either changing ASM_VSID_SCRAMBLE or masking
> the VSID it generates to 36 bits, since the logic in ASM_VSID_SCRAMBLE
> can leave non-zero bits in the high 28 bits of the result. Similarly
> for the 1T case.
>
How about changing ASM_VSID_SCRAMBLE to clear the high bits? That would
also bring it closer to vsid_scramble().
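
For reference, here is a rough C model of what ASM_VSID_SCRAMBLE computes
in the 256M case. This is only a sketch of the arithmetic, not kernel
code, with the constants as I read them from mmu-hash64.h (double-check
your tree). The final add is where a carry into bit 36 can appear, and
the mask on the return value is what the new clrldi provides:

#include <stdint.h>

/* Constants for 256M segments, as in mmu-hash64.h of this era */
#define VSID_MULTIPLIER_256M	200730139ULL	/* 28-bit prime */
#define VSID_BITS_256M		36
#define VSID_MODULUS_256M	((1ULL << VSID_BITS_256M) - 1)

/*
 * C model of ASM_VSID_SCRAMBLE(rt, rx, 256M): compute
 * (protovsid * MULTIPLIER) % MODULUS without a divide.
 */
static uint64_t scramble_256m(uint64_t protovsid)
{
	uint64_t v = protovsid * VSID_MULTIPLIER_256M;	/* mulld: mod 2^64 */

	/* srdi/clrldi/add: fold the bits above VSID_BITS back in */
	v = (v >> VSID_BITS_256M) + (v & VSID_MODULUS_256M);
	/* addi/srdi/add: final reduction -- this add can carry into bit 36 */
	v += (v + 1) >> VSID_BITS_256M;
	/* the proposed clrldi: mask the result back to VSID_BITS */
	return v & VSID_MODULUS_256M;
}

With that, the proposed change: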
diff --git a/arch/powerpc/include/asm/mmu-hash64.h b/arch/powerpc/include/asm/mmu-hash64.h
index d24d484..173bb34 100644
--- a/arch/powerpc/include/asm/mmu-hash64.h
+++ b/arch/powerpc/include/asm/mmu-hash64.h
@@ -420,7 +420,8 @@ extern void slb_set_size(u16 size);
 	 * cases the answer is the low 36 bits of (r3 + ((r3+1) >> 36))*/\
 	addi	rx,rt,1;						\
 	srdi	rx,rx,VSID_BITS_##size;	/* extract 2^VSID_BITS bit */	\
-	add	rt,rt,rx
+	add	rt,rt,rx;						\
+	clrldi	rt,rt,(64 - VSID_BITS_##size);
 
 /* 4 bits per slice and we have one slice per 1TB */
 #if 0 /* We can't directly include pgtable.h hence this hack */
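
To see the corner case you point out concretely: if the folded
intermediate equals the modulus 2^36 - 1, the final reduction step
produces exactly 2^36, i.e. bit 36 set, and only the masked value is
the correct modular result (0). A standalone toy check, not kernel code
(the 36-bit constants assume the 256M segment size):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* folded intermediate equal to the modulus 2^36 - 1, the
	 * boundary case of the final reduction step */
	uint64_t v = (1ULL << 36) - 1;

	v += (v + 1) >> 36;	/* becomes 2^36: bit 36 is now set */

	printf("unmasked: 0x%llx (bit 36 %s)\n",
	       (unsigned long long)v, (v >> 36) ? "set" : "clear");
	/* (2^36 - 1) mod (2^36 - 1) == 0, so the masked value is right */
	printf("masked:   0x%llx\n",
	       (unsigned long long)(v & ((1ULL << 36) - 1)));
	return 0;
}

Without the clrldi, that stray bit 36 would be inserted into the VSID
field by the widened rldimi and end up in the SLB entry.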