[RFCv2 5/9] arch/powerpc: Split hash page table sizing heuristic into a helper

Anshuman Khandual khandual at linux.vnet.ibm.com
Thu Feb 4 21:56:20 AEDT 2016


On 02/02/2016 06:34 AM, David Gibson wrote:
> On Mon, Feb 01, 2016 at 12:34:32PM +0530, Anshuman Khandual wrote:
>> On 01/29/2016 10:53 AM, David Gibson wrote:
>>> htab_get_table_size() either retrieves the size of the hash page table (HPT)
>>> from the device tree - if the HPT size is determined by firmware - or
>>> uses a heuristic to determine a good size based on RAM size if the kernel
>>> is responsible for allocating the HPT.
>>>
>>> To support a PAPR extension allowing resizing of the HPT, we're going to
>>> want the memory size -> HPT size logic elsewhere, so split it out into a
>>> helper function.
>>>
>>> Signed-off-by: David Gibson <david at gibson.dropbear.id.au>
>>> ---
>>>  arch/powerpc/include/asm/mmu-hash64.h |  3 +++
>>>  arch/powerpc/mm/hash_utils_64.c       | 30 +++++++++++++++++-------------
>>>  2 files changed, 20 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/arch/powerpc/include/asm/mmu-hash64.h b/arch/powerpc/include/asm/mmu-hash64.h
>>> index 7352d3f..cf070fd 100644
>>> --- a/arch/powerpc/include/asm/mmu-hash64.h
>>> +++ b/arch/powerpc/include/asm/mmu-hash64.h
>>> @@ -607,6 +607,9 @@ static inline unsigned long get_kernel_vsid(unsigned long ea, int ssize)
>>>  	context = (MAX_USER_CONTEXT) + ((ea >> 60) - 0xc) + 1;
>>>  	return get_vsid(context, ea, ssize);
>>>  }
>>> +
>>> +unsigned htab_shift_for_mem_size(unsigned long mem_size);
>>> +
>>>  #endif /* __ASSEMBLY__ */
>>>  
>>>  #endif /* _ASM_POWERPC_MMU_HASH64_H_ */
>>> diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
>>> index e88a86e..d63f7dc 100644
>>> --- a/arch/powerpc/mm/hash_utils_64.c
>>> +++ b/arch/powerpc/mm/hash_utils_64.c
>>> @@ -606,10 +606,24 @@ static int __init htab_dt_scan_pftsize(unsigned long node,
>>>  	return 0;
>>>  }
>>>  
>>> -static unsigned long __init htab_get_table_size(void)
>>> +unsigned htab_shift_for_mem_size(unsigned long mem_size)
>>>  {
>>> -	unsigned long mem_size, rnd_mem_size, pteg_count, psize;
>>> +	unsigned memshift = __ilog2(mem_size);
>>> +	unsigned pshift = mmu_psize_defs[mmu_virtual_psize].shift;
>>> +	unsigned pteg_shift;
>>> +
>>> +	/* round mem_size up to next power of 2 */
>>> +	if ((1UL << memshift) < mem_size)
>>> +		memshift += 1;
>>> +
>>> +	/* aim for 2 pages / pteg */
>>
>> While here I guess it's a good opportunity to write a couple of lines
>> about why one PTE group for every two physical pages on the system,
> 
> Well, that I don't really know, it's just copied from the existing code.

Aneesh, would you know why?
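(If my arithmetic is right, with 4K pages that ratio gives one 128-byte
PTEG, i.e. 8 HPTE slots, for every two pages of RAM, so roughly a 1/64
HPT-to-memory ratio and 4 hash slots of headroom per page, but a comment
in the code spelling that out would still help.)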

> 
>> why minimum (1UL << 11 = 2048) number of PTE groups required,

Aneesh, would you know why? My guess is that 2^11 PTEGs of 128 bytes each
works out to 256KB (2^18 bytes), which I believe is the minimum HPT size
the architecture permits, but it would be good to confirm.

> 
> Ok.
> 
>> why
>> (1U << 7 = 128) entries per PTE group
> 
> Um.. what?  Because that's how big a PTEG is, I don't think
> re-explaining the HPT structure here is useful.

Agreed, though I think these things should be defined as macros
somewhere rather than used as hard-coded numbers like this.
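
To make that concrete, something along these lines is what I have in
mind. This is only a sketch with made-up macro names, and I am assuming
the part of the helper trimmed from the quote above keeps the existing
heuristic, i.e. one PTEG per two pages, clamped to what I believe is the
256KB architectural minimum:

/* Hypothetical names, purely to illustrate the point */
#define HPTEG_SIZE_SHIFT	7	/* a PTEG is 128 bytes: 8 HPTEs of 16 bytes each */
#define HTAB_MIN_SHIFT		18U	/* 2^11 PTEGs * 128 bytes = 256KB minimum HPT */

unsigned htab_shift_for_mem_size(unsigned long mem_size)
{
	unsigned memshift = __ilog2(mem_size);
	unsigned pshift = mmu_psize_defs[mmu_virtual_psize].shift;
	unsigned pteg_shift;

	/* round mem_size up to next power of 2 */
	if ((1UL << memshift) < mem_size)
		memshift += 1;

	/* aim for 2 pages / pteg */
	pteg_shift = memshift - (pshift + 1);

	/* never size the table below the architectural minimum */
	return max(pteg_shift + HPTEG_SIZE_SHIFT, HTAB_MIN_SHIFT);
}

Presumably htab_get_table_size() then just shifts 1UL by whatever this
returns in the portion of the diff trimmed above.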


