[RFC PATCH] powerpc/mm: Reduce memory usage for mm_context_t for radix
Christophe Leroy
christophe.leroy at c-s.fr
Fri Apr 5 03:55:45 AEDT 2019
On 04/04/2019 at 18:13, Nicholas Piggin wrote:
> Christophe Leroy wrote on April 3, 2019, 4:31 am:
>>
>>
>>> On 02/04/2019 at 16:34, Aneesh Kumar K.V wrote:
>>> Currently, our mm_context_t on book3s64 includes all hash-specific
>>> context details like the slice mask and subpage protection details. We
>>> can skip allocating those on radix. This will help us save
>>> 8K per mm_context with radix translation.
>>>
>>> With the patch applied we have
>>>
>>> sizeof(mm_context_t) = 136
>>> sizeof(struct hash_mm_context) = 8288
>>>
>>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar at linux.ibm.com>
>>> ---
>>> NOTE:
>>>
>>> If we want to do this, I am still trying to figure out how best to do it
>>> without all the #ifdefs and other overhead for 8xx book3e
>>>
>>>
>>> arch/powerpc/include/asm/book3s/64/mmu-hash.h | 2 +-
>>> arch/powerpc/include/asm/book3s/64/mmu.h | 48 +++++++++++--------
>>> arch/powerpc/include/asm/book3s/64/slice.h | 6 +--
>>> arch/powerpc/kernel/paca.c | 9 ++--
>>> arch/powerpc/kernel/setup-common.c | 7 ++-
>>> arch/powerpc/mm/hash_utils_64.c | 10 ++--
>>> arch/powerpc/mm/mmu_context_book3s64.c | 16 ++++++-
>>> arch/powerpc/mm/slb.c | 2 +-
>>> arch/powerpc/mm/slice.c | 48 +++++++++----------
>>> arch/powerpc/mm/subpage-prot.c | 8 ++--
>>> 10 files changed, 91 insertions(+), 65 deletions(-)
>>>
>>> diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
>>> index a28a28079edb..d801be977623 100644
>>> --- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
>>> +++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
>>> @@ -657,7 +657,7 @@ extern void slb_set_size(u16 size);
>>>
>>> /* 4 bits per slice and we have one slice per 1TB */
>>> #define SLICE_ARRAY_SIZE (H_PGTABLE_RANGE >> 41)
>>> -#define TASK_SLICE_ARRAY_SZ(x) ((x)->context.slb_addr_limit >> 41)
>>> +#define TASK_SLICE_ARRAY_SZ(x) ((x)->context.hash_context->slb_addr_limit >> 41)
>>>
>>> #ifndef __ASSEMBLY__
>>>
>>> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
>>> index a809bdd77322..07e76e304a3b 100644
>>> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
>>> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
>>> @@ -114,6 +114,33 @@ struct slice_mask {
>>> DECLARE_BITMAP(high_slices, SLICE_NUM_HIGH);
>>> };
>>>
>>> +struct hash_mm_context {
>>> +
>>> + u16 user_psize; /* page size index */
>>
>> Could we keep that in mm_context_t ?
>
> Why do you want it there?
It was just to avoid so many changes, and the pointer indirection, for
a saving of only 2 bytes. But your suggestion below seems good.
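For readers following along, the split under discussion might be sketched roughly as below. All field names, sizes, and types here are illustrative placeholders, not the kernel's actual definitions; the point is only that the hash-specific state moves behind a pointer that radix mms never allocate:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the hash-specific state (slice masks,
 * subpage protection table, ...) split out of mm_context_t. */
struct hash_mm_context_sketch {
	uint16_t user_psize;                 /* page size index */
	unsigned long slb_addr_limit;
	unsigned char low_slices_psize[8];   /* placeholder slice arrays */
	unsigned char high_slices_psize[2048];
	/* ... remaining hash-only fields ... */
};

/* Hypothetical stand-in for the slimmed-down generic context:
 * on radix, hash_context simply stays NULL and the ~8K of
 * hash-specific state above is never allocated. */
typedef struct {
	unsigned long id;
	struct hash_mm_context_sketch *hash_context; /* NULL on radix */
	/* ... remaining generic fields ... */
} mm_context_sketch_t;
```

With this layout the per-mm cost on radix is just the pointer, while hash allocates the big structure once at context-allocation time.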
>
>
>>> @@ -155,15 +155,15 @@ static struct slice_mask *slice_mask_for_size(struct mm_struct *mm, int psize)
>>> {
>>> #ifdef CONFIG_PPC_64K_PAGES
>>> if (psize == MMU_PAGE_64K)
>>> - return &mm->context.mask_64k;
>>> + return &mm->context.hash_context->mask_64k;
>>
>> You should take the two patches below, that would help:
>> https://patchwork.ozlabs.org/patch/1059056/
>> https://patchwork.ozlabs.org/patch/1059058/
>
> The above patches seem good I think. What Aneesh should have is
> a macro or inline that gives a pointer to the hash_mm_context from
> an mm_context pointer.
Good idea. But please call it something else, as it has nothing to do
with hash outside of book3s64 (i.e. the 8xx is nohash/32).
Christophe
>
> Architectures which always want it should just put the hash struct
> in their mm_context struct and that avoids the pointer overhead
> completely.
>
> Thanks,
> Nick
>
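Nick's suggestion above could be sketched as a single accessor that hides whether the extended context lives behind a pointer (the book3s64 layout, saving memory on radix) or is embedded directly (architectures that always need it, avoiding the pointer chase). The names and the `SPLIT_CONTEXT` switch below are illustrative assumptions, not the kernel's actual API:

```c
#include <assert.h>

/* Placeholder for the split-out state; real contents don't matter here. */
struct hash_mm_context {
	unsigned long slb_addr_limit;
};

#ifdef SPLIT_CONTEXT
/* book3s64-style layout: extended state behind a pointer,
 * left unallocated on radix. */
typedef struct {
	struct hash_mm_context *hash_context;
} mm_context_t;

static inline struct hash_mm_context *mm_ctx(mm_context_t *ctx)
{
	return ctx->hash_context;            /* one pointer chase */
}
#else
/* Layout for architectures that always want the state: embed the
 * struct directly, so the accessor costs nothing extra. */
typedef struct {
	struct hash_mm_context hash_context;
} mm_context_t;

static inline struct hash_mm_context *mm_ctx(mm_context_t *ctx)
{
	return &ctx->hash_context;           /* no extra dereference */
}
#endif
```

Callers then write `mm_ctx(&mm->context)->slb_addr_limit` identically under either layout, which is what removes the `#ifdef` sprawl from the common code.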