[PATCH v9 05/12] mm: HUGE_VMAP arch support cleanup
Nicholas Piggin
npiggin at gmail.com
Sun Jan 24 18:43:43 AEDT 2021
Excerpts from Ding Tianhong's message of January 4, 2021 10:33 pm:
> On 2020/12/5 14:57, Nicholas Piggin wrote:
>> This changes the awkward approach where architectures provide init
>> functions to determine which levels they can provide large mappings for,
>> to one where the arch is queried for each call.
>>
>> This removes code and indirection, and allows constant-folding of dead
>> code for unsupported levels.
>>
>> This also adds a prot argument to the arch query. This is unused
>> currently but could help with some architectures (e.g., some powerpc
>> processors can't map uncacheable memory with large pages).
>>
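(To illustrate what the prot argument is for: nothing in this series
uses it yet, but a powerpc implementation could eventually do something
along these lines. This is only a sketch, not code from the patch; the
pgprot_noncached() comparison is just a stand-in for a real "can this
be mapped uncacheable with large pages?" test.)

bool arch_vmap_pmd_supported(pgprot_t prot)
{
	/*
	 * Sketch only: a CPU that cannot map uncacheable memory with
	 * large pages would refuse such mappings here, and the vmap
	 * code would then fall back to small pages.
	 */
	if (pgprot_val(prot) == pgprot_val(pgprot_noncached(PAGE_KERNEL)))
		return false;

	return radix_enabled();
}

Querying the arch for each call is what makes a prot-dependent answer
like this possible at all.
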
>> Cc: linuxppc-dev at lists.ozlabs.org
>> Cc: Catalin Marinas <catalin.marinas at arm.com>
>> Cc: Will Deacon <will at kernel.org>
>> Cc: linux-arm-kernel at lists.infradead.org
>> Cc: Thomas Gleixner <tglx at linutronix.de>
>> Cc: Ingo Molnar <mingo at redhat.com>
>> Cc: Borislav Petkov <bp at alien8.de>
>> Cc: x86 at kernel.org
>> Cc: "H. Peter Anvin" <hpa at zytor.com>
>> Acked-by: Catalin Marinas <catalin.marinas at arm.com> [arm64]
>> Signed-off-by: Nicholas Piggin <npiggin at gmail.com>
>> ---
>> arch/arm64/include/asm/vmalloc.h | 8 +++
>> arch/arm64/mm/mmu.c | 10 +--
>> arch/powerpc/include/asm/vmalloc.h | 8 +++
>> arch/powerpc/mm/book3s64/radix_pgtable.c | 8 +--
>> arch/x86/include/asm/vmalloc.h | 7 ++
>> arch/x86/mm/ioremap.c | 10 +--
>> include/linux/io.h | 9 ---
>> include/linux/vmalloc.h | 6 ++
>> init/main.c | 1 -
>> mm/ioremap.c | 88 +++++++++---------------
>> 10 files changed, 77 insertions(+), 78 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
>> index 2ca708ab9b20..597b40405319 100644
>> --- a/arch/arm64/include/asm/vmalloc.h
>> +++ b/arch/arm64/include/asm/vmalloc.h
>> @@ -1,4 +1,12 @@
>> #ifndef _ASM_ARM64_VMALLOC_H
>> #define _ASM_ARM64_VMALLOC_H
>>
>> +#include <asm/page.h>
>> +
>> +#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
>> +bool arch_vmap_p4d_supported(pgprot_t prot);
>> +bool arch_vmap_pud_supported(pgprot_t prot);
>> +bool arch_vmap_pmd_supported(pgprot_t prot);
>> +#endif
>> +
>> #endif /* _ASM_ARM64_VMALLOC_H */
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index ca692a815731..1b60079c1cef 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -1315,12 +1315,12 @@ void *__init fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
>> return dt_virt;
>> }
>>
>> -int __init arch_ioremap_p4d_supported(void)
>> +bool arch_vmap_p4d_supported(pgprot_t prot)
>> {
>> - return 0;
>> + return false;
>> }
>>
>
> I think you should put this function inside CONFIG_HAVE_ARCH_HUGE_VMAP, otherwise it may break the build
> when CONFIG_HAVE_ARCH_HUGE_VMAP is disabled, the same as for x86 and ppc.
Ah, good catch. arm64 is okay because it always selects
HAVE_ARCH_HUGE_VMAP, and powerpc is okay because it defines
these functions in a file that is only compiled for configs that
select huge vmap, but the x86-32 build without PAE breaks. I'll
fix that.
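Probably just a matter of guarding the definitions in
arch/x86/mm/ioremap.c, something like this (untested sketch; the
bodies are simply carried over from the existing
arch_ioremap_*_supported() logic, and the final fix may instead move
them into the #ifdef block in asm/vmalloc.h):

#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
bool arch_vmap_p4d_supported(pgprot_t prot)
{
	/* No huge P4D mappings on x86 */
	return false;
}

bool arch_vmap_pud_supported(pgprot_t prot)
{
#ifdef CONFIG_X86_64
	return boot_cpu_has(X86_FEATURE_GBPAGES);
#else
	return false;
#endif
}

bool arch_vmap_pmd_supported(pgprot_t prot)
{
	return boot_cpu_has(X86_FEATURE_PSE);
}
#endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
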
Thanks,
Nick