[TECH TOPIC] Reaching consensus on CONFIG_HIGHMEM phaseout

H. Peter Anvin hpa at zytor.com
Fri Sep 12 19:36:58 AEST 2025


On September 12, 2025 2:32:04 AM PDT, Andreas Larsson <andreas at gaisler.com> wrote:
>On 2025-09-11 09:53, Arnd Bergmann wrote:
>> On Thu, Sep 11, 2025, at 07:38, Andreas Larsson wrote:
>>>
>>> We have an upcoming SoC with support for up to 16 GiB of DRAM. When that is
>>> used in LEON sparc32 configuration (using 36-bit physical addressing), a
>>> removed CONFIG_HIGHMEM would be a considerable limitation, even after an
>>> introduction of different CONFIG_VMSPLIT_* options for sparc32.
>> 
>> I agree that without highmem that chip is going to be unusable from Linux,
>> but I wonder if there is a chance to actually use it even with highmem,
>> for a combination of reasons:
>
>I would definitely not call it unusable in LEON sparc32 mode with
>HIGHMEM gone, but it would of course be seriously hampered memory-wise
>without HIGHMEM support compared to with HIGHMEM. In NOEL-V 64-bit
>RISC-V mode it will of course not be affected by these matters.
>
>
>> - sparc32 has 36-bit addressing in the MMU, but Linux apparently never
>>   supported a 64-bit phys_addr_t here, which would be required.
>>   This is probably the easiest part and I assume you already have patches
>>   for it.
>> 
>> - As far as I can tell, the current lowmem area is 192MB, which would
>>   be ok(-ish) on a 512MB maxed-out SPARCstation, but for anything bigger
>>   you likely run out of lowmem long before being able to touch all
>>   the highmem pages. This obviously depends a lot on the workload.
>> 
>> - If you come up with patches to extend lowmem to 2GB at the expense
>>   of a lower TASK_SIZE, you're still looking at a ratio of 7:1 with
>>   14GB of highmem on the maxed-out configuration, so many workloads
>>   would still struggle to actually use that memory for page cache.
>
>Yes, we already have patches for 36-bit addressing with 64-bit
>phys_addr_t. Patches for CONFIG_VMSPLIT_* are under development.
>
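
A minimal userspace sketch (not the actual sparc32 code; the PFN value is
made up for illustration) of why 36-bit physical addressing forces a 64-bit
phys_addr_t: shifting a page frame number that sits above the 4 GiB boundary
into a 32-bit type silently truncates the address.

/* Illustrative only: truncation of a >4 GiB physical address when
 * phys_addr_t is 32 bits wide. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12	/* 4 KiB pages */

int main(void)
{
	uint32_t pfn = 0x180000;	/* made-up frame at 6 GiB, above the 32-bit limit */

	uint32_t phys32 = pfn << PAGE_SHIFT;		/* truncates to 0x80000000 */
	uint64_t phys64 = (uint64_t)pfn << PAGE_SHIFT;	/* full 36-bit address 0x180000000 */

	printf("32-bit phys_addr_t: 0x%08" PRIx32 "\n", phys32);
	printf("64-bit phys_addr_t: 0x%09" PRIx64 "\n", phys64);
	return 0;
}
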
>Even with 192 MiB lowmem we have been using up to 4 GiB without running
>into problems. Could you elaborate on why you think lowmem would run out
>before 14 GiB highmem in a VMSPLIT_3G or VMSPLIT_2G configuration?
>
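
Not speaking for Arnd, but the usual reason lowmem becomes the bottleneck is
that the memmap (the struct page array) for *all* of RAM, plus page tables,
kernel stacks and slab caches, has to live in lowmem. A back-of-envelope
sketch, assuming 4 KiB pages and roughly 32 bytes per struct page on a
32-bit kernel (the exact size depends on configuration):

/* Back-of-envelope only: lowmem consumed by the memmap alone for various
 * total RAM sizes.  The 32-byte struct page size is an assumption. */
#include <stdio.h>

#define PAGE_SIZE	4096ULL
#define STRUCT_PAGE	32ULL		/* assumed sizeof(struct page) on 32-bit */
#define MiB		(1024ULL * 1024)
#define GiB		(1024 * MiB)

int main(void)
{
	unsigned long long ram_gib[] = { 2, 4, 8, 16 };

	for (int i = 0; i < 4; i++) {
		unsigned long long pages = ram_gib[i] * GiB / PAGE_SIZE;
		unsigned long long memmap = pages * STRUCT_PAGE;

		printf("%2llu GiB RAM -> %llu MiB of lowmem just for the memmap\n",
		       ram_gib[i], memmap / MiB);
	}
	return 0;
}

At 16 GiB that is about 128 MiB of a 192 MiB lowmem area gone before any
page tables, slabs or kernel stacks are counted, which is presumably why a
larger lowmem via VMSPLIT helps so much.
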
>And even if it would be hard to get full usage out of 14 GiB of highmem,
>for a board with 8 GiB of memory (or a configuration limiting 16 GiB down
>to 8 GiB, or somewhere in between), the difference between getting to use
>2 GiB and 8 GiB is quite hefty.
>
> 
>> - If we remove HIGHPTE (as discussed in this thread) but keep HIGHMEM,
>>   you probably still lose on the 16GB configuration. On 4GB configurations,
>>   HIGHPTE is not really a requirement, but for workloads with many
>>   concurrent tasks using a lot of virtual address space, you would
>>   likely want to /add/ HIGHPTE support on sparc32 first.
>
>That is an interesting point. Regardless of workload though, there would
>still be a huge difference between having or not having HIGHMEM, with or
>without HIGHPTE.
>
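
To put a rough number on the HIGHPTE point: without HIGHPTE every PTE page
has to sit in lowmem. A sketch assuming a generic two-level layout with
4 KiB pages and 4-byte PTEs, so one PTE page maps 4 MiB of virtual space
(the real srmmu table geometry differs), and with made-up workload numbers:

/* Rough sketch only: lowmem spent on PTE pages without HIGHPTE under the
 * assumed generic two-level layout and an illustrative workload. */
#include <stdio.h>

#define PAGE_SIZE	4096ULL
#define PTE_SIZE	4ULL
#define VA_PER_PTE_PAGE	(PAGE_SIZE / PTE_SIZE * PAGE_SIZE)	/* 4 MiB */
#define MiB		(1024ULL * 1024)

int main(void)
{
	unsigned long long tasks = 500;			/* illustrative task count */
	unsigned long long va_per_task = 512 * MiB;	/* mapped VA per task */

	unsigned long long pte_pages = tasks * (va_per_task / VA_PER_PTE_PAGE);
	unsigned long long lowmem = pte_pages * PAGE_SIZE;

	printf("%llu tasks x %llu MiB mapped -> %llu PTE pages = %llu MiB of lowmem\n",
	       tasks, va_per_task / MiB, pte_pages, lowmem / MiB);
	return 0;
}

With enough concurrent tasks mapping a lot of virtual address space, the PTE
pages alone can exceed a small lowmem area, which is the case where adding
HIGHPTE support would pay off.
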
>
>> When you say "used in LEON sparc32 configuration", does that mean
>> you can also run Linux in some other configuration like an rv64
>> kernel on a NOEL-V core on that chip?
>
>Yes, bootstrapping will select between sparc32 LEON and rv64 NOEL-V.
>
>
>> Aside from the upcoming SoC and whatever happens to that, what is
>> the largest LEON Linux memory configuration that you know is used
>> in production today and still requires kernel updates beyond ~2029?
>
>The largest configuration I know of for systems currently in production
>has capacity for up to 2 GiB of memory.
>
>
>Cheers,
>Andreas
>
>

SPARC32 has a 4:4 address space.  You still use HIGHMEM?!

