[TECH TOPIC] Reaching consensus on CONFIG_HIGHMEM phaseout

Arnd Bergmann arnd at arndb.de
Fri Sep 12 20:17:26 AEST 2025


On Fri, Sep 12, 2025, at 11:32, Andreas Larsson wrote:
> On 2025-09-11 09:53, Arnd Bergmann wrote:
>> On Thu, Sep 11, 2025, at 07:38, Andreas Larsson wrote:
>>>
>>> We have an upcoming SoC with support for up to 16 GiB of DRAM. When that is
>>> used in LEON sparc32 configuration (using 36-bit physical addressing), a
>>> removed CONFIG_HIGHMEM would be a considerable limitation, even after an
>>> introduction of different CONFIG_VMSPLIT_* options for sparc32.
>> 
>> I agree that without highmem that chip is going to be unusable from Linux,
>> but I wonder if there is a chance to actually use it even with highmem,
>> for a combination of reasons:
>
> I would definitely not call it unusable in LEON sparc32 mode with
> HIGHMEM gone, but it would of course be seriously hampered memory-wise
> compared to having HIGHMEM support.

I meant specifically that a configuration with 16GB of RAM would be
unusable.

>> - If you come up with patches to extend lowmem to 2GB at the expense
>>   of a lower TASK_SIZE, you're still looking at a ratio of 7:1 with
>>   14GB of highmem on the maxed-out configuration, so many workloads
>>   would still struggle to actually use that memory for page cache.
>
> Yes, we already have patches for 36-bit addressing with 64-bit
> phys_addr_t. Patches for CONFIG_VMSPLIT_* are under development.

Ok

> Even with 192 MiB lowmem we have been using up to 4 GiB without running
> into problems. Could you elaborate on why you think lowmem would run out
> before 14 GiB highmem in a VMSPLIT_3G or VMSPLIT_2G configuration?
>
> And even if 14 GiB highmem would be hard to get full usage out of, for a
> board with 8 GiB memory (or a configuration limiting 16 GiB down to only
> use 8 GiB or somewhere in between) the difference between getting to use
> 2 GiB and 8 GiB is quite hefty.

This is highly workload-dependent, but usually what happens is that
one type of allocation fills up lowmem to the point where the
system runs out of memory. This could be any of:

- AFAICT the mem_map[] array uses 40 bytes per physical page on sparc64,
  so with 16GB you need 160MB of lowmem for the mem_map[] alone (see the
  rough numbers sketched after this list).
- to actually access the memory from user space, you need several tasks
  that each map a portion of the physical memory. Each task requires
  at least page tables, task_struct, kernel stack, inodes, vma structures
  etc, all of which have to be in lowmem.
- anything you find in /proc/slabinfo comes from lowmem, and for any
  network- or filesystem-heavy workload, there are a lot of those

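To make the mem_map[] number above concrete, here is a rough
back-of-the-envelope sketch (not kernel code). It assumes 4 KiB pages
and 40 bytes per struct page; the exact struct page size depends on
the architecture and kernel configuration:

    /* Rough sketch: lowmem consumed by mem_map[] for 16 GiB of RAM,
     * assuming 4 KiB pages and a 40-byte struct page.
     */
    #include <stdio.h>

    int main(void)
    {
            unsigned long long ram = 16ULL << 30;        /* 16 GiB of RAM */
            unsigned long long page_size = 4096;         /* 4 KiB pages */
            unsigned long long struct_page = 40;         /* assumed bytes per struct page */

            unsigned long long pages = ram / page_size;  /* ~4.2 million pages */
            unsigned long long mem_map = pages * struct_page;

            printf("pages: %llu, mem_map: %llu MiB of lowmem\n",
                   pages, mem_map >> 20);
            return 0;
    }

That 160MB has to come out of lowmem before a single user page
is even mapped.
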
It's easy to construct an artificial test case that maximises the
highmem usage, but much harder to change an existing workload to
use highmem without using more lowmem as well.

>> When you say "used in LEON sparc32 configuration", does that mean
>> you can also run Linux in some other configuration like an rv64
>> kernel on a NOEL-V core on that chip?
>
> Yes, bootstrapping will select between sparc32 LEON and rv64 NOEL-V.

>> Aside from the upcoming SoC and whatever happens to that, what is
>> the largest LEON Linux memory configuration that you know is used
>> in production today and still requires kernel updates beyond ~2029?
>
> The largest systems I know of that are currently in production have
> the capacity for up to 2 GiB of memory.

Ok. The 2GB point is clearly the one that works well enough on
x86-32, arm32, powerpc32 and others: VMSPLIT_3G plus highmem
gives you a 1:1 or 2:1 ratio of highmem to lowmem, and a single
process is able to use all the available memory in its 3GB
virtual address space. Alternatively, you can already put the
entire 2GB into lowmem on those architectures with
VMSPLIT_2G_OPT, again depending on your workload.

Once you go to 4GB and beyond, you really want a 64-bit kernel.
CPU designers have added 36-bit addressing to all the major
32-bit architectures at some point, but it's been very rare
that this was actually used after 64-bit CPUs became available.

    Arnd

