NUMA memory block size

Olof Johansson olof at
Tue Apr 6 04:44:17 EST 2004

Mike Kravetz wrote:
> On Sat, Apr 03, 2004 at 12:50:13AM -0600, Olof Johansson wrote:
>>2. A lmb_alloc() approach has the benefit of only allocating as much table
>>as we actually have physical memory in the system.
> Yes and no (if I understand correctly).  The lmb array is currently
> limited to 128 entries.  In practice, most of the lmbs are physically
> contiguous so they are coalesced into a much smaller number.  However,
> couldn't there be a worst case scenario where we could only support
> 128 16MB lmbs?  Does anyone see a need to be concerned about this?
> -OR- Is this just too unlikely to occur?  Does the hypervisor try to
> take this type of fragmentation into account?

(This is not directly related to the NUMA stuff, though, since there's
nothing in NUMA that stops the aggregation in the LMB layer. It's just
that the "memory blocks" that NUMA keeps track of need to be small
enough to reflect node IDs at the same granularity that the system
assigns them in.)

I'm guessing the 128-LMB limit can be reached on a system with _very_
fragmented memory, such that the hypervisor can't allocate more than a
few contiguous LMBs in a row. I would be surprised if it happened in
reality, but I guess one could always set up a testcase that will make
it break. :-)


Olof Johansson                                        Office: 4F005/905
Linux on Power Development                            IBM Systems Group
Email: olof at                          Phone: 512-838-9858
All opinions are my own and not those of IBM

** Sent via the linuxppc64-dev mail list. See
