NUMA memory block size
Dave Hansen
haveblue at us.ibm.com
Sat Apr 3 21:07:29 EST 2004
On Fri, 2004-04-02 at 22:50, Olof Johansson wrote:
> 1. Why do we use a full int for node ID?
Because none of the other NUMA architectures have such insanely small
mapping units :)
> It's quite unlikely that we will
> have 2 billion nodes anytime soon. Current limit is 16. :-) Switching to a
> char instead of int might be worth it.
Yep. Probably a good idea.
> 2. A lmb_alloc() approach has the benefit of only allocating as much table
> as we actually have physical memory in the system. At least this way we'd
> only allocate in proportion to how much memory the machine has. 1MB table
> for a 2TB machine isn't too bad. On a 128GB system, size will be the same
> as before (32KB).
How about using the bootmem_alloc() functions instead of the LMB ones?
They're a bit more standard, and everyone else will realize what you're
doing. That isn't too early, is it?
> I'll take it as a later todo to look at a better data structure for this,
> to avoid wasting too much space (but keep lookups fast).
There's not a whole lot else you can do. In theory, each 16MB LMB could
belong to any node. Any tree-based approach will just make the
worst-case lookup cost more cache misses than the single one it takes
now. I like wasting the space and keeping it dirt-simple.
-- Dave
** Sent via the linuxppc64-dev mail list. See http://lists.linuxppc.org/