[PATCH v3] powerpc/numa: set node_possible_map to only node_online_map during boot
Nishanth Aravamudan
nacc at linux.vnet.ibm.com
Wed Mar 11 10:50:59 AEDT 2015
On 10.03.2015 [10:55:05 +1100], Michael Ellerman wrote:
> On Thu, 2015-03-05 at 21:27 -0800, Nishanth Aravamudan wrote:
> > diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
> > index 0257a7d659ef..0c1716cd271f 100644
> > --- a/arch/powerpc/mm/numa.c
> > +++ b/arch/powerpc/mm/numa.c
> > @@ -958,6 +958,13 @@ void __init initmem_init(void)
> >
> >  	memblock_dump_all();
> >
> > +	/*
> > +	 * Reduce the possible NUMA nodes to the online NUMA nodes,
> > +	 * since we do not support node hotplug. This ensures that we
> > +	 * lower the maximum NUMA node ID to what is actually present.
> > +	 */
> > +	node_possible_map = node_online_map;
>
> That looks nice, but is it generating what we want?
>
> ie. is the content of node_online_map being *copied* into node_possible_map.
>
> Or are we changing node_possible_map to point at node_online_map?
I think it ends up being the latter, which is probably fine in practice
(I think node_online_map is static on power after boot), but perhaps it
would be better to do:
nodes_and(node_possible_map, node_possible_map, node_online_map);
?
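(For reference, a quick standalone C sketch of the semantics I'm
relying on. The types and helper below only loosely mirror
include/linux/nodemask.h, so treat them as illustrative rather than as
the kernel implementation; the real nodes_and() is a macro that takes
the masks themselves, not pointers.)

/*
 * Userspace sketch only: MAX_NUMNODES, nodemask_t and the AND helper
 * are simplified stand-ins for the definitions in
 * include/linux/nodemask.h.
 */
#include <stdio.h>
#include <string.h>

#define MAX_NUMNODES	256
#define BITS_PER_LONG	(8 * sizeof(unsigned long))
#define MASK_LONGS	((MAX_NUMNODES + BITS_PER_LONG - 1) / BITS_PER_LONG)

typedef struct { unsigned long bits[MASK_LONGS]; } nodemask_t;

/* Bitwise AND over the whole mask; dst may alias src1, as in the patch. */
static void sketch_nodes_and(nodemask_t *dst, const nodemask_t *src1,
			     const nodemask_t *src2)
{
	for (size_t i = 0; i < MASK_LONGS; i++)
		dst->bits[i] = src1->bits[i] & src2->bits[i];
}

int main(void)
{
	nodemask_t possible, online;

	memset(&possible, 0xff, sizeof(possible));	/* all 256 nodes possible */
	memset(&online, 0, sizeof(online));
	online.bits[0] = 0x3;				/* only nodes 0-1 online */

	/* Narrow "possible" in place; "online" is never modified. */
	sketch_nodes_and(&possible, &possible, &online);

	printf("possible.bits[0] = %#lx\n", possible.bits[0]);	/* prints 0x3 */
	return 0;
}

Either way, the point is that the AND narrows node_possible_map in
place without touching node_online_map.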
e.g.:
powerpc/numa: reset node_possible_map to only node_online_map
Raghu noticed excessive memory allocation on power with a simple cgroup
test: the path mem_cgroup_css_alloc -> for_each_node ->
alloc_mem_cgroup_per_zone_info() ends up bloating the kmalloc-2048
slab, on the order of 200MB for 400 cgroup directories.
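(Rough arithmetic, assuming each per-node allocation is served from the
kmalloc-2048 slab: 400 cgroup directories * 256 possible nodes * 2048
bytes ~= 200MB.)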
The underlying issue is that NODES_SHIFT on power is 8, so MAX_NUMNODES
is 256 and, by default, all 256 nodes are marked in node_possible_map.
That map in turn determines the value of nr_node_ids computed in
setup_nr_node_ids() and the set of nodes walked by for_each_node.
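(For reference, the chain of definitions, paraphrased rather than
quoted verbatim from include/linux/numa.h, include/linux/nodemask.h and
mm/page_alloc.c:)

/* include/linux/numa.h (paraphrased) */
#define NODES_SHIFT	CONFIG_NODES_SHIFT	/* 8 on power */
#define MAX_NUMNODES	(1 << NODES_SHIFT)	/* 256 */

/* include/linux/nodemask.h (paraphrased): for_each_node() walks the
 * possible map, not the online map. */
#define for_each_node(node)	for_each_node_state(node, N_POSSIBLE)

/* mm/page_alloc.c (paraphrased): setup_nr_node_ids() sets nr_node_ids
 * to one more than the highest bit set in node_possible_map, so with
 * all 256 bits set it stays at 256 no matter how few nodes are online. */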
In practice, we never see a system with 256 NUMA nodes, and node
hotplug is not supported on power in the first place, so the nodes that
are online when we come up are the nodes that will be present for the
lifetime of this kernel. So, at the least, trim the NUMA possible map
down to the online map at runtime. This is similar to what x86 does in
its NUMA initialization.
mem_cgroup_css_alloc should also be fixed to only iterate over
memory-populated nodes and handle hotplug, but that is a separate
change.
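(Concretely, the direction would presumably be something like the
fragment below. for_each_node_state() and N_MEMORY are real kernel
symbols, but the loop body and error label are a paraphrase of the
per-node setup in mem_cgroup_css_alloc, untested and only a sketch.)

	/*
	 * Sketch: only iterate nodes that actually have memory; real node
	 * memory hotplug support would additionally need a notifier to
	 * populate newly added nodes.
	 */
	for_each_node_state(node, N_MEMORY)
		if (alloc_mem_cgroup_per_zone_info(memcg, node))
			goto fail;	/* error label is illustrative */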
Signed-off-by: Nishanth Aravamudan <nacc at linux.vnet.ibm.com>
To: Michael Ellerman <mpe at ellerman.id.au>
Cc: linuxppc-dev at lists.ozlabs.org
Cc: Tejun Heo <tj at kernel.org>
Cc: David Rientjes <rientjes at google.com>
Cc: Benjamin Herrenschmidt <benh at kernel.crashing.org>
Cc: Paul Mackerras <paulus at samba.org>
Cc: Anton Blanchard <anton at samba.org>
Cc: Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com>
---
v1 -> v2:
Rather than clear node_possible_map and set it nid-by-nid, just
directly assign node_online_map to it, as suggested by Michael
Ellerman and Tejun Heo.
v2 -> v3:
Rather than direct assignment (which is just repointing the pointer),
modify node_possible_map in-place.
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 0257a7d659ef..1a118b08fad2 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -958,6 +958,13 @@ void __init initmem_init(void)
 
 	memblock_dump_all();
 
+	/*
+	 * Reduce the possible NUMA nodes to the online NUMA nodes,
+	 * since we do not support node hotplug. This ensures that we
+	 * lower the maximum NUMA node ID to what is actually present.
+	 */
+	nodes_and(node_possible_map, node_possible_map, node_online_map);
+
 	for_each_online_node(nid) {
 		unsigned long start_pfn, end_pfn;
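With this applied, the shrunken possible map should be visible from
sysfs. For example, on a two-node system I would expect something along
these lines (output illustrative):

$ cat /sys/devices/system/node/possible
0-1
$ cat /sys/devices/system/node/online
0-1

whereas before the patch "possible" would read 0-255 on a NODES_SHIFT=8
kernel.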