[RFC PATCH 2/3] topology: support node_numa_mem() for determining the fallback node
Nishanth Aravamudan
nacc at linux.vnet.ibm.com
Wed Jul 23 09:47:26 EST 2014
On 22.07.2014 [14:43:11 -0700], Nishanth Aravamudan wrote:
> Hi David,
<snip>
> on powerpc now, things look really good. On a KVM instance with the
> following topology:
>
> available: 2 nodes (0-1)
> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
> node 0 size: 0 MB
> node 0 free: 0 MB
> node 1 cpus: 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
> node 1 size: 16336 MB
> node 1 free: 14274 MB
> node distances:
> node   0   1
>   0:  10  40
>   1:  40  10
>
> 3.16.0-rc6 gives:
>
> Slab: 1039744 kB
> SReclaimable: 38976 kB
> SUnreclaim: 1000768 kB
<snip>
> Adding my patch on top of Joonsoo's and the revert, I get:
>
> Slab: 411776 kB
> SReclaimable: 40960 kB
> SUnreclaim: 370816 kB
>
> So CONFIG_SLUB still uses about 3x as much slab memory, but it's not so
> much that we are close to OOM with small VM/LPAR sizes.
Just to clarify/add one more datapoint, with a balanced topology:
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
node 0 size: 8154 MB
node 0 free: 8075 MB
node 1 cpus: 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
node 1 size: 8181 MB
node 1 free: 7776 MB
node distances:
node   0   1
  0:  10  40
  1:  40  10
I see the following for my patch + Joonsoo's + the revert:
Slab: 495872 kB
SReclaimable: 46528 kB
SUnreclaim: 449344 kB
These numbers fluctuate quite a bit, between roughly 250M and 500M, but they indicate that slab consumption with a memoryless node is now on par with a fully populated topology. Both configurations still consume more than CONFIG_SLAB requires.
Thanks,
Nish