[Linuxppc-users] memmap kernel option on ubuntu 16.04 power8

Calvin Sze calvins at us.ibm.com
Sat Sep 30 08:27:45 AEST 2017


"Linuxppc-users" <linuxppc-users-bounces+calvins=us.ibm.com at lists.ozlabs.org> wrote on 09/27/2017 08:56:33 PM:

> From: Michael Ellerman <mpe at ellerman.id.au>
> To: Brian Horton <brianh at linux.vnet.ibm.com>, linuxppc-users at lists.ozlabs.org
> Date: 09/27/2017 08:59 PM
> Subject: Re: [Linuxppc-users] memmap kernel option on ubuntu 16.04 power8
> Sent by: "Linuxppc-users" <linuxppc-users-bounces+calvins=us.ibm.com at lists.ozlabs.org>
>
> Brian Horton <brianh at linux.vnet.ibm.com> writes:
> > On 09/27/2017 05:45 AM, Michael Ellerman wrote:
> ...
> >> What hardware/platform are you trying to do this on? Are you familiar
> >> with kexec?
> >
> > P8, Tuleta and Firestone, baremetal, ppc64le w/ Ubuntu 16.04 HWE kernel.
> >
> > No, I haven't looked into anything with kexec - what are my options with
> > that? I would prefer to stay away from building my own kernel code -
> > I'll probably just live with the testing that I'm doing if that's my
> > only option. I was just trying to be a little more 'realistic'.
>
> Using kexec you can modify the device tree before booting the 2nd kernel.
>
> So you could remove memory from the device tree to simulate the setup
> you're looking for, and then as far as the 2nd kernel is concerned that
> memory doesn't exist.
>
> On my Tuleta here, the normal config is 4 nodes with 32GB per node:
>
>   # numactl -H
>   available: 4 nodes (0-1,16-17)
>   ...
>   node 0 size: 32599 MB
>   ...
>   node 1 size: 32713 MB
>   ...
>   node 16 size: 32714 MB
>   ...
>   node 17 size: 32569 MB
>
>
> If I reboot and drop to the petitboot shell, then I can do the following:
>
> $ dtc -O dts -I fs -o devtree.dts /proc/device-tree
>
> $ vi devtree.dts ...
>
> Look for all the nodes with device_type = "memory", eg:
>
>         memory at 0 {
>                 device_type = "memory";
>                 ibm,associativity = <0x4 0x0 0x0 0x0 0x0>;
>                 ibm,chip-id = <0x0>;
>                 phandle = <0x86>;
>                 reg = <0x0 0x0 0x8 0x0>;
>         };
>
>         memory at 1800000000 {
>                 device_type = "memory";
>                 ibm,associativity = <0x4 0x0 0x0 0x1 0x11>;
>                 ibm,chip-id = <0x11>;
>                 phandle = <0x89>;
>                 reg = <0x18 0x0 0x8 0x0>;
>         };
>
>         memory at 800000000 {
>                 device_type = "memory";
>                 ibm,associativity = <0x4 0x0 0x0 0x0 0x1>;
>                 ibm,chip-id = <0x1>;
>                 phandle = <0x87>;
>                 reg = <0x8 0x0 0x8 0x0>;
>         };
>
>         memory at 1000000000 {
>                 device_type = "memory";
>                 ibm,associativity = <0x4 0x0 0x0 0x1 0x10>;
>                 ibm,chip-id = <0x10>;
>                 phandle = <0x88>;
>                 reg = <0x10 0x0 0x8 0x0>;
>         };
>
> For each one, edit the reg so that the size (the second pair of cells) is
> 0x4 0x0, meaning 0x400000000 bytes, i.e. 16GB, eg:
>
>   reg = <0x10 0x0 0x8 0x0>;
>
> Becomes:
>
>   reg = <0x10 0x0 0x4 0x0>;
>
> And so on for each memory node.
>
> Then:
>
>   $ dtc -I dts -O dtb -o devtree.dtb devtree.dts
>   $ kexec -l -b devtree.dtb -c "loglevel=8 nosplash root=PARTUUID=9ee9301a-50b3-4cae-8cf7-d46fff4ea920" vmlinux
>   $ kexec -e
>
>
> Then once it boots:
>
>   $ numactl -H
>   available: 4 nodes (0-1,16-17)
>   node 0 size: 16244 MB
>   ...
>   node 1 size: 16345 MB
>   ...
>   node 16 size: 16346 MB
>   ...
>   node 17 size: 16273 MB
>
>
> cheers

Thank you Michael, I was able to do this on a Briggs machine following your
instructions; this is one way to handle it.
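For anyone repeating this, the memory nodes in the decompiled source can be
listed quickly before editing. A minimal sketch, assuming the devtree.dts
produced by dtc as above, with reg within a few lines of device_type as in
Michael's example nodes:

```shell
# Show the reg property of every device_type = "memory" node in the
# decompiled device tree source, so you can see which sizes to halve.
grep -A 4 'device_type = "memory"' devtree.dts | grep 'reg ='
```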


Before

numactl -H
available: 2 nodes (0,8)
...
node 0 size: 131072 MB
...
node 8 size: 131072 MB


After

numactl -H
available: 2 nodes (0,8)
node 0 size: 65536 MB
...
node 8 size: 65536 MB
...
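The per-node reg edit can also be scripted rather than done by hand in vi.
A minimal sketch, assuming every node is 32GB (a size of 0x8 0x0) as in
Michael's example; the pattern and sizes are placeholders to adapt to your
machine:

```shell
# reg = <addr-hi addr-lo size-hi size-lo>; halve a 32GB region
# (size 0x8 0x0) to 16GB (0x4 0x0), leaving the address untouched.
# Shown on a sample line; run the same sed with -i on devtree.dts.
printf 'reg = <0x10 0x0 0x8 0x0>;\n' |
  sed 's/reg = <\(0x[0-9a-f]* 0x[0-9a-f]*\) 0x8 0x0>/reg = <\1 0x4 0x0>/'
# prints: reg = <0x10 0x0 0x4 0x0>;
```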




> _______________________________________________
> Linuxppc-users mailing list
> Linuxppc-users at lists.ozlabs.org
> https://lists.ozlabs.org/listinfo/linuxppc-users