libnuma interleaving oddness

Nishanth Aravamudan nacc at us.ibm.com
Wed Aug 30 10:21:10 EST 2006


On 29.08.2006 [16:57:35 -0700], Christoph Lameter wrote:
> On Tue, 29 Aug 2006, Nishanth Aravamudan wrote:
> 
> > I don't know if this is a libnuma bug (I extracted out the code from
> > libnuma, it looked sane; and even reimplemented it in libhugetlbfs
> > for testing purposes, but got the same results) or a NUMA kernel bug
> > (mbind is some hairy code...) or a ppc64 bug or maybe not a bug at
> > all.  Regardless, I'm getting somewhat inconsistent behavior. I can
> > provide more debugging output, or whatever is requested, but I
> > wasn't sure what to include. I'm hoping someone has heard of or seen
> > something similar?
> 
> Are you setting the tasks allocation policy before the allocation or
> do you set a vma based policy? The vma based policies will only work
> for anonymous pages.

The order is (with necessary params filled in):

p = mmap( , newsize, RW, PRIVATE, unlinked_hugetlbfs_heap_fd, );

numa_interleave_memory(p, newsize);

mlock(p, newsize); /* causes all the hugepages to be faulted in */

munlock(p, newsize);

From what I gathered from the numa manpages, the interleave policy
should take effect at the mlock(), since that is "fault-time" in this
context: the mlock() is what forces the pages to be faulted in.

Does that answer your question? Sorry if I'm unclear; I'm a bit of a
newbie to the VM.

Thanks,
Nish

-- 
Nishanth Aravamudan <nacc at us.ibm.com>
IBM Linux Technology Center
