libnuma interleaving oddness
Adam Litke
agl at us.ibm.com
Thu Aug 31 03:44:10 EST 2006
On Wed, 2006-08-30 at 09:19 +0200, Andi Kleen wrote:
> mous pages.
> >
> > The order is (with necessary params filled in):
> >
> > p = mmap( , newsize, RW, PRIVATE, unlinked_hugetlbfs_heap_fd, );
> >
> > numa_interleave_memory(p, newsize);
> >
> > mlock(p, newsize); /* causes all the hugepages to be faulted in */
> >
> > munlock(p,newsize);
> >
> > From what I gathered from the numa manpages, the interleave policy
> > should take effect on the mlock, as that is "fault-time" in this
> > context. We're forcing the fault, that is.
>
> mlock shouldn't be needed at all here. The new hugetlbfs is supposed
> to reserve at mmap time, and numa_interleave_memory() sets a VMA
> policy which should do the right thing no matter when the fault
> occurs.
mmap-time reservation of huge pages is done only for shared mappings.
MAP_PRIVATE mappings have full-overcommit semantics. We use the mlock
call to "guarantee" the MAP_PRIVATE memory to the process. If mlock
fails, we simply unmap the hugetlb region and tell glibc to revert to
its normal allocation method (mmap normal pages).
> Hmm, maybe mlock() policy() is broken.
The policy decision is made further down than mlock. As each huge page
is allocated from the static pool, the policy is consulted to see from
which node to pop a huge page.
The function huge_zonelist() seems to encapsulate the numa policy
logic, and after sniffing the code, it looks right to me.
--
Adam Litke - (agl at us.ibm.com)
IBM Linux Technology Center
More information about the Linuxppc-dev
mailing list