[PATCH] fix NUMA interleaving for huge pages (was RE: libnuma interleaving oddness)
Nishanth Aravamudan
nacc at us.ibm.com
Fri Sep 1 02:00:52 EST 2006
On 30.08.2006 [23:00:36 -0700], Nishanth Aravamudan wrote:
> On 30.08.2006 [14:04:40 -0700], Christoph Lameter wrote:
> > > I took out the mlock() call, and I get the same results, FWIW.
> >
> > What zones are available on your box? Any with HIGHMEM?
>
> How do I tell the available zones from userspace? This is ppc64 with
> about 64GB of memory total, it looks like. So, none of the nodes
> (according to /sys/devices/system/node/*/meminfo) have highmem.
>
> > Also what kernel version are we talking about? Before 2.6.18?
>
> The SuSE default, 2.6.16.21 -- I thought I mentioned that in one of my
> replies, sorry.
>
> Tim and I spent most of this afternoon debugging the huge_zonelist()
> callpath with kprobes and jprobes. We found the following via a jprobe
> to offset_il_node():
<snip lengthy previous discussion>
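(For reference, a minimal jprobe module for that debugging step would look
roughly like the below. This is only a sketch against the 2.6.16-era jprobes
API, not the exact module we loaded; since offset_il_node() is static, its
address has to be supplied from System.map or /proc/kallsyms via the
hypothetical "target" parameter.)

/*
 * Sketch of a jprobe on offset_il_node() (illustration only).
 * Pass the address of offset_il_node() as the "target" parameter.
 */
#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/mempolicy.h>
#include <linux/mm.h>

static unsigned long target;
module_param(target, ulong, 0);
MODULE_PARM_DESC(target, "address of offset_il_node()");

/* handler must match offset_il_node()'s signature exactly */
static unsigned jp_offset_il_node(struct mempolicy *pol,
		struct vm_area_struct *vma, unsigned long off)
{
	printk(KERN_INFO "offset_il_node: vm_pgoff=%lu off=%lu\n",
			vma ? vma->vm_pgoff : 0, off);
	jprobe_return();
	return 0;	/* never reached */
}

static struct jprobe jp = {
	.entry = JPROBE_ENTRY(jp_offset_il_node),
};

static int __init jp_init(void)
{
	jp.kp.addr = (kprobe_opcode_t *)target;
	return register_jprobe(&jp);
}

static void __exit jp_exit(void)
{
	unregister_jprobe(&jp);
}

module_init(jp_init);
module_exit(jp_exit);
MODULE_LICENSE("GPL");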
Since vma->vm_pgoff is in units of small pages, VMAs for huge pages have
the lower HPAGE_SHIFT - PAGE_SHIFT bits always cleared, which results in
bad offsets being fed to the interleave functions. Take this difference
from small pages into account when calculating the offset. This does add
a 0-bit shift into the small-page path (via alloc_page_vma()), but I
think that is negligible. Also add a BUG_ON to prevent the offset from
growing due to a negative right-shift, which probably shouldn't be
allowed anyway.
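To make the bad-offset arithmetic concrete, here is a small userspace
illustration (not part of the patch or the changelog): it assumes 4K base
pages and this box's 16MB huge pages, models offset_il_node() as a plain
modulo over an 8-node interleave mask, and looks only at the
one-huge-page-per-VMA case, where the (addr - vma->vm_start) >> shift term
is zero.

/*
 * Illustration only.  Assumes PAGE_SHIFT = 12 and HPAGE_SHIFT = 24
 * (16MB huge pages), and models offset_il_node() as off % nr_nodes.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define HPAGE_SHIFT	24
#define NR_NODES	8

int main(void)
{
	/* vm_pgoff counts small pages, so consecutive huge pages differ
	 * by 1 << (HPAGE_SHIFT - PAGE_SHIFT) = 4096 */
	unsigned long hpage_in_spages = 1UL << (HPAGE_SHIFT - PAGE_SHIFT);
	unsigned long pgoff;

	for (pgoff = 0; pgoff < 4 * hpage_in_spages; pgoff += hpage_in_spages) {
		unsigned long off_old = pgoff;	/* before the patch */
		unsigned long off_new = pgoff >> (HPAGE_SHIFT - PAGE_SHIFT);

		printf("vm_pgoff %6lu: old node %lu, patched node %lu\n",
		       pgoff, off_old % NR_NODES, off_new % NR_NODES);
	}
	return 0;
}

Every unpatched offset is a multiple of 4096, so each mapping computes the
same interleave node; the shifted offsets walk nodes 0, 1, 2, 3 as intended.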
Tested on an 8-memory node ppc64 NUMA box and got the interleaving I
expected.
Signed-off-by: Nishanth Aravamudan <nacc at us.ibm.com>
---
Results with this patch applied (I don't think this part needs to go into
the changelog):
For the 4-hugepages-at-a-time case:
20000000 interleave=0-7 file=/hugetlbfs/libhugetlbfs.tmp.r1YKfL huge dirty=4 N0=1 N1=1 N2=1 N3=1
24000000 interleave=0-7 file=/hugetlbfs/libhugetlbfs.tmp.r1YKfL huge dirty=4 N4=1 N5=1 N6=1 N7=1
28000000 interleave=0-7 file=/hugetlbfs/libhugetlbfs.tmp.r1YKfL huge dirty=4 N0=1 N1=1 N2=1 N3=1
For the 1-hugepage-at-a-time case:
20000000 interleave=0-7 file=/hugetlbfs/libhugetlbfs.tmp.LeSnPN huge dirty=1 N0=1
21000000 interleave=0-7 file=/hugetlbfs/libhugetlbfs.tmp.LeSnPN huge dirty=1 N1=1
22000000 interleave=0-7 file=/hugetlbfs/libhugetlbfs.tmp.LeSnPN huge dirty=1 N2=1
23000000 interleave=0-7 file=/hugetlbfs/libhugetlbfs.tmp.LeSnPN huge dirty=1 N3=1
24000000 interleave=0-7 file=/hugetlbfs/libhugetlbfs.tmp.LeSnPN huge dirty=1 N4=1
25000000 interleave=0-7 file=/hugetlbfs/libhugetlbfs.tmp.LeSnPN huge dirty=1 N5=1
26000000 interleave=0-7 file=/hugetlbfs/libhugetlbfs.tmp.LeSnPN huge dirty=1 N6=1
27000000 interleave=0-7 file=/hugetlbfs/libhugetlbfs.tmp.LeSnPN huge dirty=1 N7=1
28000000 interleave=0-7 file=/hugetlbfs/libhugetlbfs.tmp.LeSnPN huge dirty=1 N0=1
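(The kind of test that produces the 1-hugepage-at-a-time output above can be
approximated with something like the following sketch. The actual runs went
through libhugetlbfs; the mount point, file name, 16MB huge page size and
0-7 node mask below are assumptions matching the output, and mbind() stands
in for however the interleave policy was actually applied. Build with -lnuma
and read /proc/<pid>/numa_maps while the program sleeps in pause().)

/*
 * Rough approximation of the 1-hugepage-at-a-time test (illustration
 * only).  Each huge page gets its own mapping, so each VMA has a
 * different vm_pgoff and should land on a different node once the
 * interleave offset is computed correctly.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <numaif.h>		/* mbind(), MPOL_INTERLEAVE */

#define HPAGE_SIZE	(16UL * 1024 * 1024)	/* 16MB huge pages on this box */
#define NR_HPAGES	8

int main(void)
{
	unsigned long nodemask = 0xff;		/* nodes 0-7 */
	int fd = open("/hugetlbfs/interleave-test", O_CREAT | O_RDWR, 0600);
	int i;

	if (fd < 0) {
		perror("open");
		return 1;
	}

	for (i = 0; i < NR_HPAGES; i++) {
		char *p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, (off_t)i * HPAGE_SIZE);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		if (mbind(p, HPAGE_SIZE, MPOL_INTERLEAVE, &nodemask,
			  sizeof(nodemask) * 8, 0))
			perror("mbind");
		p[0] = 1;	/* fault in this huge page */
	}

	printf("pid %d: cat /proc/%d/numa_maps\n", getpid(), getpid());
	pause();
	return 0;
}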
Andrew, can we get this into 2.6.18?
diff -urpN 2.6.18-rc5/mm/mempolicy.c 2.6.18-rc5-dev/mm/mempolicy.c
--- 2.6.18-rc5/mm/mempolicy.c 2006-08-30 22:55:33.000000000 -0700
+++ 2.6.18-rc5-dev/mm/mempolicy.c 2006-08-31 08:46:22.000000000 -0700
@@ -1176,7 +1176,15 @@ static inline unsigned interleave_nid(st
 	if (vma) {
 		unsigned long off;
 
-		off = vma->vm_pgoff;
+		/*
+		 * for small pages, there is no difference between
+		 * shift and PAGE_SHIFT, so the bit-shift is safe.
+		 * for huge pages, since vm_pgoff is in units of small
+		 * pages, we need to shift off the always 0 bits to get
+		 * a useful offset.
+		 */
+		BUG_ON(shift < PAGE_SHIFT);
+		off = vma->vm_pgoff >> (shift - PAGE_SHIFT);
 		off += (addr - vma->vm_start) >> shift;
 		return offset_il_node(pol, vma, off);
 	} else
--
Nishanth Aravamudan <nacc at us.ibm.com>
IBM Linux Technology Center