ppc64: Make hash_preload() and update_mmu_cache() cope with hugepages
David Gibson
david at gibson.dropbear.id.au
Tue Nov 8 14:41:47 EST 2005
Paulus, please apply.
At present, hash_preload() (and hence update_mmu_cache()) will not
work correctly if called on a hugepage address, so it relies on the
fact that the hugepage fault paths never call update_mmu_cache(). I'm
not 100% sure that's safe now (although I think it is), and it
certainly won't be safe for some of the places we want to go with
hugepages.
Thus, this patch extends hash_preload() to work correctly on hugepage
addresses.
Signed-off-by: David Gibson <dwg at au1.ibm.com>
Index: working-2.6/arch/powerpc/mm/hash_utils_64.c
===================================================================
--- working-2.6.orig/arch/powerpc/mm/hash_utils_64.c 2005-11-08 11:11:29.000000000 +1100
+++ working-2.6/arch/powerpc/mm/hash_utils_64.c 2005-11-08 12:14:09.000000000 +1100
@@ -638,6 +638,7 @@
cpumask_t mask;
unsigned long flags;
int local = 0;
+ int huge = in_hugepage_area(mm->context, ea);
/* We don't want huge pages prefaulted for now
*/
@@ -651,9 +652,11 @@
pgdir = mm->pgd;
if (pgdir == NULL)
return;
- ptep = find_linux_pte(pgdir, ea);
- if (!ptep)
- return;
+ if (likely(!huge)) {
+ ptep = find_linux_pte(pgdir, ea);
+ if (!ptep)
+ return;
+ }
vsid = get_vsid(mm->context.id, ea);
/* Hash it in */
@@ -661,6 +664,9 @@
mask = cpumask_of_cpu(smp_processor_id());
if (cpus_equal(mm->cpu_vm_mask, mask))
local = 1;
+ if (unlikely(huge))
+ hash_huge_page(mm, access, ea, vsid, local);
+ else
#ifndef CONFIG_PPC_64K_PAGES
__hash_page_4K(ea, access, vsid, ptep, trap, local);
#else
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson