[PATCH] ppc64: Fix pages marked dirty abusively

Benjamin Herrenschmidt benh at kernel.crashing.org
Fri Oct 21 14:12:51 EST 2005


While working on 64K pages, I found this little buglet in our
update_mmu_cache() implementation. This code calls __hash_page(),
passing it an "access" parameter (the type of access that triggers the
hash fault) containing the _PAGE_RW and _PAGE_USER bits of the Linux
PTE. The latter is useless in this case and the former is wrong: if we
have a writable PTE and we pass _PAGE_RW to __hash_page(), it will set
_PAGE_DIRTY (since that is how we track dirtiness, by hash-faulting on
!dirty pages), which is not what we want.
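
To make the mechanism concrete, here is a minimal user-space sketch of
that lazy dirty tracking. The bit values, the pte_t typedef and the
sketch_hash_page() helper are all made up for illustration; this is
not the real ppc64 code:

#include <stdio.h>

/* Illustrative bit values only; they don't match the real headers. */
#define _PAGE_PRESENT 0x001UL
#define _PAGE_USER    0x002UL
#define _PAGE_RW      0x004UL
#define _PAGE_DIRTY   0x080UL

typedef unsigned long pte_t;

/* Sketch of the hash fault path: "access" is what the faulting access
 * needs.  A write access (_PAGE_RW set) through a writable PTE is the
 * moment the page becomes dirty, so blindly passing the PTE's own
 * _PAGE_RW bit as "access" dirties the page on a mere preload.
 */
static int sketch_hash_page(pte_t *ptep, unsigned long access)
{
	/* Refuse if the access needs rights the PTE doesn't grant. */
	if (access & ~*ptep)
		return 1;

	/* Lazy dirty tracking: a write access dirties the PTE here. */
	if (access & _PAGE_RW)
		*ptep |= _PAGE_DIRTY;

	/* A real implementation would now insert a hash PTE whose
	 * write permission reflects _PAGE_DIRTY, so the first write
	 * to a clean page faults back into this path. */
	return 0;
}

int main(void)
{
	pte_t pte = _PAGE_PRESENT | _PAGE_USER | _PAGE_RW;

	/* Buggy preload: passing the PTE's RW bit marks it dirty. */
	sketch_hash_page(&pte, pte & (_PAGE_USER | _PAGE_RW));
	printf("after buggy preload: dirty=%lu\n", pte & _PAGE_DIRTY);
	return 0;
}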

In fact, the correct fix is to always pass 0. That means that only
read-only or already-dirty read-write PTEs will be preloaded. The
(hopefully rare) case of a clean read-write PTE can't be preloaded
this way; it will have to fault into hash_page() on the actual access.
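
Reusing the sketch above, the fixed call preloads without dirtying
anything; only an actual write access sets _PAGE_DIRTY (again,
illustrative code only, not the real kernel API):

	pte_t pte = _PAGE_PRESENT | _PAGE_USER | _PAGE_RW;

	sketch_hash_page(&pte, 0);         /* preload: stays clean */
	sketch_hash_page(&pte, _PAGE_RW);  /* real write: now dirty */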

Signed-off-by: Benjamin Herrenschmidt <benh at kernel.crashing.org>

Index: linux-work/arch/ppc64/mm/init.c
===================================================================
--- linux-work.orig/arch/ppc64/mm/init.c	2005-09-23 12:43:22.000000000 +1000
+++ linux-work/arch/ppc64/mm/init.c	2005-10-21 14:07:51.000000000 +1000
@@ -799,8 +799,7 @@
 	if (cpus_equal(vma->vm_mm->cpu_vm_mask, tmp))
 		local = 1;
 
-	__hash_page(ea, pte_val(pte) & (_PAGE_USER|_PAGE_RW), vsid, ptep,
-		    0x300, local);
+	__hash_page(ea, 0, vsid, ptep, 0x300, local);
 	local_irq_restore(flags);
 }
 
