SLB bolting for iSeries

David Gibson david at gibson.dropbear.id.au
Wed Aug 18 11:25:29 EST 2004


Unfortunately, I don't have easy access to an SLB (i.e. Power4)
iSeries machine to test the patch below.  From discussions with the
iSeries architecture people, I believe it should work, but...

Can someone with access to a suitable machine please test this?

On pSeries SLB machines we "bolt" an SLB entry for the first segment
of the vmalloc() area into the SLB, to reduce the SLB miss rate.  This
was disabled on iSeries because it caused problems there: the bolted
entry was not restored properly on a shared processor switch.  This
patch adds the bolted vmalloc segment to the lpar map, so the entry
should be restored on a shared processor switch.

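For reference, the VMALLOCBASE VSID hard-coded in the head.S hunk below
(0x08d12e6ab, alongside the existing KERNELBASE value 0x6a99b4b14) can
be cross-checked with a small userspace sketch along these lines.  It
assumes the VSID generation algorithm referred to by the comment in
head.S (mmu_context.h in this tree): multiply the segment ordinal by
42470972311 and keep the low 36 bits, where for the first segment of a
kernel region the ordinal is just the region id, ea >> 60.  The
first_segment_vsid() helper is only a name used in this sketch, not a
kernel function.

#include <stdio.h>
#include <stdint.h>

#define VSID_RANDOMIZER	42470972311ULL	/* 0x9E3779B97 */
#define VSID_MASK	0xfffffffffULL	/* VSIDs are 36 bits */

/* VSID for the first (256MB) segment of a kernel region.  For
 * EA = 0xX000000000000000 the low bits of the ESID are zero, so the
 * scramble input reduces to the region id (ea >> 60). */
static uint64_t first_segment_vsid(uint64_t ea)
{
	return ((ea >> 60) * VSID_RANDOMIZER) & VSID_MASK;
}

int main(void)
{
	/* Expect 6a99b4b14 and 08d12e6ab, matching the lpar map below */
	printf("KERNELBASE  VSID: 0x%09llx\n",
	       (unsigned long long)first_segment_vsid(0xC000000000000000ULL));
	printf("VMALLOCBASE VSID: 0x%09llx\n",
	       (unsigned long long)first_segment_vsid(0xD000000000000000ULL));
	return 0;
}
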
Index: working-2.6/arch/ppc64/mm/slb.c
===================================================================
--- working-2.6.orig/arch/ppc64/mm/slb.c	2004-08-09 09:51:38.000000000 +1000
+++ working-2.6/arch/ppc64/mm/slb.c	2004-08-18 11:09:00.027529840 +1000
@@ -36,7 +36,6 @@

 static void slb_add_bolted(void)
 {
-#ifndef CONFIG_PPC_ISERIES
 	WARN_ON(!irqs_disabled());

 	/* If you change this make sure you change SLB_NUM_BOLTED
@@ -49,7 +48,6 @@
 		    SLB_VSID_KERNEL, 1);

 	asm volatile("isync":::"memory");
-#endif
 }

 /* Flush all user entries from the segment table of the current processor. */
Index: working-2.6/arch/ppc64/kernel/head.S
===================================================================
--- working-2.6.orig/arch/ppc64/kernel/head.S	2004-08-09 09:51:38.000000000 +1000
+++ working-2.6/arch/ppc64/kernel/head.S	2004-08-18 11:09:00.029529536 +1000
@@ -580,7 +580,7 @@
 	 * VSID generation algorithm.  See include/asm/mmu_context.h.
 	 */

-	.llong	1		/* # ESIDs to be mapped by hypervisor	 */
+	.llong	2		/* # ESIDs to be mapped by hypervisor	 */
 	.llong	1		/* # memory ranges to be mapped by hypervisor */
 	.llong	STAB0_PAGE	/* Page # of segment table within load area	*/
 	.llong	0		/* Reserved */
@@ -588,8 +588,12 @@
 	.llong	0		/* Reserved */
 	.llong	0		/* Reserved */
 	.llong	0		/* Reserved */
-	.llong	0x0c00000000	/* ESID to map (Kernel at EA = 0xC000000000000000) */
-	.llong	0x06a99b4b14	/* VSID to map (Kernel at VA = 0x6a99b4b140000000) */
+	.llong	0xc00000000	/* KERNELBASE ESID */
+	.llong	0x6a99b4b14	/* KERNELBASE VSID */
+	/* We have to list the bolted VMALLOC segment here, too, so that it
+	 * will be restored on shared processor switch */
+	.llong	0xd00000000	/* VMALLOCBASE ESID */
+	.llong	0x08d12e6ab	/* VMALLOCBASE VSID */
 	.llong	8192		/* # pages to map (32 MB) */
 	.llong	0		/* Offset from start of loadarea to start of map */
 	.llong	0x0006a99b4b140000	/* VPN of first page to map */

--
David Gibson			| For every complex problem there is a
david AT gibson.dropbear.id.au	| solution which is simple, neat and
				| wrong.
http://www.ozlabs.org/people/dgibson
