[PATCH 3/4] powerpc/64s/hash: Fix preloading of SLB entries

Nicholas Piggin <npiggin@gmail.com>
Sat Sep 29 02:00:57 AEST 2018


slb_setup_new_exec and preload_new_slb_context assumed that if an
address missed the preload cache, it would not be in the SLB and could
be added. This is wrong once the preload cache has started to
overflow, and it can cause SLB multi-hits on user addresses.
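
To illustrate, a minimal userspace sketch of the failure mode (the
struct layout, the tiny SLB_PRELOAD_NR, and all scaffolding here are
assumptions for illustration, not the kernel code): once a FIFO
preload cache evicts an entry, a cache miss no longer proves the
address is absent from the SLB.

#include <stdbool.h>
#include <stdio.h>

#define SLB_PRELOAD_NR 4	/* deliberately tiny to force overflow */

/* Simplified stand-in for the per-thread preload cache. */
struct preload_cache {
	unsigned char nr;	/* valid entries, saturates at SLB_PRELOAD_NR */
	unsigned char tail;	/* oldest entry, evicted on overflow */
	unsigned long esid[SLB_PRELOAD_NR];
};

static bool preload_hit(struct preload_cache *c, unsigned long esid)
{
	for (unsigned char i = 0; i < c->nr; i++) {
		if (c->esid[(c->tail + i) % SLB_PRELOAD_NR] == esid)
			return true;
	}
	return false;
}

static bool preload_add(struct preload_cache *c, unsigned long esid)
{
	if (preload_hit(c, esid))
		return false;
	c->esid[(c->tail + c->nr) % SLB_PRELOAD_NR] = esid;
	if (c->nr < SLB_PRELOAD_NR)
		c->nr++;
	else	/* overflow: the oldest ESID falls out of the cache */
		c->tail = (c->tail + 1) % SLB_PRELOAD_NR;
	return true;
}

int main(void)
{
	struct preload_cache c = { 0 };

	/* Fill the cache, then add one more entry to overflow it. */
	for (unsigned long esid = 1; esid <= SLB_PRELOAD_NR + 1; esid++)
		preload_add(&c, esid);

	/*
	 * ESID 1 was evicted from the cache, but nothing invalidated
	 * its SLB entry. Treating this miss as "not in the SLB" and
	 * inserting again is what creates an SLB multi-hit.
	 */
	printf("esid 1 cached: %d (its SLB entry may still exist)\n",
	       preload_hit(&c, 1));
	return 0;
}

The fix below therefore skips the preloads entirely unless the cache
provably has room for every entry about to be added.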

That assumption came from an earlier version of the patch, which
cleared the preload cache when copying the task. Even that was
technically wrong, because some user accesses occur before these
preloads run, and the preloads themselves can overflow the cache
depending on its size.

Fixes: 89ca4e126a3f ("powerpc/64s/hash: Add a SLB preload cache")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/mm/slb.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index b438220c4336..c1425853af5d 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -311,6 +311,13 @@ void slb_setup_new_exec(void)
 	struct mm_struct *mm = current->mm;
 	unsigned long exec = 0x10000000;
 
+	/*
+	 * The preload cache can only be used to determine whether an
+	 * SLB entry exists if it has not started to overflow.
+	 */
+	if (ti->slb_preload_nr + 2 > SLB_PRELOAD_NR)
+		return;
+
 	/*
 	 * We have no good place to clear the slb preload cache on exec,
 	 * flush_thread is about the earliest arch hook but that happens
@@ -345,6 +352,10 @@ void preload_new_slb_context(unsigned long start, unsigned long sp)
 	struct mm_struct *mm = current->mm;
 	unsigned long heap = mm->start_brk;
 
+	/* see the comment in slb_setup_new_exec() above */
+	if (ti->slb_preload_nr + 3 > SLB_PRELOAD_NR)
+		return;
+
 	/* Userspace entry address. */
 	if (!is_kernel_addr(start)) {
 		if (preload_add(ti, start))
-- 
2.18.0


