[PATCH 3/6] powerpc/mm: Ensure cpumask update is ordered

Benjamin Herrenschmidt <benh@kernel.crashing.org>
Mon Jul 24 14:28:00 AEST 2017


There is no guarantee that the various isync instructions involved
in the context switch will order the update of the CPU mask with
the first TLB entry for the new context being loaded by the HW.

Be safe here and add a memory barrier to order the cpumask update
before any subsequent load or store which may bring entries into
the TLB.
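
Concretely, the race being closed looks like this (a schematic
interleaving; the CPU1 column paraphrases a PTE invalidation path
rather than quoting code from the tree):

	CPU0 (switch_mm_irqs_off)	CPU1 (PTE invalidation)

	cpumask_set_cpu(cpu0,
			mm_cpumask(next));
	  <store not yet visible>
					pte_xchg(ptep, old, 0);
					reads mm_cpumask(next),
					misses cpu0's bit, sends
					no TLB invalidation to it
	<HW loads the old translation
	 into CPU0's TLB>

	=> CPU0 keeps a stale TLB entry for a PTE that was torn down.

The smp_mb() added below forces CPU0's cpumask update to be visible
before any access that can populate the TLB for the new context.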

The corresponding barrier on the other side already exists: PTE
updates go through pte_xchg(), which is implemented with
__cmpxchg_u64(), which executes a sync after the atomic operation.
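
For reference, that path has roughly the following shape (a
simplified sketch; the real code in
arch/powerpc/include/asm/cmpxchg.h uses the
PPC_ATOMIC_ENTRY_BARRIER/PPC_ATOMIC_EXIT_BARRIER macros, spelled
out here as the sync they expand to on 64-bit SMP):

	static inline unsigned long
	__cmpxchg_u64(volatile unsigned long *p, unsigned long old,
		      unsigned long new)
	{
		unsigned long prev;

		__asm__ __volatile__ (
	"	sync\n"			/* entry barrier */
	"1:	ldarx	%0,0,%2\n"	/* load-reserve the old value */
	"	cmpd	0,%0,%3\n"
	"	bne-	2f\n"
	"	stdcx.	%4,0,%2\n"	/* store-conditional the new value */
	"	bne-	1b\n"
	"	sync\n"			/* exit barrier: the sync
					   referred to above */
	"2:"
		: "=&r" (prev), "+m" (*p)
		: "r" (p), "r" (old), "r" (new)
		: "cc", "memory");

		return prev;
	}

That trailing sync orders the PTE store against the subsequent
mm_cpumask() read on the invalidation side, which is what pairs
with the smp_mb() added by this patch.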

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
 arch/powerpc/include/asm/mmu_context.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index ed9a36ee3107..ff1aeb2cd19f 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -110,6 +110,7 @@ static inline void switch_mm_irqs_off(struct mm_struct *prev,
 	/* Mark this context has been used on the new CPU */
 	if (!cpumask_test_cpu(smp_processor_id(), mm_cpumask(next))) {
 		cpumask_set_cpu(smp_processor_id(), mm_cpumask(next));
+		smp_mb();
 		new_on_cpu = true;
 	}
 
-- 
2.13.3
