[PATCH] powerpc: always enable queued spinlocks for 64s, disable for others
Christophe Leroy
christophe.leroy at csgroup.eu
Mon Dec 21 17:04:01 AEDT 2020
On 21/12/2020 at 04:22, Nicholas Piggin wrote:
> Queued spinlocks have shown to have good performance and fairness
> properties even on smaller (2 socket) POWER systems. This selects
> them automatically for 64s. For other platforms they are de-selected,
> the standard spinlock is far simpler and smaller code, and single
> chips with a handful of cores is unlikely to show any improvement.
>
> CONFIG_EXPERT still allows this to be changed, e.g., to help debug
> performance or correctness issues.
>
> Signed-off-by: Nicholas Piggin <npiggin at gmail.com>
> ---
> arch/powerpc/Kconfig | 8 +++-----
> 1 file changed, 3 insertions(+), 5 deletions(-)
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index ae7391627054..1f9f9e64d638 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -255,6 +255,7 @@ config PPC
> select PCI_MSI_ARCH_FALLBACKS if PCI_MSI
> select PCI_SYSCALL if PCI
> select PPC_DAWR if PPC64
> + select PPC_QUEUED_SPINLOCKS if !EXPERT && PPC_BOOK3S_64 && SMP
The condition is a bit convoluted, and it doesn't default PPC_QUEUED_SPINLOCKS to Y on book3s/64 once EXPERT is selected.
> select RTC_LIB
> select SPARSE_IRQ
> select SYSCTL_EXCEPTION_TRACE
> @@ -506,16 +507,13 @@ config HOTPLUG_CPU
> config PPC_QUEUED_SPINLOCKS
> bool "Queued spinlocks"
> depends on SMP
> + depends on EXPERT || PPC_BOOK3S_64
> +
I would do:
config PPC_QUEUED_SPINLOCKS
	bool "Queued spinlocks" if EXPERT
	depends on SMP
	default PPC_BOOK3S_64
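
That way the prompt is only offered when EXPERT is set, and the option still defaults to y on book3s/64 SMP kernels even with EXPERT enabled. Roughly, the complete entry would then read as below (help text carried over unchanged, the # comments are only mine, recalling the usual kconfig semantics):

config PPC_QUEUED_SPINLOCKS
	# Prompt is only shown when EXPERT=y; otherwise the symbol
	# silently follows the default below.
	bool "Queued spinlocks" if EXPERT
	depends on SMP
	# y on book3s/64, n elsewhere, regardless of EXPERT; with EXPERT
	# the user can still override it either way.
	default PPC_BOOK3S_64
	help
	  Say Y here to use queued spinlocks which give better scalability and
	  fairness on large SMP and NUMA systems without harming single threaded
	  performance.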
> help
> Say Y here to use queued spinlocks which give better scalability and
> fairness on large SMP and NUMA systems without harming single threaded
> performance.
>
> - This option is currently experimental, the code is more complex and
> - less tested so it defaults to "N" for the moment.
> -
> - If unsure, say "N".
> -
> config ARCH_CPU_PROBE_RELEASE
> def_bool y
> depends on HOTPLUG_CPU
>
Christophe