[PATCH 4/4] powerpc/64s: Support shrinking the SLB for debugging
Michal Suchánek
msuchanek at suse.de
Sat Jan 19 11:13:14 AEDT 2019
On Thu, 17 Jan 2019 23:13:28 +1100
Michael Ellerman <mpe at ellerman.id.au> wrote:
> On machines with 1TB segments and a 32-entry SLB it's quite hard to
> cause sufficient SLB pressure to trigger bugs caused by badly
> timed SLB faults.
>
> We have seen this in the past and a few years ago added the
> disable_1tb_segments command line option to force the use of 256MB
> segments. However, even this allows some bugs to slip through testing
> if the SLB entry in question was recently accessed.
>
> So add a new command line parameter for debugging which shrinks the
> SLB to the minimum size we can support. Currently that size is 3, two
> bolted SLBs and one for dynamic use. This creates the maximal SLB
> pressure while still allowing the kernel to operate.

Doesn't this violate the power of 2 requirement stated in 2/4?
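
To make the concern concrete (a hypothetical sketch, not the actual code
from 2/4): a round-robin victim index that wraps with a power-of-2 mask
stops cycling through the entries once the size is 3, e.g.:

    #include <stdio.h>

    /* Illustration only -- not code from this series. */
    int main(void)
    {
            unsigned int size = 3;          /* SLB_NUM_BOLTED (2) + 1 */
            unsigned int last = 0, i;

            for (i = 0; i < 8; i++) {
                    /* mask is 0b10: index 1 is never produced, we get stuck */
                    last = (last + 1) & (size - 1);
                    printf("victim entry %u\n", last);  /* prints 0 every time */
            }
            return 0;
    }

If 2/4 relies on that kind of masking anywhere, a size of 3 would need the
wrap to be done with a compare instead.
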
Thanks
Michal
>
> Signed-off-by: Michael Ellerman <mpe at ellerman.id.au>
> ---
> arch/powerpc/mm/slb.c | 14 ++++++++++++++
> 1 file changed, 14 insertions(+)
>
> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> index 61450a9cf30d..0f33e28f97da 100644
> --- a/arch/powerpc/mm/slb.c
> +++ b/arch/powerpc/mm/slb.c
> @@ -506,10 +506,24 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
> asm volatile("isync" : : : "memory");
> }
>
> +static bool shrink_slb = false;
> +
> +static int __init parse_shrink_slb(char *p)
> +{
> + shrink_slb = true;
> + slb_set_size(0);
> +
> + return 0;
> +}
> +early_param("shrink_slb", parse_shrink_slb);
> +
> static u32 slb_full_bitmap;
>
> void slb_set_size(u16 size)
> {
> + if (shrink_slb)
> + size = SLB_NUM_BOLTED + 1;
> +
> mmu_slb_size = size;
>
> if (size >= 32)
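
As an aside, since parse_shrink_slb() ignores its argument, exercising this
should just be a matter of appending the bare token to the kernel command
line, e.g. (shown together with the existing option mentioned above, purely
as an example):

    shrink_slb disable_1tb_segments

If I read the override in slb_set_size() right, that pins mmu_slb_size at
SLB_NUM_BOLTED + 1 = 3 no matter what size it is later called with.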