[PATCH 09/13] powerpc: Disable KMSAN checks on functions which walk the stack
Christophe Leroy
christophe.leroy at csgroup.eu
Thu Dec 14 20:00:40 AEDT 2023
On 14/12/2023 at 06:55, Nicholas Miehlbradt wrote:
> Functions which walk the stack read parts of the stack which cannot be
> instrumented by KMSAN, e.g. the backchain. Disable KMSAN sanitization of
> these functions to prevent false positives.
Do other architectures have to do this as well?
I don't see it done for show_stack() on other architectures; is this a specific need for powerpc?
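
If I understand the issue correctly, the walkers follow the backchain word at
the base of each frame and read the saved LR slot, and those words are stored
by code KMSAN does not instrument, so every frame read gets flagged. Roughly
this pattern (a condensed, illustrative sketch of the show_stack() /
arch_stack_walk() loop, not the exact kernel code; the name walk_backchain is
made up, and the validate_sp() bounds checks and frame-marker handling are
omitted):

static void __no_sanitize_address __no_kmsan_checks
walk_backchain(unsigned long sp)
{
	while (sp) {
		unsigned long *stack = (unsigned long *)sp;
		/*
		 * Both loads below read stack words that KMSAN has no
		 * metadata for, so without __no_kmsan_checks each frame
		 * would be reported as a use of uninitialized memory.
		 */
		unsigned long newsp = stack[0];			/* backchain */
		unsigned long ip = stack[STACK_FRAME_LR_SAVE];	/* saved LR */

		pr_info("[%lx] [%lx]\n", sp, ip);
		sp = newsp;
	}
}
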
>
> Signed-off-by: Nicholas Miehlbradt <nicholas at linux.ibm.com>
> ---
> arch/powerpc/kernel/process.c | 6 +++---
> arch/powerpc/kernel/stacktrace.c | 10 ++++++----
> arch/powerpc/perf/callchain.c | 2 +-
> 3 files changed, 10 insertions(+), 8 deletions(-)
>
> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> index 392404688cec..3dc88143c3b2 100644
> --- a/arch/powerpc/kernel/process.c
> +++ b/arch/powerpc/kernel/process.c
> @@ -2276,9 +2276,9 @@ static bool empty_user_regs(struct pt_regs *regs, struct task_struct *tsk)
>
> static int kstack_depth_to_print = CONFIG_PRINT_STACK_DEPTH;
>
> -void __no_sanitize_address show_stack(struct task_struct *tsk,
> - unsigned long *stack,
> - const char *loglvl)
> +void __no_sanitize_address __no_kmsan_checks show_stack(struct task_struct *tsk,
> + unsigned long *stack,
> + const char *loglvl)
> {
> unsigned long sp, ip, lr, newsp;
> int count = 0;
> diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
> index e6a958a5da27..369b8b2a1bcd 100644
> --- a/arch/powerpc/kernel/stacktrace.c
> +++ b/arch/powerpc/kernel/stacktrace.c
> @@ -24,8 +24,9 @@
>
> #include <asm/paca.h>
>
> -void __no_sanitize_address arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
> - struct task_struct *task, struct pt_regs *regs)
> +void __no_sanitize_address __no_kmsan_checks
> + arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
> + struct task_struct *task, struct pt_regs *regs)
> {
> unsigned long sp;
>
> @@ -62,8 +63,9 @@ void __no_sanitize_address arch_stack_walk(stack_trace_consume_fn consume_entry,
> *
> * If the task is not 'current', the caller *must* ensure the task is inactive.
> */
> -int __no_sanitize_address arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
> - void *cookie, struct task_struct *task)
> +int __no_sanitize_address __no_kmsan_checks
> + arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void *cookie,
> + struct task_struct *task)
> {
> unsigned long sp;
> unsigned long newsp;
> diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
> index 6b4434dd0ff3..c7610b38e9b8 100644
> --- a/arch/powerpc/perf/callchain.c
> +++ b/arch/powerpc/perf/callchain.c
> @@ -40,7 +40,7 @@ static int valid_next_sp(unsigned long sp, unsigned long prev_sp)
> return 0;
> }
>
> -void __no_sanitize_address
> +void __no_sanitize_address __no_kmsan_checks
> perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
> {
> unsigned long sp, next_sp;