[powerpc] ftrace warning kernel/trace/ftrace.c:2068 with code-patching selftests

Mark Rutland mark.rutland at arm.com
Fri Jan 28 00:33:02 AEDT 2022


On Thu, Jan 27, 2022 at 02:16:31PM +0100, Sven Schnelle wrote:
> Mark Rutland <mark.rutland at arm.com> writes:
> 
> > On Thu, Jan 27, 2022 at 07:46:01AM -0500, Steven Rostedt wrote:
> >> On Thu, 27 Jan 2022 12:27:04 +0000
> >> Mark Rutland <mark.rutland at arm.com> wrote:
> >> 
> >> > Ah, so those non-ELF relocations for the mcount_loc table just mean "apply the
> >> > KASLR offset here", which is equivalent for all entries.
> >> > 
> >> > That makes sense, thanks!
> >> 
> >> And this is why we were having such a hard time understanding each other ;-)
> >
> > ;)
> >
> > With that in mind, I think that we understand that the build-time sort works
> > for:
> >
> > * arch/x86, because the non-ELF relocations for mcount_loc happen to be
> >   equivalent.
> >  
> > * arch/arm, because there's no dynamic relocation and the mcount_loc entries
> >   have been finalized prior to sorting.
> >
> > ... but doesn't work for anyone else (including arm64) because the ELF
> > relocations are not equivalent, and need special care that is not yet
> > implemented.
> 
> For s390 my idea is to skip the normal relocation handling for the addresses
> between __start_mcount_loc and __stop_mcount_loc: for those entries we know
> they are 64 bits wide, so we just need to add the KASLR offset.
> 
> I'm thinking about something like this:
> 
> diff --git a/arch/s390/boot/compressed/decompressor.h b/arch/s390/boot/compressed/decompressor.h
> index f75cc31a77dd..015d7e2e94ef 100644
> --- a/arch/s390/boot/compressed/decompressor.h
> +++ b/arch/s390/boot/compressed/decompressor.h
> @@ -25,6 +25,8 @@ struct vmlinux_info {
>  	unsigned long rela_dyn_start;
>  	unsigned long rela_dyn_end;
>  	unsigned long amode31_size;
> +	unsigned long start_mcount_loc;
> +	unsigned long stop_mcount_loc;
>  };
>  
>  /* Symbols defined by linker scripts */
> diff --git a/arch/s390/boot/startup.c b/arch/s390/boot/startup.c
> index 1aa11a8f57dd..7bb0d88db5c6 100644
> --- a/arch/s390/boot/startup.c
> +++ b/arch/s390/boot/startup.c
> @@ -88,6 +88,11 @@ static void handle_relocs(unsigned long offset)
>  	dynsym = (Elf64_Sym *) vmlinux.dynsym_start;
>  	for (rela = rela_start; rela < rela_end; rela++) {
>  		loc = rela->r_offset + offset;
> +		if ((loc >= vmlinux.start_mcount_loc) &&
> +		    (loc < vmlinux.stop_mcount_loc)) {
> +			(*(unsigned long *)loc) += offset;
> +			continue;
> +		}
>  		val = rela->r_addend;
>  		r_sym = ELF64_R_SYM(rela->r_info);
>  		if (r_sym) {
> @@ -232,6 +237,8 @@ static void offset_vmlinux_info(unsigned long offset)
>  	vmlinux.rela_dyn_start += offset;
>  	vmlinux.rela_dyn_end += offset;
>  	vmlinux.dynsym_start += offset;
> +	vmlinux.start_mcount_loc += offset;
> +	vmlinux.stop_mcount_loc += offset;
>  }
>  
>  static unsigned long reserve_amode31(unsigned long safe_addr)
> diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S
> index 42c43521878f..51c773405608 100644
> --- a/arch/s390/kernel/vmlinux.lds.S
> +++ b/arch/s390/kernel/vmlinux.lds.S
> @@ -213,6 +213,8 @@ SECTIONS
>  		QUAD(__rela_dyn_start)				/* rela_dyn_start */
>  		QUAD(__rela_dyn_end)				/* rela_dyn_end */
>  		QUAD(_eamode31 - _samode31)			/* amode31_size */
> +		QUAD(__start_mcount_loc)
> +		QUAD(__stop_mcount_loc)
>  	} :NONE
>  
>  	/* Debugging sections.	*/
> 
> Not sure whether that would also work on powerpc, and also not sure
> whether I missed something here. Maybe it doesn't even work. ;-)

I don't know enough about s390 or powerpc relocs to say whether that works, but
I can say that approach isn't going to work for arm64 without other significant
changes.
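
To spell out why: on arm64 the __mcount_loc table gets rewritten at boot by the
RELA relocation processing (the values come from the relocation addends), so a
sort of the section contents alone is undone at boot. A build-time sort would
also have to reorder the relocations themselves, roughly along the lines of the
untested sketch below (hypothetical helper, not existing scripts/sorttable
code):

#include <elf.h>
#include <stdlib.h>

static int mcount_rela_cmp(const void *a, const void *b)
{
	const Elf64_Rela *ra = a, *rb = b;

	if (ra->r_addend < rb->r_addend)
		return -1;
	return ra->r_addend > rb->r_addend;
}

/* relas[]: relocations targeting __mcount_loc; loc_base: its address */
static void sort_mcount_relas(Elf64_Rela *relas, size_t nr, Elf64_Addr loc_base)
{
	size_t i;

	/* Order the eventual table contents, which live in the addends... */
	qsort(relas, nr, sizeof(*relas), mcount_rela_cmp);

	/* ...then re-point each relocation at consecutive __mcount_loc slots. */
	for (i = 0; i < nr; i++)
		relas[i].r_offset = loc_base + i * sizeof(Elf64_Addr);
}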

I want to get the regression fixed ASAP, so can we take a simple patch for -rc2
which disables the build-time sort where it's currently broken (by limiting the
opt-in to arm and x86), then follow up per-architecture to re-enable it if
desired/safe?
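
Concretely, something like the below (untested sketch; the exact Kconfig symbol
and dependencies would need checking):

config BUILDTIME_MCOUNT_SORT
	bool
	default y
	# sketch: only opt in the architectures where the sort is known to work
	depends on BUILDTIME_TABLE_SORT && DYNAMIC_FTRACE && (ARM || X86)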

Thanks,
Mark.

