[PATCH v2] powerpc: fix boot on BOOK3S_32 with CONFIG_STRICT_KERNEL_RWX

Michael Ellerman mpe at ellerman.id.au
Wed Nov 22 22:48:57 AEDT 2017


Christophe LEROY <christophe.leroy at c-s.fr> writes:

>> On 22/11/2017 at 00:07, Balbir Singh wrote:
>> On Wed, Nov 22, 2017 at 1:28 AM, Christophe Leroy
>> <christophe.leroy at c-s.fr> wrote:
>>> On powerpc32, patch_instruction() is called by apply_feature_fixups()
>>> which is called from early_init().
>>>
>>> There is the following note in front of early_init():
>>>   * Note that the kernel may be running at an address which is different
>>>   * from the address that it was linked at, so we must use RELOC/PTRRELOC
>>>   * to access static data (including strings).  -- paulus
>>>
>>> Therefore, slab_is_available() cannot be called yet, and
>>> text_poke_area must be accessed via PTRRELOC().
>>>
>>> Fixes: 37bc3e5fd764f ("powerpc/lib/code-patching: Use alternate map for patch_instruction()")
>>> Reported-by: Meelis Roos <mroos at linux.ee>
>>> Cc: Balbir Singh <bsingharora at gmail.com>
>>> Signed-off-by: Christophe Leroy <christophe.leroy at c-s.fr>
>>> ---
>>>   v2: Added missing asm/setup.h
>>>
>>>   arch/powerpc/lib/code-patching.c | 6 ++----
>>>   1 file changed, 2 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
>>> index c9de03e0c1f1..d469224c4ada 100644
>>> --- a/arch/powerpc/lib/code-patching.c
>>> +++ b/arch/powerpc/lib/code-patching.c
>>> @@ -21,6 +21,7 @@
>>>   #include <asm/tlbflush.h>
>>>   #include <asm/page.h>
>>>   #include <asm/code-patching.h>
>>> +#include <asm/setup.h>
>>>
>>>   static int __patch_instruction(unsigned int *addr, unsigned int instr)
>>>   {
>>> @@ -146,11 +147,8 @@ int patch_instruction(unsigned int *addr, unsigned int instr)
>>>           * During early early boot patch_instruction is called
>>>           * when text_poke_area is not ready, but we still need
>>>           * to allow patching. We just do the plain old patching
>>> -        * We use slab_is_available and per cpu read * via this_cpu_read
>>> -        * of text_poke_area. Per-CPU areas might not be up early
>>> -        * this can create problems with just using this_cpu_read()
>>>           */
>>> -       if (!slab_is_available() || !this_cpu_read(text_poke_area))
>>> +       if (!this_cpu_read(*PTRRELOC(&text_poke_area)))
>>>                  return __patch_instruction(addr, instr);
>> 
>> On ppc64, we call apply_feature_fixups() in early_setup() after we've
>> relocated ourselves. Sorry for missing the ppc32 case. I would like to
>> avoid PTRRELOC when unnecessary.
>
> What do you suggest then?
>
> Some #ifdef PPC32 around that?

No, I don't think that improves anything.

I think the comment about per-cpu not being up is wrong; you'll just get
the static copy of text_poke_area, which should be NULL. So we don't
need the slab_is_available() check anyway.
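
Concretely, this is roughly what we end up with (a sketch of the logic,
not a verbatim copy of code-patching.c):

static DEFINE_PER_CPU(struct vm_struct *, text_poke_area);

int patch_instruction(unsigned int *addr, unsigned int instr)
{
	/*
	 * The static image of the per-cpu variable is zero-initialised,
	 * so this reads NULL until the late initcall has allocated the
	 * poke area; PTRRELOC() keeps the access safe before the 32-bit
	 * kernel is running at its link address.
	 */
	if (!this_cpu_read(*PTRRELOC(&text_poke_area)))
		return __patch_instruction(addr, instr);

	/* ... otherwise patch through the temporary mapping ... */
	return 0;
}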

So I'll take this as-is.

Having said that, I absolutely hate PTRRELOC, so if it starts spreading
we will have to come up with something less bug-prone.
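
For reference, on 32-bit PTRRELOC boils down to adding the run-time
relocation offset to a link-time address (simplified sketch below; the
actual macro in asm/setup.h is built on add_reloc_offset()):

/* Address the kernel is running at minus the address it was linked at. */
unsigned long reloc_offset(void);

/*
 * Simplified sketch: adjust a pointer to static data so it can be
 * dereferenced before the kernel is running at its link address.
 * Forgetting it on an early-boot path is exactly the kind of bug
 * that bit us here.
 */
#define PTRRELOC(x)	((typeof(x))((unsigned long)(x) + reloc_offset()))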

cheers

