[PATCH v3 1/3] powerpc/mm: prepare kernel for KAsan on PPC32

Dmitry Vyukov dvyukov at google.com
Wed Jan 16 21:03:51 AEDT 2019


On Tue, Jan 15, 2019 at 6:25 PM Christophe Leroy
<christophe.leroy at c-s.fr> wrote:
>
> Le 15/01/2019 à 18:10, Dmitry Vyukov a écrit :
> > On Tue, Jan 15, 2019 at 6:06 PM Andrey Ryabinin <aryabinin at virtuozzo.com> wrote:
> >>
> >> On 1/15/19 2:14 PM, Dmitry Vyukov wrote:
> >>> On Tue, Jan 15, 2019 at 8:27 AM Christophe Leroy
> >>> <christophe.leroy at c-s.fr> wrote:
> >>>> On 01/14/2019 09:34 AM, Dmitry Vyukov wrote:
> >>>>> On Sat, Jan 12, 2019 at 12:16 PM Christophe Leroy
> >>>>> <christophe.leroy at c-s.fr> wrote:
> >>>>> >
> >>>>> > In kernel/cputable.c, explicitly use memcpy() in order
> >>>>> > to allow GCC to replace it with __memcpy() when KASAN is
> >>>>> > selected.
> >>>>> >
> >>>>> > Since commit 400c47d81ca38 ("powerpc32: memset: only use dcbz once cache is
> >>>>> > enabled"), memset() can be used before activation of the cache,
> >>>>> > so no need to use memset_io() for zeroing the BSS.
> >>>>> >
> >>>>> > Signed-off-by: Christophe Leroy <christophe.leroy at c-s.fr>
> >>>>> > ---
> >>>>> >  arch/powerpc/kernel/cputable.c | 4 ++--
> >>>>> >  arch/powerpc/kernel/setup_32.c | 6 ++----
> >>>>> >  2 files changed, 4 insertions(+), 6 deletions(-)
> >>>>> >
> >>>>> > diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
> >>>>> > index 1eab54bc6ee9..84814c8d1bcb 100644
> >>>>> > --- a/arch/powerpc/kernel/cputable.c
> >>>>> > +++ b/arch/powerpc/kernel/cputable.c
> >>>>> > @@ -2147,7 +2147,7 @@ void __init set_cur_cpu_spec(struct cpu_spec *s)
> >>>>> >         struct cpu_spec *t = &the_cpu_spec;
> >>>>> >
> >>>>> >         t = PTRRELOC(t);
> >>>>> > -       *t = *s;
> >>>>> > +       memcpy(t, s, sizeof(*t));
> >>>>>
> >>>>> Hi Christophe,
> >>>>>
> >>>>> I understand why you are doing this, but this looks a bit fragile and
> >>>>> non-scalable. It may not work with the next compiler version, with a
> >>>>> compiler version different from yours, with clang, etc.
> >>>>
> >>>> My feeling is that this change makes it more solid.
> >>>>
> >>>> My understanding is that when you do *t = *s, the compiler can do
> >>>> the copy whatever way it wants.
> >>>> When you use memcpy(), you ensure it will be done that way and not
> >>>> another way, don't you?
> >>>
> >>> It makes this single line more deterministic wrt code-gen (though,
> >>> strictly speaking, the compiler can still turn memcpy back into inline
> >>> instructions, since it knows memcpy's semantics anyway).
> >>> But the problem I meant is that the set of places subject to this
> >>> issue is not deterministic. So if we go with this solution, after this
> >>> change we are in "works on your machine" territory: we either need to
> >>> commit to never using struct copies and zeroing throughout kernel
> >>> code, or potentially face a long tail of other similar cases. And
> >>> since they can be triggered by another compiler version, we may need
> >>> to backport such changes to previous releases too. Whereas if we went
> >>> with compiler flags, that would prevent the problem in all current and
> >>> future places, and with other past/future compiler versions.
> >>>
> >>
> >> The patch will work for any compiler. The point of this patch is to make
> >> memcpy() visible to the preprocessor which will replace it with __memcpy().
> >
> > For this single line, yes. But it does not mean that KASAN will work.
> >
> >> After preprocessor's work, compiler will see just __memcpy() call here.
>
> This problem can affect any arch, I believe. Maybe the 'solution' would
> be to run a generic script, similar to
> arch/powerpc/kernel/prom_init_check.sh, checking that all objects
> compiled with KASAN_SANITIZE_object.o := n don't include any reference
> to memcpy(), memset() or memmove()?


We do this when building the user-space sanitizers runtime. There, all
code always runs with the sanitizer enabled but at the same time must
not be instrumented. So we committed to changing all possible
memcpy/memset injection points and have a script that checks that we
indeed have no such calls on any path. The problem there is a bit
simpler, as we don't have a gazillion combinations of configs and the
runtime is usually self-hosted (it is bundled with the compiler), so we
know which compiler is used to build it. And all of that is checked on
CI. I don't know how much work it would be to do the same for the
kernel, though. Adding -ffreestanding, if it worked, looked like a
cheap option to achieve the same.
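For illustration only, here is roughly how the -ffreestanding approach could be wired up per-object with standard kbuild conventions (the file names below are assumptions for the sketch, not part of the actual patch):

```make
# Sketch: build early-boot objects without KASAN instrumentation and as
# freestanding, so the compiler may not emit implicit calls to the
# instrumented memcpy()/memset().
KASAN_SANITIZE_cputable.o := n
KASAN_SANITIZE_setup_32.o := n
CFLAGS_cputable.o += -ffreestanding
CFLAGS_setup_32.o += -ffreestanding
```

The appeal of the flag, as noted above, is that it covers every struct copy or zeroing the compiler might lower to a libcall, rather than patching call sites one by one.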

Another option is to insert checks into KASAN's memcpy/memset verifying
that at least some early init has completed. If early init hasn't
finished yet, they could skip all the additional work and just do the
plain memcpy/memset. We can't afford this for memory access
instrumentation for performance reasons, but it should be bearable for
memcpy/memset.
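A minimal user-space sketch of that guarded interceptor idea (the flag name, the stand-in check_memory_region(), and kasan_memcpy() are all hypothetical here; the real kernel hooks differ):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical flag, set once early init (e.g. shadow memory) is ready. */
static bool kasan_early_init_done;

/* Stand-in for KASAN's shadow check; a no-op in this sketch. */
static void check_memory_region(const void *addr, size_t size, bool write)
{
	(void)addr; (void)size; (void)write;
}

/* Interposed memcpy: skip instrumentation until early init completes,
 * but always perform the actual copy. */
void *kasan_memcpy(void *dst, const void *src, size_t n)
{
	if (kasan_early_init_done) {
		check_memory_region(src, n, false);
		check_memory_region(dst, n, true);
	}
	return memcpy(dst, src, n); /* would be __memcpy in the kernel */
}
```

The per-call branch on a flag is cheap relative to the cost of a memcpy, which is why this is bearable here but not for plain load/store instrumentation.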


More information about the Linuxppc-dev mailing list