[PATCH v4 3/3] powerpc/32: Add KASAN support

Christophe Leroy christophe.leroy at c-s.fr
Sat Feb 9 22:55:12 AEDT 2019


Hi Andrey,

On 08/02/2019 at 18:40, Andrey Konovalov wrote:
> On Fri, Feb 8, 2019 at 6:17 PM Christophe Leroy <christophe.leroy at c-s.fr> wrote:
>>
>> Hi Daniel,
>>
>> On 08/02/2019 at 17:18, Daniel Axtens wrote:
>>> Hi Christophe,
>>>
>>> I've been attempting to port this to 64-bit Book3e nohash (e6500),
>>> although I think I've ended up with an approach more similar to Aneesh's
>>> much earlier (2015) series for book3s.
>>>
>>> Part of this is just due to the changes between 32 and 64 bits - we need
>>> to hack around the discontiguous mappings - but one thing that I'm
>>> particularly puzzled by is what the kasan_early_init is supposed to do.
>>
>> It shouldn't be a problem, as my patch uses a 'for_each_memblock(memory,
>> reg)' loop.
>>
>>>
>>>> +void __init kasan_early_init(void)
>>>> +{
>>>> +    unsigned long addr = KASAN_SHADOW_START;
>>>> +    unsigned long end = KASAN_SHADOW_END;
>>>> +    unsigned long next;
>>>> +    pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
>>>> +    int i;
>>>> +    phys_addr_t pa = __pa(kasan_early_shadow_page);
>>>> +
>>>> +    BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
>>>> +
>>>> +    if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
>>>> +            panic("KASAN not supported with Hash MMU\n");
>>>> +
>>>> +    for (i = 0; i < PTRS_PER_PTE; i++)
>>>> +            __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
>>>> +                         kasan_early_shadow_pte + i,
>>>> +                         pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
>>>> +
>>>> +    do {
>>>> +            next = pgd_addr_end(addr, end);
>>>> +            pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
>>>> +    } while (pmd++, addr = next, addr != end);
>>>> +}
>>>
>>> As far as I can tell it's mapping the early shadow page, read-only, over
>>> the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
>>> shadow PTE array from the generic code.
>>>
>>> I haven't been able to find an answer to why this is in the docs, so I
>>> was wondering if you or anyone else could explain the early part of
>>> kasan init a bit better.
>>
>> See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
>> explanation of the shadow.
>>
>> When shadow is 0, it means the memory area is entirely accessible.
>>
>> It is necessary to set up a shadow area as soon as possible because all
>> data accesses check the shadow area from the beginning (except in a few
>> files where sanitization has been disabled in the Makefiles).
>>
>> Until the real shadow area is set up, all accesses are granted thanks
>> to the zero shadow area being full of zeros.
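
For reference (before getting to Andrey's correction below): the shadow
lookup in question is the generic kasan_mem_to_shadow() from
include/linux/kasan.h, which boils down to roughly the following --
paraphrased here, the actual KASAN_SHADOW_OFFSET being arch-specific:

	/* Generic KASAN: one shadow byte covers eight bytes of memory. */
	static inline void *kasan_mem_to_shadow(const void *addr)
	{
		return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
			+ KASAN_SHADOW_OFFSET;
	}
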
> 
> Not entirely correct. kasan_early_init() indeed maps the whole shadow
> memory range to the same kasan_early_shadow_page. However as kernel
> loads and memory gets allocated this shadow page gets rewritten with
> non-zero values by different KASAN allocator hooks. Since these values
> come from completely different parts of the kernel, but all land on
> the same page, kasan_early_shadow_page's content can be considered
> garbage. When KASAN checks memory accesses for validity it detects
> these garbage shadow values, but doesn't print any reports, as the
> reporting routine bails out on the current->kasan_depth check (which
> has the value of 1 initially). Only after kasan_init() completes, when
> the proper shadow memory is mapped, current->kasan_depth gets set to 0
> and we start reporting bad accesses.

That's surprising, because in the early phase I map the shadow area 
read-only, so I do not expect it to get modified unless RO protection is 
failing for some reason.
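
(For the record, the bail-out you describe is, if I read mm/kasan/report.c
correctly, roughly the following -- paraphrased, not a verbatim copy:)

	static inline bool report_enabled(void)
	{
		/* init_task starts with kasan_depth = 1; kasan_init() clears it */
		if (current->kasan_depth)
			return false;
		if (test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags))
			return true;
		return !test_and_set_bit(KASAN_BIT_REPORTED, &kasan_flags);
	}

So any garbage landing in the early shadow page stays silent until
kasan_init() clears that depth counter.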

Next week I'll add a test in early_init() to check the content of the 
early shadow area.
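
Something along these lines (completely untested sketch, the helper name
is only illustrative):

	static void __init kasan_check_early_shadow(void)
	{
		u8 *p = kasan_early_shadow_page;
		int i;

		/* The early shadow page is expected to stay all-zero. */
		for (i = 0; i < PAGE_SIZE; i++)
			if (p[i])
				pr_err("early shadow dirty at %04x: %02x\n",
				       i, p[i]);
	}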

Christophe

> 
>>
>> I mainly used the ARM implementation as an example when I implemented
>> KASAN for ppc32.
>>
>>>
>>> At the moment, I don't do any early init, and like Aneesh's series for
>>> book3s, I end up needing a special flag to disable kasan until after
>>> kasan_init. Also, as with Balbir's series for Radix, some tests didn't
>>> fire, although my missing tests are a superset of his. I suspect the
>>> early init has something to do with these...?
>>
>> I think you should really focus on establishing a zero shadow area as
>> early as possible instead of trying to hack the core parts of KASAN.
>>
>>>
>>> (I'm happy to collate answers into a patch to the docs, btw!)
>>
>> We can also continue the discussion via
>> https://github.com/linuxppc/issues/issues/106
>>
>>>
>>> In the long term I hope to revive Aneesh's and Balbir's series for hash
>>> and radix as well.
>>
>> Great.
>>
>> Christophe
>>
>>>
>>> Regards,
>>> Daniel
>>>
>>>> +
>>>> +static void __init kasan_init_region(struct memblock_region *reg)
>>>> +{
>>>> +    void *start = __va(reg->base);
>>>> +    void *end = __va(reg->base + reg->size);
>>>> +    unsigned long k_start, k_end, k_cur, k_next;
>>>> +    pmd_t *pmd;
>>>> +
>>>> +    if (start >= end)
>>>> +            return;
>>>> +
>>>> +    k_start = (unsigned long)kasan_mem_to_shadow(start);
>>>> +    k_end = (unsigned long)kasan_mem_to_shadow(end);
>>>> +    pmd = pmd_offset(pud_offset(pgd_offset_k(k_start), k_start), k_start);
>>>> +
>>>> +    for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
>>>> +            k_next = pgd_addr_end(k_cur, k_end);
>>>> +            if ((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte) {
>>>> +                    pte_t *new = pte_alloc_one_kernel(&init_mm);
>>>> +
>>>> +                    if (!new)
>>>> +                            panic("kasan: pte_alloc_one_kernel() failed");
>>>> +                    memcpy(new, kasan_early_shadow_pte, PTE_TABLE_SIZE);
>>>> +                    pmd_populate_kernel(&init_mm, pmd, new);
>>>> +            }
>>>> +    };
>>>> +
>>>> +    for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
>>>> +            void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>>>> +            pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
>>>> +
>>>> +            if (!va)
>>>> +                    panic("kasan: memblock_alloc() failed");
>>>> +            pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
>>>> +            pte_update(pte_offset_kernel(pmd, k_cur), ~0, pte_val(pte));
>>>> +    }
>>>> +    flush_tlb_kernel_range(k_start, k_end);
>>>> +}
>>>> +
>>>> +void __init kasan_init(void)
>>>> +{
>>>> +    struct memblock_region *reg;
>>>> +
>>>> +    for_each_memblock(memory, reg)
>>>> +            kasan_init_region(reg);
>>>> +
>>>> +    kasan_init_tags();
>>>> +
>>>> +    /* At this point kasan is fully initialized. Enable error messages */
>>>> +    init_task.kasan_depth = 0;
>>>> +    pr_info("KASAN init done\n");
>>>> +}
>>>> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
>>>> index 33cc6f676fa6..ae7db88b72d6 100644
>>>> --- a/arch/powerpc/mm/mem.c
>>>> +++ b/arch/powerpc/mm/mem.c
>>>> @@ -369,6 +369,10 @@ void __init mem_init(void)
>>>>       pr_info("  * 0x%08lx..0x%08lx  : highmem PTEs\n",
>>>>               PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP));
>>>>    #endif /* CONFIG_HIGHMEM */
>>>> +#ifdef CONFIG_KASAN
>>>> +    pr_info("  * 0x%08lx..0x%08lx  : kasan shadow mem\n",
>>>> +            KASAN_SHADOW_START, KASAN_SHADOW_END);
>>>> +#endif
>>>>    #ifdef CONFIG_NOT_COHERENT_CACHE
>>>>       pr_info("  * 0x%08lx..0x%08lx  : consistent mem\n",
>>>>               IOREMAP_TOP, IOREMAP_TOP + CONFIG_CONSISTENT_SIZE);
>>>> --
>>>> 2.13.3
>>



