[RFC PATCH V1 0/8] KASAN ppc64 support
ryabinin.a.a at gmail.com
Tue Aug 18 18:50:49 AEST 2015
2015-08-18 8:42 GMT+03:00 Aneesh Kumar K.V <aneesh.kumar at linux.vnet.ibm.com>:
> Andrey Ryabinin <ryabinin.a.a at gmail.com> writes:
>> 2015-08-17 12:50 GMT+03:00 Aneesh Kumar K.V <aneesh.kumar at linux.vnet.ibm.com>:
>>> Because of the above I concluded that we may not be able to do
>>> inline instrumentation. Now if we are not doing inline instrumentation,
>>> we can simplify kasan support by not creating a shadow mapping at all
>>> for vmalloc and vmemmap region. Hence the idea of returning the address
>>> of a zero page for anything other than kernel linear map region.
>> Yes, mapping zero page needed only for inline instrumentation.
>> You simply don't need to check shadow for vmalloc/vmemmap.
>> So, instead of redefining kasan_mem_to_shadow() I'd suggest to
>> add one more arch hook. Something like:
>> bool kasan_tracks_vaddr(unsigned long addr)
>> {
>>         return REGION_ID(addr) == KERNEL_REGION_ID;
>> }
>> And in check_memory_region():
>> if (!(kasan_enabled() && kasan_tracks_vaddr(addr)))
> But that is introducing conditionals in core code for no real benefit.
> This will also break when we eventually end up tracking vmalloc?
Ok, that's a very good reason not to do this.
I see one potential problem in the way you use kasan_zero_page, though:
memset/memcpy of large portions of memory (> 8 * PAGE_SIZE) will end up
overflowing kasan_zero_page when we check the shadow in memory_is_poisoned_n().
> In that case our mem_to_shadow will essentially be a switch
> statement returning different offsets for the kernel region and the
> vmalloc region. As far as core kernel code is concerned, it just needs
> to ask the arch for the shadow address of a memory location; instead of
> adding conditionals in core, my suggestion is that we handle this in an
> arch function.