[RFC PATCH V1 0/8] KASAN ppc64 support
benh at kernel.crashing.org
Mon Aug 17 21:21:24 AEST 2015
On Mon, 2015-08-17 at 16:20 +0530, Aneesh Kumar K.V wrote:
> Benjamin Herrenschmidt <benh at kernel.crashing.org> writes:
> > On Mon, 2015-08-17 at 15:20 +0530, Aneesh Kumar K.V wrote:
> > > For kernel linear mapping, our address space looks like
> > > 0xc000000000000000 - 0xc0003fffffffffff (64TB)
> > >
> > > We can't have virtual addresses (effective addresses) above that range
> > > in the 0xc region. Hence, in order to shadow the linear mapping, I am
> > > using region 0xe. ie, the shadow mapping now looks like
> > >
> > > 0xc000000000000000 -> 0xe000000000000000
> > Why ? IE. Why can't you put the shadow at address +64T and have it
> > work
> > for everything ?
> > .../...
> Above +64TB ? How will that work ? We have checks in different parts of
> the code, like below, where we check that each region's top address is
> within the 64TB range. PGTABLE_RANGE and (ESID_BITS + SID_SHIFT) are all
> dependent on that range (46 bits).
For the VSID we could just mask the address with 64T-1. Depends if it's
some place we want to actually bound check or not. In general though,
we can safely assume that a region will never be bigger than
PGTABLE_RANGE, so having another PGTABLE_RANGE zone for the kasan bits
somewhat makes sense. Or if you want KASAN to actually use page tables,
make it PGTABLE_RANGE/2 and use the upper half. I don't understand
enough of what kasan does ...
> static inline unsigned long get_vsid(unsigned long context,
>				       unsigned long ea, int ssize)
> {
>	/* Bad address. We return VSID 0 for that */
>	if ((ea & ~REGION_MASK) >= PGTABLE_RANGE)
>		return 0;
>
>	if (ssize == MMU_SEGSIZE_256M)
>		return vsid_scramble((context << ESID_BITS)
>				     | (ea >> SID_SHIFT), 256M);
>	return vsid_scramble((context << ESID_BITS_1T)
>			     | (ea >> SID_SHIFT_1T), 1T);
> }
> > > Another reason why inline instrumentation is difficult is that for
> > > inline instrumentation to work, we need to create a mapping for the
> > > _possible_ virtual address space before kasan is fully initialized.
> > > ie, we need to create page table entries for the shadow of the
> > > entire 64TB range, with the zero page, even though we have less
> > > RAM. We definitely can't bolt those entries. I am yet to get the
> > > shadow for the kernel linear mapping to work without bolting. Also
> > > we will have to get the page table allocated for that, because we
> > > can't share page table entries: our fault path uses pte entries for
> > > storing the hash slot index.
> > Hrm, that means we might want to start considering a page table to
> > cover the linear mapping...
> But that would require us to get a large zero page ? Are you planning
> to use a 16G page ?
> > > If we are ok to steal part of that 64TB range for the kasan
> > > mapping, ie we make the shadow of each region part of the same
> > > region, maybe we can get inline instrumentation to work. But that
> > > still doesn't solve the page table allocation overhead issue
> > > mentioned above.
> > >