[PATCH 3/3] powerpc/mm: Speed up computation of base and actual page size for a HPTE

Paul Mackerras paulus at ozlabs.org
Wed Sep 7 15:07:14 AEST 2016


On Mon, Sep 05, 2016 at 10:34:16AM +0530, Aneesh Kumar K.V wrote:
> > +static void init_hpte_page_sizes(void)
> > +{
> > +	long int ap, bp;
> > +	long int shift, penc;
> > +
> > +	for (bp = 0; bp < MMU_PAGE_COUNT; ++bp) {
> > +		if (!mmu_psize_defs[bp].shift)
> > +			continue;	/* not a supported page size */
> > +		for (ap = bp; ap < MMU_PAGE_COUNT; ++ap) {
> > +			penc = mmu_psize_defs[bp].penc[ap];
> > +			if (penc == -1)
> > +				continue;
> > +			shift = mmu_psize_defs[ap].shift - LP_SHIFT;
> > +			if (shift <= 0)
> > +				continue;	/* should never happen */
> > +			while (penc < (1 << LP_BITS)) {
> > +				hpte_page_sizes[penc] = (ap << 4) | bp;
> > +				penc += 1 << shift;
> > +			}
> > +		}
> > +	}
> > +}
> > +
> 
> Going through this again, it is confusing. How are we differentiating
> between the below penc values
> 
>  0000 000z		>=8KB (z = 1)
>  0000 zzzz		>=64KB (zzzz = 0001)
> 
> Those are made up 'z' values.

That wouldn't be a valid set of page encodings.  If the page encoding
for 8kB pages is z=1, then the encodings for all larger page sizes
would have to have the least significant bit be 0.  In fact none of
the POWER processors has an 8kB page size; the smallest implemented
large page size is 64kB.  Consequently the first level of decoding of
the page size on these CPUs can look at the bottom 4 bits.

The 00000000 encoding is used for 16MB pages, because 16MB was the
first large page size implemented back in the POWER4+ days, and there
was no page size field at that time, so these 8 bits were reserved and
set to zero by OSes at that time.  For compatibility, the 00000000
encoding continues to be used, so the encodings for other page sizes
always have at least one 1 in the zzzz bits.

Paul.
