[1/4] RFC: SLB rewrite (core rewrite)
Jake Moilanen
moilanen at austin.ibm.com
Thu Jul 8 01:06:43 EST 2004
This is pretty nice. Good work. I have just one nit below:
> +_GLOBAL(slb_allocate)
> + /*
> + * First find a slot, round robin. Previously we tried to find
> + * a free slot first but that took too long. Unfortunately we
> + * dont have any LRU information to help us choose a slot.
> + */
> + srdi r9,r1,27
> + ori r9,r9,1 /* mangle SP for later compare */
> +
> + ld r10,PACASTABRR(r13)
> +3:
> + addi r10,r10,1
> + /* use a cpu feature mask if we ever change our slb size */
> + cmpldi r10,SLB_NUM_ENTRIES
> +
> + blt+ 4f
This branch probably shouldn't carry a static prediction hint. The general
rule is to use an explicit hint only for a genuinely lopsided branch, like an
error case or a missed lock. Since about POWER4, the hardware branch
prediction is a little over 98% correct -- you'll get a miss about 1 out of
62 times, or 1.6% of the time. It's probably not measurable; dropping the
hint just might save a few cycles.
> + * The >> 27 (rather than >> 28) is so that the LSB is the
> + * valid bit - this way we check valid and ESID in one compare.
> + */
> + srdi r11,r11,27
> + cmpd r11,r9
> + beq- 3b
Same as above.
Thanks,
Jake
** Sent via the linuxppc64-dev mail list. See http://lists.linuxppc.org/