[PATCH] powerpc/64s: relocation, register save fixes for system reset interrupt

Gautham R Shenoy ego at linux.vnet.ibm.com
Fri Nov 4 20:04:09 AEDT 2016


Hi Nick,

On Thu, Nov 03, 2016 at 06:02:44PM +1100, Nicholas Piggin wrote:
> On Thu, 3 Nov 2016 02:32:39 -0400
> "Shreyas B. Prabhu" <shreyasbp at gmail.com> wrote:
> 
> > On Thu, Nov 3, 2016 at 2:17 AM, Nicholas Piggin <npiggin at gmail.com> wrote:
> > > On Thu, 3 Nov 2016 01:56:46 -0400
> > > "Shreyas B. Prabhu" <shreyasbp at gmail.com> wrote:
> > >  
> > >> On Thu, Nov 3, 2016 at 1:21 AM, Nicholas Piggin <npiggin at gmail.com> wrote:  
> > >> > On Wed, 2 Nov 2016 14:15:48 +0530
> > >> > Gautham R Shenoy <ego at linux.vnet.ibm.com> wrote:
> > >> >  
> > >> >> Hi Nick,
> > >> >>
> > >> >> On Wed, Nov 02, 2016 at 07:36:24PM +1100, Nicholas Piggin wrote:  
> > >> >> >
> > >> >> > Okay, I'll work with that. What's the best way to make a P8 do
> > >> >> > winkle sleeps?  
> > >> >>
> > >> >> From userspace, offlining the CPUs of the core will put them into
> > >> >> winkle.
> > >> >
> > >> > Thanks for this. Hum, that r13 manipulation throughout the idle
> > >> > and exception code is a bit interesting. I'll do the minimal patch
> > >> > for 4.9, but what's the reason not to just use the winkle state
> > >> > in the PACA rather than storing it in an HSPRG0 bit? Can you (or
> > >> > Shreyas) explain?
> > >> >  
> > >> Hi Nick,
> > >>
> > >> Before deep winkle, checking SRR1's wakeup bits (bits 46:47) was enough to
> > >> figure out which idle state we were waking up from. But on P8, SRR1's wakeup
> > >> bits aren't enough, since bits 46:47 are 0b11 for both fast sleep and
> > >> deep winkle.
> > >> So to distinguish between fast sleep and deep winkle, we use the current
> > >> HSPRG0/PORE trick. We program the PORE engine (which is used for state
> > >> restore when waking up from deep winkle) to restore HSPRG0 with the last
> > >> bit set (we do this in pnv_save_sprs_for_winkle()). The r13 bit
> > >> manipulation in pnv_restore_hyp_resource is related to this.
> > >
> > > Right, I didn't realize exactly how that worked until I had to go read
> > > the code just now. It's a neat little trick. I'm wondering, can we use
> > > PACA_THREAD_IDLE_STATE==PNV_THREAD_WINKLE for this instead? It would
> > > just make the early PACA usage in the exception handlers able to use
> > > more common code.
> > >  
> > 
> > PACA_THREAD_IDLE_STATE will have what was 'requested'. It may not be the
> > state we are waking up from. For example, 7 threads of the core may execute
> > the winkle instruction while 1 thread of the same core executes sleep. In
> > that case the core only enters sleep, whereas PACA_THREAD_IDLE_STATE for
> > the 7 threads will have PNV_THREAD_WINKLE.
> 
> I see, that makes sense. Would it be possible to keep a count of the number
> of threads going into winkle in core_idle_state? Even if that is not a
> guarantee of them requiring a PORE wakeup, would the restore case be harmful?

Doing a full restore on wakeup when the hardware didn't actually go to
winkle isn't harmful. The first few iterations of the winkle enablement
patchset based the wakeup-time restore decision on
PACA_THREAD_IDLE_STATE==PNV_THREAD_WINKLE.
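
To make the 7-winkle/1-sleep case above concrete, here is a small
standalone C model (not kernel code; the helper names are mine, only the
PNV_THREAD_* values follow the kernel's definitions) of why a check on
the *requested* state over-restores:

/*
 * The core only goes as deep as the shallowest state requested by any of
 * its threads, so a per-thread "requested winkle" test can fire even
 * though the hardware never actually winkled.
 */
#include <stdio.h>

/* Same ordering as the kernel's PNV_THREAD_* values */
enum pnv_thread_state {
	PNV_THREAD_RUNNING = 0,
	PNV_THREAD_NAP     = 1,
	PNV_THREAD_SLEEP   = 2,
	PNV_THREAD_WINKLE  = 3,
};

/* State the core actually reaches: the minimum over all its threads */
static enum pnv_thread_state core_state(const enum pnv_thread_state *req, int n)
{
	enum pnv_thread_state deepest = PNV_THREAD_WINKLE;
	for (int i = 0; i < n; i++)
		if (req[i] < deepest)
			deepest = req[i];
	return deepest;
}

int main(void)
{
	/* Shreyas' example: 7 threads request winkle, 1 requests sleep */
	enum pnv_thread_state req[8];
	for (int i = 0; i < 7; i++)
		req[i] = PNV_THREAD_WINKLE;
	req[7] = PNV_THREAD_SLEEP;

	int winkle_restore_needed = (core_state(req, 8) == PNV_THREAD_WINKLE);

	for (int i = 0; i < 8; i++)
		printf("thread %d: requested-state check says %d, hardware needed %d\n",
		       i, req[i] == PNV_THREAD_WINKLE, winkle_restore_needed);
	return 0;
}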

The disadvantage was that we would end up restoring a whole bunch of
subcore SPRs (SDR1, RPR, AMOR), core SPRs (TSCR, WORC) and per-thread
state (SLBs, SPURR, PURR, DSCR, WORT), which would waste quite a lot of
cycles if the hardware hadn't actually demoted the CPU all the way to
winkle.

Hence Ben suggested piggybacking on the PORE engine to set the LSB of
HSPRG0 to indicate a wakeup from winkle.
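
Just to spell the trick out in one place, a rough standalone C model of
the idea (the real code is assembly in pnv_save_sprs_for_winkle() and
pnv_restore_hyp_resource; the helper names and the sample address below
are made up):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/*
 * HSPRG0 holds the PACA pointer (what ends up in r13), which is aligned,
 * so its bit 0 is otherwise always zero and can be used as a flag.
 */

/* Save side: the HSPRG0 image handed to the PORE engine has the LSB set */
static uint64_t hsprg0_winkle_image(uint64_t paca_ptr)
{
	return paca_ptr | 1ULL;		/* "this value was restored by PORE" */
}

/* Wakeup side: r13 has just been loaded from HSPRG0 */
static uint64_t check_winkle_wakeup(uint64_t r13, bool *from_winkle)
{
	*from_winkle = r13 & 1ULL;	/* set only if PORE did the restore */
	return r13 & ~1ULL;		/* clear the flag: clean PACA pointer */
}

int main(void)
{
	uint64_t paca = 0xc000000001a80000ULL;	/* made-up aligned PACA address */
	bool from_winkle;

	/* Sleep wakeup: HSPRG0 still holds the plain PACA pointer */
	check_winkle_wakeup(paca, &from_winkle);
	printf("sleep wakeup:  from_winkle=%d\n", from_winkle);

	/* Winkle wakeup: PORE restored the tagged image */
	check_winkle_wakeup(hsprg0_winkle_image(paca), &from_winkle);
	printf("winkle wakeup: from_winkle=%d\n", from_winkle);
	return 0;
}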

> 
> Thanks,
> Nick
> 
--
Thanks and Regards
gautham.


