RCU lockup issues when CONFIG_SOFTLOCKUP_DETECTOR=n - anyone else seeing this?
Jonathan Cameron
Jonathan.Cameron at huawei.com
Mon Aug 21 20:18:33 AEST 2017
On Mon, 21 Aug 2017 16:06:05 +1000
Nicholas Piggin <npiggin at gmail.com> wrote:
> On Mon, 21 Aug 2017 10:52:58 +1000
> Nicholas Piggin <npiggin at gmail.com> wrote:
>
> > On Sun, 20 Aug 2017 14:14:29 -0700
> > "Paul E. McKenney" <paulmck at linux.vnet.ibm.com> wrote:
> >
> > > On Sun, Aug 20, 2017 at 11:35:14AM -0700, Paul E. McKenney wrote:
> > > > On Sun, Aug 20, 2017 at 11:00:40PM +1000, Nicholas Piggin wrote:
> > > > > On Sun, 20 Aug 2017 14:45:53 +1000
> > > > > Nicholas Piggin <npiggin at gmail.com> wrote:
> > > > >
> > > > > > On Wed, 16 Aug 2017 09:27:31 -0700
> > > > > > "Paul E. McKenney" <paulmck at linux.vnet.ibm.com> wrote:
> > > > > > > On Wed, Aug 16, 2017 at 05:56:17AM -0700, Paul E. McKenney wrote:
> > > > > > >
> > > > > > > Thomas, John, am I misinterpreting the timer trace event messages?
> > > > > >
> > > > > > So I did some digging, and what you find is that rcu_sched seems to do a
> > > > > > simple schedule_timeout(1) and just goes out to lunch for many seconds.
> > > > > > The process_timeout timer never fires (when it finally does wake after
> > > > > > one of these events, it usually removes the timer with del_timer_sync).
> > > > > >
> > > > > > So this patch seems to fix it. Testing, comments welcome.
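
For anyone following along, the path being described here is
schedule_timeout() in kernel/time/timer.c. Roughly, as a simplified
sketch of the 4.13-era code (MAX_SCHEDULE_TIMEOUT and the error cases
omitted; del_singleshot_timer_sync() is the del_timer_sync() mentioned
above):

static void process_timeout(unsigned long __data)
{
	wake_up_process((struct task_struct *)__data);
}

signed long __sched schedule_timeout(signed long timeout)
{
	struct timer_list timer;
	unsigned long expire = timeout + jiffies;

	setup_timer_on_stack(&timer, process_timeout, (unsigned long)current);
	__mod_timer(&timer, expire, false);
	schedule();
	/*
	 * If process_timeout() never fires, we only get back here once
	 * something else wakes the task, and the stale timer is removed
	 * below -- the behaviour described above.
	 */
	del_singleshot_timer_sync(&timer);
	destroy_timer_on_stack(&timer);

	timeout = expire - jiffies;
	return timeout < 0 ? 0 : timeout;
}

So if that one-jiffy timer is lost or never expires, rcu_sched sleeps
until some unrelated wakeup arrives, which would explain multi-second
stalls on an otherwise idle system.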
> > > > >
> > > > > Okay this had a problem of trying to forward the timer from a timer
> > > > > callback function.
> > > > >
> > > > > This was my other approach which also fixes the RCU warnings, but it's
> > > > > a little more complex. I reworked it a bit so the mod_timer fast path
> > > > > hopefully doesn't have much more overhead (actually by reading jiffies
> > > > > only when needed, it probably saves a load).
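
As a rough illustration of that fast-path point (invented names, not
the actual patch): the common "already pending, same expiry" case never
needs to look at jiffies or forward the base at all.

static int sketch_mod_timer(struct timer_list *timer, unsigned long expires)
{
	/* Fast path: nothing to requeue, so no jiffies load, no forwarding. */
	if (timer_pending(timer) && timer->expires == expires)
		return 1;

	/*
	 * Slow path: we are really going to (re)enqueue, so only now read
	 * jiffies, pull the base clock forward, compute the wheel bucket
	 * and queue the timer there.
	 */
	return 0;
}

Only the slow path pays for the jiffies load, which is presumably the
saving referred to above.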
> > > >
> > > > Giving this one a whirl!
> > >
> > > No joy here, but then again there are other reasons to believe that I
> > > am seeing a different bug than Dave and Jonathan are.
> > >
> > > OK, not -entirely- without joy -- 10 of 14 runs were error-free, which
> > > is a good improvement over 0 of 84 for your earlier patch. ;-) But that
> > > is not statistically different from what I see without either patch,
> > > and I still see the "rcu_sched kthread starved" messages. For whatever
> > > it is worth,
> > > by the way, I also see this: "hrtimer: interrupt took 5712368 ns".
> > > Hmmm... I am also seeing that without any of your patches. Might
> > > be hypervisor preemption, I guess.
> >
> > Okay it makes the warnings go away for me, but I'm just booting then
> > leaving the system idle. You're doing some CPU hotplug activity?
>
> Okay, found a bug in the patch (it was not forwarding properly before
> adding the first timer after an idle period), plus a few other concerns.
>
> There's still a problem of a timer function doing a mod_timer from
> within expire_timers. It can't forward the base, which might currently
> be quite a way behind. I *think* after we close these gaps and get
> timely wakeups for timers on there, it should not get too far behind
> for standard timers.
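
A concrete instance of that case, purely illustrative (hypothetical
names, 4.13-era timer API): a self-rearming timer whose callback runs
out of expire_timers() and requeues itself with mod_timer(), i.e.
against a base that may still be well behind jiffies at that point.

static struct timer_list poll_timer;

static void poll_timer_fn(unsigned long data)
{
	/* ...do the periodic work... */

	/*
	 * Requeued from within expire_timers(); per the above, the base
	 * clock cannot be forwarded here and may be quite a way behind.
	 */
	mod_timer(&poll_timer, jiffies + HZ);
}

static void start_poll_timer(void)
{
	setup_timer(&poll_timer, poll_timer_fn, 0);
	mod_timer(&poll_timer, jiffies + HZ);
}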
>
> Deferrable is a different story. Firstly it has no idle tracking so we
> never forward it. Even if we wanted to, we can't do it reliably because
> it could contain timers way behind the base. They are "deferrable", so
> you get what you pay for, but this still means there's a window where
> you can add a deferrable timer and get a far later expiry than you
> asked for despite the CPU never going idle after you added it.
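
For readers not steeped in the timer code: a "deferrable" timer is an
ordinary timer_list queued on the separate per-CPU deferrable base via
the TIMER_DEFERRABLE flag. With the 4.13-era API and hypothetical names,
something like:

static struct timer_list housekeeping_timer;

static void housekeeping_fn(unsigned long data)
{
	pr_info("deferrable timer fired at %lu\n", jiffies);
}

static void arm_housekeeping_timer(void)
{
	init_timer_deferrable(&housekeeping_timer);
	housekeeping_timer.function = housekeeping_fn;
	housekeeping_timer.data = 0;

	/*
	 * Nominally ~1s out, but "deferrable": on an idle CPU the expiry
	 * can slip well past that, and per the above there is a window
	 * where it can slip even without the CPU going idle afterwards.
	 */
	mod_timer(&housekeeping_timer, jiffies + HZ);
}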
>
> All these problems would seem to go away if mod_timer just queued the
> timer onto a single list on the base and then pushed those timers into
> the wheel during your wheel-processing softirq... Although maybe you
> end up with excessive passes over a big queue of timers. Anyway, that
> wouldn't be suitable for 4.13 even if it could work.
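
To make the shape of that idea concrete (a very loose sketch, entirely
made-up names, not something that exists in the tree): mod_timer would
only append the timer to a lockless per-base list, and the softirq would
drain that list into the wheel after forwarding the clock, so enqueue
never has to care how far behind the base is.

struct sketch_base {
	struct llist_head	pending;	/* timers queued by mod_timer() */
	/* ...the usual wheel state: clk, buckets, ... */
};

struct sketch_timer {
	struct timer_list	timer;
	struct llist_node	node;
};

static void sketch_mod_timer(struct sketch_base *base, struct sketch_timer *t)
{
	llist_add(&t->node, &base->pending);
	/* ...raise TIMER_SOFTIRQ / kick the target CPU as needed... */
}

static void sketch_run_timers(struct sketch_base *base)
{
	struct llist_node *list = llist_del_all(&base->pending);
	struct sketch_timer *t, *tmp;

	/* Forward the base clock up to jiffies first, then slot the
	 * pending timers into the wheel at their final buckets... */
	llist_for_each_entry_safe(t, tmp, list, node) {
		/* ...enqueue t->timer into the wheel... */
	}
	/* ...and finally expire whatever is due, as today. */
}

The "excessive passes over a big queue" concern would show up in that
drain loop.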
>
> I'll send out an updated minimal fix after some more testing...
Hi All,

I'm back in the office with hardware access to our D05 64-core ARM64
boards.

I think we still have by far the quickest test cases for this, so feel
free to ping me anything you want tested quickly (we were seeing an
average of less than 10 minutes to trigger with the machine idling).

Nick, I'm currently running your previous version and we are over an
hour in, so far without any instances of the issue, so it looks like a
considerable improvement. I'll see if I can line a couple of boards up
for an overnight run if you have your updated version out by then.

It would be great to finally put this one to bed.
Thanks,
Jonathan
>
> Thanks,
> Nick