more eeh

Greg KH greg at
Fri Mar 19 11:01:16 EST 2004

On Thu, Mar 18, 2004 at 09:54:21AM +1100, Paul Mackerras wrote:
> So the scheme that the hardware designers came up with was to add
> logic to the PCI-PCI bridges (we have one per slot, to support
> hotplug) to allow a slot to be electrically isolated from the rest of
> the system.  Then, if the system detects an address parity error on a
> DMA transaction initiated by a particular device, it can just abort
> that transaction and isolate that device immediately, and thus stop
> the error from affecting any other part of the system.
> When the slot is in this state, any writes to the device get thrown
> away and any reads return all 1's.

Which is the same as PCMCIA sees when the device is disconnected, right?

> The idea of presenting this to drivers as a hot-unplug event followed
> by a hot-plug event (after the device has been reset and reconnected)
> was my suggestion as the best way to present to the drivers what the
> hardware is doing.  I envisaged three classes of drivers: (a) those
> that were very pSeries-specific and could use a pSeries-specific API
> to cope with all this; (b) drivers that could cope with asynchronous
> plug and unplug events, to which the EEH shenanigans could be
> presented as plug/unplug events, and (c) drivers which couldn't cope
> at all.
> My hope was that a lot of drivers could be in class (b).

They should be, if they work with PCI hotplug systems.  Unfortunately a
lot of SCSI drivers are still not there, but with 2.6 it's gotten a lot
better.
> I was hoping that most hot-plug aware drivers could be hardened
> sufficiently to be in class (b) without too much effort, and that that
> hardening would be acceptable to the driver maintainers

I don't think anyone would disagree with this.

> (whereas the changes to put a driver in class (a) would, I expect, not
> be acceptable).


> I was thinking that the unplug event generation, resetting and
> reconnecting of the device, and plug event generation would be done by
> a kernel thread.  I don't think we want to rely on userspace for that,
> because userspace may get blocked while the device is gone.

But you want userspace to do this.  There are systems with a few
different PCI Hotplug controller drivers on them.  The different
controller drivers control different slots.  Userspace is the only place
that can reliably handle this.

And if you are a kernel thread, you would have the same issues that
dropping to userspace and doing the disconnect there causes.

So I still think that my userspace proposal is the proper way to do
this.  It works with all PCI hotplug drivers, and allows userspace to
implement any type of policy that it wishes (disconnecting
filesystems, bringing down network connections, logging the event to the
proper place, etc.)

> I would rather get the notification to the driver quickly without
> relying on userspace (but of course from task context not interrupt
> context).  What happens after that could be driven by userspace,
> except that I worry about what happens if userspace gets blocked by
> the device being unavailable.

You've never actually timed a hotplug event, have you? :)

They are blindingly fast.  So bloody fast that I had to put a lot of dumb
logic in the hotplug and udev code to sit and spin and wait for the
kernel to catch up.

Now, the issue of putting the hotplug script on a disk that just got an
error would indicate that you really need a class (a) driver for that
kind of thing.

Hope this helps,

greg k-h

** Sent via the linuxppc64-dev mail list.