MMIO and gcc re-ordering issue
Jes Sorensen
jes at sgi.com
Fri May 30 19:48:12 EST 2008
Benjamin Herrenschmidt wrote:
> On Thu, 2008-05-29 at 10:47 -0400, Jes Sorensen wrote:
>> The only way to guarantee ordering in the above setup is either to
>> make writel() fully ordered or to add an mmiowb() between the two
>> writel()'s. On Altix you have to go and read from the PCI bridge to
>> ensure all writes to it have been flushed, which is also what mmiowb()
>> is doing. If writel() were to guarantee this ordering, it would make
>> every writel() call extremely expensive :-(
>
> Interesting. I've always been taught by ia64 people that mmiowb() was
> intended to be used solely between writel() and spin_unlock().
>
> I think in the above case, you really should make writel() ordered.
> Anything else is asking for trouble, for the exact same reasons that I
> made it fully ordered on powerpc, at least vs. previous stores. I only
> kept it relaxed vs. subsequent cacheable stores (i.e., spin_unlock), for
> which I use the trick mentioned before.
Hmmm, I hope I didn't mess up the description of this and add to the
confusion.
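To spell it out, the pattern Ben is referring to, with mmiowb() just
before the unlock, looks roughly like this (a sketch only; the struct,
lock and register names are made up):

/* Sketch: foo_dev, REG_CMD and REG_LEN are made up for illustration. */
static void foo_post_command(struct foo_dev *dev, u32 cmd, u32 len)
{
	unsigned long flags;

	spin_lock_irqsave(&dev->lock, flags);
	writel(cmd, dev->mmio + REG_CMD);
	writel(len, dev->mmio + REG_LEN);
	mmiowb();	/* on Altix: read from the PCI bridge so the posted
			 * writes are flushed before the lock is dropped */
	spin_unlock_irqrestore(&dev->lock, flags);
}

What I was describing is the case where the mmiowb() has to sit between
the two writel()'s themselves, not just before the unlock.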
The net result of making writel() fully ordered would be to kill
performance completely; I really don't like that idea. Having each
writel() go out and poll the PCI bridge is going to make every driver
used on Altix slow as a dog. In addition, it still won't solve the
problem for userland-mapped stuff such as Infiniband.
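In effect, a fully ordered writel() on sn2 would have to look something
like this (hypothetical sketch, not actual code):

/* Hypothetical: what a fully ordered writel() would boil down to on
 * Altix -- every MMIO write pays for the bridge flush that mmiowb()
 * does today. */
static inline void writel_fully_ordered(u32 val, void __iomem *addr)
{
	writel(val, addr);
	mmiowb();	/* read from the PCI bridge until the posted
			 * writes have drained -- on every single write */
}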
> Yes, this has some cost (can be fairly significant on powerpc too) but
> I think it's a very basic assumption from drivers that consecutive
> writel's, especially issued by the same CPU, will get to the device
> in order.
In this case the cost is more than just significant; it's pretty
crippling.
> If this is a performance problem, then provide relaxed variants and
> use them in selected drivers.
We'd have to make major changes to drivers like e1000, tg3, mptsas, the
qla2/3/4xxx family and a bunch of others to get performance back. I
really think the code maintenance burden there would get far worse than
what we have today :(
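Just to illustrate the kind of churn that means in a hot path, a
conversion would look something like this (hypothetical writel_relaxed()
accessor, made-up device and registers):

/* Hypothetical: a hot path converted to relaxed accessors.
 * writel_relaxed() here stands for a variant with no ordering
 * guarantees; foo_dev and FOO_DB() are made up. */
static void foo_kick_queues(struct foo_dev *dev, int n)
{
	int i;

	for (i = 0; i < n; i++)
		writel_relaxed(dev->ring[i].tail, dev->mmio + FOO_DB(i));

	/* pay for ordering once at the end instead of on every write */
	writel(n, dev->mmio + FOO_DB_COMMIT);
}

Multiply that by every hot path in every driver we care about on Altix.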
Cheers,
Jes