PCI bus node location
Rafal Jaworowski
raj at semihalf.com
Thu Nov 12 01:16:56 EST 2009
On 2009-11-11, at 01:05, David Gibson wrote:
>> The current approach seems a bit of a maintenance problem: the PCI
>> bridge's control registers need to specify the whole address instead of
>> just an offset, which is more error-prone in case of changes (when a
>
> Well, yes. And worse, it means there's two places that need to be
> adjusted rather than one, if the IMMR is relocated (which it can
> be). But it's a trade-off of this versus the inconvenience of dealing
> with separate "control" and "bridge" nodes for the PCI and following
> phandles between them.
Would the technique with an additional control node and a phandle
complicate bindings handling much? The clear benefit is the ability to
truly reflect the hierarchy of devices available within the IMMR/CCSR
block.
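For the sake of the discussion, a minimal sketch of what such a split binding might look like. The node names, addresses, and the phandle property name ("fsl,pci-control") are all illustrative assumptions, not an established binding:

```dts
soc@e0000000 {
	#address-cells = <1>;
	#size-cells = <1>;
	ranges = <0x0 0xe0000000 0x100000>;

	/* control registers live inside IMMR/CCSR,
	 * so reg is just an offset within the block */
	pci_ctrl: pci-control@8000 {
		reg = <0x8000 0x1000>;
	};
};

pci@e0008000 {
	device_type = "pci";
	/* hypothetical phandle linking the host bridge
	 * to its control-register node under the soc */
	fsl,pci-control = <&pci_ctrl>;
};
```

With this shape, relocating the IMMR only touches the soc node's ranges; the bridge node itself never carries an absolute control-register address.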
>> number of places need to be adjusted etc.). What would need to be
>> done/extended for the ranges prop you mention to allow for better
>> handling of cases like this?
>
> I don't really understand the question. As Grant has said, the
> "correct" approach is to have one node representing the control
> registers - located under the IMMR ("soc") node - and another
> representing the PCI host bridge itself (which would be in its present
> location). There would need to be phandles linking the two. It
> doesn't really need any extension to the device tree semantics itself
> - just a more complex binding for this device.
Maybe I misunderstood Grant; my impression was that some 'fixing' of
the ranges properties was possible (as an alternative to the
control-node approach).
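For reference, this is the standard address translation that ranges already provides for nodes under the soc; the addresses are illustrative, but the mechanism itself is standard device tree semantics:

```dts
soc@e0000000 {
	#address-cells = <1>;
	#size-cells = <1>;
	/* child offset 0x0 maps to parent address 0xe0000000,
	 * 1 MiB window; relocating the IMMR changes only this line */
	ranges = <0x0 0xe0000000 0x100000>;

	serial@4500 {
		reg = <0x4500 0x100>;	/* offset within IMMR, not absolute */
	};
};
```

Any node placed under the soc gets this translation for free, which is exactly what the PCI control registers miss when the bridge node sits outside it.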
> Bear in mind with all this that we've been working out conventions for
> representing various devices as we go along - and in the early days
> nearly everyone was pretty inexperienced with device tree design. A
> number of the bindings that have been established have made less than
> ideal choices. We're getting better, but we're going to have to live
> with some of those mistakes.
>
> Dealing with badly designed device tree bindings is pretty icky, but
> usually the code to handle it can be reasonably well isolated, so it
> doesn't infect too much of the codebase. Just dealing with ugly
> representations when parsing, or having some code which applies fixups
> to the initially supplied device tree are both feasible approaches.
> But we're never going to reach a place where we always get perfect
> device trees, so one way or another, you're going to have to deal with
> some uglies. Our view - borne out by experience so far - is that the
> device tree representation is still worth it, despite the problems.
Don't get me wrong -- I'm just trying to understand what is not clear
to me in the first place. The other aspect, though, is that if there
are areas which could be improved in the design and implementation,
why not do so, especially long term? I realize the already established
representations cannot be immediately ditched, but switching
conventions could be applied over time (e.g. using the compatible prop
in a smart way so that code supports the legacy approach, or setting
some time after which there is a complete switchover, etc.).
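A hedged sketch of the compatible-based transition idea; the compatible strings here are hypothetical, not real bindings:

```dts
pci@e0008000 {
	/* new binding name first, legacy name as fallback:
	 * updated code matches the first string, while old
	 * code keeps working by matching the second */
	compatible = "fsl,mpc8548-pci-v2", "fsl,mpc8548-pci";
	reg = <0xe0008000 0x1000>;
};
```

Since compatible is an ordered list from most to least specific, both old and new OS code can bind to the same node during the transition period.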
Please note we are targeting ARM (and other arches in the future)
besides PowerPC, so if there are any lessons learnt from previous
encounters, I'd rather embrace them.
Rafal