[PATCH v2 07/10] ARM: tegra: pcie: Add device tree support
Thierry Reding
thierry.reding@avionic-design.de
Fri Jun 15 16:12:36 EST 2012
On Thu, Jun 14, 2012 at 01:50:56PM -0600, Stephen Warren wrote:
> On 06/14/2012 01:29 PM, Thierry Reding wrote:
> > On Thu, Jun 14, 2012 at 12:30:50PM -0600, Stephen Warren wrote:
> >> On 06/14/2012 03:19 AM, Thierry Reding wrote:
> ...
> >>> #address-cells = <1>;
> >>> #size-cells = <1>;
> >>>
> >>> pci@80000000 {
> >>
> >> I'm still not convinced that using the address of the port's
> >> registers is the correct way to represent each port. The port
> >> index seems much more useful.
> >>
> >> The main reason here is that there are a lot of registers that
> >> contain fields for each port - far more than the combination of
> >> this node's reg and ctrl-offset (which I assume is an address
> >> offset for just one example of this issue) properties can
> >> describe. The bit position and bit stride of these fields isn't
> >> necessarily the same in each register. Do we want a property like
> >> ctrl-offset for every single type of field in every single shared
> >> register that describes the location of the relevant data, or
> >> just a single "port ID" bit that can be applied to anything?
> >>
> >> (Perhaps this isn't so obvious looking at the TRM since it
> >> doesn't document all registers, and I'm also looking at the
> >> Tegra30 documentation too, which might be more exposed to this -
> >> I haven't correlated all the documentation sources to be sure
> >> though)
> >
> > I agree that maybe adding properties for each bit position or
> > register offset may not work out too well. But I think it still
> > makes sense to use the base address of the port's registers (see
> > below). We could of course add some code to determine the index
> > from the base address at initialization time and reuse the index
> > where appropriate.
>
> To me, working back from address to ID then using the ID to calculate
> some other addresses seems far more icky than just calculating all the
> addresses based off of one ID. But, I suppose this doesn't make a huge
> practical difference.
This really depends on the device vs. no-device decision below. If we can
make it work without needing an extra device for it, then using the index
is certainly better. However, if we instantiate devices from the DT, then
we have the address anyway, and adding the index as a property would be
redundant and error-prone (what happens if somebody sets the index of the
port at address 0x80000000 to 2?).
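To make the "derive the index from the address" idea concrete, something
along these lines should do. This is only a sketch: the helper name and
the assumption that the ports sit at 0x80000000/0x80001000 with a 0x1000
stride are mine, not taken from the TRM.

#include <linux/ioport.h>

/*
 * Sketch only: derive the port index from the port's register base, so
 * that no separate (and potentially conflicting) index property is
 * needed in the DT. The 0x80000000 base and 0x1000 stride are
 * assumptions for illustration.
 */
static unsigned int tegra_pcie_port_index(const struct resource *regs)
{
	return (regs->start - 0x80000000) / 0x1000;
}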
> >>> 	compatible = "nvidia,tegra20-pcie-port";
> >>> 	reg = <0x80000000 0x00001000>;
> >>> 	status = "disabled";
> >>>
> >>> 	#address-cells = <3>;
> >>> 	#size-cells = <2>;
> >>>
> >>> 	ranges = <0x81000000 0 0 0x80400000 0 0x00008000   /* I/O */
> >>> 		  0x82000000 0 0 0x90000000 0 0x08000000   /* non-prefetchable memory */
> >>> 		  0xc2000000 0 0 0xa0000000 0 0x08000000>;  /* prefetchable memory */
> >>
> >> The values here appear identical for both ports. Surely they
> >> should describe just the parts of the overall address space that
> >> have been assigned/delegated to the individual port/bridge?
> >
> > They're not identical. Port 0 gets the first half and port 1 gets
> > the second half of the ranges specified in the parent.
>
> Oh right, I missed some 8s and 0s that looked the same!
>
> >>> While looking into some more code, trying to figure out how to
> >>> hook this all up with the device tree I ran into a problem. I
> >>> need to actually create a 'struct device' for each of the
> >>> ports, so I added the "simple-bus" to the pcie-controller's
> >>> "compatible" property. Furthermore, each PCI root port now
> >>> becomes a platform_device, which is supported by a new
> >>> tegra-pcie-port driver. I'm not sure if "port" is very common
> >>> in PCI speak, so something like tegra-pcie-bridge (compatible =
> >>> "nvidia,tegra20-pcie-bridge") may be more appropriate?
> >>
> >> What is it that drives the need for each port to be a 'struct
> >> device'? The current driver supports 2 host ports, yet there's
> >> only a single struct device for it. Does the DT code assume a 1:1
> >> mapping between struct device and DT node that represents the
> >> child bus? If so, perhaps it'd be better to rework that code to
> >> accept a DT node as a parameter and call it multiple times,
> >> rather than accept a struct device as a parameter and hence need
> >> multiple devices?
> >
> > It's not so much the DT code, but rather the PCI core and
> > ultimately the device model that requires it. Each port is
> > basically a PCI host bridge that provides a root PCI bus and the
> > device model is used to represent the hierarchy of the busses.
> > Providing just the DT node isn't going to be enough.
>
> But the existing driver works without /any/ devices, let alone one per
> port.
That doesn't necessarily mean it is correct. The representation in the
kernel's device tree (the Linux device model, not the DT) isn't quite
accurate, because the host bridge is missing its link to the parent PCIe
controller. It also means that the kernel's device hierarchy doesn't
match the DT representation.
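For illustration, this is roughly what instantiating the ports from the
DT would look like, whether it happens via the "simple-bus" compatible or
an explicit call from the controller's probe. A sketch only; error
handling and the rest of the probe are elided:

#include <linux/of_platform.h>
#include <linux/platform_device.h>

static int tegra_pcie_probe(struct platform_device *pdev)
{
	/* ... map registers, enable clocks and power, etc. ... */

	/*
	 * Create a child platform_device for each port node, so that
	 * every port has a struct device that carries its device_node
	 * and is parented to the PCIe controller.
	 */
	return of_platform_populate(pdev->dev.of_node, NULL, NULL,
				    &pdev->dev);
}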
Additionally, if you look at how PCI busses and devices are matched to
their respective DT nodes, the code in drivers/pci/of.c provides a
default implementation of pcibios_get_phb_of_node(), which matches the
struct pci_bus up with the device_node of the parent device.
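For reference, the default (weak) implementation looks roughly like this
(paraphrased from memory, see drivers/pci/of.c for the real thing):

#include <linux/of.h>
#include <linux/pci.h>

struct device_node *pcibios_get_phb_of_node(struct pci_bus *bus)
{
	/* only meaningful for host bridges, i.e. root buses */
	if (WARN_ON(bus->self || bus->parent))
		return NULL;

	/* use the of_node of the bridge device or of its parent */
	if (bus->bridge->of_node)
		return of_node_get(bus->bridge->of_node);

	if (bus->bridge->parent && bus->bridge->parent->of_node)
		return of_node_get(bus->bridge->parent->of_node);

	return NULL;
}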
If we keep the current implementation, which passes NULL as the parent to
pci_scan_root_bus(), then we'll have to provide a custom implementation
of pcibios_get_phb_of_node() that jumps through hoops similar to the x86
version (see arch/x86/kernel/devicetree.c). That of course won't help
improve the current state of fragmentation in the PCI subsystem.
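In other words, if each port is a platform_device we can simply pass its
struct device as the parent and the default implementation just works.
Roughly like this (a sketch only; tegra_pcie_ops and filling the resource
list from the port's ranges property are assumed/elided):

#include <linux/list.h>
#include <linux/pci.h>
#include <linux/platform_device.h>

static struct pci_ops tegra_pcie_ops;	/* config space accessors elided */

static int tegra_pcie_port_probe(struct platform_device *pdev)
{
	LIST_HEAD(resources);	/* filled from the port's ranges property */
	struct pci_bus *bus;

	/*
	 * Passing &pdev->dev as the parent lets the default
	 * pcibios_get_phb_of_node() pick up the port's device_node.
	 */
	bus = pci_scan_root_bus(&pdev->dev, 0, &tegra_pcie_ops, NULL,
				&resources);
	if (!bus)
		return -ENODEV;

	pci_bus_add_devices(bus);

	return 0;
}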
Thierry