[PATCH] of/fdt: Add unflatten_partial_device_tree

Stephen Neuendorffer stephen.neuendorffer at xilinx.com
Fri Jul 2 01:59:16 EST 2010



> -----Original Message-----
> From: devicetree-discuss-bounces+stephen.neuendorffer=xilinx.com at lists.ozlabs.org
> [mailto:devicetree-discuss-bounces+stephen.neuendorffer=xilinx.com at lists.ozlabs.org]
> On Behalf Of Stephen Neuendorffer
> Sent: Monday, June 28, 2010 9:51 PM
> To: David Gibson; grant.likely at secretlab.ca
> Cc: devicetree-discuss at lists.ozlabs.org
> Subject: RE: [PATCH] of/fdt: Add unflatten_partial_device_tree
> 
> 
> >> Another question is what to do with the unflattened tree once it is
> >> unflattened.  Some of the existing code expects the node to be part of
> >> the global tree.  Those could either be refactored, or the new partial
> >> tree could be grafted into the global tree.  Grafting will have the
> >> least impact, but it probably isn't a good idea in the long term.
> >> Grafting together unrelated trees seems messy to me.
> >
> > I think I must have missed an earlier discussion.  What's the use case
> > for multiple fdt blobs?
> 
> Basically, we are building systems which have an FPGA sitting on a PCIe
> bus.  The FPGA contains an internal bus with lots of devices.  From a
> structural point of view, it seems to make sense to describe the
> connectivity of this subsystem with a device tree: albeit a slightly
> strange one with no processor, just a pcie<->plb bridge and a number of
> devices.  Since this subsystem is, in fact, architecture independent
> (you could physically plug it into a PCIe slot on a system with any
> processor architecture), it is a bit of a poster case for generalizing
> more of the device tree infrastructure.  In particular, we are most
> interested in doing this on x86 systems.
> 
> Which leads me back to the first question: my approach so far is that
> the device tree fragment is completely independent of any toplevel
> device tree, for the simple reason that on x86 there *isn't* a toplevel
> device tree.  The devices that get generated are in the regular device
> structure, which is similar to the way PCI bridges and PCMCIA drivers
> work.

Going down this path, I've taken an approach where the PCI driver reads
the PCI BARs (which were programmed earlier) and stuffs the correct
ranges=<> property into the device tree.  I think this is necessary
because (I believe) powerpc programs the BARs based on the flat device
tree, whereas on x86 the BIOS enumerates the BARs and everyone else just
deals with the result.  This should enable the device driver to use the
existing address translation code in
drivers/of/address.c:of_address_to_resource().  The current status is
that I get something like:

OF: ** translation for device /plb at 0/xps-hwicap at 80030000 **
OF: bus is default (na=1, ns=1) on /plb at 0
OF: translating address: 80030000
OF: parent bus is default (na=1, ns=1) on /
OF: walking ranges...
OF: default map, cp=80000000, s=10000000, da=80030000
OF: parent translation for: d0000000
OF: with offset: 30000
OF: one level translation: d0030000
OF: reached root node
of_icap d0030000.xps-hwicap: Xilinx icap port driver
of_icap d0030000.xps-hwicap: Couldn't lock memory region at d0030000
of_icap: probe of d0030000.xps-hwicap failed with error -16
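For illustration, the kind of dynamically generated ranges property this
implies might look roughly like the fragment below.  The addresses and
sizes are taken from the translation log above (child 0x80000000 maps to
the BAR at 0xd0000000, size 0x10000000); the compatible strings and node
layout are assumptions, not what the driver actually emits:

```
/ {
	plb@0 {
		compatible = "simple-bus";
		#address-cells = <1>;
		#size-cells = <1>;
		/* One translation entry: child address 0x80000000 on the
		 * internal PLB bus maps to parent address 0xd0000000
		 * (the PCI BAR), spanning 0x10000000 bytes.  The driver
		 * would patch the middle cell with the BAR value it reads
		 * at probe time. */
		ranges = <0x80000000 0xd0000000 0x10000000>;

		xps-hwicap@80030000 {
			compatible = "xlnx,xps-hwicap-1.00.a";
			reg = <0x80030000 0x10000>;
		};
	};
};
```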

The probe apparently fails because the address region is already assigned
to the PCI device.  I think I need to figure out how PCI bridges declare
their address range without locking it, so that the device in the FPGA
(in this case the ICAP) can claim it later.
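One possible shape for that (a sketch only, assuming the kernel resource
API of this era; the names fpga_window and fpga_claim_window are mine):
instead of request_mem_region(), which marks the range IORESOURCE_BUSY,
the driver managing the BAR could use insert_resource() to add it to the
iomem tree as a plain, non-busy parent.  __request_region() descends into
non-busy conflicting resources, so a later request_mem_region() on a
subrange (e.g. the ICAP at 0xd0030000) should then succeed:

```
/* Sketch: declare the FPGA's BAR window without locking it. */
static struct resource fpga_window = {
	.name  = "fpga-plb-window",
	.start = 0xd0000000,
	.end   = 0xdfffffff,
	.flags = IORESOURCE_MEM,	/* note: no IORESOURCE_BUSY */
};

static int fpga_claim_window(void)
{
	/* Adds the range under iomem_resource without marking it busy,
	 * so child devices inside the FPGA can request_mem_region()
	 * their own subranges later. */
	return insert_resource(&iomem_resource, &fpga_window);
}
```

Whether this interacts cleanly with the PCI core's own claim on the BAR
is exactly the open question above.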

Steve



