[RFC PATCH v3 5/6] dt-bindings: of: Add restricted DMA pool
Rob Herring
robh at kernel.org
Thu Jan 21 08:31:17 AEDT 2021
On Wed, Jan 20, 2021 at 11:30 AM Robin Murphy <robin.murphy at arm.com> wrote:
>
> On 2021-01-20 16:53, Rob Herring wrote:
> > On Wed, Jan 06, 2021 at 11:41:23AM +0800, Claire Chang wrote:
> >> Introduce the new compatible string, restricted-dma-pool, for restricted
> >> DMA. One can specify the address and length of the restricted DMA memory
> >> region by restricted-dma-pool in the device tree.
> >
> > If this goes into DT, I think we should be able to use dma-ranges for
> > this purpose instead. Normally, 'dma-ranges' is for physical bus
> > restrictions, but there's no reason it can't be used for policy or to
> > express restrictions the firmware has enabled.
>
> There would still need to be some way to tell SWIOTLB to pick up the
> corresponding chunk of memory and to prevent the kernel from using it
> for anything else, though.
Don't we already have that problem if dma-ranges has a very small
range? We just get lucky because the restricted range generally covers
much more RAM than is needed.
In any case, wouldn't finding all the dma-ranges do this? We're
already walking the tree to find the max DMA address now.
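Just to illustrate, something along these lines is what I have in mind
(a rough sketch only; the node name and addresses are made up, not
taken from this series):

	/* Hypothetical example: the restriction expressed with dma-ranges
	 * on the bus node instead of a reserved-memory compatible. */
	restricted_bus: bus {
		compatible = "simple-bus";
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;
		/* <child-addr parent-addr size>: devices under this bus
		 * can only DMA to the 4MiB window at 0x50000000 */
		dma-ranges = <0x50000000 0x50000000 0x400000>;
	};

The kernel would of course still have to carve that window out for
SWIOTLB (or whatever manages the pool) rather than hand it to the page
allocator, which I think is your point.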
> >> Signed-off-by: Claire Chang <tientzu at chromium.org>
> >> ---
> >> .../reserved-memory/reserved-memory.txt | 24 +++++++++++++++++++
> >> 1 file changed, 24 insertions(+)
> >>
> >> diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
> >> index e8d3096d922c..44975e2a1fd2 100644
> >> --- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
> >> +++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
> >> @@ -51,6 +51,20 @@ compatible (optional) - standard definition
> >> used as a shared pool of DMA buffers for a set of devices. It can
> >> be used by an operating system to instantiate the necessary pool
> >> management subsystem if necessary.
> >> + - restricted-dma-pool: This indicates a region of memory meant to be
> >> + used as a pool of restricted DMA buffers for a set of devices. The
> >> + memory region would be the only region accessible to those devices.
> >> + When using this, the no-map and reusable properties must not be set,
> >> + so the operating system can create a virtual mapping that will be used
> >> + for synchronization. The main purpose for restricted DMA is to
> >> + mitigate the lack of DMA access control on systems without an IOMMU,
> >> + which could result in the DMA accessing the system memory at
> >> + unexpected times and/or unexpected addresses, possibly leading to data
> >> + leakage or corruption. The feature on its own provides a basic level
> >> + of protection against the DMA overwriting buffer contents at
> >> + unexpected times. However, to protect against general data leakage and
> >> + system memory corruption, the system needs to provide a way to restrict
> >> + the DMA to a predefined memory region.
> >> - vendor specific string in the form <vendor>,[<device>-]<usage>
> >> no-map (optional) - empty property
> >> - Indicates the operating system must not create a virtual mapping
> >> @@ -120,6 +134,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
> >> compatible = "acme,multimedia-memory";
> >> reg = <0x77000000 0x4000000>;
> >> };
> >> +
> >> + restricted_dma_mem_reserved: restricted_dma_mem_reserved {
> >> + compatible = "restricted-dma-pool";
> >> + reg = <0x50000000 0x400000>;
> >> + };
> >> };
> >>
> >> /* ... */
> >> @@ -138,4 +157,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
> >> memory-region = <&multimedia_reserved>;
> >> /* ... */
> >> };
> >> +
> >> + pcie_device: pcie_device@0,0 {
> >> + memory-region = <&restricted_dma_mem_reserved>;
> >
> > PCI hosts often have inbound window configurations that limit the
> > address range and translate PCI to bus addresses. Those windows happen
> > to be configured by dma-ranges. In any case, wouldn't you want to put
> > the configuration in the PCI host node? Is there a usecase of
> > restricting one PCIe device and not another?
>
> The general design seems to accommodate devices having their own pools
> such that they can't even snoop on each others' transient DMA data. If
> the interconnect had a way of wiring up, say, PCI RIDs to AMBA NSAIDs,
> then in principle you could certainly apply that to PCI endpoints too
> (presumably you'd also disallow them from peer-to-peer transactions at
> the PCI level too).
At least for PCI, I think we can handle this. We have the BDF encoded
in phys.hi, the first of the three PCI address cells, in dma-ranges.
The Open Firmware PCI binding says those fields are 0 in the case of
ranges; it doesn't talk about dma-ranges, though, so I think we could
extend it to allow a BDF there. Though typically with PCIe every device
is behind its own bridge, and each bridge node can have its own
dma-ranges.
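E.g. something like this (a sketch only; the compatible, the addresses
and the assumption that the parent node has #address-cells = <1> are
all made up for illustration):

	pcie: pcie@10000000 {
		compatible = "acme,example-pcie";
		device_type = "pci";
		#address-cells = <3>;
		#size-cells = <2>;
		/* Inbound window: <pci-addr (3 cells)> <cpu-addr (1 cell)>
		 * <size (2 cells)>, restricting DMA to 4MiB at 0x50000000 */
		dma-ranges = <0x02000000 0x0 0x50000000  0x50000000  0x0 0x400000>;

		/* Each PCIe device sits behind its own port/bridge node, so a
		 * per-device restriction could go here if the BDF fields in
		 * phys.hi were allowed to be non-zero for dma-ranges. */
		pcie_port0: pcie@0,0 {
			reg = <0x0 0x0 0x0 0x0 0x0>;
			device_type = "pci";
			#address-cells = <3>;
			#size-cells = <2>;
			ranges;
		};
	};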
Rob