[PATCH 10/21] csky: dma-mapping: skip invalidating before DMA from device
Guo Ren
guoren at kernel.org
Tue Mar 28 00:37:22 AEDT 2023
On Mon, Mar 27, 2023 at 8:15 PM Arnd Bergmann <arnd at kernel.org> wrote:
>
> From: Arnd Bergmann <arnd at arndb.de>
>
> csky is the only architecture that does a full flush for the
> dma_sync_*_for_device(..., DMA_FROM_DEVICE) operation. The requirement
> is only to make sure there are no dirty cache lines for the buffer,
> which can be done either through an invalidate operation (as on most
> architectures including arm32, mips and arc), or a writeback (as on
> arm64 and riscv). The cache also has to be invalidated eventually but
> csky already does that after the transfer.
>
> Use a 'clean' operation here for consistency with arm64 and riscv.
>
> Signed-off-by: Arnd Bergmann <arnd at arndb.de>
> ---
>  arch/csky/mm/dma-mapping.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/arch/csky/mm/dma-mapping.c b/arch/csky/mm/dma-mapping.c
> index 82447029feb4..c90f912e2822 100644
> --- a/arch/csky/mm/dma-mapping.c
> +++ b/arch/csky/mm/dma-mapping.c
> @@ -60,11 +60,9 @@ void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
>  {
>  	switch (dir) {
>  	case DMA_TO_DEVICE:
> -		cache_op(paddr, size, dma_wb_range);
> -		break;
>  	case DMA_FROM_DEVICE:
>  	case DMA_BIDIRECTIONAL:
> -		cache_op(paddr, size, dma_wbinv_range);
> +		cache_op(paddr, size, dma_wb_range);
Reviewed-by: Guo Ren <guoren at kernel.org>
>  		break;
>  	default:
>  		BUG();
> --
> 2.39.2
>
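For reference, below is a rough sketch of how the two csky sync hooks read
with this patch applied. It is only an illustration, not the exact tree:
arch_sync_dma_for_cpu is assumed unchanged from the current code, and
cache_op()/dma_wb_range/dma_inv_range are the existing csky cache helpers.

void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
			      enum dma_data_direction dir)
{
	switch (dir) {
	case DMA_TO_DEVICE:
	case DMA_FROM_DEVICE:
	case DMA_BIDIRECTIONAL:
		/*
		 * Only clean (write back) dirty lines here, so nothing
		 * stale can be evicted on top of the data the device is
		 * about to write. No invalidate needed at this point.
		 */
		cache_op(paddr, size, dma_wb_range);
		break;
	default:
		BUG();
	}
}

void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
			   enum dma_data_direction dir)
{
	switch (dir) {
	case DMA_TO_DEVICE:
		return;
	case DMA_FROM_DEVICE:
	case DMA_BIDIRECTIONAL:
		/*
		 * Invalidate after the transfer so the CPU sees the data
		 * written by the device, as the commit message notes csky
		 * already does.
		 */
		cache_op(paddr, size, dma_inv_range);
		break;
	default:
		BUG();
	}
}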
--
Best Regards
Guo Ren