From: Arnd Bergmann <arnd@arndb.de>
For a DMA_BIDIRECTIONAL transfer, the caches have to be cleaned
first to let the device see data written by the CPU, and invalidated
after the transfer to let the CPU see data written by the device.

riscv also invalidates the caches before the transfer, which does
not appear to serve any purpose.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
Acked-by: Guo Ren <guoren@kernel.org>
Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
---
v2->v3
* No change

v1->v2
* Included RB and ACKs
---
arch/riscv/mm/dma-noncoherent.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/riscv/mm/dma-noncoherent.c b/arch/riscv/mm/dma-noncoherent.c
index 94614cf61cdd..fc6377a64c8d 100644
--- a/arch/riscv/mm/dma-noncoherent.c
+++ b/arch/riscv/mm/dma-noncoherent.c
@@ -25,7 +25,7 @@ void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 		ALT_CMO_OP(clean, vaddr, size, riscv_cbom_block_size);
 		break;
 	case DMA_BIDIRECTIONAL:
-		ALT_CMO_OP(flush, vaddr, size, riscv_cbom_block_size);
+		ALT_CMO_OP(clean, vaddr, size, riscv_cbom_block_size);
 		break;
 	default:
 		break;
--
2.34.1
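For context, the "invalidated after the transfer" half described in the
commit message lives on the CPU-resync side rather than in this hunk. A
rough sketch of that counterpart, assuming the inval CMO variant used
elsewhere in this file; it is shown only to illustrate the
clean-before / invalidate-after pairing and is not part of this diff:

/*
 * Sketch of the CPU-side counterpart (not part of this patch): after the
 * device has written to memory, stale cache lines must be invalidated so
 * the CPU observes the device's data.  The device-side function patched
 * above only needs to clean (write back) dirty lines before the transfer.
 */
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
			   enum dma_data_direction dir)
{
	void *vaddr = phys_to_virt(paddr);

	switch (dir) {
	case DMA_TO_DEVICE:
		break;
	case DMA_FROM_DEVICE:
	case DMA_BIDIRECTIONAL:
		ALT_CMO_OP(inval, vaddr, size, riscv_cbom_block_size);
		break;
	default:
		break;
	}
}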
On Thu, Aug 17, 2023 at 12:23:35AM +0100, Prabhakar wrote:
> From: Arnd Bergmann <arnd@arndb.de>
>
> For a DMA_BIDIRECTIONAL transfer, the caches have to be cleaned
> first to let the device see data written by the CPU, and invalidated
> after the transfer to let the CPU see data written by the device.
>
> riscv also invalidates the caches before the transfer, which does
> not appear to serve any purpose.
>
> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
> Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
> Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
> Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
> Acked-by: Guo Ren <guoren@kernel.org>
> Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
> ---
> v2->v3
> * No change
>
> v1->v2
> * Included RB and ACKs
> ---
>  arch/riscv/mm/dma-noncoherent.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/riscv/mm/dma-noncoherent.c b/arch/riscv/mm/dma-noncoherent.c
> index 94614cf61cdd..fc6377a64c8d 100644
> --- a/arch/riscv/mm/dma-noncoherent.c
> +++ b/arch/riscv/mm/dma-noncoherent.c
> @@ -25,7 +25,7 @@ void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
>  		ALT_CMO_OP(clean, vaddr, size, riscv_cbom_block_size);
>  		break;
>  	case DMA_BIDIRECTIONAL:
> -		ALT_CMO_OP(flush, vaddr, size, riscv_cbom_block_size);
> +		ALT_CMO_OP(clean, vaddr, size, riscv_cbom_block_size);

The code could be simplified a lot since after this patch, the action
is always "clean".

Thanks

>  		break;
>  	default:
>  		break;
> --
> 2.34.1
>
On Sun, Aug 27, 2023, at 06:22, Jisheng Zhang wrote:
> On Thu, Aug 17, 2023 at 12:23:35AM +0100, Prabhakar wrote:
>
> The code could be simplified a lot since after this patch, the action
> is always "clean".
>
I think it's better to leave it at the elaborate version, as I still
want to merge my patch series to unify this with the other architectures,
and this would introduce a few compile-time conditionals in here.
Arnd
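For reference, a minimal sketch of the simplification Jisheng alludes to,
collapsing the switch once every handled direction issues a clean. This is
hypothetical, not part of the patch, and not what Arnd intends to merge:

/*
 * Hypothetical simplification (not merged): with clean used for
 * DMA_TO_DEVICE, DMA_FROM_DEVICE and DMA_BIDIRECTIONAL alike, the
 * switch reduces to a single clean, skipping only DMA_NONE.
 */
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
			      enum dma_data_direction dir)
{
	void *vaddr = phys_to_virt(paddr);

	if (dir == DMA_NONE)
		return;

	ALT_CMO_OP(clean, vaddr, size, riscv_cbom_block_size);
}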