Document the __dma_from_device_aligned_begin/__dma_from_device_aligned_end
annotations introduced by the previous patch.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
Documentation/core-api/dma-api-howto.rst | 73 ++++++++++++++++++++++++
1 file changed, 73 insertions(+)
diff --git a/Documentation/core-api/dma-api-howto.rst b/Documentation/core-api/dma-api-howto.rst
index 96fce2a9aa90..99eda4c5c8e7 100644
--- a/Documentation/core-api/dma-api-howto.rst
+++ b/Documentation/core-api/dma-api-howto.rst
@@ -146,6 +146,79 @@ What about block I/O and networking buffers? The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.
+__dma_from_device_aligned_begin/end annotations
+===============================================
+
+As explained previously, when a structure contains a DMA_FROM_DEVICE buffer
+(device writes to memory) alongside fields that the CPU writes to, cache line
+sharing between the DMA buffer and CPU-written fields can cause data corruption
+on CPUs with DMA-incoherent caches.
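+
+For example, with a layout like the following (purely illustrative), ``lock1``
+and the start of ``dma_buffer`` can end up in the same cache line, so a cache
+invalidate issued for the buffer after the device writes it can discard a
+concurrent CPU update to ``lock1``::
+
+    struct my_device {
+            spinlock_t lock1;       /* written by the CPU */
+            char dma_buffer[16];    /* written by the device */
+    };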
+
+The ``__dma_from_device_aligned_begin/__dma_from_device_aligned_end``
+annotations ensure proper alignment to prevent this::
+
+    struct my_device {
+            spinlock_t lock1;
+            __dma_from_device_aligned_begin char dma_buffer1[16];
+            char dma_buffer2[16];
+            __dma_from_device_aligned_end spinlock_t lock2;
+    };
+
+On cache-coherent platforms these macros expand to nothing. On non-coherent
+platforms they enforce the minimum DMA alignment, which can be as large as
+128 bytes.
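+
+As a purely illustrative sketch (the actual definitions introduced by the
+previous patch may differ), on a non-coherent platform the annotations can
+be thought of as roughly equivalent to aligning the annotated fields to
+``ARCH_DMA_MINALIGN``, the architecture's minimum DMA buffer alignment::
+
+    /* Illustrative only; not necessarily the actual kernel definitions. */
+    #define __dma_from_device_aligned_begin  __aligned(ARCH_DMA_MINALIGN)
+    #define __dma_from_device_aligned_end    __aligned(ARCH_DMA_MINALIGN)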
+
+.. note::
+
+ To isolate a DMA buffer from adjacent fields, apply
+ ``__dma_from_device_aligned_begin`` to the **first** field of the DMA
+ buffer **and** ``__dma_from_device_aligned_end`` to the first field
+ **after** the buffer, i.e. the next field in the structure (not to the
+ buffer's last field!). This protects both the head and the tail of the
+ buffer from cache line sharing.
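+
+ For instance, in the structure above, moving ``__dma_from_device_aligned_end``
+ onto ``dma_buffer2`` itself is not the intended usage and may leave ``lock2``
+ sharing a cache line with the tail of the buffer (illustrative, do not do
+ this)::
+
+     struct my_device {
+             spinlock_t lock1;
+             __dma_from_device_aligned_begin char dma_buffer1[16];
+             __dma_from_device_aligned_end char dma_buffer2[16]; /* wrong */
+             spinlock_t lock2;
+     };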
+
+ When the DMA buffer is the **last field** in the structure, just
+ ``__dma_from_device_aligned_begin`` is enough - the compiler's struct
+ padding protects the tail::
+
+     struct my_device {
+             spinlock_t lock;
+             struct mutex mlock;
+             __dma_from_device_aligned_begin char dma_buffer1[16];
+             char dma_buffer2[16];
+     };
+
DMA addressing capabilities
===========================
--
MST