Devices that are DMA non-coherent and require a remap were skipping
dma_set_decrypted(), leaving DMA buffers encrypted even when the device
requires unencrypted access. Move the call after the if (remap) branch
so that both the direct and remapped allocation paths correctly mark the
allocation as decrypted (or fail cleanly) before use.
Architectures such as arm64 cannot mark vmap addresses as decrypted, and
highmem pages necessarily require a vmap remap. As a result, such
allocations cannot be safely used for unencrypted DMA. Therefore, when
an unencrypted DMA buffer is requested, avoid allocating high PFNs from
__dma_direct_alloc_pages().
Other architectures (e.g. x86) do not have this limitation. However,
rather than making this architecture-specific, apply the restriction
only when the device requires unencrypted DMA access, for simplicity.
Fixes: f3c962226dbe ("dma-direct: clean up the remapping checks in dma_direct_alloc")
Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
---
kernel/dma/direct.c | 31 ++++++++++++++++++++++++++++---
1 file changed, 28 insertions(+), 3 deletions(-)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index ffa267020a1e..faf1e41afde8 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -204,6 +204,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
 	bool remap = false, set_uncached = false;
+	bool allow_highmem = true;
 	struct page *page;
 	void *ret;
 
@@ -250,8 +251,18 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	    dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
+
+	if (force_dma_unencrypted(dev))
+		/*
+		 * Unencrypted/shared DMA requires a linear-mapped buffer
+		 * address to look up the PFN and set architecture-required PFN
+		 * attributes. This is not possible with HighMem. Avoid HighMem
+		 * allocation.
+		 */
+		allow_highmem = false;
+
 	/* we always manually zero the memory once we are done */
-	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
+	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, allow_highmem);
 	if (!page)
 		return NULL;
 
@@ -282,7 +293,13 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 			goto out_free_pages;
 	} else {
 		ret = page_address(page);
-		if (dma_set_decrypted(dev, ret, size))
+	}
+
+	if (force_dma_unencrypted(dev)) {
+		void *lm_addr;
+
+		lm_addr = page_address(page);
+		if (set_memory_decrypted((unsigned long)lm_addr, PFN_UP(size)))
 			goto out_leak_pages;
 	}
 
@@ -344,8 +361,16 @@ void dma_direct_free(struct device *dev, size_t size,
 	} else {
 		if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 			arch_dma_clear_uncached(cpu_addr, size);
-		if (dma_set_encrypted(dev, cpu_addr, size))
+	}
+
+	if (force_dma_unencrypted(dev)) {
+		void *lm_addr;
+
+		lm_addr = phys_to_virt(dma_to_phys(dev, dma_addr));
+		if (set_memory_encrypted((unsigned long)lm_addr, PFN_UP(size))) {
+			pr_warn_ratelimited("leaking DMA memory that can't be re-encrypted\n");
 			return;
+		}
 	}
 
 	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
--
2.43.0
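
To summarize the effect of the patch, the decrypt handling in
dma_direct_alloc() ends up structured roughly as below. This is a
condensed paraphrase of the hunks above (surrounding code is elided and
the remap call's arguments are abbreviated), not the literal file
contents:

	if (remap) {
		/* highmem or non-coherent device: build a vmap alias */
		ret = dma_common_contiguous_remap(page, size, /* ... */);
		if (!ret)
			goto out_free_pages;
	} else {
		ret = page_address(page);
	}

	/*
	 * The decrypt step now runs after both paths. It operates on
	 * the linear-map address, which is guaranteed to exist because
	 * highmem pages are no longer allocated when
	 * force_dma_unencrypted() is true.
	 */
	if (force_dma_unencrypted(dev)) {
		void *lm_addr = page_address(page);

		if (set_memory_decrypted((unsigned long)lm_addr, PFN_UP(size)))
			goto out_leak_pages;
	}

Before the patch, the dma_set_decrypted() call sat only inside the else
branch, so a device that needed both a remap and unencrypted access got
back a buffer that was still encrypted.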
"Aneesh Kumar K.V (Arm)" <aneesh.kumar@kernel.org> writes:
> Devices that are DMA non-coherent and require a remap were skipping
> dma_set_decrypted(), leaving DMA buffers encrypted even when the device
> requires unencrypted access. Move the call after the if (remap) branch
> so that both the direct and remapped allocation paths correctly mark the
> allocation as decrypted (or fail cleanly) before use.
>
> Architectures such as arm64 cannot mark vmap addresses as decrypted, and
> highmem pages necessarily require a vmap remap. As a result, such
> allocations cannot be safely used for unencrypted DMA. Therefore, when
> an unencrypted DMA buffer is requested, avoid allocating high PFNs from
> __dma_direct_alloc_pages().
>
> Other architectures (e.g. x86) do not have this limitation. However,
> rather than making this architecture-specific, apply the restriction
> only when the device requires unencrypted DMA access, for simplicity.
>
Considering that we don’t expect to use HighMem on systems that support
memory encryption or Confidential Compute, should we go ahead and merge
this change so that the behavior is technically correct? We can address
the separate question of whether DMA allocations should ever return
HighMem independently.
-aneesh
On Fri, Jan 02, 2026 at 09:20:37PM +0530, Aneesh Kumar K.V (Arm) wrote:
> Devices that are DMA non-coherent and require a remap were skipping
> dma_set_decrypted(), leaving DMA buffers encrypted even when the device
> requires unencrypted access. Move the call after the if (remap) branch
> so that both the direct and remapped allocation paths correctly mark the
> allocation as decrypted (or fail cleanly) before use.

This is probably fine, but IMHO, we should be excluding the combination
of highmem and CC at the kconfig level :\

Jason
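As a rough illustration of what a Kconfig-level exclusion could look
like (a sketch only, not a submitted patch; it assumes the existing
ARCH_HAS_FORCE_DMA_UNENCRYPTED symbol in kernel/dma/Kconfig is the
right anchor for the dependency):

	# Sketch: refuse the highmem + unencrypted-DMA combination
	# outright, instead of restricting allocations at runtime as
	# the patch above does.
	config ARCH_HAS_FORCE_DMA_UNENCRYPTED
		bool
		depends on !HIGHMEM

Whether this symbol, or the per-architecture Confidential Compute guest
options, is the right place to hang such a dependency is exactly the
open question in this thread.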
On Mon, Jan 5, 2026 at 6:53 PM Jason Gunthorpe <jgg@ziepe.ca> wrote:
>
> On Fri, Jan 02, 2026 at 09:20:37PM +0530, Aneesh Kumar K.V (Arm) wrote:
> > Devices that are DMA non-coherent and require a remap were skipping
> > dma_set_decrypted(), leaving DMA buffers encrypted even when the device
> > requires unencrypted access. Move the call after the if (remap) branch
> > so that both the direct and remapped allocation paths correctly mark the
> > allocation as decrypted (or fail cleanly) before use.
>
> This is probably fine, but IMHO, we should be excluding the
> combination of highmem and CC at the kconfig level :\

The only way you can get CMA in highmem is by passing in a highmem
location to the allocator from the command line. I have a strong urge
to just patch CMA to not allow that and see what happens. Or at least
have it print a big fat warning that this will go away soon.

I think this is only used on legacy ARM32 products that are no longer
maintained, but I might be wrong.

Yours,
Linus Walleij
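For concreteness, the command-line mechanism referred to here is the
cma= parameter, whose documented form is cma=nn[MG]@[start[MG][-end[MG]]].
An illustrative value (made up for this example) that would pin the CMA
area above the lowmem boundary on a 32-bit LPAE system:

	cma=128M@4G

Omitting the @start placement leaves the allocator to pick the location
itself, which, per the point above, keeps CMA out of highmem.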