In atomic_pool_expand(), set_memory_encrypted()/set_memory_decrypted()
are currently called with page_to_virt(page). On ARM64 with
CONFIG_DMA_DIRECT_REMAP=y, the atomic pool is mapped via vmap(), so
page_to_virt(page) does not reference the actual mapped region.

Using this incorrect address can cause encryption attribute updates to
be applied to the wrong memory region. On ARM64 systems with memory
encryption enabled (e.g. CCA), this can lead to data corruption or
crashes.

Fix this by using the vmap() address ('addr') on ARM64 when invoking
the memory encryption helpers, while retaining the existing
page_to_virt(page) usage for other architectures.
Fixes: 76a19940bd62 ("dma-direct: atomic allocations must come from atomic coherent pools")
Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com>
---
 kernel/dma/pool.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 7b04f7575796b..ba08a301590fd 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -81,6 +81,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
 {
 	unsigned int order;
 	struct page *page = NULL;
+	void *vaddr;
 	void *addr;
 	int ret = -ENOMEM;
 
@@ -113,8 +114,8 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
 	 * Memory in the atomic DMA pools must be unencrypted, the pools do not
 	 * shrink so no re-encryption occurs in dma_direct_free().
 	 */
-	ret = set_memory_decrypted((unsigned long)page_to_virt(page),
-				   1 << order);
+	vaddr = IS_ENABLED(CONFIG_ARM64) ? addr : page_to_virt(page);
+	ret = set_memory_decrypted((unsigned long)vaddr, 1 << order);
 	if (ret)
 		goto remove_mapping;
 	ret = gen_pool_add_virt(pool, (unsigned long)addr, page_to_phys(page),
@@ -126,8 +127,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
 	return 0;
 
 encrypt_mapping:
-	ret = set_memory_encrypted((unsigned long)page_to_virt(page),
-				   1 << order);
+	ret = set_memory_encrypted((unsigned long)vaddr, 1 << order);
 	if (WARN_ON_ONCE(ret)) {
 		/* Decrypt succeeded but encrypt failed, purposely leak */
 		goto out;
--
2.25.1
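
For context, the mismatch the commit message describes comes from how
atomic_pool_expand() computes 'addr' earlier in the function. The excerpt
below is the existing code from kernel/dma/pool.c (it is also quoted in
Mike's reply further down; surrounding error handling trimmed):

	#ifdef CONFIG_DMA_DIRECT_REMAP
		/* 'addr' is a vmap alias in the vmalloc range */
		addr = dma_common_contiguous_remap(page, pool_size,
						   pgprot_dmacoherent(PAGE_KERNEL),
						   __builtin_return_address(0));
		if (!addr)
			goto free_page;
	#else
		/* 'addr' is the linear-map address of the pages */
		addr = page_to_virt(page);
	#endif

With CONFIG_DMA_DIRECT_REMAP=y, 'addr' and page_to_virt(page) are two
different mappings of the same physical pages, which is why passing the
linear-map address to set_memory_decrypted() leaves the vmap alias with
the wrong attributes.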
On Sun, Aug 10, 2025 at 07:50:34PM -0500, Shanker Donthineni wrote:
> In atomic_pool_expand(), set_memory_encrypted()/set_memory_decrypted()
> are currently called with page_to_virt(page). On ARM64 with
> CONFIG_DMA_DIRECT_REMAP=y, the atomic pool is mapped via vmap(), so
> page_to_virt(page) does not reference the actual mapped region.
>
> Using this incorrect address can cause encryption attribute updates to
> be applied to the wrong memory region. On ARM64 systems with memory
> encryption enabled (e.g. CCA), this can lead to data corruption or
> crashes.
>
> Fix this by using the vmap() address ('addr') on ARM64 when invoking
> the memory encryption helpers, while retaining the existing
> page_to_virt(page) usage for other architectures.
>
> Fixes: 76a19940bd62 ("dma-direct: atomic allocations must come from atomic coherent pools")
> Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com>
> ---
>  kernel/dma/pool.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
> index 7b04f7575796b..ba08a301590fd 100644
> --- a/kernel/dma/pool.c
> +++ b/kernel/dma/pool.c
> @@ -81,6 +81,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>  {
>  	unsigned int order;
>  	struct page *page = NULL;
> +	void *vaddr;
>  	void *addr;
>  	int ret = -ENOMEM;
>
> @@ -113,8 +114,8 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>  	 * Memory in the atomic DMA pools must be unencrypted, the pools do not
>  	 * shrink so no re-encryption occurs in dma_direct_free().
>  	 */
> -	ret = set_memory_decrypted((unsigned long)page_to_virt(page),
> -				   1 << order);
> +	vaddr = IS_ENABLED(CONFIG_ARM64) ? addr : page_to_virt(page);
> +	ret = set_memory_decrypted((unsigned long)vaddr, 1 << order);

At least with arm CCA, there are two aspects to setting the memory
encrypted/decrypted: an RMM (realm management monitor) call and setting
of the attributes of the stage 1 mapping. The RMM call doesn't care
about the virtual address, only the (intermediate) physical address, so
having page_to_virt(page) here is fine.

The second part is setting the (fake) attribute for this mapping (top
bit of the IPA space). Can we not instead just call:

	addr = dma_common_contiguous_remap(page, pool_size,
			pgprot_decrypted(pgprot_dmacoherent(PAGE_KERNEL)),
			__builtin_return_address(0));

in the atomic pool code? The advantage is that we keep the
set_memory_decrypted() call on the linear map so that we change its
attributes as well.

I want to avoid walking the page tables for vmap regions if possible in
the arm64 set_memory_* implementation. At some point I was proposing a
GFP_DECRYPTED flag for allocations but never got around to post a patch
(and implement vmalloc() support):

https://lore.kernel.org/linux-arm-kernel/ZmNJdSxSz-sYpVgI@arm.com/

--
Catalin
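
Concretely, Catalin's suggestion would amount to something like the
sketch below (untested; the #ifdef structure and the free_page label are
taken from the existing atomic_pool_expand() code quoted above):

	#ifdef CONFIG_DMA_DIRECT_REMAP
		/* Create the vmap alias with decrypted attributes up front,
		 * so no later vmap page-table walk is needed. */
		addr = dma_common_contiguous_remap(page, pool_size,
				pgprot_decrypted(pgprot_dmacoherent(PAGE_KERNEL)),
				__builtin_return_address(0));
		if (!addr)
			goto free_page;
	#else
		addr = page_to_virt(page);
	#endif
	...
		/* Unchanged: still operates on the linear-map alias, so the
		 * RMM call and the linear-map attribute update both happen. */
		ret = set_memory_decrypted((unsigned long)page_to_virt(page),
					   1 << order);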
Hi Catalin,

On 8/11/25 12:26, Catalin Marinas wrote:
> On Sun, Aug 10, 2025 at 07:50:34PM -0500, Shanker Donthineni wrote:
>> In atomic_pool_expand(), set_memory_encrypted()/set_memory_decrypted()
>> are currently called with page_to_virt(page). On ARM64 with
>> CONFIG_DMA_DIRECT_REMAP=y, the atomic pool is mapped via vmap(), so
>> page_to_virt(page) does not reference the actual mapped region.
>>
>> Using this incorrect address can cause encryption attribute updates to
>> be applied to the wrong memory region. On ARM64 systems with memory
>> encryption enabled (e.g. CCA), this can lead to data corruption or
>> crashes.
>>
>> Fix this by using the vmap() address ('addr') on ARM64 when invoking
>> the memory encryption helpers, while retaining the existing
>> page_to_virt(page) usage for other architectures.
>>
>> Fixes: 76a19940bd62 ("dma-direct: atomic allocations must come from atomic coherent pools")
>> Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com>
>> ---
>>  kernel/dma/pool.c | 8 ++++----
>>  1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
>> index 7b04f7575796b..ba08a301590fd 100644
>> --- a/kernel/dma/pool.c
>> +++ b/kernel/dma/pool.c
>> @@ -81,6 +81,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>>  {
>>  	unsigned int order;
>>  	struct page *page = NULL;
>> +	void *vaddr;
>>  	void *addr;
>>  	int ret = -ENOMEM;
>>
>> @@ -113,8 +114,8 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>>  	 * Memory in the atomic DMA pools must be unencrypted, the pools do not
>>  	 * shrink so no re-encryption occurs in dma_direct_free().
>>  	 */
>> -	ret = set_memory_decrypted((unsigned long)page_to_virt(page),
>> -				   1 << order);
>> +	vaddr = IS_ENABLED(CONFIG_ARM64) ? addr : page_to_virt(page);
>> +	ret = set_memory_decrypted((unsigned long)vaddr, 1 << order);
>
> At least with arm CCA, there are two aspects to setting the memory
> encrypted/decrypted: an RMM (realm management monitor) call and setting
> of the attributes of the stage 1 mapping. The RMM call doesn't care
> about the virtual address, only the (intermediate) physical address, so
> having page_to_virt(page) here is fine.
>
> The second part is setting the (fake) attribute for this mapping (top
> bit of the IPA space). Can we not instead just call:
>
> 	addr = dma_common_contiguous_remap(page, pool_size,
> 			pgprot_decrypted(pgprot_dmacoherent(PAGE_KERNEL)),
> 			__builtin_return_address(0));
>

Thanks for the simple fix, it resolves the crash issue. I've posted the
v2 patch and dropped patch 2/2, which was added to support non-linear
memory regions in pageattr.c.

> in the atomic pool code? The advantage is that we keep the
> set_memory_decrypted() call on the linear map so that we change its
> attributes as well.
>
> I want to avoid walking the page tables for vmap regions if possible in
> the arm64 set_memory_* implementation. At some point I was proposing a
> GFP_DECRYPTED flag for allocations but never got around to post a patch
> (and implement vmalloc() support):
>
> https://lore.kernel.org/linux-arm-kernel/ZmNJdSxSz-sYpVgI@arm.com/
>
> --
> Catalin
On Sun, Aug 10, 2025 at 07:50:34PM -0500, Shanker Donthineni wrote:
> In atomic_pool_expand(), set_memory_encrypted()/set_memory_decrypted()
> are currently called with page_to_virt(page). On ARM64 with
> CONFIG_DMA_DIRECT_REMAP=y, the atomic pool is mapped via vmap(), so
> page_to_virt(page) does not reference the actual mapped region.
>
> Using this incorrect address can cause encryption attribute updates to
> be applied to the wrong memory region. On ARM64 systems with memory
> encryption enabled (e.g. CCA), this can lead to data corruption or
> crashes.
>
> Fix this by using the vmap() address ('addr') on ARM64 when invoking
> the memory encryption helpers, while retaining the existing
> page_to_virt(page) usage for other architectures.
>
> Fixes: 76a19940bd62 ("dma-direct: atomic allocations must come from atomic coherent pools")
> Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com>
> ---
>  kernel/dma/pool.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
> index 7b04f7575796b..ba08a301590fd 100644
> --- a/kernel/dma/pool.c
> +++ b/kernel/dma/pool.c
> @@ -81,6 +81,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>  {
>  	unsigned int order;
>  	struct page *page = NULL;
> +	void *vaddr;
>  	void *addr;
>  	int ret = -ENOMEM;
>
> @@ -113,8 +114,8 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>  	 * Memory in the atomic DMA pools must be unencrypted, the pools do not
>  	 * shrink so no re-encryption occurs in dma_direct_free().
>  	 */
> -	ret = set_memory_decrypted((unsigned long)page_to_virt(page),
> -				   1 << order);
> +	vaddr = IS_ENABLED(CONFIG_ARM64) ? addr : page_to_virt(page);

There's address calculation just before this code:

#ifdef CONFIG_DMA_DIRECT_REMAP
	addr = dma_common_contiguous_remap(page, pool_size,
					   pgprot_dmacoherent(PAGE_KERNEL),
					   __builtin_return_address(0));
	if (!addr)
		goto free_page;
#else
	addr = page_to_virt(page);
#endif

It should be enough to s/page_to_virt(page)/addr/ in the call to
set_memory_decrypted().

> +	ret = set_memory_decrypted((unsigned long)vaddr, 1 << order);
>  	if (ret)
>  		goto remove_mapping;
>  	ret = gen_pool_add_virt(pool, (unsigned long)addr, page_to_phys(page),
> @@ -126,8 +127,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>  	return 0;
>
>  encrypt_mapping:
> -	ret = set_memory_encrypted((unsigned long)page_to_virt(page),
> -				   1 << order);
> +	ret = set_memory_encrypted((unsigned long)vaddr, 1 << order);
>  	if (WARN_ON_ONCE(ret)) {
>  		/* Decrypt succeeded but encrypt failed, purposely leak */
>  		goto out;
> --
> 2.25.1
>

--
Sincerely yours,
Mike.
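
In patch form, Mike's substitution would look roughly like this (a
sketch against the v1 context above, not a posted patch):

	-	ret = set_memory_decrypted((unsigned long)page_to_virt(page),
	-				   1 << order);
	+	ret = set_memory_decrypted((unsigned long)addr, 1 << order);
	...
	-	ret = set_memory_encrypted((unsigned long)page_to_virt(page),
	-				   1 << order);
	+	ret = set_memory_encrypted((unsigned long)addr, 1 << order);

No new 'vaddr' variable or IS_ENABLED(CONFIG_ARM64) test is needed,
because in the !CONFIG_DMA_DIRECT_REMAP case 'addr' is already
page_to_virt(page).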
On 2025-08-11 9:48 am, Mike Rapoport wrote:
> On Sun, Aug 10, 2025 at 07:50:34PM -0500, Shanker Donthineni wrote:
>> In atomic_pool_expand(), set_memory_encrypted()/set_memory_decrypted()
>> are currently called with page_to_virt(page). On ARM64 with
>> CONFIG_DMA_DIRECT_REMAP=y, the atomic pool is mapped via vmap(), so
>> page_to_virt(page) does not reference the actual mapped region.
>>
>> Using this incorrect address can cause encryption attribute updates to
>> be applied to the wrong memory region. On ARM64 systems with memory
>> encryption enabled (e.g. CCA), this can lead to data corruption or
>> crashes.
>>
>> Fix this by using the vmap() address ('addr') on ARM64 when invoking
>> the memory encryption helpers, while retaining the existing
>> page_to_virt(page) usage for other architectures.
>>
>> Fixes: 76a19940bd62 ("dma-direct: atomic allocations must come from atomic coherent pools")
>> Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com>
>> ---
>>  kernel/dma/pool.c | 8 ++++----
>>  1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
>> index 7b04f7575796b..ba08a301590fd 100644
>> --- a/kernel/dma/pool.c
>> +++ b/kernel/dma/pool.c
>> @@ -81,6 +81,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>>  {
>>  	unsigned int order;
>>  	struct page *page = NULL;
>> +	void *vaddr;
>>  	void *addr;
>>  	int ret = -ENOMEM;
>>
>> @@ -113,8 +114,8 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>>  	 * Memory in the atomic DMA pools must be unencrypted, the pools do not
>>  	 * shrink so no re-encryption occurs in dma_direct_free().
>>  	 */
>> -	ret = set_memory_decrypted((unsigned long)page_to_virt(page),
>> -				   1 << order);
>> +	vaddr = IS_ENABLED(CONFIG_ARM64) ? addr : page_to_virt(page);
>
> There's address calculation just before this code:
>
> #ifdef CONFIG_DMA_DIRECT_REMAP
> 	addr = dma_common_contiguous_remap(page, pool_size,
> 					   pgprot_dmacoherent(PAGE_KERNEL),
> 					   __builtin_return_address(0));
> 	if (!addr)
> 		goto free_page;
> #else
> 	addr = page_to_virt(page);
> #endif
>
> It should be enough to s/page_to_virt(page)/addr/ in the call to
> set_memory_decrypted().

Indeed, and either way this is clearly a DMA_DIRECT_REMAP concern
rather than just an ARM64 one.

Thanks,
Robin.

>> +	ret = set_memory_decrypted((unsigned long)vaddr, 1 << order);
>>  	if (ret)
>>  		goto remove_mapping;
>>  	ret = gen_pool_add_virt(pool, (unsigned long)addr, page_to_phys(page),
>> @@ -126,8 +127,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>>  	return 0;
>>
>>  encrypt_mapping:
>> -	ret = set_memory_encrypted((unsigned long)page_to_virt(page),
>> -				   1 << order);
>> +	ret = set_memory_encrypted((unsigned long)vaddr, 1 << order);
>>  	if (WARN_ON_ONCE(ret)) {
>>  		/* Decrypt succeeded but encrypt failed, purposely leak */
>>  		goto out;
>> --
>> 2.25.1
>>
>
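
Robin's observation, expressed in code: if the v1 approach were kept at
all, the conditional would key off the remap configuration rather than
the architecture (illustrative only, not a posted patch):

	/* Hypothetical variant of the v1 line, gated on the config that
	 * actually creates the vmap alias rather than on ARM64: */
	vaddr = IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) ? addr : page_to_virt(page);

though with the simple s/page_to_virt(page)/addr/ above no conditional
is needed at all, since the two expressions coincide whenever
CONFIG_DMA_DIRECT_REMAP is disabled.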