On TDX it is possible for the untrusted host to cause
set_memory_encrypted() or set_memory_decrypted() to fail such that an
error is returned and the resulting memory is shared. Callers need to take
care to handle these errors to avoid returning decrypted (shared) memory to
the page allocator, which could lead to functional or security issues.
Swiotlb could free decrypted/shared pages if set_memory_decrypted() fails.
Use the recently added free_decrypted_pages() to avoid this.
In swiotlb_exit(), check for set_memory_encrypted() errors manually,
because the pages are not necessarily going to the page allocator.
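
For reference, here is a minimal sketch of what free_decrypted_pages() is
assumed to do, based on the description above (the helper is added earlier
in this series and its exact implementation may differ): convert the pages
back to private, and leak them rather than hand shared memory back to the
page allocator if that conversion fails.

	/*
	 * Sketch only, not the helper from the series: re-encrypt the
	 * pages and leak them on failure so that shared pages never
	 * reach the page allocator.
	 */
	void free_decrypted_pages(unsigned long addr, int order)
	{
		if (set_memory_encrypted(addr, 1 << order)) {
			pr_warn("failed to re-encrypt pages, leaking them\n");
			return;
		}
		free_pages(addr, order);
	}

With that behavior, the alloc_dma_pages() error path below can simply call
free_decrypted_pages() whether or not set_memory_decrypted() left the pages
shared.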
Cc: Christoph Hellwig <hch@lst.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: iommu@lists.linux.dev
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
kernel/dma/swiotlb.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 394494a6b1f3..ad06786c4f98 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -524,6 +524,7 @@ void __init swiotlb_exit(void)
unsigned long tbl_vaddr;
size_t tbl_size, slots_size;
unsigned int area_order;
+ int ret;
if (swiotlb_force_bounce)
return;
@@ -536,17 +537,19 @@ void __init swiotlb_exit(void)
tbl_size = PAGE_ALIGN(mem->end - mem->start);
slots_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), mem->nslabs));
- set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
+ ret = set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
if (mem->late_alloc) {
area_order = get_order(array_size(sizeof(*mem->areas),
mem->nareas));
free_pages((unsigned long)mem->areas, area_order);
- free_pages(tbl_vaddr, get_order(tbl_size));
+ if (!ret)
+ free_pages(tbl_vaddr, get_order(tbl_size));
free_pages((unsigned long)mem->slots, get_order(slots_size));
} else {
memblock_free_late(__pa(mem->areas),
array_size(sizeof(*mem->areas), mem->nareas));
- memblock_free_late(mem->start, tbl_size);
+ if (!ret)
+ memblock_free_late(mem->start, tbl_size);
memblock_free_late(__pa(mem->slots), slots_size);
}
@@ -581,7 +584,7 @@ static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes)
return page;
error:
- __free_pages(page, order);
+ free_decrypted_pages((unsigned long)vaddr, order);
return NULL;
}
--
2.34.1
On Tue, 17 Oct 2023 13:24:59 -0700
Rick Edgecombe <rick.p.edgecombe@intel.com> wrote:
> On TDX it is possible for the untrusted host to cause
> set_memory_encrypted() or set_memory_decrypted() to fail such that an
> error is returned and the resulting memory is shared. Callers need to take
> care to handle these errors to avoid returning decrypted (shared) memory to
> the page allocator, which could lead to functional or security issues.
>
> Swiotlb could free decrypted/shared pages if set_memory_decrypted() fails.
> Use the recently added free_decrypted_pages() to avoid this.
>
> In swiotlb_exit(), check for set_memory_encrypted() errors manually,
> because the pages are not necessarily going to the page allocator.
>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: iommu@lists.linux.dev
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> ---
> kernel/dma/swiotlb.c | 11 +++++++----
> 1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 394494a6b1f3..ad06786c4f98 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -524,6 +524,7 @@ void __init swiotlb_exit(void)
> unsigned long tbl_vaddr;
> size_t tbl_size, slots_size;
> unsigned int area_order;
> + int ret;
>
> if (swiotlb_force_bounce)
> return;
> @@ -536,17 +537,19 @@ void __init swiotlb_exit(void)
> tbl_size = PAGE_ALIGN(mem->end - mem->start);
> slots_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), mem->nslabs));
>
> - set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
> + ret = set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
> if (mem->late_alloc) {
> area_order = get_order(array_size(sizeof(*mem->areas),
> mem->nareas));
> free_pages((unsigned long)mem->areas, area_order);
> - free_pages(tbl_vaddr, get_order(tbl_size));
> + if (!ret)
> + free_pages(tbl_vaddr, get_order(tbl_size));
> free_pages((unsigned long)mem->slots, get_order(slots_size));
> } else {
> memblock_free_late(__pa(mem->areas),
> array_size(sizeof(*mem->areas), mem->nareas));
> - memblock_free_late(mem->start, tbl_size);
> + if (!ret)
> + memblock_free_late(mem->start, tbl_size);
> memblock_free_late(__pa(mem->slots), slots_size);
> }
>
> @@ -581,7 +584,7 @@ static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes)
> return page;
>
> error:
> - __free_pages(page, order);
> + free_decrypted_pages((unsigned long)vaddr, order);
> return NULL;
> }
I admit I'm not familiar with the encryption/decryption API, but if a
__free_pages() is not sufficient here, then it is quite confusing.
The error label is reached only if set_memory_decrypted() returns
non-zero. My naive expectation is that the memory is *not* decrypted in
that case and does not require special treatment. Is this assumption
wrong?
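
The premise of the commit message is that the conversion is not atomic: it
works through the range in several steps (updating the guest page tables
and notifying the host), so a failure partway through can leave some or all
of the pages already shared even though an error is returned. A rough
illustration of that assumed failure mode, using a hypothetical helper
rather than the real x86/TDX code:

	/*
	 * Illustration only; convert_one_page_to_shared() is
	 * hypothetical.  The point is that a failure partway through
	 * leaves the pages that were already processed in the shared
	 * state, so an error return does not imply the memory is
	 * still private.
	 */
	int set_memory_decrypted_sketch(unsigned long addr, int numpages)
	{
		int i;

		for (i = 0; i < numpages; i++) {
			int ret = convert_one_page_to_shared(addr + i * PAGE_SIZE);

			if (ret)
				return ret;	/* pages [0, i) are already shared */
		}
		return 0;
	}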
OTOH I believe there is a bug in the logic. The subsequent
__free_pages() in swiotlb_alloc_tlb() would have to be changed to a
free_decrypted_pages(). However, I'm proposing a different approach to
address the latter issue here:
https://lore.kernel.org/linux-iommu/20231026095123.222-1-petrtesarik@huaweicloud.com/T/
Petr T
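
The call site Petr refers to is the retry loop in swiotlb_alloc_tlb(): if
alloc_dma_pages() returns pages above phys_limit, they are freed with
__free_pages() even though they have already been decrypted. The shape of
that loop is roughly the following (paraphrased, not the verbatim kernel
code):

	while ((page = alloc_dma_pages(gfp, bytes)) &&
	       page_to_phys(page) + bytes - 1 > phys_limit) {
		/* allocated, but too high */
		__free_pages(page, get_order(bytes));	/* pages are already decrypted here */

		/* ... retry with a more restrictive GFP zone ... */
	}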
On Tue, Oct 17, 2023 at 01:24:59PM -0700, Rick Edgecombe wrote:
> On TDX it is possible for the untrusted host to cause
> set_memory_encrypted() or set_memory_decrypted() to fail such that an
> error is returned and the resulting memory is shared. Callers need to take
> care to handle these errors to avoid returning decrypted (shared) memory to
> the page allocator, which could lead to functional or security issues.
>
> Swiotlb could free decrypted/shared pages if set_memory_decrypted() fails.
> Use the recently added free_decrypted_pages() to avoid this.
>
> In swiotlb_exit(), check for set_memory_encrypted() errors manually,
> because the pages are not necessarily going to the page allocator.

Whatever recently introduced it didn't make it to my mailbox. Please
always CC everyone on every patch in a series, everything else is
impossible to review.
On Wed, 2023-10-18 at 06:43 +0200, Christoph Hellwig wrote:
> On Tue, Oct 17, 2023 at 01:24:59PM -0700, Rick Edgecombe wrote:
> > On TDX it is possible for the untrusted host to cause
> > set_memory_encrypted() or set_memory_decrypted() to fail such that an
> > error is returned and the resulting memory is shared. Callers need to
> > take care to handle these errors to avoid returning decrypted (shared)
> > memory to the page allocator, which could lead to functional or
> > security issues.
> >
> > Swiotlb could free decrypted/shared pages if set_memory_decrypted()
> > fails. Use the recently added free_decrypted_pages() to avoid this.
> >
> > In swiotlb_exit(), check for set_memory_encrypted() errors manually,
> > because the pages are not necessarily going to the page allocator.
>
> Whatever recently introduced it didn't make it to my mailbox. Please
> always CC everyone on every patch in a series, everything else is
> impossible to review.

Ok. The series touches a bunch of set_memory() callers all over, so I
was trying to manage the CC list to something reasonable. I tried to
give a summary in each commit, but I guess it wasn't in depth enough.

Here is the lore link, if you haven't already found it:
https://lore.kernel.org/lkml/20231017202505.340906-1-rick.p.edgecombe@intel.com/