When a TDX guest runs on Hyper-V, the hv_netvsc driver's netvsc_init_buf()
allocates buffers using vzalloc(), and needs to share the buffers with the
host OS by calling set_memory_decrypted(), which does not yet work for
vmalloc(). Add the missing support by handling the pages one by one.
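
For reference, the calling pattern that needs this looks roughly like the
following (a simplified, illustrative sketch, not the actual
netvsc_init_buf() code; the function name, size handling and error paths
are made up):

#include <linux/vmalloc.h>
#include <linux/set_memory.h>

/* Hypothetical driver-style helper; 'size' is assumed page-aligned. */
static void *alloc_host_visible_buf(size_t size)
{
	void *buf = vzalloc(size);	/* vmalloc-backed, not physically contiguous */

	if (!buf)
		return NULL;

	/*
	 * Share the buffer with the host. Without this patch,
	 * tdx_enc_status_changed() assumes __pa() works on 'buf',
	 * which is wrong for vmalloc() addresses. (Real code must
	 * also be careful about partially-converted buffers here.)
	 */
	if (set_memory_decrypted((unsigned long)buf, size / PAGE_SIZE)) {
		vfree(buf);
		return NULL;
	}

	return buf;
}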
Signed-off-by: Dexuan Cui <decui@microsoft.com>
---
Changes in v2:
Changed tdx_enc_status_changed() in place.
Hi Dave, I checked the huge vmalloc mapping code, but still don't know
how to get the underlying huge page info (if a huge page is in use) and
try to use PG_LEVEL_2M/1G in try_accept_page() for vmalloc: I checked
is_vm_area_hugepages() and __vfree() -> __vunmap(), and I think the
underlying page allocation info is internal to the mm code, and there
is no mm API for me to get the info in tdx_enc_status_changed().
Hi Kirill, the load_unaligned_zeropad() issue is not addressed in
this patch. The issue looks like a generic issue that also happens in
AMD SNP vTOM mode and C-bit mode. We'll need to figure out how to
address the issue. If we decide to adjust the direct mapping to have
the shared bit set, it looks like we need to do the below for each
'start_va' vmalloc page:

pa = slow_virt_to_phys(start_va);
set_memory_decrypted(phys_to_virt(pa), 1); -- this line calls
tdx_enc_status_changed() a second time for the same page, which is not
great. It looks like we need to find a way to reuse the cpa_flush()
related code in __set_memory_enc_pgtable() and make sure we call
tdx_enc_status_changed() only once for the same page from vmalloc()?
Changes in v3:
No change since v2.
arch/x86/coco/tdx/tdx.c | 69 ++++++++++++++++++++++++++---------------
1 file changed, 44 insertions(+), 25 deletions(-)
diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index 6e2665c07395..2cad4b8c4dc4 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -7,6 +7,7 @@
 #include <linux/cpufeature.h>
 #include <linux/export.h>
 #include <linux/io.h>
+#include <linux/mm.h>
 #include <asm/coco.h>
 #include <asm/tdx.h>
 #include <asm/vmx.h>
@@ -800,6 +801,34 @@ static bool try_accept_one(phys_addr_t *start, unsigned long len,
 	return true;
 }
 
+static bool try_accept_page(phys_addr_t start, phys_addr_t end)
+{
+	/*
+	 * For shared->private conversion, accept the page using
+	 * TDX_ACCEPT_PAGE TDX module call.
+	 */
+	while (start < end) {
+		unsigned long len = end - start;
+
+		/*
+		 * Try larger accepts first. It gives chance to VMM to keep
+		 * 1G/2M SEPT entries where possible and speeds up process by
+		 * cutting number of hypercalls (if successful).
+		 */
+
+		if (try_accept_one(&start, len, PG_LEVEL_1G))
+			continue;
+
+		if (try_accept_one(&start, len, PG_LEVEL_2M))
+			continue;
+
+		if (!try_accept_one(&start, len, PG_LEVEL_4K))
+			return false;
+	}
+
+	return true;
+}
+
 /*
  * Notify the VMM about page mapping conversion. More info about ABI
  * can be found in TDX Guest-Host-Communication Interface (GHCI),
@@ -856,37 +885,27 @@ static bool tdx_map_gpa(phys_addr_t start, phys_addr_t end, bool enc)
  */
 static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
 {
-	phys_addr_t start = __pa(vaddr);
-	phys_addr_t end = __pa(vaddr + numpages * PAGE_SIZE);
+	bool is_vmalloc = is_vmalloc_addr((void *)vaddr);
+	unsigned long len = numpages * PAGE_SIZE;
+	void *start_va = (void *)vaddr, *end_va = start_va + len;
+	phys_addr_t start_pa, end_pa;
 
-	if (!tdx_map_gpa(start, end, enc))
+	if (offset_in_page(start_va) != 0)
 		return false;
 
-	/* private->shared conversion requires only MapGPA call */
-	if (!enc)
-		return true;
-
-	/*
-	 * For shared->private conversion, accept the page using
-	 * TDX_ACCEPT_PAGE TDX module call.
-	 */
-	while (start < end) {
-		unsigned long len = end - start;
-
-		/*
-		 * Try larger accepts first. It gives chance to VMM to keep
-		 * 1G/2M SEPT entries where possible and speeds up process by
-		 * cutting number of hypercalls (if successful).
-		 */
-
-		if (try_accept_one(&start, len, PG_LEVEL_1G))
-			continue;
+	while (start_va < end_va) {
+		start_pa = is_vmalloc ? slow_virt_to_phys(start_va) :
+					__pa(start_va);
+		end_pa = start_pa + (is_vmalloc ? PAGE_SIZE : len);
 
-		if (try_accept_one(&start, len, PG_LEVEL_2M))
-			continue;
+		if (!tdx_map_gpa(start_pa, end_pa, enc))
+			return false;
 
-		if (!try_accept_one(&start, len, PG_LEVEL_4K))
+		/* private->shared conversion requires only MapGPA call */
+		if (enc && !try_accept_page(start_pa, end_pa))
 			return false;
+
+		start_va += is_vmalloc ? PAGE_SIZE : len;
 	}
 
 	return true;
--
2.25.1
On Mon, Feb 06, 2023 at 11:24:15AM -0800, Dexuan Cui wrote:
> When a TDX guest runs on Hyper-V, the hv_netvsc driver's netvsc_init_buf()
> allocates buffers using vzalloc(), and needs to share the buffers with the
> host OS by calling set_memory_decrypted(), which does not yet work for
> vmalloc(). Add the missing support by handling the pages one by one.
>
> Signed-off-by: Dexuan Cui <decui@microsoft.com>
>
> ---
>
> Changes in v2:
> Changed tdx_enc_status_changed() in place.
>
> Hi Dave, I checked the huge vmalloc mapping code, but still don't know
> how to get the underlying huge page info (if a huge page is in use) and
> try to use PG_LEVEL_2M/1G in try_accept_page() for vmalloc: I checked
> is_vm_area_hugepages() and __vfree() -> __vunmap(), and I think the
> underlying page allocation info is internal to the mm code, and there
> is no mm API for me to get the info in tdx_enc_status_changed().
I also don't see an obvious way to retrieve this info after vmalloc() is
complete. split_page() makes all pages independent.
I think you can try to do this manually: allocate a vmalloc region,
allocate pages manually, and put them into the region. This way you always
know the page sizes and can optimize the conversion to shared memory.
But it is tedious and I'm not sure it's worth the gain.
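
A minimal sketch of that manual approach (hypothetical and untested, not
from the thread; error handling and the freeing path are omitted):

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/*
 * Back the region with physically contiguous, naturally aligned 2MB
 * chunks, so the conversion code knows it can use PG_LEVEL_2M for the
 * whole range.
 */
static void *alloc_va_2m_backed(unsigned int nr_chunks)
{
	const unsigned int per_chunk = PMD_SIZE / PAGE_SIZE;
	unsigned int i, j, count = nr_chunks * per_chunk;
	struct page **pages;
	void *va;

	pages = kcalloc(count, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	for (i = 0; i < nr_chunks; i++) {
		/* one physically contiguous 2MB chunk per iteration */
		struct page *page = alloc_pages(GFP_KERNEL | __GFP_ZERO,
						get_order(PMD_SIZE));

		for (j = 0; j < per_chunk; j++)
			pages[i * per_chunk + j] = page + j;
	}

	/* map all the 4K subpages into one contiguous virtual range */
	va = vmap(pages, count, VM_MAP, PAGE_KERNEL);

	/* the PTEs are installed; the temporary array is no longer needed */
	kfree(pages);
	return va;
}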
> Hi Kirill, the load_unaligned_zeropad() issue is not addressed in
> this patch. The issue looks like a generic issue that also happens in
> AMD SNP vTOM mode and C-bit mode. We'll need to figure out how to
> address the issue. If we decide to adjust the direct mapping to have
> the shared bit set, it looks like we need to do the below for each
> 'start_va' vmalloc page:
>
> pa = slow_virt_to_phys(start_va);
> set_memory_decrypted(phys_to_virt(pa), 1); -- this line calls
> tdx_enc_status_changed() a second time for the same page, which is not
> great. It looks like we need to find a way to reuse the cpa_flush()
> related code in __set_memory_enc_pgtable() and make sure we call
> tdx_enc_status_changed() only once for the same page from vmalloc()?
Actually, the current code will change the direct mapping for you. I just
double-checked: the alias processing in __change_page_attr_set_clr() will
change the direct mapping if you call it on vmalloc()ed memory.
Splitting the direct mapping is still unfortunate, but well.
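
In other words (an illustrative snippet, not from the thread; one call
on the vmalloc address is enough):

void *buf = vzalloc(PAGE_SIZE);
phys_addr_t pa = slow_virt_to_phys(buf);

/* This also updates the direct-map alias of the underlying page. */
set_memory_decrypted((unsigned long)buf, 1);

/*
 * So a second call like
 *	set_memory_decrypted((unsigned long)phys_to_virt(pa), 1);
 * is not needed: the alias processing in __change_page_attr_set_clr()
 * already changed the direct mapping at phys_to_virt(pa).
 */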
>
> Changes in v3:
> No change since v2.
>
> arch/x86/coco/tdx/tdx.c | 69 ++++++++++++++++++++++++++---------------
> 1 file changed, 44 insertions(+), 25 deletions(-)
I don't hate what you did here. But I think the code below is a bit
cleaner.
Any opinions?
static bool tdx_enc_status_changed_phys(phys_addr_t start, phys_addr_t end,
					bool enc)
{
	if (!tdx_map_gpa(start, end, enc))
		return false;

	/* private->shared conversion requires only MapGPA call */
	if (!enc)
		return true;

	return try_accept_page(start, end);
}

/*
 * Inform the VMM of the guest's intent for this physical page: shared with
 * the VMM or private to the guest. The VMM is expected to change its mapping
 * of the page in response.
 */
static bool tdx_enc_status_changed(unsigned long start, int numpages, bool enc)
{
	unsigned long end = start + numpages * PAGE_SIZE;

	if (offset_in_page(start) != 0)
		return false;

	if (!is_vmalloc_addr((void *)start))
		return tdx_enc_status_changed_phys(__pa(start), __pa(end), enc);

	while (start < end) {
		phys_addr_t start_pa = slow_virt_to_phys((void *)start);
		phys_addr_t end_pa = start_pa + PAGE_SIZE;

		if (!tdx_enc_status_changed_phys(start_pa, end_pa, enc))
			return false;

		start += PAGE_SIZE;
	}

	return true;
}
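
To illustrate how the two cases flow through this version (hypothetical
callers; 'page' and 'vzalloc_buf' are made-up variables, and 512 pages
is 2MB):

/*
 * Physically contiguous buffer: not a vmalloc address, so the whole
 * range is converted with a single tdx_enc_status_changed_phys() call.
 */
set_memory_decrypted((unsigned long)page_address(page), 512);

/*
 * vzalloc() buffer: a vmalloc address, so the loop converts it one 4K
 * page at a time, since the backing pages may be physically
 * discontiguous.
 */
set_memory_decrypted((unsigned long)vzalloc_buf, 512);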
--
Kiryl Shutsemau / Kirill A. Shutemov
> From: Kirill A. Shutemov <kirill@shutemov.name>
> Sent: Friday, February 17, 2023 5:20 AM
> To: Dexuan Cui <decui@microsoft.com>
> ...

Hi Kirill, sorry for my late reply!

> > Hi Dave, I checked the huge vmalloc mapping code, but still don't know
> > how to get the underlying huge page info (if a huge page is in use) and
> > try to use PG_LEVEL_2M/1G in try_accept_page() for vmalloc: I checked
> > is_vm_area_hugepages() and __vfree() -> __vunmap(), and I think the
> > underlying page allocation info is internal to the mm code, and there
> > is no mm API for me to get the info in tdx_enc_status_changed().
>
> I also don't see an obvious way to retrieve this info after vmalloc() is
> complete. split_page() makes all pages independent.
>
> I think you can try to do this manually: allocate a vmalloc region,
> allocate pages manually, and put them into the region. This way you always
> know the page sizes and can optimize the conversion to shared memory.
>
> But it is tedious and I'm not sure it's worth the gain.

Thanks, I'll do some research on this idea.

> > Hi Kirill, the load_unaligned_zeropad() issue is not addressed in
> > this patch. The issue looks like a generic issue that also happens in
> > AMD SNP vTOM mode and C-bit mode. We'll need to figure out how to
> > address the issue. If we decide to adjust the direct mapping to have
> > the shared bit set, it looks like we need to do the below for each
> > 'start_va' vmalloc page:
> >
> > pa = slow_virt_to_phys(start_va);
> > set_memory_decrypted(phys_to_virt(pa), 1); -- this line calls
> > tdx_enc_status_changed() a second time for the same page, which is not
> > great. It looks like we need to find a way to reuse the cpa_flush()
> > related code in __set_memory_enc_pgtable() and make sure we call
> > tdx_enc_status_changed() only once for the same page from vmalloc()?
>
> Actually, the current code will change the direct mapping for you. I just
> double-checked: the alias processing in __change_page_attr_set_clr() will
> change the direct mapping if you call it on vmalloc()ed memory.
>
> Splitting the direct mapping is still unfortunate, but well.
>
> >
> > Changes in v3:
> > No change since v2.
> >
> >  arch/x86/coco/tdx/tdx.c | 69 ++++++++++++++++++++++++++---------------
> >  1 file changed, 44 insertions(+), 25 deletions(-)
>
> I don't hate what you did here. But I think the code below is a bit
> cleaner.
>
> Any opinions?

Thanks! Your version looks much better. I'll use it in v4.