Date: Wed, 11 Jun 2025 21:16:35 +0000
Message-ID: <92f22ace98238b79c25bd8759c75a1143d82a741.1749672978.git.afranji@google.com>
Subject: [RFC PATCH v2 08/10] KVM: selftests: TDX: Add tests for TDX in-place migration
From: Ryan Afranji
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: sagis@google.com, bp@alien8.de, chao.p.peng@linux.intel.com,
	dave.hansen@linux.intel.com, dmatlack@google.com, erdemaktas@google.com,
	isaku.yamahata@intel.com, kai.huang@intel.com, mingo@redhat.com,
	pbonzini@redhat.com, seanjc@google.com, tglx@linutronix.de,
	zhi.wang.linux@gmail.com, ackerleytng@google.com, andrew.jones@linux.dev,
	david@redhat.com, hpa@zytor.com, kirill.shutemov@linux.intel.com,
	linux-kselftest@vger.kernel.org, tabba@google.com, vannapurve@google.com,
	yan.y.zhao@intel.com, rick.p.edgecombe@intel.com, Ryan Afranji

From: Sagi Shahar

Add selftests for TDX in-place migration. The tests cover migrating a
TD with only private memory, migrating a TD that has already started
running, migrating a TD with a shared memory mapping, and chaining a
migration across multiple destination VMs.

Signed-off-by: Ryan Afranji
Signed-off-by: Sagi Shahar
---
 tools/testing/selftests/kvm/Makefile.kvm      |   1 +
 .../testing/selftests/kvm/include/kvm_util.h  |  20 +
 .../selftests/kvm/include/x86/tdx/tdx_util.h  |   1 +
 tools/testing/selftests/kvm/lib/kvm_util.c    |  50 ++-
 .../selftests/kvm/lib/x86/tdx/tdx_util.c      |   3 +-
 .../selftests/kvm/x86/tdx_migrate_tests.c     | 358 ++++++++++++++++++
 6 files changed, 429 insertions(+), 4 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/x86/tdx_migrate_tests.c
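Reviewer aid, not part of the commit message: the flow every test below goes
through, condensed into one sketch. It uses only helpers the tests themselves
call (td_create(), tdx_enable_capabilities(), vm_vcpu_recreate(),
vm_migrate_mem_regions()); the wrapper name migrate_td(), the minimal error
handling, and the omission of the guest side are illustrative simplifications,
not code proposed for merge.

#include <sys/ioctl.h>

#include "kvm_util.h"
#include "test_util.h"
#include "tdx/tdx_util.h"

/* Move a finalized source TD into a freshly created destination TD. */
static struct kvm_vm *migrate_td(struct kvm_vm *src_vm,
                                 struct kvm_vcpu **dst_vcpu)
{
        struct kvm_vm *dst_vm = td_create();
        struct kvm_enable_cap cap = {
                .cap = KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM,
                .args = { src_vm->fd },
        };

        /* The destination needs the same TDX capabilities and a bare vCPU. */
        tdx_enable_capabilities(dst_vm);
        *dst_vcpu = vm_vcpu_recreate(dst_vm, 0);

        /* Hand the guest_memfd-backed memslots over first... */
        vm_migrate_mem_regions(dst_vm, src_vm);

        /* ...then move the encrypted VM context itself. */
        TEST_ASSERT(!ioctl(dst_vm->fd, KVM_ENABLE_CAP, &cap),
                    "KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM failed");
        src_vm->enc_migrated = true;

        return dst_vm;
}

After this point the source VM must not be run again; freeing it is still safe
because enc_migrated suppresses the munmap() of memory that now belongs to the
destination (see the __vm_mem_region_delete() change below).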
diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index 1c7ea61e9031..d4c8cfb5910f 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -155,6 +155,7 @@ TEST_GEN_PROGS_x86 += pre_fault_memory_test
 TEST_GEN_PROGS_x86 += x86/tdx_vm_test
 TEST_GEN_PROGS_x86 += x86/tdx_shared_mem_test
 TEST_GEN_PROGS_x86 += x86/tdx_upm_test
+TEST_GEN_PROGS_x86 += x86/tdx_migrate_tests
 
 # Compiled outputs used by test targets
 TEST_GEN_PROGS_EXTENDED_x86 += x86/nx_huge_pages_test
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 267f78f3f16f..1b6489081e74 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -110,6 +110,9 @@ struct kvm_vm {
 
         struct kvm_binary_stats stats;
 
+        /* VM was migrated using KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM */
+        bool enc_migrated;
+
         /*
          * KVM region slots. These are the default memslots used by page
          * allocators, e.g., lib/elf uses the memslots[MEM_REGION_CODE]
@@ -673,6 +676,7 @@ static inline bool vm_arch_has_protected_memory(struct kvm_vm *vm)
 
 void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
 void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
+void vm_migrate_mem_regions(struct kvm_vm *dst_vm, struct kvm_vm *src_vm);
 void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
 struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
 void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
@@ -1132,6 +1136,22 @@ static inline struct kvm_vcpu *vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
         return vcpu;
 }
 
+/*
+ * Adds a vCPU with no defaults. This vcpu will be used for migration
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   vcpu_id - The id of the VCPU to add to the VM.
+ */
+struct kvm_vcpu *vm_arch_vcpu_add_for_migration(struct kvm_vm *vm,
+                                                uint32_t vcpu_id);
+
+static inline struct kvm_vcpu *vm_vcpu_add_for_migration(struct kvm_vm *vm,
+                                                         uint32_t vcpu_id)
+{
+        return vm_arch_vcpu_add_for_migration(vm, vcpu_id);
+}
+
 /* Re-create a vCPU after restarting a VM, e.g. for state save/restore tests. */
 struct kvm_vcpu *vm_arch_vcpu_recreate(struct kvm_vm *vm, uint32_t vcpu_id);
 
diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h
index ae39b78aa4af..9b495e621225 100644
--- a/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h
+++ b/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h
@@ -9,6 +9,7 @@ extern uint64_t tdx_s_bit;
 
 void tdx_filter_cpuid(struct kvm_vm *vm, struct kvm_cpuid2 *cpuid_data);
 void __tdx_mask_cpuid_features(struct kvm_cpuid_entry2 *entry);
+void tdx_enable_capabilities(struct kvm_vm *vm);
 
 struct kvm_vcpu *td_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, void *guest_code);
 
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 3c131718b81a..9dc3c7bf0443 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -805,8 +805,10 @@ static void __vm_mem_region_delete(struct kvm_vm *vm,
 
         sparsebit_free(&region->unused_phy_pages);
         sparsebit_free(&region->protected_phy_pages);
-        ret = munmap(region->mmap_start, region->mmap_size);
-        TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret));
+        if (!vm->enc_migrated) {
+                ret = munmap(region->mmap_start, region->mmap_size);
+                TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret));
+        }
         if (region->fd >= 0) {
                 /* There's an extra map when using shared memory. */
                 ret = munmap(region->mmap_alias, region->mmap_size);
@@ -1287,6 +1289,50 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa)
                     ret, errno, slot, new_gpa);
 }
 
+static void vm_migrate_mem_region(struct kvm_vm *dst_vm, struct kvm_vm *src_vm,
+                                  struct userspace_mem_region *src_region)
+{
+        struct userspace_mem_region *dst_region;
+        int dst_guest_memfd;
+
+        dst_guest_memfd =
+                vm_link_guest_memfd(dst_vm, src_region->region.guest_memfd, 0);
+
+        dst_region = vm_mem_region_alloc(
+                dst_vm, src_region->region.guest_phys_addr,
+                src_region->region.slot,
+                src_region->region.memory_size / src_vm->page_size,
+                src_region->region.flags);
+
+        dst_region->mmap_size = src_region->mmap_size;
+        dst_region->mmap_start = src_region->mmap_start;
+        dst_region->host_mem = src_region->host_mem;
+
+        src_region->mmap_start = 0;
+        src_region->host_mem = 0;
+
+        dst_region->region.guest_memfd = dst_guest_memfd;
+        dst_region->region.guest_memfd_offset =
+                src_region->region.guest_memfd_offset;
+
+        userspace_mem_region_commit(dst_vm, dst_region);
+}
+
+void vm_migrate_mem_regions(struct kvm_vm *dst_vm, struct kvm_vm *src_vm)
+{
+        int bkt;
+        struct hlist_node *node;
+        struct userspace_mem_region *region;
+
+        hash_for_each_safe(src_vm->regions.slot_hash, bkt, node, region,
+                           slot_node) {
+                TEST_ASSERT(region->region.guest_memfd >= 0,
+                            "Migrating mem regions is only supported for GUEST_MEMFD");
+
+                vm_migrate_mem_region(dst_vm, src_vm, region);
+        }
+}
+
 /*
  * VM Memory Region Delete
  *
diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c
index c5bee67099c5..ef03d42f58d0 100644
--- a/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c
+++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c
@@ -344,7 +344,7 @@ static void register_encrypted_memory_region(struct kvm_vm *vm,
  * TD creation/setup/finalization
  */
 
-static void tdx_enable_capabilities(struct kvm_vm *vm)
+void tdx_enable_capabilities(struct kvm_vm *vm)
 {
         int rc;
 
@@ -574,7 +574,6 @@ void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
         uint64_t nr_pages_required;
 
         tdx_enable_capabilities(vm);
-
         tdx_td_init(vm, attributes);
 
         nr_pages_required = vm_nr_pages_required(VM_MODE_DEFAULT, 1, 0);
diff --git a/tools/testing/selftests/kvm/x86/tdx_migrate_tests.c b/tools/testing/selftests/kvm/x86/tdx_migrate_tests.c
new file mode 100644
index 000000000000..e15da2aa0437
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86/tdx_migrate_tests.c
@@ -0,0 +1,358 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include "tdx/tdcall.h"
+#include "tdx/tdx.h"
+#include "tdx/tdx_util.h"
+#include "tdx/test_util.h"
+#include
+#include
+
+#define NR_MIGRATE_TEST_VMS 10
+#define TDX_IOEXIT_TEST_PORT 0x50
+
+static int __tdx_migrate_from(int dst_fd, int src_fd)
+{
+        struct kvm_enable_cap cap = {
+                .cap = KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM,
+                .args = { src_fd }
+        };
+
+        return ioctl(dst_fd, KVM_ENABLE_CAP, &cap);
+}
+
+
+static void tdx_migrate_from(struct kvm_vm *dst_vm, struct kvm_vm *src_vm)
+{
+        int ret;
+
+        vm_migrate_mem_regions(dst_vm, src_vm);
+        ret = __tdx_migrate_from(dst_vm->fd, src_vm->fd);
+        TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno);
+        src_vm->enc_migrated = true;
+}
+
+void guest_code(void)
+{
+        int ret;
+        uint64_t data;
+
+        data = 1;
+        ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 1,
+                                           PORT_WRITE,
+                                           &data);
+        if (ret)
+                tdx_test_fatal_with_data(ret, __LINE__);
+
+        data++;
+        ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 1,
+                                           PORT_WRITE,
+                                           &data);
+        if (ret)
+                tdx_test_fatal_with_data(ret, __LINE__);
+
+        tdx_test_success();
+}
+
+static void test_tdx_migrate_vm_with_private_memory(void)
+{
+        struct kvm_vm *src_vm;
+        struct kvm_vm *dst_vm;
+        struct kvm_vcpu *dst_vcpu;
+        uint32_t data;
+
+        printf("Verifying migration of VM with private memory:\n");
+
+        src_vm = td_create();
+        td_initialize(src_vm, VM_MEM_SRC_ANONYMOUS, 0);
+        td_vcpu_add(src_vm, 0, guest_code);
+        td_finalize(src_vm);
+
+        dst_vm = td_create();
+        tdx_enable_capabilities(dst_vm);
+        dst_vcpu = vm_vcpu_recreate(dst_vm, 0);
+
+        tdx_migrate_from(dst_vm, src_vm);
+
+        kvm_vm_free(src_vm);
+
+        tdx_run(dst_vcpu);
+        tdx_test_assert_io(dst_vcpu, TDX_IOEXIT_TEST_PORT, 1,
+                           PORT_WRITE);
+        data = *(uint8_t *)((void *)dst_vcpu->run +
+                            dst_vcpu->run->io.data_offset);
+        TEST_ASSERT_EQ(data, 1);
+
+        tdx_run(dst_vcpu);
+        tdx_test_assert_io(dst_vcpu, TDX_IOEXIT_TEST_PORT, 1,
+                           PORT_WRITE);
+        data = *(uint8_t *)((void *)dst_vcpu->run +
+                            dst_vcpu->run->io.data_offset);
+        TEST_ASSERT_EQ(data, 2);
+
+        tdx_run(dst_vcpu);
+        tdx_test_assert_success(dst_vcpu);
+
+        kvm_vm_free(dst_vm);
+
+        printf("\t ... PASSED\n");
+}
+
+static void test_tdx_migrate_running_vm(void)
+{
+        struct kvm_vm *src_vm;
+        struct kvm_vm *dst_vm;
+        struct kvm_vcpu *src_vcpu;
+        struct kvm_vcpu *dst_vcpu;
+        uint32_t data;
+
+        printf("Verifying migration of a running VM:\n");
+
+        src_vm = td_create();
+        td_initialize(src_vm, VM_MEM_SRC_ANONYMOUS, 0);
+        src_vcpu = td_vcpu_add(src_vm, 0, guest_code);
+        td_finalize(src_vm);
+
+        dst_vm = td_create();
+        tdx_enable_capabilities(dst_vm);
+        dst_vcpu = vm_vcpu_recreate(dst_vm, 0);
+
+        tdx_run(src_vcpu);
+        tdx_test_assert_io(src_vcpu, TDX_IOEXIT_TEST_PORT, 1,
+                           PORT_WRITE);
+        data = *(uint8_t *)((void *)src_vcpu->run +
+                            src_vcpu->run->io.data_offset);
+        TEST_ASSERT_EQ(data, 1);
+
+        tdx_migrate_from(dst_vm, src_vm);
+
+        kvm_vm_free(src_vm);
+
+        tdx_run(dst_vcpu);
+        tdx_test_assert_io(dst_vcpu, TDX_IOEXIT_TEST_PORT, 1,
+                           PORT_WRITE);
+        data = *(uint8_t *)((void *)dst_vcpu->run +
+                            dst_vcpu->run->io.data_offset);
+        TEST_ASSERT_EQ(data, 2);
+
+        tdx_run(dst_vcpu);
+        tdx_test_assert_success(dst_vcpu);
+
+        kvm_vm_free(dst_vm);
+
+        printf("\t ... PASSED\n");
+}
+
+#define TDX_SHARED_MEM_TEST_PRIVATE_GVA (0x80000000)
+#define TDX_SHARED_MEM_TEST_VADDR_SHARED_MASK BIT_ULL(30)
+#define TDX_SHARED_MEM_TEST_SHARED_GVA \
+        (TDX_SHARED_MEM_TEST_PRIVATE_GVA | \
+         TDX_SHARED_MEM_TEST_VADDR_SHARED_MASK)
+
+#define TDX_SHARED_MEM_TEST_PRIVATE_VALUE (100)
+#define TDX_SHARED_MEM_TEST_SHARED_VALUE (200)
+#define TDX_SHARED_MEM_TEST_DIFF_VALUE (1)
+
+
+static uint64_t test_mem_private_gpa;
+static uint64_t test_mem_shared_gpa;
+
+void guest_with_shared_mem(void)
+{
+        uint64_t *test_mem_shared_gva =
+                (uint64_t *)TDX_SHARED_MEM_TEST_SHARED_GVA;
+
+        uint64_t *private_data, *shared_data;
+        uint64_t placeholder;
+        uint64_t failed_gpa;
+        uint64_t data;
+        int ret;
+
+        /* Map gpa as shared */
+        tdg_vp_vmcall_map_gpa(test_mem_shared_gpa, PAGE_SIZE,
+                              &failed_gpa);
+
+        shared_data = test_mem_shared_gva;
+        private_data = &data;
+
+        *private_data = TDX_SHARED_MEM_TEST_PRIVATE_VALUE;
+        *shared_data = TDX_SHARED_MEM_TEST_SHARED_VALUE;
+
+        ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 4,
+                                           PORT_WRITE,
+                                           private_data);
+        if (ret)
+                tdx_test_fatal_with_data(ret, __LINE__);
+
+        /* Exit so host can read shared value */
+        ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 4,
+                                           PORT_WRITE,
+                                           &placeholder);
+        if (ret)
+                tdx_test_fatal_with_data(ret, __LINE__);
+
+        *private_data += TDX_SHARED_MEM_TEST_DIFF_VALUE;
+        *shared_data += TDX_SHARED_MEM_TEST_DIFF_VALUE;
+
+        ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 4,
+                                           PORT_WRITE,
+                                           private_data);
+        if (ret)
+                tdx_test_fatal_with_data(ret, __LINE__);
+
+        /* Exit so host can read shared value */
+        ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 4,
+                                           PORT_WRITE,
+                                           &placeholder);
+        if (ret)
+                tdx_test_fatal_with_data(ret, __LINE__);
+
+        tdx_test_success();
+}
+
+static void test_tdx_migrate_vm_with_shared_mem(void)
+{
+        uint32_t private_data;
+        vm_vaddr_t test_mem_private_gva;
+        uint32_t *test_mem_hva;
+        struct kvm_vm *src_vm;
+        struct kvm_vm *dst_vm;
+        struct kvm_vcpu *src_vcpu;
+        struct kvm_vcpu *dst_vcpu;
+
+        printf("Verifying migration of a VM with shared memory:\n");
+
+        src_vm = td_create();
+        td_initialize(src_vm, VM_MEM_SRC_ANONYMOUS, 0);
+        src_vcpu = td_vcpu_add(src_vm, 0, guest_with_shared_mem);
+
+        /*
+         * Set up shared memory page for testing by first allocating as private
+         * and then mapping the same GPA again as shared. This way, the TD does
+         * not have to remap its page tables at runtime.
+         */
+        test_mem_private_gva = vm_vaddr_alloc(src_vm, src_vm->page_size,
+                                              TDX_SHARED_MEM_TEST_PRIVATE_GVA);
+        TEST_ASSERT_EQ(test_mem_private_gva, TDX_SHARED_MEM_TEST_PRIVATE_GVA);
+
+        test_mem_hva = addr_gva2hva(src_vm, test_mem_private_gva);
+        TEST_ASSERT(test_mem_hva != NULL,
+                    "Guest address not found in guest memory regions\n");
+
+        test_mem_private_gpa = addr_gva2gpa(src_vm, test_mem_private_gva);
+        virt_map_shared(src_vm, TDX_SHARED_MEM_TEST_SHARED_GVA,
+                        test_mem_private_gpa, 1);
+
+        test_mem_shared_gpa = test_mem_private_gpa | src_vm->arch.s_bit;
+        sync_global_to_guest(src_vm, test_mem_shared_gpa);
+
+        td_finalize(src_vm);
+
+        dst_vm = td_create();
+        tdx_enable_capabilities(dst_vm);
+        dst_vcpu = vm_vcpu_recreate(dst_vm, 0);
+
+        vm_enable_cap(src_vm, KVM_CAP_EXIT_HYPERCALL,
+                      BIT_ULL(KVM_HC_MAP_GPA_RANGE));
+
+        printf("Verifying shared memory accesses for TDX\n");
+
+        /* Begin guest execution; guest writes to shared memory. */
+        printf("\t ... Starting guest execution\n");
+
+        /* Handle map gpa as shared */
+        tdx_run(src_vcpu);
+
+        tdx_run(src_vcpu);
+        tdx_test_assert_io(src_vcpu, TDX_IOEXIT_TEST_PORT, 4, PORT_WRITE);
+        TEST_ASSERT_EQ(*(uint32_t *)((void *)src_vcpu->run +
+                                     src_vcpu->run->io.data_offset),
+                       TDX_SHARED_MEM_TEST_PRIVATE_VALUE);
+
+        tdx_run(src_vcpu);
+        tdx_test_assert_io(src_vcpu, TDX_IOEXIT_TEST_PORT, 4, PORT_WRITE);
+        TEST_ASSERT_EQ(*test_mem_hva, TDX_SHARED_MEM_TEST_SHARED_VALUE);
+
+        tdx_migrate_from(dst_vm, src_vm);
+
+        kvm_vm_free(src_vm);
+
+        tdx_run(dst_vcpu);
+        tdx_test_assert_io(dst_vcpu, TDX_IOEXIT_TEST_PORT, 4,
+                           PORT_WRITE);
+        private_data = *(uint32_t *)((void *)dst_vcpu->run +
+                                     dst_vcpu->run->io.data_offset);
+        TEST_ASSERT_EQ(private_data, TDX_SHARED_MEM_TEST_PRIVATE_VALUE +
+                                     TDX_SHARED_MEM_TEST_DIFF_VALUE);
+
+        tdx_run(dst_vcpu);
+        tdx_test_assert_io(dst_vcpu, TDX_IOEXIT_TEST_PORT, 4,
+                           PORT_WRITE);
+        TEST_ASSERT_EQ(*test_mem_hva, TDX_SHARED_MEM_TEST_SHARED_VALUE +
+                                      TDX_SHARED_MEM_TEST_DIFF_VALUE);
+
+        tdx_run(dst_vcpu);
+        tdx_test_assert_success(dst_vcpu);
+
+        kvm_vm_free(dst_vm);
+
+        printf("\t ... PASSED\n");
+}
+
+void guest_code_empty(void)
+{
+        tdx_test_success();
+}
+
+static void test_tdx_migrate_multiple_vms(void)
+{
+        struct kvm_vm *src_vm;
+        struct kvm_vm *dst_vms[NR_MIGRATE_TEST_VMS];
+        int i, ret;
+
+        printf("Verifying migration between multiple VMs:\n");
+
+        src_vm = td_create();
+        td_initialize(src_vm, VM_MEM_SRC_ANONYMOUS, 0);
+        td_vcpu_add(src_vm, 0, guest_code_empty);
+        td_finalize(src_vm);
+
+        for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i) {
+                dst_vms[i] = td_create();
+                tdx_enable_capabilities(dst_vms[i]);
+                vm_vcpu_recreate(dst_vms[i], 0);
+        }
+
+        /* Initial migration from the src to the first dst. */
+        tdx_migrate_from(dst_vms[0], src_vm);
+
+        for (i = 1; i < NR_MIGRATE_TEST_VMS; i++)
+                tdx_migrate_from(dst_vms[i], dst_vms[i - 1]);
+
+        /* Migrate the guest back to the original VM. */
+        ret = __tdx_migrate_from(src_vm->fd,
+                                 dst_vms[NR_MIGRATE_TEST_VMS - 1]->fd);
+        TEST_ASSERT(ret == -1 && errno == EIO,
+                    "VM that was migrated from should be dead. ret %d, errno: %d\n",
+                    ret, errno);
+
+        kvm_vm_free(src_vm);
+        for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
+                kvm_vm_free(dst_vms[i]);
+
+        printf("\t ... PASSED\n");
+}
+
+int main(int argc, char *argv[])
+{
+        if (!is_tdx_enabled()) {
+                print_skip("TDX is not supported by the KVM");
+                exit(KSFT_SKIP);
+        }
+
+        run_in_new_process(&test_tdx_migrate_vm_with_private_memory);
+        run_in_new_process(&test_tdx_migrate_running_vm);
+        run_in_new_process(&test_tdx_migrate_vm_with_shared_mem);
+        run_in_new_process(&test_tdx_migrate_multiple_vms);
+
+        return 0;
+}
-- 
2.50.0.rc1.591.g9c95f17f64-goog