From nobody Tue Apr 7 05:43:30 2026
Date: Tue, 30 Aug 2022 22:42:52 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
References: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-2-vannapurve@google.com>
Subject: [RFC V2 PATCH 1/8] selftests: kvm: x86_64: Add support for pagetable tracking
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com,
 jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org,
 yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com,
 aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com,
 corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org,
 akpm@linux-foundation.org, chao.p.peng@linux.intel.com,
 yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
 michael.roth@amd.com, qperret@google.com, steven.price@arm.com,
 ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz,
 marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com,
 seanjc@google.com, diviness@google.com, maz@kernel.org, dmatlack@google.com,
 axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com,
 bgardon@google.com, Vishal Annapurve
X-Mailing-List: linux-kernel@vger.kernel.org

Add support for mapping guest pagetable pages to a contiguous guest
virtual address range and sharing the physical-to-virtual mappings with
the guest in a pre-defined format. This functionality allows guests to
modify their own page table entries. One such use case for CC VMs is
toggling the encryption bit in their PTEs to switch memory between
encrypted and shared, and vice versa.

Signed-off-by: Vishal Annapurve
---
 .../selftests/kvm/include/kvm_util_base.h | 105 ++++++++++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c |  78 ++++++++++++-
 .../selftests/kvm/lib/x86_64/processor.c   |  32 ++++++
 3 files changed, 214 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index dfe454f228e7..f57ced56da1b 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -74,6 +74,11 @@ struct vm_memcrypt {
 	int8_t enc_bit;
 };
 
+struct pgt_page {
+	vm_paddr_t paddr;
+	struct list_head list;
+};
+
 struct kvm_vm {
 	int mode;
 	unsigned long type;
@@ -98,6 +103,10 @@ struct kvm_vm {
 	vm_vaddr_t handlers;
 	uint32_t dirty_ring_size;
 	struct vm_memcrypt memcrypt;
+	struct list_head pgt_pages;
+	bool track_pgt_pages;
+	uint32_t num_pgt_pages;
+	vm_vaddr_t pgt_vaddr_start;
 
 	/* Cache of information for binary stats interface */
 	int stats_fd;
@@ -184,6 +193,23 @@ struct vm_guest_mode_params {
 	unsigned int page_size;
 	unsigned int page_shift;
};
+
+/*
+ * Structure shared with the guest containing information about:
+ * - Starting virtual address for num_pgt_pages physical pagetable page
+ *   addresses tracked via the paddrs array
+ * - page size of the guest
+ *
+ * The guest can walk through its pagetables using this information to
+ * read/modify pagetable attributes.
+ */
+struct guest_pgt_info {
+	uint64_t num_pgt_pages;
+	uint64_t pgt_vaddr_start;
+	uint64_t page_size;
+	uint64_t paddrs[];
+};
+
 extern const struct vm_guest_mode_params vm_guest_mode_params[];
 
 int open_path_or_exit(const char *path, int flags);
@@ -394,6 +420,49 @@ void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
 
 struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
+void vm_map_page_table(struct kvm_vm *vm, vm_vaddr_t vaddr_min);
+
+/*
+ * Function called by guest code to translate the physical address of a
+ * pagetable page to a guest virtual address.
+ *
+ * Input Args:
+ *   gpgt_info - pointer to the guest_pgt_info structure containing info
+ *               about guest virtual address mappings for guest physical
+ *               addresses of page table pages
+ *   pgt_pa - physical address of the guest page table page to be
+ *            translated to a virtual address
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   Pointer to the pagetable page, NULL if the physical address is not
+ *   tracked via the given guest_pgt_info structure.
+ */
+void *guest_code_get_pgt_vaddr(struct guest_pgt_info *gpgt_info, uint64_t pgt_pa);
+
+/*
+ * Allocate and set up a page to be shared with the guest containing the
+ * guest_pgt_info structure.
+ *
+ * Note:
+ * 1) vm_set_pgt_alloc_tracking should be used to start tracking
+ *    physical page table page allocations.
+ * 2) This function should be invoked after the needed pagetable pages
+ *    are mapped to the VM using virt_pg_map.
+ *
+ * Input Args:
+ *   vm - virtual machine
+ *   vaddr_min - minimum guest virtual address to start mapping the
+ *               guest_pgt_info structure page(s)
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   Virtual address mapping the guest_pgt_info structure.
+ */
+vm_vaddr_t vm_setup_pgt_info_buf(struct kvm_vm *vm, vm_vaddr_t vaddr_min);
+
 vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
 vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
@@ -647,10 +716,46 @@ void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
 
 const char *exit_reason_str(unsigned int exit_reason);
 
+#ifdef __x86_64__
+/*
+ * Guest-called function to get a pointer to the pte corresponding to a
+ * given guest virtual address, using a pointer to the guest_pgt_info
+ * structure.
+ *
+ * Input Args:
+ *   gpgt_info - pointer to the guest_pgt_info structure containing
+ *               information about guest virtual addresses mapped to
+ *               pagetable physical addresses
+ *   vaddr - guest virtual address
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   Pointer to the pte corresponding to the guest virtual address,
+ *   NULL if the pte is not found.
+ */
+uint64_t *guest_code_get_pte(struct guest_pgt_info *gpgt_info, uint64_t vaddr);
+#endif
+
 vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
			     uint32_t memslot);
 vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
			      vm_paddr_t paddr_min, uint32_t memslot);
+
+/*
+ * Enable tracking of physical guest pagetable pages for the given VM.
+ * This function should be called right after VM creation, before any
+ * pages are mapped into the VM using the vm_alloc_* / vm_vaddr_alloc*
+ * functions.
+ *
+ * Input Args:
+ *   vm - virtual machine
+ *
+ * Output Args: None
+ *
+ * Return: None
+ */
+void vm_set_pgt_alloc_tracking(struct kvm_vm *vm);
+
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
 
 /*
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index f153c71d6988..243d04a3d4b6 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -155,6 +155,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
 	TEST_ASSERT(vm != NULL, "Insufficient Memory");
 
 	INIT_LIST_HEAD(&vm->vcpus);
+	INIT_LIST_HEAD(&vm->pgt_pages);
 	vm->regions.gpa_tree = RB_ROOT;
 	vm->regions.hva_tree = RB_ROOT;
 	hash_init(vm->regions.slot_hash);
@@ -573,6 +574,7 @@ void kvm_vm_free(struct kvm_vm *vmp)
 {
 	int ctr;
 	struct hlist_node *node;
+	struct pgt_page *entry, *nentry;
 	struct userspace_mem_region *region;
 
 	if (vmp == NULL)
@@ -588,6 +590,9 @@ void kvm_vm_free(struct kvm_vm *vmp)
 	hash_for_each_safe(vmp->regions.slot_hash, ctr, node, region, slot_node)
 		__vm_mem_region_delete(vmp, region, false);
 
+	list_for_each_entry_safe(entry, nentry, &vmp->pgt_pages, list)
+		free(entry);
+
 	/* Free sparsebit arrays.
	 */
 	sparsebit_free(&vmp->vpages_valid);
 	sparsebit_free(&vmp->vpages_mapped);
@@ -1195,9 +1200,24 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 /* Arbitrary minimum physical address used for virtual translation tables. */
 #define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
 
+void vm_set_pgt_alloc_tracking(struct kvm_vm *vm)
+{
+	vm->track_pgt_pages = true;
+}
+
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
 {
-	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+	struct pgt_page *pgt;
+	vm_paddr_t paddr = vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+
+	if (vm->track_pgt_pages) {
+		pgt = calloc(1, sizeof(*pgt));
+		TEST_ASSERT(pgt != NULL, "Insufficient memory");
+		pgt->paddr = addr_gpa2raw(vm, paddr);
+		list_add(&pgt->list, &vm->pgt_pages);
+		vm->num_pgt_pages++;
+	}
+	return paddr;
 }
 
 /*
@@ -1286,6 +1306,27 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
 	return pgidx_start * vm->page_size;
 }
 
+void vm_map_page_table(struct kvm_vm *vm, vm_vaddr_t vaddr_min)
+{
+	struct pgt_page *pgt_page_entry;
+	vm_vaddr_t vaddr;
+
+	/*
+	 * Stop tracking further pgt pages; mapping the pagetable may itself
+	 * need new pages.
+	 */
+	vm->track_pgt_pages = false;
+	vm_vaddr_t vaddr_start = vm_vaddr_unused_gap(vm,
+		vm->num_pgt_pages * vm->page_size, vaddr_min);
+	vaddr = vaddr_start;
+	list_for_each_entry(pgt_page_entry, &vm->pgt_pages, list) {
+		/* Map the virtual page.
		 */
+		virt_pg_map(vm, vaddr, addr_raw2gpa(vm, pgt_page_entry->paddr));
+		sparsebit_set(vm->vpages_mapped, vaddr >> vm->page_shift);
+		vaddr += vm->page_size;
+	}
+	vm->pgt_vaddr_start = vaddr_start;
+}
+
 /*
  * VM Virtual Address Allocate Shared/Encrypted
  *
@@ -1345,6 +1386,41 @@ vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
 	return _vm_vaddr_alloc(vm, sz, vaddr_min, false);
 }
 
+void *guest_code_get_pgt_vaddr(struct guest_pgt_info *gpgt_info,
+			       uint64_t pgt_pa)
+{
+	uint64_t num_pgt_pages = gpgt_info->num_pgt_pages;
+	uint64_t pgt_vaddr_start = gpgt_info->pgt_vaddr_start;
+	uint64_t page_size = gpgt_info->page_size;
+
+	for (uint32_t i = 0; i < num_pgt_pages; i++) {
+		if (gpgt_info->paddrs[i] == pgt_pa)
+			return (void *)(pgt_vaddr_start + i * page_size);
+	}
+	return NULL;
+}
+
+vm_vaddr_t vm_setup_pgt_info_buf(struct kvm_vm *vm, vm_vaddr_t vaddr_min)
+{
+	struct pgt_page *pgt_page_entry;
+	struct guest_pgt_info *gpgt_info;
+	uint64_t info_size = sizeof(*gpgt_info) + (sizeof(uint64_t) * vm->num_pgt_pages);
+	uint64_t num_pages = align_up(info_size, vm->page_size);
+	vm_vaddr_t buf_start = vm_vaddr_alloc(vm, num_pages, vaddr_min);
+	uint32_t i = 0;
+
+	gpgt_info = (struct guest_pgt_info *)addr_gva2hva(vm, buf_start);
+	gpgt_info->num_pgt_pages = vm->num_pgt_pages;
+	gpgt_info->pgt_vaddr_start = vm->pgt_vaddr_start;
+	gpgt_info->page_size = vm->page_size;
+	list_for_each_entry(pgt_page_entry, &vm->pgt_pages, list) {
+		gpgt_info->paddrs[i] = pgt_page_entry->paddr;
+		i++;
+	}
+	TEST_ASSERT((i == vm->num_pgt_pages), "pgt entries mismatch with the counter");
+	return buf_start;
+}
+
 /*
  * VM Virtual Address Allocate Pages
  *
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 09d757a0b148..02252cabf9ec 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -217,6 +217,38 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 	__virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K);
 }
 
+uint64_t *guest_code_get_pte(struct guest_pgt_info *gpgt_info, uint64_t vaddr)
+{
+	uint16_t index[4];
+	uint64_t *pml4e, *pdpe, *pde, *pte;
+	uint64_t pgt_paddr = get_cr3();
+	uint64_t page_size = gpgt_info->page_size;
+
+	index[0] = (vaddr >> 12) & 0x1ffu;
+	index[1] = (vaddr >> 21) & 0x1ffu;
+	index[2] = (vaddr >> 30) & 0x1ffu;
+	index[3] = (vaddr >> 39) & 0x1ffu;
+
+	pml4e = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pml4e && (pml4e[index[3]] & PTE_PRESENT_MASK));
+
+	pgt_paddr = (PTE_GET_PFN(pml4e[index[3]]) * page_size);
+	pdpe = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pdpe && (pdpe[index[2]] & PTE_PRESENT_MASK) &&
+		     !(pdpe[index[2]] & PTE_LARGE_MASK));
+
+	pgt_paddr = (PTE_GET_PFN(pdpe[index[2]]) * page_size);
+	pde = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pde && (pde[index[1]] & PTE_PRESENT_MASK) &&
+		     !(pde[index[1]] & PTE_LARGE_MASK));
+
+	pgt_paddr = (PTE_GET_PFN(pde[index[1]]) * page_size);
+	pte = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pte && (pte[index[0]] & PTE_PRESENT_MASK));
+
+	return (uint64_t *)&pte[index[0]];
+}
+
 static uint64_t *_vm_get_page_table_entry(struct kvm_vm *vm,
					  struct kvm_vcpu *vcpu,
					  uint64_t vaddr)
-- 
2.37.2.672.g94769d06f0-goog
From nobody Tue Apr 7 05:43:30 2026
Date: Tue, 30 Aug 2022 22:42:53 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
References: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-3-vannapurve@google.com>
Subject: [RFC V2 PATCH 2/8] kvm: Add HVA range operator
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org

Introduce an HVA range operator so that other KVM subsystems can
operate on an HVA range.
Signed-off-by: Vishal Annapurve
---
 include/linux/kvm_host.h |  6 +++++
 virt/kvm/kvm_main.c      | 48 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 4508fa0e8fb6..c860e6d6408d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1398,6 +1398,12 @@ void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void kvm_mmu_updating_begin(struct kvm *kvm, gfn_t start, gfn_t end);
 void kvm_mmu_updating_end(struct kvm *kvm, gfn_t start, gfn_t end);
 
+typedef int (*kvm_hva_range_op_t)(struct kvm *kvm,
+			struct kvm_gfn_range *range, void *data);
+
+int kvm_vm_do_hva_range_op(struct kvm *kvm, unsigned long hva_start,
+			unsigned long hva_end, kvm_hva_range_op_t handler, void *data);
+
 long kvm_arch_dev_ioctl(struct file *filp,
			unsigned int ioctl, unsigned long arg);
 long kvm_arch_vcpu_ioctl(struct file *filp,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 7597949fe031..16cb9ab59143 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -647,6 +647,54 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 	return (int)ret;
 }
 
+int kvm_vm_do_hva_range_op(struct kvm *kvm, unsigned long hva_start,
+	unsigned long hva_end, kvm_hva_range_op_t handler, void *data)
+{
+	int ret = 0;
+	struct kvm_gfn_range gfn_range;
+	struct kvm_memory_slot *slot;
+	struct kvm_memslots *slots;
+	int i, idx;
+
+	if (WARN_ON_ONCE(hva_end <= hva_start))
+		return -EINVAL;
+
+	idx = srcu_read_lock(&kvm->srcu);
+
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		struct interval_tree_node *node;
+
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot_in_hva_range(node, slots,
+						  hva_start, hva_end - 1) {
+			unsigned long start, end;
+
+			slot = container_of(node, struct kvm_memory_slot,
+					    hva_node[slots->node_idx]);
+			start = max(hva_start, slot->userspace_addr);
+			end = min(hva_end, slot->userspace_addr +
+				  (slot->npages <<
				   PAGE_SHIFT));
+
+			/*
+			 * {gfn(page) | page intersects with [hva_start, hva_end)} =
+			 * {gfn_start, gfn_start+1, ..., gfn_end-1}.
+			 */
+			gfn_range.start = hva_to_gfn_memslot(start, slot);
+			gfn_range.end = hva_to_gfn_memslot(end + PAGE_SIZE - 1, slot);
+			gfn_range.slot = slot;
+
+			ret = handler(kvm, &gfn_range, data);
+			if (ret)
+				goto e_ret;
+		}
+	}
+
+e_ret:
+	srcu_read_unlock(&kvm->srcu, idx);
+
+	return ret;
+}
+
 static __always_inline int kvm_handle_hva_range(struct mmu_notifier *mn,
						unsigned long start,
						unsigned long end,
-- 
2.37.2.672.g94769d06f0-goog
From nobody Tue Apr 7 05:43:30 2026
Date: Tue, 30 Aug 2022 22:42:54 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
References: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-4-vannapurve@google.com>
Subject: [RFC V2 PATCH 3/8] arch: x86: sev: Populate private memory fd during LAUNCH_UPDATE_DATA
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org
This change adds handling of HVA ranges to copy contents to private
memory while doing SEV launch update data. The mem_attr array is
updated during LAUNCH_UPDATE_DATA to ensure that encrypted memory is
marked as private.
Signed-off-by: Vishal Annapurve
---
 arch/x86/kvm/svm/sev.c   | 99 ++++++++++++++++++++++++++++++++++++----
 include/linux/kvm_host.h |  2 +
 virt/kvm/kvm_main.c      | 39 ++++++++++------
 3 files changed, 116 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 309bcdb2f929..673dca318cd4 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -492,23 +492,22 @@ static unsigned long get_num_contig_pages(unsigned long idx,
 	return pages;
 }
 
-static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+int sev_launch_update_shared_gfn_handler(struct kvm *kvm,
+	struct kvm_gfn_range *range, struct kvm_sev_cmd *argp)
 {
 	unsigned long vaddr, vaddr_end, next_vaddr, npages, pages, size, i;
 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
-	struct kvm_sev_launch_update_data params;
 	struct sev_data_launch_update_data data;
 	struct page **inpages;
 	int ret;
 
-	if (!sev_guest(kvm))
-		return -ENOTTY;
-
-	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data, sizeof(params)))
-		return -EFAULT;
+	vaddr = gfn_to_hva_memslot(range->slot, range->start);
+	if (kvm_is_error_hva(vaddr)) {
+		pr_err("vaddr is erroneous 0x%lx\n", vaddr);
+		return -EINVAL;
+	}
 
-	vaddr = params.uaddr;
-	size = params.len;
+	size = (range->end - range->start) << PAGE_SHIFT;
 	vaddr_end = vaddr + size;
 
 	/* Lock the user memory.
	 */
@@ -560,6 +559,88 @@ static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+int sev_launch_update_priv_gfn_handler(struct kvm *kvm,
+	struct kvm_gfn_range *range, struct kvm_sev_cmd *argp)
+{
+	struct sev_data_launch_update_data data;
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	gfn_t gfn;
+	kvm_pfn_t pfn;
+	struct kvm_memory_slot *memslot = range->slot;
+	int ret = 0;
+
+	data.reserved = 0;
+	data.handle = sev->handle;
+
+	for (gfn = range->start; gfn < range->end; gfn++) {
+		int order;
+		void *kvaddr;
+
+		ret = kvm_private_mem_get_pfn(memslot, gfn, &pfn, &order);
+		if (ret)
+			return ret;
+
+		kvaddr = pfn_to_kaddr(pfn);
+		if (!virt_addr_valid(kvaddr)) {
+			pr_err("Invalid kvaddr 0x%lx\n", (uint64_t)kvaddr);
+			ret = -EINVAL;
+			goto e_ret;
+		}
+
+		ret = kvm_read_guest_page(kvm, gfn, kvaddr, 0, PAGE_SIZE);
+		if (ret) {
+			pr_err("guest read failed 0x%lx\n", ret);
+			goto e_ret;
+		}
+
+		if (!this_cpu_has(X86_FEATURE_SME_COHERENT))
+			clflush_cache_range(kvaddr, PAGE_SIZE);
+
+		data.len = PAGE_SIZE;
+		data.address = __sme_set(pfn << PAGE_SHIFT);
+		ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_DATA, &data, &argp->error);
+		if (ret)
+			goto e_ret;
+
+		kvm_private_mem_put_pfn(memslot, pfn);
+	}
+	kvm_vm_set_region_attr(kvm, range->start, range->end,
+			       true /* priv_attr */);
+
+	return ret;
+
+e_ret:
+	kvm_private_mem_put_pfn(memslot, pfn);
+	return ret;
+}
+
+int sev_launch_update_gfn_handler(struct kvm *kvm,
+	struct kvm_gfn_range *range, void *data)
+{
+	struct kvm_sev_cmd *argp = (struct kvm_sev_cmd *)data;
+
+	if (kvm_slot_can_be_private(range->slot))
+		return sev_launch_update_priv_gfn_handler(kvm, range, argp);
+
+	return sev_launch_update_shared_gfn_handler(kvm, range, argp);
+}
+
+static int sev_launch_update_data(struct kvm *kvm,
+	struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_launch_update_data params;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	if (copy_from_user(&params,
			   (void __user *)(uintptr_t)argp->data, sizeof(params)))
+		return -EFAULT;
+
+	return kvm_vm_do_hva_range_op(kvm, params.uaddr, params.uaddr + params.len,
+				      sev_launch_update_gfn_handler, argp);
+}
+
 static int sev_es_sync_vmsa(struct vcpu_svm *svm)
 {
 	struct sev_es_save_area *save = svm->sev_es.vmsa;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c860e6d6408d..5d0054e957b4 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -980,6 +980,8 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
 void kvm_exit(void);
 
 void kvm_get_kvm(struct kvm *kvm);
+int kvm_vm_set_region_attr(struct kvm *kvm, unsigned long gfn_start,
+	unsigned long gfn_end, bool priv_attr);
 bool kvm_get_kvm_safe(struct kvm *kvm);
 void kvm_put_kvm(struct kvm *kvm);
 bool file_is_kvm(struct file *file);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 16cb9ab59143..9463737c2172 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -981,7 +981,7 @@ static int kvm_vm_populate_private_mem(struct kvm *kvm, unsigned long gfn_start,
 	}
 
 	mutex_lock(&kvm->slots_lock);
-	for (gfn = gfn_start; gfn <= gfn_end; gfn++) {
+	for (gfn = gfn_start; gfn < gfn_end; gfn++) {
 		int order;
 		void *kvaddr;
 
@@ -1012,12 +1012,29 @@ static int kvm_vm_populate_private_mem(struct kvm *kvm, unsigned long gfn_start,
 }
 #endif
 
+int kvm_vm_set_region_attr(struct kvm *kvm, unsigned long gfn_start,
+	unsigned long gfn_end, bool priv_attr)
+{
+	int r;
+	void *entry;
+	unsigned long index;
+
+	entry = priv_attr ?
xa_mk_value(KVM_MEM_ATTR_PRIVATE) : NULL; + + for (index =3D gfn_start; index < gfn_end; index++) { + r =3D xa_err(xa_store(&kvm->mem_attr_array, index, entry, + GFP_KERNEL_ACCOUNT)); + if (r) + break; + } + + return r; +} + static int kvm_vm_ioctl_set_encrypted_region(struct kvm *kvm, unsigned int= ioctl, struct kvm_enc_region *region) { unsigned long start, end; - unsigned long index; - void *entry; int r; =20 if (region->size =3D=3D 0 || region->addr + region->size < region->addr) @@ -1026,22 +1043,14 @@ static int kvm_vm_ioctl_set_encrypted_region(struct= kvm *kvm, unsigned int ioctl return -EINVAL; =20 start =3D region->addr >> PAGE_SHIFT; - end =3D (region->addr + region->size - 1) >> PAGE_SHIFT; - - entry =3D ioctl =3D=3D KVM_MEMORY_ENCRYPT_REG_REGION ? - xa_mk_value(KVM_MEM_ATTR_PRIVATE) : NULL; - - for (index =3D start; index <=3D end; index++) { - r =3D xa_err(xa_store(&kvm->mem_attr_array, index, entry, - GFP_KERNEL_ACCOUNT)); - if (r) - break; - } + end =3D (region->addr + region->size) >> PAGE_SHIFT; + r =3D kvm_vm_set_region_attr(kvm, start, end, + (ioctl =3D=3D KVM_MEMORY_ENCRYPT_REG_REGION)); =20 kvm_zap_gfn_range(kvm, start, end + 1); =20 #ifdef CONFIG_HAVE_KVM_PRIVATE_MEM_TESTING - if (!kvm->vm_entry_attempted && (ioctl =3D=3D KVM_MEMORY_ENCRYPT_REG_REGI= ON)) + if (!r && !kvm->vm_entry_attempted && (ioctl =3D=3D KVM_MEMORY_ENCRYPT_RE= G_REGION)) r =3D kvm_vm_populate_private_mem(kvm, start, end); #endif =20 --=20 2.37.2.672.g94769d06f0-goog From nobody Tue Apr 7 05:43:30 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A0563ECAAD5 for ; Tue, 30 Aug 2022 22:43:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232492AbiH3Wnt (ORCPT ); Tue, 30 Aug 2022 18:43:49 -0400 Received: from lindbergh.monkeyblade.net 
Date: Tue, 30 Aug 2022 22:42:55 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-5-vannapurve@google.com>
Subject: [RFC V2 PATCH 4/8] selftests: kvm: sev: Support memslots with private memory
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org

Introduce an additional helper API to create a SEV VM with private memory memslots.
Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/include/x86_64/sev.h |  2 ++
 tools/testing/selftests/kvm/lib/x86_64/sev.c     | 15 ++++++++++++---
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/sev.h b/tools/testing/selftests/kvm/include/x86_64/sev.h
index b6552ea1c716..628801707917 100644
--- a/tools/testing/selftests/kvm/include/x86_64/sev.h
+++ b/tools/testing/selftests/kvm/include/x86_64/sev.h
@@ -38,6 +38,8 @@ void kvm_sev_ioctl(struct sev_vm *sev, int cmd, void *data);
 struct kvm_vm *sev_get_vm(struct sev_vm *sev);
 uint8_t sev_get_enc_bit(struct sev_vm *sev);
 
+struct sev_vm *sev_vm_create_with_flags(uint32_t policy, uint64_t npages,
+		uint32_t memslot_flags);
 struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages);
 void sev_vm_free(struct sev_vm *sev);
 void sev_vm_launch(struct sev_vm *sev);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c
index 44b5ce5cd8db..6a329ea17f9f 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/sev.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c
@@ -171,7 +171,8 @@ void sev_vm_free(struct sev_vm *sev)
 	free(sev);
 }
 
-struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages)
+struct sev_vm *sev_vm_create_with_flags(uint32_t policy, uint64_t npages,
+		uint32_t memslot_flags)
 {
 	struct sev_vm *sev;
 	struct kvm_vm *vm;
@@ -188,9 +189,12 @@ struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages)
 	vm->vpages_mapped = sparsebit_alloc();
 	vm_set_memory_encryption(vm, true, true, sev->enc_bit);
 	pr_info("SEV cbit: %d\n", sev->enc_bit);
-	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, npages, 0);
-	sev_register_user_region(sev, addr_gpa2hva(vm, 0),
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, npages,
+		memslot_flags);
+	if (!(memslot_flags & KVM_MEM_PRIVATE)) {
+		sev_register_user_region(sev, addr_gpa2hva(vm, 0),
 			npages * vm->page_size);
+	}
 
 	pr_info("SEV guest created, policy: 0x%x, size: %lu KB\n",
 		sev->sev_policy, npages * vm->page_size / 1024);
@@ -198,6 +202,11 @@ struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages)
 	return sev;
 }
 
+struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages)
+{
+	return sev_vm_create_with_flags(policy, npages, 0);
+}
+
 void sev_vm_launch(struct sev_vm *sev)
 {
 	struct kvm_sev_launch_start ksev_launch_start = {0};
-- 
2.37.2.672.g94769d06f0-goog

From nobody Tue Apr 7 05:43:30 2026
Date: Tue, 30 Aug 2022 22:42:56 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-6-vannapurve@google.com>
Subject: [RFC V2 PATCH 5/8] selftests: kvm: Update usage of private mem lib for SEV VMs
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org

Add/update APIs to allow reusing the private mem lib for SEV VMs. Memory
conversion for SEV VMs includes updating guest pagetables, based on
virtual addresses, to toggle the C-bit.

Signed-off-by: Vishal Annapurve
---
 .../kvm/include/x86_64/private_mem.h          |   9 +-
 .../selftests/kvm/lib/x86_64/private_mem.c    | 103 +++++++++++++-----
 2 files changed, 83 insertions(+), 29 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/private_mem.h b/tools/testing/selftests/kvm/include/x86_64/private_mem.h
index 645bf3f61d1e..183b53b8c486 100644
--- a/tools/testing/selftests/kvm/include/x86_64/private_mem.h
+++ b/tools/testing/selftests/kvm/include/x86_64/private_mem.h
@@ -14,10 +14,10 @@ enum mem_conversion_type {
 	TO_SHARED
 };
 
-void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa,
-	uint64_t size);
-void guest_update_mem_map(enum mem_conversion_type type, uint64_t gpa,
-	uint64_t size);
+void guest_update_mem_access(enum mem_conversion_type type, uint64_t gva,
+	uint64_t gpa, uint64_t size);
+void guest_update_mem_map(enum mem_conversion_type type, uint64_t gva,
+	uint64_t gpa, uint64_t size);
 
 void guest_map_ucall_page_shared(void);
 
@@ -45,6 +45,7 @@ struct
vm_setup_info {
 	struct test_setup_info test_info;
 	guest_code_fn guest_fn;
 	io_exit_handler ioexit_cb;
+	uint32_t policy;	/* Used for SEV VMs */
 };
 
 void execute_vm_with_private_mem(struct vm_setup_info *info);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
index f6dcfa4d353f..28d93754e1f2 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
@@ -22,12 +22,45 @@
 #include
 #include
 #include
+#include
+
+#define GUEST_PGT_MIN_VADDR 0x10000
+
+/* Variables populated by userspace logic and consumed by guest code */
+static bool is_sev_vm;
+static struct guest_pgt_info *sev_gpgt_info;
+static uint8_t sev_enc_bit;
+
+static void sev_guest_set_clr_pte_bit(uint64_t vaddr_start, uint64_t mem_size,
+	bool set)
+{
+	uint64_t vaddr = vaddr_start;
+	uint32_t guest_page_size = sev_gpgt_info->page_size;
+	uint32_t num_pages;
+
+	GUEST_ASSERT(!(mem_size % guest_page_size) &&
+		!(vaddr_start % guest_page_size));
+
+	num_pages = mem_size / guest_page_size;
+	for (uint32_t i = 0; i < num_pages; i++) {
+		uint64_t *pte = guest_code_get_pte(sev_gpgt_info, vaddr);
+
+		GUEST_ASSERT(pte);
+		if (set)
+			*pte |= (1ULL << sev_enc_bit);
+		else
+			*pte &= ~(1ULL << sev_enc_bit);
+		asm volatile("invlpg (%0)" :: "r"(vaddr) : "memory");
+		vaddr += guest_page_size;
+	}
+}
 
 /*
  * Execute KVM hypercall to change memory access type for a given gpa range.
  *
  * Input Args:
  *   type - memory conversion type TO_SHARED/TO_PRIVATE
+ *   gva - starting gva address
  *   gpa - starting gpa address
  *   size - size of the range starting from gpa for which memory access needs
  *     to be changed
@@ -40,9 +73,12 @@
  * for a given gpa range. This API is useful in exercising implicit conversion
  * path.
  */
-void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa,
-	uint64_t size)
+void guest_update_mem_access(enum mem_conversion_type type, uint64_t gva,
+	uint64_t gpa, uint64_t size)
 {
+	if (is_sev_vm)
+		sev_guest_set_clr_pte_bit(gva, size, type == TO_PRIVATE ? true : false);
+
 	int ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> MIN_PAGE_SHIFT,
 		type == TO_PRIVATE ? KVM_MARK_GPA_RANGE_ENC_ACCESS :
 			KVM_CLR_GPA_RANGE_ENC_ACCESS, 0);
@@ -54,6 +90,7 @@ void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa,
 *
 * Input Args:
 *   type - memory conversion type TO_SHARED/TO_PRIVATE
+ *   gva - starting gva address
 *   gpa - starting gpa address
 *   size - size of the range starting from gpa for which memory type needs
 *     to be changed
@@ -65,9 +102,12 @@ void guest_update_mem_access(enum mem_conversion_type type,
 * Function called by guest logic in selftests to update the memory type for a
 * given gpa range. This API is useful in exercising explicit conversion path.
 */
-void guest_update_mem_map(enum mem_conversion_type type, uint64_t gpa,
-	uint64_t size)
+void guest_update_mem_map(enum mem_conversion_type type, uint64_t gva,
+	uint64_t gpa, uint64_t size)
 {
+	if (is_sev_vm)
+		sev_guest_set_clr_pte_bit(gva, size, type == TO_PRIVATE ? true : false);
+
 	int ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> MIN_PAGE_SHIFT,
 		type == TO_PRIVATE ? KVM_MAP_GPA_RANGE_ENCRYPTED :
 			KVM_MAP_GPA_RANGE_DECRYPTED, 0);
@@ -90,30 +130,15 @@ void guest_update_mem_map(enum mem_conversion_type type,
 void guest_map_ucall_page_shared(void)
 {
 	vm_paddr_t ucall_paddr = get_ucall_pool_paddr();
+	GUEST_ASSERT(ucall_paddr);
 
-	guest_update_mem_access(TO_SHARED, ucall_paddr, 1 << MIN_PAGE_SHIFT);
+	int ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, ucall_paddr, 1,
+		KVM_MAP_GPA_RANGE_DECRYPTED, 0);
+	GUEST_ASSERT_1(!ret, ret);
 }
 
-/*
- * Execute KVM ioctl to back/unback private memory for given gpa range.
- *
- * Input Args:
- *   vm - kvm_vm handle
- *   gpa - starting gpa address
- *   size - size of the gpa range
- *   op - mem_op indicating whether private memory needs to be allocated or
- *     unbacked
- *
- * Output Args: None
- *
- * Return: None
- *
- * Function called by host userspace logic in selftests to back/unback private
- * memory for gpa ranges. This function is useful to setup initial boot private
- * memory and then convert memory during runtime.
- */
-void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
-	enum mem_op op)
+static void vm_update_private_mem_internal(struct kvm_vm *vm, uint64_t gpa,
+	uint64_t size, enum mem_op op, bool encrypt)
 {
 	int priv_memfd;
 	uint64_t priv_offset, guest_phys_base, fd_offset;
@@ -142,6 +167,10 @@ void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
 	TEST_ASSERT(ret == 0, "fallocate failed\n");
 	enc_region.addr = gpa;
 	enc_region.size = size;
+
+	if (!encrypt)
+		return;
+
 	if (op == ALLOCATE_MEM) {
 		printf("doing encryption for gpa 0x%lx size 0x%lx\n", gpa, size);
 		vm_ioctl(vm, KVM_MEMORY_ENCRYPT_REG_REGION, &enc_region);
@@ -151,6 +180,30 @@ void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
 	}
 }
 
+/*
+ * Execute KVM ioctl to back/unback private memory for given gpa range.
+ *
+ * Input Args:
+ *   vm - kvm_vm handle
+ *   gpa - starting gpa address
+ *   size - size of the gpa range
+ *   op - mem_op indicating whether private memory needs to be allocated or
+ *     unbacked
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Function called by host userspace logic in selftests to back/unback private
+ * memory for gpa ranges. This function is useful to setup initial boot private
+ * memory and then convert memory during runtime.
+ */
+void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
+	enum mem_op op)
+{
+	vm_update_private_mem_internal(vm, gpa, size, op, true /* encrypt */);
+}
+
 static void handle_vm_exit_map_gpa_hypercall(struct kvm_vm *vm,
 	volatile struct kvm_run *run)
 {
-- 
2.37.2.672.g94769d06f0-goog

From nobody Tue Apr 7 05:43:30 2026
Date: Tue, 30 Aug 2022 22:42:57 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-7-vannapurve@google.com>
Subject: [RFC V2 PATCH 6/8] selftests: kvm: Support executing SEV VMs with private memory
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org

Add support for executing SEV VMs to test private memory conversion
scenarios.
Signed-off-by: Vishal Annapurve
---
 .../kvm/include/x86_64/private_mem.h          |  1 +
 .../selftests/kvm/lib/x86_64/private_mem.c    | 86 +++++++++++++++++++
 2 files changed, 87 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/private_mem.h b/tools/testing/selftests/kvm/include/x86_64/private_mem.h
index 183b53b8c486..d3ef88da837c 100644
--- a/tools/testing/selftests/kvm/include/x86_64/private_mem.h
+++ b/tools/testing/selftests/kvm/include/x86_64/private_mem.h
@@ -49,5 +49,6 @@ struct vm_setup_info {
 };
 
 void execute_vm_with_private_mem(struct vm_setup_info *info);
+void execute_sev_vm_with_private_mem(struct vm_setup_info *info);
 
 #endif /* SELFTEST_KVM_PRIVATE_MEM_H */
diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
index 28d93754e1f2..0eb8f92d19e8 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
@@ -348,3 +348,89 @@ void execute_vm_with_private_mem(struct vm_setup_info *info)
 	ucall_uninit(vm);
 	kvm_vm_free(vm);
 }
+
+/*
+ * Execute SEV VM with private memory memslots.
+ *
+ * Input Args:
+ *   info - pointer to a structure containing information about setting up a SEV
+ *     VM with private memslots
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Function called by host userspace logic in selftests to execute SEV VM
+ * logic. It will install two memslots:
+ * 1) memslot 0 : containing all the boot code/stack pages
+ * 2) test_mem_slot : containing the region of memory that would be used to test
+ *    private/shared memory accesses to a memory backed by private memslots
+ */
+void execute_sev_vm_with_private_mem(struct vm_setup_info *info)
+{
+	uint8_t measurement[512];
+	struct sev_vm *sev;
+	struct kvm_vm *vm;
+	struct kvm_enable_cap cap;
+	struct kvm_vcpu *vcpu;
+	uint32_t memslot0_pages = info->memslot0_pages;
+	uint64_t test_area_gpa, test_area_size;
+	struct test_setup_info *test_info = &info->test_info;
+
+	sev = sev_vm_create_with_flags(info->policy, memslot0_pages, KVM_MEM_PRIVATE);
+	TEST_ASSERT(sev, "Sev VM creation failed");
+	vm = sev_get_vm(sev);
+	vm->use_ucall_pool = true;
+	vm_set_pgt_alloc_tracking(vm);
+	vm_create_irqchip(vm);
+
+	TEST_ASSERT(info->guest_fn, "guest_fn not present");
+	vcpu = vm_vcpu_add(vm, 0, info->guest_fn);
+	kvm_vm_elf_load(vm, program_invocation_name);
+
+	vm_check_cap(vm, KVM_CAP_EXIT_HYPERCALL);
+	cap.cap = KVM_CAP_EXIT_HYPERCALL;
+	cap.flags = 0;
+	cap.args[0] = (1 << KVM_HC_MAP_GPA_RANGE);
+	vm_ioctl(vm, KVM_ENABLE_CAP, &cap);
+
+	TEST_ASSERT(test_info->test_area_size, "Test mem size not present");
+
+	test_area_size = test_info->test_area_size;
+	test_area_gpa = test_info->test_area_gpa;
+	vm_userspace_mem_region_add(vm, test_info->test_area_mem_src, test_area_gpa,
+		test_info->test_area_slot, test_area_size / vm->page_size,
+		KVM_MEM_PRIVATE);
+	vm_update_private_mem(vm, test_area_gpa, test_area_size, ALLOCATE_MEM);
+
+	virt_map(vm, test_area_gpa, test_area_gpa, test_area_size / vm->page_size);
+
+	vm_map_page_table(vm, GUEST_PGT_MIN_VADDR);
+	sev_gpgt_info = (struct guest_pgt_info *)vm_setup_pgt_info_buf(vm,
+		GUEST_PGT_MIN_VADDR);
+	sev_enc_bit = sev_get_enc_bit(sev);
+	is_sev_vm = true;
+	sync_global_to_guest(vm, sev_enc_bit);
+	sync_global_to_guest(vm, sev_gpgt_info);
+	sync_global_to_guest(vm, is_sev_vm);
+
+	vm_update_private_mem_internal(vm, 0, (memslot0_pages << MIN_PAGE_SHIFT),
+		ALLOCATE_MEM, false);
+
+	/* Allocations/setup done. Encrypt initial guest payload. */
+	sev_vm_launch(sev);
+
+	/* Dump the initial measurement. A test to actually verify it would be nice. */
+	sev_vm_launch_measure(sev, measurement);
+	pr_info("guest measurement: ");
+	for (uint32_t i = 0; i < 32; ++i)
+		pr_info("%02x", measurement[i]);
+	pr_info("\n");
+
+	sev_vm_launch_finish(sev);
+
+	vcpu_work(vm, vcpu, info);
+
+	sev_vm_free(sev);
+	is_sev_vm = false;
+}
-- 
2.37.2.672.g94769d06f0-goog

From nobody Tue Apr 7 05:43:30 2026
Date: Tue, 30 Aug 2022 22:42:58 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-8-vannapurve@google.com>
Subject: [RFC V2 PATCH 7/8] selftests: kvm: Refactor testing logic for private memory
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org

Move all of the logic to execute memory conversion tests into a library,
to allow sharing it between normal non-confidential VMs and SEV VMs.
Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../include/x86_64/private_mem_test_helper.h  |  13 +
 .../kvm/lib/x86_64/private_mem_test_helper.c  | 273 ++++++++++++++++++
 .../selftests/kvm/x86_64/private_mem_test.c   | 246 +---------------
 4 files changed, 289 insertions(+), 244 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index c5fc8ea2c843..36874fedff4a 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -52,6 +52,7 @@ LIBKVM_x86_64 += lib/x86_64/apic.c
 LIBKVM_x86_64 += lib/x86_64/handlers.S
 LIBKVM_x86_64 += lib/x86_64/perf_test_util.c
 LIBKVM_x86_64 += lib/x86_64/private_mem.c
+LIBKVM_x86_64 += lib/x86_64/private_mem_test_helper.c
 LIBKVM_x86_64 += lib/x86_64/processor.c
 LIBKVM_x86_64 += lib/x86_64/svm.c
 LIBKVM_x86_64 += lib/x86_64/ucall.c
diff --git a/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h b/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
new file mode 100644
index 000000000000..31bc559cd813
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+
+#ifndef SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H
+#define SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H
+
+void execute_memory_conversion_tests(void);
+
+void execute_sev_memory_conversion_tests(void);
+
+#endif // SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c
new file mode 100644
index 000000000000..ce53bef7896e
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c
@@ -0,0 +1,273 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define VM_MEMSLOT0_PAGES (512 * 10)
+
+#define TEST_AREA_SLOT 10
+#define TEST_AREA_GPA 0xC0000000
+#define TEST_AREA_SIZE (2 * 1024 * 1024)
+#define GUEST_TEST_MEM_OFFSET (1 * 1024 * 1024)
+#define GUEST_TEST_MEM_SIZE (10 * 4096)
+
+#define VM_STAGE_PROCESSED(x) pr_info("Processed stage %s\n", #x)
+
+#define TEST_MEM_DATA_PAT1 0x66
+#define TEST_MEM_DATA_PAT2 0x99
+#define TEST_MEM_DATA_PAT3 0x33
+#define TEST_MEM_DATA_PAT4 0xaa
+#define TEST_MEM_DATA_PAT5 0x12
+
+static bool verify_mem_contents(void *mem, uint32_t size, uint8_t pat)
+{
+	uint8_t *buf = (uint8_t *)mem;
+
+	for (uint32_t i = 0; i < size; i++) {
+		if (buf[i] != pat)
+			return false;
+	}
+
+	return true;
+}
+
+/*
+ * Add custom implementation for memset to avoid using standard/builtin memset
+ * which may use features like SSE/GOT that don't work with guest vm execution
+ * within selftests.
+ */
+void *memset(void *mem, int byte, size_t size)
+{
+	uint8_t *buf = (uint8_t *)mem;
+
+	for (uint32_t i = 0; i < size; i++)
+		buf[i] = byte;
+
+	return buf;
+}
+
+static void populate_test_area(void *test_area_base, uint64_t pat)
+{
+	memset(test_area_base, pat, TEST_AREA_SIZE);
+}
+
+static void populate_guest_test_mem(void *guest_test_mem, uint64_t pat)
+{
+	memset(guest_test_mem, pat, GUEST_TEST_MEM_SIZE);
+}
+
+static bool verify_test_area(void *test_area_base, uint64_t area_pat,
+			     uint64_t guest_pat)
+{
+	void *test_area1_base = test_area_base;
+	uint64_t test_area1_size = GUEST_TEST_MEM_OFFSET;
+	void *guest_test_mem = test_area_base + test_area1_size;
+	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
+	void *test_area2_base = guest_test_mem + guest_test_size;
+	uint64_t test_area2_size = (TEST_AREA_SIZE - (GUEST_TEST_MEM_OFFSET +
+				   GUEST_TEST_MEM_SIZE));
+
+	return (verify_mem_contents(test_area1_base, test_area1_size, area_pat) &&
+		verify_mem_contents(guest_test_mem, guest_test_size, guest_pat) &&
+		verify_mem_contents(test_area2_base, test_area2_size, area_pat));
+}
+
+#define GUEST_STARTED 0
+#define GUEST_PRIVATE_MEM_POPULATED 1
+#define GUEST_SHARED_MEM_POPULATED 2
+#define GUEST_PRIVATE_MEM_POPULATED2 3
+#define GUEST_IMPLICIT_MEM_CONV1 4
+#define GUEST_IMPLICIT_MEM_CONV2 5
+
+/*
+ * Run memory conversion tests supporting two types of conversion:
+ * 1) Explicit: Execute KVM hypercall to map/unmap gpa range which will cause
+ *    userspace exit to back/unback private memory. Subsequent accesses by guest
+ *    to the gpa range will not cause exit to userspace.
+ * 2) Implicit: Execute KVM hypercall to update memory access to a gpa range as
+ *    private/shared without exiting to userspace. Subsequent accesses by guest
+ *    to the gpa range will result in KVM EPT/NPT faults and then exit to
+ *    userspace for each page.
+ *
+ * Test memory conversion scenarios with following steps:
+ * 1) Access private memory using private access and verify that memory contents
+ *    are not visible to userspace.
+ * 2) Convert memory to shared using explicit/implicit conversions and ensure
+ *    that userspace is able to access the shared regions.
+ * 3) Convert memory back to private using explicit/implicit conversions and
+ *    ensure that userspace is again not able to access converted private
+ *    regions.
+ */
+static void guest_conv_test_fn(bool test_explicit_conv)
+{
+	void *test_area_base = (void *)TEST_AREA_GPA;
+	void *guest_test_mem = (void *)(TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET);
+	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
+
+	guest_map_ucall_page_shared();
+	GUEST_SYNC(GUEST_STARTED);
+
+	populate_test_area(test_area_base, TEST_MEM_DATA_PAT1);
+	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED);
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1,
+		TEST_MEM_DATA_PAT1));
+
+	if (test_explicit_conv)
+		guest_update_mem_map(TO_SHARED, (uint64_t)guest_test_mem,
+			(uint64_t)guest_test_mem, guest_test_size);
+	else {
+		guest_update_mem_access(TO_SHARED, (uint64_t)guest_test_mem,
+			(uint64_t)guest_test_mem, guest_test_size);
+		GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV1);
+	}
+
+	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT2);
+
+	GUEST_SYNC(GUEST_SHARED_MEM_POPULATED);
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1,
+		TEST_MEM_DATA_PAT5));
+
+	if (test_explicit_conv)
+		guest_update_mem_map(TO_PRIVATE, (uint64_t)guest_test_mem,
+			(uint64_t)guest_test_mem, guest_test_size);
+	else {
+		guest_update_mem_access(TO_PRIVATE, (uint64_t)guest_test_mem,
+			(uint64_t)guest_test_mem, guest_test_size);
+		GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV2);
+	}
+
+	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT3);
+	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED2);
+
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1,
+		TEST_MEM_DATA_PAT3));
+	GUEST_DONE();
+}
+
+static void conv_test_ioexit_fn(struct kvm_vm *vm, uint32_t uc_arg1)
+{
+	void *test_area_hva = addr_gpa2hva(vm, TEST_AREA_GPA);
+	void *guest_test_mem_hva = (test_area_hva + GUEST_TEST_MEM_OFFSET);
+	uint64_t guest_mem_gpa = (TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET);
+	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
+
+	switch (uc_arg1) {
+	case GUEST_STARTED:
+		populate_test_area(test_area_hva, TEST_MEM_DATA_PAT4);
+		VM_STAGE_PROCESSED(GUEST_STARTED);
+		break;
+	case GUEST_PRIVATE_MEM_POPULATED:
+		TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4,
+			TEST_MEM_DATA_PAT4), "failed");
+		VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED);
+		break;
+	case GUEST_SHARED_MEM_POPULATED:
+		TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4,
+			TEST_MEM_DATA_PAT2), "failed");
+		populate_guest_test_mem(guest_test_mem_hva, TEST_MEM_DATA_PAT5);
+		VM_STAGE_PROCESSED(GUEST_SHARED_MEM_POPULATED);
+		break;
+	case GUEST_PRIVATE_MEM_POPULATED2:
+		TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4,
+			TEST_MEM_DATA_PAT5), "failed");
+		VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED2);
+		break;
+	case GUEST_IMPLICIT_MEM_CONV1:
+		/*
+		 * For first implicit conversion, memory is already private so
+		 * mark it private again just to zap the pte entries for the gpa
+		 * range, so that subsequent accesses from the guest will
+		 * generate ept/npt fault and memory conversion path will be
+		 * exercised by KVM.
+		 */
+		vm_update_private_mem(vm, guest_mem_gpa, guest_test_size,
+			ALLOCATE_MEM);
+		VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV1);
+		break;
+	case GUEST_IMPLICIT_MEM_CONV2:
+		/*
+		 * For second implicit conversion, memory is already shared so
+		 * mark it shared again just to zap the pte entries for the gpa
+		 * range, so that subsequent accesses from the guest will
+		 * generate ept/npt fault and memory conversion path will be
+		 * exercised by KVM.
+		 */
+		vm_update_private_mem(vm, guest_mem_gpa, guest_test_size,
+			UNBACK_MEM);
+		VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV2);
+		break;
+	default:
+		TEST_FAIL("Unknown stage %d\n", uc_arg1);
+		break;
+	}
+}
+
+static void guest_explicit_conv_test_fn(void)
+{
+	guest_conv_test_fn(true);
+}
+
+static void guest_implicit_conv_test_fn(void)
+{
+	guest_conv_test_fn(false);
+}
+
+/*
+ * Execute implicit and explicit memory conversion tests with non-confidential
+ * VMs using memslots with private memory.
+ */
+void execute_memory_conversion_tests(void)
+{
+	struct vm_setup_info info;
+	struct test_setup_info *test_info = &info.test_info;
+
+	info.vm_mem_src = VM_MEM_SRC_ANONYMOUS;
+	info.memslot0_pages = VM_MEMSLOT0_PAGES;
+	test_info->test_area_gpa = TEST_AREA_GPA;
+	test_info->test_area_size = TEST_AREA_SIZE;
+	test_info->test_area_slot = TEST_AREA_SLOT;
+	test_info->test_area_mem_src = VM_MEM_SRC_ANONYMOUS;
+	info.ioexit_cb = conv_test_ioexit_fn;
+
+	info.guest_fn = guest_explicit_conv_test_fn;
+	execute_vm_with_private_mem(&info);
+
+	info.guest_fn = guest_implicit_conv_test_fn;
+	execute_vm_with_private_mem(&info);
+}
+
+/*
+ * Execute implicit and explicit memory conversion tests with SEV VMs using
+ * memslots with private memory.
+ */
+void execute_sev_memory_conversion_tests(void)
+{
+	struct vm_setup_info info;
+	struct test_setup_info *test_info = &info.test_info;
+
+	info.vm_mem_src = VM_MEM_SRC_ANONYMOUS;
+	info.memslot0_pages = VM_MEMSLOT0_PAGES;
+	test_info->test_area_gpa = TEST_AREA_GPA;
+	test_info->test_area_size = TEST_AREA_SIZE;
+	test_info->test_area_slot = TEST_AREA_SLOT;
+	test_info->test_area_mem_src = VM_MEM_SRC_ANONYMOUS;
+	info.ioexit_cb = conv_test_ioexit_fn;
+
+	info.policy = SEV_POLICY_NO_DBG;
+	info.guest_fn = guest_explicit_conv_test_fn;
+	execute_sev_vm_with_private_mem(&info);
+
+	info.guest_fn = guest_implicit_conv_test_fn;
+	execute_sev_vm_with_private_mem(&info);
+}
diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_test.c
index 52430b97bd0b..49da626e5807 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_test.c
@@ -1,263 +1,21 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * tools/testing/selftests/kvm/lib/kvm_util.c
- *
  * Copyright (C) 2022, Google LLC.
  */
 #define _GNU_SOURCE /* for program_invocation_short_name */
-#include
-#include
-#include
-#include
 #include
 #include
 #include
-#include
-
-#include
-#include
-#include
-#include
 
 #include
 #include
-#include
-#include
-
-#define VM_MEMSLOT0_PAGES (512 * 10)
-
-#define TEST_AREA_SLOT 10
-#define TEST_AREA_GPA 0xC0000000
-#define TEST_AREA_SIZE (2 * 1024 * 1024)
-#define GUEST_TEST_MEM_OFFSET (1 * 1024 * 1024)
-#define GUEST_TEST_MEM_SIZE (10 * 4096)
-
-#define VM_STAGE_PROCESSED(x) pr_info("Processed stage %s\n", #x)
-
-#define TEST_MEM_DATA_PAT1 0x66
-#define TEST_MEM_DATA_PAT2 0x99
-#define TEST_MEM_DATA_PAT3 0x33
-#define TEST_MEM_DATA_PAT4 0xaa
-#define TEST_MEM_DATA_PAT5 0x12
-
-static bool verify_mem_contents(void *mem, uint32_t size, uint8_t pat)
-{
-	uint8_t *buf = (uint8_t *)mem;
-
-	for (uint32_t i = 0; i < size; i++) {
-		if (buf[i] != pat)
-			return false;
-	}
-
-	return true;
-}
-
-/*
- * Add custom implementation for memset to avoid using standard/builtin memset
- * which may use features like SSE/GOT that don't work with guest vm execution
- * within selftests.
- */
-void *memset(void *mem, int byte, size_t size)
-{
-	uint8_t *buf = (uint8_t *)mem;
-
-	for (uint32_t i = 0; i < size; i++)
-		buf[i] = byte;
-
-	return buf;
-}
-
-static void populate_test_area(void *test_area_base, uint64_t pat)
-{
-	memset(test_area_base, pat, TEST_AREA_SIZE);
-}
-
-static void populate_guest_test_mem(void *guest_test_mem, uint64_t pat)
-{
-	memset(guest_test_mem, pat, GUEST_TEST_MEM_SIZE);
-}
-
-static bool verify_test_area(void *test_area_base, uint64_t area_pat,
-			     uint64_t guest_pat)
-{
-	void *test_area1_base = test_area_base;
-	uint64_t test_area1_size = GUEST_TEST_MEM_OFFSET;
-	void *guest_test_mem = test_area_base + test_area1_size;
-	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
-	void *test_area2_base = guest_test_mem + guest_test_size;
-	uint64_t test_area2_size = (TEST_AREA_SIZE - (GUEST_TEST_MEM_OFFSET +
-				   GUEST_TEST_MEM_SIZE));
-
-	return (verify_mem_contents(test_area1_base, test_area1_size, area_pat) &&
-		verify_mem_contents(guest_test_mem, guest_test_size, guest_pat) &&
-		verify_mem_contents(test_area2_base, test_area2_size, area_pat));
-}
-
-#define GUEST_STARTED 0
-#define GUEST_PRIVATE_MEM_POPULATED 1
-#define GUEST_SHARED_MEM_POPULATED 2
-#define GUEST_PRIVATE_MEM_POPULATED2 3
-#define GUEST_IMPLICIT_MEM_CONV1 4
-#define GUEST_IMPLICIT_MEM_CONV2 5
-
-/*
- * Run memory conversion tests supporting two types of conversion:
- * 1) Explicit: Execute KVM hypercall to map/unmap gpa range which will cause
- *    userspace exit to back/unback private memory. Subsequent accesses by guest
- *    to the gpa range will not cause exit to userspace.
- * 2) Implicit: Execute KVM hypercall to update memory access to a gpa range as
- *    private/shared without exiting to userspace. Subsequent accesses by guest
- *    to the gpa range will result in KVM EPT/NPT faults and then exit to
- *    userspace for each page.
- *
- * Test memory conversion scenarios with following steps:
- * 1) Access private memory using private access and verify that memory contents
- *    are not visible to userspace.
- * 2) Convert memory to shared using explicit/implicit conversions and ensure
- *    that userspace is able to access the shared regions.
- * 3) Convert memory back to private using explicit/implicit conversions and
- *    ensure that userspace is again not able to access converted private
- *    regions.
- */
-static void guest_conv_test_fn(bool test_explicit_conv)
-{
-	void *test_area_base = (void *)TEST_AREA_GPA;
-	void *guest_test_mem = (void *)(TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET);
-	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
-
-	guest_map_ucall_page_shared();
-	GUEST_SYNC(GUEST_STARTED);
-
-	populate_test_area(test_area_base, TEST_MEM_DATA_PAT1);
-	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED);
-	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1,
-		TEST_MEM_DATA_PAT1));
-
-	if (test_explicit_conv)
-		guest_update_mem_map(TO_SHARED, (uint64_t)guest_test_mem,
-			guest_test_size);
-	else {
-		guest_update_mem_access(TO_SHARED, (uint64_t)guest_test_mem,
-			guest_test_size);
-		GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV1);
-	}
-
-	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT2);
-
-	GUEST_SYNC(GUEST_SHARED_MEM_POPULATED);
-	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1,
-		TEST_MEM_DATA_PAT5));
-
-	if (test_explicit_conv)
-		guest_update_mem_map(TO_PRIVATE, (uint64_t)guest_test_mem,
-			guest_test_size);
-	else {
-		guest_update_mem_access(TO_PRIVATE, (uint64_t)guest_test_mem,
-			guest_test_size);
-		GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV2);
-	}
-
-	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT3);
-	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED2);
-
-	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1,
-		TEST_MEM_DATA_PAT3));
-	GUEST_DONE();
-}
-
-static void conv_test_ioexit_fn(struct kvm_vm *vm, uint32_t uc_arg1)
-{
-	void *test_area_hva = addr_gpa2hva(vm, TEST_AREA_GPA);
-	void *guest_test_mem_hva = (test_area_hva + GUEST_TEST_MEM_OFFSET);
-	uint64_t guest_mem_gpa = (TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET);
-	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
-
-	switch (uc_arg1) {
-	case GUEST_STARTED:
-		populate_test_area(test_area_hva, TEST_MEM_DATA_PAT4);
-		VM_STAGE_PROCESSED(GUEST_STARTED);
-		break;
-	case GUEST_PRIVATE_MEM_POPULATED:
-		TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4,
-			TEST_MEM_DATA_PAT4), "failed");
-		VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED);
-		break;
-	case GUEST_SHARED_MEM_POPULATED:
-		TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4,
-			TEST_MEM_DATA_PAT2), "failed");
-		populate_guest_test_mem(guest_test_mem_hva, TEST_MEM_DATA_PAT5);
-		VM_STAGE_PROCESSED(GUEST_SHARED_MEM_POPULATED);
-		break;
-	case GUEST_PRIVATE_MEM_POPULATED2:
-		TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4,
-			TEST_MEM_DATA_PAT5), "failed");
-		VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED2);
-		break;
-	case GUEST_IMPLICIT_MEM_CONV1:
-		/*
-		 * For first implicit conversion, memory is already private so
-		 * mark it private again just to zap the pte entries for the gpa
-		 * range, so that subsequent accesses from the guest will
-		 * generate ept/npt fault and memory conversion path will be
-		 * exercised by KVM.
-		 */
-		vm_update_private_mem(vm, guest_mem_gpa, guest_test_size,
-			ALLOCATE_MEM);
-		VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV1);
-		break;
-	case GUEST_IMPLICIT_MEM_CONV2:
-		/*
-		 * For second implicit conversion, memory is already shared so
-		 * mark it shared again just to zap the pte entries for the gpa
-		 * range, so that subsequent accesses from the guest will
-		 * generate ept/npt fault and memory conversion path will be
-		 * exercised by KVM.
-		 */
-		vm_update_private_mem(vm, guest_mem_gpa, guest_test_size,
-			UNBACK_MEM);
-		VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV2);
-		break;
-	default:
-		TEST_FAIL("Unknown stage %d\n", uc_arg1);
-		break;
-	}
-}
-
-static void guest_explicit_conv_test_fn(void)
-{
-	guest_conv_test_fn(true);
-}
-
-static void guest_implicit_conv_test_fn(void)
-{
-	guest_conv_test_fn(false);
-}
-
-static void execute_memory_conversion_test(void)
-{
-	struct vm_setup_info info;
-	struct test_setup_info *test_info = &info.test_info;
-
-	info.vm_mem_src = VM_MEM_SRC_ANONYMOUS;
-	info.memslot0_pages = VM_MEMSLOT0_PAGES;
-	test_info->test_area_gpa = TEST_AREA_GPA;
-	test_info->test_area_size = TEST_AREA_SIZE;
-	test_info->test_area_slot = TEST_AREA_SLOT;
-	test_info->test_area_mem_src = VM_MEM_SRC_ANONYMOUS;
-	info.ioexit_cb = conv_test_ioexit_fn;
-
-	info.guest_fn = guest_explicit_conv_test_fn;
-	execute_vm_with_private_mem(&info);
-
-	info.guest_fn = guest_implicit_conv_test_fn;
-	execute_vm_with_private_mem(&info);
-}
+#include
 
 int main(int argc, char *argv[])
 {
 	/* Tell stdout not to buffer its content */
 	setbuf(stdout, NULL);
 
-	execute_memory_conversion_test();
+	execute_memory_conversion_tests();
 	return 0;
 }
-- 
2.37.2.672.g94769d06f0-goog

From nobody Tue Apr 7 05:43:30 2026
Date: Tue, 30 Aug 2022 22:42:59 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
Mime-Version: 1.0
References: <20220830224259.412342-1-vannapurve@google.com>
X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog
Message-ID: <20220830224259.412342-9-vannapurve@google.com>
Subject: [RFC V2 PATCH 8/8] selftests: kvm: Add private memory test for SEV VMs
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org

Add a selftest placeholder for executing private memory conversion tests
with SEV VMs.
Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/.gitignore        |  1 +
 tools/testing/selftests/kvm/Makefile          |  1 +
 .../kvm/x86_64/sev_private_mem_test.c         | 21 +++++++++++++++++++
 3 files changed, 23 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 095b67dc632e..757d4cac19b4 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -37,6 +37,7 @@
 /x86_64/set_sregs_test
 /x86_64/sev_all_boot_test
 /x86_64/sev_migrate_tests
+/x86_64/sev_private_mem_test
 /x86_64/smm_test
 /x86_64/state_test
 /x86_64/svm_vmcall_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 36874fedff4a..3f8030c46b72 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -98,6 +98,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test
 TEST_GEN_PROGS_x86_64 += x86_64/private_mem_test
 TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id
 TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_private_mem_test
 TEST_GEN_PROGS_x86_64 += x86_64/smm_test
 TEST_GEN_PROGS_x86_64 += x86_64/state_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_preemption_timer_test
diff --git a/tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c b/tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c
new file mode 100644
index 000000000000..2c8edbaef627
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+int main(int argc, char *argv[])
+{
+	/* Tell stdout not to buffer its content */
+	setbuf(stdout, NULL);
+
+	execute_sev_memory_conversion_tests();
+	return 0;
+}
-- 
2.37.2.672.g94769d06f0-goog