From: Vishal Annapurve
Date: Wed, 11 May 2022 00:08:03 +0000
Message-Id: <20220511000811.384766-2-vannapurve@google.com>
In-Reply-To: <20220511000811.384766-1-vannapurve@google.com>
References: <20220511000811.384766-1-vannapurve@google.com>
Subject: [RFC V2 PATCH 1/8] selftests: kvm: Fix inline assembly for hypercall
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com, aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com, corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org, akpm@linux-foundation.org, chao.p.peng@linux.intel.com, yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, michael.roth@amd.com, qperret@google.com, steven.price@arm.com, ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz, marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com,
seanjc@google.com, diviness@google.com, Vishal Annapurve

Fix the hypercall inline assembly to explicitly load the hypercall number
into eax, so that the implementation keeps working even when the compiler
inlines the function.

Signed-off-by: Vishal Annapurve
Reviewed-by: Shuah Khan
---
 tools/testing/selftests/kvm/lib/x86_64/processor.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 9f000dfb5594..4d88e1a553bf 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1461,7 +1461,7 @@ uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2,
 
 	asm volatile("vmcall"
 		     : "=a"(r)
-		     : "b"(a0), "c"(a1), "d"(a2), "S"(a3));
+		     : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
 	return r;
 }
 
-- 
2.36.0.550.gb090851708-goog
From: Vishal Annapurve
Date: Wed, 11 May 2022 00:08:04 +0000
Message-Id: <20220511000811.384766-3-vannapurve@google.com>
In-Reply-To: <20220511000811.384766-1-vannapurve@google.com>
Subject: [RFC V2 PATCH 2/8] selftests: kvm: Add a basic selftest to test private memory

Add a KVM selftest that accesses private memory from within the guest to
verify that memory updates from the guest and from the userspace VMM do
not affect each other.
Signed-off-by: Vishal Annapurve
Reviewed-by: Shuah Khan
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 tools/testing/selftests/kvm/priv_memfd_test.c | 283 ++++++++++++++++++
 2 files changed, 284 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/priv_memfd_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 21c2dbd21a81..f2f9a8546c66 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -97,6 +97,7 @@ TEST_GEN_PROGS_x86_64 += max_guest_memory_test
 TEST_GEN_PROGS_x86_64 += memslot_modification_stress_test
 TEST_GEN_PROGS_x86_64 += memslot_perf_test
 TEST_GEN_PROGS_x86_64 += rseq_test
+TEST_GEN_PROGS_x86_64 += priv_memfd_test
 TEST_GEN_PROGS_x86_64 += set_memory_region_test
 TEST_GEN_PROGS_x86_64 += steal_time
 TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
new file mode 100644
index 000000000000..bbb58c62e186
--- /dev/null
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -0,0 +1,283 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+#define TEST_MEM_GPA 0xb0000000
+#define TEST_MEM_SIZE 0x2000
+#define TEST_MEM_END (TEST_MEM_GPA + TEST_MEM_SIZE)
+#define TEST_MEM_DATA_PAT1 0x6666666666666666
+#define TEST_MEM_DATA_PAT2 0x9999999999999999
+#define TEST_MEM_DATA_PAT3 0x3333333333333333
+#define TEST_MEM_DATA_PAT4 0xaaaaaaaaaaaaaaaa
+
+enum mem_op {
+	SET_PAT,
+	VERIFY_PAT
+};
+
+#define TEST_MEM_SLOT 10
+
+#define VCPU_ID 0
+
+#define VM_STAGE_PROCESSED(x) pr_info("Processed stage %s\n", #x)
+
+typedef bool (*vm_stage_handler_fn)(struct kvm_vm *,
+			void *, uint64_t);
+typedef void (*guest_code_fn)(void);
+struct test_run_helper {
+	char *test_desc;
+	vm_stage_handler_fn vmst_handler;
+	guest_code_fn guest_fn;
+	void *shared_mem;
+	int priv_memfd;
+};
+
+/* Guest code in selftests is loaded to guest memory using kvm_vm_elf_load
+ * which doesn't handle global offset table updates. Calling standard libc
+ * functions would normally result in referring to the global offset table.
+ * Adding O1 here seems to prohibit compiler from replacing the memory
+ * operations with standard libc functions such as memset.
+ */
+static bool __attribute__((optimize("O1"))) do_mem_op(enum mem_op op,
+		void *mem, uint64_t pat, uint32_t size)
+{
+	uint64_t *buf = (uint64_t *)mem;
+	uint32_t chunk_size = sizeof(pat);
+	uint64_t mem_addr = (uint64_t)mem;
+
+	if (((mem_addr % chunk_size) != 0) || ((size % chunk_size) != 0))
+		return false;
+
+	for (uint32_t i = 0; i < (size / chunk_size); i++) {
+		if (op == SET_PAT)
+			buf[i] = pat;
+		if (op == VERIFY_PAT) {
+			if (buf[i] != pat)
+				return false;
+		}
+	}
+
+	return true;
+}
+
+/* Test to verify guest private accesses on private memory with following steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM populates the shared memory with known pattern and continues guest
+ *    execution.
+ * 3) Guest writes a different pattern on the private memory and signals VMM
+ *    that it has updated private memory.
+ * 4) VMM verifies its shared memory contents to be same as the data populated
+ *    in step 2 and continues guest execution.
+ * 5) Guest verifies its private memory contents to be same as the data
+ *    populated in step 3 and marks the end of the guest execution.
+ */
+#define PMPAT_ID 0
+#define PMPAT_DESC "PrivateMemoryPrivateAccessTest"
+
+/* Guest code execution stages for private mem access test */
+#define PMPAT_GUEST_STARTED 0ULL
+#define PMPAT_GUEST_PRIV_MEM_UPDATED 1ULL
+
+static bool pmpat_handle_vm_stage(struct kvm_vm *vm,
+			void *test_info,
+			uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+
+	switch (stage) {
+	case PMPAT_GUEST_STARTED: {
+		/* Initialize the contents of shared memory */
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+				TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory update failure");
+		VM_STAGE_PROCESSED(PMPAT_GUEST_STARTED);
+		break;
+	}
+	case PMPAT_GUEST_PRIV_MEM_UPDATED: {
+		/* verify host updated data is still intact */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+				TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PMPAT_GUEST_PRIV_MEM_UPDATED);
+		break;
+	}
+	default:
+		printf("Unhandled VM stage %ld\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
+static void pmpat_guest_code(void)
+{
+	void *priv_mem = (void *)TEST_MEM_GPA;
+	int ret;
+
+	GUEST_SYNC(PMPAT_GUEST_STARTED);
+
+	/* Mark the GPA range to be treated as always accessed privately */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	GUEST_ASSERT(do_mem_op(SET_PAT, priv_mem, TEST_MEM_DATA_PAT2,
+			TEST_MEM_SIZE));
+	GUEST_SYNC(PMPAT_GUEST_PRIV_MEM_UPDATED);
+
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, priv_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+	GUEST_DONE();
+}
+
+static struct test_run_helper priv_memfd_testsuite[] = {
+	[PMPAT_ID] = {
+		.test_desc = PMPAT_DESC,
+		.vmst_handler = pmpat_handle_vm_stage,
+		.guest_fn = pmpat_guest_code,
+	},
+};
+
+static void vcpu_work(struct kvm_vm *vm, uint32_t test_id)
+{
+	struct kvm_run *run;
+	struct ucall uc;
+	uint64_t cmd;
+
+	/*
+	 * Loop until the guest is done.
+	 */
+	run = vcpu_state(vm, VCPU_ID);
+
+	while (true) {
+		vcpu_run(vm, VCPU_ID);
+
+		if (run->exit_reason == KVM_EXIT_IO) {
+			cmd = get_ucall(vm, VCPU_ID, &uc);
+			if (cmd != UCALL_SYNC)
+				break;
+
+			if (!priv_memfd_testsuite[test_id].vmst_handler(
+				vm, &priv_memfd_testsuite[test_id], uc.args[1]))
+				break;
+
+			continue;
+		}
+
+		TEST_FAIL("Unhandled VCPU exit reason %d\n", run->exit_reason);
+		break;
+	}
+
+	if (run->exit_reason == KVM_EXIT_IO && cmd == UCALL_ABORT)
+		TEST_FAIL("%s at %s:%ld, val = %lu", (const char *)uc.args[0],
+			__FILE__, uc.args[1], uc.args[2]);
+}
+
+static void priv_memory_region_add(struct kvm_vm *vm, void *mem, uint32_t slot,
+				uint32_t size, uint64_t guest_addr,
+				uint32_t priv_fd, uint64_t priv_offset)
+{
+	struct kvm_userspace_memory_region_ext region_ext;
+	int ret;
+
+	region_ext.region.slot = slot;
+	region_ext.region.flags = KVM_MEM_PRIVATE;
+	region_ext.region.guest_phys_addr = guest_addr;
+	region_ext.region.memory_size = size;
+	region_ext.region.userspace_addr = (uintptr_t) mem;
+	region_ext.private_fd = priv_fd;
+	region_ext.private_offset = priv_offset;
+	ret = ioctl(vm_get_fd(vm), KVM_SET_USER_MEMORY_REGION, &region_ext);
+	TEST_ASSERT(ret == 0, "Failed to register user region for gpa 0x%lx\n",
+		guest_addr);
+}
+
+/* Do private access to the guest's private memory */
+static void setup_and_execute_test(uint32_t test_id)
+{
+	struct kvm_vm *vm;
+	int priv_memfd;
+	int ret;
+	void *shared_mem;
+	struct kvm_enable_cap cap;
+
+	vm = vm_create_default(VCPU_ID, 0,
+		priv_memfd_testsuite[test_id].guest_fn);
+
+	/* Allocate shared memory */
+	shared_mem = mmap(NULL, TEST_MEM_SIZE,
+		PROT_READ | PROT_WRITE,
+		MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
+	TEST_ASSERT(shared_mem != MAP_FAILED, "Failed to mmap() host");
+
+	/* Allocate private memory */
+	priv_memfd = memfd_create("vm_private_mem", MFD_INACCESSIBLE);
+	TEST_ASSERT(priv_memfd != -1, "Failed to create priv_memfd");
+	ret = fallocate(priv_memfd, 0, 0, TEST_MEM_SIZE);
+	TEST_ASSERT(ret != -1, "fallocate failed");
+
+	priv_memory_region_add(vm, shared_mem,
+		TEST_MEM_SLOT, TEST_MEM_SIZE,
+		TEST_MEM_GPA, priv_memfd, 0);
+
+	pr_info("Mapping test memory pages 0x%x page_size 0x%x\n",
+		TEST_MEM_SIZE/vm_get_page_size(vm),
+		vm_get_page_size(vm));
+	virt_map(vm, TEST_MEM_GPA, TEST_MEM_GPA,
+		(TEST_MEM_SIZE/vm_get_page_size(vm)));
+
+	/* Enable exit on KVM_HC_MAP_GPA_RANGE */
+	pr_info("Enabling exit on map_gpa_range hypercall\n");
+	ret = ioctl(vm_get_fd(vm), KVM_CHECK_EXTENSION, KVM_CAP_EXIT_HYPERCALL);
+	TEST_ASSERT(ret & (1 << KVM_HC_MAP_GPA_RANGE),
+		"VM exit on MAP_GPA_RANGE HC not supported");
+	cap.cap = KVM_CAP_EXIT_HYPERCALL;
+	cap.flags = 0;
+	cap.args[0] = (1 << KVM_HC_MAP_GPA_RANGE);
+	ret = ioctl(vm_get_fd(vm), KVM_ENABLE_CAP, &cap);
+	TEST_ASSERT(ret == 0,
+		"Failed to enable exit on MAP_GPA_RANGE hypercall\n");
+
+	priv_memfd_testsuite[test_id].shared_mem = shared_mem;
+	priv_memfd_testsuite[test_id].priv_memfd = priv_memfd;
+	vcpu_work(vm, test_id);
+
+	munmap(shared_mem, TEST_MEM_SIZE);
+	priv_memfd_testsuite[test_id].shared_mem = NULL;
+	close(priv_memfd);
+	priv_memfd_testsuite[test_id].priv_memfd = -1;
+	kvm_vm_free(vm);
+}
+
+int main(int argc, char *argv[])
+{
+	/* Tell stdout not to buffer its content */
+	setbuf(stdout, NULL);
+
+	for (uint32_t i = 0; i < ARRAY_SIZE(priv_memfd_testsuite); i++) {
+		pr_info("=== Starting test %s... ===\n",
+			priv_memfd_testsuite[i].test_desc);
+		setup_and_execute_test(i);
+		pr_info("--- completed test %s ---\n\n",
+			priv_memfd_testsuite[i].test_desc);
+	}
+
+	return 0;
+}
-- 
2.36.0.550.gb090851708-goog
From: Vishal Annapurve
Date: Wed, 11 May 2022 00:08:05 +0000
Message-Id: <20220511000811.384766-4-vannapurve@google.com>
In-Reply-To: <20220511000811.384766-1-vannapurve@google.com>
Subject: [RFC V2 PATCH 3/8] selftests: kvm: priv_memfd_test: Add support for memory conversion

Add handling of explicit private/shared memory conversion via the
KVM_HC_MAP_GPA_RANGE hypercall, and of implicit memory conversion by
handling KVM_EXIT_MEMORY_ERROR exits.

Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/priv_memfd_test.c | 87 +++++++++++++++++++
 1 file changed, 87 insertions(+)

diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
index bbb58c62e186..55e24c893b07 100644
--- a/tools/testing/selftests/kvm/priv_memfd_test.c
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -155,6 +155,83 @@ static struct test_run_helper priv_memfd_testsuite[] = {
 	},
 };
 
+static void handle_vm_exit_hypercall(struct kvm_run *run,
+			uint32_t test_id)
+{
+	uint64_t gpa, npages, attrs;
+	int priv_memfd =
+		priv_memfd_testsuite[test_id].priv_memfd;
+	int ret;
+	int fallocate_mode;
+
+	if (run->hypercall.nr != KVM_HC_MAP_GPA_RANGE) {
+		TEST_FAIL("Unhandled Hypercall %lld\n",
+			run->hypercall.nr);
+	}
+
+	gpa = run->hypercall.args[0];
+	npages = run->hypercall.args[1];
+	attrs = run->hypercall.args[2];
+
+	if ((gpa < TEST_MEM_GPA) || ((gpa +
+		(npages << MIN_PAGE_SHIFT)) > TEST_MEM_END)) {
+		TEST_FAIL("Unhandled gpa 0x%lx npages %ld\n",
+			gpa, npages);
+	}
+
+	if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED)
+		fallocate_mode = 0;
+	else {
+		fallocate_mode = (FALLOC_FL_PUNCH_HOLE |
+			FALLOC_FL_KEEP_SIZE);
+	}
+	pr_info("Converting off 0x%lx pages 0x%lx to %s\n",
+		(gpa - TEST_MEM_GPA), npages,
+		fallocate_mode ?
+			"shared" : "private");
+	ret = fallocate(priv_memfd, fallocate_mode,
+		(gpa - TEST_MEM_GPA),
+		npages << MIN_PAGE_SHIFT);
+	TEST_ASSERT(ret != -1,
+		"fallocate failed in hc handling");
+	run->hypercall.ret = 0;
+}
+
+static void handle_vm_exit_memory_error(struct kvm_run *run,
+			uint32_t test_id)
+{
+	uint64_t gpa, size, flags;
+	int ret;
+	int priv_memfd =
+		priv_memfd_testsuite[test_id].priv_memfd;
+	int fallocate_mode;
+
+	gpa = run->memory.gpa;
+	size = run->memory.size;
+	flags = run->memory.flags;
+
+	if ((gpa < TEST_MEM_GPA) || ((gpa + size)
+		> TEST_MEM_END)) {
+		TEST_FAIL("Unhandled gpa 0x%lx size 0x%lx\n",
+			gpa, size);
+	}
+
+	if (flags & KVM_MEMORY_EXIT_FLAG_PRIVATE)
+		fallocate_mode = 0;
+	else {
+		fallocate_mode = (FALLOC_FL_PUNCH_HOLE |
+			FALLOC_FL_KEEP_SIZE);
+	}
+	pr_info("Converting off 0x%lx size 0x%lx to %s\n",
+		(gpa - TEST_MEM_GPA), size,
+		fallocate_mode ?
+			"shared" : "private");
+	ret = fallocate(priv_memfd, fallocate_mode,
+		(gpa - TEST_MEM_GPA), size);
+	TEST_ASSERT(ret != -1,
+		"fallocate failed in memory error handling");
+}
+
 static void vcpu_work(struct kvm_vm *vm, uint32_t test_id)
 {
 	struct kvm_run *run;
@@ -181,6 +258,16 @@ static void vcpu_work(struct kvm_vm *vm, uint32_t test_id)
 			continue;
 		}
 
+		if (run->exit_reason == KVM_EXIT_HYPERCALL) {
+			handle_vm_exit_hypercall(run, test_id);
+			continue;
+		}
+
+		if (run->exit_reason == KVM_EXIT_MEMORY_ERROR) {
+			handle_vm_exit_memory_error(run, test_id);
+			continue;
+		}
+
 		TEST_FAIL("Unhandled VCPU exit reason %d\n", run->exit_reason);
 		break;
 	}
-- 
2.36.0.550.gb090851708-goog
From: Vishal Annapurve
Date: Wed, 11 May 2022 00:08:06 +0000
Message-Id: <20220511000811.384766-5-vannapurve@google.com>
In-Reply-To: <20220511000811.384766-1-vannapurve@google.com>
Subject: [RFC V2 PATCH 4/8] selftests: kvm: priv_memfd_test: Add shared access test

Add a test that accesses private memory in a shared fashion, which
should exercise the implicit memory conversion path via
KVM_EXIT_MEMORY_ERROR.
Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/priv_memfd_test.c | 69 +++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
index 55e24c893b07..48bc4343e7b5 100644
--- a/tools/testing/selftests/kvm/priv_memfd_test.c
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -147,12 +147,81 @@ static void pmpat_guest_code(void)
 	GUEST_DONE();
 }
 
+/* Test to verify guest shared accesses on private memory with following steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM populates the shared memory with known pattern and continues guest
+ *    execution.
+ * 3) Guest reads private gpa range in a shared fashion and verifies that it
+ *    reads what VMM has written in step 2.
+ * 4) Guest writes a different pattern on the shared memory and signals VMM
+ *    that it has updated the shared memory.
+ * 5) VMM verifies shared memory contents to be same as the data populated
+ *    in step 4 and continues guest execution.
+ */
+#define PMSAT_ID 1
+#define PMSAT_DESC "PrivateMemorySharedAccessTest"
+
+/* Guest code execution stages for private mem access test */
+#define PMSAT_GUEST_STARTED 0ULL
+#define PMSAT_GUEST_TEST_MEM_UPDATED 1ULL
+
+static bool pmsat_handle_vm_stage(struct kvm_vm *vm,
+			void *test_info,
+			uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+
+	switch (stage) {
+	case PMSAT_GUEST_STARTED: {
+		/* Initialize the contents of shared memory */
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+				TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory update failed");
+		VM_STAGE_PROCESSED(PMSAT_GUEST_STARTED);
+		break;
+	}
+	case PMSAT_GUEST_TEST_MEM_UPDATED: {
+		/* verify data to be same as what guest wrote */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+				TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PMSAT_GUEST_TEST_MEM_UPDATED);
+		break;
+	}
+	default:
+		printf("Unhandled VM stage %ld\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
+static void pmsat_guest_code(void)
+{
+	void *shared_mem = (void *)TEST_MEM_GPA;
+
+	GUEST_SYNC(PMSAT_GUEST_STARTED);
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+
+	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+	GUEST_SYNC(PMSAT_GUEST_TEST_MEM_UPDATED);
+
+	GUEST_DONE();
+}
+
 static struct test_run_helper priv_memfd_testsuite[] = {
 	[PMPAT_ID] = {
 		.test_desc = PMPAT_DESC,
 		.vmst_handler = pmpat_handle_vm_stage,
 		.guest_fn = pmpat_guest_code,
 	},
+	[PMSAT_ID] = {
+		.test_desc = PMSAT_DESC,
+		.vmst_handler = pmsat_handle_vm_stage,
+		.guest_fn = pmsat_guest_code,
+	},
 };
 
 static void handle_vm_exit_hypercall(struct kvm_run *run,
-- 
2.36.0.550.gb090851708-goog
Date: Wed, 11 May 2022 00:08:07 +0000
In-Reply-To: <20220511000811.384766-1-vannapurve@google.com>
Message-Id: <20220511000811.384766-6-vannapurve@google.com>
Subject: [RFC V2 PATCH 5/8] selftests: kvm: Add implicit memory conversion tests
From: Vishal Annapurve

Add tests to exercise
the implicit memory conversion path.

Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/priv_memfd_test.c | 384 +++++++++++++++++-
 1 file changed, 383 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
index 48bc4343e7b5..f6f6b064a101 100644
--- a/tools/testing/selftests/kvm/priv_memfd_test.c
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -211,6 +211,369 @@ static void pmsat_guest_code(void)
 	GUEST_DONE();
 }
 
+/* Test to verify guest shared accesses on shared memory with the following
+ * steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM deallocates the backing private memory and populates the shared
+ *    memory with a known pattern and continues guest execution.
+ * 3) Guest reads the shared gpa range in a shared fashion and verifies that
+ *    it reads what the VMM wrote in step 2.
+ * 4) Guest writes a different pattern to the shared memory and signals VMM
+ *    that it has updated the shared memory.
+ * 5) VMM verifies the shared memory contents to be the same as the data
+ *    populated in step 4 and continues guest execution.
+ */
+#define SMSAT_ID		2
+#define SMSAT_DESC		"SharedMemorySharedAccessTest"
+
+#define SMSAT_GUEST_STARTED			0ULL
+#define SMSAT_GUEST_TEST_MEM_UPDATED		1ULL
+
+static bool smsat_handle_vm_stage(struct kvm_vm *vm,
+			void *test_info,
+			uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+	int priv_memfd = ((struct test_run_helper *)test_info)->priv_memfd;
+
+	switch (stage) {
+	case SMSAT_GUEST_STARTED: {
+		/* Remove the backing private memory storage */
+		int ret = fallocate(priv_memfd,
+			FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+			0, TEST_MEM_SIZE);
+		TEST_ASSERT(ret != -1,
+			"fallocate failed in smsat handling");
+		/* Initialize the contents of shared memory */
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory update failed");
+		VM_STAGE_PROCESSED(SMSAT_GUEST_STARTED);
+		break;
+	}
+	case SMSAT_GUEST_TEST_MEM_UPDATED: {
+		/* verify data to be same as what guest wrote */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(SMSAT_GUEST_TEST_MEM_UPDATED);
+		break;
+	}
+	default:
+		printf("Unhandled VM stage %ld\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
+static void smsat_guest_code(void)
+{
+	void *shared_mem = (void *)TEST_MEM_GPA;
+
+	GUEST_SYNC(SMSAT_GUEST_STARTED);
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+
+	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+	GUEST_SYNC(SMSAT_GUEST_TEST_MEM_UPDATED);
+
+	GUEST_DONE();
+}
+
+/* Test to verify guest private accesses on shared memory with the following
+ * steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM deallocates the backing private memory and populates the shared
+ *    memory with a known pattern and continues guest execution.
+ * 3) Guest writes the gpa range via private access and signals VMM.
+ * 4) VMM verifies the shared memory contents to be the same as the data
+ *    populated in step 2 and continues guest execution.
+ * 5) Guest reads the gpa range via private access and verifies that the
+ *    contents are the same as written in step 3.
+ */
+#define SMPAT_ID		3
+#define SMPAT_DESC		"SharedMemoryPrivateAccessTest"
+
+#define SMPAT_GUEST_STARTED			0ULL
+#define SMPAT_GUEST_TEST_MEM_UPDATED		1ULL
+
+static bool smpat_handle_vm_stage(struct kvm_vm *vm,
+			void *test_info,
+			uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+	int priv_memfd = ((struct test_run_helper *)test_info)->priv_memfd;
+
+	switch (stage) {
+	case SMPAT_GUEST_STARTED: {
+		/* Remove the backing private memory storage */
+		int ret = fallocate(priv_memfd,
+			FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+			0, TEST_MEM_SIZE);
+		TEST_ASSERT(ret != -1,
+			"fallocate failed in smpat handling");
+		/* Initialize the contents of shared memory */
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory update failed");
+		VM_STAGE_PROCESSED(SMPAT_GUEST_STARTED);
+		break;
+	}
+	case SMPAT_GUEST_TEST_MEM_UPDATED: {
+		/* verify data to be same as what vmm wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(SMPAT_GUEST_TEST_MEM_UPDATED);
+		break;
+	}
+	default:
+		printf("Unhandled VM stage %ld\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
+static void smpat_guest_code(void)
+{
+	void *shared_mem = (void *)TEST_MEM_GPA;
+	int ret;
+
+	GUEST_SYNC(SMPAT_GUEST_STARTED);
+
+	/* Mark the GPA range to be treated as always accessed privately */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+	GUEST_SYNC(SMPAT_GUEST_TEST_MEM_UPDATED);
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+	GUEST_DONE();
+}
+
+/* Test to verify guest shared and private accesses on memory with the
+ * following steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM populates the shared memory with a known pattern and continues
+ *    guest execution.
+ * 3) Guest writes the shared gpa range in a private fashion and signals VMM.
+ * 4) VMM verifies that the shared memory still contains the pattern written
+ *    in step 2 and continues guest execution.
+ * 5) Guest verifies the private memory contents to be the same as the data
+ *    populated in step 3 and signals VMM.
+ * 6) VMM removes the private memory backing, which should also clear out the
+ *    second stage mappings for the VM.
+ * 7) Guest does a shared write access on the shared memory and signals VMM.
+ * 8) VMM reads the shared memory and verifies that the data is the same as
+ *    what the guest wrote in step 7 and continues guest execution.
+ * 9) Guest reads the private memory and verifies that the data is the same
+ *    as written in step 7.
+ */
+#define PSAT_ID		4
+#define PSAT_DESC	"PrivateSharedAccessTest"
+
+#define PSAT_GUEST_STARTED			0ULL
+#define PSAT_GUEST_PRIVATE_MEM_UPDATED		1ULL
+#define PSAT_GUEST_PRIVATE_MEM_VERIFIED		2ULL
+#define PSAT_GUEST_SHARED_MEM_UPDATED		3ULL
+
+static bool psat_handle_vm_stage(struct kvm_vm *vm,
+			void *test_info,
+			uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+	int priv_memfd = ((struct test_run_helper *)test_info)->priv_memfd;
+
+	switch (stage) {
+	case PSAT_GUEST_STARTED: {
+		/* Initialize the contents of shared memory */
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory update failed");
+		VM_STAGE_PROCESSED(PSAT_GUEST_STARTED);
+		break;
+	}
+	case PSAT_GUEST_PRIVATE_MEM_UPDATED: {
+		/* verify data to be same as what vmm wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PSAT_GUEST_PRIVATE_MEM_UPDATED);
+		break;
+	}
+	case PSAT_GUEST_PRIVATE_MEM_VERIFIED: {
+		/* Remove the backing private memory storage so that
+		 * subsequent accesses from the guest cause a second stage
+		 * page fault
+		 */
+		int ret = fallocate(priv_memfd,
+			FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+			0, TEST_MEM_SIZE);
+		TEST_ASSERT(ret != -1,
+			"fallocate failed in psat handling");
+		VM_STAGE_PROCESSED(PSAT_GUEST_PRIVATE_MEM_VERIFIED);
+		break;
+	}
+	case PSAT_GUEST_SHARED_MEM_UPDATED: {
+		/* verify data to be same as what guest wrote */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PSAT_GUEST_SHARED_MEM_UPDATED);
+		break;
+	}
+	default:
+		printf("Unhandled VM stage %ld\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
+static void psat_guest_code(void)
+{
+	void *shared_mem = (void *)TEST_MEM_GPA;
+	int ret;
+
+	GUEST_SYNC(PSAT_GUEST_STARTED);
+	/* Mark the GPA range to be treated as
+	   always accessed privately */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+	GUEST_SYNC(PSAT_GUEST_PRIVATE_MEM_UPDATED);
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+	GUEST_SYNC(PSAT_GUEST_PRIVATE_MEM_VERIFIED);
+
+	/* Mark no GPA range to be treated as accessed privately */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, 0,
+		0, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+	GUEST_SYNC(PSAT_GUEST_SHARED_MEM_UPDATED);
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+	GUEST_DONE();
+}
+
+/* Test to verify guest shared and private accesses on memory with the
+ * following steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM removes the private memory backing and populates the shared memory
+ *    with a known pattern and continues guest execution.
+ * 3) Guest reads the shared gpa range in a shared fashion and verifies that
+ *    it reads what the VMM wrote in step 2.
+ * 4) Guest writes a different pattern to the shared memory and signals VMM
+ *    that it has updated the shared memory.
+ * 5) VMM verifies the shared memory contents to be the same as the data
+ *    populated in step 4 and installs the private memory backing again to
+ *    allow the guest to do private accesses and invalidate second stage
+ *    mappings.
+ * 6) Guest does a private write access on the shared memory and signals VMM.
+ * 7) VMM reads the shared memory and verifies that the data is still the
+ *    same as in step 4 and continues guest execution.
+ * 8) Guest reads the private memory and verifies that the data is the same
+ *    as written in step 6.
+ */
+#define SPAT_ID		5
+#define SPAT_DESC	"SharedPrivateAccessTest"
+
+#define SPAT_GUEST_STARTED			0ULL
+#define SPAT_GUEST_SHARED_MEM_UPDATED		1ULL
+#define SPAT_GUEST_PRIVATE_MEM_UPDATED		2ULL
+
+static bool spat_handle_vm_stage(struct kvm_vm *vm,
+			void *test_info,
+			uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+	int priv_memfd = ((struct test_run_helper *)test_info)->priv_memfd;
+
+	switch (stage) {
+	case SPAT_GUEST_STARTED: {
+		/* Remove the backing private memory storage so that
+		 * subsequent accesses from the guest cause a second stage
+		 * page fault
+		 */
+		int ret = fallocate(priv_memfd,
+			FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+			0, TEST_MEM_SIZE);
+		TEST_ASSERT(ret != -1,
+			"fallocate failed in spat handling");
+
+		/* Initialize the contents of shared memory */
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory update failed");
+		VM_STAGE_PROCESSED(SPAT_GUEST_STARTED);
+		break;
+	}
+	case SPAT_GUEST_SHARED_MEM_UPDATED: {
+		/* verify data to be same as what guest wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		/* Allocate memory for private backing store */
+		int ret = fallocate(priv_memfd,
+			0, 0, TEST_MEM_SIZE);
+		TEST_ASSERT(ret != -1,
+			"fallocate failed in spat handling");
+		VM_STAGE_PROCESSED(SPAT_GUEST_SHARED_MEM_UPDATED);
+		break;
+	}
+	case SPAT_GUEST_PRIVATE_MEM_UPDATED: {
+		/* verify data to be same as what guest wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(SPAT_GUEST_PRIVATE_MEM_UPDATED);
+		break;
+	}
+	default:
+		printf("Unhandled VM stage %ld\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
+static void spat_guest_code(void)
+{
+	void *shared_mem = (void *)TEST_MEM_GPA;
+	int ret;
+
+	GUEST_SYNC(SPAT_GUEST_STARTED);
+
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+	GUEST_SYNC(SPAT_GUEST_SHARED_MEM_UPDATED);
+	/* Mark the GPA range to be treated as always accessed privately */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+	GUEST_SYNC(SPAT_GUEST_PRIVATE_MEM_UPDATED);
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+	GUEST_DONE();
+}
+
 static struct test_run_helper priv_memfd_testsuite[] = {
 	[PMPAT_ID] = {
 		.test_desc = PMPAT_DESC,
@@ -222,6 +585,26 @@ static struct test_run_helper priv_memfd_testsuite[] = {
 		.vmst_handler = pmsat_handle_vm_stage,
 		.guest_fn = pmsat_guest_code,
 	},
+	[SMSAT_ID] = {
+		.test_desc = SMSAT_DESC,
+		.vmst_handler = smsat_handle_vm_stage,
+		.guest_fn = smsat_guest_code,
+	},
+	[SMPAT_ID] = {
+		.test_desc = SMPAT_DESC,
+		.vmst_handler = smpat_handle_vm_stage,
+		.guest_fn = smpat_guest_code,
+	},
+	[PSAT_ID] = {
+		.test_desc = PSAT_DESC,
+		.vmst_handler = psat_handle_vm_stage,
+		.guest_fn = psat_guest_code,
+	},
+	[SPAT_ID] = {
+		.test_desc = SPAT_DESC,
+		.vmst_handler = spat_handle_vm_stage,
+		.guest_fn = spat_guest_code,
+	},
 };
 
 static void handle_vm_exit_hypercall(struct kvm_run *run,
@@ -365,7 +748,6 @@ static void priv_memory_region_add(struct kvm_vm *vm, void *mem, uint32_t slot,
 		guest_addr);
 }
 
-/* Do private access to the guest's private memory */
 static void setup_and_execute_test(uint32_t test_id)
 {
 	struct kvm_vm *vm;
-- 
2.36.0.550.gb090851708-goog

From nobody Sun May 10 09:54:15 2026
Date: Wed, 11 May 2022 00:08:08 +0000
In-Reply-To: <20220511000811.384766-1-vannapurve@google.com>
Message-Id: <20220511000811.384766-7-vannapurve@google.com>
Subject: [RFC V2 PATCH 6/8] selftests: kvm: Add KVM_HC_MAP_GPA_RANGE hypercall test
From: Vishal Annapurve

Add a test to
exercise the explicit memory conversion path using the
KVM_HC_MAP_GPA_RANGE hypercall.

Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/priv_memfd_test.c | 148 ++++++++++++++++++
 1 file changed, 148 insertions(+)

diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
index f6f6b064a101..c2ea8f67337c 100644
--- a/tools/testing/selftests/kvm/priv_memfd_test.c
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -574,6 +574,149 @@ static void spat_guest_code(void)
 	GUEST_DONE();
 }
 
+/* Test to verify guest private, shared, private accesses on memory with the
+ * following steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM initializes the shared memory with a known pattern and continues
+ *    guest execution.
+ * 3) Guest writes the private memory privately via a known pattern and
+ *    signals VMM.
+ * 4) VMM reads the shared memory and verifies that it is the same as what
+ *    was written in step 2 and continues guest execution.
+ * 5) Guest reads the private memory privately and verifies that the contents
+ *    are the same as written in step 3.
+ * 6) Guest invokes KVM_HC_MAP_GPA_RANGE to map the hpa range as shared
+ *    and marks the range to be accessed via shared access.
+ * 7) Guest does a shared access to the shared memory and verifies that the
+ *    contents are the same as written in step 2.
+ * 8) Guest writes a known pattern to the test memory and signals VMM.
+ * 9) VMM verifies the memory contents to be the same as written by the guest
+ *    in step 8.
+ * 10) Guest invokes KVM_HC_MAP_GPA_RANGE to map the hpa range as private
+ *    and marks the range to be accessed via private access.
+ * 11) Guest writes a known pattern to the test memory and signals VMM.
+ * 12) VMM verifies the memory contents to be the same as written by the
+ *    guest in step 8 and continues guest execution.
+ * 13) Guest verifies the memory pattern to be the same as written in step 11.
+ */
+#define PSPAHCT_ID		6
+#define PSPAHCT_DESC		"PrivateSharedPrivateAccessHyperCallTest"
+
+#define PSPAHCT_GUEST_STARTED				0ULL
+#define PSPAHCT_GUEST_PRIVATE_MEM_UPDATED		1ULL
+#define PSPAHCT_GUEST_SHARED_MEM_UPDATED		2ULL
+#define PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2		3ULL
+
+static bool pspahct_handle_vm_stage(struct kvm_vm *vm,
+			void *test_info,
+			uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+
+	switch (stage) {
+	case PSPAHCT_GUEST_STARTED: {
+		/* Initialize the contents of shared memory */
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory update failed");
+		VM_STAGE_PROCESSED(PSPAHCT_GUEST_STARTED);
+		break;
+	}
+	case PSPAHCT_GUEST_PRIVATE_MEM_UPDATED: {
+		/* verify data to be same as what guest wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED);
+		break;
+	}
+	case PSPAHCT_GUEST_SHARED_MEM_UPDATED: {
+		/* verify data to be same as what guest wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PSPAHCT_GUEST_SHARED_MEM_UPDATED);
+		break;
+	}
+	case PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2: {
+		/* verify data to be same as what guest wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2);
+		break;
+	}
+	default:
+		printf("Unhandled VM stage %ld\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
+static void pspahct_guest_code(void)
+{
+	void *test_mem = (void *)TEST_MEM_GPA;
+	int ret;
+
+	GUEST_SYNC(PSPAHCT_GUEST_STARTED);
+
+	/* Mark the GPA range to be treated as always accessed privately */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+	GUEST_ASSERT(do_mem_op(SET_PAT, test_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+	GUEST_SYNC(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED);
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+	/* Map the GPA range to be treated as shared */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		KVM_MAP_GPA_RANGE_DECRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	/* Mark the GPA range to be treated as always accessed via shared
+	 * access
+	 */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, 0, 0,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+	GUEST_ASSERT(do_mem_op(SET_PAT, test_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+	GUEST_SYNC(PSPAHCT_GUEST_SHARED_MEM_UPDATED);
+
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+	/* Map the GPA range to be treated as private */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		KVM_MAP_GPA_RANGE_ENCRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	/* Mark the GPA range to be treated as always accessed via private
+	 * access
+	 */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	GUEST_ASSERT(do_mem_op(SET_PAT, test_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+	GUEST_SYNC(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2);
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+	GUEST_DONE();
+}
+
 static struct test_run_helper priv_memfd_testsuite[] = {
 	[PMPAT_ID] = {
 		.test_desc = PMPAT_DESC,
@@ -605,6 +748,11 @@ static struct test_run_helper priv_memfd_testsuite[] = {
 		.vmst_handler = spat_handle_vm_stage,
 		.guest_fn = spat_guest_code,
 	},
+	[PSPAHCT_ID] = {
+		.test_desc = PSPAHCT_DESC,
+		.vmst_handler = pspahct_handle_vm_stage,
+		.guest_fn = pspahct_guest_code,
+	},
 };
 
 static void handle_vm_exit_hypercall(struct kvm_run *run,
-- 
2.36.0.550.gb090851708-goog

From nobody Sun May 10 09:54:15 2026
Date: Wed, 11 May 2022 00:08:09 +0000
In-Reply-To: <20220511000811.384766-1-vannapurve@google.com>
Message-Id: <20220511000811.384766-8-vannapurve@google.com>
Subject: [RFC V2 PATCH 7/8] selftests: kvm: Add hugepage support to priv_memfd_test suite.
From: Vishal Annapurve

From: Austin Diviness

Add the ability to run the priv_memfd_test suite across various page
sizes for shared/private memory. Shared and private memory can be
allocated with different page sizes. To verify that behavior does not
change with the backing page size, this change runs the tests across the
currently supported permutations. Also add command-line flags to control
whether the tests run with hugepages backing the test memory.
Signed-off-by: Austin Diviness
Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/priv_memfd_test.c | 369 ++++++++++++++----
 1 file changed, 294 insertions(+), 75 deletions(-)

diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
index c2ea8f67337c..dbe6ead92ba7 100644
--- a/tools/testing/selftests/kvm/priv_memfd_test.c
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 #define _GNU_SOURCE /* for program_invocation_short_name */ #include +#include #include #include #include
@@ -17,9 +18,18 @@ #include #include +#define BYTE_MASK 0xFF + +// flags for mmap +#define MAP_HUGE_2MB (21 << MAP_HUGE_SHIFT) +#define MAP_HUGE_1GB (30 << MAP_HUGE_SHIFT) + +// page sizes +#define PAGE_SIZE_4KB ((size_t)0x1000) +#define PAGE_SIZE_2MB (PAGE_SIZE_4KB * (size_t)512) +#define PAGE_SIZE_1GB ((PAGE_SIZE_4KB * 256) * 1024) + #define TEST_MEM_GPA 0xb0000000 -#define TEST_MEM_SIZE 0x2000 -#define TEST_MEM_END (TEST_MEM_GPA + TEST_MEM_SIZE) #define TEST_MEM_DATA_PAT1 0x6666666666666666 #define TEST_MEM_DATA_PAT2 0x9999999999999999 #define TEST_MEM_DATA_PAT3 0x3333333333333333
@@ -34,8 +44,16 @@ enum mem_op { #define VCPU_ID 0 +// address where guests can receive the mem size of the data +// allocated to them by the vmm +#define MEM_SIZE_MMIO_ADDRESS 0xa0000000 + #define VM_STAGE_PROCESSED(x) pr_info("Processed stage %s\n", #x) +// global used for storing the current mem allocation size +// for the running test +static size_t test_mem_size; + typedef bool (*vm_stage_handler_fn)(struct kvm_vm *, void *, uint64_t); typedef void (*guest_code_fn)(void);
@@ -47,6 +65,36 @@ struct test_run_helper { int priv_memfd; }; +enum page_size { + PAGE_4KB, + PAGE_2MB, + PAGE_1GB +}; + +struct page_combo { + enum page_size shared; + enum page_size private; +}; + +static char *page_size_to_str(enum page_size x) +{ + switch (x) { + case PAGE_4KB: + return "PAGE_4KB"; + case PAGE_2MB: + return "PAGE_2MB"; + case PAGE_1GB: + return "PAGE_1GB"; + default: + return "UNKNOWN"; + } +} + +static uint64_t test_mem_end(const uint64_t start, const uint64_t size) +{ + return start + size; +} + /* Guest code in selftests is loaded to guest memory using kvm_vm_elf_load * which doesn't handle global offset table updates. Calling standard libc * functions would normally result in referring to the global offset table.
@@ -103,7 +151,7 @@ static bool pmpat_handle_vm_stage(struct kvm_vm *vm, case PMPAT_GUEST_STARTED: { /* Initialize the contents of shared memory */ TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory update failure"); VM_STAGE_PROCESSED(PMPAT_GUEST_STARTED); break;
@@ -111,7 +159,7 @@ case PMPAT_GUEST_PRIV_MEM_UPDATED: { /* verify host updated data is still intact */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(PMPAT_GUEST_PRIV_MEM_UPDATED); break;
@@ -131,18 +179,20 @@ static void pmpat_guest_code(void) GUEST_SYNC(PMPAT_GUEST_STARTED); + const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS); + /* Mark the GPA range to be treated as always accessed privately */ ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, - TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); GUEST_ASSERT_1(ret == 0, ret); GUEST_ASSERT(do_mem_op(SET_PAT, priv_mem, TEST_MEM_DATA_PAT2, - TEST_MEM_SIZE)); + mem_size)); GUEST_SYNC(PMPAT_GUEST_PRIV_MEM_UPDATED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, priv_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_DONE(); }
@@ -175,7 +225,7 @@ static bool pmsat_handle_vm_stage(struct kvm_vm *vm, case PMSAT_GUEST_STARTED: { /* Initialize the contents of shared memory */ TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory update failed"); VM_STAGE_PROCESSED(PMSAT_GUEST_STARTED); break;
@@ -183,7 +233,7 @@ static bool pmsat_handle_vm_stage(struct kvm_vm *vm, case PMSAT_GUEST_TEST_MEM_UPDATED: { /* verify data to be same as what guest wrote */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT2, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(PMSAT_GUEST_TEST_MEM_UPDATED); break;
@@ -199,13 +249,14 @@ static bool pmsat_handle_vm_stage(struct kvm_vm *vm, static void pmsat_guest_code(void) { void *shared_mem = (void *)TEST_MEM_GPA; + const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS); GUEST_SYNC(PMSAT_GUEST_STARTED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT1, mem_size)); GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_SYNC(PMSAT_GUEST_TEST_MEM_UPDATED); GUEST_DONE(); }
@@ -240,12 +291,12 @@ static bool smsat_handle_vm_stage(struct kvm_vm *vm, /* Remove the backing private memory storage */ int ret = fallocate(priv_memfd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, - 0, TEST_MEM_SIZE); + 0, test_mem_size); TEST_ASSERT(ret != -1, "fallocate failed in smsat handling"); /* Initialize the contents of shared memory */ TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory updated failed"); VM_STAGE_PROCESSED(SMSAT_GUEST_STARTED); break;
@@ -253,7 +304,7 @@ static bool smsat_handle_vm_stage(struct kvm_vm *vm, case SMSAT_GUEST_TEST_MEM_UPDATED: { /* verify data to be same as what guest wrote */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT2, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(SMSAT_GUEST_TEST_MEM_UPDATED); break;
@@ -269,13 +320,14 @@ static bool smsat_handle_vm_stage(struct kvm_vm *vm, static void smsat_guest_code(void) { void *shared_mem = (void *)TEST_MEM_GPA; + const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS); GUEST_SYNC(SMSAT_GUEST_STARTED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT1, mem_size)); GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_SYNC(SMSAT_GUEST_TEST_MEM_UPDATED); GUEST_DONE(); }
@@ -309,12 +361,12 @@ static bool smpat_handle_vm_stage(struct kvm_vm *vm, /* Remove the backing private memory storage */ int ret = fallocate(priv_memfd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, - 0, TEST_MEM_SIZE); + 0, test_mem_size); TEST_ASSERT(ret != -1, "fallocate failed in smpat handling"); /* Initialize the contents of shared memory */ TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory updated failed"); VM_STAGE_PROCESSED(SMPAT_GUEST_STARTED); break;
@@ -322,7 +374,7 @@ static bool smpat_handle_vm_stage(struct kvm_vm *vm, case SMPAT_GUEST_TEST_MEM_UPDATED: { /* verify data to be same as what vmm wrote earlier */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(SMPAT_GUEST_TEST_MEM_UPDATED); break;
@@ -342,17 +394,19 @@ static void smpat_guest_code(void) GUEST_SYNC(SMPAT_GUEST_STARTED); + const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS); + /* Mark the GPA range to be treated as always accessed privately */ ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, - TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); GUEST_ASSERT_1(ret == 0, ret); GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_SYNC(SMPAT_GUEST_TEST_MEM_UPDATED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_DONE(); }
@@ -394,7 +448,7 @@ static bool psat_handle_vm_stage(struct kvm_vm *vm, case PSAT_GUEST_STARTED: { /* Initialize the contents of shared memory */ TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory update failed"); VM_STAGE_PROCESSED(PSAT_GUEST_STARTED); break;
@@ -402,7 +456,7 @@ static bool psat_handle_vm_stage(struct kvm_vm *vm, case PSAT_GUEST_PRIVATE_MEM_UPDATED: { /* verify data to be same as what vmm wrote earlier */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(PSAT_GUEST_PRIVATE_MEM_UPDATED); break;
@@ -414,7 +468,7 @@ static bool psat_handle_vm_stage(struct kvm_vm *vm, */ int ret = fallocate(priv_memfd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, - 0, TEST_MEM_SIZE); + 0, test_mem_size); TEST_ASSERT(ret != -1, "fallocate failed in smpat handling"); VM_STAGE_PROCESSED(PSAT_GUEST_PRIVATE_MEM_VERIFIED);
@@ -423,7 +477,7 @@ static bool psat_handle_vm_stage(struct kvm_vm *vm, case PSAT_GUEST_SHARED_MEM_UPDATED: { /* verify data to be same as what guest wrote */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT2, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(PSAT_GUEST_SHARED_MEM_UPDATED); break;
@@ -442,17 +496,20 @@ static void psat_guest_code(void) int ret; GUEST_SYNC(PSAT_GUEST_STARTED); + + const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS); + /* Mark the GPA range to be treated as always accessed privately */ ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, - TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); GUEST_ASSERT_1(ret == 0, ret); GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_SYNC(PSAT_GUEST_PRIVATE_MEM_UPDATED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_SYNC(PSAT_GUEST_PRIVATE_MEM_VERIFIED);
@@ -461,10 +518,10 @@ static void psat_guest_code(void) 0, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); GUEST_ASSERT_1(ret == 0, ret); GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_SYNC(PSAT_GUEST_SHARED_MEM_UPDATED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_DONE(); }
@@ -509,13 +566,13 @@ static bool spat_handle_vm_stage(struct kvm_vm *vm, */ int ret = fallocate(priv_memfd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, - 0, TEST_MEM_SIZE); + 0, test_mem_size); TEST_ASSERT(ret != -1, "fallocate failed in spat handling"); /* Initialize the contents of shared memory */ TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory updated failed"); VM_STAGE_PROCESSED(SPAT_GUEST_STARTED); break;
@@ -523,11 +580,11 @@ static bool spat_handle_vm_stage(struct kvm_vm *vm, case SPAT_GUEST_SHARED_MEM_UPDATED: { /* verify data to be same as what guest wrote earlier */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT2, test_mem_size), "Shared memory view mismatch"); /* Allocate memory for private backing store */ int ret = fallocate(priv_memfd, - 0, 0, TEST_MEM_SIZE); + 0, 0, test_mem_size); TEST_ASSERT(ret != -1, "fallocate failed in spat handling"); VM_STAGE_PROCESSED(SPAT_GUEST_SHARED_MEM_UPDATED);
@@ -536,7 +593,7 @@ static bool spat_handle_vm_stage(struct kvm_vm *vm, case SPAT_GUEST_PRIVATE_MEM_UPDATED: { /* verify data to be same as what guest wrote earlier */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT2, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(SPAT_GUEST_PRIVATE_MEM_UPDATED); break;
@@ -554,23 +611,26 @@ static void spat_guest_code(void) void *shared_mem = (void *)TEST_MEM_GPA; int ret; + const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS); + GUEST_SYNC(SPAT_GUEST_STARTED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT1, mem_size)); GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); + GUEST_SYNC(SPAT_GUEST_SHARED_MEM_UPDATED); /* Mark the GPA range to be treated as always accessed privately */ ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, - TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); GUEST_ASSERT_1(ret == 0, ret); GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT1, mem_size)); GUEST_SYNC(PSAT_GUEST_PRIVATE_MEM_UPDATED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT1, mem_size)); GUEST_DONE(); }
@@ -617,7 +677,7 @@ static bool pspahct_handle_vm_stage(struct kvm_vm *vm, case PSPAHCT_GUEST_STARTED: { /* Initialize the contents of shared memory */ TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory update failed"); VM_STAGE_PROCESSED(PSPAHCT_GUEST_STARTED); break;
@@ -625,7 +685,7 @@ static bool pspahct_handle_vm_stage(struct kvm_vm *vm, case PSPAHCT_GUEST_PRIVATE_MEM_UPDATED: { /* verify data to be same as what guest wrote earlier */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED); break;
@@ -633,7 +693,7 @@ static bool pspahct_handle_vm_stage(struct kvm_vm *vm, case PSPAHCT_GUEST_SHARED_MEM_UPDATED: { /* verify data to be same as what guest wrote earlier */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT2, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(PSPAHCT_GUEST_SHARED_MEM_UPDATED); break;
@@ -641,7 +701,7 @@ static bool pspahct_handle_vm_stage(struct kvm_vm *vm, case PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2: { /* verify data to be same as what guest wrote earlier */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT2, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2); break;
@@ -661,21 +721,23 @@ static void pspahct_guest_code(void) GUEST_SYNC(PSPAHCT_GUEST_STARTED); + const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS); + /* Mark the GPA range to be treated as always accessed privately */ ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, - TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); GUEST_ASSERT_1(ret == 0, ret); GUEST_ASSERT(do_mem_op(SET_PAT, test_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_SYNC(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); /* Map the GPA range to be treated as shared */ ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, - TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + mem_size >> MIN_PAGE_SHIFT, KVM_MAP_GPA_RANGE_DECRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0); GUEST_ASSERT_1(ret == 0, ret);
@@ -687,17 +749,17 @@ static void pspahct_guest_code(void) GUEST_ASSERT_1(ret == 0, ret); GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT1, mem_size)); GUEST_ASSERT(do_mem_op(SET_PAT, test_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_SYNC(PSPAHCT_GUEST_SHARED_MEM_UPDATED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); /* Map the GPA range to be treated as private */ ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, - TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + mem_size >> MIN_PAGE_SHIFT, KVM_MAP_GPA_RANGE_ENCRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0); GUEST_ASSERT_1(ret == 0, ret);
@@ -705,15 +767,15 @@ static void pspahct_guest_code(void) * access */ ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, - TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); GUEST_ASSERT_1(ret == 0, ret); GUEST_ASSERT(do_mem_op(SET_PAT, test_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT1, mem_size)); GUEST_SYNC(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2); GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT1, mem_size)); GUEST_DONE(); }
@@ -758,7 +820,7 @@ static struct test_run_helper priv_memfd_testsuite[] = { static void handle_vm_exit_hypercall(struct kvm_run *run, uint32_t test_id) { - uint64_t gpa, npages, attrs; + uint64_t gpa, npages, attrs, mem_end; int priv_memfd = priv_memfd_testsuite[test_id].priv_memfd; int ret;
@@ -772,9 +834,10 @@ static void handle_vm_exit_hypercall(struct kvm_run *run, gpa = run->hypercall.args[0]; npages = run->hypercall.args[1]; attrs = run->hypercall.args[2]; + mem_end = test_mem_end(gpa, test_mem_size); if ((gpa < TEST_MEM_GPA) || ((gpa + - (npages << MIN_PAGE_SHIFT)) > TEST_MEM_END)) { + (npages << MIN_PAGE_SHIFT)) > mem_end)) { TEST_FAIL("Unhandled gpa 0x%lx npages %ld\n", gpa, npages); }
@@ -800,7 +863,7 @@ static void handle_vm_exit_hypercall(struct kvm_run *run, static void handle_vm_exit_memory_error(struct kvm_run *run, uint32_t test_id) { - uint64_t gpa, size, flags; + uint64_t gpa, size, flags, mem_end; int ret; int priv_memfd = priv_memfd_testsuite[test_id].priv_memfd;
@@ -809,9 +872,10 @@ static void handle_vm_exit_memory_error(struct kvm_run *run, gpa = run->memory.gpa; size = run->memory.size; flags = run->memory.flags; + mem_end = test_mem_end(gpa, test_mem_size); if ((gpa < TEST_MEM_GPA) || ((gpa + size) - > TEST_MEM_END)) { + > mem_end)) { TEST_FAIL("Unhandled gpa 0x%lx size 0x%lx\n", gpa, size); }
@@ -858,6 +922,22 @@ static void vcpu_work(struct kvm_vm *vm, uint32_t test_id) continue; } + if (run->exit_reason == KVM_EXIT_MMIO) { + if (run->mmio.phys_addr == MEM_SIZE_MMIO_ADDRESS) { + // tell the guest the size of the memory + // it's been allocated + int shift_amount = 0; + + for (int i = 0; i < sizeof(uint64_t); ++i) { + run->mmio.data[i] = + (test_mem_size >> + shift_amount) & BYTE_MASK; + shift_amount += CHAR_BIT; + } + } + continue; + } + if (run->exit_reason == KVM_EXIT_HYPERCALL) { handle_vm_exit_hypercall(run, test_id); continue;
@@ -896,7 +976,9 @@ static void priv_memory_region_add(struct kvm_vm *vm, void *mem, uint32_t slot, guest_addr); } -static void setup_and_execute_test(uint32_t test_id) +static void setup_and_execute_test(uint32_t test_id, + const enum page_size shared, + const enum page_size private) { struct kvm_vm *vm; int priv_memfd;
@@ -907,27 +989,82 @@ static void setup_and_execute_test(uint32_t test_id) vm = vm_create_default(VCPU_ID, 0, priv_memfd_testsuite[test_id].guest_fn); + // use 2 pages by default + size_t mem_size = PAGE_SIZE_4KB * 2; + bool using_hugepages = false; + + int mmap_flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE; + + switch (shared) { + case PAGE_4KB: + // no additional flags are needed + break; + case PAGE_2MB: + mmap_flags |= MAP_HUGETLB | MAP_HUGE_2MB | MAP_POPULATE; + mem_size = max(mem_size, PAGE_SIZE_2MB); + using_hugepages = true; + break; + case PAGE_1GB: + mmap_flags |= MAP_HUGETLB | MAP_HUGE_1GB | MAP_POPULATE; + mem_size = max(mem_size, PAGE_SIZE_1GB); + using_hugepages = true; + break; + default: + TEST_FAIL("unknown page size for shared memory\n"); + } + + unsigned int memfd_flags = MFD_INACCESSIBLE; + + switch (private) { + case PAGE_4KB: + // no additional flags are needed + break; + case PAGE_2MB: + memfd_flags |= MFD_HUGETLB | MFD_HUGE_2MB; + mem_size = PAGE_SIZE_2MB; + using_hugepages = true; + break; + case PAGE_1GB: + memfd_flags |= MFD_HUGETLB | MFD_HUGE_1GB; + mem_size = PAGE_SIZE_1GB; + using_hugepages = true; + break; + default: + TEST_FAIL("unknown page size for private memory\n"); + } + + // set global for mem size to use later + test_mem_size = mem_size; + /* Allocate shared memory */ - shared_mem = mmap(NULL, TEST_MEM_SIZE, + shared_mem = mmap(NULL, mem_size, PROT_READ | PROT_WRITE, - MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0); + mmap_flags, -1, 0); TEST_ASSERT(shared_mem != MAP_FAILED, "Failed to mmap() host"); + if (using_hugepages) { + ret = madvise(shared_mem, mem_size, MADV_WILLNEED); + TEST_ASSERT(ret == 0, "madvise failed"); + } + /* Allocate private memory */ - priv_memfd = memfd_create("vm_private_mem", MFD_INACCESSIBLE); + priv_memfd = memfd_create("vm_private_mem", memfd_flags); TEST_ASSERT(priv_memfd != -1, "Failed to create priv_memfd"); - ret = fallocate(priv_memfd, 0, 0, TEST_MEM_SIZE); + ret = fallocate(priv_memfd, 0, 0, mem_size); TEST_ASSERT(ret != -1, "fallocate failed"); priv_memory_region_add(vm, shared_mem, - TEST_MEM_SLOT, TEST_MEM_SIZE, + TEST_MEM_SLOT, mem_size, TEST_MEM_GPA, priv_memfd, 0); - pr_info("Mapping test memory pages 0x%x page_size 0x%x\n", - TEST_MEM_SIZE/vm_get_page_size(vm), + pr_info("Mapping test memory pages 0x%zx page_size 0x%x\n", + mem_size/vm_get_page_size(vm), vm_get_page_size(vm)); virt_map(vm, TEST_MEM_GPA, TEST_MEM_GPA, - (TEST_MEM_SIZE/vm_get_page_size(vm))); + (mem_size/vm_get_page_size(vm))); + + // add mmio communication page + virt_map(vm, MEM_SIZE_MMIO_ADDRESS, MEM_SIZE_MMIO_ADDRESS, 1); /* Enable exit on KVM_HC_MAP_GPA_RANGE */ pr_info("Enabling exit on map_gpa_range hypercall\n");
@@ -945,24 +1082,106 @@ static void setup_and_execute_test(uint32_t test_id) priv_memfd_testsuite[test_id].priv_memfd = priv_memfd; vcpu_work(vm, test_id); - munmap(shared_mem, TEST_MEM_SIZE); + munmap(shared_mem, mem_size); priv_memfd_testsuite[test_id].shared_mem = NULL; close(priv_memfd); priv_memfd_testsuite[test_id].priv_memfd = -1; kvm_vm_free(vm); } +static void hugepage_requirements_text(const struct page_combo matrix) +{ + int pages_needed_2mb = 0; + int pages_needed_1gb = 0; + enum page_size sizes[] = { matrix.shared, matrix.private }; + + for (int i = 0; i < ARRAY_SIZE(sizes); ++i) { + if (sizes[i] == PAGE_2MB) + ++pages_needed_2mb; + if (sizes[i] == PAGE_1GB) + ++pages_needed_1gb; + } + if (pages_needed_2mb != 0 && pages_needed_1gb != 0) { + pr_info("This test requires %d 2MB page(s) and %d 1GB page(s)\n", + pages_needed_2mb, pages_needed_1gb); + } else if (pages_needed_2mb != 0) { + pr_info("This test requires %d 2MB page(s)\n", pages_needed_2mb); + } else if (pages_needed_1gb != 0) { + pr_info("This test requires %d 1GB page(s)\n", pages_needed_1gb); + } +} + +static bool should_skip_test(const struct page_combo matrix, + const bool use_2mb_pages, + const bool use_1gb_pages) +{ + if ((matrix.shared == PAGE_2MB || matrix.private == PAGE_2MB) + && !use_2mb_pages) + return true; + if ((matrix.shared == PAGE_1GB || matrix.private == PAGE_1GB) + && !use_1gb_pages) + return true; + return false; +} + +static void print_help(const char *const name) +{ + puts(""); + printf("usage %s [-h] [-m] [-g]\n", name); + puts(""); + printf(" -h: Display this help message\n"); + printf(" -m: include test runs using 2MB page permutations\n"); + printf(" -g: include test runs using 1GB page permutations\n"); + exit(0); +} + int main(int argc, char *argv[]) { /* Tell stdout not to buffer its content */ setbuf(stdout, NULL); + // arg parsing + int opt; + bool use_2mb_pages = false; + bool use_1gb_pages = false; + + while ((opt = getopt(argc, argv, "mgh")) != -1) { + switch (opt) { + case 'm': + use_2mb_pages = true; + break; + case 'g': + use_1gb_pages = true; + break; + case 'h': + default: + print_help(argv[0]); + } + } + + struct page_combo page_size_matrix[] = { + { .shared = PAGE_4KB, .private = PAGE_4KB }, + { .shared = PAGE_2MB, .private = PAGE_4KB }, + }; + for (uint32_t i = 0; i < ARRAY_SIZE(priv_memfd_testsuite); i++) { - pr_info("=== Starting test %s... ===\n", - priv_memfd_testsuite[i].test_desc); - setup_and_execute_test(i); - pr_info("--- completed test %s ---\n\n", - priv_memfd_testsuite[i].test_desc); + for (uint32_t j = 0; j < ARRAY_SIZE(page_size_matrix); j++) { + const struct page_combo current_page_matrix = page_size_matrix[j]; + + if (should_skip_test(current_page_matrix, + use_2mb_pages, use_1gb_pages)) + break; + pr_info("=== Starting test %s... 
===\n", + priv_memfd_testsuite[i].test_desc); + pr_info("using page sizes shared: %s private: %s\n", + page_size_to_str(current_page_matrix.shared), + page_size_to_str(current_page_matrix.private)); + hugepage_requirements_text(current_page_matrix); + setup_and_execute_test(i, current_page_matrix.shared, + current_page_matrix.private); + pr_info("--- completed test %s ---\n\n", + priv_memfd_testsuite[i].test_desc); + } } return 0;
-- 
2.36.0.550.gb090851708-goog
Date: Wed, 11 May 2022 00:08:11 +0000
In-Reply-To: <20220511000811.384766-1-vannapurve@google.com>
Message-Id: <20220511000811.384766-10-vannapurve@google.com>
Mime-Version: 1.0
References: <20220511000811.384766-1-vannapurve@google.com>
Subject: [RFC V2 PATCH 8/8] selftests: kvm: priv_memfd: Add test without double allocation
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shauh@kernel.org, yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com, aaronlewis@google.com, wei.w.wang@intel.com,
kirill.shutemov@linux.intel.com, corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org, akpm@linux-foundation.org, chao.p.peng@linux.intel.com, yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, michael.roth@amd.com, qperret@google.com, steven.price@arm.com, ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz, marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com, seanjc@google.com, diviness@google.com, Vishal Annapurve
Precedence: bulk
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

Add a memory conversion test that avoids double allocation of the
memory backing guest GPA ranges.

Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/priv_memfd_test.c | 225 ++++++++++++++++--
 1 file changed, 211 insertions(+), 14 deletions(-)

diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
index dbe6ead92ba7..3b6e84cf6a44 100644
--- a/tools/testing/selftests/kvm/priv_memfd_test.c
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -63,6 +63,8 @@ struct test_run_helper { guest_code_fn guest_fn; void *shared_mem; int priv_memfd; + bool disallow_boot_shared_access; + bool toggle_shared_mem_state; }; enum page_size {
@@ -779,6 +781,151 @@ static void pspahct_guest_code(void) GUEST_DONE(); } +/* Test to verify guest accesses without double allocation: + * Guest starts with shared memory access disallowed by default. + * 1) Guest writes the private memory privately via a known pattern + * 3) Guest reads the private memory privately and verifies that the contents + * are same as written. + * 4) Guest invokes KVM_HC_MAP_GPA_RANGE to map the hpa range as shared + * and marks the range to be accessed via shared access.
+ * 5) Guest writes shared memory with another pattern and signals VMM + * 6) VMM verifies the memory contents to be same as written by guest in step + * 5 and updates the memory with a different pattern + * 7) Guest verifies the memory contents to be same as written in step 6. + * 8) Guest invokes KVM_HC_MAP_GPA_RANGE to map the hpa range as private + * and marks the range to be accessed via private access. + * 9) Guest writes a known pattern to the test memory and verifies the contents + * to be same as written. + * 10) Guest invokes KVM_HC_MAP_GPA_RANGE to map the hpa range as shared + * and marks the range to be accessed via shared access. + * 11) Guest writes shared memory with another pattern and signals VMM + * 12) VMM verifies the memory contents to be same as written by guest in step + * 5 and updates the memory with a different pattern + * 13) Guest verifies the memory contents to be same as written in step 6. + */ +#define PSAWDAT_ID 7 +#define PSAWDAT_DESC "PrivateSharedAccessWithoutDoubleAllocationTest" + +#define PSAWDAT_GUEST_SHARED_MEM_UPDATED1 1ULL +#define PSAWDAT_GUEST_SHARED_MEM_UPDATED2 2ULL + +static bool psawdat_handle_vm_stage(struct kvm_vm *vm, + void *test_info, + uint64_t stage) +{ + void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem; + + switch (stage) { + case PSAWDAT_GUEST_SHARED_MEM_UPDATED1: { + /* verify data to be same as what guest wrote earlier */ + TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT2, test_mem_size), + "Shared memory view mismatch"); + TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, + TEST_MEM_DATA_PAT1, test_mem_size), + "Shared mem update Failure"); + VM_STAGE_PROCESSED(PSAWDAT_GUEST_SHARED_MEM_UPDATED); + break; + } + case PSAWDAT_GUEST_SHARED_MEM_UPDATED2: { + /* verify data to be same as what guest wrote earlier */ + TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT3, test_mem_size), + "Shared memory view mismatch"); + TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, + TEST_MEM_DATA_PAT4, test_mem_size), + "Shared mem update Failure"); + VM_STAGE_PROCESSED(PSAWDAT_GUEST_SHARED_MEM_UPDATED2); + break; + } + default: + printf("Unhandled VM stage %ld\n", stage); + return false; + } + + return true; +} + +static void psawdat_guest_code(void) +{ + void *test_mem = (void *)TEST_MEM_GPA; + int ret; + + const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS); + + /* Mark the GPA range to be treated as always accessed privately */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + mem_size >> MIN_PAGE_SHIFT, + KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_ASSERT_1(ret == 0, ret); + GUEST_ASSERT(do_mem_op(SET_PAT, test_mem, + TEST_MEM_DATA_PAT1, mem_size)); + + GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, + TEST_MEM_DATA_PAT1, mem_size)); + + /* Map the GPA range to be treated as shared */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + mem_size >> MIN_PAGE_SHIFT, + KVM_MAP_GPA_RANGE_DECRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0); + GUEST_ASSERT_1(ret == 0, ret); + + /* Mark the GPA range to be treated as always accessed via shared + * access + */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, 0, 0, + KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_ASSERT_1(ret == 0, ret); + + GUEST_ASSERT(do_mem_op(SET_PAT, test_mem, + TEST_MEM_DATA_PAT2, mem_size)); + GUEST_SYNC(PSAWDAT_GUEST_SHARED_MEM_UPDATED1); + + GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, + TEST_MEM_DATA_PAT1, mem_size)); + + /* Map the GPA range to be treated as private */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + mem_size >> MIN_PAGE_SHIFT, + KVM_MAP_GPA_RANGE_ENCRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0); + GUEST_ASSERT_1(ret == 0, ret); + + /* Mark the GPA range to be treated as always accessed via private + * access + */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + mem_size >> MIN_PAGE_SHIFT, + KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_ASSERT_1(ret == 0, ret); + + GUEST_ASSERT(do_mem_op(SET_PAT, test_mem, + TEST_MEM_DATA_PAT2, mem_size)); + GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, + TEST_MEM_DATA_PAT2, mem_size)); + + /* Map the GPA range to be treated as shared */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + mem_size >> MIN_PAGE_SHIFT, + KVM_MAP_GPA_RANGE_DECRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0); + GUEST_ASSERT_1(ret == 0, ret); + + /* Mark the GPA range to be treated as always accessed via shared + * access + */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, 0, 0, + KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_ASSERT_1(ret == 0, ret); + + GUEST_ASSERT(do_mem_op(SET_PAT, test_mem, + TEST_MEM_DATA_PAT3, mem_size)); + GUEST_SYNC(PSAWDAT_GUEST_SHARED_MEM_UPDATED2); + + GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, + TEST_MEM_DATA_PAT4, mem_size)); + + GUEST_DONE(); +} + static struct test_run_helper priv_memfd_testsuite[] = { [PMPAT_ID] = { .test_desc = PMPAT_DESC,
@@ -815,6 +962,13 @@ static struct test_run_helper priv_memfd_testsuite[] = { .vmst_handler = pspahct_handle_vm_stage, .guest_fn = pspahct_guest_code, }, + [PSAWDAT_ID] = { + .test_desc = PSAWDAT_DESC, + .vmst_handler = psawdat_handle_vm_stage, + .guest_fn = psawdat_guest_code, + .toggle_shared_mem_state = true, + .disallow_boot_shared_access = true, + }, }; static void handle_vm_exit_hypercall(struct kvm_run *run,
@@ -825,6 +979,10 @@ static void handle_vm_exit_hypercall(struct kvm_run *run, priv_memfd_testsuite[test_id].priv_memfd; int ret; int fallocate_mode; + void *shared_mem = priv_memfd_testsuite[test_id].shared_mem; + bool toggle_shared_mem_state = + priv_memfd_testsuite[test_id].toggle_shared_mem_state; + int mprotect_mode; if (run->hypercall.nr != KVM_HC_MAP_GPA_RANGE) { TEST_FAIL("Unhandled Hypercall %lld\n",
@@ -842,11 +1000,13 @@ static void handle_vm_exit_hypercall(struct kvm_run *run, gpa, npages); } - if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED) + if (attrs & 
KVM_MAP_GPA_RANGE_ENCRYPTED) { fallocate_mode =3D 0; - else { + mprotect_mode =3D PROT_NONE; + } else { fallocate_mode =3D (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE); + mprotect_mode =3D PROT_READ | PROT_WRITE; } pr_info("Converting off 0x%lx pages 0x%lx to %s\n", (gpa - TEST_MEM_GPA), npages, @@ -857,6 +1017,17 @@ static void handle_vm_exit_hypercall(struct kvm_run *= run, npages << MIN_PAGE_SHIFT); TEST_ASSERT(ret !=3D -1, "fallocate failed in hc handling"); + if (toggle_shared_mem_state) { + if (fallocate_mode) { + ret =3D madvise(shared_mem, test_mem_size, + MADV_DONTNEED); + TEST_ASSERT(ret !=3D -1, + "madvise failed in hc handling"); + } + ret =3D mprotect(shared_mem, test_mem_size, mprotect_mode); + TEST_ASSERT(ret !=3D -1, + "mprotect failed in hc handling"); + } run->hypercall.ret =3D 0; } =20 @@ -867,7 +1038,11 @@ static void handle_vm_exit_memory_error(struct kvm_ru= n *run, int ret; int priv_memfd =3D priv_memfd_testsuite[test_id].priv_memfd; + void *shared_mem =3D priv_memfd_testsuite[test_id].shared_mem; + bool toggle_shared_mem_state =3D + priv_memfd_testsuite[test_id].toggle_shared_mem_state; int fallocate_mode; + int mprotect_mode; =20 gpa =3D run->memory.gpa; size =3D run->memory.size; @@ -880,11 +1055,13 @@ static void handle_vm_exit_memory_error(struct kvm_r= un *run, gpa, size); } =20 - if (flags & KVM_MEMORY_EXIT_FLAG_PRIVATE) + if (flags & KVM_MEMORY_EXIT_FLAG_PRIVATE) { fallocate_mode =3D 0; - else { + mprotect_mode =3D PROT_NONE; + } else { fallocate_mode =3D (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE); + mprotect_mode =3D PROT_READ | PROT_WRITE; } pr_info("Converting off 0x%lx size 0x%lx to %s\n", (gpa - TEST_MEM_GPA), size, @@ -894,6 +1071,18 @@ static void handle_vm_exit_memory_error(struct kvm_ru= n *run, (gpa - TEST_MEM_GPA), size); TEST_ASSERT(ret !=3D -1, "fallocate failed in memory error handling"); + + if (toggle_shared_mem_state) { + if (fallocate_mode) { + ret =3D madvise(shared_mem, test_mem_size, + MADV_DONTNEED); + 
TEST_ASSERT(ret !=3D -1, + "madvise failed in memory error handling"); + } + ret =3D mprotect(shared_mem, test_mem_size, mprotect_mode); + TEST_ASSERT(ret !=3D -1, + "mprotect failed in memory error handling"); + } } =20 static void vcpu_work(struct kvm_vm *vm, uint32_t test_id) @@ -924,14 +1113,14 @@ static void vcpu_work(struct kvm_vm *vm, uint32_t te= st_id) =20 if (run->exit_reason =3D=3D KVM_EXIT_MMIO) { if (run->mmio.phys_addr =3D=3D MEM_SIZE_MMIO_ADDRESS) { - // tell the guest the size of the memory - // it's been allocated + /* tell the guest the size of the memory it's + * been allocated + */ int shift_amount =3D 0; =20 for (int i =3D 0; i < sizeof(uint64_t); ++i) { - run->mmio.data[i] =3D - (test_mem_size >> - shift_amount) & BYTE_MASK; + run->mmio.data[i] =3D (test_mem_size >> + shift_amount) & BYTE_MASK; shift_amount +=3D CHAR_BIT; } } @@ -985,6 +1174,9 @@ static void setup_and_execute_test(uint32_t test_id, int ret; void *shared_mem; struct kvm_enable_cap cap; + bool disallow_boot_shared_access =3D + priv_memfd_testsuite[test_id].disallow_boot_shared_access; + int prot_flags =3D PROT_READ | PROT_WRITE; =20 vm =3D vm_create_default(VCPU_ID, 0, priv_memfd_testsuite[test_id].guest_fn); @@ -1036,10 +1228,12 @@ static void setup_and_execute_test(uint32_t test_id, // set global for mem size to use later test_mem_size =3D mem_size; =20 + if (disallow_boot_shared_access) + prot_flags =3D PROT_NONE; + /* Allocate shared memory */ shared_mem =3D mmap(NULL, mem_size, - PROT_READ | PROT_WRITE, - mmap_flags, -1, 0); + prot_flags, mmap_flags, -1, 0); TEST_ASSERT(shared_mem !=3D MAP_FAILED, "Failed to mmap() host"); =20 if (using_hugepages) { @@ -1166,7 +1360,8 @@ int main(int argc, char *argv[]) =20 for (uint32_t i =3D 0; i < ARRAY_SIZE(priv_memfd_testsuite); i++) { for (uint32_t j =3D 0; j < ARRAY_SIZE(page_size_matrix); j++) { - const struct page_combo current_page_matrix =3D page_size_matrix[j]; + const struct page_combo current_page_matrix =3D + 
+				page_size_matrix[j];
 
 			if (should_skip_test(current_page_matrix,
 				use_2mb_pages, use_1gb_pages))
@@ -1174,8 +1369,10 @@ int main(int argc, char *argv[])
 			pr_info("=== Starting test %s... ===\n",
 				priv_memfd_testsuite[i].test_desc);
 			pr_info("using page sizes shared: %s private: %s\n",
-				page_size_to_str(current_page_matrix.shared),
-				page_size_to_str(current_page_matrix.private));
+				page_size_to_str(
+					current_page_matrix.shared),
+				page_size_to_str(
+					current_page_matrix.private));
 			hugepage_requirements_text(current_page_matrix);
 			setup_and_execute_test(i, current_page_matrix.shared,
 				current_page_matrix.private);
-- 
2.36.0.550.gb090851708-goog
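One other piece of the plumbing worth spelling out: the VMM hands the guest its memory size by emulating an MMIO read at MEM_SIZE_MMIO_ADDRESS, serializing test_mem_size one byte at a time into run->mmio.data. The loop in vcpu_work() amounts to a little-endian encode. A self-contained sketch of both directions follows; the BYTE_MASK name mirrors the selftest's macro and its 0xff value is assumed here:

```c
#include <limits.h>
#include <stdint.h>

#define BYTE_MASK 0xff	/* assumed value of the selftest's BYTE_MASK */

/* Encode a 64-bit size into an 8-byte little-endian buffer, as the VMM
 * does into run->mmio.data when the guest reads MEM_SIZE_MMIO_ADDRESS. */
static void encode_size(uint64_t size, uint8_t data[8])
{
	int shift_amount = 0;

	for (int i = 0; i < (int)sizeof(uint64_t); ++i) {
		data[i] = (size >> shift_amount) & BYTE_MASK;
		shift_amount += CHAR_BIT;
	}
}

/* The guest-side inverse: reassemble the size from the byte buffer. */
static uint64_t decode_size(const uint8_t data[8])
{
	uint64_t size = 0;

	for (int i = 0; i < (int)sizeof(uint64_t); ++i)
		size |= (uint64_t)data[i] << (i * CHAR_BIT);
	return size;
}
```

For example, a 2 MiB region (0x200000) encodes to bytes {0x00, 0x00, 0x20, 0, 0, 0, 0, 0}, which decode_size() reconstructs exactly; the guest's read of MEM_SIZE_MMIO_ADDRESS relies on this round trip.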