From: Ackerley Tng <ackerleytng@google.com>
Date: Tue, 10 Sep 2024 23:44:03 +0000
Subject: [RFC PATCH 32/39] KVM: selftests: Test using guest_memfd memory from userspace
To: tabba@google.com, quic_eberman@quicinc.com, roypat@amazon.co.uk,
    jgg@nvidia.com, peterx@redhat.com, david@redhat.com,
    rientjes@google.com, fvdl@google.com, jthoughton@google.com,
    seanjc@google.com, pbonzini@redhat.com, zhiquan1.li@intel.com,
    fan.du@intel.com, jun.miao@intel.com, isaku.yamahata@intel.com,
    muchun.song@linux.dev, mike.kravetz@oracle.com
Cc: erdemaktas@google.com, vannapurve@google.com, ackerleytng@google.com,
    qperret@google.com, jhubbard@nvidia.com, willy@infradead.org,
    shuah@kernel.org, brauner@kernel.org, bfoster@redhat.com,
    kent.overstreet@linux.dev, pvorel@suse.cz, rppt@kernel.org,
    richard.weiyang@gmail.com, anup@brainfault.org, haibo1.xu@intel.com,
    ajones@ventanamicro.com, vkuznets@redhat.com,
    maciej.wieczor-retman@intel.com, pgonda@google.com,
    oliver.upton@linux.dev, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, kvm@vger.kernel.org,
    linux-kselftest@vger.kernel.org, linux-fsdevel@kvack.org

Test using guest_memfd from userspace, since guest_memfd now has
mmap() support.

Tests:

1. mmap() should now always return a valid address.

2. madvise() should not report any issues when pages have not been
   faulted in.

3. Pages should not be faultable before association with a memslot,
   and faulting them should result in SIGBUS.

4. Pages can be faulted once marked faultable. Also test the flow for
   setting a memory range as private (see the sketch after this
   list), which is:

   a. madvise(MADV_DONTNEED) to request the kernel to unmap the pages.
   b. Set the VM's memory attributes for the range to private.

   Also test that setting memory attributes fails while the pages are
   still mapped.

5. madvise(MADV_REMOVE) can be used to remove pages from guest_memfd,
   forcing those pages to be zeroed before the next time they are
   faulted in.
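To make the flow in (4) concrete, here is a minimal illustration of
the shared-to-private sequence; it is not part of the patch. It
reuses the selftest helpers exercised in the diff below
(vm_set_user_memory_region2(), vm_mem_set_private()); the slot, gpa
and offset values are placeholders, and error checking is omitted.

  /* Map the guest_memfd pages and fault one in while shared. */
  char *mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, guest_memfd, 0);

  /* Associating a memslot marks these offsets as shared/faultable. */
  vm_set_user_memory_region2(vm, 0, KVM_MEM_GUEST_MEMFD, gpa,
                             page_size, mem, guest_memfd, 0);
  *mem = 'A';

  /* (a) Ask the kernel to unmap the pages from userspace... */
  madvise(mem, page_size, MADV_DONTNEED);

  /* (b) ...then flip the range's memory attributes to private. */
  vm_mem_set_private(vm, gpa, page_size);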
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 .../testing/selftests/kvm/guest_memfd_test.c | 195 +++++++++++++++++-
 1 file changed, 189 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index 3618ce06663e..b6f3c3e6d0dd 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -6,6 +6,7 @@
  */
 #include <stdlib.h>
 #include <string.h>
+#include <sys/wait.h>
 #include <unistd.h>
 #include <errno.h>
 #include <fcntl.h>
@@ -35,12 +36,192 @@ static void test_file_read_write(int fd)
                     "pwrite on a guest_mem fd should fail");
 }

-static void test_mmap(int fd, size_t page_size)
+static void test_mmap_should_map_pages_into_userspace(int fd, size_t page_size)
 {
        char *mem;

        mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
-       TEST_ASSERT_EQ(mem, MAP_FAILED);
+       TEST_ASSERT(mem != MAP_FAILED, "mmap should return valid address");
+
+       TEST_ASSERT_EQ(munmap(mem, page_size), 0);
+}
+
+static void test_madvise_no_error_when_pages_not_faulted(int fd, size_t page_size)
+{
+       char *mem;
+
+       mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+       TEST_ASSERT(mem != MAP_FAILED, "mmap should return valid address");
+
+       TEST_ASSERT_EQ(madvise(mem, page_size, MADV_DONTNEED), 0);
+
+       TEST_ASSERT_EQ(munmap(mem, page_size), 0);
+}
+
+static void assert_not_faultable(char *address)
+{
+       pid_t child_pid;
+
+       child_pid = fork();
+       TEST_ASSERT(child_pid != -1, "fork failed");
+
+       if (child_pid == 0) {
+               *address = 'A';
+       } else {
+               int status;
+               waitpid(child_pid, &status, 0);
+
+               TEST_ASSERT(WIFSIGNALED(status),
+                           "Child should have exited with a signal");
+               TEST_ASSERT_EQ(WTERMSIG(status), SIGBUS);
+       }
+}
+
+/*
+ * Pages should not be faultable before association with a memslot, because
+ * pages (in a KVM_X86_SW_PROTECTED_VM) only default to faultable at memslot
+ * association time.
+ */
+static void test_pages_not_faultable_if_not_associated_with_memslot(int fd,
+                                                                    size_t page_size)
+{
+       char *mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
+                        MAP_SHARED, fd, 0);
+       TEST_ASSERT(mem != MAP_FAILED, "mmap should return valid address");
+
+       assert_not_faultable(mem);
+
+       TEST_ASSERT_EQ(munmap(mem, page_size), 0);
+}
+
+static void test_pages_faultable_if_marked_faultable(struct kvm_vm *vm, int fd,
+                                                     size_t page_size)
+{
+       char *mem;
+       uint64_t gpa = 0;
+       uint64_t guest_memfd_offset = 0;
+
+       /*
+        * This test uses KVM_X86_SW_PROTECTED_VM, which is required to set
+        * arch.has_private_mem, to add a memslot with guest_memfd to a VM.
+        */
+       if (!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM))) {
+               printf("Faultability test skipped since KVM_X86_SW_PROTECTED_VM is not supported.\n");
+               return;
+       }
+
+       mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd,
+                  guest_memfd_offset);
+       TEST_ASSERT(mem != MAP_FAILED, "mmap should return valid address");
+
+       /*
+        * Setting up this memslot with a KVM_X86_SW_PROTECTED_VM marks all
+        * offsets in the file as shared, allowing pages to be faulted in.
+        */
+       vm_set_user_memory_region2(vm, 0, KVM_MEM_GUEST_MEMFD, gpa, page_size,
+                                  mem, fd, guest_memfd_offset);
+
+       *mem = 'A';
+       TEST_ASSERT_EQ(*mem, 'A');
+
+       /* Should fail since the page is still faulted in. */
+       TEST_ASSERT_EQ(__vm_set_memory_attributes(vm, gpa, page_size,
+                                                 KVM_MEMORY_ATTRIBUTE_PRIVATE),
+                      -1);
+       TEST_ASSERT_EQ(errno, EINVAL);
+
+       /*
+        * Use madvise() to remove the pages from userspace page tables, then
+        * test that the page is still faultable, and that page contents remain
+        * the same.
+        */
+       madvise(mem, page_size, MADV_DONTNEED);
+       TEST_ASSERT_EQ(*mem, 'A');
+
+       /* Tell the kernel to unmap the page from userspace. */
+       madvise(mem, page_size, MADV_DONTNEED);
+
+       /* Now the kernel can set this page to private. */
+       vm_mem_set_private(vm, gpa, page_size);
+       assert_not_faultable(mem);
+
+       /*
+        * Should be able to fault again after setting this back to shared, and
+        * memory contents should be cleared since pages must be re-prepared for
+        * SHARED use.
+        */
+       vm_mem_set_shared(vm, gpa, page_size);
+       TEST_ASSERT_EQ(*mem, 0);
+
+       /* Cleanup */
+       vm_set_user_memory_region2(vm, 0, KVM_MEM_GUEST_MEMFD, gpa, 0, mem, fd,
+                                  guest_memfd_offset);
+
+       TEST_ASSERT_EQ(munmap(mem, page_size), 0);
+}
+
+static void test_madvise_remove_releases_pages(struct kvm_vm *vm, int fd,
+                                               size_t page_size)
+{
+       char *mem;
+       uint64_t gpa = 0;
+       uint64_t guest_memfd_offset = 0;
+
+       /*
+        * This test uses KVM_X86_SW_PROTECTED_VM, which is required to set
+        * arch.has_private_mem, to add a memslot with guest_memfd to a VM.
+        */
+       if (!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM))) {
+               printf("madvise test skipped since KVM_X86_SW_PROTECTED_VM is not supported.\n");
+               return;
+       }
+
+       mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+       TEST_ASSERT(mem != MAP_FAILED, "mmap should return valid address");
+
+       /*
+        * Setting up this memslot with a KVM_X86_SW_PROTECTED_VM marks all
+        * offsets in the file as shared, allowing pages to be faulted in.
+        */
+       vm_set_user_memory_region2(vm, 0, KVM_MEM_GUEST_MEMFD, gpa, page_size,
+                                  mem, fd, guest_memfd_offset);
+
+       *mem = 'A';
+       TEST_ASSERT_EQ(*mem, 'A');
+
+       /*
+        * MADV_DONTNEED causes pages to be removed from userspace page tables
+        * but should not release pages, hence page contents are kept.
+        */
+       TEST_ASSERT_EQ(madvise(mem, page_size, MADV_DONTNEED), 0);
+       TEST_ASSERT_EQ(*mem, 'A');
+
+       /*
+        * MADV_REMOVE causes pages to be released. Pages are then zeroed when
+        * prepared for shared use, hence 0 is expected on the next fault.
+        */
+       TEST_ASSERT_EQ(madvise(mem, page_size, MADV_REMOVE), 0);
+       TEST_ASSERT_EQ(*mem, 0);
+
+       TEST_ASSERT_EQ(munmap(mem, page_size), 0);
+
+       /* Cleanup */
+       vm_set_user_memory_region2(vm, 0, KVM_MEM_GUEST_MEMFD, gpa, 0, mem, fd,
+                                  guest_memfd_offset);
+}
+
+static void test_using_memory_directly_from_userspace(struct kvm_vm *vm,
+                                                      int fd, size_t page_size)
+{
+       test_mmap_should_map_pages_into_userspace(fd, page_size);
+
+       test_madvise_no_error_when_pages_not_faulted(fd, page_size);
+
+       test_pages_not_faultable_if_not_associated_with_memslot(fd, page_size);
+
+       test_pages_faultable_if_marked_faultable(vm, fd, page_size);
+
+       test_madvise_remove_releases_pages(vm, fd, page_size);
 }

 static void test_file_size(int fd, size_t page_size, size_t total_size)
@@ -180,18 +361,17 @@ static void test_guest_memfd(struct kvm_vm *vm, uint32_t flags, size_t page_size
        size_t total_size;
        int fd;

-       TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
-
        total_size = page_size * 4;

        fd = vm_create_guest_memfd(vm, total_size, flags);

        test_file_read_write(fd);
-       test_mmap(fd, page_size);
        test_file_size(fd, page_size, total_size);
        test_fallocate(fd, page_size, total_size);
        test_invalid_punch_hole(fd, page_size, total_size);

+       test_using_memory_directly_from_userspace(vm, fd, page_size);
+
        close(fd);
 }

@@ -201,7 +381,10 @@ int main(int argc, char *argv[])

        TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));

-       vm = vm_create_barebones();
+       if ((kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM)))
+               vm = vm_create_barebones_type(KVM_X86_SW_PROTECTED_VM);
+       else
+               vm = vm_create_barebones();

        test_create_guest_memfd_invalid(vm);
        test_create_guest_memfd_multiple(vm);
-- 
2.46.0.598.g6f2099f65c-goog