Date: Thu, 9 Jan 2025 20:49:26 +0000
Message-ID: <20250109204929.1106563-11-jthoughton@google.com>
In-Reply-To: <20250109204929.1106563-1-jthoughton@google.com>
References: <20250109204929.1106563-1-jthoughton@google.com>
Subject: [PATCH v2 10/13] KVM: selftests: Add KVM Userfault mode to demand_paging_test
From: James Houghton <jthoughton@google.com>
To: Paolo Bonzini, Sean Christopherson
Cc: Jonathan Corbet, Marc Zyngier, Oliver Upton, Yan Zhao, James Houghton,
    Nikita Kalyazin, Anish Moorthy, Peter Gonda, Peter Xu, David Matlack,
    wei.w.wang@intel.com, kvm@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev

Add a way for the KVM_RUN loop to handle -EFAULT exits when they are for
KVM_MEMORY_EXIT_FLAG_USERFAULT. In this case, preemptively handle the
UFFDIO_COPY or UFFDIO_CONTINUE if userfaultfd is also in use. This saves
the trip through the userfaultfd poll/read/WAKE loop.

When preemptively handling UFFDIO_COPY/CONTINUE, do so with
MODE_DONTWAKE, as there will not be a thread to wake. If a thread *does*
take the userfaultfd slow path, we will get a regular userfault, and we
will call handle_uffd_page_request(), which will do a full wake-up. In
the EEXIST case, a wake-up will not occur, so call UFFDIO_WAKE
explicitly there.

When handling KVM userfaults, make sure to update the bitmap with
memory_order_release. Although it wouldn't affect the functionality of
the test (memstress doesn't actually require any particular guest memory
contents), it is what userspace normally needs to do.

Add `-k` to make the test use KVM Userfault.

Add the vm_mem_region_set_flags_userfault() helper for setting
`userfault_bitmap` and KVM_MEM_USERFAULT at the same time.
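As a rough illustration (invocations sketched here, not taken from the
patch itself; the -u argument follows the test's existing uffd mode
selection), the new mode can be exercised on its own or combined with
userfaultfd:

	# KVM Userfault only
	./demand_paging_test -k

	# KVM Userfault, with faults also resolved through userfaultfd
	./demand_paging_test -k -u MISSING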
Signed-off-by: James Houghton <jthoughton@google.com>
---
 .../selftests/kvm/demand_paging_test.c       | 139 +++++++++++++++++-
 .../testing/selftests/kvm/include/kvm_util.h |   5 +
 tools/testing/selftests/kvm/lib/kvm_util.c   |  40 ++++-
 3 files changed, 176 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 315f5c9037b4..183c70731093 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -12,7 +12,9 @@
 #include
 #include
 #include
+#include
 #include
+#include

 #include "kvm_util.h"
 #include "test_util.h"
@@ -24,11 +26,21 @@
 #ifdef __NR_userfaultfd

 static int nr_vcpus = 1;
+static int num_uffds;
 static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;

 static size_t demand_paging_size;
+static size_t host_page_size;
 static char *guest_data_prototype;

+static struct {
+	bool enabled;
+	int uffd_mode; /* set if userfaultfd is also in use */
+	struct uffd_desc **uffd_descs;
+} kvm_userfault_data;
+
+static void resolve_kvm_userfault(u64 gpa, u64 size);
+
 static void vcpu_worker(struct memstress_vcpu_args *vcpu_args)
 {
 	struct kvm_vcpu *vcpu = vcpu_args->vcpu;
@@ -41,8 +53,22 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args)
 	clock_gettime(CLOCK_MONOTONIC, &start);

 	/* Let the guest access its memory */
+restart:
 	ret = _vcpu_run(vcpu);
-	TEST_ASSERT(ret == 0, "vcpu_run failed: %d", ret);
+	if (ret < 0 && errno == EFAULT && kvm_userfault_data.enabled) {
+		/* Check for userfault. */
+		TEST_ASSERT(run->exit_reason == KVM_EXIT_MEMORY_FAULT,
+			    "Got invalid exit reason: %x", run->exit_reason);
+		TEST_ASSERT(run->memory_fault.flags ==
+			    KVM_MEMORY_EXIT_FLAG_USERFAULT,
+			    "Got invalid memory fault exit: %llx",
+			    run->memory_fault.flags);
+		resolve_kvm_userfault(run->memory_fault.gpa,
+				      run->memory_fault.size);
+		goto restart;
+	} else
+		TEST_ASSERT(ret == 0, "vcpu_run failed: %d", ret);
+
 	if (get_ucall(vcpu, NULL) != UCALL_SYNC) {
 		TEST_ASSERT(false,
 			    "Invalid guest sync status: exit_reason=%s",
@@ -54,11 +80,10 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args)
 		       ts_diff.tv_sec, ts_diff.tv_nsec);
 }

-static int handle_uffd_page_request(int uffd_mode, int uffd,
-				    struct uffd_msg *msg)
+static int resolve_uffd_page_request(int uffd_mode, int uffd, uint64_t addr,
+				     bool wake)
 {
 	pid_t tid = syscall(__NR_gettid);
-	uint64_t addr = msg->arg.pagefault.address;
 	struct timespec start;
 	struct timespec ts_diff;
 	int r;
@@ -71,7 +96,7 @@ static int handle_uffd_page_request(int uffd_mode, int uffd,
 		copy.src = (uint64_t)guest_data_prototype;
 		copy.dst = addr;
 		copy.len = demand_paging_size;
-		copy.mode = 0;
+		copy.mode = wake ? 0 : UFFDIO_COPY_MODE_DONTWAKE;

 		r = ioctl(uffd, UFFDIO_COPY, &copy);
 		/*
@@ -96,6 +121,7 @@ static int handle_uffd_page_request(int uffd_mode, int uffd,

 		cont.range.start = addr;
 		cont.range.len = demand_paging_size;
+		cont.mode = wake ? 0 : UFFDIO_CONTINUE_MODE_DONTWAKE;

 		r = ioctl(uffd, UFFDIO_CONTINUE, &cont);
 		/*
@@ -119,6 +145,20 @@ static int handle_uffd_page_request(int uffd_mode, int uffd,
 		TEST_FAIL("Invalid uffd mode %d", uffd_mode);
 	}

+	if (r < 0 && wake) {
+		/*
+		 * No wake-up occurs when UFFDIO_COPY/CONTINUE fails, but we
+		 * have a thread waiting. Wake it up.
+		 */
+		struct uffdio_range range = {0};
+
+		range.start = addr;
+		range.len = demand_paging_size;
+
+		TEST_ASSERT(ioctl(uffd, UFFDIO_WAKE, &range) == 0,
+			    "UFFDIO_WAKE failed: 0x%lx", addr);
+	}
+
 	ts_diff = timespec_elapsed(start);

 	PER_PAGE_DEBUG("UFFD page-in %d \t%ld ns\n", tid,
@@ -129,6 +169,58 @@ static int handle_uffd_page_request(int uffd_mode, int uffd,
 	return 0;
 }

+static int handle_uffd_page_request(int uffd_mode, int uffd,
+				    struct uffd_msg *msg)
+{
+	uint64_t addr = msg->arg.pagefault.address;
+
+	return resolve_uffd_page_request(uffd_mode, uffd, addr, true);
+}
+
+static void resolve_kvm_userfault(u64 gpa, u64 size)
+{
+	struct kvm_vm *vm = memstress_args.vm;
+	struct userspace_mem_region *region;
+	unsigned long *bitmap_chunk;
+	u64 page, gpa_offset;
+
+	region = (struct userspace_mem_region *) userspace_mem_region_find(
+			vm, gpa, (gpa + size - 1));
+
+	if (kvm_userfault_data.uffd_mode) {
+		/*
+		 * Resolve userfaults early, without needing to read them
+		 * off the userfaultfd.
+		 */
+		uint64_t hva = (uint64_t)addr_gpa2hva(vm, gpa);
+		struct uffd_desc **descs = kvm_userfault_data.uffd_descs;
+		int i, fd;
+
+		for (i = 0; i < num_uffds; ++i)
+			if (hva >= (uint64_t)descs[i]->va_start &&
+			    hva < (uint64_t)descs[i]->va_end)
+				break;
+
+		TEST_ASSERT(i < num_uffds,
+			    "Did not find userfaultfd for hva: %lx", hva);
+
+		fd = kvm_userfault_data.uffd_descs[i]->uffd;
+		resolve_uffd_page_request(kvm_userfault_data.uffd_mode, fd,
+					  hva, false);
+	} else {
+		uint64_t hva = (uint64_t)addr_gpa2hva(vm, gpa);
+
+		memcpy((char *)hva, guest_data_prototype, demand_paging_size);
+	}
+
+	gpa_offset = gpa - region->region.guest_phys_addr;
+	page = gpa_offset / host_page_size;
+	bitmap_chunk = (unsigned long *)region->region.userfault_bitmap +
+		       page / BITS_PER_LONG;
+	atomic_fetch_and_explicit((_Atomic unsigned long *)bitmap_chunk,
+			~(1ul << (page % BITS_PER_LONG)), memory_order_release);
+}
+
 struct test_params {
 	int uffd_mode;
 	bool single_uffd;
@@ -136,6 +228,7 @@ struct test_params {
 	int readers_per_uffd;
 	enum vm_mem_backing_src_type src_type;
 	bool partition_vcpu_memory_access;
+	bool kvm_userfault;
 };

 static void prefault_mem(void *alias, uint64_t len)
@@ -149,6 +242,25 @@ static void prefault_mem(void *alias, uint64_t len)
 	}
 }

+static void enable_userfault(struct kvm_vm *vm, int slots)
+{
+	for (int i = 0; i < slots; ++i) {
+		int slot = MEMSTRESS_MEM_SLOT_INDEX + i;
+		struct userspace_mem_region *region;
+		unsigned long *userfault_bitmap;
+		int flags = KVM_MEM_USERFAULT;
+
+		region = memslot2region(vm, slot);
+		userfault_bitmap = bitmap_zalloc(region->mmap_size /
+						 host_page_size);
+		/* everything is userfault initially */
+		memset(userfault_bitmap, -1, region->mmap_size / host_page_size / CHAR_BIT);
+		printf("Setting bitmap: %p\n", userfault_bitmap);
+		vm_mem_region_set_flags_userfault(vm, slot, flags,
+						  userfault_bitmap);
+	}
+}
+
 static void run_test(enum vm_guest_mode mode, void *arg)
 {
 	struct memstress_vcpu_args *vcpu_args;
@@ -159,12 +271,13 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	struct timespec ts_diff;
 	double vcpu_paging_rate;
 	struct kvm_vm *vm;
-	int i, num_uffds = 0;
+	int i;

 	vm = memstress_create_vm(mode, nr_vcpus, guest_percpu_mem_size, 1,
				 p->src_type, p->partition_vcpu_memory_access);

 	demand_paging_size = get_backing_src_pagesz(p->src_type);
+	host_page_size = getpagesize();

 	guest_data_prototype = malloc(demand_paging_size);
 	TEST_ASSERT(guest_data_prototype,
@@ -208,6 +321,14 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 		}
 	}

+	if (p->kvm_userfault) {
+		TEST_REQUIRE(kvm_has_cap(KVM_CAP_USERFAULT));
+		kvm_userfault_data.enabled = true;
+		kvm_userfault_data.uffd_mode = p->uffd_mode;
+		kvm_userfault_data.uffd_descs = uffd_descs;
+		enable_userfault(vm, 1);
+	}
+
 	pr_info("Finished creating vCPUs and starting uffd threads\n");

 	clock_gettime(CLOCK_MONOTONIC, &start);
@@ -265,6 +386,7 @@ static void help(char *name)
 	printf(" -v: specify the number of vCPUs to run.\n");
 	printf(" -o: Overlap guest memory accesses instead of partitioning\n"
 	       "     them into a separate region of memory for each vCPU.\n");
+	printf(" -k: Use KVM Userfault\n");
 	puts("");
 	exit(0);
 }
@@ -283,7 +405,7 @@ int main(int argc, char *argv[])

 	guest_modes_append_default();

-	while ((opt = getopt(argc, argv, "ahom:u:d:b:s:v:c:r:")) != -1) {
+	while ((opt = getopt(argc, argv, "ahokm:u:d:b:s:v:c:r:")) != -1) {
 		switch (opt) {
 		case 'm':
 			guest_modes_cmdline(optarg);
@@ -326,6 +448,9 @@ int main(int argc, char *argv[])
 				    "Invalid number of readers per uffd %d: must be >=1",
 				    p.readers_per_uffd);
 			break;
+		case 'k':
+			p.kvm_userfault = true;
+			break;
 		case 'h':
 		default:
 			help(argv[0]);
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 4c4e5a847f67..0d49a9ce832a 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -582,6 +582,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 		uint64_t guest_paddr, uint32_t slot, uint64_t npages,
 		uint32_t flags, int guest_memfd_fd, uint64_t guest_memfd_offset);
+struct userspace_mem_region *
+userspace_mem_region_find(struct kvm_vm *vm, uint64_t start, uint64_t end);

 #ifndef vm_arch_has_protected_memory
 static inline bool vm_arch_has_protected_memory(struct kvm_vm *vm)
@@ -591,6 +593,9 @@ static inline bool vm_arch_has_protected_memory(struct kvm_vm *vm)
 #endif

 void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
+void vm_mem_region_set_flags_userfault(struct kvm_vm *vm, uint32_t slot,
+				       uint32_t flags,
+				       unsigned long *userfault_bitmap);
 void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
 void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
 struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index a87988a162f1..a8f6b949ac59 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -634,7 +634,7 @@ void kvm_parse_vcpu_pinning(const char *pcpus_string, uint32_t vcpu_to_pcpu[],
  * of the regions is returned. Null is returned only when no overlapping
  * region exists.
  */
-static struct userspace_mem_region *
+struct userspace_mem_region *
 userspace_mem_region_find(struct kvm_vm *vm, uint64_t start, uint64_t end)
 {
 	struct rb_node *node;
@@ -1149,6 +1149,44 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags)
 		ret, errno, slot, flags);
 }

+/*
+ * VM Memory Region Flags Set with a userfault bitmap
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   flags - Flags for the memslot
+ *   userfault_bitmap - The bitmap to use for KVM_MEM_USERFAULT
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Sets the flags of the memory region specified by the value of slot,
+ * to the values given by flags. This helper adds a way to provide a
+ * userfault_bitmap.
+ */
+void vm_mem_region_set_flags_userfault(struct kvm_vm *vm, uint32_t slot,
+				       uint32_t flags,
+				       unsigned long *userfault_bitmap)
+{
+	int ret;
+	struct userspace_mem_region *region;
+
+	region = memslot2region(vm, slot);
+
+	TEST_ASSERT(!userfault_bitmap ^ (flags & KVM_MEM_USERFAULT),
+		    "KVM_MEM_USERFAULT must be specified with a bitmap");
+
+	region->region.flags = flags;
+	region->region.userfault_bitmap = (__u64)userfault_bitmap;
+
+	ret = __vm_ioctl(vm, KVM_SET_USER_MEMORY_REGION2, &region->region);
+
+	TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION2 IOCTL failed,\n"
+		    "  rc: %i errno: %i slot: %u flags: 0x%x",
+		    ret, errno, slot, flags);
+}
+
 /*
  * VM Memory Region Move
  *
-- 
2.47.1.613.gc27f4b7a9f-goog