From: Jack Thomson
To: maz@kernel.org, oliver.upton@linux.dev, pbonzini@redhat.com
Cc: joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com,
	catalin.marinas@arm.com, will@kernel.org, shuah@kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	isaku.yamahata@intel.com, xmarcalx@amazon.co.uk, kalyazin@amazon.co.uk,
	jackabt@amazon.com
Subject: [PATCH v4 2/3] KVM: selftests: Enable pre_fault_memory_test for arm64
Date: Tue, 13 Jan 2026 15:26:41 +0000
Message-ID: <20260113152643.18858-3-jackabt.amazon@gmail.com>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20260113152643.18858-1-jackabt.amazon@gmail.com>
References: <20260113152643.18858-1-jackabt.amazon@gmail.com>

From: Jack Thomson

Enable the pre_fault_memory_test to run on arm64 by making it work with
different guest page sizes and by running it against multiple guest
mode configurations.

Update the exit-reason TEST_ASSERT to compare against UCALL_EXIT_REASON
for portability, since arm64 reports ucalls via KVM_EXIT_MMIO while x86
uses KVM_EXIT_IO.
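For reference, UCALL_EXIT_REASON is the per-architecture exit reason
used by the selftests' ucall infrastructure. The definitions look
roughly like the sketch below (taken from the per-arch ucall headers;
exact header paths vary between kernel versions):

	/* x86: ucalls are reported to userspace as port I/O exits */
	#define UCALL_EXIT_REASON	KVM_EXIT_IO

	/* arm64: ucalls are reported to userspace as MMIO exits */
	#define UCALL_EXIT_REASON	KVM_EXIT_MMIO

Asserting on UCALL_EXIT_REASON therefore keeps the exit-reason check
correct on both architectures without architecture-specific #ifdefs in
the test.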
Signed-off-by: Jack Thomson
---
 tools/testing/selftests/kvm/Makefile.kvm      |  1 +
 .../selftests/kvm/pre_fault_memory_test.c     | 85 ++++++++++++++-----
 2 files changed, 63 insertions(+), 23 deletions(-)

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index ba5c2b643efa..6d6a74ddad30 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -187,6 +187,7 @@ TEST_GEN_PROGS_arm64 += memslot_perf_test
 TEST_GEN_PROGS_arm64 += mmu_stress_test
 TEST_GEN_PROGS_arm64 += rseq_test
 TEST_GEN_PROGS_arm64 += steal_time
+TEST_GEN_PROGS_arm64 += pre_fault_memory_test
 
 TEST_GEN_PROGS_s390 = $(TEST_GEN_PROGS_COMMON)
 TEST_GEN_PROGS_s390 += s390/memop
diff --git a/tools/testing/selftests/kvm/pre_fault_memory_test.c b/tools/testing/selftests/kvm/pre_fault_memory_test.c
index 93e603d91311..be1a84a6c137 100644
--- a/tools/testing/selftests/kvm/pre_fault_memory_test.c
+++ b/tools/testing/selftests/kvm/pre_fault_memory_test.c
@@ -11,19 +11,29 @@
 #include
 #include
 #include
+#include
 
 /* Arbitrarily chosen values */
-#define TEST_SIZE		(SZ_2M + PAGE_SIZE)
-#define TEST_NPAGES		(TEST_SIZE / PAGE_SIZE)
+#define TEST_BASE_SIZE		SZ_2M
 #define TEST_SLOT		10
 
-static void guest_code(uint64_t base_gva)
+/* Storage of test info to share with guest code */
+struct test_config {
+	uint64_t page_size;
+	uint64_t test_size;
+	uint64_t test_num_pages;
+};
+
+static struct test_config test_config;
+
+static void guest_code(uint64_t base_gpa)
 {
 	volatile uint64_t val __used;
+	struct test_config *config = &test_config;
 	int i;
 
-	for (i = 0; i < TEST_NPAGES; i++) {
-		uint64_t *src = (uint64_t *)(base_gva + i * PAGE_SIZE);
+	for (i = 0; i < config->test_num_pages; i++) {
+		uint64_t *src = (uint64_t *)(base_gpa + i * config->page_size);
 
 		val = *src;
 	}
@@ -56,7 +66,7 @@ static void *delete_slot_worker(void *__data)
 		cpu_relax();
 
 	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, data->gpa,
-				    TEST_SLOT, TEST_NPAGES, data->flags);
+				    TEST_SLOT, test_config.test_num_pages, data->flags);
 
 	return NULL;
 }
@@ -159,22 +169,35 @@ static void pre_fault_memory(struct kvm_vcpu *vcpu, u64 base_gpa, u64 offset,
 		    KVM_PRE_FAULT_MEMORY, ret, vcpu->vm);
 }
 
-static void __test_pre_fault_memory(unsigned long vm_type, bool private)
+struct test_params {
+	unsigned long vm_type;
+	bool private;
+};
+
+static void __test_pre_fault_memory(enum vm_guest_mode guest_mode, void *arg)
 {
 	uint64_t gpa, gva, alignment, guest_page_size;
+	struct test_params *p = arg;
 	const struct vm_shape shape = {
-		.mode = VM_MODE_DEFAULT,
-		.type = vm_type,
+		.mode = guest_mode,
+		.type = p->vm_type,
 	};
 	struct kvm_vcpu *vcpu;
 	struct kvm_run *run;
 	struct kvm_vm *vm;
 	struct ucall uc;
 
+	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(guest_mode));
+
 	vm = vm_create_shape_with_one_vcpu(shape, &vcpu, guest_code);
 
-	alignment = guest_page_size = vm_guest_mode_params[VM_MODE_DEFAULT].page_size;
-	gpa = (vm->max_gfn - TEST_NPAGES) * guest_page_size;
+	guest_page_size = vm_guest_mode_params[guest_mode].page_size;
+
+	test_config.page_size = guest_page_size;
+	test_config.test_size = TEST_BASE_SIZE + test_config.page_size;
+	test_config.test_num_pages = vm_calc_num_guest_pages(vm->mode, test_config.test_size);
+
+	gpa = (vm->max_gfn - test_config.test_num_pages) * test_config.page_size;
 #ifdef __s390x__
 	alignment = max(0x100000UL, guest_page_size);
 #else
@@ -183,23 +206,32 @@ static void __test_pre_fault_memory(unsigned long vm_type, bool private)
 	gpa = align_down(gpa, alignment);
 	gva = gpa & ((1ULL << (vm->va_bits - 1)) - 1);
 
-	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, gpa, TEST_SLOT,
-				    TEST_NPAGES, private ? KVM_MEM_GUEST_MEMFD : 0);
-	virt_map(vm, gva, gpa, TEST_NPAGES);
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
+				    gpa, TEST_SLOT, test_config.test_num_pages,
+				    p->private ? KVM_MEM_GUEST_MEMFD : 0);
+	virt_map(vm, gva, gpa, test_config.test_num_pages);
+
+	if (p->private)
+		vm_mem_set_private(vm, gpa, test_config.test_size);
+	pre_fault_memory(vcpu, gpa, 0, TEST_BASE_SIZE, 0, p->private);
+	/* Test pre-faulting over an already faulted range */
+	pre_fault_memory(vcpu, gpa, 0, TEST_BASE_SIZE, 0, p->private);
+	pre_fault_memory(vcpu, gpa, TEST_BASE_SIZE,
+			 test_config.page_size * 2, test_config.page_size, p->private);
+	pre_fault_memory(vcpu, gpa, test_config.test_size,
+			 test_config.page_size, test_config.page_size, p->private);
 
-	if (private)
-		vm_mem_set_private(vm, gpa, TEST_SIZE);
+	vcpu_args_set(vcpu, 1, gva);
 
-	pre_fault_memory(vcpu, gpa, 0, SZ_2M, 0, private);
-	pre_fault_memory(vcpu, gpa, SZ_2M, PAGE_SIZE * 2, PAGE_SIZE, private);
-	pre_fault_memory(vcpu, gpa, TEST_SIZE, PAGE_SIZE, PAGE_SIZE, private);
+	/* Export the shared variables to the guest. */
+	sync_global_to_guest(vm, test_config);
 
-	vcpu_args_set(vcpu, 1, gva);
 	vcpu_run(vcpu);
 
 	run = vcpu->run;
-	TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
-		    "Wanted KVM_EXIT_IO, got exit reason: %u (%s)",
+	TEST_ASSERT(run->exit_reason == UCALL_EXIT_REASON,
+		    "Wanted %s, got exit reason: %u (%s)",
+		    exit_reason_str(UCALL_EXIT_REASON),
 		    run->exit_reason, exit_reason_str(run->exit_reason));
 
 	switch (get_ucall(vcpu, &uc)) {
@@ -218,18 +250,25 @@ static void __test_pre_fault_memory(unsigned long vm_type, bool private)
 
 static void test_pre_fault_memory(unsigned long vm_type, bool private)
 {
+	struct test_params p = {
+		.vm_type = vm_type,
+		.private = private,
+	};
+
 	if (vm_type && !(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type))) {
 		pr_info("Skipping tests for vm_type 0x%lx\n", vm_type);
 		return;
 	}
 
-	__test_pre_fault_memory(vm_type, private);
+	for_each_guest_mode(__test_pre_fault_memory, &p);
 }
 
 int main(int argc, char *argv[])
 {
 	TEST_REQUIRE(kvm_check_cap(KVM_CAP_PRE_FAULT_MEMORY));
 
+	guest_modes_append_default();
+
 	test_pre_fault_memory(0, false);
 #ifdef __x86_64__
 	test_pre_fault_memory(KVM_X86_SW_PROTECTED_VM, false);
-- 
2.43.0