From nobody Fri Apr 3 02:59:47 2026
From: Wei-Lin Chang
To: kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Shuah Khan, Marc Zyngier, Oliver Upton, Joey Gouly,
	Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Subject: [PATCH 1/3] KVM: arm64: selftests: Add library functions for NV
Date: Wed, 25 Mar 2026 00:36:18 +0000
Message-ID: <20260325003620.2214766-2-weilin.chang@arm.com>
In-Reply-To: <20260325003620.2214766-1-weilin.chang@arm.com>
References: <20260325003620.2214766-1-weilin.chang@arm.com>

The API is designed for userspace to first call prepare_{l2_stack,
hyp_state, eret_destination, nested_sync_handler}, passing the function
to be run in L2 to prepare_eret_destination(). run_l2() can then be
called from L1 to run that function in L2.
Signed-off-by: Wei-Lin Chang
---
 tools/testing/selftests/kvm/Makefile.kvm      |  2 +
 .../selftests/kvm/include/arm64/nested.h      | 18 ++++++
 .../testing/selftests/kvm/lib/arm64/nested.c  | 61 +++++++++++++++++++
 .../selftests/kvm/lib/arm64/nested_asm.S      | 35 +++++++++++
 4 files changed, 116 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/include/arm64/nested.h
 create mode 100644 tools/testing/selftests/kvm/lib/arm64/nested.c
 create mode 100644 tools/testing/selftests/kvm/lib/arm64/nested_asm.S

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index 98da9fa4b8b7..5e681e8e0cd7 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -34,6 +34,8 @@ LIBKVM_arm64 += lib/arm64/gic.c
 LIBKVM_arm64 += lib/arm64/gic_v3.c
 LIBKVM_arm64 += lib/arm64/gic_v3_its.c
 LIBKVM_arm64 += lib/arm64/handlers.S
+LIBKVM_arm64 += lib/arm64/nested.c
+LIBKVM_arm64 += lib/arm64/nested_asm.S
 LIBKVM_arm64 += lib/arm64/processor.c
 LIBKVM_arm64 += lib/arm64/spinlock.c
 LIBKVM_arm64 += lib/arm64/ucall.c
diff --git a/tools/testing/selftests/kvm/include/arm64/nested.h b/tools/testing/selftests/kvm/include/arm64/nested.h
new file mode 100644
index 000000000000..739ff2ee0161
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/arm64/nested.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * ARM64 Nested virtualization defines
+ */
+
+#ifndef SELFTEST_KVM_NESTED_H
+#define SELFTEST_KVM_NESTED_H
+
+void prepare_l2_stack(struct kvm_vm *vm, struct kvm_vcpu *vcpu);
+void prepare_hyp_state(struct kvm_vm *vm, struct kvm_vcpu *vcpu);
+void prepare_eret_destination(struct kvm_vm *vm, struct kvm_vcpu *vcpu, void *l2_pc);
+void prepare_nested_sync_handler(struct kvm_vm *vm, struct kvm_vcpu *vcpu);
+
+void run_l2(void);
+void after_hvc(void);
+void do_hvc(void);
+
+#endif /* SELFTEST_KVM_NESTED_H */
diff --git a/tools/testing/selftests/kvm/lib/arm64/nested.c b/tools/testing/selftests/kvm/lib/arm64/nested.c
new file mode 100644
index 000000000000..111d02f44cfe
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/arm64/nested.c
@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ARM64 Nested virtualization helpers
+ */
+
+#include "kvm_util.h"
+#include "nested.h"
+#include "processor.h"
+#include "test_util.h"
+
+#include
+
+static void hvc_handler(struct ex_regs *regs)
+{
+	GUEST_ASSERT_EQ(get_current_el(), 2);
+	GUEST_PRINTF("hvc handler\n");
+	regs->pstate = PSR_MODE_EL2h | PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT;
+	regs->pc = (u64)after_hvc;
+}
+
+void prepare_l2_stack(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
+{
+	size_t l2_stack_size;
+	uint64_t l2_stack_paddr;
+
+	l2_stack_size = vm->page_size == 4096 ? DEFAULT_STACK_PGS * vm->page_size :
+						vm->page_size;
+	l2_stack_paddr = __vm_phy_pages_alloc(vm, l2_stack_size / vm->page_size,
+					      0, 0, false);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), l2_stack_paddr + l2_stack_size);
+}
+
+void prepare_hyp_state(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
+{
+	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_HCR_EL2), HCR_EL2_RW);
+}
+
+void prepare_eret_destination(struct kvm_vm *vm, struct kvm_vcpu *vcpu, void *l2_pc)
+{
+	vm_paddr_t do_hvc_paddr = addr_gva2gpa(vm, (vm_vaddr_t)do_hvc);
+	vm_paddr_t l2_pc_paddr = addr_gva2gpa(vm, (vm_vaddr_t)l2_pc);
+
+	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_SPSR_EL2), PSR_MODE_EL1h |
+							    PSR_D_BIT |
+							    PSR_A_BIT |
+							    PSR_I_BIT |
+							    PSR_F_BIT);
+	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ELR_EL2), l2_pc_paddr);
+	/* HACK: use TPIDR_EL2 to pass address, see run_l2() in nested_asm.S */
+	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_TPIDR_EL2), do_hvc_paddr);
+}
+
+void prepare_nested_sync_handler(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
+{
+	if (!vm->handlers) {
+		vm_init_descriptor_tables(vm);
+		vcpu_init_descriptor_tables(vcpu);
+	}
+	vm_install_sync_handler(vm, VECTOR_SYNC_LOWER_64,
+				ESR_ELx_EC_HVC64, hvc_handler);
+}
diff --git a/tools/testing/selftests/kvm/lib/arm64/nested_asm.S b/tools/testing/selftests/kvm/lib/arm64/nested_asm.S
new file mode 100644
index 000000000000..4ecf2d510a6f
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/arm64/nested_asm.S
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * ARM64 Nested virtualization assembly helpers
+ */
+
+.globl run_l2
+.globl after_hvc
+.globl do_hvc
+run_l2:
+	/*
+	 * At this point TPIDR_EL2 contains the gpa of do_hvc, set up by
+	 * prepare_eret_destination(). The gpa of do_hvc has to be passed in
+	 * because we want L2 to issue an hvc after it returns from the
+	 * user-supplied function. For that to happen, lr must be controlled;
+	 * at this point it holds the address of the instruction following
+	 * this run_l2() call, which is not useful for L2. Additionally, L1
+	 * can't translate a gva into a gpa, so we can't calculate it here.
+	 *
+	 * So first save lr, then move TPIDR_EL2 into lr so that when the
+	 * user-supplied L2 function returns, L2 jumps to do_hvc and lets the
+	 * L1 hvc handler take control. This implies the L2 code is expected
+	 * to preserve lr and end with a regular ret, which is true for
+	 * normal C functions. The hvc handler jumps back to after_hvc when
+	 * finished, where lr is restored and we can return from run_l2().
+	 */
+	stp	x29, lr, [sp, #-16]!
+	mrs	x0, tpidr_el2
+	mov	lr, x0
+	eret
+after_hvc:
+	ldp	x29, lr, [sp], #16
+	ret
+do_hvc:
+	hvc	#0
-- 
2.43.0

From nobody Fri Apr 3 02:59:47 2026
From: Wei-Lin Chang
To: kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Shuah Khan, Marc Zyngier, Oliver Upton, Joey Gouly,
	Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Subject: [PATCH 2/3] KVM: arm64: selftests: Add basic NV selftest
Date: Wed, 25 Mar 2026 00:36:19 +0000
Message-ID: <20260325003620.2214766-3-weilin.chang@arm.com>
In-Reply-To: <20260325003620.2214766-1-weilin.chang@arm.com>
References: <20260325003620.2214766-1-weilin.chang@arm.com>

Add a simple NV selftest that uses the NV library functions to eret
from vEL2 to EL1, then issue an hvc to jump back to vEL2.
Signed-off-by: Wei-Lin Chang
---
 tools/testing/selftests/kvm/Makefile.kvm  |  1 +
 .../selftests/kvm/arm64/hello_nested.c    | 65 +++++++++++++++++++
 2 files changed, 66 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/arm64/hello_nested.c

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index 5e681e8e0cd7..d7499609cd0c 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -167,6 +167,7 @@ TEST_GEN_PROGS_arm64 += arm64/arch_timer_edge_cases
 TEST_GEN_PROGS_arm64 += arm64/at
 TEST_GEN_PROGS_arm64 += arm64/debug-exceptions
 TEST_GEN_PROGS_arm64 += arm64/hello_el2
+TEST_GEN_PROGS_arm64 += arm64/hello_nested
 TEST_GEN_PROGS_arm64 += arm64/host_sve
 TEST_GEN_PROGS_arm64 += arm64/hypercalls
 TEST_GEN_PROGS_arm64 += arm64/external_aborts
diff --git a/tools/testing/selftests/kvm/arm64/hello_nested.c b/tools/testing/selftests/kvm/arm64/hello_nested.c
new file mode 100644
index 000000000000..16c600539810
--- /dev/null
+++ b/tools/testing/selftests/kvm/arm64/hello_nested.c
@@ -0,0 +1,65 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * hello_nested - Go from vEL2 to EL1 then back
+ */
+#include "kvm_util.h"
+#include "nested.h"
+#include "processor.h"
+#include "test_util.h"
+#include "ucall.h"
+
+static void l2_guest_code(void)
+{
+	/* nothing */
+}
+
+static void guest_code(void)
+{
+	GUEST_ASSERT_EQ(get_current_el(), 2);
+	GUEST_PRINTF("vEL2 entry\n");
+	run_l2();
+	GUEST_DONE();
+}
+
+int main(void)
+{
+	struct kvm_vcpu_init init;
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+	struct ucall uc;
+
+	TEST_REQUIRE(kvm_check_cap(KVM_CAP_ARM_EL2));
+	vm = vm_create(1);
+
+	kvm_get_default_vcpu_target(vm, &init);
+	init.features[0] |= BIT(KVM_ARM_VCPU_HAS_EL2);
+	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+	kvm_arch_vm_finalize_vcpus(vm);
+
+	prepare_l2_stack(vm, vcpu);
+	prepare_hyp_state(vm, vcpu);
+	prepare_eret_destination(vm, vcpu, l2_guest_code);
+	prepare_nested_sync_handler(vm, vcpu);
+
+	while (true) {
+		vcpu_run(vcpu);
+
+		switch (get_ucall(vcpu, &uc)) {
+		case UCALL_PRINTF:
+			pr_info("%s", uc.buffer);
+			break;
+		case UCALL_DONE:
+			pr_info("DONE!\n");
+			goto end;
+		case UCALL_ABORT:
+			REPORT_GUEST_ASSERT(uc);
+			fallthrough;
+		default:
+			TEST_FAIL("Unhandled ucall: %ld\n", uc.cmd);
+		}
+	}
+
+end:
+	kvm_vm_free(vm);
+	return 0;
+}
-- 
2.43.0

From nobody Fri Apr 3 02:59:47 2026
From: Wei-Lin Chang
To: kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Shuah Khan, Marc Zyngier, Oliver Upton, Joey Gouly,
	Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon
Subject: [PATCH 3/3] KVM: arm64: selftests: Enable stage-2 in NV preparation functions
Date: Wed, 25 Mar 2026 00:36:20 +0000
Message-ID: <20260325003620.2214766-4-weilin.chang@arm.com>
In-Reply-To: <20260325003620.2214766-1-weilin.chang@arm.com>
References: <20260325003620.2214766-1-weilin.chang@arm.com>

Introduce library functions for setting up guest stage-2 page tables,
then use them to give L2 an identity-mapped stage-2 and enable it. The
stage-2 translation regime built here is simple: start level 0, 4
levels, 4KB granules, normal cacheable memory, 48-bit IA, 40-bit OA.
The nested page table code is adapted from lib/x86/vmx.c.
Signed-off-by: Wei-Lin Chang
---
 .../selftests/kvm/include/arm64/nested.h      |  7 ++
 .../selftests/kvm/include/arm64/processor.h   |  9 ++
 .../testing/selftests/kvm/lib/arm64/nested.c  | 97 ++++++++++++++++++-
 3 files changed, 111 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/arm64/nested.h b/tools/testing/selftests/kvm/include/arm64/nested.h
index 739ff2ee0161..0be10a775e48 100644
--- a/tools/testing/selftests/kvm/include/arm64/nested.h
+++ b/tools/testing/selftests/kvm/include/arm64/nested.h
@@ -6,6 +6,13 @@
 #ifndef SELFTEST_KVM_NESTED_H
 #define SELFTEST_KVM_NESTED_H
 
+uint64_t get_l1_vtcr(void);
+
+void nested_map(struct kvm_vm *vm, vm_paddr_t guest_pgd,
+		uint64_t nested_paddr, uint64_t paddr, uint64_t size);
+void nested_map_memslot(struct kvm_vm *vm, vm_paddr_t guest_pgd,
+			uint32_t memslot);
+
 void prepare_l2_stack(struct kvm_vm *vm, struct kvm_vcpu *vcpu);
 void prepare_hyp_state(struct kvm_vm *vm, struct kvm_vcpu *vcpu);
 void prepare_eret_destination(struct kvm_vm *vm, struct kvm_vcpu *vcpu, void *l2_pc);
diff --git a/tools/testing/selftests/kvm/include/arm64/processor.h b/tools/testing/selftests/kvm/include/arm64/processor.h
index ac97a1c436fc..5de2e932d95a 100644
--- a/tools/testing/selftests/kvm/include/arm64/processor.h
+++ b/tools/testing/selftests/kvm/include/arm64/processor.h
@@ -104,6 +104,15 @@
 #define TCR_HA		(UL(1) << 39)
 #define TCR_DS		(UL(1) << 59)
 
+/* VTCR_EL2 specific flags */
+#define VTCR_EL2_T0SZ_BITS(x)	((UL(64) - (x)) << VTCR_EL2_T0SZ_SHIFT)
+
+#define VTCR_EL2_SL0_LV0_4K	(UL(2) << VTCR_EL2_SL0_SHIFT)
+#define VTCR_EL2_SL0_LV1_4K	(UL(1) << VTCR_EL2_SL0_SHIFT)
+#define VTCR_EL2_SL0_LV2_4K	(UL(0) << VTCR_EL2_SL0_SHIFT)
+
+#define VTCR_EL2_PS_40_BITS	(UL(2) << VTCR_EL2_PS_SHIFT)
+
 /*
  * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
  */
diff --git a/tools/testing/selftests/kvm/lib/arm64/nested.c b/tools/testing/selftests/kvm/lib/arm64/nested.c
index 111d02f44cfe..910f8cd30f96 100644
--- a/tools/testing/selftests/kvm/lib/arm64/nested.c
+++ b/tools/testing/selftests/kvm/lib/arm64/nested.c
@@ -1,8 +1,11 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * ARM64 Nested virtualization helpers
+ * ARM64 Nested virtualization helpers, nested page table code adapted from
+ * ../x86/vmx.c.
  */
 
+#include
+
 #include "kvm_util.h"
 #include "nested.h"
 #include "processor.h"
@@ -18,6 +21,87 @@ static void hvc_handler(struct ex_regs *regs)
 	regs->pc = (u64)after_hvc;
 }
 
+uint64_t get_l1_vtcr(void)
+{
+	return VTCR_EL2_PS_40_BITS | VTCR_EL2_TG0_4K | VTCR_EL2_ORGN0_WBWA |
+	       VTCR_EL2_IRGN0_WBWA | VTCR_EL2_SL0_LV0_4K | VTCR_EL2_T0SZ_BITS(48);
+}
+
+static void __nested_pg_map(struct kvm_vm *vm, uint64_t guest_pgd,
+			    uint64_t nested_paddr, uint64_t paddr, uint64_t flags)
+{
+	uint8_t attr_idx = flags & (PTE_ATTRINDX_MASK >> PTE_ATTRINDX_SHIFT);
+	uint64_t pg_attr;
+	uint64_t *ptep;
+
+	TEST_ASSERT((nested_paddr % vm->page_size) == 0,
+		    "L2 IPA not on page boundary,\n"
+		    "  nested_paddr: 0x%lx vm->page_size: 0x%x", nested_paddr, vm->page_size);
+	TEST_ASSERT((paddr % vm->page_size) == 0,
+		    "Guest physical address not on page boundary,\n"
+		    "  paddr: 0x%lx vm->page_size: 0x%x", paddr, vm->page_size);
+	TEST_ASSERT((paddr >> vm->page_shift) <= vm->max_gfn,
+		    "Physical address beyond maximum supported,\n"
+		    "  paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
+		    paddr, vm->max_gfn, vm->page_size);
+
+	ptep = addr_gpa2hva(vm, guest_pgd) + ((nested_paddr >> 39) & 0x1ffu) * 8;
+	if (!*ptep)
+		*ptep = (vm_alloc_page_table(vm) & GENMASK(47, 12)) | PGD_TYPE_TABLE | PTE_VALID;
+	ptep = addr_gpa2hva(vm, *ptep & GENMASK(47, 12)) + ((nested_paddr >> 30) & 0x1ffu) * 8;
+	if (!*ptep)
+		*ptep = (vm_alloc_page_table(vm) & GENMASK(47, 12)) | PUD_TYPE_TABLE | PTE_VALID;
+	ptep = addr_gpa2hva(vm, *ptep & GENMASK(47, 12)) + ((nested_paddr >> 21) & 0x1ffu) * 8;
+	if (!*ptep)
+		*ptep = (vm_alloc_page_table(vm) & GENMASK(47, 12)) | PMD_TYPE_TABLE | PTE_VALID;
+	ptep = addr_gpa2hva(vm, *ptep & GENMASK(47, 12)) + ((nested_paddr >> 12) & 0x1ffu) * 8;
+
+	pg_attr = PTE_AF | PTE_ATTRINDX(attr_idx) | PTE_TYPE_PAGE | PTE_VALID;
+	pg_attr |= PTE_SHARED;
+
+	*ptep = (paddr & GENMASK(47, 12)) | pg_attr;
+}
+
+void nested_map(struct kvm_vm *vm, vm_paddr_t guest_pgd,
+		uint64_t nested_paddr, uint64_t paddr, uint64_t size)
+{
+	size_t npages = size / SZ_4K;
+
+	TEST_ASSERT(nested_paddr + size > nested_paddr, "Vaddr overflow");
+	TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
+
+	while (npages--) {
+		__nested_pg_map(vm, guest_pgd, nested_paddr, paddr, MT_NORMAL);
+		nested_paddr += SZ_4K;
+		paddr += SZ_4K;
+	}
+}
+
+/*
+ * Prepare an identity shadow page table that maps all the
+ * physical pages in VM.
+ */
+void nested_map_memslot(struct kvm_vm *vm, vm_paddr_t guest_pgd,
+			uint32_t memslot)
+{
+	sparsebit_idx_t i, last;
+	struct userspace_mem_region *region =
+		memslot2region(vm, memslot);
+
+	i = (region->region.guest_phys_addr >> vm->page_shift) - 1;
+	last = i + (region->region.memory_size >> vm->page_shift);
+	for (;;) {
+		i = sparsebit_next_clear(region->unused_phy_pages, i);
+		if (i > last)
+			break;
+
+		nested_map(vm, guest_pgd,
+			   (uint64_t)i << vm->page_shift,
+			   (uint64_t)i << vm->page_shift,
+			   1 << vm->page_shift);
+	}
+}
+
 void prepare_l2_stack(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
 {
 	size_t l2_stack_size;
@@ -32,7 +116,16 @@ void prepare_l2_stack(struct kvm_vcpu *vcpu)
 
 void prepare_hyp_state(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
 {
-	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_HCR_EL2), HCR_EL2_RW);
+	vm_paddr_t guest_pgd;
+
+	guest_pgd = vm_phy_pages_alloc(vm, 1,
+				       KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+				       vm->memslots[MEM_REGION_PT]);
+	nested_map_memslot(vm, guest_pgd, 0);
+
+	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_HCR_EL2), HCR_EL2_RW | HCR_EL2_VM);
+	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_VTTBR_EL2), guest_pgd);
+	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_VTCR_EL2), get_l1_vtcr());
 }
 
 void prepare_eret_destination(struct kvm_vm *vm, struct kvm_vcpu *vcpu, void *l2_pc)
-- 
2.43.0