Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:35 -0800
In-Reply-To:
<20251230230150.4150236-1-seanjc@google.com>
References: <20251230230150.4150236-1-seanjc@google.com>
Message-ID: <20251230230150.4150236-7-seanjc@google.com>
Subject: [PATCH v4 06/21] KVM: selftests: Add "struct kvm_mmu" to track a given MMU instance
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
	Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
	Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, Yosry Ahmed

Add a "struct kvm_mmu" to track a given MMU instance, e.g. a VM's stage-1
MMU versus a VM's stage-2 MMU, so that x86 can share MMU functionality for
both stage-1 and stage-2 MMUs without creating the potential for subtle
bugs, e.g. due to consuming vm->pgtable_levels when operating on a stage-2
MMU.

Encapsulate the existing de facto MMU in "struct kvm_vm", e.g. instead of
burying the MMU details in "struct kvm_vm_arch", to avoid more #ifdefs in
____vm_create(), and in the hopes that other architectures can utilize the
formalized MMU structure if/when they too support stage-2 page tables.

No functional change intended.
Signed-off-by: Sean Christopherson
Reviewed-by: Yosry Ahmed
---
 .../testing/selftests/kvm/include/kvm_util.h  | 11 ++++--
 .../selftests/kvm/lib/arm64/processor.c       | 38 +++++++++----------
 tools/testing/selftests/kvm/lib/kvm_util.c    | 28 +++++++-------
 .../selftests/kvm/lib/loongarch/processor.c   | 28 +++++++-------
 .../selftests/kvm/lib/riscv/processor.c       | 31 +++++++--------
 .../selftests/kvm/lib/s390/processor.c        | 16 ++++----
 .../testing/selftests/kvm/lib/x86/processor.c | 28 +++++++-------
 .../kvm/x86/vmx_nested_la57_state_test.c      |  2 +-
 8 files changed, 94 insertions(+), 88 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 81f4355ff28a..39558c05c0bf 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -88,12 +88,17 @@ enum kvm_mem_region_type {
 	NR_MEM_REGIONS,
 };
 
+struct kvm_mmu {
+	bool pgd_created;
+	uint64_t pgd;
+	int pgtable_levels;
+};
+
 struct kvm_vm {
 	int mode;
 	unsigned long type;
 	int kvm_fd;
 	int fd;
-	unsigned int pgtable_levels;
 	unsigned int page_size;
 	unsigned int page_shift;
 	unsigned int pa_bits;
@@ -104,13 +109,13 @@ struct kvm_vm {
 	struct sparsebit *vpages_valid;
 	struct sparsebit *vpages_mapped;
 	bool has_irqchip;
-	bool pgd_created;
 	vm_paddr_t ucall_mmio_addr;
-	vm_paddr_t pgd;
 	vm_vaddr_t handlers;
 	uint32_t dirty_ring_size;
 	uint64_t gpa_tag_mask;
 
+	struct kvm_mmu mmu;
+
 	struct kvm_vm_arch arch;
 
 	struct kvm_binary_stats stats;
diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
index d46e4b13b92c..c40f59d48311 100644
--- a/tools/testing/selftests/kvm/lib/arm64/processor.c
+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
@@ -28,7 +28,7 @@ static uint64_t page_align(struct kvm_vm *vm, uint64_t v)
 
 static uint64_t pgd_index(struct kvm_vm *vm, vm_vaddr_t gva)
 {
-	unsigned int shift = (vm->pgtable_levels - 1) * (vm->page_shift - 3) + vm->page_shift;
+	unsigned int shift = (vm->mmu.pgtable_levels - 1) * (vm->page_shift - 3) + vm->page_shift;
 	uint64_t mask = (1UL << (vm->va_bits - shift)) - 1;
 
 	return (gva >> shift) & mask;
@@ -39,7 +39,7 @@ static uint64_t pud_index(struct kvm_vm *vm, vm_vaddr_t gva)
 	unsigned int shift = 2 * (vm->page_shift - 3) + vm->page_shift;
 	uint64_t mask = (1UL << (vm->page_shift - 3)) - 1;
 
-	TEST_ASSERT(vm->pgtable_levels == 4,
+	TEST_ASSERT(vm->mmu.pgtable_levels == 4,
 		    "Mode %d does not have 4 page table levels", vm->mode);
 
 	return (gva >> shift) & mask;
@@ -50,7 +50,7 @@ static uint64_t pmd_index(struct kvm_vm *vm, vm_vaddr_t gva)
 	unsigned int shift = (vm->page_shift - 3) + vm->page_shift;
 	uint64_t mask = (1UL << (vm->page_shift - 3)) - 1;
 
-	TEST_ASSERT(vm->pgtable_levels >= 3,
+	TEST_ASSERT(vm->mmu.pgtable_levels >= 3,
 		    "Mode %d does not have >= 3 page table levels", vm->mode);
 
 	return (gva >> shift) & mask;
@@ -104,7 +104,7 @@ static uint64_t pte_addr(struct kvm_vm *vm, uint64_t pte)
 
 static uint64_t ptrs_per_pgd(struct kvm_vm *vm)
 {
-	unsigned int shift = (vm->pgtable_levels - 1) * (vm->page_shift - 3) + vm->page_shift;
+	unsigned int shift = (vm->mmu.pgtable_levels - 1) * (vm->page_shift - 3) + vm->page_shift;
 	return 1 << (vm->va_bits - shift);
 }
 
@@ -117,13 +117,13 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 {
 	size_t nr_pages = page_align(vm, ptrs_per_pgd(vm) * 8) / vm->page_size;
 
-	if (vm->pgd_created)
+	if (vm->mmu.pgd_created)
 		return;
 
-	vm->pgd = vm_phy_pages_alloc(vm, nr_pages,
-				     KVM_GUEST_PAGE_TABLE_MIN_PADDR,
-				     vm->memslots[MEM_REGION_PT]);
-	vm->pgd_created = true;
+	vm->mmu.pgd = vm_phy_pages_alloc(vm, nr_pages,
+					 KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+					 vm->memslots[MEM_REGION_PT]);
+	vm->mmu.pgd_created = true;
 }
 
 static void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
@@ -147,12 +147,12 @@ static void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 		    "  paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
 		    paddr, vm->max_gfn, vm->page_size);
 
-	ptep = addr_gpa2hva(vm, vm->pgd) + pgd_index(vm, vaddr) * 8;
+	ptep = addr_gpa2hva(vm, vm->mmu.pgd) + pgd_index(vm, vaddr) * 8;
 	if (!*ptep)
 		*ptep = addr_pte(vm, vm_alloc_page_table(vm),
 				 PGD_TYPE_TABLE | PTE_VALID);
 
-	switch (vm->pgtable_levels) {
+	switch (vm->mmu.pgtable_levels) {
 	case 4:
 		ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pud_index(vm, vaddr) * 8;
 		if (!*ptep)
@@ -190,16 +190,16 @@ uint64_t *virt_get_pte_hva_at_level(struct kvm_vm *vm, vm_vaddr_t gva, int level)
 {
 	uint64_t *ptep;
 
-	if (!vm->pgd_created)
+	if (!vm->mmu.pgd_created)
 		goto unmapped_gva;
 
-	ptep = addr_gpa2hva(vm, vm->pgd) + pgd_index(vm, gva) * 8;
+	ptep = addr_gpa2hva(vm, vm->mmu.pgd) + pgd_index(vm, gva) * 8;
 	if (!ptep)
 		goto unmapped_gva;
 	if (level == 0)
 		return ptep;
 
-	switch (vm->pgtable_levels) {
+	switch (vm->mmu.pgtable_levels) {
 	case 4:
 		ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pud_index(vm, gva) * 8;
 		if (!ptep)
@@ -263,13 +263,13 @@ static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent, uint64_t p
 
 void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 {
-	int level = 4 - (vm->pgtable_levels - 1);
+	int level = 4 - (vm->mmu.pgtable_levels - 1);
 	uint64_t pgd, *ptep;
 
-	if (!vm->pgd_created)
+	if (!vm->mmu.pgd_created)
 		return;
 
-	for (pgd = vm->pgd; pgd < vm->pgd + ptrs_per_pgd(vm) * 8; pgd += 8) {
+	for (pgd = vm->mmu.pgd; pgd < vm->mmu.pgd + ptrs_per_pgd(vm) * 8; pgd += 8) {
 		ptep = addr_gpa2hva(vm, pgd);
 		if (!*ptep)
 			continue;
@@ -350,7 +350,7 @@ void aarch64_vcpu_setup(struct kvm_vcpu *vcpu, struct kvm_vcpu_init *init)
 		TEST_FAIL("Unknown guest mode, mode: 0x%x", vm->mode);
 	}
 
-	ttbr0_el1 = vm->pgd & GENMASK(47, vm->page_shift);
+	ttbr0_el1 = vm->mmu.pgd & GENMASK(47, vm->page_shift);
 
 	/* Configure output size */
 	switch (vm->mode) {
@@ -358,7 +358,7 @@ void aarch64_vcpu_setup(struct kvm_vcpu *vcpu, struct kvm_vcpu_init *init)
 	case VM_MODE_P52V48_16K:
 	case VM_MODE_P52V48_64K:
 		tcr_el1 |= TCR_IPS_52_BITS;
-		ttbr0_el1 |= FIELD_GET(GENMASK(51, 48), vm->pgd) << 2;
+		ttbr0_el1 |= FIELD_GET(GENMASK(51, 48), vm->mmu.pgd) << 2;
 		break;
 	case VM_MODE_P48V48_4K:
 	case VM_MODE_P48V48_16K:
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 8279b6ced8d2..65752daeed90 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -281,34 +281,34 @@ struct kvm_vm *____vm_create(struct vm_shape shape)
 	/* Setup mode specific traits. */
 	switch (vm->mode) {
 	case VM_MODE_P52V48_4K:
-		vm->pgtable_levels = 4;
+		vm->mmu.pgtable_levels = 4;
 		break;
 	case VM_MODE_P52V48_64K:
-		vm->pgtable_levels = 3;
+		vm->mmu.pgtable_levels = 3;
 		break;
 	case VM_MODE_P48V48_4K:
-		vm->pgtable_levels = 4;
+		vm->mmu.pgtable_levels = 4;
 		break;
 	case VM_MODE_P48V48_64K:
-		vm->pgtable_levels = 3;
+		vm->mmu.pgtable_levels = 3;
 		break;
 	case VM_MODE_P40V48_4K:
 	case VM_MODE_P36V48_4K:
-		vm->pgtable_levels = 4;
+		vm->mmu.pgtable_levels = 4;
 		break;
 	case VM_MODE_P40V48_64K:
 	case VM_MODE_P36V48_64K:
-		vm->pgtable_levels = 3;
+		vm->mmu.pgtable_levels = 3;
 		break;
 	case VM_MODE_P52V48_16K:
 	case VM_MODE_P48V48_16K:
 	case VM_MODE_P40V48_16K:
 	case VM_MODE_P36V48_16K:
-		vm->pgtable_levels = 4;
+		vm->mmu.pgtable_levels = 4;
 		break;
 	case VM_MODE_P47V47_16K:
 	case VM_MODE_P36V47_16K:
-		vm->pgtable_levels = 3;
+		vm->mmu.pgtable_levels = 3;
 		break;
 	case VM_MODE_PXXVYY_4K:
 #ifdef __x86_64__
@@ -321,22 +321,22 @@ struct kvm_vm *____vm_create(struct vm_shape shape)
 			    vm->va_bits);
 
 		if (vm->va_bits == 57) {
-			vm->pgtable_levels = 5;
+			vm->mmu.pgtable_levels = 5;
 		} else {
 			TEST_ASSERT(vm->va_bits == 48,
 				    "Unexpected guest virtual address width: %d",
 				    vm->va_bits);
-			vm->pgtable_levels = 4;
+			vm->mmu.pgtable_levels = 4;
 		}
 #else
 		TEST_FAIL("VM_MODE_PXXVYY_4K not supported on non-x86 platforms");
 #endif
 		break;
 	case VM_MODE_P47V64_4K:
-		vm->pgtable_levels = 5;
+		vm->mmu.pgtable_levels = 5;
 		break;
 	case VM_MODE_P44V64_4K:
-		vm->pgtable_levels = 5;
+		vm->mmu.pgtable_levels = 5;
 		break;
 	default:
 		TEST_FAIL("Unknown guest mode: 0x%x", vm->mode);
@@ -1956,8 +1956,8 @@ void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 	fprintf(stream, "%*sMapped Virtual Pages:\n", indent, "");
 	sparsebit_dump(stream, vm->vpages_mapped, indent + 2);
 	fprintf(stream, "%*spgd_created: %u\n", indent, "",
-		vm->pgd_created);
-	if (vm->pgd_created) {
+		vm->mmu.pgd_created);
+	if (vm->mmu.pgd_created) {
 		fprintf(stream, "%*sVirtual Translation Tables:\n",
 			indent + 2, "");
 		virt_dump(stream, vm, indent + 4);
diff --git a/tools/testing/selftests/kvm/lib/loongarch/processor.c b/tools/testing/selftests/kvm/lib/loongarch/processor.c
index 07c103369ddb..17aa55a2047a 100644
--- a/tools/testing/selftests/kvm/lib/loongarch/processor.c
+++ b/tools/testing/selftests/kvm/lib/loongarch/processor.c
@@ -50,11 +50,11 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 	int i;
 	vm_paddr_t child, table;
 
-	if (vm->pgd_created)
+	if (vm->mmu.pgd_created)
 		return;
 
 	child = table = 0;
-	for (i = 0; i < vm->pgtable_levels; i++) {
+	for (i = 0; i < vm->mmu.pgtable_levels; i++) {
 		invalid_pgtable[i] = child;
 		table = vm_phy_page_alloc(vm, LOONGARCH_PAGE_TABLE_PHYS_MIN,
 					  vm->memslots[MEM_REGION_PT]);
@@ -62,8 +62,8 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 		virt_set_pgtable(vm, table, child);
 		child = table;
 	}
-	vm->pgd = table;
-	vm->pgd_created = true;
+	vm->mmu.pgd = table;
+	vm->mmu.pgd_created = true;
 }
 
 static int virt_pte_none(uint64_t *ptep, int level)
@@ -77,11 +77,11 @@ static uint64_t *virt_populate_pte(struct kvm_vm *vm, vm_vaddr_t gva, int alloc)
 	uint64_t *ptep;
 	vm_paddr_t child;
 
-	if (!vm->pgd_created)
+	if (!vm->mmu.pgd_created)
 		goto unmapped_gva;
 
-	child = vm->pgd;
-	level = vm->pgtable_levels - 1;
+	child = vm->mmu.pgd;
+	level = vm->mmu.pgtable_levels - 1;
 	while (level > 0) {
 		ptep = addr_gpa2hva(vm, child) + virt_pte_index(vm, gva, level) * 8;
 		if (virt_pte_none(ptep, level)) {
@@ -161,11 +161,11 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 {
 	int level;
 
-	if (!vm->pgd_created)
+	if (!vm->mmu.pgd_created)
 		return;
 
-	level = vm->pgtable_levels - 1;
-	pte_dump(stream, vm, indent, vm->pgd, level);
+	level = vm->mmu.pgtable_levels - 1;
+	pte_dump(stream, vm, indent, vm->mmu.pgd, level);
 }
 
 void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
@@ -297,7 +297,7 @@ static void loongarch_vcpu_setup(struct kvm_vcpu *vcpu)
 
 	width = vm->page_shift - 3;
 
-	switch (vm->pgtable_levels) {
+	switch (vm->mmu.pgtable_levels) {
 	case 4:
 		/* pud page shift and width */
 		val = (vm->page_shift + width * 2) << 20 | (width << 25);
@@ -309,15 +309,15 @@ static void loongarch_vcpu_setup(struct kvm_vcpu *vcpu)
 		val |= vm->page_shift | width << 5;
 		break;
 	default:
-		TEST_FAIL("Got %u page table levels, expected 3 or 4", vm->pgtable_levels);
+		TEST_FAIL("Got %u page table levels, expected 3 or 4", vm->mmu.pgtable_levels);
 	}
 
 	loongarch_set_csr(vcpu, LOONGARCH_CSR_PWCTL0, val);
 
 	/* PGD page shift and width */
-	val = (vm->page_shift + width * (vm->pgtable_levels - 1)) | width << 6;
+	val = (vm->page_shift + width * (vm->mmu.pgtable_levels - 1)) | width << 6;
 	loongarch_set_csr(vcpu, LOONGARCH_CSR_PWCTL1, val);
-	loongarch_set_csr(vcpu, LOONGARCH_CSR_PGDL, vm->pgd);
+	loongarch_set_csr(vcpu, LOONGARCH_CSR_PGDL, vm->mmu.pgd);
 
 	/*
 	 * Refill exception runs on real mode
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index 2eac7d4b59e9..e6ec7c224fc3 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -60,7 +60,7 @@ static uint64_t pte_index(struct kvm_vm *vm, vm_vaddr_t gva, int level)
 {
 	TEST_ASSERT(level > -1,
 		    "Negative page table level (%d) not possible", level);
-	TEST_ASSERT(level < vm->pgtable_levels,
+	TEST_ASSERT(level < vm->mmu.pgtable_levels,
 		    "Invalid page table level (%d)", level);
 
 	return (gva & pte_index_mask[level]) >> pte_index_shift[level];
@@ -70,19 +70,19 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 {
 	size_t nr_pages = page_align(vm, ptrs_per_pte(vm) * 8) / vm->page_size;
 
-	if (vm->pgd_created)
+	if (vm->mmu.pgd_created)
 		return;
 
-	vm->pgd = vm_phy_pages_alloc(vm, nr_pages,
-				     KVM_GUEST_PAGE_TABLE_MIN_PADDR,
-				     vm->memslots[MEM_REGION_PT]);
-	vm->pgd_created = true;
+	vm->mmu.pgd = vm_phy_pages_alloc(vm, nr_pages,
+					 KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+					 vm->memslots[MEM_REGION_PT]);
+	vm->mmu.pgd_created = true;
 }
 
 void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 {
 	uint64_t *ptep, next_ppn;
-	int level = vm->pgtable_levels - 1;
+	int level = vm->mmu.pgtable_levels - 1;
 
 	TEST_ASSERT((vaddr % vm->page_size) == 0,
 		    "Virtual address not on page boundary,\n"
@@ -98,7 +98,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 		    "  paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
 		    paddr, vm->max_gfn, vm->page_size);
 
-	ptep = addr_gpa2hva(vm, vm->pgd) + pte_index(vm, vaddr, level) * 8;
+	ptep = addr_gpa2hva(vm, vm->mmu.pgd) + pte_index(vm, vaddr, level) * 8;
 	if (!*ptep) {
 		next_ppn = vm_alloc_page_table(vm) >> PGTBL_PAGE_SIZE_SHIFT;
 		*ptep = (next_ppn << PGTBL_PTE_ADDR_SHIFT) |
@@ -126,12 +126,12 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 {
 	uint64_t *ptep;
-	int level = vm->pgtable_levels - 1;
+	int level = vm->mmu.pgtable_levels - 1;
 
-	if (!vm->pgd_created)
+	if (!vm->mmu.pgd_created)
 		goto unmapped_gva;
 
-	ptep = addr_gpa2hva(vm, vm->pgd) + pte_index(vm, gva, level) * 8;
+	ptep = addr_gpa2hva(vm, vm->mmu.pgd) + pte_index(vm, gva, level) * 8;
 	if (!ptep)
 		goto unmapped_gva;
 	level--;
@@ -176,13 +176,14 @@ static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent,
 
 void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 {
-	int level = vm->pgtable_levels - 1;
+	struct kvm_mmu *mmu = &vm->mmu;
+	int level = mmu->pgtable_levels - 1;
 	uint64_t pgd, *ptep;
 
-	if (!vm->pgd_created)
+	if (!mmu->pgd_created)
 		return;
 
-	for (pgd = vm->pgd; pgd < vm->pgd + ptrs_per_pte(vm) * 8; pgd += 8) {
+	for (pgd = mmu->pgd; pgd < mmu->pgd + ptrs_per_pte(vm) * 8; pgd += 8) {
 		ptep = addr_gpa2hva(vm, pgd);
 		if (!*ptep)
 			continue;
@@ -211,7 +212,7 @@ void riscv_vcpu_mmu_setup(struct kvm_vcpu *vcpu)
 		TEST_FAIL("Unknown guest mode, mode: 0x%x", vm->mode);
 	}
 
-	satp = (vm->pgd >> PGTBL_PAGE_SIZE_SHIFT) & SATP_PPN;
+	satp = (vm->mmu.pgd >> PGTBL_PAGE_SIZE_SHIFT) & SATP_PPN;
 	satp |= SATP_MODE_48;
 
 	vcpu_set_reg(vcpu, RISCV_GENERAL_CSR_REG(satp), satp);
diff --git a/tools/testing/selftests/kvm/lib/s390/processor.c b/tools/testing/selftests/kvm/lib/s390/processor.c
index 8ceeb17c819a..6a9a660413a7 100644
--- a/tools/testing/selftests/kvm/lib/s390/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390/processor.c
@@ -17,7 +17,7 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 	TEST_ASSERT(vm->page_size == PAGE_SIZE, "Unsupported page size: 0x%x",
 		    vm->page_size);
 
-	if (vm->pgd_created)
+	if (vm->mmu.pgd_created)
 		return;
 
 	paddr = vm_phy_pages_alloc(vm, PAGES_PER_REGION,
@@ -25,8 +25,8 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 				   vm->memslots[MEM_REGION_PT]);
 	memset(addr_gpa2hva(vm, paddr), 0xff, PAGES_PER_REGION * vm->page_size);
 
-	vm->pgd = paddr;
-	vm->pgd_created = true;
+	vm->mmu.pgd = paddr;
+	vm->mmu.pgd_created = true;
 }
 
 /*
@@ -70,7 +70,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa)
 		    gva, vm->max_gfn, vm->page_size);
 
 	/* Walk through region and segment tables */
-	entry = addr_gpa2hva(vm, vm->pgd);
+	entry = addr_gpa2hva(vm, vm->mmu.pgd);
 	for (ri = 1; ri <= 4; ri++) {
 		idx = (gva >> (64 - 11 * ri)) & 0x7ffu;
 		if (entry[idx] & REGION_ENTRY_INVALID)
@@ -94,7 +94,7 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 	TEST_ASSERT(vm->page_size == PAGE_SIZE, "Unsupported page size: 0x%x",
 		    vm->page_size);
 
-	entry = addr_gpa2hva(vm, vm->pgd);
+	entry = addr_gpa2hva(vm, vm->mmu.pgd);
 	for (ri = 1; ri <= 4; ri++) {
 		idx = (gva >> (64 - 11 * ri)) & 0x7ffu;
 		TEST_ASSERT(!(entry[idx] & REGION_ENTRY_INVALID),
@@ -149,10 +149,10 @@ static void virt_dump_region(FILE *stream, struct kvm_vm *vm, uint8_t indent,
 
 void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 {
-	if (!vm->pgd_created)
+	if (!vm->mmu.pgd_created)
 		return;
 
-	virt_dump_region(stream, vm, indent, vm->pgd);
+	virt_dump_region(stream, vm, indent, vm->mmu.pgd);
 }
 
 void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code)
@@ -184,7 +184,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
 
 	vcpu_sregs_get(vcpu, &sregs);
 	sregs.crs[0] |= 0x00040000;		/* Enable floating point regs */
-	sregs.crs[1] = vm->pgd | 0xf;		/* Primary region table */
+	sregs.crs[1] = vm->mmu.pgd | 0xf;	/* Primary region table */
 	vcpu_sregs_set(vcpu, &sregs);
 
 	vcpu->run->psw_mask = 0x0400000180000000ULL;  /* DAT enabled + 64 bit mode */
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index c14bf2b5f28f..f027f86d1535 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -162,9 +162,9 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 		    "Unknown or unsupported guest mode: 0x%x", vm->mode);
 
 	/* If needed, create the top-level page table. */
-	if (!vm->pgd_created) {
-		vm->pgd = vm_alloc_page_table(vm);
-		vm->pgd_created = true;
+	if (!vm->mmu.pgd_created) {
+		vm->mmu.pgd = vm_alloc_page_table(vm);
+		vm->mmu.pgd_created = true;
 	}
 }
 
@@ -175,7 +175,7 @@ static void *virt_get_pte(struct kvm_vm *vm, uint64_t *parent_pte,
 	uint64_t *page_table = addr_gpa2hva(vm, pt_gpa);
 	int index = (vaddr >> PG_LEVEL_SHIFT(level)) & 0x1ffu;
 
-	TEST_ASSERT((*parent_pte & PTE_PRESENT_MASK) || parent_pte == &vm->pgd,
+	TEST_ASSERT((*parent_pte & PTE_PRESENT_MASK) || parent_pte == &vm->mmu.pgd,
 		    "Parent PTE (level %d) not PRESENT for gva: 0x%08lx",
 		    level + 1, vaddr);
 
@@ -218,7 +218,7 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
 void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 {
 	const uint64_t pg_size = PG_LEVEL_SIZE(level);
-	uint64_t *pte = &vm->pgd;
+	uint64_t *pte = &vm->mmu.pgd;
 	int current_level;
 
 	TEST_ASSERT(vm->mode == VM_MODE_PXXVYY_4K,
@@ -243,7 +243,7 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 	 * Allocate upper level page tables, if not already present.  Return
 	 * early if a hugepage was created.
 	 */
-	for (current_level = vm->pgtable_levels;
+	for (current_level = vm->mmu.pgtable_levels;
 	     current_level > PG_LEVEL_4K;
 	     current_level--) {
 		pte = virt_create_upper_pte(vm, pte, vaddr, paddr,
@@ -309,14 +309,14 @@ static bool vm_is_target_pte(uint64_t *pte, int *level, int current_level)
 static uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
 					   int *level)
 {
-	int va_width = 12 + (vm->pgtable_levels) * 9;
-	uint64_t *pte = &vm->pgd;
+	int va_width = 12 + (vm->mmu.pgtable_levels) * 9;
+	uint64_t *pte = &vm->mmu.pgd;
 	int current_level;
 
 	TEST_ASSERT(!vm->arch.is_pt_protected,
 		    "Walking page tables of protected guests is impossible");
 
-	TEST_ASSERT(*level >= PG_LEVEL_NONE && *level <= vm->pgtable_levels,
+	TEST_ASSERT(*level >= PG_LEVEL_NONE && *level <= vm->mmu.pgtable_levels,
 		    "Invalid PG_LEVEL_* '%d'", *level);
 
 	TEST_ASSERT(vm->mode == VM_MODE_PXXVYY_4K,
@@ -332,7 +332,7 @@ static uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
 		    (((int64_t)vaddr << (64 - va_width) >> (64 - va_width))),
 		    "Canonical check failed.  The virtual address is invalid.");
 
-	for (current_level = vm->pgtable_levels;
+	for (current_level = vm->mmu.pgtable_levels;
 	     current_level > PG_LEVEL_4K;
 	     current_level--) {
 		pte = virt_get_pte(vm, pte, vaddr, current_level);
@@ -357,7 +357,7 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 	uint64_t *pde, *pde_start;
 	uint64_t *pte, *pte_start;
 
-	if (!vm->pgd_created)
+	if (!vm->mmu.pgd_created)
 		return;
 
 	fprintf(stream, "%*s "
@@ -365,7 +365,7 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 	fprintf(stream, "%*s      index hvaddr gpaddr "
 		"addr w exec dirty\n",
 		indent, "");
-	pml4e_start = (uint64_t *) addr_gpa2hva(vm, vm->pgd);
+	pml4e_start = (uint64_t *) addr_gpa2hva(vm, vm->mmu.pgd);
 	for (uint16_t n1 = 0; n1 <= 0x1ffu; n1++) {
 		pml4e = &pml4e_start[n1];
 		if (!(*pml4e & PTE_PRESENT_MASK))
@@ -538,7 +538,7 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
 	sregs.cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR;
 	if (kvm_cpu_has(X86_FEATURE_XSAVE))
 		sregs.cr4 |= X86_CR4_OSXSAVE;
-	if (vm->pgtable_levels == 5)
+	if (vm->mmu.pgtable_levels == 5)
 		sregs.cr4 |= X86_CR4_LA57;
 	sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX);
 
@@ -549,7 +549,7 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
 	kvm_seg_set_kernel_data_64bit(&sregs.gs);
 	kvm_seg_set_tss_64bit(vm->arch.tss, &sregs.tr);
 
-	sregs.cr3 = vm->pgd;
+	sregs.cr3 = vm->mmu.pgd;
 	vcpu_sregs_set(vcpu, &sregs);
 }
 
diff --git a/tools/testing/selftests/kvm/x86/vmx_nested_la57_state_test.c b/tools/testing/selftests/kvm/x86/vmx_nested_la57_state_test.c
index cf1d2d1f2a8f..915c42001dba 100644
--- a/tools/testing/selftests/kvm/x86/vmx_nested_la57_state_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_nested_la57_state_test.c
@@ -90,7 +90,7 @@ int main(int argc, char *argv[])
 	 * L1 needs to read its own PML5 table to set up L2.  Identity map
 	 * the PML5 table to facilitate this.
 	 */
-	virt_map(vm, vm->pgd, vm->pgd, 1);
+	virt_map(vm, vm->mmu.pgd, vm->mmu.pgd, 1);
 
 	vcpu_alloc_vmx(vm, &vmx_pages_gva);
 	vcpu_args_set(vcpu, 1, vmx_pages_gva);
-- 
2.52.0.351.gbe84eed79e-goog