Reply-To: Sean Christopherson
Date: Tue, 30 Dec 2025 15:01:36 -0800
In-Reply-To: <20251230230150.4150236-1-seanjc@google.com>
Mime-Version: 1.0
References: <20251230230150.4150236-1-seanjc@google.com>
X-Mailer: git-send-email 2.52.0.351.gbe84eed79e-goog
Message-ID: <20251230230150.4150236-8-seanjc@google.com>
Subject: [PATCH v4 07/21] KVM: selftests: Plumb "struct kvm_mmu" into x86's MMU APIs
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
	Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
	Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, Yosry Ahmed
Content-Type: text/plain; charset="utf-8"

In preparation for generalizing the x86 virt mapping APIs to work with
TDP (stage-2) page tables, plumb "struct kvm_mmu" into all of the helper
functions instead of operating on vm->mmu directly.

Opportunistically swap the order of the checks in virt_get_pte()'s
assertion: first check whether the parent entry is the PGD/root, and only
then check that the PTE is PRESENT, as it makes more sense to rule out
the root (which is not a PTE) before testing the PTE's PRESENT bit.

No functional change intended.
Suggested-by: Sean Christopherson
Signed-off-by: Yosry Ahmed
[sean: rebase on common kvm_mmu structure, rewrite changelog]
Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/include/x86/processor.h     |  3 +-
 .../testing/selftests/kvm/lib/x86/processor.c | 68 +++++++++++--------
 2 files changed, 41 insertions(+), 30 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index c00c0fbe62cd..cbac9de29074 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -1449,7 +1449,8 @@ enum pg_level {
 #define PG_SIZE_2M PG_LEVEL_SIZE(PG_LEVEL_2M)
 #define PG_SIZE_1G PG_LEVEL_SIZE(PG_LEVEL_1G)
 
-void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level);
+void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, uint64_t vaddr,
+		   uint64_t paddr, int level);
 void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 		    uint64_t nr_bytes, int level);
 
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index f027f86d1535..f25742a804b0 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -156,26 +156,31 @@ bool kvm_is_tdp_enabled(void)
 		return get_kvm_amd_param_bool("npt");
 }
 
-void virt_arch_pgd_alloc(struct kvm_vm *vm)
+static void virt_mmu_init(struct kvm_vm *vm, struct kvm_mmu *mmu)
 {
-	TEST_ASSERT(vm->mode == VM_MODE_PXXVYY_4K,
-		    "Unknown or unsupported guest mode: 0x%x", vm->mode);
-	/* If needed, create the top-level page table. */
-	if (!vm->mmu.pgd_created) {
-		vm->mmu.pgd = vm_alloc_page_table(vm);
-		vm->mmu.pgd_created = true;
+	if (!mmu->pgd_created) {
+		mmu->pgd = vm_alloc_page_table(vm);
+		mmu->pgd_created = true;
 	}
 }
 
-static void *virt_get_pte(struct kvm_vm *vm, uint64_t *parent_pte,
-			  uint64_t vaddr, int level)
+void virt_arch_pgd_alloc(struct kvm_vm *vm)
+{
+	TEST_ASSERT(vm->mode == VM_MODE_PXXVYY_4K,
+		    "Unknown or unsupported guest mode: 0x%x", vm->mode);
+
+	virt_mmu_init(vm, &vm->mmu);
+}
+
+static void *virt_get_pte(struct kvm_vm *vm, struct kvm_mmu *mmu,
+			  uint64_t *parent_pte, uint64_t vaddr, int level)
 {
 	uint64_t pt_gpa = PTE_GET_PA(*parent_pte);
 	uint64_t *page_table = addr_gpa2hva(vm, pt_gpa);
 	int index = (vaddr >> PG_LEVEL_SHIFT(level)) & 0x1ffu;
 
-	TEST_ASSERT((*parent_pte & PTE_PRESENT_MASK) || parent_pte == &vm->mmu.pgd,
+	TEST_ASSERT((*parent_pte == mmu->pgd) || (*parent_pte & PTE_PRESENT_MASK),
 		    "Parent PTE (level %d) not PRESENT for gva: 0x%08lx",
 		    level + 1, vaddr);
 
@@ -183,13 +188,14 @@ static void *virt_get_pte(struct kvm_vm *vm, uint64_t *parent_pte,
 }
 
 static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
+				       struct kvm_mmu *mmu,
 				       uint64_t *parent_pte,
 				       uint64_t vaddr,
 				       uint64_t paddr,
 				       int current_level,
 				       int target_level)
 {
-	uint64_t *pte = virt_get_pte(vm, parent_pte, vaddr, current_level);
+	uint64_t *pte = virt_get_pte(vm, mmu, parent_pte, vaddr, current_level);
 
 	paddr = vm_untag_gpa(vm, paddr);
 
@@ -215,10 +221,11 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
 	return pte;
 }
 
-void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
+void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, uint64_t vaddr,
+		   uint64_t paddr, int level)
 {
 	const uint64_t pg_size = PG_LEVEL_SIZE(level);
-	uint64_t *pte = &vm->mmu.pgd;
+	uint64_t *pte = &mmu->pgd;
 	int current_level;
 
 	TEST_ASSERT(vm->mode == VM_MODE_PXXVYY_4K,
@@ -243,17 +250,17 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 	 * Allocate upper level page tables, if not already present.  Return
 	 * early if a hugepage was created.
 	 */
-	for (current_level = vm->mmu.pgtable_levels;
+	for (current_level = mmu->pgtable_levels;
 	     current_level > PG_LEVEL_4K;
 	     current_level--) {
-		pte = virt_create_upper_pte(vm, pte, vaddr, paddr,
+		pte = virt_create_upper_pte(vm, mmu, pte, vaddr, paddr,
 					    current_level, level);
 		if (*pte & PTE_LARGE_MASK)
 			return;
 	}
 
 	/* Fill in page table entry. */
-	pte = virt_get_pte(vm, pte, vaddr, PG_LEVEL_4K);
+	pte = virt_get_pte(vm, mmu, pte, vaddr, PG_LEVEL_4K);
 	TEST_ASSERT(!(*pte & PTE_PRESENT_MASK),
 		    "PTE already present for 4k page at vaddr: 0x%lx", vaddr);
 	*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK | (paddr & PHYSICAL_PAGE_MASK);
@@ -270,7 +277,7 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 
 void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 {
-	__virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K);
+	__virt_pg_map(vm, &vm->mmu, vaddr, paddr, PG_LEVEL_4K);
 }
 
 void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
@@ -285,7 +292,7 @@ void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 		    nr_bytes, pg_size);
 
 	for (i = 0; i < nr_pages; i++) {
-		__virt_pg_map(vm, vaddr, paddr, level);
+		__virt_pg_map(vm, &vm->mmu, vaddr, paddr, level);
 		sparsebit_set_num(vm->vpages_mapped, vaddr >> vm->page_shift,
 				  nr_bytes / PAGE_SIZE);
 
@@ -294,7 +301,8 @@ void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 	}
 }
 
-static bool vm_is_target_pte(uint64_t *pte, int *level, int current_level)
+static bool vm_is_target_pte(struct kvm_mmu *mmu, uint64_t *pte,
+			     int *level, int current_level)
 {
 	if (*pte & PTE_LARGE_MASK) {
 		TEST_ASSERT(*level == PG_LEVEL_NONE ||
@@ -306,17 +314,19 @@ static bool vm_is_target_pte(uint64_t *pte, int *level, int current_level)
 	return *level == current_level;
 }
 
-static uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
+static uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm,
+					   struct kvm_mmu *mmu,
+					   uint64_t vaddr,
 					   int *level)
 {
-	int va_width = 12 + (vm->mmu.pgtable_levels) * 9;
-	uint64_t *pte = &vm->mmu.pgd;
+	int va_width = 12 + (mmu->pgtable_levels) * 9;
+	uint64_t *pte = &mmu->pgd;
 	int current_level;
 
 	TEST_ASSERT(!vm->arch.is_pt_protected,
 		    "Walking page tables of protected guests is impossible");
 
-	TEST_ASSERT(*level >= PG_LEVEL_NONE && *level <= vm->mmu.pgtable_levels,
+	TEST_ASSERT(*level >= PG_LEVEL_NONE && *level <= mmu->pgtable_levels,
 		    "Invalid PG_LEVEL_* '%d'", *level);
 
 	TEST_ASSERT(vm->mode == VM_MODE_PXXVYY_4K,
@@ -332,22 +342,22 @@ static uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
 		    (((int64_t)vaddr << (64 - va_width) >> (64 - va_width))),
 		    "Canonical check failed.  The virtual address is invalid.");
 
-	for (current_level = vm->mmu.pgtable_levels;
+	for (current_level = mmu->pgtable_levels;
 	     current_level > PG_LEVEL_4K;
 	     current_level--) {
-		pte = virt_get_pte(vm, pte, vaddr, current_level);
-		if (vm_is_target_pte(pte, level, current_level))
+		pte = virt_get_pte(vm, mmu, pte, vaddr, current_level);
+		if (vm_is_target_pte(mmu, pte, level, current_level))
 			return pte;
 	}
 
-	return virt_get_pte(vm, pte, vaddr, PG_LEVEL_4K);
+	return virt_get_pte(vm, mmu, pte, vaddr, PG_LEVEL_4K);
 }
 
 uint64_t *vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr)
 {
 	int level = PG_LEVEL_4K;
 
-	return __vm_get_page_table_entry(vm, vaddr, &level);
+	return __vm_get_page_table_entry(vm, &vm->mmu, vaddr, &level);
 }
 
 void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
@@ -497,7 +507,7 @@ static void kvm_seg_set_kernel_data_64bit(struct kvm_segment *segp)
 vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 {
 	int level = PG_LEVEL_NONE;
-	uint64_t *pte = __vm_get_page_table_entry(vm, gva, &level);
+	uint64_t *pte = __vm_get_page_table_entry(vm, &vm->mmu, gva, &level);
 
 	TEST_ASSERT(*pte & PTE_PRESENT_MASK,
 		    "Leaf PTE not PRESENT for gva: 0x%08lx", gva);
-- 
2.52.0.351.gbe84eed79e-goog
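
[Editor's sketch, not part of the patch: a minimal illustration of how a caller
sees the reworked API.  Existing users pass &vm->mmu explicitly; the point of
the plumbing is that later patches can hand in a different "struct kvm_mmu"
(e.g. one backing TDP/stage-2 tables) without churning these signatures again.
map_into_mmu() and map_example() are hypothetical names; __virt_pg_map(),
vm_get_page_table_entry(), TEST_ASSERT(), PG_LEVEL_4K and PTE_PRESENT_MASK are
existing selftests library pieces.]

#include "kvm_util.h"
#include "processor.h"

/* Map one 4KiB page into whichever MMU instance the caller hands in. */
static void map_into_mmu(struct kvm_vm *vm, struct kvm_mmu *mmu,
			 uint64_t gva, uint64_t gpa)
{
	__virt_pg_map(vm, mmu, gva, gpa, PG_LEVEL_4K);
}

static void map_example(struct kvm_vm *vm, uint64_t gva, uint64_t gpa)
{
	/* Today's only user: the guest's regular MMU embedded in the VM. */
	map_into_mmu(vm, &vm->mmu, gva, gpa);

	/* The vm->mmu based walker should now find a PRESENT leaf PTE. */
	TEST_ASSERT(*vm_get_page_table_entry(vm, gva) & PTE_PRESENT_MASK,
		    "Expected PRESENT leaf PTE for gva 0x%lx", gva);
}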