From: Sean Christopherson <seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Reima Ishii
Subject: [PATCH v2 1/5] KVM: x86/mmu: Add helper to convert root hpa to shadow page
Date: Fri, 28 Jul 2023 17:51:56 -0700
Message-ID: <20230729005200.1057358-2-seanjc@google.com>
In-Reply-To: <20230729005200.1057358-1-seanjc@google.com>
References: <20230729005200.1057358-1-seanjc@google.com>

Add a dedicated helper for converting a root hpa to a shadow page in
anticipation of using a "dummy" root to handle the scenario where KVM
needs to load a valid shadow root (from hardware's perspective), but
the guest doesn't have a visible root to shadow. Similar to PAE roots,
the dummy root won't have an associated kvm_mmu_page and will need
special handling when finding a shadow page given a root.

Opportunistically retrieve the root shadow page in kvm_mmu_sync_roots()
*after* verifying the root is unsync (the dummy root can never be
unsync).

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
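Note for readers following along without the tree handy: below is a
minimal standalone userspace sketch of the conversion, not kernel code.
The 52-bit physical-address ceiling, the toy pfn-indexed table, and
main() are illustrative assumptions; only the root_to_sp() ->
spte_to_child_sp() -> to_shadow_page() call chain mirrors what the
patch adds.

/*
 * Standalone model of root_to_sp() -- NOT kernel code.  The mask width
 * and the pfn-indexed lookup table are simplifying assumptions that
 * stand in for KVM's pfn -> struct page -> kvm_mmu_page bookkeeping.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t hpa_t;

#define PAGE_SHIFT	12
/* Keep bits 51:12, i.e. the physical address of a 4KiB-aligned page. */
#define SPTE_BASE_ADDR_MASK	(((1ULL << 52) - 1) & ~((1ULL << PAGE_SHIFT) - 1))

struct kvm_mmu_page {
	unsigned long role_word;
};

/* Toy stand-in for KVM's per-pfn shadow page bookkeeping. */
static struct kvm_mmu_page *pfn_to_sp[16];

static struct kvm_mmu_page *to_shadow_page(hpa_t shadow_page)
{
	uint64_t pfn = shadow_page >> PAGE_SHIFT;

	return pfn < 16 ? pfn_to_sp[pfn] : NULL;
}

static struct kvm_mmu_page *spte_to_child_sp(uint64_t spte)
{
	/* Strip non-PA bits (e.g. bit 63) before deriving the pfn. */
	return to_shadow_page(spte & SPTE_BASE_ADDR_MASK);
}

static struct kvm_mmu_page *root_to_sp(hpa_t root)
{
	/* Treat the root as a SPTE so that any non-PA bits are dropped. */
	return spte_to_child_sp(root);
}

int main(void)
{
	struct kvm_mmu_page sp = { .role_word = 0xabc };
	/* A root hpa carrying a software bit above the PA, e.g. bit 63. */
	hpa_t root = (3ULL << PAGE_SHIFT) | (1ULL << 63);

	pfn_to_sp[3] = &sp;

	/* Masking recovers the backing shadow page despite the high bit. */
	printf("backed root:   %p\n", (void *)root_to_sp(root));
	/* A root with no backing page (PAE/dummy) resolves to NULL. */
	printf("unbacked root: %p\n", (void *)root_to_sp(5ULL << PAGE_SHIFT));
	return 0;
}

The takeaway: stray non-PA bits are dropped before the lookup, and a
root with no associated kvm_mmu_page (a PAE root or, after this series,
the dummy root) comes back as NULL, which is exactly the case the
updated call sites below check for.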
 arch/x86/kvm/mmu/mmu.c     | 28 +++++++++++++---------------
 arch/x86/kvm/mmu/spte.h    |  9 +++++++++
 arch/x86/kvm/mmu/tdp_mmu.c |  2 +-
 3 files changed, 23 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ec169f5c7dce..1eadfcde30be 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3574,11 +3574,7 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
 	if (!VALID_PAGE(*root_hpa))
 		return;
 
-	/*
-	 * The "root" may be a special root, e.g. a PAE entry, treat it as a
-	 * SPTE to ensure any non-PA bits are dropped.
-	 */
-	sp = spte_to_child_sp(*root_hpa);
+	sp = root_to_sp(*root_hpa);
 	if (WARN_ON(!sp))
 		return;
 
@@ -3624,7 +3620,7 @@ void kvm_mmu_free_roots(struct kvm *kvm, struct kvm_mmu *mmu,
 					   &invalid_list);
 
 	if (free_active_root) {
-		if (to_shadow_page(mmu->root.hpa)) {
+		if (root_to_sp(mmu->root.hpa)) {
 			mmu_free_root_page(kvm, &mmu->root.hpa, &invalid_list);
 		} else if (mmu->pae_root) {
 			for (i = 0; i < 4; ++i) {
@@ -3648,6 +3644,7 @@ EXPORT_SYMBOL_GPL(kvm_mmu_free_roots);
 void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, struct kvm_mmu *mmu)
 {
 	unsigned long roots_to_free = 0;
+	struct kvm_mmu_page *sp;
 	hpa_t root_hpa;
 	int i;
 
@@ -3662,8 +3659,8 @@ void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, struct kvm_mmu *mmu)
 		if (!VALID_PAGE(root_hpa))
 			continue;
 
-		if (!to_shadow_page(root_hpa) ||
-		    to_shadow_page(root_hpa)->role.guest_mode)
+		sp = root_to_sp(root_hpa);
+		if (!sp || sp->role.guest_mode)
 			roots_to_free |= KVM_MMU_ROOT_PREVIOUS(i);
 	}
 
@@ -4018,7 +4015,7 @@ static bool is_unsync_root(hpa_t root)
 	 * requirement isn't satisfied.
 	 */
 	smp_rmb();
-	sp = to_shadow_page(root);
+	sp = root_to_sp(root);
 
 	/*
 	 * PAE roots (somewhat arbitrarily) aren't backed by shadow pages, the
@@ -4048,11 +4045,12 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 
 	if (vcpu->arch.mmu->cpu_role.base.level >= PT64_ROOT_4LEVEL) {
 		hpa_t root = vcpu->arch.mmu->root.hpa;
-		sp = to_shadow_page(root);
 
 		if (!is_unsync_root(root))
 			return;
 
+		sp = root_to_sp(root);
+
 		write_lock(&vcpu->kvm->mmu_lock);
 		mmu_sync_children(vcpu, sp, true);
 		write_unlock(&vcpu->kvm->mmu_lock);
@@ -4382,7 +4380,7 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
 				struct kvm_page_fault *fault)
 {
-	struct kvm_mmu_page *sp = to_shadow_page(vcpu->arch.mmu->root.hpa);
+	struct kvm_mmu_page *sp = root_to_sp(vcpu->arch.mmu->root.hpa);
 
 	/* Special roots, e.g. pae_root, are not backed by shadow pages. */
 	if (sp && is_obsolete_sp(vcpu->kvm, sp))
@@ -4564,7 +4562,7 @@ static inline bool is_root_usable(struct kvm_mmu_root_info *root, gpa_t pgd,
 {
 	return (role.direct || pgd == root->pgd) &&
 	       VALID_PAGE(root->hpa) &&
-	       role.word == to_shadow_page(root->hpa)->role.word;
+	       role.word == root_to_sp(root->hpa)->role.word;
 }
 
 /*
@@ -4638,7 +4636,7 @@ static bool fast_pgd_switch(struct kvm *kvm, struct kvm_mmu *mmu,
 	 * having to deal with PDPTEs. We may add support for 32-bit hosts/VMs
 	 * later if necessary.
 	 */
-	if (VALID_PAGE(mmu->root.hpa) && !to_shadow_page(mmu->root.hpa))
+	if (VALID_PAGE(mmu->root.hpa) && !root_to_sp(mmu->root.hpa))
 		kvm_mmu_free_roots(kvm, mmu, KVM_MMU_ROOT_CURRENT);
 
 	if (VALID_PAGE(mmu->root.hpa))
@@ -4686,7 +4684,7 @@ void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd)
 	 */
 	if (!new_role.direct)
 		__clear_sp_write_flooding_count(
-				to_shadow_page(vcpu->arch.mmu->root.hpa));
+				root_to_sp(vcpu->arch.mmu->root.hpa));
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_new_pgd);
 
@@ -5555,7 +5553,7 @@ static bool is_obsolete_root(struct kvm *kvm, hpa_t root_hpa)
 	 * (c) KVM doesn't track previous roots for PAE paging, and the guest
 	 *     is unlikely to zap an in-use PGD.
 	 */
-	sp = to_shadow_page(root_hpa);
+	sp = root_to_sp(root_hpa);
 	return !sp || is_obsolete_sp(kvm, sp);
 }
 
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 1279db2eab44..9f8e8cda89e8 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -236,6 +236,15 @@ static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
 	return to_shadow_page(__pa(sptep));
 }
 
+static inline struct kvm_mmu_page *root_to_sp(hpa_t root)
+{
+	/*
+	 * The "root" may be a special root, e.g. a PAE entry, treat it as a
+	 * SPTE to ensure any non-PA bits are dropped.
+	 */
+	return spte_to_child_sp(root);
+}
+
 static inline bool is_mmio_spte(u64 spte)
 {
 	return (spte & shadow_mmio_mask) == shadow_mmio_value &&
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 512163d52194..046ac2589611 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -689,7 +689,7 @@ static inline void tdp_mmu_iter_set_spte(struct kvm *kvm, struct tdp_iter *iter,
 		else
 
 #define tdp_mmu_for_each_pte(_iter, _mmu, _start, _end) \
-	for_each_tdp_pte(_iter, to_shadow_page(_mmu->root.hpa), _start, _end)
+	for_each_tdp_pte(_iter, root_to_sp(_mmu->root.hpa), _start, _end)
 
 /*
  * Yield if the MMU lock is contended or this thread needs to return control
-- 
2.41.0.487.g6d72f3e995-goog