Reply-To: Sean Christopherson
Date: Fri, 30 Sep 2022 23:48:53 +0000
In-Reply-To: <20220930234854.1739690-1-seanjc@google.com>
References: <20220930234854.1739690-1-seanjc@google.com>
Message-ID: <20220930234854.1739690-7-seanjc@google.com>
X-Mailer: git-send-email 2.38.0.rc1.362.ged0d419d3c-goog
Subject: [PATCH v5 6/7] KVM: x86/mmu: Add helper to convert SPTE value to its shadow page
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang, David Matlack, Yan Zhao, Ben Gardon
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

Add a helper to convert a SPTE to its shadow page to deduplicate a
variety of flows and hopefully avoid future bugs, e.g. if KVM attempts
to get the shadow page for a SPTE without dropping high bits.

Opportunistically add a comment in mmu_free_root_page() documenting why
it treats the root HPA as a SPTE.

No functional change intended.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c          | 17 ++++++++++-------
 arch/x86/kvm/mmu/mmu_internal.h | 12 ------------
 arch/x86/kvm/mmu/spte.h         | 17 +++++++++++++++++
 arch/x86/kvm/mmu/tdp_mmu.h      |  2 ++
 4 files changed, 29 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f8a8a9b83755..54005b7f1499 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1816,7 +1816,7 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
 			continue;
 		}

-		child = to_shadow_page(ent & SPTE_BASE_ADDR_MASK);
+		child = spte_to_child_sp(ent);

 		if (child->unsync_children) {
 			if (mmu_pages_add(pvec, child, i))
@@ -2375,7 +2375,7 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
		 * so we should update the spte at this point to get
		 * a new sp with the correct access.
		 */
-		child = to_shadow_page(*sptep & SPTE_BASE_ADDR_MASK);
+		child = spte_to_child_sp(*sptep);
		if (child->role.access == direct_access)
			return;

@@ -2396,7 +2396,7 @@ static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
		if (is_last_spte(pte, sp->role.level)) {
			drop_spte(kvm, spte);
		} else {
-			child = to_shadow_page(pte & SPTE_BASE_ADDR_MASK);
+			child = spte_to_child_sp(pte);
			drop_parent_pte(child, spte);

			/*
@@ -2835,7 +2835,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
			struct kvm_mmu_page *child;
			u64 pte = *sptep;

-			child = to_shadow_page(pte & SPTE_BASE_ADDR_MASK);
+			child = spte_to_child_sp(pte);
			drop_parent_pte(child, sptep);
			flush = true;
		} else if (pfn != spte_to_pfn(*sptep)) {
@@ -3447,7 +3447,11 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
	if (!VALID_PAGE(*root_hpa))
		return;

-	sp = to_shadow_page(*root_hpa & SPTE_BASE_ADDR_MASK);
+	/*
+	 * The "root" may be a special root, e.g. a PAE entry, treat it as a
+	 * SPTE to ensure any non-PA bits are dropped.
+	 */
+	sp = spte_to_child_sp(*root_hpa);
	if (WARN_ON(!sp))
		return;

@@ -3932,8 +3936,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
			hpa_t root = vcpu->arch.mmu->pae_root[i];

			if (IS_VALID_PAE_ROOT(root)) {
-				root &= SPTE_BASE_ADDR_MASK;
-				sp = to_shadow_page(root);
+				sp = spte_to_child_sp(root);
				mmu_sync_children(vcpu, sp, true);
			}
		}
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 22152241bd29..dbaf6755c5a7 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -133,18 +133,6 @@ struct kvm_mmu_page {

 extern struct kmem_cache *mmu_page_header_cache;

-static inline struct kvm_mmu_page *to_shadow_page(hpa_t shadow_page)
-{
-	struct page *page = pfn_to_page(shadow_page >> PAGE_SHIFT);
-
-	return (struct kvm_mmu_page *)page_private(page);
-}
-
-static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
-{
-	return to_shadow_page(__pa(sptep));
-}
-
 static inline int kvm_mmu_role_as_id(union kvm_mmu_page_role role)
 {
	return role.smm ? 1 : 0;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 7670c13ce251..7e5343339b90 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -219,6 +219,23 @@ static inline int spte_index(u64 *sptep)
  */
 extern u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;

+static inline struct kvm_mmu_page *to_shadow_page(hpa_t shadow_page)
+{
+	struct page *page = pfn_to_page((shadow_page) >> PAGE_SHIFT);
+
+	return (struct kvm_mmu_page *)page_private(page);
+}
+
+static inline struct kvm_mmu_page *spte_to_child_sp(u64 spte)
+{
+	return to_shadow_page(spte & SPTE_BASE_ADDR_MASK);
+}
+
+static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
+{
+	return to_shadow_page(__pa(sptep));
+}
+
 static inline bool is_mmio_spte(u64 spte)
 {
	return (spte & shadow_mmio_mask) == shadow_mmio_value &&
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index c163f7cc23ca..d3714200b932 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -5,6 +5,8 @@

 #include <linux/kvm_host.h>

+#include "spte.h"
+
 hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);

 __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
--
2.38.0.rc1.362.ged0d419d3c-goog