From: Ben Gardon <bgardon@google.com>
Date: Wed, 21 Dec 2022 22:24:12 +0000
Subject: [RFC 08/14] KVM: x86/MMU: Clean up naming of exported Shadow MMU functions
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack,
    Vipin Sharma, Nagareddy Reddy, Ben Gardon
Message-ID: <20221221222418.3307832-9-bgardon@google.com>
In-Reply-To: <20221221222418.3307832-1-bgardon@google.com>
References: <20221221222418.3307832-1-bgardon@google.com>
Change the naming scheme of several functions exported from the shadow
MMU to match the naming scheme used by the TDP MMU: kvm_shadow_mmu_.
More cleanups will follow to convert the remaining functions to a
similar naming scheme, but for now, start with the trivial renames.

No functional change intended.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 arch/x86/kvm/mmu/mmu.c         | 19 ++++++++++---------
 arch/x86/kvm/mmu/paging_tmpl.h |  2 +-
 arch/x86/kvm/mmu/shadow_mmu.c  | 19 ++++++++++---------
 arch/x86/kvm/mmu/shadow_mmu.h  | 17 +++++++++--------
 4 files changed, 30 insertions(+), 27 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4c45a5b63356..bacb519ba7b4 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1129,7 +1129,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 	int r;
 
 	write_lock(&vcpu->kvm->mmu_lock);
-	r = make_mmu_pages_available(vcpu);
+	r = kvm_shadow_mmu_make_pages_available(vcpu);
 	if (r < 0)
 		goto out_unlock;
 
@@ -1204,7 +1204,7 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 	if (is_tdp_mmu(vcpu->arch.mmu))
 		leaf = kvm_tdp_mmu_get_walk(vcpu, addr, sptes, &root);
 	else
-		leaf = get_walk(vcpu, addr, sptes, &root);
+		leaf = kvm_shadow_mmu_get_walk(vcpu, addr, sptes, &root);
 
 	walk_shadow_page_lockless_end(vcpu);
 
@@ -1469,14 +1469,14 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (is_page_fault_stale(vcpu, fault, mmu_seq))
 		goto out_unlock;
 
-	r = make_mmu_pages_available(vcpu);
+	r = kvm_shadow_mmu_make_pages_available(vcpu);
 	if (r)
 		goto out_unlock;
 
 	if (is_tdp_mmu_fault)
 		r = kvm_tdp_mmu_map(vcpu, fault);
 	else
-		r = __direct_map(vcpu, fault);
+		r = kvm_shadow_mmu_direct_map(vcpu, fault);
 
 out_unlock:
 	if (is_tdp_mmu_fault)
@@ -1514,7 +1514,7 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
 		trace_kvm_page_fault(vcpu, fault_address, error_code);
 
 		if (kvm_event_needs_reinjection(vcpu))
-			kvm_mmu_unprotect_page_virt(vcpu, fault_address);
+			kvm_shadow_mmu_unprotect_page_virt(vcpu, fault_address);
 		r = kvm_mmu_page_fault(vcpu, fault_address, error_code, insn,
 				insn_len);
 	} else if (flags & KVM_PV_REASON_PAGE_NOT_PRESENT) {
@@ -2791,7 +2791,8 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
 	 * In order to ensure all vCPUs drop their soon-to-be invalid roots,
 	 * invalidating TDP MMU roots must be done while holding mmu_lock for
 	 * write and in the same critical section as making the reload request,
-	 * e.g. before kvm_zap_obsolete_pages() could drop mmu_lock and yield.
+	 * e.g. before kvm_shadow_mmu_zap_obsolete_pages() could drop mmu_lock
+	 * and yield.
 	 */
 	if (is_tdp_mmu_enabled(kvm))
 		kvm_tdp_mmu_invalidate_all_roots(kvm);
@@ -2806,7 +2807,7 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
 	 */
 	kvm_make_all_cpus_request(kvm, KVM_REQ_MMU_FREE_OBSOLETE_ROOTS);
 
-	kvm_zap_obsolete_pages(kvm);
+	kvm_shadow_mmu_zap_obsolete_pages(kvm);
 
 	write_unlock(&kvm->mmu_lock);
 
@@ -2892,7 +2893,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 
 	kvm_mmu_invalidate_begin(kvm, gfn_start, gfn_end);
 
-	flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end);
+	flush = kvm_shadow_mmu_zap_gfn_range(kvm, gfn_start, gfn_end);
 
 	if (is_tdp_mmu_enabled(kvm)) {
 		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
@@ -3036,7 +3037,7 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 {
 	if (kvm_memslots_have_rmaps(kvm)) {
 		write_lock(&kvm->mmu_lock);
-		kvm_rmap_zap_collapsible_sptes(kvm, slot);
+		kvm_shadow_mmu_zap_collapsible_sptes(kvm, slot);
 		write_unlock(&kvm->mmu_lock);
 	}
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 2e3b2aca64ad..85c0b186cb0a 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -874,7 +874,7 @@ int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	if (is_page_fault_stale(vcpu, fault, mmu_seq))
 		goto out_unlock;
 
-	r = make_mmu_pages_available(vcpu);
+	r = kvm_shadow_mmu_make_pages_available(vcpu);
 	if (r)
 		goto out_unlock;
 	r = FNAME(fetch)(vcpu, fault, &walker);
diff --git a/arch/x86/kvm/mmu/shadow_mmu.c b/arch/x86/kvm/mmu/shadow_mmu.c
index 1259c4a3b140..e36b4d9c67f2 100644
--- a/arch/x86/kvm/mmu/shadow_mmu.c
+++ b/arch/x86/kvm/mmu/shadow_mmu.c
@@ -1965,7 +1965,7 @@ static inline unsigned long kvm_mmu_available_pages(struct kvm *kvm)
 	return 0;
 }
 
-int make_mmu_pages_available(struct kvm_vcpu *vcpu)
+int kvm_shadow_mmu_make_pages_available(struct kvm_vcpu *vcpu)
 {
 	unsigned long avail = kvm_mmu_available_pages(vcpu->kvm);
 
@@ -2029,7 +2029,7 @@ int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn)
 	return r;
 }
 
-int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva)
+int kvm_shadow_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva)
 {
 	gpa_t gpa;
 	int r;
@@ -2319,7 +2319,7 @@ static void direct_pte_prefetch(struct kvm_vcpu *vcpu, u64 *sptep)
 	__direct_pte_prefetch(vcpu, sp, sptep);
 }
 
-int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+int kvm_shadow_mmu_direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_shadow_walk_iterator it;
 	struct kvm_mmu_page *sp;
@@ -2537,7 +2537,7 @@ int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 		return r;
 
 	write_lock(&vcpu->kvm->mmu_lock);
-	r = make_mmu_pages_available(vcpu);
+	r = kvm_shadow_mmu_make_pages_available(vcpu);
 	if (r < 0)
 		goto out_unlock;
 
@@ -2785,7 +2785,8 @@ void kvm_mmu_sync_prev_roots(struct kvm_vcpu *vcpu)
  *
  * Must be called between walk_shadow_page_lockless_{begin,end}.
  */
-int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level)
+int kvm_shadow_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
+			    int *root_level)
 {
 	struct kvm_shadow_walk_iterator iterator;
 	int leaf = -1;
@@ -3091,7 +3092,7 @@ __always_inline bool slot_handle_level_4k(struct kvm *kvm,
 }
 
 #define BATCH_ZAP_PAGES	10
-void kvm_zap_obsolete_pages(struct kvm *kvm)
+void kvm_shadow_mmu_zap_obsolete_pages(struct kvm *kvm)
 {
 	struct kvm_mmu_page *sp, *node;
 	int nr_zapped, batch = 0;
@@ -3152,7 +3153,7 @@ bool kvm_shadow_mmu_has_zapped_obsolete_pages(struct kvm *kvm)
 	return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
 }
 
-bool kvm_rmap_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
+bool kvm_shadow_mmu_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 {
 	const struct kvm_memory_slot *memslot;
 	struct kvm_memslots *slots;
@@ -3404,8 +3405,8 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 	return need_tlb_flush;
 }
 
-void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm,
-				    const struct kvm_memory_slot *slot)
+void kvm_shadow_mmu_zap_collapsible_sptes(struct kvm *kvm,
+					  const struct kvm_memory_slot *slot)
 {
 	/*
 	 * Note, use KVM_MAX_HUGEPAGE_LEVEL - 1 since there's no need to zap
diff --git a/arch/x86/kvm/mmu/shadow_mmu.h b/arch/x86/kvm/mmu/shadow_mmu.h
index 9952aa1e86cf..148cc3593d2b 100644
--- a/arch/x86/kvm/mmu/shadow_mmu.h
+++ b/arch/x86/kvm/mmu/shadow_mmu.h
@@ -60,18 +60,19 @@ bool kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp,
 			      struct list_head *invalid_list);
 void kvm_mmu_commit_zap_page(struct kvm *kvm, struct list_head *invalid_list);
 
-int make_mmu_pages_available(struct kvm_vcpu *vcpu);
+int kvm_shadow_mmu_make_pages_available(struct kvm_vcpu *vcpu);
 
-int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva);
+int kvm_shadow_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva);
 
-int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
+int kvm_shadow_mmu_direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
 u64 *fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, gpa_t gpa, u64 *spte);
 
 hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant, u8 level);
 int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu);
 int mmu_alloc_special_roots(struct kvm_vcpu *vcpu);
 
-int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level);
+int kvm_shadow_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
+			    int *root_level);
 
 void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa, const u8 *new,
 		       int bytes, struct kvm_page_track_notifier_node *node);
@@ -86,8 +87,8 @@ bool slot_handle_level(struct kvm *kvm, const struct kvm_memory_slot *memslot,
 bool slot_handle_level_4k(struct kvm *kvm, const struct kvm_memory_slot *memslot,
 			  slot_level_handler fn, bool flush_on_yield);
 
-void kvm_zap_obsolete_pages(struct kvm *kvm);
-bool kvm_rmap_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end);
+void kvm_shadow_mmu_zap_obsolete_pages(struct kvm *kvm);
+bool kvm_shadow_mmu_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end);
 
 bool slot_rmap_write_protect(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 			     const struct kvm_memory_slot *slot);
@@ -96,8 +97,8 @@ void kvm_shadow_mmu_try_split_huge_pages(struct kvm *kvm,
 					 const struct kvm_memory_slot *slot,
 					 gfn_t start, gfn_t end,
 					 int target_level);
-void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm,
-				    const struct kvm_memory_slot *slot);
+void kvm_shadow_mmu_zap_collapsible_sptes(struct kvm *kvm,
+					  const struct kvm_memory_slot *slot);
 
 bool kvm_shadow_mmu_has_zapped_obsolete_pages(struct kvm *kvm);
 unsigned long kvm_shadow_mmu_shrink_scan(struct kvm *kvm, int pages_to_free);
-- 
2.39.0.314.g84b9a713c41-goog
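
For readers new to the two-MMU split, the readability win of the rename is
easiest to see at a dispatch site, as in the get_mmio_spte() hunk above.
Below is a minimal, self-contained C sketch, not kernel code: the struct,
the is_tdp_mmu() helper, and the stub bodies are simplified stand-ins
invented for illustration; only the two prefixed walker names come from
this patch.

	#include <stdbool.h>
	#include <stdio.h>

	/* Simplified stand-in for the real vCPU state. */
	struct kvm_vcpu {
		bool tdp_mmu;	/* stand-in for the vcpu->arch.mmu TDP check */
	};

	static bool is_tdp_mmu(struct kvm_vcpu *vcpu)
	{
		return vcpu->tdp_mmu;
	}

	/* Stubs standing in for the real walkers; return a fake leaf level. */
	static int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu)
	{
		(void)vcpu;
		return 1;
	}

	static int kvm_shadow_mmu_get_walk(struct kvm_vcpu *vcpu)
	{
		(void)vcpu;
		return 1;
	}

	static int get_walk_leaf(struct kvm_vcpu *vcpu)
	{
		/*
		 * Before the patch the else branch called a bare get_walk(),
		 * which gave no hint that it was the shadow-MMU counterpart
		 * of kvm_tdp_mmu_get_walk(). With matching prefixes, the two
		 * implementations read symmetrically.
		 */
		if (is_tdp_mmu(vcpu))
			return kvm_tdp_mmu_get_walk(vcpu);
		else
			return kvm_shadow_mmu_get_walk(vcpu);
	}

	int main(void)
	{
		struct kvm_vcpu vcpu = { .tdp_mmu = false };

		printf("leaf = %d\n", get_walk_leaf(&vcpu));
		return 0;
	}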