From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Vipin Sharma, Ben Gardon
Date: Tue, 6 Dec 2022 17:35:55 +0000
Message-ID: <20221206173601.549281-2-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>
Subject: [PATCH 1/7] KVM: x86/MMU: Move pte_list operations to rmap.c

In the interest of eventually
splitting the Shadow MMU out of mmu.c, start by moving some of the
operations for manipulating pte_lists out of mmu.c and into a new pair of
files: rmap.c and rmap.h.

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/Makefile           |   2 +-
 arch/x86/kvm/debugfs.c          |   1 +
 arch/x86/kvm/mmu/mmu.c          | 152 +-------------------------------
 arch/x86/kvm/mmu/mmu_internal.h |   1 -
 arch/x86/kvm/mmu/rmap.c         | 141 +++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/rmap.h         |  34 +++++++
 6 files changed, 179 insertions(+), 152 deletions(-)
 create mode 100644 arch/x86/kvm/mmu/rmap.c
 create mode 100644 arch/x86/kvm/mmu/rmap.h

diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 80e3fe184d17..9f766eebeddf 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -12,7 +12,7 @@ include $(srctree)/virt/kvm/Makefile.kvm
 kvm-y			+= x86.o emulate.o i8259.o irq.o lapic.o \
 			   i8254.o ioapic.o irq_comm.o cpuid.o pmu.o mtrr.o \
 			   hyperv.o debugfs.o mmu/mmu.o mmu/page_track.o \
-			   mmu/spte.o
+			   mmu/spte.o mmu/rmap.o

 ifdef CONFIG_HYPERV
 kvm-y			+= kvm_onhyperv.o
diff --git a/arch/x86/kvm/debugfs.c b/arch/x86/kvm/debugfs.c
index c1390357126a..29f692ecd6f3 100644
--- a/arch/x86/kvm/debugfs.c
+++ b/arch/x86/kvm/debugfs.c
@@ -9,6 +9,7 @@
 #include "lapic.h"
 #include "mmu.h"
 #include "mmu/mmu_internal.h"
+#include "mmu/rmap.h"

 static int vcpu_get_timer_advance_ns(void *data, u64 *val)
 {
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4736d7849c60..90b3735d6064 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -26,6 +26,7 @@
 #include "kvm_emulate.h"
 #include "cpuid.h"
 #include "spte.h"
+#include "rmap.h"

 #include
 #include
@@ -112,24 +113,6 @@ module_param(dbg, bool, 0644);

 #include

-/* make pte_list_desc fit well in cache lines */
-#define PTE_LIST_EXT 14
-
-/*
- * Slight optimization of cacheline layout, by putting `more' and `spte_count'
- * at the start; then accessing it will only use one single cacheline for
- * either full (entries==PTE_LIST_EXT) case or entries<=6.
- */
-struct pte_list_desc {
-	struct pte_list_desc *more;
-	/*
-	 * Stores number of entries stored in the pte_list_desc. No need to be
-	 * u64 but just for easier alignment. When PTE_LIST_EXT, means full.
-	 */
-	u64 spte_count;
-	u64 *sptes[PTE_LIST_EXT];
-};
-
 struct kvm_shadow_walk_iterator {
 	u64 addr;
 	hpa_t shadow_addr;
@@ -155,7 +138,6 @@ struct kvm_shadow_walk_iterator {
 	({ spte = mmu_spte_get_lockless(_walker.sptep); 1; });	\
 	__shadow_walk_next(&(_walker), spte))

-static struct kmem_cache *pte_list_desc_cache;
 struct kmem_cache *mmu_page_header_cache;
 static struct percpu_counter kvm_total_used_mmu_pages;

@@ -674,11 +656,6 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
 }

-static void mmu_free_pte_list_desc(struct pte_list_desc *pte_list_desc)
-{
-	kmem_cache_free(pte_list_desc_cache, pte_list_desc);
-}
-
 static bool sp_has_gptes(struct kvm_mmu_page *sp);

 static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
@@ -878,111 +855,6 @@ gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t gfn,
 	return slot;
 }

-/*
- * About rmap_head encoding:
- *
- * If the bit zero of rmap_head->val is clear, then it points to the only spte
- * in this rmap chain. Otherwise, (rmap_head->val & ~1) points to a struct
- * pte_list_desc containing more mappings.
- */
-
-/*
- * Returns the number of pointers in the rmap chain, not counting the new one.
- */
-static int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
-			struct kvm_rmap_head *rmap_head)
-{
-	struct pte_list_desc *desc;
-	int count = 0;
-
-	if (!rmap_head->val) {
-		rmap_printk("%p %llx 0->1\n", spte, *spte);
-		rmap_head->val = (unsigned long)spte;
-	} else if (!(rmap_head->val & 1)) {
-		rmap_printk("%p %llx 1->many\n", spte, *spte);
-		desc = kvm_mmu_memory_cache_alloc(cache);
-		desc->sptes[0] = (u64 *)rmap_head->val;
-		desc->sptes[1] = spte;
-		desc->spte_count = 2;
-		rmap_head->val = (unsigned long)desc | 1;
-		++count;
-	} else {
-		rmap_printk("%p %llx many->many\n", spte, *spte);
-		desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
-		while (desc->spte_count == PTE_LIST_EXT) {
-			count += PTE_LIST_EXT;
-			if (!desc->more) {
-				desc->more = kvm_mmu_memory_cache_alloc(cache);
-				desc = desc->more;
-				desc->spte_count = 0;
-				break;
-			}
-			desc = desc->more;
-		}
-		count += desc->spte_count;
-		desc->sptes[desc->spte_count++] = spte;
-	}
-	return count;
-}
-
-static void
-pte_list_desc_remove_entry(struct kvm_rmap_head *rmap_head,
-			   struct pte_list_desc *desc, int i,
-			   struct pte_list_desc *prev_desc)
-{
-	int j = desc->spte_count - 1;
-
-	desc->sptes[i] = desc->sptes[j];
-	desc->sptes[j] = NULL;
-	desc->spte_count--;
-	if (desc->spte_count)
-		return;
-	if (!prev_desc && !desc->more)
-		rmap_head->val = 0;
-	else
-		if (prev_desc)
-			prev_desc->more = desc->more;
-		else
-			rmap_head->val = (unsigned long)desc->more | 1;
-	mmu_free_pte_list_desc(desc);
-}
-
-static void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head)
-{
-	struct pte_list_desc *desc;
-	struct pte_list_desc *prev_desc;
-	int i;
-
-	if (!rmap_head->val) {
-		pr_err("%s: %p 0->BUG\n", __func__, spte);
-		BUG();
-	} else if (!(rmap_head->val & 1)) {
-		rmap_printk("%p 1->0\n", spte);
-		if ((u64 *)rmap_head->val != spte) {
-			pr_err("%s: %p 1->BUG\n", __func__, spte);
-			BUG();
-		}
-		rmap_head->val = 0;
-	} else {
-		rmap_printk("%p many->many\n", spte);
-		desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
-		prev_desc = NULL;
-		while (desc) {
-			for (i = 0; i < desc->spte_count; ++i) {
-				if (desc->sptes[i] == spte) {
-					pte_list_desc_remove_entry(rmap_head,
-							desc, i, prev_desc);
-					return;
-				}
-			}
-			prev_desc = desc;
-			desc = desc->more;
-		}
-		pr_err("%s: %p many->many\n", __func__, spte);
-		BUG();
-	}
-}
-
 static void kvm_zap_one_rmap_spte(struct kvm *kvm,
 				  struct kvm_rmap_head *rmap_head, u64 *sptep)
 {
@@ -1011,7 +883,7 @@ static bool kvm_zap_all_rmap_sptes(struct kvm *kvm,
 		for (i = 0; i < desc->spte_count; i++)
 			mmu_spte_clear_track_bits(kvm, desc->sptes[i]);
 		next = desc->more;
-		mmu_free_pte_list_desc(desc);
+		free_pte_list_desc(desc);
 	}
 out:
 	/* rmap_head is meaningless now, remember to reset it */
@@ -1019,26 +891,6 @@ static bool kvm_zap_all_rmap_sptes(struct kvm *kvm,
 	return true;
 }

-unsigned int pte_list_count(struct kvm_rmap_head *rmap_head)
-{
-	struct pte_list_desc *desc;
-	unsigned int count = 0;
-
-	if (!rmap_head->val)
-		return 0;
-	else if (!(rmap_head->val & 1))
-		return 1;
-
-	desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
-
-	while (desc) {
-		count += desc->spte_count;
-		desc = desc->more;
-	}
-
-	return count;
-}
-
 static struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
 					 const struct kvm_memory_slot *slot)
 {
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index dbaf6755c5a7..cd1c8f32269d 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++
b/arch/x86/kvm/mmu/mmu_internal.h
@@ -166,7 +166,6 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 				    int min_level);
 void kvm_flush_remote_tlbs_with_address(struct kvm *kvm, u64 start_gfn,
 					u64 pages);
-unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);

 extern int nx_huge_pages;
 static inline bool is_nx_huge_page_enabled(struct kvm *kvm)
diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c
new file mode 100644
index 000000000000..daa99dee0709
--- /dev/null
+++ b/arch/x86/kvm/mmu/rmap.c
@@ -0,0 +1,141 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "mmu.h"
+#include "mmu_internal.h"
+#include "mmutrace.h"
+#include "rmap.h"
+#include "spte.h"
+
+#include
+#include
+
+/*
+ * About rmap_head encoding:
+ *
+ * If the bit zero of rmap_head->val is clear, then it points to the only spte
+ * in this rmap chain. Otherwise, (rmap_head->val & ~1) points to a struct
+ * pte_list_desc containing more mappings.
+ */
+
+/*
+ * Returns the number of pointers in the rmap chain, not counting the new one.
+ */
+int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
+		 struct kvm_rmap_head *rmap_head)
+{
+	struct pte_list_desc *desc;
+	int count = 0;
+
+	if (!rmap_head->val) {
+		rmap_printk("%p %llx 0->1\n", spte, *spte);
+		rmap_head->val = (unsigned long)spte;
+	} else if (!(rmap_head->val & 1)) {
+		rmap_printk("%p %llx 1->many\n", spte, *spte);
+		desc = kvm_mmu_memory_cache_alloc(cache);
+		desc->sptes[0] = (u64 *)rmap_head->val;
+		desc->sptes[1] = spte;
+		desc->spte_count = 2;
+		rmap_head->val = (unsigned long)desc | 1;
+		++count;
+	} else {
+		rmap_printk("%p %llx many->many\n", spte, *spte);
+		desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
+		while (desc->spte_count == PTE_LIST_EXT) {
+			count += PTE_LIST_EXT;
+			if (!desc->more) {
+				desc->more = kvm_mmu_memory_cache_alloc(cache);
+				desc = desc->more;
+				desc->spte_count = 0;
+				break;
+			}
+			desc = desc->more;
+		}
+		count += desc->spte_count;
+		desc->sptes[desc->spte_count++] = spte;
+	}
+	return count;
+}
+
+void free_pte_list_desc(struct pte_list_desc *pte_list_desc)
+{
+	kmem_cache_free(pte_list_desc_cache, pte_list_desc);
+}
+
+static void
+pte_list_desc_remove_entry(struct kvm_rmap_head *rmap_head,
+			   struct pte_list_desc *desc, int i,
+			   struct pte_list_desc *prev_desc)
+{
+	int j = desc->spte_count - 1;
+
+	desc->sptes[i] = desc->sptes[j];
+	desc->sptes[j] = NULL;
+	desc->spte_count--;
+	if (desc->spte_count)
+		return;
+	if (!prev_desc && !desc->more)
+		rmap_head->val = 0;
+	else
+		if (prev_desc)
+			prev_desc->more = desc->more;
+		else
+			rmap_head->val = (unsigned long)desc->more | 1;
+	free_pte_list_desc(desc);
+}
+
+void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head)
+{
+	struct pte_list_desc *desc;
+	struct pte_list_desc *prev_desc;
+	int i;
+
+	if (!rmap_head->val) {
+		pr_err("%s: %p 0->BUG\n", __func__, spte);
+		BUG();
+	} else if (!(rmap_head->val & 1)) {
+		rmap_printk("%p 1->0\n", spte);
+		if ((u64 *)rmap_head->val != spte) {
+			pr_err("%s: %p 1->BUG\n", __func__, spte);
+			BUG();
+		}
+		rmap_head->val = 0;
+	} else {
+		rmap_printk("%p many->many\n", spte);
+		desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
+		prev_desc = NULL;
+		while (desc) {
+			for (i = 0; i < desc->spte_count; ++i) {
+				if (desc->sptes[i] == spte) {
+					pte_list_desc_remove_entry(rmap_head,
+							desc, i, prev_desc);
+					return;
+				}
+			}
+			prev_desc = desc;
+			desc = desc->more;
+		}
+		pr_err("%s: %p many->many\n", __func__, spte);
+		BUG();
+	}
+}
+
+unsigned int pte_list_count(struct kvm_rmap_head *rmap_head)
+{
+	struct pte_list_desc *desc;
+	unsigned int count = 0;
+
+	if (!rmap_head->val)
+		return 0;
+	else if (!(rmap_head->val & 1))
+		return 1;
+
+	desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
+
+	while (desc) {
+		count += desc->spte_count;
+		desc = desc->more;
+	}
+
+	return count;
+}
+
diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h
new file mode 100644
index 000000000000..059765b6e066
--- /dev/null
+++ b/arch/x86/kvm/mmu/rmap.h
@@ -0,0 +1,34 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#ifndef __KVM_X86_MMU_RMAP_H
+#define __KVM_X86_MMU_RMAP_H
+
+#include
+
+/* make pte_list_desc fit well in cache lines */
+#define PTE_LIST_EXT 14
+
+/*
+ * Slight optimization of cacheline layout, by putting `more' and `spte_count'
+ * at the start; then accessing it will only use one single cacheline for
+ * either full (entries==PTE_LIST_EXT) case or entries<=6.
+ */
+struct pte_list_desc {
+	struct pte_list_desc *more;
+	/*
+	 * Stores number of entries stored in the pte_list_desc. No need to be
+	 * u64 but just for easier alignment. When PTE_LIST_EXT, means full.
+	 */
+	u64 spte_count;
+	u64 *sptes[PTE_LIST_EXT];
+};
+
+static struct kmem_cache *pte_list_desc_cache;
+
+int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
+		 struct kvm_rmap_head *rmap_head);
+void free_pte_list_desc(struct pte_list_desc *pte_list_desc);
+void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head);
+unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);
+
+#endif /* __KVM_X86_MMU_RMAP_H */
--
2.39.0.rc0.267.gcb52ba06e7-goog
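[As an aside for readers new to this code: the tagged-pointer scheme in the
"About rmap_head encoding" comment above can be modeled with a small
stand-alone sketch. The struct and helper names below only mirror the
kernel's; this is a simplified user-space illustration, not kernel code.]

#include <stdint.h>
#include <stdio.h>

struct rmap_head { unsigned long val; };
struct pte_list_desc;	/* stand-in for the real descriptor type */

/* Non-zero with bit 0 clear: val is a direct pointer to the one spte. */
static uint64_t *single_spte(struct rmap_head *h)
{
	return (h->val && !(h->val & 1)) ? (uint64_t *)h->val : NULL;
}

/* Bit 0 set: val minus the tag points to a pte_list_desc chain. */
static struct pte_list_desc *desc_chain(struct rmap_head *h)
{
	return (h->val & 1) ? (struct pte_list_desc *)(h->val & ~1ul) : NULL;
}

int main(void)
{
	uint64_t spte = 0;
	struct rmap_head head = { .val = (unsigned long)&spte };

	/* One mapping: stored directly, no descriptor allocated. */
	printf("single spte at %p, chain %p\n",
	       (void *)single_spte(&head), (void *)desc_chain(&head));
	return 0;
}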
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Vipin Sharma, Ben Gardon
Date: Tue, 6 Dec 2022 17:35:56 +0000
Message-ID: <20221206173601.549281-3-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>
Subject: [PATCH 2/7] KVM: x86/MMU: Move rmap_iterator to rmap.h

In continuing to factor the rmap out of mmu.c, move the rmap_iterator
and associated functions and macros into rmap.(c|h).

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c  | 76 -----------------------------------------
 arch/x86/kvm/mmu/rmap.c | 61 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/rmap.h | 18 ++++++++++
 3 files changed, 79 insertions(+), 76 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 90b3735d6064..c3a7f443a213 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -932,82 +932,6 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
 	pte_list_remove(spte, rmap_head);
 }

-/*
- * Used by the following functions to iterate through the sptes linked by a
- * rmap. All fields are private and not assumed to be used outside.
- */
-struct rmap_iterator {
-	/* private fields */
-	struct pte_list_desc *desc;	/* holds the sptep if not NULL */
-	int pos;			/* index of the sptep */
-};
-
-/*
- * Iteration must be started by this function. This should also be used after
- * removing/dropping sptes from the rmap link because in such cases the
- * information in the iterator may not be valid.
- *
- * Returns sptep if found, NULL otherwise.
- */
-static u64 *rmap_get_first(struct kvm_rmap_head *rmap_head,
-			   struct rmap_iterator *iter)
-{
-	u64 *sptep;
-
-	if (!rmap_head->val)
-		return NULL;
-
-	if (!(rmap_head->val & 1)) {
-		iter->desc = NULL;
-		sptep = (u64 *)rmap_head->val;
-		goto out;
-	}
-
-	iter->desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
-	iter->pos = 0;
-	sptep = iter->desc->sptes[iter->pos];
-out:
-	BUG_ON(!is_shadow_present_pte(*sptep));
-	return sptep;
-}
-
-/*
- * Must be used with a valid iterator: e.g. after rmap_get_first().
- *
- * Returns sptep if found, NULL otherwise.
- */
-static u64 *rmap_get_next(struct rmap_iterator *iter)
-{
-	u64 *sptep;
-
-	if (iter->desc) {
-		if (iter->pos < PTE_LIST_EXT - 1) {
-			++iter->pos;
-			sptep = iter->desc->sptes[iter->pos];
-			if (sptep)
-				goto out;
-		}
-
-		iter->desc = iter->desc->more;
-
-		if (iter->desc) {
-			iter->pos = 0;
-			/* desc->sptes[0] cannot be NULL */
-			sptep = iter->desc->sptes[iter->pos];
-			goto out;
-		}
-	}
-
-	return NULL;
-out:
-	BUG_ON(!is_shadow_present_pte(*sptep));
-	return sptep;
-}
-
-#define for_each_rmap_spte(_rmap_head_, _iter_, _spte_)			\
-	for (_spte_ = rmap_get_first(_rmap_head_, _iter_);		\
-	     _spte_; _spte_ = rmap_get_next(_iter_))
-
 static void drop_spte(struct kvm *kvm, u64 *sptep)
 {
 	u64 old_spte = mmu_spte_clear_track_bits(kvm, sptep);
diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c
index daa99dee0709..c3bad366b627 100644
--- a/arch/x86/kvm/mmu/rmap.c
+++ b/arch/x86/kvm/mmu/rmap.c
@@ -139,3 +139,64 @@ unsigned int pte_list_count(struct kvm_rmap_head *rmap_head)
 	return count;
 }

+/*
+ * Iteration must be started by this function. This should also be used after
+ * removing/dropping sptes from the rmap link because in such cases the
+ * information in the iterator may not be valid.
+ *
+ * Returns sptep if found, NULL otherwise.
+ */
+u64 *rmap_get_first(struct kvm_rmap_head *rmap_head, struct rmap_iterator *iter)
+{
+	u64 *sptep;
+
+	if (!rmap_head->val)
+		return NULL;
+
+	if (!(rmap_head->val & 1)) {
+		iter->desc = NULL;
+		sptep = (u64 *)rmap_head->val;
+		goto out;
+	}
+
+	iter->desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
+	iter->pos = 0;
+	sptep = iter->desc->sptes[iter->pos];
+out:
+	BUG_ON(!is_shadow_present_pte(*sptep));
+	return sptep;
+}
+
+/*
+ * Must be used with a valid iterator: e.g. after rmap_get_first().
+ *
+ * Returns sptep if found, NULL otherwise.
+ */
+u64 *rmap_get_next(struct rmap_iterator *iter)
+{
+	u64 *sptep;
+
+	if (iter->desc) {
+		if (iter->pos < PTE_LIST_EXT - 1) {
+			++iter->pos;
+			sptep = iter->desc->sptes[iter->pos];
+			if (sptep)
+				goto out;
+		}
+
+		iter->desc = iter->desc->more;
+
+		if (iter->desc) {
+			iter->pos = 0;
+			/* desc->sptes[0] cannot be NULL */
+			sptep = iter->desc->sptes[iter->pos];
+			goto out;
+		}
+	}
+
+	return NULL;
+out:
+	BUG_ON(!is_shadow_present_pte(*sptep));
+	return sptep;
+}
+
diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h
index 059765b6e066..13b265f3a95e 100644
--- a/arch/x86/kvm/mmu/rmap.h
+++ b/arch/x86/kvm/mmu/rmap.h
@@ -31,4 +31,22 @@ void free_pte_list_desc(struct pte_list_desc *pte_list_desc);
 void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head);
 unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);

+/*
+ * Used by the following functions to iterate through the sptes linked by a
+ * rmap. All fields are private and not assumed to be used outside.
+ */
+struct rmap_iterator {
+	/* private fields */
+	struct pte_list_desc *desc;	/* holds the sptep if not NULL */
+	int pos;			/* index of the sptep */
+};
+
+u64 *rmap_get_first(struct kvm_rmap_head *rmap_head,
+		    struct rmap_iterator *iter);
+u64 *rmap_get_next(struct rmap_iterator *iter);
+
+#define for_each_rmap_spte(_rmap_head_, _iter_, _spte_)			\
+	for (_spte_ = rmap_get_first(_rmap_head_, _iter_);		\
+	     _spte_; _spte_ = rmap_get_next(_iter_))
+
 #endif /* __KVM_X86_MMU_RMAP_H */
--
2.39.0.rc0.267.gcb52ba06e7-goog
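[With the iterator exported, callers in mmu.c keep the shape they had before
the move. A sketch of a typical user, modeled on the existing
write-protection path; spte_write_protect() here stands in for whatever
per-spte work a real caller does and is assumed, not part of this series:]

static bool rmap_write_protect(struct kvm_rmap_head *rmap_head,
			       bool pt_protect)
{
	u64 *sptep;
	struct rmap_iterator iter;
	bool flush = false;

	/* Walks the single-spte case and the pte_list_desc chain alike. */
	for_each_rmap_spte(rmap_head, &iter, sptep)
		flush |= spte_write_protect(sptep, pt_protect);

	return flush;
}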
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Vipin Sharma, Ben Gardon
Date: Tue, 6 Dec 2022 17:35:57 +0000
Message-ID: <20221206173601.549281-4-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>
Subject: [PATCH 3/7] KVM: x86/MMU: Move gfn_to_rmap() to rmap.c

Move gfn_to_rmap() to rmap.c. While the function is not part of
manipulating the rmap, it is the main way that the MMU gets pointers to
the rmaps.

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c  | 9 ---------
 arch/x86/kvm/mmu/rmap.c | 8 ++++++++
 arch/x86/kvm/mmu/rmap.h | 2 ++
 3 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c3a7f443a213..f8d7201210c8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -891,15 +891,6 @@ static bool kvm_zap_all_rmap_sptes(struct kvm *kvm,
 	return true;
 }

-static struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
-					 const struct kvm_memory_slot *slot)
-{
-	unsigned long idx;
-
-	idx = gfn_to_index(gfn, slot->base_gfn, level);
-	return &slot->arch.rmap[level - PG_LEVEL_4K][idx];
-}
-
 static bool rmap_can_add(struct kvm_vcpu *vcpu)
 {
 	struct kvm_mmu_memory_cache *mc;
diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c
index c3bad366b627..272e89147d96 100644
--- a/arch/x86/kvm/mmu/rmap.c
+++ b/arch/x86/kvm/mmu/rmap.c
@@ -200,3 +200,11 @@ u64 *rmap_get_next(struct rmap_iterator *iter)
 	return sptep;
 }

+struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
+				  const struct kvm_memory_slot *slot)
+{
+	unsigned long idx;
+
+	idx = gfn_to_index(gfn, slot->base_gfn, level);
+	return &slot->arch.rmap[level - PG_LEVEL_4K][idx];
+}
diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h
index 13b265f3a95e..45732eda57e5 100644
--- a/arch/x86/kvm/mmu/rmap.h
+++ b/arch/x86/kvm/mmu/rmap.h
@@ -49,4 +49,6 @@ u64 *rmap_get_next(struct rmap_iterator *iter);
 	for (_spte_ = rmap_get_first(_rmap_head_, _iter_);		\
 	     _spte_; _spte_ = rmap_get_next(_iter_))

+struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
+				  const struct kvm_memory_slot *slot);
 #endif /* __KVM_X86_MMU_RMAP_H */
--
2.39.0.rc0.267.gcb52ba06e7-goog
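[The index math behind gfn_to_rmap() is worth spelling out. gfn_to_index()
is a pre-existing KVM helper, not part of this series; the model below
assumes it computes the gfn's offset from the slot base at the given
page-size level, with 9 more bits of shift per level above 4K:]

#include <stdio.h>

/* User-space model: level 1 = 4K, 2 = 2M, 3 = 1G. */
static unsigned long gfn_to_index(unsigned long gfn, unsigned long base_gfn,
				  int level)
{
	unsigned int shift = 9 * (level - 1);

	return (gfn >> shift) - (base_gfn >> shift);
}

int main(void)
{
	/*
	 * For a slot based at gfn 0x1000, at the 2M level gfn 0x1200 falls
	 * in bucket (0x1200 >> 9) - (0x1000 >> 9) = 9 - 8 = 1, i.e. entry
	 * slot->arch.rmap[level - PG_LEVEL_4K][1] in the real code.
	 */
	printf("2M-level rmap index: %lu\n", gfn_to_index(0x1200, 0x1000, 2));
	return 0;
}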
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Vipin Sharma, Ben Gardon
Date: Tue, 6 Dec 2022 17:35:58 +0000
Message-ID: <20221206173601.549281-5-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>
Subject: [PATCH 4/7] KVM: x86/MMU: Move rmap_can_add() and rmap_remove() to rmap.c

Move the functions to check if an entry can be added to an rmap and for
removing elements from an rmap to rmap.(c|h).

No functional change intended.
Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c          | 34 +--------------------------------
 arch/x86/kvm/mmu/mmu_internal.h |  1 +
 arch/x86/kvm/mmu/rmap.c         | 32 +++++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/rmap.h         |  3 +++
 4 files changed, 37 insertions(+), 33 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f8d7201210c8..52e487d89d54 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -658,7 +658,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)

 static bool sp_has_gptes(struct kvm_mmu_page *sp);

-static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
+gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
 {
 	if (sp->role.passthrough)
 		return sp->gfn;
@@ -891,38 +891,6 @@ static bool kvm_zap_all_rmap_sptes(struct kvm *kvm,
 	return true;
 }

-static bool rmap_can_add(struct kvm_vcpu *vcpu)
-{
-	struct kvm_mmu_memory_cache *mc;
-
-	mc = &vcpu->arch.mmu_pte_list_desc_cache;
-	return kvm_mmu_memory_cache_nr_free_objects(mc);
-}
-
-static void rmap_remove(struct kvm *kvm, u64 *spte)
-{
-	struct kvm_memslots *slots;
-	struct kvm_memory_slot *slot;
-	struct kvm_mmu_page *sp;
-	gfn_t gfn;
-	struct kvm_rmap_head *rmap_head;
-
-	sp = sptep_to_sp(spte);
-	gfn = kvm_mmu_page_get_gfn(sp, spte_index(spte));
-
-	/*
-	 * Unlike rmap_add, rmap_remove does not run in the context of a vCPU
-	 * so we have to determine which memslots to use based on context
-	 * information in sp->role.
-	 */
-	slots = kvm_memslots_for_spte_role(kvm, sp->role);
-
-	slot = __gfn_to_memslot(slots, gfn);
-	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
-
-	pte_list_remove(spte, rmap_head);
-}
-
 static void drop_spte(struct kvm *kvm, u64 *sptep)
 {
 	u64 old_spte = mmu_spte_clear_track_bits(kvm, sptep);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index cd1c8f32269d..3de703c2a5d4 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -318,4 +318,5 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);

+gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index);
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c
index 272e89147d96..6833676aa9ea 100644
--- a/arch/x86/kvm/mmu/rmap.c
+++ b/arch/x86/kvm/mmu/rmap.c
@@ -208,3 +208,35 @@ struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
 	idx = gfn_to_index(gfn, slot->base_gfn, level);
 	return &slot->arch.rmap[level - PG_LEVEL_4K][idx];
 }
+
+bool rmap_can_add(struct kvm_vcpu *vcpu)
+{
+	struct kvm_mmu_memory_cache *mc;
+
+	mc = &vcpu->arch.mmu_pte_list_desc_cache;
+	return kvm_mmu_memory_cache_nr_free_objects(mc);
+}
+
+void rmap_remove(struct kvm *kvm, u64 *spte)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *slot;
+	struct kvm_mmu_page *sp;
+	gfn_t gfn;
+	struct kvm_rmap_head *rmap_head;
+
+	sp = sptep_to_sp(spte);
+	gfn = kvm_mmu_page_get_gfn(sp, spte_index(spte));
+
+	/*
+	 * Unlike rmap_add, rmap_remove does not run in the context of a vCPU
+	 * so we have to determine which memslots to use based on context
+	 * information in sp->role.
+	 */
+	slots = kvm_memslots_for_spte_role(kvm, sp->role);
+
+	slot = __gfn_to_memslot(slots, gfn);
+	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
+
+	pte_list_remove(spte, rmap_head);
+}
diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h
index 45732eda57e5..81df186ba3c3 100644
--- a/arch/x86/kvm/mmu/rmap.h
+++ b/arch/x86/kvm/mmu/rmap.h
@@ -51,4 +51,7 @@ u64 *rmap_get_next(struct rmap_iterator *iter);

 struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
 				  const struct kvm_memory_slot *slot);
+
+bool rmap_can_add(struct kvm_vcpu *vcpu);
+void rmap_remove(struct kvm *kvm, u64 *spte);
 #endif /* __KVM_X86_MMU_RMAP_H */
--
2.39.0.rc0.267.gcb52ba06e7-goog
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Vipin Sharma, Ben Gardon
Date: Tue, 6 Dec 2022 17:35:59 +0000
Message-ID: <20221206173601.549281-6-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>
Subject: [PATCH 5/7] KVM: x86/MMU: Move the rmap walk iterator out of mmu.c

Move slot_rmap_walk_iterator and its associated functions out of mmu.c
to rmap.(c|h).

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c  | 73 -----------------------------------------
 arch/x86/kvm/mmu/rmap.c | 43 ++++++++++++++++++++++++
 arch/x86/kvm/mmu/rmap.h | 36 ++++++++++++++++++++
 3 files changed, 79 insertions(+), 73 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 52e487d89d54..88da2abc2375 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1198,79 +1198,6 @@ static bool kvm_set_pte_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 	return need_flush;
 }

-struct slot_rmap_walk_iterator {
-	/* input fields. */
-	const struct kvm_memory_slot *slot;
-	gfn_t start_gfn;
-	gfn_t end_gfn;
-	int start_level;
-	int end_level;
-
-	/* output fields. */
-	gfn_t gfn;
-	struct kvm_rmap_head *rmap;
-	int level;
-
-	/* private field. */
-	struct kvm_rmap_head *end_rmap;
-};
-
-static void
-rmap_walk_init_level(struct slot_rmap_walk_iterator *iterator, int level)
-{
-	iterator->level = level;
-	iterator->gfn = iterator->start_gfn;
-	iterator->rmap = gfn_to_rmap(iterator->gfn, level, iterator->slot);
-	iterator->end_rmap = gfn_to_rmap(iterator->end_gfn, level, iterator->slot);
-}
-
-static void
-slot_rmap_walk_init(struct slot_rmap_walk_iterator *iterator,
-		    const struct kvm_memory_slot *slot, int start_level,
-		    int end_level, gfn_t start_gfn, gfn_t end_gfn)
-{
-	iterator->slot = slot;
-	iterator->start_level = start_level;
-	iterator->end_level = end_level;
-	iterator->start_gfn = start_gfn;
-	iterator->end_gfn = end_gfn;
-
-	rmap_walk_init_level(iterator, iterator->start_level);
-}
-
-static bool slot_rmap_walk_okay(struct slot_rmap_walk_iterator *iterator)
-{
-	return !!iterator->rmap;
-}
-
-static void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator)
-{
-	while (++iterator->rmap <= iterator->end_rmap) {
-		iterator->gfn += (1UL << KVM_HPAGE_GFN_SHIFT(iterator->level));
-
-		if (iterator->rmap->val)
-			return;
-	}
-
-	if (++iterator->level > iterator->end_level) {
-		iterator->rmap = NULL;
-		return;
-	}
-
-	rmap_walk_init_level(iterator, iterator->level);
-}
-
-#define for_each_slot_rmap_range(_slot_, _start_level_, _end_level_,	\
-				 _start_gfn, _end_gfn, _iter_)		\
-	for (slot_rmap_walk_init(_iter_, _slot_, _start_level_,	\
-				 _end_level_, _start_gfn, _end_gfn);	\
-	     slot_rmap_walk_okay(_iter_);				\
-	     slot_rmap_walk_next(_iter_))
-
-typedef bool (*rmap_handler_t)(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
-			       struct kvm_memory_slot *slot, gfn_t gfn,
-			       int level, pte_t pte);
-
 static __always_inline bool kvm_handle_gfn_range(struct kvm *kvm,
 						 struct kvm_gfn_range *range,
 						 rmap_handler_t handler)
diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c
index 6833676aa9ea..91af5b32cffb 100644
--- a/arch/x86/kvm/mmu/rmap.c
+++ b/arch/x86/kvm/mmu/rmap.c
@@ -240,3 +240,46 @@ void rmap_remove(struct kvm *kvm, u64 *spte)

 	pte_list_remove(spte, rmap_head);
 }
+
+void rmap_walk_init_level(struct slot_rmap_walk_iterator *iterator, int level)
+{
+	iterator->level = level;
+	iterator->gfn = iterator->start_gfn;
+	iterator->rmap = gfn_to_rmap(iterator->gfn, level, iterator->slot);
+	iterator->end_rmap = gfn_to_rmap(iterator->end_gfn, level, iterator->slot);
+}
+
+void slot_rmap_walk_init(struct slot_rmap_walk_iterator *iterator,
+			 const struct kvm_memory_slot *slot, int start_level,
+			 int end_level, gfn_t start_gfn, gfn_t end_gfn)
+{
+	iterator->slot = slot;
+	iterator->start_level = start_level;
+	iterator->end_level = end_level;
+	iterator->start_gfn = start_gfn;
+	iterator->end_gfn = end_gfn;
+
+	rmap_walk_init_level(iterator, iterator->start_level);
+}
+
+bool slot_rmap_walk_okay(struct slot_rmap_walk_iterator *iterator)
+{
+	return !!iterator->rmap;
+}
+
+void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator)
+{
+	while (++iterator->rmap <= iterator->end_rmap) {
+		iterator->gfn += (1UL << KVM_HPAGE_GFN_SHIFT(iterator->level));
+
+		if (iterator->rmap->val)
+			return;
+	}
+
+	if (++iterator->level > iterator->end_level) {
+		iterator->rmap = NULL;
+		return;
+	}
+
+	rmap_walk_init_level(iterator, iterator->level);
+}
diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h
index 81df186ba3c3..dc4bf7e609ec 100644
--- a/arch/x86/kvm/mmu/rmap.h
+++ b/arch/x86/kvm/mmu/rmap.h
@@ -54,4 +54,40 @@ struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,

 bool rmap_can_add(struct kvm_vcpu *vcpu);
 void rmap_remove(struct kvm *kvm, u64 *spte);
+
+struct slot_rmap_walk_iterator {
+	/* input fields. */
+	const struct kvm_memory_slot *slot;
+	gfn_t start_gfn;
+	gfn_t end_gfn;
+	int start_level;
+	int end_level;
+
+	/* output fields. */
+	gfn_t gfn;
+	struct kvm_rmap_head *rmap;
+	int level;
+
+	/* private field.
+	 */
+	struct kvm_rmap_head *end_rmap;
+};
+
+void rmap_walk_init_level(struct slot_rmap_walk_iterator *iterator, int level);
+void slot_rmap_walk_init(struct slot_rmap_walk_iterator *iterator,
+			 const struct kvm_memory_slot *slot, int start_level,
+			 int end_level, gfn_t start_gfn, gfn_t end_gfn);
+bool slot_rmap_walk_okay(struct slot_rmap_walk_iterator *iterator);
+void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator);
+
+#define for_each_slot_rmap_range(_slot_, _start_level_, _end_level_,	\
+				 _start_gfn, _end_gfn, _iter_)		\
+	for (slot_rmap_walk_init(_iter_, _slot_, _start_level_,	\
+				 _end_level_, _start_gfn, _end_gfn);	\
+	     slot_rmap_walk_okay(_iter_);				\
+	     slot_rmap_walk_next(_iter_))
+
+typedef bool (*rmap_handler_t)(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+			       struct kvm_memory_slot *slot, gfn_t gfn,
+			       int level, pte_t pte);
+
 #endif /* __KVM_X86_MMU_RMAP_H */
--
2.39.0.rc0.267.gcb52ba06e7-goog
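[For context, this is how the walk macro and the rmap_handler_t typedef fit
together once exported. A sketch of a kvm_handle_gfn_range()-style caller;
the handler invocation is simplified, and the zero pte passed via __pte(0)
stands in for whatever pte argument the real caller supplies:]

static __always_inline bool handle_gfn_range(struct kvm *kvm,
					     struct kvm_gfn_range *range,
					     rmap_handler_t handler)
{
	struct slot_rmap_walk_iterator iterator;
	bool ret = false;

	/* Visit every populated rmap bucket in the range, 4K up to 1G. */
	for_each_slot_rmap_range(range->slot, PG_LEVEL_4K,
				 KVM_MAX_HUGEPAGE_LEVEL,
				 range->start, range->end - 1, &iterator)
		ret |= handler(kvm, iterator.rmap, range->slot, iterator.gfn,
			       iterator.level, __pte(0));

	return ret;
}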
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Vipin Sharma, Ben Gardon
Date: Tue, 6 Dec 2022 17:36:00 +0000
Message-ID: <20221206173601.549281-7-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>
Subject: [PATCH 6/7] KVM: x86/MMU: Move rmap zap operations to rmap.c

Move the various rmap zap functions to rmap.c. These functions are less
"pure" rmap operations in that they also contain some SPTE manipulation;
however, they're mostly about rmap / pte list manipulation.

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c          | 51 +--------------------------------
 arch/x86/kvm/mmu/mmu_internal.h |  1 +
 arch/x86/kvm/mmu/rmap.c         | 50 +++++++++++++++++++++++++++++++-
 arch/x86/kvm/mmu/rmap.h         |  9 +++++-
 4 files changed, 59 insertions(+), 52 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 88da2abc2375..12082314d82d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -512,7 +512,7 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
  * state bits, it is used to clear the last level sptep.
  * Returns the old PTE.
  */
-static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
+u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 {
 	kvm_pfn_t pfn;
 	u64 old_spte = *sptep;
@@ -855,42 +855,6 @@ gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t gfn,
 	return slot;
 }

-static void kvm_zap_one_rmap_spte(struct kvm *kvm,
-				  struct kvm_rmap_head *rmap_head, u64 *sptep)
-{
-	mmu_spte_clear_track_bits(kvm, sptep);
-	pte_list_remove(sptep, rmap_head);
-}
-
-/* Return true if at least one SPTE was zapped, false otherwise */
-static bool kvm_zap_all_rmap_sptes(struct kvm *kvm,
-				   struct kvm_rmap_head *rmap_head)
-{
-	struct pte_list_desc *desc, *next;
-	int i;
-
-	if (!rmap_head->val)
-		return false;
-
-	if (!(rmap_head->val & 1)) {
-		mmu_spte_clear_track_bits(kvm, (u64 *)rmap_head->val);
-		goto out;
-	}
-
-	desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
-
-	for (; desc; desc = next) {
-		for (i = 0; i < desc->spte_count; i++)
-			mmu_spte_clear_track_bits(kvm, desc->sptes[i]);
-		next = desc->more;
-		free_pte_list_desc(desc);
-	}
-out:
-	/* rmap_head is meaningless now, remember to reset it */
-	rmap_head->val = 0;
-	return true;
-}
-
 static void drop_spte(struct kvm *kvm, u64 *sptep)
 {
 	u64 old_spte = mmu_spte_clear_track_bits(kvm, sptep);
@@ -1145,19 +1109,6 @@ static bool kvm_vcpu_write_protect_gfn(struct kvm_vcpu *vcpu, u64 gfn)
 	return kvm_mmu_slot_gfn_write_protect(vcpu->kvm, slot, gfn, PG_LEVEL_4K);
 }

-static bool __kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
-			   const struct kvm_memory_slot *slot)
-{
-	return kvm_zap_all_rmap_sptes(kvm, rmap_head);
-}
-
-static bool kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
-			 struct kvm_memory_slot *slot, gfn_t gfn, int level,
-			 pte_t unused)
-{
-	return __kvm_zap_rmap(kvm, rmap_head, slot);
-}
-
 static bool kvm_set_pte_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 			     struct kvm_memory_slot *slot, gfn_t gfn,
 			     int level, pte_t pte)
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 3de703c2a5d4..a219c8e556e9 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -319,4 +319,5 @@ void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);

 gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index);
+u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep);
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c
index 91af5b32cffb..9cc4252aaabb 100644
--- a/arch/x86/kvm/mmu/rmap.c
+++ b/arch/x86/kvm/mmu/rmap.c
@@ -56,7 +56,7 @@ int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
 	return count;
 }

-void free_pte_list_desc(struct pte_list_desc *pte_list_desc)
+static void free_pte_list_desc(struct pte_list_desc *pte_list_desc)
 {
 	kmem_cache_free(pte_list_desc_cache, pte_list_desc);
 }
@@ -283,3 +283,51 @@ void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator)

 	rmap_walk_init_level(iterator, iterator->level);
 }
+
+void kvm_zap_one_rmap_spte(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+			   u64 *sptep)
+{
+	mmu_spte_clear_track_bits(kvm, sptep);
+	pte_list_remove(sptep, rmap_head);
+}
+
+/* Return true if at least one SPTE was zapped, false otherwise */
+bool kvm_zap_all_rmap_sptes(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
+{
+	struct pte_list_desc *desc, *next;
+	int i;
+
+	if (!rmap_head->val)
+		return false;
+
+	if (!(rmap_head->val & 1)) {
+		mmu_spte_clear_track_bits(kvm, (u64 *)rmap_head->val);
+		goto out;
+	}
+
+	desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
+
+	for (; desc; desc = next) {
+		for (i = 0; i < desc->spte_count; i++)
+			mmu_spte_clear_track_bits(kvm, desc->sptes[i]);
+		next = desc->more;
+		free_pte_list_desc(desc);
+	}
+out:
+	/* rmap_head is meaningless now, remember to reset it */
+	rmap_head->val = 0;
+	return true;
+}
+
+bool __kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+		    const struct kvm_memory_slot *slot)
+{
+	return kvm_zap_all_rmap_sptes(kvm, rmap_head);
+}
+
diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h
index dc4bf7e609ec..a9bf48494e1a 100644
--- a/arch/x86/kvm/mmu/rmap.h
+++ b/arch/x86/kvm/mmu/rmap.h
@@ -27,7 +27,6 @@ static struct kmem_cache *pte_list_desc_cache;

 int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
 		 struct kvm_rmap_head *rmap_head);
-void free_pte_list_desc(struct pte_list_desc *pte_list_desc);
 void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head);
 unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);

@@ -90,4 +89,12 @@ typedef bool (*rmap_handler_t)(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 			       struct kvm_memory_slot *slot, gfn_t gfn,
 			       int level, pte_t pte);

+void kvm_zap_one_rmap_spte(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+			   u64 *sptep);
+bool kvm_zap_all_rmap_sptes(struct kvm *kvm, struct kvm_rmap_head *rmap_head);
+bool __kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+		    const struct kvm_memory_slot *slot);
+bool kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+		  struct kvm_memory_slot *slot, gfn_t gfn, int level,
+		  pte_t unused);
 #endif /* __KVM_X86_MMU_RMAP_H */
--
2.39.0.rc0.267.gcb52ba06e7-goog

From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, David Matlack, Vipin Sharma, Ben Gardon
Date: Tue, 6 Dec 2022 17:36:01 +0000
Message-ID: <20221206173601.549281-8-bgardon@google.com>
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>
Subject: [PATCH 7/7] KVM: x86/MMU: Move rmap_add() to rmap.c

Move rmap_add() to rmap.c to complete the migration of the various rmap
operations out of mmu.c.

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c          | 45 ++++-----------------------------
 arch/x86/kvm/mmu/mmu_internal.h |  6 +++++
 arch/x86/kvm/mmu/rmap.c         | 37 ++++++++++++++++++++++++++-
 arch/x86/kvm/mmu/rmap.h         |  8 +++++-
 4 files changed, 54 insertions(+), 42 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 12082314d82d..b122c90a3e5f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -215,13 +215,13 @@ static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu)
 	return regs;
 }
 
-static inline bool kvm_available_flush_tlb_with_range(void)
+inline bool kvm_available_flush_tlb_with_range(void)
 {
 	return kvm_x86_ops.tlb_remote_flush_with_range;
 }
 
-static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
-					     struct kvm_tlb_range *range)
+void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
+				      struct kvm_tlb_range *range)
 {
 	int ret = -ENOTSUPP;
 
@@ -695,8 +695,8 @@ static u32 kvm_mmu_page_get_access(struct kvm_mmu_page *sp, int index)
 	return sp->role.access;
 }
 
-static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index,
-					 gfn_t gfn, unsigned int access)
+void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index,
+				  gfn_t gfn, unsigned int access)
 {
 	if (sp_has_gptes(sp)) {
 		sp->shadowed_translation[index] = (gfn << PAGE_SHIFT) | access;
@@ -1217,41 +1217,6 @@ static bool kvm_test_age_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 	return false;
 }
 
-#define RMAP_RECYCLE_THRESHOLD 1000
-
-static void __rmap_add(struct kvm *kvm,
-		       struct kvm_mmu_memory_cache *cache,
-		       const struct kvm_memory_slot *slot,
-		       u64 *spte, gfn_t gfn, unsigned int access)
-{
-	struct kvm_mmu_page *sp;
-	struct kvm_rmap_head *rmap_head;
-	int rmap_count;
-
-	sp = sptep_to_sp(spte);
-	kvm_mmu_page_set_translation(sp, spte_index(spte), gfn, access);
-	kvm_update_page_stats(kvm, sp->role.level, 1);
-
-	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
-	rmap_count = pte_list_add(cache, spte, rmap_head);
-
-	if (rmap_count > kvm->stat.max_mmu_rmap_size)
-		kvm->stat.max_mmu_rmap_size = rmap_count;
-	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
-		kvm_zap_all_rmap_sptes(kvm, rmap_head);
-		kvm_flush_remote_tlbs_with_address(
-			kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
-	}
-}
-
-static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
-		     u64 *spte, gfn_t gfn, unsigned int access)
-{
-	struct kvm_mmu_memory_cache *cache = &vcpu->arch.mmu_pte_list_desc_cache;
-
-	__rmap_add(vcpu->kvm, cache, slot, spte, gfn, access);
-}
-
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	bool young = false;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index a219c8e556e9..03da1f8b066e 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -320,4 +320,10 @@ void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
 gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index);
 u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep);
+void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index,
+				  gfn_t gfn, unsigned int access);
+
+inline bool kvm_available_flush_tlb_with_range(void);
+void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
+				      struct kvm_tlb_range *range);
 #endif /* __KVM_X86_MMU_INTERNAL_H */
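[Editorial aside, not part of the posted patch: the __rmap_add() logic being lifted out of mmu.c above, and re-added to rmap.c in the hunk that follows, hinges on one heuristic: once a gfn's chain passes RMAP_RECYCLE_THRESHOLD entries, it is cheaper to zap every existing SPTE (and flush remote TLBs) than to keep extending and re-walking an ever-longer chain. A toy userspace model of that heuristic; toy_rmap, toy_rmap_add(), and toy_zap_all() are hypothetical stand-ins, not the kernel code.]

#include <stdio.h>

#define RMAP_RECYCLE_THRESHOLD 1000

struct toy_rmap {
	int count;	/* stand-in for the real pte_list chain length */
};

static void toy_zap_all(struct toy_rmap *rmap)
{
	/* models kvm_zap_all_rmap_sptes() plus the remote TLB flush */
	rmap->count = 0;
}

static int toy_rmap_add(struct toy_rmap *rmap)
{
	int count = ++rmap->count;

	/* past the threshold, recycle the whole chain rather than grow it */
	if (count > RMAP_RECYCLE_THRESHOLD)
		toy_zap_all(rmap);

	return count;
}

int main(void)
{
	struct toy_rmap rmap = { 0 };
	int i;

	for (i = 0; i < 1500; i++)
		toy_rmap_add(&rmap);

	/* 1500 adds: zapped once at add 1001, leaving 499 live mappings. */
	printf("live mappings: %d\n", rmap.count);
	return 0;
}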
diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c
index 9cc4252aaabb..136c5f4f867b 100644
--- a/arch/x86/kvm/mmu/rmap.c
+++ b/arch/x86/kvm/mmu/rmap.c
@@ -292,7 +292,8 @@ void kvm_zap_one_rmap_spte(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 }
 
 /* Return true if at least one SPTE was zapped, false otherwise */
-bool kvm_zap_all_rmap_sptes(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
+static bool kvm_zap_all_rmap_sptes(struct kvm *kvm,
+				   struct kvm_rmap_head *rmap_head)
 {
 	struct pte_list_desc *desc, *next;
 	int i;
@@ -331,3 +332,37 @@ bool kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 {
 	return __kvm_zap_rmap(kvm, rmap_head, slot);
 }
+
+#define RMAP_RECYCLE_THRESHOLD 1000
+
+void __rmap_add(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+		const struct kvm_memory_slot *slot, u64 *spte, gfn_t gfn,
+		unsigned int access)
+{
+	struct kvm_mmu_page *sp;
+	struct kvm_rmap_head *rmap_head;
+	int rmap_count;
+
+	sp = sptep_to_sp(spte);
+	kvm_mmu_page_set_translation(sp, spte_index(spte), gfn, access);
+	kvm_update_page_stats(kvm, sp->role.level, 1);
+
+	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
+	rmap_count = pte_list_add(cache, spte, rmap_head);
+
+	if (rmap_count > kvm->stat.max_mmu_rmap_size)
+		kvm->stat.max_mmu_rmap_size = rmap_count;
+	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
+		kvm_zap_all_rmap_sptes(kvm, rmap_head);
+		kvm_flush_remote_tlbs_with_address(
+			kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+	}
+}
+
+void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
+	      u64 *spte, gfn_t gfn, unsigned int access)
+{
+	struct kvm_mmu_memory_cache *cache = &vcpu->arch.mmu_pte_list_desc_cache;
+
+	__rmap_add(vcpu->kvm, cache, slot, spte, gfn, access);
+}
diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h
index a9bf48494e1a..b06897dad76a 100644
--- a/arch/x86/kvm/mmu/rmap.h
+++ b/arch/x86/kvm/mmu/rmap.h
@@ -91,10 +91,16 @@ typedef bool (*rmap_handler_t)(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 
 void kvm_zap_one_rmap_spte(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 			   u64 *sptep);
-bool kvm_zap_all_rmap_sptes(struct kvm *kvm, struct kvm_rmap_head *rmap_head);
 bool __kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 		    const struct kvm_memory_slot *slot);
 bool kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 		  struct kvm_memory_slot *slot, gfn_t gfn, int level,
 		  pte_t unused);
+
+void __rmap_add(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+		const struct kvm_memory_slot *slot, u64 *spte, gfn_t gfn,
+		unsigned int access);
+void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
+	      u64 *spte, gfn_t gfn, unsigned int access);
+
 #endif /* __KVM_X86_MMU_RMAP_H */
-- 
2.39.0.rc0.267.gcb52ba06e7-goog
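[Editorial aside, not part of the posted patch: one detail worth noting in the final rmap.h above is that kvm_zap_rmap() keeps a pte_t parameter it never reads purely so its signature matches rmap_handler_t, letting a generic rmap walker invoke zap, set-pte, and test-age handlers interchangeably. A self-contained sketch of that dispatch shape; the mocked types, toy_zap_rmap(), and walk_rmaps() are illustrative inventions, not the kernel's walker.]

#include <stdbool.h>
#include <stdio.h>

/* Mocked-up stand-ins for the kernel types, for illustration only. */
struct kvm { int dummy; };
struct kvm_rmap_head { unsigned long val; };
struct kvm_memory_slot { int dummy; };
typedef unsigned long gfn_t;
typedef struct { unsigned long pte; } pte_t;

/* Same shape as rmap.h's rmap_handler_t. */
typedef bool (*rmap_handler_t)(struct kvm *kvm,
			       struct kvm_rmap_head *rmap_head,
			       struct kvm_memory_slot *slot, gfn_t gfn,
			       int level, pte_t pte);

/* The unused trailing pte_t exists only to satisfy the typedef. */
static bool toy_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
			 struct kvm_memory_slot *slot, gfn_t gfn, int level,
			 pte_t unused)
{
	bool flushed = rmap_head->val != 0;

	rmap_head->val = 0;	/* models zapping every SPTE in the chain */
	return flushed;
}

/* A generic walker only needs the handler's shape, not what it does. */
static bool walk_rmaps(struct kvm *kvm, struct kvm_rmap_head *heads,
		       int nr, struct kvm_memory_slot *slot,
		       rmap_handler_t handler)
{
	bool flush = false;
	int i;

	for (i = 0; i < nr; i++)
		flush |= handler(kvm, &heads[i], slot, i, 1, (pte_t){ 0 });

	return flush;
}

int main(void)
{
	struct kvm kvm = { 0 };
	struct kvm_memory_slot slot = { 0 };
	struct kvm_rmap_head heads[4] = { { 1 }, { 0 }, { 5 }, { 0 } };

	/* Two non-empty heads get zapped, so a flush is reported. */
	printf("flush needed: %d\n",
	       (int)walk_rmaps(&kvm, heads, 4, &slot, toy_zap_rmap));
	return 0;
}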