From nobody Sat Apr 18 07:43:05 2026
Subject: [PATCH 1/4] KVM: x86/mmu: Don't require refcounted "struct page" to create huge SPTEs
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang
Date: Fri, 15 Jul 2022 23:21:04 +0000
Message-Id: <20220715232107.3775620-2-seanjc@google.com>
In-Reply-To: <20220715232107.3775620-1-seanjc@google.com>
References: <20220715232107.3775620-1-seanjc@google.com>

Drop the requirement that a pfn be backed by a refcounted, compound or
ZONE_DEVICE, struct page, and instead rely solely on the host page
tables to identify huge pages.  The PageCompound() check is a remnant of
an old implementation that identified (well, attempted to identify) huge
pages without walking the host page tables.  The ZONE_DEVICE check was
added as an exception to the PageCompound() requirement.  In other
words, neither check is actually a hard requirement: if the primary MMU
has a pfn backed with a huge page, then KVM can back the pfn with a huge
page regardless of the backing store.
Dropping the @pfn parameter will also allow KVM to query the max host
mapping level without having to first get the pfn, which is advantageous
for use outside of the page fault path where KVM wants to take action if
and only if a page can be mapped huge, i.e. avoids the pfn lookup for
gfns that can't be backed with a huge page.

Cc: Mingwei Zhang
Signed-off-by: Sean Christopherson
Reviewed-by: Mingwei Zhang
---
 arch/x86/kvm/mmu/mmu.c          | 23 +++++------------------
 arch/x86/kvm/mmu/mmu_internal.h |  2 +-
 arch/x86/kvm/mmu/tdp_mmu.c      |  8 +-------
 3 files changed, 7 insertions(+), 26 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 52664c3caaab..bebff1d5acd4 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2919,11 +2919,10 @@ static void direct_pte_prefetch(struct kvm_vcpu *vcpu, u64 *sptep)
 	__direct_pte_prefetch(vcpu, sp, sptep);
 }
 
-static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
+static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
 				  const struct kvm_memory_slot *slot)
 {
 	int level = PG_LEVEL_4K;
-	struct page *page;
 	unsigned long hva;
 	unsigned long flags;
 	pgd_t pgd;
@@ -2931,17 +2930,6 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
 	pud_t pud;
 	pmd_t pmd;
 
-	/*
-	 * Note, @slot must be non-NULL, i.e. the caller is responsible for
-	 * ensuring @pfn isn't garbage and is backed by a memslot.
-	 */
-	page = kvm_pfn_to_refcounted_page(pfn);
-	if (!page)
-		return PG_LEVEL_4K;
-
-	if (!PageCompound(page) && !kvm_is_zone_device_page(page))
-		return PG_LEVEL_4K;
-
 	/*
 	 * Note, using the already-retrieved memslot and __gfn_to_hva_memslot()
 	 * is not solely for performance, it's also necessary to avoid the
@@ -2994,7 +2982,7 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
 
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn,
-			      kvm_pfn_t pfn, int max_level)
+			      int max_level)
 {
 	struct kvm_lpage_info *linfo;
 	int host_level;
@@ -3009,7 +2997,7 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
-	host_level = host_pfn_mapping_level(kvm, gfn, pfn, slot);
+	host_level = host_pfn_mapping_level(kvm, gfn, slot);
 	return min(host_level, max_level);
 }
 
@@ -3034,8 +3022,7 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 * level, which will be used to do precise, accurate accounting.
 	 */
 	fault->req_level = kvm_mmu_max_mapping_level(vcpu->kvm, slot,
-						     fault->gfn, fault->pfn,
-						     fault->max_level);
+						     fault->gfn, fault->max_level);
 	if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed)
 		return;
 
@@ -6406,7 +6393,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 	 */
 	if (sp->role.direct &&
 	    sp->role.level < kvm_mmu_max_mapping_level(kvm, slot, sp->gfn,
-						       pfn, PG_LEVEL_NUM)) {
+						       PG_LEVEL_NUM)) {
 		pte_list_remove(kvm, rmap_head, sptep);
 
 		if (kvm_available_flush_tlb_with_range())
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index ae2d660e2dab..582def531d4d 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -309,7 +309,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn,
-			      kvm_pfn_t pfn, int max_level);
+			      int max_level);
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
 void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level);
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index f3a430d64975..d75d93edc40a 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1733,7 +1733,6 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
 	gfn_t end = start + slot->npages;
 	struct tdp_iter iter;
 	int max_mapping_level;
-	kvm_pfn_t pfn;
 
 	rcu_read_lock();
 
@@ -1745,13 +1744,8 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
 		    !is_last_spte(iter.old_spte, iter.level))
 			continue;
 
-		/*
-		 * This is a leaf SPTE. Check if the PFN it maps can
-		 * be mapped at a higher level.
-		 */
-		pfn = spte_to_pfn(iter.old_spte);
 		max_mapping_level = kvm_mmu_max_mapping_level(kvm, slot,
-							      iter.gfn, pfn, PG_LEVEL_NUM);
+							      iter.gfn, PG_LEVEL_NUM);
 
 		WARN_ON(max_mapping_level < iter.level);
 
-- 
2.37.0.170.g444d1eabd0-goog

From nobody Sat Apr 18 07:43:05 2026
Subject: [PATCH 2/4] KVM: x86/mmu: Document the "rules" for using host_pfn_mapping_level()
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang
Date: Fri, 15 Jul 2022 23:21:05 +0000
Message-Id: <20220715232107.3775620-3-seanjc@google.com>
In-Reply-To: <20220715232107.3775620-1-seanjc@google.com>
References: <20220715232107.3775620-1-seanjc@google.com>

Add a comment to document how host_pfn_mapping_level() can be used
safely, as the line between safe and dangerous is quite thin.  E.g. if
KVM were to ever support in-place promotion to create huge pages,
consuming the level is safe if the caller holds mmu_lock and checks that
there's an existing _leaf_ SPTE, but unsafe if the caller only checks
that there's a non-leaf SPTE.
Opportunistically tweak the existing comments to explicitly document why
KVM needs to use READ_ONCE().

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 42 +++++++++++++++++++++++++++++++++++-------
 1 file changed, 35 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index bebff1d5acd4..d5b644f3e003 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2919,6 +2919,31 @@ static void direct_pte_prefetch(struct kvm_vcpu *vcpu, u64 *sptep)
 	__direct_pte_prefetch(vcpu, sp, sptep);
 }
 
+/*
+ * Lookup the mapping level for @gfn in the current mm.
+ *
+ * WARNING!  Use of host_pfn_mapping_level() requires the caller and the end
+ * consumer to be tied into KVM's handlers for MMU notifier events!
+ *
+ * There are several ways to safely use this helper:
+ *
+ * - Check mmu_notifier_retry_hva() after grabbing the mapping level, before
+ *   consuming it.  In this case, mmu_lock doesn't need to be held during the
+ *   lookup, but it does need to be held while checking the MMU notifier.
+ *
+ * - Hold mmu_lock AND ensure there is no in-progress MMU notifier invalidation
+ *   event for the hva.  This can be done by explicitly checking the MMU notifier
+ *   or by ensuring that KVM already has a valid mapping that covers the hva.
+ *
+ * - Do not use the result to install new mappings, e.g. use the host mapping
+ *   level only to decide whether or not to zap an entry.  In this case, it's
+ *   not required to hold mmu_lock (though it's highly likely the caller will
+ *   want to hold mmu_lock anyways, e.g. to modify SPTEs).
+ *
+ * Note!  The lookup can still race with modifications to host page tables, but
+ * the above "rules" ensure KVM will not _consume_ the result of the walk if a
+ * race with the primary MMU occurs.
+ */
 static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
 				  const struct kvm_memory_slot *slot)
 {
@@ -2941,16 +2966,19 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
 	hva = __gfn_to_hva_memslot(slot, gfn);
 
 	/*
-	 * Lookup the mapping level in the current mm.  The information
-	 * may become stale soon, but it is safe to use as long as
-	 * 1) mmu_notifier_retry was checked after taking mmu_lock, and
-	 * 2) mmu_lock is taken now.
-	 *
-	 * We still need to disable IRQs to prevent concurrent tear down
-	 * of page tables.
+	 * Disable IRQs to prevent concurrent tear down of host page tables,
+	 * e.g. if the primary MMU promotes a P*D to a huge page and then frees
+	 * the original page table.
 	 */
 	local_irq_save(flags);
 
+	/*
+	 * Read each entry once.  As above, a non-leaf entry can be promoted to
+	 * a huge page _during_ this walk.  Re-reading the entry could send the
+	 * walk into the weeds, e.g. p*d_large() returns false (sees the old
+	 * value) and then p*d_offset() walks into the target huge page instead
+	 * of the old page table (sees the new value).
+	 */
 	pgd = READ_ONCE(*pgd_offset(kvm->mm, hva));
 	if (pgd_none(pgd))
 		goto out;
-- 
2.37.0.170.g444d1eabd0-goog

From nobody Sat Apr 18 07:43:05 2026
Subject: [PATCH 3/4] KVM: x86/mmu: Don't bottom out on leafs when zapping collapsible SPTEs
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang
Date: Fri, 15 Jul 2022 23:21:06 +0000
Message-Id: <20220715232107.3775620-4-seanjc@google.com>
In-Reply-To: <20220715232107.3775620-1-seanjc@google.com>
References: <20220715232107.3775620-1-seanjc@google.com>

When zapping collapsible SPTEs in the TDP MMU, don't bottom out on a
leaf SPTE now that KVM doesn't require a PFN to compute the host mapping
level, i.e. now that there's no need to first find a leaf SPTE and then
step back up.

Drop the now unused tdp_iter_step_up(), as it is not the safest of
helpers (using any of the low level iterators requires some
understanding of the various side effects).
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_iter.c |  9 ------
 arch/x86/kvm/mmu/tdp_iter.h |  1 -
 arch/x86/kvm/mmu/tdp_mmu.c  | 57 ++++++++++++++++++-------------------
 3 files changed, 27 insertions(+), 40 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index 9c65a64a56d9..39b48e7d7d1a 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -145,15 +145,6 @@ static bool try_step_up(struct tdp_iter *iter)
 	return true;
 }
 
-/*
- * Step the iterator back up a level in the paging structure. Should only be
- * used when the iterator is below the root level.
- */
-void tdp_iter_step_up(struct tdp_iter *iter)
-{
-	WARN_ON(!try_step_up(iter));
-}
-
 /*
  * Step to the next SPTE in a pre-order traversal of the paging structure.
  * To get to the next SPTE, the iterator either steps down towards the goal
diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index adfca0cf94d3..f0af385c56e0 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -114,6 +114,5 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root,
 		    int min_level, gfn_t next_last_level_gfn);
 void tdp_iter_next(struct tdp_iter *iter);
 void tdp_iter_restart(struct tdp_iter *iter);
-void tdp_iter_step_up(struct tdp_iter *iter);
 
 #endif /* __KVM_X86_MMU_TDP_ITER_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index d75d93edc40a..40ccb5fba870 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1721,10 +1721,6 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 		clear_dirty_pt_masked(kvm, root, gfn, mask, wrprot);
 }
 
-/*
- * Clear leaf entries which could be replaced by large mappings, for
- * GFNs within the slot.
- */
 static void zap_collapsible_spte_range(struct kvm *kvm,
 				       struct kvm_mmu_page *root,
 				       const struct kvm_memory_slot *slot)
@@ -1736,48 +1732,49 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
 
 	rcu_read_lock();
 
-	tdp_root_for_each_pte(iter, root, start, end) {
+	for_each_tdp_pte_min_level(iter, root, PG_LEVEL_2M, start, end) {
+retry:
 		if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true))
 			continue;
 
-		if (!is_shadow_present_pte(iter.old_spte) ||
-		    !is_last_spte(iter.old_spte, iter.level))
+		if (iter.level > KVM_MAX_HUGEPAGE_LEVEL ||
+		    !is_shadow_present_pte(iter.old_spte))
+			continue;
+
+		/*
+		 * Don't zap leaf SPTEs, if a leaf SPTE could be replaced with
+		 * a large page size, then its parent would have been zapped
+		 * instead of stepping down.
+		 */
+		if (is_last_spte(iter.old_spte, iter.level))
+			continue;
+
+		/*
+		 * If iter.gfn resides outside of the slot, i.e. the page for
+		 * the current level overlaps but is not contained by the slot,
+		 * then the SPTE can't be made huge.  More importantly, trying
+		 * to query that info from slot->arch.lpage_info will cause an
+		 * out-of-bounds access.
+		 */
+		if (iter.gfn < start || iter.gfn >= end)
 			continue;
 
 		max_mapping_level = kvm_mmu_max_mapping_level(kvm, slot,
 							      iter.gfn, PG_LEVEL_NUM);
-
-		WARN_ON(max_mapping_level < iter.level);
-
-		/*
-		 * If this page is already mapped at the highest
-		 * viable level, there's nothing more to do.
-		 */
-		if (max_mapping_level == iter.level)
+		if (max_mapping_level < iter.level)
 			continue;
 
-		/*
-		 * The page can be remapped at a higher level, so step
-		 * up to zap the parent SPTE.
-		 */
-		while (max_mapping_level > iter.level)
-			tdp_iter_step_up(&iter);
-
 		/* Note, a successful atomic zap also does a remote TLB flush. */
-		tdp_mmu_zap_spte_atomic(kvm, &iter);
-
-		/*
-		 * If the atomic zap fails, the iter will recurse back into
-		 * the same subtree to retry.
-		 */
+		if (tdp_mmu_zap_spte_atomic(kvm, &iter))
+			goto retry;
 	}
 
 	rcu_read_unlock();
 }
 
 /*
- * Clear non-leaf entries (and free associated page tables) which could
- * be replaced by large mappings, for GFNs within the slot.
+ * Zap non-leaf SPTEs (and free their associated page tables) which could
+ * be replaced by huge pages, for GFNs within the slot.
  */
 void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
 				       const struct kvm_memory_slot *slot)
-- 
2.37.0.170.g444d1eabd0-goog

From nobody Sat Apr 18 07:43:05 2026
Subject: [PATCH 4/4] KVM: selftests: Add an option to run vCPUs while disabling dirty logging
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang
Date: Fri, 15 Jul 2022 23:21:07 +0000
Message-Id: <20220715232107.3775620-5-seanjc@google.com>
In-Reply-To: <20220715232107.3775620-1-seanjc@google.com>
References: <20220715232107.3775620-1-seanjc@google.com>

Add a command line option to dirty_log_perf_test to run vCPUs for the
entire duration of disabling dirty logging.
By default, the test stops running vCPUs before disabling dirty logging,
which is faster but less interesting as it doesn't stress KVM's handling
of contention between page faults and the zapping of collapsible SPTEs.

Enabling the flag also lets the user verify that KVM is indeed
rebuilding zapped SPTEs as huge pages by checking KVM's pages_{1g,2m,4k}
stats.  Without vCPUs to fault in the zapped SPTEs, the stats will show
that KVM is zapping pages, but they never show whether or not KVM
actually allows huge pages to be recreated.

Note!  Enabling the flag can _significantly_ increase runtime,
especially if the thread that's disabling dirty logging doesn't have a
dedicated pCPU, e.g. if all pCPUs are used to run vCPUs.

Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/dirty_log_perf_test.c       | 30 +++++++++++++++++--
 1 file changed, 27 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 808a36dbf0c0..f99e39a672d3 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -59,6 +59,7 @@ static void arch_cleanup_vm(struct kvm_vm *vm)
 
 static int nr_vcpus = 1;
 static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
+static bool run_vcpus_while_disabling_dirty_logging;
 
 /* Host variables */
 static u64 dirty_log_manual_caps;
@@ -109,8 +110,13 @@ static void vcpu_worker(struct perf_test_vcpu_args *vcpu_args)
 				       ts_diff.tv_nsec);
 		}
 
+		/*
+		 * Keep running the guest while dirty logging is being disabled
+		 * (iteration is negative) so that vCPUs are accessing memory
+		 * for the entire duration of zapping collapsible SPTEs.
+		 */
 		while (current_iteration == READ_ONCE(iteration) &&
-		       !READ_ONCE(host_quit)) {}
+		       READ_ONCE(iteration) >= 0 && !READ_ONCE(host_quit)) {}
 	}
 
 	avg = timespec_div(total, vcpu_last_completed_iteration[vcpu_idx]);
@@ -302,6 +308,14 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 		}
 	}
 
+	/*
+	 * Run vCPUs while dirty logging is being disabled to stress disabling
+	 * in terms of both performance and correctness.  Opt-in via command
+	 * line as this significantly increases time to disable dirty logging.
+	 */
+	if (run_vcpus_while_disabling_dirty_logging)
+		WRITE_ONCE(iteration, -1);
+
 	/* Disable dirty logging */
 	clock_gettime(CLOCK_MONOTONIC, &start);
 	disable_dirty_logging(vm, p->slots);
@@ -309,7 +323,11 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	pr_info("Disabling dirty logging time: %ld.%.9lds\n",
 		ts_diff.tv_sec, ts_diff.tv_nsec);
 
-	/* Tell the vcpu thread to quit */
+	/*
+	 * Tell the vCPU threads to quit.  No need to manually check that vCPUs
+	 * have stopped running after disabling dirty logging, the join will
+	 * wait for them to exit.
+	 */
 	host_quit = true;
 	perf_test_join_vcpu_threads(nr_vcpus);
 
@@ -349,6 +367,9 @@ static void help(char *name)
 	       "     Warning: a low offset can conflict with the loaded test code.\n");
 	guest_modes_help();
 	printf(" -n: Run the vCPUs in nested mode (L2)\n");
+	printf(" -e: Run vCPUs while dirty logging is being disabled.  This\n"
+	       "     can significantly increase runtime, especially if there\n"
+	       "     isn't a dedicated pCPU for the main thread.\n");
 	printf(" -b: specify the size of the memory region which should be\n"
 	       "     dirtied by each vCPU. e.g. 10M or 3G.\n"
 	       "     (default: 1G)\n");
@@ -385,8 +406,11 @@ int main(int argc, char *argv[])
 
 	guest_modes_append_default();
 
-	while ((opt = getopt(argc, argv, "ghi:p:m:nb:f:v:os:x:")) != -1) {
+	while ((opt = getopt(argc, argv, "eghi:p:m:nb:f:v:os:x:")) != -1) {
 		switch (opt) {
+		case 'e':
+			/* 'e' is for evil. */
+			run_vcpus_while_disabling_dirty_logging = true;
 		case 'g':
 			dirty_log_manual_caps = 0;
 			break;
-- 
2.37.0.170.g444d1eabd0-goog