From nobody Tue Apr 7 06:22:56 2026
Date: Wed, 12 Oct 2022 18:16:52 +0000
In-Reply-To: <20221012181702.3663607-1-seanjc@google.com>
References: <20221012181702.3663607-1-seanjc@google.com>
Message-ID: <20221012181702.3663607-2-seanjc@google.com>
X-Mailer: git-send-email 2.38.0.rc1.362.ged0d419d3c-goog
Subject: [PATCH v4 01/11] KVM: x86/mmu: Change tdp_mmu to a read-only parameter
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack, Isaku Yamahata
X-Mailing-List: linux-kernel@vger.kernel.org

From: David Matlack

Change tdp_mmu to a read-only module parameter and drop the per-VM
tdp_mmu_enabled snapshot. Keep the is_tdp_mmu_enabled() helper instead of
referencing tdp_mmu_enabled directly, to allow for future optimizations
without needing to churn a lot of code, e.g. KVM could use a static key
now that the knob is read-only after the vendor module is loaded.

The TDP MMU was introduced in 5.10 and has been enabled by default since
5.15.
At this point there are no known functionality gaps between the TDP MMU
and the shadow MMU, and the TDP MMU uses less memory and scales better
with the number of vCPUs. In other words, there is no good reason to
disable the TDP MMU on a live system.

Purposely do not drop tdp_mmu=N support (i.e. do not force 64-bit KVM to
always use the TDP MMU), since tdp_mmu=N is still used to get test
coverage of KVM's shadow MMU TDP support, which is used in 32-bit KVM.

Signed-off-by: David Matlack
[sean: keep is_tdp_mmu_enabled()]
Signed-off-by: Sean Christopherson
Reviewed-by: Kai Huang
---
 arch/x86/include/asm/kvm_host.h |  9 ------
 arch/x86/kvm/mmu.h              | 13 ++++++--
 arch/x86/kvm/mmu/mmu.c          | 53 ++++++++++++++++++++++-----------
 arch/x86/kvm/mmu/tdp_mmu.c      |  9 ++----
 4 files changed, 48 insertions(+), 36 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7551b6f9c31c..6e89e7522903 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1270,15 +1270,6 @@ struct kvm_arch {
 	struct task_struct *nx_lpage_recovery_thread;
 
 #ifdef CONFIG_X86_64
-	/*
-	 * Whether the TDP MMU is enabled for this VM. This contains a
-	 * snapshot of the TDP MMU module parameter from when the VM was
-	 * created and remains unchanged for the life of the VM. If this is
-	 * true, TDP MMU handler functions will run for various MMU
-	 * operations.
-	 */
-	bool tdp_mmu_enabled;
-
 	/*
 	 * List of kvm_mmu_page structs being used as roots.
	 * All kvm_mmu_page structs in the list should have
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 6bdaacb6faa0..1ad6d02e103f 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -230,14 +230,21 @@ static inline bool kvm_shadow_root_allocated(struct kvm *kvm)
 }
 
 #ifdef CONFIG_X86_64
-static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return kvm->arch.tdp_mmu_enabled; }
+extern bool tdp_mmu_enabled;
+#endif
+
+static inline bool is_tdp_mmu_enabled(void)
+{
+#ifdef CONFIG_X86_64
+	return tdp_mmu_enabled;
 #else
-static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return false; }
+	return false;
 #endif
+}
 
 static inline bool kvm_memslots_have_rmaps(struct kvm *kvm)
 {
-	return !is_tdp_mmu_enabled(kvm) || kvm_shadow_root_allocated(kvm);
+	return !is_tdp_mmu_enabled() || kvm_shadow_root_allocated(kvm);
 }
 
 static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6f81539061d6..3a370f575808 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -98,6 +98,13 @@ module_param_named(flush_on_reuse, force_flush_and_sync_on_reuse, bool, 0644);
  */
 bool tdp_enabled = false;
 
+#ifdef CONFIG_X86_64
+static bool __ro_after_init tdp_mmu_allowed;
+
+bool __read_mostly tdp_mmu_enabled = true;
+module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0444);
+#endif
+
 static int max_huge_page_level __read_mostly;
 static int tdp_root_level __read_mostly;
 static int max_tdp_level __read_mostly;
@@ -1253,7 +1260,7 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 {
 	struct kvm_rmap_head *rmap_head;
 
-	if (is_tdp_mmu_enabled(kvm))
+	if (is_tdp_mmu_enabled())
 		kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot,
 				slot->base_gfn + gfn_offset, mask, true);
 
@@ -1286,7 +1293,7 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 {
 	struct kvm_rmap_head *rmap_head;
 
-	if (is_tdp_mmu_enabled(kvm))
+	if (is_tdp_mmu_enabled())
 		kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot,
 				slot->base_gfn + gfn_offset, mask, false);
 
@@ -1369,7 +1376,7 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 		}
 	}
 
-	if (is_tdp_mmu_enabled(kvm))
+	if (is_tdp_mmu_enabled())
 		write_protected |= kvm_tdp_mmu_write_protect_gfn(kvm, slot, gfn, min_level);
 
@@ -1532,7 +1539,7 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (kvm_memslots_have_rmaps(kvm))
 		flush = kvm_handle_gfn_range(kvm, range, kvm_zap_rmap);
 
-	if (is_tdp_mmu_enabled(kvm))
+	if (is_tdp_mmu_enabled())
 		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
 
 	return flush;
@@ -1545,7 +1552,7 @@ bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (kvm_memslots_have_rmaps(kvm))
 		flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmap);
 
-	if (is_tdp_mmu_enabled(kvm))
+	if (is_tdp_mmu_enabled())
 		flush |= kvm_tdp_mmu_set_spte_gfn(kvm, range);
 
 	return flush;
@@ -1620,7 +1627,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (kvm_memslots_have_rmaps(kvm))
 		young = kvm_handle_gfn_range(kvm, range, kvm_age_rmap);
 
-	if (is_tdp_mmu_enabled(kvm))
+	if (is_tdp_mmu_enabled())
 		young |= kvm_tdp_mmu_age_gfn_range(kvm, range);
 
 	return young;
@@ -1633,7 +1640,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (kvm_memslots_have_rmaps(kvm))
 		young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmap);
 
-	if (is_tdp_mmu_enabled(kvm))
+	if (is_tdp_mmu_enabled())
 		young |= kvm_tdp_mmu_test_age_gfn(kvm, range);
 
 	return young;
@@ -3557,7 +3564,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 	if (r < 0)
 		goto out_unlock;
 
-	if (is_tdp_mmu_enabled(vcpu->kvm)) {
+	if (is_tdp_mmu_enabled()) {
 		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu);
 		mmu->root.hpa = root;
 	} else if (shadow_root_level >= PT64_ROOT_4LEVEL) {
@@ -5676,6 +5683,9 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 	tdp_root_level = tdp_forced_root_level;
 	max_tdp_level = tdp_max_root_level;
 
+#ifdef CONFIG_X86_64
+	tdp_mmu_enabled = tdp_mmu_allowed && tdp_enabled;
+#endif
 	/*
 	 * max_huge_page_level reflects KVM's MMU capabilities irrespective
 	 * of kernel support, e.g. KVM may be capable of using 1GB pages when
@@ -5923,7 +5933,7 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
 	 * write and in the same critical section as making the reload request,
 	 * e.g. before kvm_zap_obsolete_pages() could drop mmu_lock and yield.
 	 */
-	if (is_tdp_mmu_enabled(kvm))
+	if (is_tdp_mmu_enabled())
 		kvm_tdp_mmu_invalidate_all_roots(kvm);
 
 	/*
@@ -5948,7 +5958,7 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
 	 * Deferring the zap until the final reference to the root is put would
 	 * lead to use-after-free.
 	 */
-	if (is_tdp_mmu_enabled(kvm))
+	if (is_tdp_mmu_enabled())
 		kvm_tdp_mmu_zap_invalidated_roots(kvm);
 }
 
@@ -6060,7 +6070,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 
 	flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end);
 
-	if (is_tdp_mmu_enabled(kvm)) {
+	if (is_tdp_mmu_enabled()) {
 		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
 			flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start,
 						      gfn_end, true, flush);
@@ -6093,7 +6103,7 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 		write_unlock(&kvm->mmu_lock);
 	}
 
-	if (is_tdp_mmu_enabled(kvm)) {
+	if (is_tdp_mmu_enabled()) {
 		read_lock(&kvm->mmu_lock);
 		kvm_tdp_mmu_wrprot_slot(kvm, memslot, start_level);
 		read_unlock(&kvm->mmu_lock);
@@ -6336,7 +6346,7 @@ void kvm_mmu_try_split_huge_pages(struct kvm *kvm,
 				  u64 start, u64 end,
 				  int target_level)
 {
-	if (!is_tdp_mmu_enabled(kvm))
+	if (!is_tdp_mmu_enabled())
 		return;
 
 	if (kvm_memslots_have_rmaps(kvm))
@@ -6357,7 +6367,7 @@ void kvm_mmu_slot_try_split_huge_pages(struct kvm *kvm,
 	u64 start = memslot->base_gfn;
 	u64 end = start + memslot->npages;
 
-	if (!is_tdp_mmu_enabled(kvm))
+	if (!is_tdp_mmu_enabled())
 		return;
 
 	if (kvm_memslots_have_rmaps(kvm)) {
@@ -6440,7 +6450,7 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 		write_unlock(&kvm->mmu_lock);
 	}
 
-	if (is_tdp_mmu_enabled(kvm)) {
+	if (is_tdp_mmu_enabled()) {
 		read_lock(&kvm->mmu_lock);
 		kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot);
 		read_unlock(&kvm->mmu_lock);
@@ -6475,7 +6485,7 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 		write_unlock(&kvm->mmu_lock);
 	}
 
-	if (is_tdp_mmu_enabled(kvm)) {
+	if (is_tdp_mmu_enabled()) {
 		read_lock(&kvm->mmu_lock);
 		kvm_tdp_mmu_clear_dirty_slot(kvm, memslot);
 		read_unlock(&kvm->mmu_lock);
@@ -6510,7 +6520,7 @@ void kvm_mmu_zap_all(struct kvm *kvm)
 
 	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 
-	if (is_tdp_mmu_enabled(kvm))
+	if (is_tdp_mmu_enabled())
 		kvm_tdp_mmu_zap_all(kvm);
 
 	write_unlock(&kvm->mmu_lock);
@@ -6675,6 +6685,15 @@ void __init kvm_mmu_x86_module_init(void)
 	if (nx_huge_pages == -1)
 		__set_nx_huge_pages(get_nx_auto_mode());
 
+#ifdef CONFIG_X86_64
+	/*
+	 * Snapshot userspace's desire to enable the TDP MMU. Whether or not the
+	 * TDP MMU is actually enabled is determined in kvm_configure_mmu()
+	 * when the vendor module is loaded.
+	 */
+	tdp_mmu_allowed = tdp_mmu_enabled;
+#endif
+
 	kvm_mmu_spte_module_init();
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 672f0432d777..cc2a3a511994 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -10,23 +10,18 @@
 #include
 #include
 
-static bool __read_mostly tdp_mmu_enabled = true;
-module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0644);
-
 /* Initializes the TDP MMU for the VM, if enabled. */
 int kvm_mmu_init_tdp_mmu(struct kvm *kvm)
 {
 	struct workqueue_struct *wq;
 
-	if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled))
+	if (!is_tdp_mmu_enabled())
 		return 0;
 
 	wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0);
 	if (!wq)
 		return -ENOMEM;
 
-	/* This should not be changed for the lifetime of the VM. */
-	kvm->arch.tdp_mmu_enabled = true;
 	INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots);
 	spin_lock_init(&kvm->arch.tdp_mmu_pages_lock);
 	INIT_LIST_HEAD(&kvm->arch.tdp_mmu_pages);
@@ -48,7 +43,7 @@ static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
 
 void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
 {
-	if (!kvm->arch.tdp_mmu_enabled)
+	if (!is_tdp_mmu_enabled())
 		return;
 
 	/* Also waits for any queued work items. */
-- 
2.38.0.rc1.362.ged0d419d3c-goog

From nobody Tue Apr 7 06:22:56 2026
Date: Wed, 12 Oct 2022 18:16:53 +0000
In-Reply-To: <20221012181702.3663607-1-seanjc@google.com>
References: <20221012181702.3663607-1-seanjc@google.com>
Message-ID: <20221012181702.3663607-3-seanjc@google.com>
Subject: [PATCH v4 02/11] KVM: x86/mmu: Move TDP MMU VM init/uninit behind tdp_mmu_enabled
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack, Isaku Yamahata

From: David Matlack

Move kvm_mmu_{init,uninit}_tdp_mmu() behind tdp_mmu_enabled. This makes
these functions consistent with the rest of the calls into the TDP MMU
from mmu.c, which is now possible since tdp_mmu_enabled is only modified
when the x86 vendor module is loaded, i.e. it will never change during
the lifetime of a VM.

This change also enables removing the stub definitions for 32-bit KVM, as
the compiler will just optimize the calls out like it does for all the
other TDP MMU functions.

No functional change intended.

Signed-off-by: David Matlack
Reviewed-by: Isaku Yamahata
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c     | 11 +++++++----
 arch/x86/kvm/mmu/tdp_mmu.c |  6 ------
 arch/x86/kvm/mmu/tdp_mmu.h |  7 +++----
 3 files changed, 10 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3a370f575808..b2b970e9fa8d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5984,9 +5984,11 @@ int kvm_mmu_init_vm(struct kvm *kvm)
 	INIT_LIST_HEAD(&kvm->arch.lpage_disallowed_mmu_pages);
 	spin_lock_init(&kvm->arch.mmu_unsync_pages_lock);
 
-	r = kvm_mmu_init_tdp_mmu(kvm);
-	if (r < 0)
-		return r;
+	if (is_tdp_mmu_enabled()) {
+		r = kvm_mmu_init_tdp_mmu(kvm);
+		if (r < 0)
+			return r;
+	}
 
 	node->track_write = kvm_mmu_pte_write;
 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
@@ -6016,7 +6018,8 @@ void kvm_mmu_uninit_vm(struct kvm *kvm)
 
 	kvm_page_track_unregister_notifier(kvm, node);
 
-	kvm_mmu_uninit_tdp_mmu(kvm);
+	if (is_tdp_mmu_enabled())
+		kvm_mmu_uninit_tdp_mmu(kvm);
 
 	mmu_free_vm_memory_caches(kvm);
 }
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index cc2a3a511994..f7c4555d5d36 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -15,9 +15,6 @@ int kvm_mmu_init_tdp_mmu(struct kvm *kvm)
 {
 	struct workqueue_struct *wq;
 
-	if (!is_tdp_mmu_enabled())
-		return 0;
-
 	wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0);
 	if (!wq)
 		return -ENOMEM;
@@ -43,9 +40,6 @@ static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
 
 void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
 {
-	if (!is_tdp_mmu_enabled())
-		return;
-
 	/* Also waits for any queued work items. */
 	destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index c163f7cc23ca..9d086a103f77 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -5,6 +5,9 @@
 
 #include
 
+int kvm_mmu_init_tdp_mmu(struct kvm *kvm);
+void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm);
+
 hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
 
 __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
@@ -66,8 +69,6 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
 					u64 *spte);
 
 #ifdef CONFIG_X86_64
-int kvm_mmu_init_tdp_mmu(struct kvm *kvm);
-void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm);
 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->tdp_mmu_page; }
 
 static inline bool is_tdp_mmu(struct kvm_mmu *mmu)
@@ -87,8 +88,6 @@ static inline bool is_tdp_mmu(struct kvm_mmu *mmu)
 	return sp && is_tdp_mmu_page(sp) && sp->root_count;
 }
 #else
-static inline int kvm_mmu_init_tdp_mmu(struct kvm *kvm) { return 0; }
-static inline void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) {}
 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }
 static inline bool is_tdp_mmu(struct kvm_mmu *mmu) { return false; }
 #endif
-- 
2.38.0.rc1.362.ged0d419d3c-goog

From nobody Tue Apr 7 06:22:56 2026
Date: Wed, 12 Oct 2022 18:16:54 +0000
In-Reply-To: <20221012181702.3663607-1-seanjc@google.com>
References: <20221012181702.3663607-1-seanjc@google.com>
Message-ID: <20221012181702.3663607-4-seanjc@google.com>
Subject: [PATCH v4 03/11] KVM: x86/mmu: Grab mmu_invalidate_seq in kvm_faultin_pfn()
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack, Isaku Yamahata

From: David Matlack

Grab mmu_invalidate_seq in kvm_faultin_pfn() and stash it in struct
kvm_page_fault. This eliminates duplicate code and reduces the number of
parameters needed for is_page_fault_stale().

Preemptively split out __kvm_faultin_pfn() to a separate function for use
in subsequent commits.

No functional change intended.
Signed-off-by: David Matlack
Reviewed-by: Isaku Yamahata
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c          | 21 ++++++++++++---------
 arch/x86/kvm/mmu/mmu_internal.h |  1 +
 arch/x86/kvm/mmu/paging_tmpl.h  |  6 +-----
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b2b970e9fa8d..72c3dc1884f6 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4143,7 +4143,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true);
 }
 
-static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_memory_slot *slot = fault->slot;
 	bool async;
@@ -4199,12 +4199,20 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	return RET_PF_CONTINUE;
 }
 
+static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+{
+	fault->mmu_seq = vcpu->kvm->mmu_invalidate_seq;
+	smp_rmb();
+
+	return __kvm_faultin_pfn(vcpu, fault);
+}
+
 /*
  * Returns true if the page fault is stale and needs to be retried, i.e. if the
  * root was invalidated by a memslot update or a relevant mmu_notifier fired.
  */
 static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
-				struct kvm_page_fault *fault, int mmu_seq)
+				struct kvm_page_fault *fault)
 {
 	struct kvm_mmu_page *sp = to_shadow_page(vcpu->arch.mmu->root.hpa);
 
@@ -4224,14 +4232,12 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
 		return true;
 
 	return fault->slot &&
-	       mmu_invalidate_retry_hva(vcpu->kvm, mmu_seq, fault->hva);
+	       mmu_invalidate_retry_hva(vcpu->kvm, fault->mmu_seq, fault->hva);
 }
 
 static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu);
-
-	unsigned long mmu_seq;
 	int r;
 
 	fault->gfn = fault->addr >> PAGE_SHIFT;
@@ -4248,9 +4254,6 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (r)
 		return r;
 
-	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
-	smp_rmb();
-
 	r = kvm_faultin_pfn(vcpu, fault);
 	if (r != RET_PF_CONTINUE)
 		return r;
@@ -4266,7 +4269,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	else
 		write_lock(&vcpu->kvm->mmu_lock);
 
-	if (is_page_fault_stale(vcpu, fault, mmu_seq))
+	if (is_page_fault_stale(vcpu, fault))
 		goto out_unlock;
 
 	r = make_mmu_pages_available(vcpu);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 582def531d4d..1c0a1e7c796d 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -221,6 +221,7 @@ struct kvm_page_fault {
 	struct kvm_memory_slot *slot;
 
 	/* Outputs of kvm_faultin_pfn. */
+	unsigned long mmu_seq;
 	kvm_pfn_t pfn;
 	hva_t hva;
 	bool map_writable;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 5ab5f94dcb6f..30b9d9b6734f 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -791,7 +791,6 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 {
 	struct guest_walker walker;
 	int r;
-	unsigned long mmu_seq;
 	bool is_self_change_mapping;
 
 	pgprintk("%s: addr %lx err %x\n", __func__, fault->addr, fault->error_code);
@@ -838,9 +837,6 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	else
 		fault->max_level = walker.level;
 
-	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
-	smp_rmb();
-
 	r = kvm_faultin_pfn(vcpu, fault);
 	if (r != RET_PF_CONTINUE)
 		return r;
@@ -871,7 +867,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	r = RET_PF_RETRY;
 	write_lock(&vcpu->kvm->mmu_lock);
 
-	if (is_page_fault_stale(vcpu, fault, mmu_seq))
+	if (is_page_fault_stale(vcpu, fault))
 		goto out_unlock;
 
 	r = make_mmu_pages_available(vcpu);
-- 
2.38.0.rc1.362.ged0d419d3c-goog

From nobody Tue Apr 7 06:22:56 2026
Date: Wed, 12 Oct 2022 18:16:55 +0000
In-Reply-To: <20221012181702.3663607-1-seanjc@google.com>
References: <20221012181702.3663607-1-seanjc@google.com>
Message-ID: <20221012181702.3663607-5-seanjc@google.com>
Subject: [PATCH v4 04/11] KVM: x86/mmu: Handle error PFNs in kvm_faultin_pfn()
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack, Isaku Yamahata

From: David Matlack

Handle error PFNs in kvm_faultin_pfn() rather than relying on the caller
to invoke handle_abnormal_pfn() after kvm_faultin_pfn().
Opportunistically rename kvm_handle_bad_page() to kvm_handle_error_pfn()
to make it more consistent with is_error_pfn().

This commit moves KVM closer to being able to drop handle_abnormal_pfn(),
which will reduce the amount of duplicate code in the various page fault
handlers.

No functional change intended.
Signed-off-by: David Matlack
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 72c3dc1884f6..6417a909181c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3155,7 +3155,7 @@ static void kvm_send_hwpoison_signal(unsigned long address, struct task_struct *
 	send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, PAGE_SHIFT, tsk);
 }
 
-static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
+static int kvm_handle_error_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
 {
 	/*
 	 * Do not cache the mmio info caused by writing the readonly gfn
@@ -3176,10 +3176,6 @@ static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
 static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 			       unsigned int access)
 {
-	/* The pfn is invalid, report the error! */
-	if (unlikely(is_error_pfn(fault->pfn)))
-		return kvm_handle_bad_page(vcpu, fault->gfn, fault->pfn);
-
 	if (unlikely(!fault->slot)) {
 		gva_t gva = fault->is_tdp ? 0 : fault->addr;
 
@@ -4201,10 +4197,19 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 
 static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
+	int ret;
+
 	fault->mmu_seq = vcpu->kvm->mmu_invalidate_seq;
 	smp_rmb();
 
-	return __kvm_faultin_pfn(vcpu, fault);
+	ret = __kvm_faultin_pfn(vcpu, fault);
+	if (ret != RET_PF_CONTINUE)
+		return ret;
+
+	if (unlikely(is_error_pfn(fault->pfn)))
+		return kvm_handle_error_pfn(vcpu, fault->gfn, fault->pfn);
+
+	return RET_PF_CONTINUE;
 }
 
 /*
-- 
2.38.0.rc1.362.ged0d419d3c-goog

From nobody Tue Apr 7 06:22:56 2026
Reply-To: Sean Christopherson
Date: Wed, 12 Oct 2022 18:16:56 +0000
In-Reply-To: <20221012181702.3663607-1-seanjc@google.com>
References: <20221012181702.3663607-1-seanjc@google.com>
Message-ID: <20221012181702.3663607-6-seanjc@google.com>
Subject: [PATCH v4 05/11] KVM: x86/mmu: Avoid memslot lookup during KVM_PFN_ERR_HWPOISON handling
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack, Isaku Yamahata

From: David Matlack

Pass the kvm_page_fault struct down to kvm_handle_error_pfn() to avoid a
memslot lookup when handling KVM_PFN_ERR_HWPOISON. Opportunistically
move the gfn_to_hva_memslot() call and @current down into
kvm_send_hwpoison_signal() to cut down on line lengths.

No functional change intended.

Signed-off-by: David Matlack
Reviewed-by: Isaku Yamahata
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6417a909181c..07c3f83b3204 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3150,23 +3150,25 @@ static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	return ret;
 }
 
-static void kvm_send_hwpoison_signal(unsigned long address, struct task_struct *tsk)
+static void kvm_send_hwpoison_signal(struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, PAGE_SHIFT, tsk);
+	unsigned long hva = gfn_to_hva_memslot(slot, gfn);
+
+	send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva, PAGE_SHIFT, current);
 }
 
-static int kvm_handle_error_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
+static int kvm_handle_error_pfn(struct kvm_page_fault *fault)
 {
 	/*
 	 * Do not cache the mmio info caused by writing the readonly gfn
 	 * into the spte otherwise read access on readonly gfn also can
 	 * caused mmio page fault and treat it as mmio access.
 	 */
-	if (pfn == KVM_PFN_ERR_RO_FAULT)
+	if (fault->pfn == KVM_PFN_ERR_RO_FAULT)
 		return RET_PF_EMULATE;
 
-	if (pfn == KVM_PFN_ERR_HWPOISON) {
-		kvm_send_hwpoison_signal(kvm_vcpu_gfn_to_hva(vcpu, gfn), current);
+	if (fault->pfn == KVM_PFN_ERR_HWPOISON) {
+		kvm_send_hwpoison_signal(fault->slot, fault->gfn);
 		return RET_PF_RETRY;
 	}
 
@@ -4207,7 +4209,7 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		return ret;
 
 	if (unlikely(is_error_pfn(fault->pfn)))
-		return kvm_handle_error_pfn(vcpu, fault->gfn, fault->pfn);
+		return kvm_handle_error_pfn(fault);
 
 	return RET_PF_CONTINUE;
 }
-- 
2.38.0.rc1.362.ged0d419d3c-goog

From nobody Tue Apr 7 06:22:56 2026
Reply-To: Sean Christopherson
Date: Wed, 12 Oct 2022 18:16:57 +0000
In-Reply-To: <20221012181702.3663607-1-seanjc@google.com>
References: <20221012181702.3663607-1-seanjc@google.com>
Message-ID: <20221012181702.3663607-7-seanjc@google.com>
Subject: [PATCH v4 06/11] KVM: x86/mmu: Handle no-slot faults in kvm_faultin_pfn()
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack, Isaku Yamahata

From: David Matlack

Handle faults on GFNs that do not have a backing memslot in
kvm_faultin_pfn() and drop handle_abnormal_pfn(). This eliminates
duplicate code in the various page fault handlers.

Opportunistically tweak the comment about handling gfn > host.MAXPHYADDR
to reflect that the effect of returning RET_PF_EMULATE at that point is
to avoid creating an MMIO SPTE for such GFNs.

No functional change intended.

Signed-off-by: David Matlack
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c         | 56 ++++++++++++++++++----------------
 arch/x86/kvm/mmu/paging_tmpl.h |  6 +---
 2 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 07c3f83b3204..5710be4d328b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3175,28 +3175,32 @@ static int kvm_handle_error_pfn(struct kvm_page_fault *fault)
 	return -EFAULT;
 }
 
-static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
-			       unsigned int access)
+static int kvm_handle_noslot_fault(struct kvm_vcpu *vcpu,
+				   struct kvm_page_fault *fault,
+				   unsigned int access)
 {
-	if (unlikely(!fault->slot)) {
-		gva_t gva = fault->is_tdp ? 0 : fault->addr;
+	gva_t gva = fault->is_tdp ? 0 : fault->addr;
 
-		vcpu_cache_mmio_info(vcpu, gva, fault->gfn,
-				     access & shadow_mmio_access_mask);
-		/*
-		 * If MMIO caching is disabled, emulate immediately without
-		 * touching the shadow page tables as attempting to install an
-		 * MMIO SPTE will just be an expensive nop. Do not cache MMIO
-		 * whose gfn is greater than host.MAXPHYADDR, any guest that
-		 * generates such gfns is running nested and is being tricked
-		 * by L0 userspace (you can observe gfn > L1.MAXPHYADDR if
-		 * and only if L1's MAXPHYADDR is inaccurate with respect to
-		 * the hardware's).
-		 */
-		if (unlikely(!enable_mmio_caching) ||
-		    unlikely(fault->gfn > kvm_mmu_max_gfn()))
-			return RET_PF_EMULATE;
-	}
+	vcpu_cache_mmio_info(vcpu, gva, fault->gfn,
+			     access & shadow_mmio_access_mask);
+
+	/*
+	 * If MMIO caching is disabled, emulate immediately without
+	 * touching the shadow page tables as attempting to install an
+	 * MMIO SPTE will just be an expensive nop.
+	 */
+	if (unlikely(!enable_mmio_caching))
+		return RET_PF_EMULATE;
+
+	/*
+	 * Do not create an MMIO SPTE for a gfn greater than host.MAXPHYADDR,
+	 * any guest that generates such gfns is running nested and is being
+	 * tricked by L0 userspace (you can observe gfn > L1.MAXPHYADDR if and
+	 * only if L1's MAXPHYADDR is inaccurate with respect to the
+	 * hardware's).
+	 */
+	if (unlikely(fault->gfn > kvm_mmu_max_gfn()))
+		return RET_PF_EMULATE;
 
 	return RET_PF_CONTINUE;
 }
@@ -4197,7 +4201,8 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	return RET_PF_CONTINUE;
 }
 
-static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
+			   unsigned int access)
 {
 	int ret;
 
@@ -4211,6 +4216,9 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	if (unlikely(is_error_pfn(fault->pfn)))
 		return kvm_handle_error_pfn(fault);
 
+	if (unlikely(!fault->slot))
+		return kvm_handle_noslot_fault(vcpu, fault, access);
+
 	return RET_PF_CONTINUE;
 }
 
@@ -4261,11 +4269,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	if (r)
 		return r;
 
-	r = kvm_faultin_pfn(vcpu, fault);
-	if (r != RET_PF_CONTINUE)
-		return r;
-
-	r = handle_abnormal_pfn(vcpu, fault, ACC_ALL);
+	r = kvm_faultin_pfn(vcpu, fault, ACC_ALL);
 	if (r != RET_PF_CONTINUE)
 		return r;
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 30b9d9b6734f..60bd642bbb90 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -837,11 +837,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	else
 		fault->max_level = walker.level;
 
-	r = kvm_faultin_pfn(vcpu, fault);
-	if (r != RET_PF_CONTINUE)
-		return r;
-
-	r = handle_abnormal_pfn(vcpu, fault, walker.pte_access);
+	r = kvm_faultin_pfn(vcpu, fault, walker.pte_access);
 	if (r != RET_PF_CONTINUE)
 		return r;
 
-- 
2.38.0.rc1.362.ged0d419d3c-goog

From nobody Tue Apr 7 06:22:56 2026
Reply-To: Sean Christopherson
Date: Wed, 12 Oct 2022 18:16:58 +0000
In-Reply-To: <20221012181702.3663607-1-seanjc@google.com>
References: <20221012181702.3663607-1-seanjc@google.com>
Message-ID: <20221012181702.3663607-8-seanjc@google.com>
Subject: [PATCH v4 07/11] KVM: x86/mmu: Pivot on "TDP MMU enabled" when handling direct page faults
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack, Isaku Yamahata

When handling direct page faults, pivot on the TDP MMU being globally
enabled instead of checking if the target MMU is a TDP MMU. Now that the
TDP MMU is all-or-nothing, if the TDP MMU is enabled, KVM will reach
direct_page_fault() if and only if the MMU is a TDP MMU. When TDP is
enabled (obviously required for the TDP MMU), only non-nested TDP page
faults reach direct_page_fault(), i.e. nonpaging MMUs are impossible, as
NPT requires paging to be enabled and EPT faults use ept_page_fault().

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 5710be4d328b..fe3aa890a487 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3324,7 +3324,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	do {
 		u64 new_spte;
 
-		if (is_tdp_mmu(vcpu->arch.mmu))
+		if (is_tdp_mmu_enabled())
 			sptep = kvm_tdp_mmu_fast_pf_get_last_sptep(vcpu, fault->addr, &spte);
 		else
 			sptep = fast_pf_get_last_sptep(vcpu, fault->addr, &spte);
@@ -4252,7 +4252,6 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
 
 static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
-	bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu);
 	int r;
 
 	fault->gfn = fault->addr >> PAGE_SHIFT;
@@ -4275,7 +4274,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 
 	r = RET_PF_RETRY;
 
-	if (is_tdp_mmu_fault)
+	if (is_tdp_mmu_enabled())
 		read_lock(&vcpu->kvm->mmu_lock);
 	else
 		write_lock(&vcpu->kvm->mmu_lock);
@@ -4287,13 +4286,13 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	if (r)
 		goto out_unlock;
 
-	if (is_tdp_mmu_fault)
+	if (is_tdp_mmu_enabled())
 		r = kvm_tdp_mmu_map(vcpu, fault);
 	else
 		r = __direct_map(vcpu, fault);
 
 out_unlock:
-	if (is_tdp_mmu_fault)
+	if (is_tdp_mmu_enabled())
 		read_unlock(&vcpu->kvm->mmu_lock);
 	else
 		write_unlock(&vcpu->kvm->mmu_lock);
-- 
2.38.0.rc1.362.ged0d419d3c-goog

From nobody Tue Apr 7 06:22:56 2026
Reply-To: Sean Christopherson
Date: Wed, 12 Oct 2022 18:16:59 +0000
In-Reply-To: <20221012181702.3663607-1-seanjc@google.com>
References: <20221012181702.3663607-1-seanjc@google.com>
Message-ID: <20221012181702.3663607-9-seanjc@google.com>
Subject: [PATCH v4 08/11] KVM: x86/mmu: Pivot on "TDP MMU enabled" to check if active MMU is TDP MMU
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack, Isaku Yamahata

Simplify and optimize the logic for detecting if the current/active MMU
is a TDP MMU. If the TDP MMU is globally enabled, then the active MMU is
a TDP MMU if it is direct. When TDP is enabled, so-called nonpaging MMUs
are never used, as the only form of shadow paging KVM uses is for nested
TDP, and the active MMU can't be direct in that case.
Rename the helper and take the vCPU instead of an arbitrary MMU, as
nonpaging MMUs can show up in the walk_mmu if L1 is using nested TDP and
L2 has paging disabled. Taking the vCPU has the added bonus of cleaning
up the callers, all of which check the current MMU but wrap code that
consumes the vCPU.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c     | 11 ++++++++---
 arch/x86/kvm/mmu/tdp_mmu.h | 18 ------------------
 2 files changed, 8 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index fe3aa890a487..1598aaf29c4a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -615,9 +615,14 @@ static bool mmu_spte_age(u64 *sptep)
 	return true;
 }
 
+static inline bool is_tdp_mmu_active(struct kvm_vcpu *vcpu)
+{
+	return is_tdp_mmu_enabled() && vcpu->arch.mmu->root_role.direct;
+}
+
 static void walk_shadow_page_lockless_begin(struct kvm_vcpu *vcpu)
 {
-	if (is_tdp_mmu(vcpu->arch.mmu)) {
+	if (is_tdp_mmu_active(vcpu)) {
 		kvm_tdp_mmu_walk_lockless_begin();
 	} else {
 		/*
@@ -636,7 +641,7 @@ static void walk_shadow_page_lockless_begin(struct kvm_vcpu *vcpu)
 
 static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
 {
-	if (is_tdp_mmu(vcpu->arch.mmu)) {
+	if (is_tdp_mmu_active(vcpu)) {
 		kvm_tdp_mmu_walk_lockless_end();
 	} else {
 		/*
@@ -3997,7 +4002,7 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 
 	walk_shadow_page_lockless_begin(vcpu);
 
-	if (is_tdp_mmu(vcpu->arch.mmu))
+	if (is_tdp_mmu_active(vcpu))
 		leaf = kvm_tdp_mmu_get_walk(vcpu, addr, sptes, &root);
 	else
 		leaf = get_walk(vcpu, addr, sptes, &root);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 9d086a103f77..5808f32e4a45 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -70,26 +70,8 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
 
 #ifdef CONFIG_X86_64
 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->tdp_mmu_page; }
-
-static inline bool is_tdp_mmu(struct kvm_mmu *mmu)
-{
-	struct kvm_mmu_page *sp;
-	hpa_t hpa = mmu->root.hpa;
-
-	if (WARN_ON(!VALID_PAGE(hpa)))
-		return false;
-
-	/*
-	 * A NULL shadow page is legal when shadowing a non-paging guest with
-	 * PAE paging, as the MMU will be direct with root_hpa pointing at the
-	 * pae_root page, not a shadow page.
-	 */
-	sp = to_shadow_page(hpa);
-	return sp && is_tdp_mmu_page(sp) && sp->root_count;
-}
 #else
 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }
-static inline bool is_tdp_mmu(struct kvm_mmu *mmu) { return false; }
 #endif
 
 #endif /* __KVM_X86_MMU_TDP_MMU_H */
-- 
2.38.0.rc1.362.ged0d419d3c-goog

From nobody Tue Apr 7 06:22:56 2026
Reply-To: Sean Christopherson
Date: Wed, 12 Oct 2022 18:17:00 +0000
In-Reply-To: <20221012181702.3663607-1-seanjc@google.com>
References: <20221012181702.3663607-1-seanjc@google.com>
Message-ID: <20221012181702.3663607-10-seanjc@google.com>
Subject: [PATCH v4 09/11] KVM: x86/mmu: Replace open coded usage of tdp_mmu_page with is_tdp_mmu_page()
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack, Isaku Yamahata

Use is_tdp_mmu_page() instead of querying sp->tdp_mmu_page directly so
that all users benefit if KVM ever finds a way to optimize the logic.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c     | 2 +-
 arch/x86/kvm/mmu/tdp_mmu.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1598aaf29c4a..4792d76edd6d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1907,7 +1907,7 @@ static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 		return true;
 
 	/* TDP MMU pages due not use the MMU generation. */
-	return !sp->tdp_mmu_page &&
+	return !is_tdp_mmu_page(sp) &&
 	       unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen);
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index f7c4555d5d36..477418a2ed9b 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -134,7 +134,7 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
 	if (!refcount_dec_and_test(&root->tdp_mmu_root_count))
 		return;
 
-	WARN_ON(!root->tdp_mmu_page);
+	WARN_ON(!is_tdp_mmu_page(root));
 
 	/*
 	 * The root now has refcount=0. It is valid, but readers already
-- 
2.38.0.rc1.362.ged0d419d3c-goog

From nobody Tue Apr 7 06:22:56 2026
Date: Wed, 12 Oct 2022 18:17:01 +0000
In-Reply-To: <20221012181702.3663607-1-seanjc@google.com>
Message-ID: <20221012181702.3663607-11-seanjc@google.com>
Subject: [PATCH v4 10/11] KVM: x86/mmu: Use static key/branches for
 checking if TDP MMU is enabled
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack,
 Isaku Yamahata

Now that the TDP MMU being enabled is read-only after the vendor module
is loaded, use a static key to track whether or not the TDP MMU is
enabled, to avoid conditional branches in hot paths, e.g. in
direct_page_fault() and fast_page_fault().
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu.h     |  5 +++--
 arch/x86/kvm/mmu/mmu.c | 14 ++++++++++----
 2 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 1ad6d02e103f..bc0d8a5c09f9 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -2,6 +2,7 @@
 #ifndef __KVM_X86_MMU_H
 #define __KVM_X86_MMU_H

+#include
 #include
 #include "kvm_cache_regs.h"
 #include "cpuid.h"
@@ -230,13 +231,13 @@ static inline bool kvm_shadow_root_allocated(struct kvm *kvm)
 }

 #ifdef CONFIG_X86_64
-extern bool tdp_mmu_enabled;
+DECLARE_STATIC_KEY_TRUE(tdp_mmu_enabled);
 #endif

 static inline bool is_tdp_mmu_enabled(void)
 {
 #ifdef CONFIG_X86_64
-	return tdp_mmu_enabled;
+	return static_branch_likely(&tdp_mmu_enabled);
 #else
 	return false;
 #endif

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4792d76edd6d..a5ba7b41263d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -101,8 +101,10 @@ bool tdp_enabled = false;
 #ifdef CONFIG_X86_64
 static bool __ro_after_init tdp_mmu_allowed;

-bool __read_mostly tdp_mmu_enabled = true;
-module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0444);
+static bool __read_mostly __tdp_mmu_enabled = true;
+module_param_named(tdp_mmu, __tdp_mmu_enabled, bool, 0444);
+
+DEFINE_STATIC_KEY_TRUE(tdp_mmu_enabled);
 #endif

 static int max_huge_page_level __read_mostly;
@@ -5702,7 +5704,11 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 	max_tdp_level = tdp_max_root_level;

 #ifdef CONFIG_X86_64
-	tdp_mmu_enabled = tdp_mmu_allowed && tdp_enabled;
+	__tdp_mmu_enabled = tdp_mmu_allowed && tdp_enabled;
+	if (__tdp_mmu_enabled)
+		static_branch_enable(&tdp_mmu_enabled);
+	else
+		static_branch_disable(&tdp_mmu_enabled);
 #endif
 	/*
 	 * max_huge_page_level reflects KVM's MMU capabilities irrespective
@@ -6712,7 +6718,7 @@ void __init kvm_mmu_x86_module_init(void)
 	 * TDP MMU is actually enabled is determined in kvm_configure_mmu()
 	 *
 	 * when the vendor module is loaded.
 	 */
-	tdp_mmu_allowed = tdp_mmu_enabled;
+	tdp_mmu_allowed = __tdp_mmu_enabled;
 #endif

 	kvm_mmu_spte_module_init();
--
2.38.0.rc1.362.ged0d419d3c-goog
Date: Wed, 12 Oct 2022 18:17:02 +0000
In-Reply-To: <20221012181702.3663607-1-seanjc@google.com>
Message-ID: <20221012181702.3663607-12-seanjc@google.com>
Subject: [PATCH v4 11/11] KVM: x86/mmu: Stop needlessly making MMU pages
 available for TDP MMU
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack,
 Isaku Yamahata

From: David Matlack

Stop calling make_mmu_pages_available() when handling TDP MMU faults and
when allocating TDP MMU roots.  The TDP MMU does not participate in the
"available MMU pages" tracking and limiting, so calling this function is
unnecessary work when handling TDP MMU faults.
Signed-off-by: David Matlack
[sean: apply to root allocation too]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a5ba7b41263d..0fcf4560f4d8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3569,9 +3569,12 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 	int r;

 	write_lock(&vcpu->kvm->mmu_lock);
-	r = make_mmu_pages_available(vcpu);
-	if (r < 0)
-		goto out_unlock;
+
+	if (!is_tdp_mmu_enabled()) {
+		r = make_mmu_pages_available(vcpu);
+		if (r < 0)
+			goto out_unlock;
+	}

 	if (is_tdp_mmu_enabled()) {
 		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu);
@@ -4289,14 +4292,15 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	if (is_page_fault_stale(vcpu, fault))
 		goto out_unlock;

-	r = make_mmu_pages_available(vcpu);
-	if (r)
-		goto out_unlock;
-
-	if (is_tdp_mmu_enabled())
+	if (is_tdp_mmu_enabled()) {
 		r = kvm_tdp_mmu_map(vcpu, fault);
-	else
+	} else {
+		r = make_mmu_pages_available(vcpu);
+		if (r)
+			goto out_unlock;
+
 		r = __direct_map(vcpu, fault);
+	}

 out_unlock:
 	if (is_tdp_mmu_enabled())
--
2.38.0.rc1.362.ged0d419d3c-goog