From nobody Tue Apr  7 07:55:40 2026
Date: Wed, 12 Oct 2022 18:16:53 +0000
In-Reply-To: <20221012181702.3663607-1-seanjc@google.com>
References: <20221012181702.3663607-1-seanjc@google.com>
Message-ID: <20221012181702.3663607-3-seanjc@google.com>
Subject: [PATCH v4 02/11] KVM: x86/mmu: Move TDP MMU VM init/uninit behind tdp_mmu_enabled
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack, Isaku Yamahata
X-Mailer: git-send-email 2.38.0.rc1.362.ged0d419d3c-goog
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

From: David Matlack

Move kvm_mmu_{init,uninit}_tdp_mmu() behind tdp_mmu_enabled. This makes
these functions consistent with the rest of the calls into the TDP MMU
from mmu.c, and is now possible since tdp_mmu_enabled is only modified
when the x86 vendor module is loaded, i.e. it will never change during
the lifetime of a VM.

This change also enables removing the stub definitions for 32-bit KVM,
as the compiler will just optimize the calls out like it does for all
the other TDP MMU functions.
No functional change intended.

Signed-off-by: David Matlack
Reviewed-by: Isaku Yamahata
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c     | 11 +++++++----
 arch/x86/kvm/mmu/tdp_mmu.c |  6 ------
 arch/x86/kvm/mmu/tdp_mmu.h |  7 +++----
 3 files changed, 10 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3a370f575808..b2b970e9fa8d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5984,9 +5984,11 @@ int kvm_mmu_init_vm(struct kvm *kvm)
 	INIT_LIST_HEAD(&kvm->arch.lpage_disallowed_mmu_pages);
 	spin_lock_init(&kvm->arch.mmu_unsync_pages_lock);
 
-	r = kvm_mmu_init_tdp_mmu(kvm);
-	if (r < 0)
-		return r;
+	if (is_tdp_mmu_enabled()) {
+		r = kvm_mmu_init_tdp_mmu(kvm);
+		if (r < 0)
+			return r;
+	}
 
 	node->track_write = kvm_mmu_pte_write;
 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
@@ -6016,7 +6018,8 @@ void kvm_mmu_uninit_vm(struct kvm *kvm)
 
 	kvm_page_track_unregister_notifier(kvm, node);
 
-	kvm_mmu_uninit_tdp_mmu(kvm);
+	if (is_tdp_mmu_enabled())
+		kvm_mmu_uninit_tdp_mmu(kvm);
 
 	mmu_free_vm_memory_caches(kvm);
 }
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index cc2a3a511994..f7c4555d5d36 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -15,9 +15,6 @@ int kvm_mmu_init_tdp_mmu(struct kvm *kvm)
 {
 	struct workqueue_struct *wq;
 
-	if (!is_tdp_mmu_enabled())
-		return 0;
-
 	wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0);
 	if (!wq)
 		return -ENOMEM;
@@ -43,9 +40,6 @@ static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
 
 void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
 {
-	if (!is_tdp_mmu_enabled())
-		return;
-
 	/* Also waits for any queued work items. */
 	destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index c163f7cc23ca..9d086a103f77 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -5,6 +5,9 @@
 
 #include
 
+int kvm_mmu_init_tdp_mmu(struct kvm *kvm);
+void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm);
+
 hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
 
 __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
@@ -66,8 +69,6 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
 					u64 *spte);
 
 #ifdef CONFIG_X86_64
-int kvm_mmu_init_tdp_mmu(struct kvm *kvm);
-void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm);
 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->tdp_mmu_page; }
 
 static inline bool is_tdp_mmu(struct kvm_mmu *mmu)
@@ -87,8 +88,6 @@ static inline bool is_tdp_mmu(struct kvm_mmu *mmu)
 	return sp && is_tdp_mmu_page(sp) && sp->root_count;
 }
 #else
-static inline int kvm_mmu_init_tdp_mmu(struct kvm *kvm) { return 0; }
-static inline void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) {}
 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }
 static inline bool is_tdp_mmu(struct kvm_mmu *mmu) { return false; }
 #endif
-- 
2.38.0.rc1.362.ged0d419d3c-goog