From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang, David Matlack, Jim Mattson
Subject: [PATCH v2 3/9] KVM: x86/mmu: Rename MMU_WARN_ON() to KVM_MMU_WARN_ON()
Date: Fri, 21 Jul 2023 16:00:00 -0700
Message-ID: <20230721230006.2337941-4-seanjc@google.com>
In-Reply-To: <20230721230006.2337941-1-seanjc@google.com>
References: <20230721230006.2337941-1-seanjc@google.com>

Rename MMU_WARN_ON() to make it super obvious that the
assertions are all about KVM's MMU, not the primary MMU.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c          | 4 ++--
 arch/x86/kvm/mmu/mmu_internal.h | 4 ++--
 arch/x86/kvm/mmu/spte.h         | 8 ++++----
 arch/x86/kvm/mmu/tdp_mmu.c      | 8 ++++----
 4 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b16092d71d3f..c87539dd1ac0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1255,7 +1255,7 @@ static bool spte_clear_dirty(u64 *sptep)
 {
 	u64 spte = *sptep;
 
-	MMU_WARN_ON(!spte_ad_enabled(spte));
+	KVM_MMU_WARN_ON(!spte_ad_enabled(spte));
 	spte &= ~shadow_dirty_mask;
 	return mmu_spte_update(sptep, spte);
 }
@@ -1735,7 +1735,7 @@ static void kvm_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 
 static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
 {
-	MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
+	KVM_MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
 	hlist_del(&sp->hash_link);
 	list_del(&sp->link);
 	free_page((unsigned long)sp->spt);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 9ea80e4d463c..bb1649669bc9 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -9,9 +9,9 @@
 #undef MMU_DEBUG
 
 #ifdef MMU_DEBUG
-#define MMU_WARN_ON(x) WARN_ON(x)
+#define KVM_MMU_WARN_ON(x) WARN_ON(x)
 #else
-#define MMU_WARN_ON(x) do { } while (0)
+#define KVM_MMU_WARN_ON(x) do { } while (0)
 #endif
 
 /* Page table builder macros common to shadow (host) PTEs and guest PTEs. */
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 1279db2eab44..83e6614f3720 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -265,13 +265,13 @@ static inline bool sp_ad_disabled(struct kvm_mmu_page *sp)
 
 static inline bool spte_ad_enabled(u64 spte)
 {
-	MMU_WARN_ON(!is_shadow_present_pte(spte));
+	KVM_MMU_WARN_ON(!is_shadow_present_pte(spte));
 	return (spte & SPTE_TDP_AD_MASK) != SPTE_TDP_AD_DISABLED;
 }
 
 static inline bool spte_ad_need_write_protect(u64 spte)
 {
-	MMU_WARN_ON(!is_shadow_present_pte(spte));
+	KVM_MMU_WARN_ON(!is_shadow_present_pte(spte));
 	/*
 	 * This is benign for non-TDP SPTEs as SPTE_TDP_AD_ENABLED is '0',
 	 * and non-TDP SPTEs will never set these bits.  Optimize for 64-bit
@@ -282,13 +282,13 @@ static inline bool spte_ad_need_write_protect(u64 spte)
 
 static inline u64 spte_shadow_accessed_mask(u64 spte)
 {
-	MMU_WARN_ON(!is_shadow_present_pte(spte));
+	KVM_MMU_WARN_ON(!is_shadow_present_pte(spte));
 	return spte_ad_enabled(spte) ? shadow_accessed_mask : 0;
 }
 
 static inline u64 spte_shadow_dirty_mask(u64 spte)
 {
-	MMU_WARN_ON(!is_shadow_present_pte(spte));
+	KVM_MMU_WARN_ON(!is_shadow_present_pte(spte));
 	return spte_ad_enabled(spte) ? shadow_dirty_mask : 0;
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 512163d52194..f881de40f9ef 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1548,8 +1548,8 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 		if (!is_shadow_present_pte(iter.old_spte))
 			continue;
 
-		MMU_WARN_ON(kvm_ad_enabled() &&
-			    spte_ad_need_write_protect(iter.old_spte));
+		KVM_MMU_WARN_ON(kvm_ad_enabled() &&
+				spte_ad_need_write_protect(iter.old_spte));
 
 		if (!(iter.old_spte & dbit))
 			continue;
@@ -1607,8 +1607,8 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
 		if (!mask)
 			break;
 
-		MMU_WARN_ON(kvm_ad_enabled() &&
-			    spte_ad_need_write_protect(iter.old_spte));
+		KVM_MMU_WARN_ON(kvm_ad_enabled() &&
+				spte_ad_need_write_protect(iter.old_spte));
 
 		if (iter.level > PG_LEVEL_4K ||
 		    !(mask & (1UL << (iter.gfn - gfn))))
-- 
2.41.0.487.g6d72f3e995-goog
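
For context, here is a minimal, self-contained user-space sketch of the compile-time-gated assertion pattern that KVM_MMU_WARN_ON() follows. It is illustrative only and not part of the patch: MY_DEBUG and MY_WARN_ON() are hypothetical stand-ins for MMU_DEBUG and KVM_MMU_WARN_ON(), and fprintf() stands in for the kernel's WARN_ON().

/*
 * Illustrative sketch only, not part of the patch.  MY_DEBUG and
 * MY_WARN_ON() are hypothetical stand-ins; in the kernel the enabled
 * variant expands to WARN_ON() rather than printing to stderr.
 */
#include <stdio.h>

#define MY_DEBUG			/* comment out to compile the assertion away */

#ifdef MY_DEBUG
#define MY_WARN_ON(x)						\
do {								\
	if (x)							\
		fprintf(stderr, "WARN at %s:%d: %s\n",		\
			__FILE__, __LINE__, #x);		\
} while (0)
#else
/* Same shape as the patch: with debugging off, the assertion is a no-op. */
#define MY_WARN_ON(x) do { } while (0)
#endif

int main(void)
{
	int spte_present = 0;

	/* Fires (prints a warning) only when MY_DEBUG is defined. */
	MY_WARN_ON(!spte_present);
	return 0;
}

Building with MY_DEBUG defined prints the warning with the stringified condition; with it commented out, the call site compiles to nothing, which mirrors the MMU_DEBUG/!MMU_DEBUG split in mmu_internal.h.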