From nobody Tue Feb 10 06:05:04 2026
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang, David Matlack, Jim Mattson
Date: Thu, 11 May 2023 16:59:09 -0700
Subject: [PATCH 1/9] KVM: x86/mmu: Delete pgprintk() and all its usage
Message-ID: <20230511235917.639770-2-seanjc@google.com>
In-Reply-To: <20230511235917.639770-1-seanjc@google.com>
References: <20230511235917.639770-1-seanjc@google.com>

Delete KVM's pgprintk() and all its usage, as the code is very
prone to bitrot due to being buried behind MMU_DEBUG, and the
functionality has been rendered almost entirely obsolete by the
tracepoints KVM has gained over the years.  And for the situations
where the information provided by KVM's tracepoints is insufficient,
pgprintk() rarely fills in the gaps, and is almost always far too
noisy, i.e. developers end up implementing custom prints anyway.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c          | 17 -----------------
 arch/x86/kvm/mmu/mmu_internal.h |  2 --
 arch/x86/kvm/mmu/paging_tmpl.h  |  7 -------
 arch/x86/kvm/mmu/spte.c         |  2 --
 4 files changed, 28 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c8961f45e3b1..cb70958eeaf9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2768,12 +2768,9 @@ int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn)
 	LIST_HEAD(invalid_list);
 	int r;
 
-	pgprintk("%s: looking for gfn %llx\n", __func__, gfn);
 	r = 0;
 	write_lock(&kvm->mmu_lock);
 	for_each_gfn_valid_sp_with_gptes(kvm, sp, gfn) {
-		pgprintk("%s: gfn %llx role %x\n", __func__, gfn,
-			 sp->role.word);
 		r = 1;
 		kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
 	}
@@ -2931,9 +2928,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	bool prefetch = !fault || fault->prefetch;
 	bool write_fault = fault && fault->write;
 
-	pgprintk("%s: spte %llx write_fault %d gfn %llx\n", __func__,
-		 *sptep, write_fault, gfn);
-
 	if (unlikely(is_noslot_pfn(pfn))) {
 		vcpu->stat.pf_mmio_spte_created++;
 		mark_mmio_spte(vcpu, sptep, gfn, pte_access);
@@ -2953,8 +2947,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 			drop_parent_pte(child, sptep);
 			flush = true;
 		} else if (pfn != spte_to_pfn(*sptep)) {
-			pgprintk("hfn old %llx new %llx\n",
-				 spte_to_pfn(*sptep), pfn);
 			drop_spte(vcpu->kvm, sptep);
 			flush = true;
 		} else
@@ -2979,8 +2971,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	if (flush)
 		kvm_flush_remote_tlbs_gfn(vcpu->kvm, gfn, level);
 
-	pgprintk("%s: setting spte %llx\n", __func__, *sptep);
-
 	if (!was_rmapped) {
 		WARN_ON_ONCE(ret == RET_PF_SPURIOUS);
 		rmap_add(vcpu, slot, sptep, gfn, pte_access);
@@ -4436,8 +4426,6 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 static int nonpaging_page_fault(struct kvm_vcpu *vcpu,
 				struct kvm_page_fault *fault)
 {
-	pgprintk("%s: gva %lx error %x\n", __func__, fault->addr, fault->error_code);
-
 	/* This path builds a PAE pagetable, we can map 2mb pages at maximum. */
 	fault->max_level = PG_LEVEL_2M;
 	return direct_page_fault(vcpu, fault);
@@ -5627,9 +5615,6 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
 {
 	unsigned offset, pte_size, misaligned;
 
-	pgprintk("misaligned: gpa %llx bytes %d role %x\n",
-		 gpa, bytes, sp->role.word);
-
 	offset = offset_in_page(gpa);
 	pte_size = sp->role.has_4_byte_gpte ? 4 : 8;
 
@@ -5695,8 +5680,6 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	if (!READ_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
 		return;
 
-	pgprintk("%s: gpa %llx bytes %d\n", __func__, gpa, bytes);
-
 	write_lock(&vcpu->kvm->mmu_lock);
 
 	gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, &bytes);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index d39af5639ce9..4f1e4b327f40 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -11,11 +11,9 @@
 #ifdef MMU_DEBUG
 extern bool dbg;
 
-#define pgprintk(x...) do { if (dbg) printk(x); } while (0)
 #define rmap_printk(fmt, args...) do { if (dbg) printk("%s: " fmt, __func__, ## args); } while (0)
 #define MMU_WARN_ON(x) WARN_ON(x)
 #else
-#define pgprintk(x...) do { } while (0)
 #define rmap_printk(x...) do { } while (0)
 #define MMU_WARN_ON(x) do { } while (0)
 #endif
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 0662e0278e70..7a97f769a7cb 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -456,9 +456,6 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 		goto retry_walk;
 	}
 
-	pgprintk("%s: pte %llx pte_access %x pt_access %x\n",
-		 __func__, (u64)pte, walker->pte_access,
-		 walker->pt_access[walker->level - 1]);
 	return 1;
 
 error:
@@ -529,8 +526,6 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte))
 		return false;
 
-	pgprintk("%s: gpte %llx spte %p\n", __func__, (u64)gpte, spte);
-
 	gfn = gpte_to_gfn(gpte);
 	pte_access = sp->role.access & FNAME(gpte_access)(gpte);
 	FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);
@@ -758,7 +753,6 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	struct guest_walker walker;
 	int r;
 
-	pgprintk("%s: addr %lx err %x\n", __func__, fault->addr, fault->error_code);
 	WARN_ON_ONCE(fault->is_tdp);
 
 	/*
@@ -773,7 +767,6 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 * The page is not mapped by the guest.  Let the guest handle it.
 	 */
 	if (!r) {
-		pgprintk("%s: guest page fault\n", __func__);
 		if (!fault->prefetch)
 			kvm_inject_emulated_page_fault(vcpu, &walker.fault);
 
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index cf2c6426a6fc..438a86bda9f3 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -221,8 +221,6 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
	 * shadow pages and unsync'ing pages is not allowed.
	 */
 	if (mmu_try_to_unsync_pages(vcpu->kvm, slot, gfn, can_unsync, prefetch)) {
-		pgprintk("%s: found shadow page for %llx, marking ro\n",
-			 __func__, gfn);
 		wrprot = true;
 		pte_access &= ~ACC_WRITE_MASK;
 		spte &= ~(PT_WRITABLE_MASK | shadow_mmu_writable_mask);
-- 
2.40.1.606.ga4b1b128d6-goog
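For illustration, a minimal userspace sketch (not code from this series) of
why a print macro that compiles away behind a debug knob bitrots: when the
guard is off, the compiler never type-checks the arguments against the
format string, so stale callers go unnoticed until someone flips the knob
back on.

#include <stdio.h>

#ifdef MMU_DEBUG
#define pgprintk_sketch(fmt, ...) printf(fmt, ##__VA_ARGS__)
#else
#define pgprintk_sketch(fmt, ...) do { } while (0)
#endif

int main(void)
{
	unsigned long long spte = 0x7f0000000077ULL;

	/*
	 * Wrong format specifier for a 64-bit value; this compiles
	 * cleanly whenever MMU_DEBUG is undefined because the entire
	 * call expands to an empty statement.
	 */
	pgprintk_sketch("spte %x\n", spte);
	return 0;
}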
From nobody Tue Feb 10 06:05:04 2026
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang, David Matlack, Jim Mattson
Date: Thu, 11 May 2023 16:59:10 -0700
Subject: [PATCH 2/9] KVM: x86/mmu: Delete rmap_printk() and all its usage
Message-ID: <20230511235917.639770-3-seanjc@google.com>
In-Reply-To: <20230511235917.639770-1-seanjc@google.com>
References: <20230511235917.639770-1-seanjc@google.com>

Delete rmap_printk() so that MMU_WARN_ON() and MMU_DEBUG can be morphed
into something that can be regularly enabled for debug kernels.

The information provided by rmap_printk() isn't all that useful now that
the rmap and unsync code is mature, as the prints are simultaneously too
verbose (_lots_ of messages) and yet not verbose enough to be helpful
for debug (most instances print just the SPTE pointer/value, which is
rarely sufficient to root cause anything but trivial bugs).

Alternatively, rmap_printk() could be reworked into tracepoints, but
it's not clear there is a real need, as rmap bugs rarely escape initial
development, and when bugs do escape to production, they are often edge
cases and/or reside in code that isn't directly related to the rmaps.
In other words, the problems with rmap_printk() being unhelpful also
apply to tracepoints.  And deleting rmap_printk() doesn't preclude
adding tracepoints in the future.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c          | 12 ------------
 arch/x86/kvm/mmu/mmu_internal.h |  2 --
 2 files changed, 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index cb70958eeaf9..f6918c0bb82d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -938,10 +938,8 @@ static int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
 	int count = 0;
 
 	if (!rmap_head->val) {
-		rmap_printk("%p %llx 0->1\n", spte, *spte);
 		rmap_head->val = (unsigned long)spte;
 	} else if (!(rmap_head->val & 1)) {
-		rmap_printk("%p %llx 1->many\n", spte, *spte);
 		desc = kvm_mmu_memory_cache_alloc(cache);
 		desc->sptes[0] = (u64 *)rmap_head->val;
 		desc->sptes[1] = spte;
@@ -950,7 +948,6 @@ static int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
 		rmap_head->val = (unsigned long)desc | 1;
 		++count;
 	} else {
-		rmap_printk("%p %llx many->many\n", spte, *spte);
 		desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
 		count = desc->tail_count + desc->spte_count;
 
@@ -1015,14 +1012,12 @@ static void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head)
 		pr_err("%s: %p 0->BUG\n", __func__, spte);
 		BUG();
 	} else if (!(rmap_head->val & 1)) {
-		rmap_printk("%p 1->0\n", spte);
 		if ((u64 *)rmap_head->val != spte) {
 			pr_err("%s: %p 1->BUG\n", __func__, spte);
 			BUG();
 		}
 		rmap_head->val = 0;
 	} else {
-		rmap_printk("%p many->many\n", spte);
 		desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
 		while (desc) {
 			for (i = 0; i < desc->spte_count; ++i) {
@@ -1238,8 +1233,6 @@ static bool spte_write_protect(u64 *sptep, bool pt_protect)
 	    !(pt_protect && is_mmu_writable_spte(spte)))
 		return false;
 
-	rmap_printk("spte %p %llx\n", sptep, *sptep);
-
 	if (pt_protect)
 		spte &= ~shadow_mmu_writable_mask;
 	spte = spte & ~PT_WRITABLE_MASK;
@@ -1264,8 +1257,6 @@ static bool spte_clear_dirty(u64 *sptep)
 {
 	u64 spte = *sptep;
 
-	rmap_printk("spte %p %llx\n", sptep, *sptep);
-
 	MMU_WARN_ON(!spte_ad_enabled(spte));
 	spte &= ~shadow_dirty_mask;
 	return mmu_spte_update(sptep, spte);
@@ -1477,9 +1468,6 @@ static bool kvm_set_pte_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 
 restart:
 	for_each_rmap_spte(rmap_head, &iter, sptep) {
-		rmap_printk("spte %p %llx gfn %llx (%d)\n",
-			    sptep, *sptep, gfn, level);
-
 		need_flush = true;
 
 		if (pte_write(pte)) {
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 4f1e4b327f40..9c9dd9340c63 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -11,10 +11,8 @@
 #ifdef MMU_DEBUG
 extern bool dbg;
 
-#define rmap_printk(fmt, args...) do { if (dbg) printk("%s: " fmt, __func__, ## args); } while (0)
 #define MMU_WARN_ON(x) WARN_ON(x)
 #else
-#define rmap_printk(x...) do { } while (0)
 #define MMU_WARN_ON(x) do { } while (0)
 #endif
 
-- 
2.40.1.606.ga4b1b128d6-goog
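As background for the "0->1", "1->many" and "many->many" strings in the
deleted prints: the rmap head is a tagged pointer, and those transitions
describe its three states.  A simplified sketch of the encoding (abridged
from the mmu.c code above; helper names are illustrative only):

#include <linux/types.h>

struct kvm_rmap_head {
	unsigned long val;	/* 0, a lone u64 *sptep, or desc | 1 */
};

/* Bit 0 tags val as a pointer to a pte_list_desc ("many" state). */
static inline bool rmap_is_many(const struct kvm_rmap_head *head)
{
	return head->val & 1;
}

/* With bit 0 clear and val non-zero, val is the single SPTE pointer. */
static inline u64 *rmap_single_spte(const struct kvm_rmap_head *head)
{
	return rmap_is_many(head) ? NULL : (u64 *)head->val;
}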
From nobody Tue Feb 10 06:05:04 2026
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang, David Matlack, Jim Mattson
Date: Thu, 11 May 2023 16:59:11 -0700
Subject: [PATCH 3/9] KVM: x86/mmu: Delete the "dbg" module param
Message-ID: <20230511235917.639770-4-seanjc@google.com>
In-Reply-To: <20230511235917.639770-1-seanjc@google.com>
References: <20230511235917.639770-1-seanjc@google.com>

Delete KVM's "dbg" module param now that its usage in KVM is gone (it
used to guard pgprintk() and rmap_printk()).

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c          | 5 -----
 arch/x86/kvm/mmu/mmu_internal.h | 2 --
 2 files changed, 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f6918c0bb82d..2b65a62fb953 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -112,11 +112,6 @@ static int max_huge_page_level __read_mostly;
 static int tdp_root_level __read_mostly;
 static int max_tdp_level __read_mostly;
 
-#ifdef MMU_DEBUG
-bool dbg = 0;
-module_param(dbg, bool, 0644);
-#endif
-
 #define PTE_PREFETCH_NUM 8
 
 #include
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 9c9dd9340c63..9ea80e4d463c 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -9,8 +9,6 @@
 #undef MMU_DEBUG
 
 #ifdef MMU_DEBUG
-extern bool dbg;
-
 #define MMU_WARN_ON(x) WARN_ON(x)
 #else
 #define MMU_WARN_ON(x) do { } while (0)
-- 
2.40.1.606.ga4b1b128d6-goog
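For readers unfamiliar with the knob being removed: "dbg" was a standard
module parameter, i.e. roughly the pattern below (a generic sketch, not
the deleted lines verbatim), which surfaces a writable bool under
/sys/module/<module>/parameters/ at runtime.

#include <linux/module.h>
#include <linux/moduleparam.h>

static bool dbg;
/* 0644: root-writable, world-readable sysfs knob. */
module_param(dbg, bool, 0644);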
From nobody Tue Feb 10 06:05:04 2026
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang, David Matlack, Jim Mattson
Date: Thu, 11 May 2023 16:59:12 -0700
Subject: [PATCH 4/9] KVM: x86/mmu: Rename MMU_WARN_ON() to KVM_MMU_WARN_ON()
Message-ID: <20230511235917.639770-5-seanjc@google.com>
In-Reply-To: <20230511235917.639770-1-seanjc@google.com>
References: <20230511235917.639770-1-seanjc@google.com>

Rename MMU_WARN_ON() to make it super obvious that the assertions are
all about KVM's MMU, not the primary MMU.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c          |  4 ++--
 arch/x86/kvm/mmu/mmu_internal.h |  4 ++--
 arch/x86/kvm/mmu/spte.h         |  8 ++++----
 arch/x86/kvm/mmu/tdp_mmu.c      |  8 ++++----
 4 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2b65a62fb953..240272b10ceb 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1252,7 +1252,7 @@ static bool spte_clear_dirty(u64 *sptep)
 {
 	u64 spte = *sptep;
 
-	MMU_WARN_ON(!spte_ad_enabled(spte));
+	KVM_MMU_WARN_ON(!spte_ad_enabled(spte));
 	spte &= ~shadow_dirty_mask;
 	return mmu_spte_update(sptep, spte);
 }
@@ -1728,7 +1728,7 @@ static void kvm_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 
 static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
 {
-	MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
+	KVM_MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
 	hlist_del(&sp->hash_link);
 	list_del(&sp->link);
 	free_page((unsigned long)sp->spt);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 9ea80e4d463c..bb1649669bc9 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -9,9 +9,9 @@
 #undef MMU_DEBUG
 
 #ifdef MMU_DEBUG
-#define MMU_WARN_ON(x) WARN_ON(x)
+#define KVM_MMU_WARN_ON(x) WARN_ON(x)
 #else
-#define MMU_WARN_ON(x) do { } while (0)
+#define KVM_MMU_WARN_ON(x) do { } while (0)
 #endif
 
 /* Page table builder macros common to shadow (host) PTEs and guest PTEs.
 */
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 1279db2eab44..83e6614f3720 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -265,13 +265,13 @@ static inline bool sp_ad_disabled(struct kvm_mmu_page *sp)
 
 static inline bool spte_ad_enabled(u64 spte)
 {
-	MMU_WARN_ON(!is_shadow_present_pte(spte));
+	KVM_MMU_WARN_ON(!is_shadow_present_pte(spte));
 	return (spte & SPTE_TDP_AD_MASK) != SPTE_TDP_AD_DISABLED;
 }
 
 static inline bool spte_ad_need_write_protect(u64 spte)
 {
-	MMU_WARN_ON(!is_shadow_present_pte(spte));
+	KVM_MMU_WARN_ON(!is_shadow_present_pte(spte));
 	/*
 	 * This is benign for non-TDP SPTEs as SPTE_TDP_AD_ENABLED is '0',
 	 * and non-TDP SPTEs will never set these bits.  Optimize for 64-bit
@@ -282,13 +282,13 @@ static inline bool spte_ad_need_write_protect(u64 spte)
 
 static inline u64 spte_shadow_accessed_mask(u64 spte)
 {
-	MMU_WARN_ON(!is_shadow_present_pte(spte));
+	KVM_MMU_WARN_ON(!is_shadow_present_pte(spte));
 	return spte_ad_enabled(spte) ? shadow_accessed_mask : 0;
 }
 
 static inline u64 spte_shadow_dirty_mask(u64 spte)
 {
-	MMU_WARN_ON(!is_shadow_present_pte(spte));
+	KVM_MMU_WARN_ON(!is_shadow_present_pte(spte));
 	return spte_ad_enabled(spte) ? shadow_dirty_mask : 0;
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 08340219c35a..6ef44d60ba2b 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1545,8 +1545,8 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 		if (!is_shadow_present_pte(iter.old_spte))
 			continue;
 
-		MMU_WARN_ON(kvm_ad_enabled() &&
-			    spte_ad_need_write_protect(iter.old_spte));
+		KVM_MMU_WARN_ON(kvm_ad_enabled() &&
+				spte_ad_need_write_protect(iter.old_spte));
 
 		if (!(iter.old_spte & dbit))
 			continue;
@@ -1604,8 +1604,8 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
 		if (!mask)
 			break;
 
-		MMU_WARN_ON(kvm_ad_enabled() &&
-			    spte_ad_need_write_protect(iter.old_spte));
+		KVM_MMU_WARN_ON(kvm_ad_enabled() &&
+				spte_ad_need_write_protect(iter.old_spte));
 
 		if (iter.level > PG_LEVEL_4K ||
 		    !(mask & (1UL << (iter.gfn - gfn))))
-- 
2.40.1.606.ga4b1b128d6-goog
From nobody Tue Feb 10 06:05:04 2026
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang, David Matlack, Jim Mattson
Date: Thu, 11 May 2023 16:59:13 -0700
Subject: [PATCH 5/9] KVM: x86/mmu: Convert "runtime" WARN_ON() assertions to WARN_ON_ONCE()
Message-ID: <20230511235917.639770-6-seanjc@google.com>
In-Reply-To: <20230511235917.639770-1-seanjc@google.com>
References: <20230511235917.639770-1-seanjc@google.com>

Convert all "runtime" assertions, i.e. assertions that can be triggered
while running vCPUs, from WARN_ON() to WARN_ON_ONCE().  Every WARN in the
MMU that is tied to running vCPUs, i.e. not contained to loading and
initializing KVM, is likely to fire _a lot_ when it does trigger.  E.g. if
KVM ends up with a bug that causes a root to be invalidated before the
page fault handler is invoked, pretty much _every_ page fault VM-Exit
triggers the WARN.

If a WARN is triggered frequently, the resulting spam usually causes a lot
of damage of its own, e.g. consumes resources to log the WARN and pollutes
the kernel log, often to the point where other useful information can be
lost.  In many cases, the damage caused by the spam is actually worse than
the bug itself, e.g. KVM can almost always recover from an unexpectedly
invalid root.

On the flip side, warning every time is rarely helpful for debug and
triage, i.e. a single splat is usually sufficient to point a debugger in
the right direction, and automated testing, e.g. syzkaller, typically runs
with panic_on_warn=1, i.e. will never get past the first WARN anyway.

Lastly, when an assertion fails multiple times, the stack traces in KVM
are almost always identical, i.e. the full splat only needs to be captured
once.
And _if_ there is value in capturing information about the failed
assert, a ratelimited printk() is sufficient and less likely to rack up a
large amount of collateral damage.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c          | 48 ++++++++++++++++-----------------
 arch/x86/kvm/mmu/mmu_internal.h |  2 +-
 arch/x86/kvm/mmu/page_track.c   | 16 +++++------
 arch/x86/kvm/mmu/paging_tmpl.h  |  4 +--
 arch/x86/kvm/mmu/spte.c         |  4 +--
 arch/x86/kvm/mmu/tdp_iter.c     |  4 +--
 arch/x86/kvm/mmu/tdp_mmu.c      | 20 +++++++-------
 7 files changed, 49 insertions(+), 49 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 240272b10ceb..4731d2bf5af6 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -482,7 +482,7 @@ static u64 __get_spte_lockless(u64 *sptep)
  */
 static void mmu_spte_set(u64 *sptep, u64 new_spte)
 {
-	WARN_ON(is_shadow_present_pte(*sptep));
+	WARN_ON_ONCE(is_shadow_present_pte(*sptep));
 	__set_spte(sptep, new_spte);
 }
 
@@ -494,7 +494,7 @@ static u64 mmu_spte_update_no_track(u64 *sptep, u64 new_spte)
 {
 	u64 old_spte = *sptep;
 
-	WARN_ON(!is_shadow_present_pte(new_spte));
+	WARN_ON_ONCE(!is_shadow_present_pte(new_spte));
 	check_spte_writable_invariants(new_spte);
 
 	if (!is_shadow_present_pte(old_spte)) {
@@ -507,7 +507,7 @@ static u64 mmu_spte_update_no_track(u64 *sptep, u64 new_spte)
 	else
 		old_spte = __update_clear_spte_slow(sptep, new_spte);
 
-	WARN_ON(spte_to_pfn(old_spte) != spte_to_pfn(new_spte));
+	WARN_ON_ONCE(spte_to_pfn(old_spte) != spte_to_pfn(new_spte));
 
 	return old_spte;
 }
@@ -589,7 +589,7 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 	 * by a refcounted page, the refcount is elevated.
 	 */
 	page = kvm_pfn_to_refcounted_page(pfn);
-	WARN_ON(page && !page_count(page));
+	WARN_ON_ONCE(page && !page_count(page));
 
 	if (is_accessed_spte(old_spte))
 		kvm_set_pfn_accessed(pfn);
@@ -804,7 +804,7 @@ static void update_gfn_disallow_lpage_count(const struct kvm_memory_slot *slot,
 	for (i = PG_LEVEL_2M; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
 		linfo = lpage_info_slot(gfn, slot, i);
 		linfo->disallow_lpage += count;
-		WARN_ON(linfo->disallow_lpage < 0);
+		WARN_ON_ONCE(linfo->disallow_lpage < 0);
 	}
 }
 
@@ -1199,7 +1199,7 @@ static void drop_large_spte(struct kvm *kvm, u64 *sptep, bool flush)
 	struct kvm_mmu_page *sp;
 
 	sp = sptep_to_sp(sptep);
-	WARN_ON(sp->role.level == PG_LEVEL_4K);
+	WARN_ON_ONCE(sp->role.level == PG_LEVEL_4K);
 
 	drop_spte(kvm, sptep);
 
@@ -1458,7 +1458,7 @@ static bool kvm_set_pte_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 	u64 new_spte;
 	kvm_pfn_t new_pfn;
 
-	WARN_ON(pte_huge(pte));
+	WARN_ON_ONCE(pte_huge(pte));
 	new_pfn = pte_pfn(pte);
 
 restart:
@@ -1816,7 +1816,7 @@ static int mmu_pages_add(struct kvm_mmu_pages *pvec, struct kvm_mmu_page *sp,
 static inline void clear_unsync_child_bit(struct kvm_mmu_page *sp, int idx)
 {
 	--sp->unsync_children;
-	WARN_ON((int)sp->unsync_children < 0);
+	WARN_ON_ONCE((int)sp->unsync_children < 0);
 	__clear_bit(idx, sp->unsync_child_bitmap);
 }
 
@@ -1874,7 +1874,7 @@ static int mmu_unsync_walk(struct kvm_mmu_page *sp,
 
 static void kvm_unlink_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
-	WARN_ON(!sp->unsync);
+	WARN_ON_ONCE(!sp->unsync);
 	trace_kvm_mmu_sync_page(sp);
 	sp->unsync = 0;
 	--kvm->stat.mmu_unsync;
@@ -2049,11 +2049,11 @@ static int mmu_pages_first(struct kvm_mmu_pages *pvec,
 	if (pvec->nr == 0)
 		return 0;
 
-	WARN_ON(pvec->page[0].idx != INVALID_INDEX);
+	WARN_ON_ONCE(pvec->page[0].idx != INVALID_INDEX);
 
 	sp = pvec->page[0].sp;
 	level = sp->role.level;
-	WARN_ON(level == PG_LEVEL_4K);
+	WARN_ON_ONCE(level == PG_LEVEL_4K);
 
 	parents->parent[level-2] = sp;
 
@@ -2075,7 +2075,7 @@ static void mmu_pages_clear_parents(struct mmu_page_path *parents)
 		if (!sp)
 			return;
 
-		WARN_ON(idx == INVALID_INDEX);
+		WARN_ON_ONCE(idx == INVALID_INDEX);
 		clear_unsync_child_bit(sp, idx);
 		level++;
 	} while (!sp->unsync_children);
@@ -2196,7 +2196,7 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm *kvm,
 			if (ret < 0)
 				break;
 
-			WARN_ON(!list_empty(&invalid_list));
+			WARN_ON_ONCE(!list_empty(&invalid_list));
 			if (ret > 0)
 				kvm_flush_remote_tlbs(kvm);
 		}
@@ -2651,7 +2651,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 	kvm_flush_remote_tlbs(kvm);
 
 	list_for_each_entry_safe(sp, nsp, invalid_list, link) {
-		WARN_ON(!sp->role.invalid || sp->root_count);
+		WARN_ON_ONCE(!sp->role.invalid || sp->root_count);
 		kvm_mmu_free_shadow_page(sp);
 	}
 }
@@ -2846,7 +2846,7 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
 			continue;
 		}
 
-		WARN_ON(sp->role.level != PG_LEVEL_4K);
+		WARN_ON_ONCE(sp->role.level != PG_LEVEL_4K);
 		kvm_unsync_page(kvm, sp);
 	}
 	if (locked)
@@ -2999,7 +2999,7 @@ static void __direct_pte_prefetch(struct kvm_vcpu *vcpu,
 	u64 *spte, *start = NULL;
 	int i;
 
-	WARN_ON(!sp->role.direct);
+	WARN_ON_ONCE(!sp->role.direct);
 
 	i = spte_index(sptep) & ~(PTE_PREFETCH_NUM - 1);
 	spte = sp->spt + i;
@@ -3545,7 +3545,7 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
 	 * SPTE to ensure any non-PA bits are dropped.
 	 */
 	sp = spte_to_child_sp(*root_hpa);
-	if (WARN_ON(!sp))
+	if (WARN_ON_ONCE(!sp))
 		return;
 
 	if (is_tdp_mmu_page(sp))
@@ -4160,7 +4160,7 @@ static int handle_mmio_page_fault(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 		return RET_PF_EMULATE;
 
 	reserved = get_mmio_spte(vcpu, addr, &spte);
-	if (WARN_ON(reserved))
+	if (WARN_ON_ONCE(reserved))
 		return -EINVAL;
 
 	if (is_mmio_spte(spte)) {
@@ -5495,9 +5495,9 @@ void kvm_mmu_unload(struct kvm_vcpu *vcpu)
 	struct kvm *kvm = vcpu->kvm;
 
 	kvm_mmu_free_roots(kvm, &vcpu->arch.root_mmu, KVM_MMU_ROOTS_ALL);
-	WARN_ON(VALID_PAGE(vcpu->arch.root_mmu.root.hpa));
+	WARN_ON_ONCE(VALID_PAGE(vcpu->arch.root_mmu.root.hpa));
 	kvm_mmu_free_roots(kvm, &vcpu->arch.guest_mmu, KVM_MMU_ROOTS_ALL);
-	WARN_ON(VALID_PAGE(vcpu->arch.guest_mmu.root.hpa));
+	WARN_ON_ONCE(VALID_PAGE(vcpu->arch.guest_mmu.root.hpa));
 	vcpu_clear_mmio_info(vcpu, MMIO_GVA_ANY);
 }
 
@@ -5701,7 +5701,7 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 	int r, emulation_type = EMULTYPE_PF;
 	bool direct = vcpu->arch.mmu->root_role.direct;
 
-	if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root.hpa)))
+	if (WARN_ON_ONCE(!VALID_PAGE(vcpu->arch.mmu->root.hpa)))
 		return RET_PF_RETRY;
 
 	r = RET_PF_INVALID;
@@ -6050,7 +6050,7 @@ static void kvm_zap_obsolete_pages(struct kvm *kvm)
 		 * pages.  Skip the bogus page, otherwise we'll get stuck in an
 		 * infinite loop if the page gets put back on the list (again).
		 */
-		if (WARN_ON(sp->role.invalid))
+		if (WARN_ON_ONCE(sp->role.invalid))
			continue;
 
		/*
@@ -6692,7 +6692,7 @@ void kvm_mmu_zap_all(struct kvm *kvm)
 	write_lock(&kvm->mmu_lock);
restart:
 	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
-		if (WARN_ON(sp->role.invalid))
+		if (WARN_ON_ONCE(sp->role.invalid))
			continue;
		if (__kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list, &ign))
			goto restart;
@@ -6710,7 +6710,7 @@ void kvm_mmu_zap_all(struct kvm *kvm)
 
 void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
 {
-	WARN_ON(gen & KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS);
+	WARN_ON_ONCE(gen & KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS);
 
 	gen &= MMIO_SPTE_GEN_MASK;
 
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index bb1649669bc9..cfe925fefa68 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -9,7 +9,7 @@
 #undef MMU_DEBUG
 
 #ifdef MMU_DEBUG
-#define KVM_MMU_WARN_ON(x) WARN_ON(x)
+#define KVM_MMU_WARN_ON(x) WARN_ON_ONCE(x)
 #else
 #define KVM_MMU_WARN_ON(x) do { } while (0)
 #endif
diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
index 0a2ac438d647..fd16918b3a7a 100644
--- a/arch/x86/kvm/mmu/page_track.c
+++ b/arch/x86/kvm/mmu/page_track.c
@@ -94,7 +94,7 @@ static void update_gfn_track(struct kvm_memory_slot *slot, gfn_t gfn,
 
 	val = slot->arch.gfn_track[mode][index];
 
-	if (WARN_ON(val + count < 0 || val + count > USHRT_MAX))
+	if (WARN_ON_ONCE(val + count < 0 || val + count > USHRT_MAX))
 		return;
 
 	slot->arch.gfn_track[mode][index] += count;
@@ -117,11 +117,11 @@ void kvm_slot_page_track_add_page(struct kvm *kvm,
				  enum kvm_page_track_mode mode)
 {
 
-	if (WARN_ON(!page_track_mode_is_valid(mode)))
+	if (WARN_ON_ONCE(!page_track_mode_is_valid(mode)))
 		return;
 
-	if (WARN_ON(mode == KVM_PAGE_TRACK_WRITE &&
-		    !kvm_page_track_write_tracking_enabled(kvm)))
+	if (WARN_ON_ONCE(mode == KVM_PAGE_TRACK_WRITE &&
+			 !kvm_page_track_write_tracking_enabled(kvm)))
 		return;
 
 	update_gfn_track(slot, gfn, mode, 1);
@@ -155,11 +155,11 @@ void kvm_slot_page_track_remove_page(struct kvm *kvm,
				     struct kvm_memory_slot *slot, gfn_t gfn,
				     enum kvm_page_track_mode mode)
 {
-	if (WARN_ON(!page_track_mode_is_valid(mode)))
+	if (WARN_ON_ONCE(!page_track_mode_is_valid(mode)))
 		return;
 
-	if (WARN_ON(mode == KVM_PAGE_TRACK_WRITE &&
-		    !kvm_page_track_write_tracking_enabled(kvm)))
+	if (WARN_ON_ONCE(mode == KVM_PAGE_TRACK_WRITE &&
+			 !kvm_page_track_write_tracking_enabled(kvm)))
 		return;
 
 	update_gfn_track(slot, gfn, mode, -1);
@@ -181,7 +181,7 @@ bool kvm_slot_page_track_is_active(struct kvm *kvm,
 {
 	int index;
 
-	if (WARN_ON(!page_track_mode_is_valid(mode)))
+	if (WARN_ON_ONCE(!page_track_mode_is_valid(mode)))
 		return false;
 
 	if (!slot)
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 7a97f769a7cb..a3fc7c1a7f8d 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -633,7 +633,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 	if (FNAME(gpte_changed)(vcpu, gw, top_level))
 		goto out_gpte_changed;
 
-	if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root.hpa)))
+	if (WARN_ON_ONCE(!VALID_PAGE(vcpu->arch.mmu->root.hpa)))
 		goto out_gpte_changed;
 
 	for_each_shadow_entry(vcpu, fault->addr, it) {
@@ -830,7 +830,7 @@ static gpa_t FNAME(get_level1_sp_gpa)(struct kvm_mmu_page *sp)
 {
 	int offset = 0;
 
-	WARN_ON(sp->role.level != PG_LEVEL_4K);
+	WARN_ON_ONCE(sp->role.level != PG_LEVEL_4K);
 
 	if (PTTYPE == 32)
 		offset = sp->role.quadrant << SPTE_LEVEL_BITS;
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 438a86bda9f3..4a599130e9c9 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -61,7 +61,7 @@ static u64 generation_mmio_spte_mask(u64 gen)
 {
 	u64 mask;
 
-	WARN_ON(gen & ~MMIO_SPTE_GEN_MASK);
+	WARN_ON_ONCE(gen & ~MMIO_SPTE_GEN_MASK);
 
 	mask = (gen << MMIO_SPTE_GEN_LOW_SHIFT) & MMIO_SPTE_GEN_LOW_MASK;
 	mask |= (gen << MMIO_SPTE_GEN_HIGH_SHIFT) & MMIO_SPTE_GEN_HIGH_MASK;
@@ -240,7 +240,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 
 	if ((spte & PT_WRITABLE_MASK) && kvm_slot_dirty_track_enabled(slot)) {
 		/* Enforced by kvm_mmu_hugepage_adjust. */
-		WARN_ON(level > PG_LEVEL_4K);
+		WARN_ON_ONCE(level > PG_LEVEL_4K);
 		mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);
 	}
 
diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index d2eb0d4f8710..5bb09f8d9fc6 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -41,8 +41,8 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root,
 {
 	int root_level = root->role.level;
 
-	WARN_ON(root_level < 1);
-	WARN_ON(root_level > PT64_ROOT_MAX_LEVEL);
+	WARN_ON_ONCE(root_level < 1);
+	WARN_ON_ONCE(root_level > PT64_ROOT_MAX_LEVEL);
 
 	iter->next_last_level_gfn = next_last_level_gfn;
 	iter->root_level = root_level;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 6ef44d60ba2b..799479e84f8b 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -475,9 +475,9 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	bool is_leaf = is_present && is_last_spte(new_spte, level);
 	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
 
-	WARN_ON(level > PT64_ROOT_MAX_LEVEL);
-	WARN_ON(level < PG_LEVEL_4K);
-	WARN_ON(gfn & (KVM_PAGES_PER_HPAGE(level) - 1));
+	WARN_ON_ONCE(level > PT64_ROOT_MAX_LEVEL);
+	WARN_ON_ONCE(level < PG_LEVEL_4K);
+	WARN_ON_ONCE(gfn & (KVM_PAGES_PER_HPAGE(level) - 1));
 
 	/*
	 * If this warning were to trigger it would indicate that there was a
@@ -522,9 +522,9 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
		 * impact the guest since both the former and current SPTEs
		 * are nonpresent.
		 */
-		if (WARN_ON(!is_mmio_spte(old_spte) &&
-			    !is_mmio_spte(new_spte) &&
-			    !is_removed_spte(new_spte)))
+		if (WARN_ON_ONCE(!is_mmio_spte(old_spte) &&
+				 !is_mmio_spte(new_spte) &&
+				 !is_removed_spte(new_spte)))
			pr_err("Unexpected SPTE change! Nonpresent SPTEs\n"
			       "should not be replaced with another,\n"
			       "different nonpresent SPTE, unless one or both\n"
@@ -658,7 +658,7 @@ static u64 tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
	 * should be used. If operating under the MMU lock in write mode, the
	 * use of the removed SPTE should not be necessary.
	 */
-	WARN_ON(is_removed_spte(old_spte) || is_removed_spte(new_spte));
+	WARN_ON_ONCE(is_removed_spte(old_spte) || is_removed_spte(new_spte));
 
 	old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte, new_spte, level);
 
@@ -706,7 +706,7 @@ static inline bool __must_check tdp_mmu_iter_cond_resched(struct kvm *kvm,
							  struct tdp_iter *iter,
							  bool flush, bool shared)
 {
-	WARN_ON(iter->yielded);
+	WARN_ON_ONCE(iter->yielded);
 
	/* Ensure forward progress has been made before yielding.
	 */
 	if (iter->next_last_level_gfn == iter->yielded_gfn)
@@ -725,7 +725,7 @@ static inline bool __must_check tdp_mmu_iter_cond_resched(struct kvm *kvm,
 
 		rcu_read_lock();
 
-		WARN_ON(iter->gfn > iter->next_last_level_gfn);
+		WARN_ON_ONCE(iter->gfn > iter->next_last_level_gfn);
 
 		iter->yielded = true;
 	}
@@ -1238,7 +1238,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
 	u64 new_spte;
 
 	/* Huge pages aren't expected to be modified without first being zapped. */
-	WARN_ON(pte_huge(range->pte) || range->start + 1 != range->end);
+	WARN_ON_ONCE(pte_huge(range->pte) || range->start + 1 != range->end);
 
 	if (iter->level != PG_LEVEL_4K ||
 	    !is_shadow_present_pte(iter->old_spte))
-- 
2.40.1.606.ga4b1b128d6-goog
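To make the trade-off above concrete, a generic sketch (not code from the
patch) contrasting the two options the changelog weighs: a one-shot WARN
for the stack trace, plus an optional ratelimited printk when
per-occurrence data is still wanted.

#include <linux/bug.h>
#include <linux/printk.h>

static void sketch_check_root(bool root_valid, unsigned long root_hpa)
{
	/* One full splat, ever; later failures stay quiet. */
	WARN_ON_ONCE(!root_valid);

	/* If each failure's data matters, ratelimit rather than WARN. */
	if (!root_valid)
		pr_warn_ratelimited("stale root hpa: %lx\n", root_hpa);
}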
From nobody Tue Feb 10 06:05:04 2026
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang, David Matlack, Jim Mattson
Date: Thu, 11 May 2023 16:59:14 -0700
Subject: [PATCH 6/9] KVM: x86/mmu: Bug the VM if a vCPU ends up in long mode without PAE enabled
Message-ID: <20230511235917.639770-7-seanjc@google.com>
In-Reply-To: <20230511235917.639770-1-seanjc@google.com>
References: <20230511235917.639770-1-seanjc@google.com>

Promote the ASSERT(), which is quite dead code in KVM, into a KVM_BUG_ON()
for KVM's sanity check that CR4.PAE=1 if the vCPU is in long mode when
performing a walk of guest page tables.  The sanity check is quite cheap
since neither EFER nor CR4.PAE requires a VMREAD, especially relative to
the cost of walking the guest page tables.

More importantly, the sanity check would have prevented the true badness
fixed by commit 112e66017bff ("KVM: nVMX: add missing consistency checks
for CR0 and CR4").  The missed consistency check resulted in some versions
of KVM corrupting the on-stack guest_walker structure due to KVM thinking
there are 4/5 levels of page tables, but wiring up the MMU hooks to point
at the paging32 implementation, which only allocates space for two levels
of page tables in "struct guest_walker32".

Queue a page fault for injection if the assertion fails, as the sole
caller, FNAME(gva_to_gpa), assumes that walker.fault contains sane info on
a walk failure, i.e. avoid making the situation worse between the time the
assertion fails and when KVM kicks the vCPU out to userspace (because the
VM is bugged).  Move the check below the initialization of "pte_access" so
that the aforementioned to-be-injected page fault doesn't consume
uninitialized stack data.  The information _shouldn't_ reach the guest or
userspace, but there's zero downside to being paranoid in this case.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/paging_tmpl.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index a3fc7c1a7f8d..f297e9311dcd 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -338,7 +338,6 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 	}
 #endif
 	walker->max_level = walker->level;
-	ASSERT(!(is_long_mode(vcpu) && !is_pae(vcpu)));
 
 	/*
	 * FIXME: on Intel processors, loads of the PDPTE registers for PAE paging
@@ -348,6 +347,10 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 	nested_access = (have_ad ? PFERR_WRITE_MASK : 0) | PFERR_USER_MASK;
 
 	pte_access = ~0;
+
+	if (KVM_BUG_ON(is_long_mode(vcpu) && !is_pae(vcpu), vcpu->kvm))
+		goto error;
+
 	++walker->level;
 
 	do {
-- 
2.40.1.606.ga4b1b128d6-goog
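The invariant being promoted is architectural: IA-32e (long) mode can only
be active with CR4.PAE=1, so a vCPU claiming long mode without PAE means
KVM's own state is corrupt.  For reference, a simplified sketch of the
KVM_BUG_ON() contract (the real macro lives in include/linux/kvm_host.h;
details abridged here): warn once, mark the whole VM bugged so subsequent
ioctls fail, and hand the condition back so callers can bail out.

#define KVM_BUG_ON_SKETCH(cond, kvm)				\
({								\
	bool __ret = !!(cond);					\
								\
	/* Warn once, then kill only the offending VM. */	\
	if (WARN_ON_ONCE(__ret && !(kvm)->vm_bugged))		\
		kvm_vm_bugged(kvm);				\
	__ret;							\
})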

Replace MMU_DEBUG, which requires manually modifying KVM to enable the
macro, with a proper Kconfig, KVM_PROVE_MMU.  Now that pgprintk() and
rmap_printk() are gone, i.e. the macro guards only KVM_MMU_WARN_ON() and
won't flood the kernel logs, enabling the option for debug kernels is both
desirable and feasible.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/Kconfig            | 13 +++++++++++++
 arch/x86/kvm/mmu/mmu.c          |  4 ++--
 arch/x86/kvm/mmu/mmu_internal.h |  4 +---
 3 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index 8e578311ca9d..cccedb424324 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -139,6 +139,19 @@ config KVM_XEN
 
 	  If in doubt, say "N".
 
+config KVM_PROVE_MMU
+	bool "Prove KVM MMU correctness"
+	depends on DEBUG_KERNEL
+	depends on KVM
+	depends on EXPERT
+	help
+	  Enables runtime assertions in KVM's MMU that are too costly to enable
+	  in anything remotely resembling a production environment, e.g. this
+	  gates code that verifies a to-be-freed page table doesn't have any
+	  present SPTEs.
+
+	  If in doubt, say "N".
+
 config KVM_EXTERNAL_WRITE_TRACKING
 	bool
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4731d2bf5af6..d209d466d58f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1686,7 +1686,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	return young;
 }
 
-#ifdef MMU_DEBUG
+#ifdef CONFIG_KVM_PROVE_MMU
 static int is_empty_shadow_page(u64 *spt)
 {
 	u64 *pos;
@@ -1700,7 +1700,7 @@ static int is_empty_shadow_page(u64 *spt)
 	}
 	return 1;
 }
-#endif
+#endif /* CONFIG_KVM_PROVE_MMU */
 
 /*
  * This value is the sum of all of the kvm instances's
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index cfe925fefa68..40e74db6a7d5 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -6,9 +6,7 @@
 #include
 #include
 
-#undef MMU_DEBUG
-
-#ifdef MMU_DEBUG
+#ifdef CONFIG_KVM_PROVE_MMU
 #define KVM_MMU_WARN_ON(x) WARN_ON_ONCE(x)
 #else
 #define KVM_MMU_WARN_ON(x) do { } while (0)
-- 
2.40.1.606.ga4b1b128d6-goog
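
One property worth spelling out, since it is what makes the option
palatable only for debug kernels: with CONFIG_KVM_PROVE_MMU=n, the
do-while-0 stub means KVM_MMU_WARN_ON()'s argument is never evaluated, so
arbitrarily expensive checks cost nothing in production builds.  A
standalone sketch of that pattern, under the assumption that the Kconfig
symbol can be modeled as a plain -D define and that fprintf() stands in
for WARN_ON_ONCE():

	#include <stdio.h>

	#ifdef KVM_PROVE_MMU	/* models CONFIG_KVM_PROVE_MMU=y; build with -DKVM_PROVE_MMU */
	#define KVM_MMU_WARN_ON(x)					\
		do {							\
			if (x)						\
				fprintf(stderr, "MMU warn: %s\n", #x);	\
		} while (0)
	#else
	/* Stub: the argument is not evaluated at all, so the check is free. */
	#define KVM_MMU_WARN_ON(x) do { } while (0)
	#endif

	int count_present_sptes(void)
	{
		/* Stand-in for an expensive walk like is_empty_shadow_page(). */
		fprintf(stderr, "walking a page table...\n");
		return 0;
	}

	int main(void)
	{
		/* The walk message appears only when built with -DKVM_PROVE_MMU. */
		KVM_MMU_WARN_ON(count_present_sptes() != 0);
		return 0;
	}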
From nobody Tue Feb 10 06:05:04 2026
Reply-To: Sean Christopherson
Date: Thu, 11 May 2023 16:59:16 -0700
In-Reply-To: <20230511235917.639770-1-seanjc@google.com>
References: <20230511235917.639770-1-seanjc@google.com>
Message-ID: <20230511235917.639770-9-seanjc@google.com>
Subject: [PATCH 8/9] KVM: x86/mmu: Plumb "struct kvm" all the way to pte_list_remove()
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang, David Matlack, Jim Mattson

From: Mingwei Zhang

Plumb "struct kvm" all the way to pte_list_remove() to allow the usage of
KVM_BUG() and/or KVM_BUG_ON().  This will allow killing only the offending
VM instead of doing BUG() if the kernel is built with
CONFIG_BUG_ON_DATA_CORRUPTION=n, i.e. does NOT want to BUG() if KVM's data
structures (rmaps) appear to be corrupted.

Signed-off-by: Mingwei Zhang
[sean: tweak changelog]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 33 ++++++++++++++++++---------------
 1 file changed, 18 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d209d466d58f..8a8adeaa7dd7 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -962,7 +962,8 @@ static int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
 	return count;
 }
 
-static void pte_list_desc_remove_entry(struct kvm_rmap_head *rmap_head,
+static void pte_list_desc_remove_entry(struct kvm *kvm,
+				       struct kvm_rmap_head *rmap_head,
 				       struct pte_list_desc *desc, int i)
 {
 	struct pte_list_desc *head_desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
@@ -998,7 +999,8 @@ static void pte_list_desc_remove_entry(struct kvm_rmap_head *rmap_head,
 	mmu_free_pte_list_desc(head_desc);
 }
 
-static void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head)
+static void pte_list_remove(struct kvm *kvm, u64 *spte,
+			    struct kvm_rmap_head *rmap_head)
 {
 	struct pte_list_desc *desc;
 	int i;
@@ -1017,7 +1019,8 @@ static void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head)
 	while (desc) {
 		for (i = 0; i < desc->spte_count; ++i) {
 			if (desc->sptes[i] == spte) {
-				pte_list_desc_remove_entry(rmap_head, desc, i);
+				pte_list_desc_remove_entry(kvm, rmap_head,
+							   desc, i);
 				return;
 			}
 		}
@@ -1032,7 +1035,7 @@ static void kvm_zap_one_rmap_spte(struct kvm *kvm,
 				  struct kvm_rmap_head *rmap_head, u64 *sptep)
 {
 	mmu_spte_clear_track_bits(kvm, sptep);
-	pte_list_remove(sptep, rmap_head);
+	pte_list_remove(kvm, sptep, rmap_head);
 }
 
 /* Return true if at least one SPTE was zapped, false otherwise */
@@ -1107,7 +1110,7 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
 	slot = __gfn_to_memslot(slots, gfn);
 	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
 
-	pte_list_remove(spte, rmap_head);
+	pte_list_remove(kvm, spte, rmap_head);
 }
 
 /*
@@ -1751,16 +1754,16 @@ static void mmu_page_add_parent_pte(struct kvm_mmu_memory_cache *cache,
 	pte_list_add(cache, parent_pte, &sp->parent_ptes);
 }
 
-static void mmu_page_remove_parent_pte(struct kvm_mmu_page *sp,
+static void mmu_page_remove_parent_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 				       u64 *parent_pte)
 {
-	pte_list_remove(parent_pte, &sp->parent_ptes);
+	pte_list_remove(kvm, parent_pte, &sp->parent_ptes);
 }
 
-static void drop_parent_pte(struct kvm_mmu_page *sp,
+static void drop_parent_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 			    u64 *parent_pte)
 {
-	mmu_page_remove_parent_pte(sp, parent_pte);
+	mmu_page_remove_parent_pte(kvm, sp, parent_pte);
 	mmu_spte_clear_no_track(parent_pte);
 }
 
@@ -2475,7 +2478,7 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		if (child->role.access == direct_access)
 			return;
 
-		drop_parent_pte(child, sptep);
+		drop_parent_pte(vcpu->kvm, child, sptep);
 		kvm_flush_remote_tlbs_sptep(vcpu->kvm, sptep);
 	}
 }
@@ -2493,7 +2496,7 @@ static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 			drop_spte(kvm, spte);
 		} else {
 			child = spte_to_child_sp(pte);
-			drop_parent_pte(child, spte);
+			drop_parent_pte(kvm, child, spte);
 
 			/*
 			 * Recursively zap nested TDP SPs, parentless SPs are
@@ -2524,13 +2527,13 @@ static int kvm_mmu_page_unlink_children(struct kvm *kvm,
 	return zapped;
 }
 
-static void kvm_mmu_unlink_parents(struct kvm_mmu_page *sp)
+static void kvm_mmu_unlink_parents(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	u64 *sptep;
 	struct rmap_iterator iter;
 
 	while ((sptep = rmap_get_first(&sp->parent_ptes, &iter)))
-		drop_parent_pte(sp, sptep);
+		drop_parent_pte(kvm, sp, sptep);
 }
 
 static int mmu_zap_unsync_children(struct kvm *kvm,
@@ -2569,7 +2572,7 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm,
 	++kvm->stat.mmu_shadow_zapped;
 	*nr_zapped = mmu_zap_unsync_children(kvm, sp, invalid_list);
 	*nr_zapped += kvm_mmu_page_unlink_children(kvm, sp, invalid_list);
-	kvm_mmu_unlink_parents(sp);
+	kvm_mmu_unlink_parents(kvm, sp);
 
 	/* Zapping children means active_mmu_pages has become unstable. */
 	list_unstable = *nr_zapped;
@@ -2927,7 +2930,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 			u64 pte = *sptep;
 
 			child = spte_to_child_sp(pte);
-			drop_parent_pte(child, sptep);
+			drop_parent_pte(vcpu->kvm, child, sptep);
 			flush = true;
 		} else if (pfn != spte_to_pfn(*sptep)) {
 			drop_spte(vcpu->kvm, sptep);
-- 
2.40.1.606.ga4b1b128d6-goog
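
A sketch of why the extra parameter matters, with the call-chain shape
taken from the hunks above; the corruption check and all scaffolding below
are mocks for illustration, not KVM's real logic:

	#include <stdbool.h>
	#include <stdio.h>

	/*
	 * After this patch the chain reads (per the hunks above):
	 *   rmap_remove(kvm, spte)
	 *     -> pte_list_remove(kvm, spte, rmap_head)
	 *          -> pte_list_desc_remove_entry(kvm, rmap_head, desc, i)
	 * so the leaf helpers finally have a "struct kvm" in scope for the
	 * follow-up patch to kill just the offending VM.
	 */
	struct kvm { bool vm_bugged; };
	struct kvm_rmap_head { unsigned long val; };

	static void pte_list_remove(struct kvm *kvm, unsigned long *spte,
				    struct kvm_rmap_head *rmap_head)
	{
		(void)spte;
		if (!rmap_head->val) {
			/* With kvm in scope, corruption bugs one VM, not the host. */
			fprintf(stderr, "rmap corruption detected\n");
			kvm->vm_bugged = true;
			return;
		}
		/* ... normal removal would proceed here ... */
	}

	int main(void)
	{
		struct kvm vm = { 0 };
		struct kvm_rmap_head head = { 0 };	/* empty rmap = "corruption" */
		unsigned long spte = 0;

		pte_list_remove(&vm, &spte, &head);
		return vm.vm_bugged ? 1 : 0;
	}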
From nobody Tue Feb 10 06:05:04 2026
Reply-To: Sean Christopherson
Date: Thu, 11 May 2023 16:59:17 -0700
In-Reply-To: <20230511235917.639770-1-seanjc@google.com>
References: <20230511235917.639770-1-seanjc@google.com>
Message-ID: <20230511235917.639770-10-seanjc@google.com>
Subject: [PATCH 9/9] KVM: x86/mmu: BUG() in rmap helpers iff CONFIG_BUG_ON_DATA_CORRUPTION=y
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang, David Matlack, Jim Mattson

Introduce KVM_BUG_ON_DATA_CORRUPTION() and use it in the low-level rmap
helpers to convert the existing BUG()s to WARN_ON_ONCE() when the kernel
is built with CONFIG_BUG_ON_DATA_CORRUPTION=n, i.e. does NOT want to BUG()
on corruption of host kernel data structures.

Environments that don't have infrastructure to automatically capture crash
dumps, i.e. aren't likely to enable CONFIG_BUG_ON_DATA_CORRUPTION=y, are
typically better served overall by WARN-and-continue behavior (for the
kernel, the VM is dead regardless), as a BUG() while holding mmu_lock all
but guarantees the _best_ case scenario is a panic().

Make the BUG()s conditional instead of removing/replacing them entirely as
there's a non-zero chance (though by no means a guarantee) that the damage
isn't contained to the target VM, e.g. if no rmap is found for a SPTE then
KVM may be double-zapping the SPTE, i.e. has already freed the memory the
SPTE pointed at and thus KVM is reading/writing memory that KVM no longer
owns.

Link: https://lore.kernel.org/all/20221129191237.31447-1-mizhang@google.com
Suggested-by: Mingwei Zhang
Cc: David Matlack
Cc: Jim Mattson
Signed-off-by: Sean Christopherson
Reviewed-by: Mingwei Zhang
---
 arch/x86/kvm/mmu/mmu.c   | 21 ++++++++++-----------
 include/linux/kvm_host.h | 19 +++++++++++++++++++
 2 files changed, 29 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8a8adeaa7dd7..5ee1ee201441 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -974,7 +974,7 @@ static void pte_list_desc_remove_entry(struct kvm *kvm,
 	 * when adding an entry and the previous head is full, and heads are
 	 * removed (this flow) when they become empty.
 	 */
-	BUG_ON(j < 0);
+	KVM_BUG_ON_DATA_CORRUPTION(j < 0, kvm);
 
 	/*
 	 * Replace the to-be-freed SPTE with the last valid entry from the head
@@ -1005,14 +1005,13 @@ static void pte_list_remove(struct kvm *kvm, u64 *spte,
 	struct pte_list_desc *desc;
 	int i;
 
-	if (!rmap_head->val) {
-		pr_err("%s: %p 0->BUG\n", __func__, spte);
-		BUG();
-	} else if (!(rmap_head->val & 1)) {
-		if ((u64 *)rmap_head->val != spte) {
-			pr_err("%s: %p 1->BUG\n", __func__, spte);
-			BUG();
-		}
+	if (KVM_BUG_ON_DATA_CORRUPTION(!rmap_head->val, kvm))
+		return;
+
+	if (!(rmap_head->val & 1)) {
+		if (KVM_BUG_ON_DATA_CORRUPTION((u64 *)rmap_head->val != spte, kvm))
+			return;
+
 		rmap_head->val = 0;
 	} else {
 		desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
@@ -1026,8 +1025,8 @@ static void pte_list_remove(struct kvm *kvm, u64 *spte,
 			}
 			desc = desc->more;
 		}
-		pr_err("%s: %p many->many\n", __func__, spte);
-		BUG();
+
+		KVM_BUG_ON_DATA_CORRUPTION(true, kvm);
 	}
 }
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 9696c2fb30e9..2f06222f44e6 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -864,6 +864,25 @@ static inline void kvm_vm_bugged(struct kvm *kvm)
 	unlikely(__ret);					\
 })
 
+/*
+ * Note, "data corruption" refers to corruption of host kernel data structures,
+ * not guest data.  Guest data corruption, suspected or confirmed, that is tied
+ * and contained to a single VM should *never* BUG() and potentially panic the
+ * host, i.e. use this variant of KVM_BUG() if and only if a KVM data structure
+ * is corrupted and that corruption can have a cascading effect to other parts
+ * of the host and/or to other VMs.
+ */
+#define KVM_BUG_ON_DATA_CORRUPTION(cond, kvm)			\
+({								\
+	bool __ret = !!(cond);					\
+								\
+	if (IS_ENABLED(CONFIG_BUG_ON_DATA_CORRUPTION))		\
+		BUG_ON(__ret);					\
+	else if (WARN_ON_ONCE(__ret && !(kvm)->vm_bugged))	\
+		kvm_vm_bugged(kvm);				\
+	unlikely(__ret);					\
+})
+
 static inline void kvm_vcpu_srcu_read_lock(struct kvm_vcpu *vcpu)
 {
 #ifdef CONFIG_PROVE_RCU
-- 
2.40.1.606.ga4b1b128d6-goog
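
A standalone model of the new macro's two personalities, with IS_ENABLED()
replaced by a plain compile-time constant and abort()/fprintf() standing in
for BUG_ON()/WARN_ON_ONCE(); flip the define to model the =y behavior:

	#include <stdbool.h>
	#include <stdio.h>
	#include <stdlib.h>

	#define MOCK_BUG_ON_DATA_CORRUPTION 0	/* 1 models CONFIG_BUG_ON_DATA_CORRUPTION=y */

	struct kvm { bool vm_bugged; };

	#define KVM_BUG_ON_DATA_CORRUPTION(cond, kvm)			\
	({								\
		bool __ret = !!(cond);					\
									\
		if (MOCK_BUG_ON_DATA_CORRUPTION) {			\
			if (__ret)					\
				abort();	/* models BUG_ON() */	\
		} else if (__ret && !(kvm)->vm_bugged) {		\
			/* models WARN_ON_ONCE() + kvm_vm_bugged() */	\
			fprintf(stderr, "data corruption!\n");		\
			(kvm)->vm_bugged = true;			\
		}							\
		__ret;							\
	})

	int main(void)
	{
		struct kvm vm = { 0 };
		unsigned long rmap_val = 0;	/* empty rmap where an SPTE was expected */

		if (KVM_BUG_ON_DATA_CORRUPTION(!rmap_val, &vm))
			return 1;	/* caller bails out of the helper */
		return 0;
	}

Either way the expression still evaluates to the condition, which is what
lets callers such as pte_list_remove() above return early on corruption.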