From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Mingwei Zhang,
	David Matlack, Jim Mattson
Subject: [PATCH 1/9] KVM: x86/mmu: Delete pgprintk() and all its usage
Date: Thu, 11 May 2023 16:59:09 -0700
Message-ID: <20230511235917.639770-2-seanjc@google.com>
In-Reply-To: <20230511235917.639770-1-seanjc@google.com>
References: <20230511235917.639770-1-seanjc@google.com>

Delete KVM's pgprintk() and all its usage, as the code is very prone
to bitrot due to being buried behind MMU_DEBUG, and the functionality
has been rendered almost entirely obsolete by the tracepoints KVM has
gained over the years.  For the situations where the information
provided by KVM's tracepoints is insufficient, pgprintk() rarely fills
in the gaps, and is almost always far too noisy, i.e. developers end
up implementing custom prints anyway.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c          | 17 -----------------
 arch/x86/kvm/mmu/mmu_internal.h |  2 --
 arch/x86/kvm/mmu/paging_tmpl.h  |  7 -------
 arch/x86/kvm/mmu/spte.c         |  2 --
 4 files changed, 28 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c8961f45e3b1..cb70958eeaf9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2768,12 +2768,9 @@ int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn)
 	LIST_HEAD(invalid_list);
 	int r;
 
-	pgprintk("%s: looking for gfn %llx\n", __func__, gfn);
 	r = 0;
 	write_lock(&kvm->mmu_lock);
 	for_each_gfn_valid_sp_with_gptes(kvm, sp, gfn) {
-		pgprintk("%s: gfn %llx role %x\n", __func__, gfn,
-			 sp->role.word);
 		r = 1;
 		kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
 	}
@@ -2931,9 +2928,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	bool prefetch = !fault || fault->prefetch;
 	bool write_fault = fault && fault->write;
 
-	pgprintk("%s: spte %llx write_fault %d gfn %llx\n", __func__,
-		 *sptep, write_fault, gfn);
-
 	if (unlikely(is_noslot_pfn(pfn))) {
 		vcpu->stat.pf_mmio_spte_created++;
 		mark_mmio_spte(vcpu, sptep, gfn, pte_access);
@@ -2953,8 +2947,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 			drop_parent_pte(child, sptep);
 			flush = true;
 		} else if (pfn != spte_to_pfn(*sptep)) {
-			pgprintk("hfn old %llx new %llx\n",
-				 spte_to_pfn(*sptep), pfn);
 			drop_spte(vcpu->kvm, sptep);
 			flush = true;
 		} else
@@ -2979,8 +2971,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	if (flush)
 		kvm_flush_remote_tlbs_gfn(vcpu->kvm, gfn, level);
 
-	pgprintk("%s: setting spte %llx\n", __func__, *sptep);
-
 	if (!was_rmapped) {
 		WARN_ON_ONCE(ret == RET_PF_SPURIOUS);
 		rmap_add(vcpu, slot, sptep, gfn, pte_access);
@@ -4436,8 +4426,6 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 static int nonpaging_page_fault(struct kvm_vcpu *vcpu,
 				struct kvm_page_fault *fault)
 {
-	pgprintk("%s: gva %lx error %x\n", __func__, fault->addr, fault->error_code);
-
 	/* This path builds a PAE pagetable, we can map 2mb pages at maximum. */
 	fault->max_level = PG_LEVEL_2M;
 	return direct_page_fault(vcpu, fault);
@@ -5627,9 +5615,6 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
 {
 	unsigned offset, pte_size, misaligned;
 
-	pgprintk("misaligned: gpa %llx bytes %d role %x\n",
-		 gpa, bytes, sp->role.word);
-
 	offset = offset_in_page(gpa);
 	pte_size = sp->role.has_4_byte_gpte ? 4 : 8;
 
@@ -5695,8 +5680,6 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	if (!READ_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
 		return;
 
-	pgprintk("%s: gpa %llx bytes %d\n", __func__, gpa, bytes);
-
 	write_lock(&vcpu->kvm->mmu_lock);
 
 	gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, &bytes);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index d39af5639ce9..4f1e4b327f40 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -11,11 +11,9 @@
 #ifdef MMU_DEBUG
 extern bool dbg;
 
-#define pgprintk(x...) do { if (dbg) printk(x); } while (0)
 #define rmap_printk(fmt, args...) do { if (dbg) printk("%s: " fmt, __func__, ## args); } while (0)
 #define MMU_WARN_ON(x) WARN_ON(x)
 #else
-#define pgprintk(x...) do { } while (0)
 #define rmap_printk(x...) do { } while (0)
 #define MMU_WARN_ON(x) do { } while (0)
 #endif
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 0662e0278e70..7a97f769a7cb 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -456,9 +456,6 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 		goto retry_walk;
 	}
 
-	pgprintk("%s: pte %llx pte_access %x pt_access %x\n",
-		 __func__, (u64)pte, walker->pte_access,
-		 walker->pt_access[walker->level - 1]);
 	return 1;
 
 error:
@@ -529,8 +526,6 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte))
 		return false;
 
-	pgprintk("%s: gpte %llx spte %p\n", __func__, (u64)gpte, spte);
-
 	gfn = gpte_to_gfn(gpte);
 	pte_access = sp->role.access & FNAME(gpte_access)(gpte);
 	FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);
@@ -758,7 +753,6 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	struct guest_walker walker;
 	int r;
 
-	pgprintk("%s: addr %lx err %x\n", __func__, fault->addr, fault->error_code);
 	WARN_ON_ONCE(fault->is_tdp);
 
 	/*
@@ -773,7 +767,6 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 * The page is not mapped by the guest.  Let the guest handle it.
 	 */
 	if (!r) {
-		pgprintk("%s: guest page fault\n", __func__);
 		if (!fault->prefetch)
 			kvm_inject_emulated_page_fault(vcpu, &walker.fault);
 
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index cf2c6426a6fc..438a86bda9f3 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -221,8 +221,6 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	 * shadow pages and unsync'ing pages is not allowed.
 	 */
 	if (mmu_try_to_unsync_pages(vcpu->kvm, slot, gfn, can_unsync, prefetch)) {
-		pgprintk("%s: found shadow page for %llx, marking ro\n",
-			 __func__, gfn);
 		wrprot = true;
 		pte_access &= ~ACC_WRITE_MASK;
 		spte &= ~(PT_WRITABLE_MASK | shadow_mmu_writable_mask);
-- 
2.40.1.606.ga4b1b128d6-goog