From: Sean Christopherson
Date: Tue, 21 Mar 2023 15:00:09 -0700
Subject: [PATCH v4 01/13] KVM: x86/mmu: Add a helper function to check if an SPTE needs atomic write
Message-ID: <20230321220021.2119033-2-seanjc@google.com>
In-Reply-To: <20230321220021.2119033-1-seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma, David Matlack, Ben Gardon

From: Vipin Sharma

Move the conditions in kvm_tdp_mmu_write_spte() that check whether an SPTE
should be written atomically into a separate function. This new function,
kvm_tdp_mmu_spte_need_atomic_write(), will be used in future commits to
optimize clearing bits in SPTEs.

Signed-off-by: Vipin Sharma
Reviewed-by: David Matlack
Reviewed-by: Ben Gardon
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_iter.h | 34 ++++++++++++++++++++--------------
 1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index f0af385c56e0..c11c5d00b2c1 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -29,23 +29,29 @@ static inline void __kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 new_spte)
 	WRITE_ONCE(*rcu_dereference(sptep), new_spte);
 }
 
+/*
+ * SPTEs must be modified atomically if they are shadow-present, leaf
+ * SPTEs, and have volatile bits, i.e. has bits that can be set outside
+ * of mmu_lock. The Writable bit can be set by KVM's fast page fault
+ * handler, and Accessed and Dirty bits can be set by the CPU.
+ *
+ * Note, non-leaf SPTEs do have Accessed bits and those bits are
+ * technically volatile, but KVM doesn't consume the Accessed bit of
+ * non-leaf SPTEs, i.e. KVM doesn't care if it clobbers the bit. This
+ * logic needs to be reassessed if KVM were to use non-leaf Accessed
+ * bits, e.g. to skip stepping down into child SPTEs when aging SPTEs.
+ */
+static inline bool kvm_tdp_mmu_spte_need_atomic_write(u64 old_spte, int level)
+{
+	return is_shadow_present_pte(old_spte) &&
+	       is_last_spte(old_spte, level) &&
+	       spte_has_volatile_bits(old_spte);
+}
+
 static inline u64 kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 old_spte,
 					 u64 new_spte, int level)
 {
-	/*
-	 * Atomically write the SPTE if it is a shadow-present, leaf SPTE with
-	 * volatile bits, i.e. has bits that can be set outside of mmu_lock.
-	 * The Writable bit can be set by KVM's fast page fault handler, and
-	 * Accessed and Dirty bits can be set by the CPU.
-	 *
-	 * Note, non-leaf SPTEs do have Accessed bits and those bits are
-	 * technically volatile, but KVM doesn't consume the Accessed bit of
-	 * non-leaf SPTEs, i.e. KVM doesn't care if it clobbers the bit. This
-	 * logic needs to be reassessed if KVM were to use non-leaf Accessed
-	 * bits, e.g. to skip stepping down into child SPTEs when aging SPTEs.
-	 */
-	if (is_shadow_present_pte(old_spte) && is_last_spte(old_spte, level) &&
-	    spte_has_volatile_bits(old_spte))
+	if (kvm_tdp_mmu_spte_need_atomic_write(old_spte, level))
 		return kvm_tdp_mmu_write_spte_atomic(sptep, new_spte);
 
 	__kvm_tdp_mmu_write_spte(sptep, new_spte);
-- 
2.40.0.rc2.332.ga46443480c-goog
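[The rule the new helper encodes can be seen in isolation: pay for an atomic
read-modify-write only when another agent may legally flip bits in the word
concurrently; otherwise a plain store is enough. Below is a rough user-space
sketch of that rule using C11 atomics. It is not KVM code; the pte type, the
PTE_* flags and the helper names are invented for the illustration, and the
"volatile bits" check is reduced to present+leaf for brevity.]

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

typedef _Atomic uint64_t pte_t;

#define PTE_PRESENT  (1ull << 0)
#define PTE_LEAF     (1ull << 1)
#define PTE_DIRTY    (1ull << 9)   /* can be set by "hardware" concurrently */

static int pte_needs_atomic_write(uint64_t old)
{
	/* Present leaf entries may have bits set behind our back. */
	return (old & PTE_PRESENT) && (old & PTE_LEAF);
}

static uint64_t pte_write(pte_t *ptep, uint64_t old, uint64_t new)
{
	if (pte_needs_atomic_write(old))
		return atomic_exchange(ptep, new);	/* returns what was really there */

	atomic_store_explicit(ptep, new, memory_order_relaxed);
	return old;	/* plain store; the old value could not have changed */
}

int main(void)
{
	pte_t pte = PTE_PRESENT | PTE_LEAF | PTE_DIRTY;
	uint64_t old = pte_write(&pte, pte, 0);

	printf("old pte: %#llx\n", (unsigned long long)old);
	return 0;
}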
From: Sean Christopherson
Date: Tue, 21 Mar 2023 15:00:10 -0700
Subject: [PATCH v4 02/13] KVM: x86/mmu: Use kvm_ad_enabled() to determine if TDP MMU SPTEs need wrprot
Message-ID: <20230321220021.2119033-3-seanjc@google.com>
In-Reply-To: <20230321220021.2119033-1-seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma, David Matlack, Ben Gardon

From: Vipin Sharma

Use the constant-after-module-load kvm_ad_enabled() to check if SPTEs in
the TDP MMU need to be write-protected when clearing accessed/dirty status
instead of manually checking every SPTE. The per-SPTE A/D enabling is
specific to nested EPT MMUs, i.e. when KVM is using EPT A/D bits but L1 is
not, and so cannot happen in the TDP MMU (which is non-nested only).

Keep the original code as sanity checks buried under MMU_WARN_ON().
MMU_WARN_ON() is more or less useless at the moment, but there are plans
to change that.

Link: https://lore.kernel.org/all/Yz4Qi7cn7TWTWQjj@google.com
Signed-off-by: Vipin Sharma
[sean: split to separate patch, apply to dirty path, write changelog]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 7c25dbf32ecc..5a5642650c3e 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1621,7 +1621,10 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 		if (!is_shadow_present_pte(iter.old_spte))
 			continue;
 
-		if (spte_ad_need_write_protect(iter.old_spte)) {
+		MMU_WARN_ON(kvm_ad_enabled() &&
+			    spte_ad_need_write_protect(iter.old_spte));
+
+		if (!kvm_ad_enabled()) {
 			if (is_writable_pte(iter.old_spte))
 				new_spte = iter.old_spte & ~PT_WRITABLE_MASK;
 			else
@@ -1685,13 +1688,16 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
 		if (!mask)
 			break;
 
+		MMU_WARN_ON(kvm_ad_enabled() &&
+			    spte_ad_need_write_protect(iter.old_spte));
+
 		if (iter.level > PG_LEVEL_4K ||
 		    !(mask & (1UL << (iter.gfn - gfn))))
 			continue;
 
 		mask &= ~(1UL << (iter.gfn - gfn));
 
-		if (wrprot || spte_ad_need_write_protect(iter.old_spte)) {
+		if (wrprot || !kvm_ad_enabled()) {
 			if (is_writable_pte(iter.old_spte))
 				new_spte = iter.old_spte & ~PT_WRITABLE_MASK;
 			else
-- 
2.40.0.rc2.332.ga46443480c-goog
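[The pattern in this patch is to replace a per-entry predicate with a flag
that is fixed once at startup, while demoting the old per-entry check to a
sanity assertion. A minimal stand-alone sketch of that shape, with invented
names (ad_bits_enabled, PTE_* flags) and no relation to KVM's real SPTE
layout, is shown below.]

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_WRITABLE (1ull << 1)
#define PTE_DIRTY    (1ull << 9)

static bool ad_bits_enabled = true;	/* decided once at init, never changes */

/* The old, per-entry way of asking "does clearing dirty mean write-protect?". */
static bool entry_needs_wrprot(uint64_t pte)
{
	(void)pte;
	return !ad_bits_enabled;	/* in this sketch it only depends on the flag */
}

static uint64_t clear_dirty(uint64_t pte)
{
	/* Sanity check: the per-entry answer must agree with the global flag. */
	assert(!(ad_bits_enabled && entry_needs_wrprot(pte)));

	if (!ad_bits_enabled)			/* write-protect flavor */
		return pte & ~PTE_WRITABLE;
	return pte & ~PTE_DIRTY;		/* A/D-bit flavor */
}

int main(void)
{
	printf("%#llx\n", (unsigned long long)clear_dirty(PTE_WRITABLE | PTE_DIRTY));
	return 0;
}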
From: Sean Christopherson
Date: Tue, 21 Mar 2023 15:00:11 -0700
Subject: [PATCH v4 03/13] KVM: x86/mmu: Consolidate Dirty vs. Writable clearing logic in TDP MMU
Message-ID: <20230321220021.2119033-4-seanjc@google.com>
In-Reply-To: <20230321220021.2119033-1-seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma, David Matlack, Ben Gardon

From: Vipin Sharma

Deduplicate the guts of the TDP MMU's clearing of dirty status by
snapshotting whether to check+clear the Dirty bit vs. the Writable bit,
which is the only difference between the two flavors of dirty tracking.

Note, kvm_ad_enabled() is just a wrapper for shadow_accessed_mask, i.e.
is constant after kvm-{intel,amd}.ko is loaded.

Link: https://lore.kernel.org/all/Yz4Qi7cn7TWTWQjj@google.com
Signed-off-by: Vipin Sharma
[sean: split to separate patch, apply to dirty log, write changelog]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 35 +++++++++--------------------------
 1 file changed, 9 insertions(+), 26 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 5a5642650c3e..b32c9ba05c89 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1607,8 +1607,8 @@ void kvm_tdp_mmu_try_split_huge_pages(struct kvm *kvm,
 static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 			   gfn_t start, gfn_t end)
 {
+	u64 dbit = kvm_ad_enabled() ? shadow_dirty_mask : PT_WRITABLE_MASK;
 	struct tdp_iter iter;
-	u64 new_spte;
 	bool spte_set = false;
 
 	rcu_read_lock();
@@ -1624,19 +1624,10 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 		MMU_WARN_ON(kvm_ad_enabled() &&
 			    spte_ad_need_write_protect(iter.old_spte));
 
-		if (!kvm_ad_enabled()) {
-			if (is_writable_pte(iter.old_spte))
-				new_spte = iter.old_spte & ~PT_WRITABLE_MASK;
-			else
-				continue;
-		} else {
-			if (iter.old_spte & shadow_dirty_mask)
-				new_spte = iter.old_spte & ~shadow_dirty_mask;
-			else
-				continue;
-		}
+		if (!(iter.old_spte & dbit))
+			continue;
 
-		if (tdp_mmu_set_spte_atomic(kvm, &iter, new_spte))
+		if (tdp_mmu_set_spte_atomic(kvm, &iter, iter.old_spte & ~dbit))
 			goto retry;
 
 		spte_set = true;
@@ -1678,8 +1669,9 @@ bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm,
 static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
 				  gfn_t gfn, unsigned long mask, bool wrprot)
 {
+	u64 dbit = (wrprot || !kvm_ad_enabled()) ? PT_WRITABLE_MASK :
+						   shadow_dirty_mask;
 	struct tdp_iter iter;
-	u64 new_spte;
 
 	rcu_read_lock();
 
@@ -1697,19 +1689,10 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
 
 		mask &= ~(1UL << (iter.gfn - gfn));
 
-		if (wrprot || !kvm_ad_enabled()) {
-			if (is_writable_pte(iter.old_spte))
-				new_spte = iter.old_spte & ~PT_WRITABLE_MASK;
-			else
-				continue;
-		} else {
-			if (iter.old_spte & shadow_dirty_mask)
-				new_spte = iter.old_spte & ~shadow_dirty_mask;
-			else
-				continue;
-		}
+		if (!(iter.old_spte & dbit))
+			continue;
 
-		tdp_mmu_set_spte_no_dirty_log(kvm, &iter, new_spte);
+		tdp_mmu_set_spte_no_dirty_log(kvm, &iter, iter.old_spte & ~dbit);
 	}
 
 	rcu_read_unlock();
-- 
2.40.0.rc2.332.ga46443480c-goog
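[The deduplication boils down to choosing the bit to test-and-clear once,
before the loop, so the loop body is identical for both tracking flavors.
A compilable stand-alone sketch of that pattern follows; it is not KVM
code and the PTE_* names and clear_dirty_range() helper are invented.]

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PTE_WRITABLE (1ull << 1)
#define PTE_DIRTY    (1ull << 9)

static bool clear_dirty_range(uint64_t *ptes, size_t n, bool ad_enabled)
{
	/* "dbit" is snapshotted once; no per-entry branching on the flavor. */
	uint64_t dbit = ad_enabled ? PTE_DIRTY : PTE_WRITABLE;
	bool cleared = false;

	for (size_t i = 0; i < n; i++) {
		if (!(ptes[i] & dbit))
			continue;

		ptes[i] &= ~dbit;
		cleared = true;
	}
	return cleared;
}

int main(void)
{
	uint64_t ptes[2] = { PTE_DIRTY | PTE_WRITABLE, PTE_WRITABLE };

	return clear_dirty_range(ptes, 2, true) ? 0 : 1;
}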
From: Sean Christopherson
Date: Tue, 21 Mar 2023 15:00:12 -0700
Subject: [PATCH v4 04/13] KVM: x86/mmu: Atomically clear SPTE dirty state in the clear-dirty-log flow
Message-ID: <20230321220021.2119033-5-seanjc@google.com>
In-Reply-To: <20230321220021.2119033-1-seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma, David Matlack, Ben Gardon

From: Vipin Sharma

Optimize the clearing of dirty state in TDP MMU SPTEs by doing an
atomic-AND (on SPTEs that have volatile bits) instead of the full XCHG
that currently ends up being invoked (see kvm_tdp_mmu_write_spte()).
Clearing _only_ the bit in question will allow KVM to skip the many
irrelevant checks in __handle_changed_spte() by avoiding any collateral
damage due to the XCHG writing all SPTE bits, e.g. the XCHG could race
with fast_page_fault() setting the W-bit and the CPU setting the D-bit,
and thus incorrectly drop the CPU's D-bit update.

Link: https://lore.kernel.org/all/Y9hXmz%2FnDOr1hQal@google.com
Signed-off-by: Vipin Sharma
Reviewed-by: David Matlack
[sean: split the switch to atomic-AND to a separate patch]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_iter.h | 14 ++++++++++++++
 arch/x86/kvm/mmu/tdp_mmu.c  | 16 ++++++++--------
 2 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index c11c5d00b2c1..fae559559a80 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -58,6 +58,20 @@ static inline u64 kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 old_spte,
 	return old_spte;
 }
 
+static inline u64 tdp_mmu_clear_spte_bits(tdp_ptep_t sptep, u64 old_spte,
+					  u64 mask, int level)
+{
+	atomic64_t *sptep_atomic;
+
+	if (kvm_tdp_mmu_spte_need_atomic_write(old_spte, level)) {
+		sptep_atomic = (atomic64_t *)rcu_dereference(sptep);
+		return (u64)atomic64_fetch_and(~mask, sptep_atomic);
+	}
+
+	__kvm_tdp_mmu_write_spte(sptep, old_spte & ~mask);
+	return old_spte;
+}
+
 /*
  * A TDP iterator performs a pre-order walk over a TDP paging structure.
  */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index b32c9ba05c89..a70cc1dae18a 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -770,13 +770,6 @@ static inline void tdp_mmu_set_spte_no_acc_track(struct kvm *kvm,
 	_tdp_mmu_set_spte(kvm, iter, new_spte, false, true);
 }
 
-static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm,
-						 struct tdp_iter *iter,
-						 u64 new_spte)
-{
-	_tdp_mmu_set_spte(kvm, iter, new_spte, true, false);
-}
-
 #define tdp_root_for_each_pte(_iter, _root, _start, _end) \
 	for_each_tdp_pte(_iter, _root, _start, _end)
 
@@ -1692,7 +1685,14 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
 		if (!(iter.old_spte & dbit))
 			continue;
 
-		tdp_mmu_set_spte_no_dirty_log(kvm, &iter, iter.old_spte & ~dbit);
+		iter.old_spte = tdp_mmu_clear_spte_bits(iter.sptep,
+							iter.old_spte, dbit,
+							iter.level);
+
+		__handle_changed_spte(kvm, iter.as_id, iter.gfn, iter.old_spte,
+				      iter.old_spte & ~dbit, iter.level, false);
+		handle_changed_spte_acc_track(iter.old_spte, iter.old_spte & ~dbit,
+					      iter.level);
 	}
 
 	rcu_read_unlock();
-- 
2.40.0.rc2.332.ga46443480c-goog
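[The changelog's race is easy to reproduce outside the kernel: an XCHG
writes back every bit of a possibly stale snapshot, so an update that lands
between the read and the exchange is silently lost, whereas an atomic AND
touches only the mask. The user-space C11 sketch below demonstrates the
difference; it is not KVM code and the PTE_* bit names are stand-ins.]

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_WRITABLE (1ull << 1)
#define PTE_DIRTY    (1ull << 9)

int main(void)
{
	_Atomic uint64_t pte = PTE_DIRTY;

	/* CPU 0 snapshots the entry, intending to clear the Dirty bit. */
	uint64_t snapshot = atomic_load(&pte);

	/* CPU 1 races in and makes the entry writable (fast page fault). */
	atomic_fetch_or(&pte, PTE_WRITABLE);

	/* An XCHG with the stale snapshot would drop PTE_WRITABLE:       */
	/*     atomic_exchange(&pte, snapshot & ~PTE_DIRTY);              */

	/* The atomic AND clears only the Dirty bit and keeps the racing update. */
	atomic_fetch_and(&pte, ~PTE_DIRTY);

	printf("snapshot=%#llx final=%#llx (writable preserved: %s)\n",
	       (unsigned long long)snapshot,
	       (unsigned long long)atomic_load(&pte),
	       (atomic_load(&pte) & PTE_WRITABLE) ? "yes" : "no");
	return 0;
}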
From: Sean Christopherson
Date: Tue, 21 Mar 2023 15:00:13 -0700
Subject: [PATCH v4 05/13] KVM: x86/mmu: Drop access tracking checks when clearing TDP MMU dirty bits
Message-ID: <20230321220021.2119033-6-seanjc@google.com>
In-Reply-To: <20230321220021.2119033-1-seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma, David Matlack, Ben Gardon

From: Vipin Sharma

Drop the unnecessary call to handle access-tracking changes when clearing
the dirty status of TDP MMU SPTEs. Neither the Dirty bit nor the Writable
bit has any impact on the accessed state of a page, i.e. clearing only
the aforementioned bits doesn't make an accessed SPTE suddenly not
accessed.

Signed-off-by: Vipin Sharma
[sean: split to separate patch, write changelog]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index a70cc1dae18a..950c5d23ecee 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1691,8 +1691,6 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
 
 		__handle_changed_spte(kvm, iter.as_id, iter.gfn, iter.old_spte,
 				      iter.old_spte & ~dbit, iter.level, false);
-		handle_changed_spte_acc_track(iter.old_spte, iter.old_spte & ~dbit,
-					      iter.level);
 	}
 
 	rcu_read_unlock();
-- 
2.40.0.rc2.332.ga46443480c-goog
From: Sean Christopherson
Date: Tue, 21 Mar 2023 15:00:14 -0700
Subject: [PATCH v4 06/13] KVM: x86/mmu: Bypass __handle_changed_spte() when clearing TDP MMU dirty bits
Message-ID: <20230321220021.2119033-7-seanjc@google.com>
In-Reply-To: <20230321220021.2119033-1-seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma, David Matlack, Ben Gardon

From: Vipin Sharma

Drop everything except marking the PFN dirty and the relevant tracepoint
parts of __handle_changed_spte() when clearing the dirty status of gfns
in the TDP MMU. Clearing only the Dirty (or Writable) bit doesn't affect
the SPTE's shadow-present status, whether or not the SPTE is a leaf, or
change the SPTE's PFN. I.e. other than marking the PFN dirty, none of the
functional updates handled by __handle_changed_spte() are relevant.

Losing __handle_changed_spte()'s sanity checks does mean that a bug could
theoretically go unnoticed, but that scenario is extremely unlikely, e.g.
would effectively require a misconfigured MMU or a locking bug elsewhere.

Opportunistically remove a comment blurb from __handle_changed_spte()
about all modifications to TDP MMU SPTEs needing to invoke said function;
that "rule" hasn't been true since fast page fault support was added for
the TDP MMU (and perhaps even before).

Tested on a VM (160 vCPUs, 160 GB memory) and found that performance of
clear dirty log stage improved by ~40% in dirty_log_perf_test (with the
full optimization applied).

Before optimization:
--------------------
Iteration 1 clear dirty log time: 3.638543593s
Iteration 2 clear dirty log time: 3.145032742s
Iteration 3 clear dirty log time: 3.142340358s
Clear dirty log over 3 iterations took 9.925916693s. (Avg 3.308638897s/iteration)

After optimization:
-------------------
Iteration 1 clear dirty log time: 2.318988110s
Iteration 2 clear dirty log time: 1.794470164s
Iteration 3 clear dirty log time: 1.791668628s
Clear dirty log over 3 iterations took 5.905126902s. (Avg 1.968375634s/iteration)

Link: https://lore.kernel.org/all/Y9hXmz%2FnDOr1hQal@google.com
Signed-off-by: Vipin Sharma
Reviewed-by: David Matlack
[sean: split the switch to atomic-AND to a separate patch]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 950c5d23ecee..467931c43968 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -517,7 +517,6 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
  * threads that might be modifying SPTEs.
  *
  * Handle bookkeeping that might result from the modification of a SPTE.
- * This function must be called for all TDP SPTE modifications.
  */
 static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 				  u64 old_spte, u64 new_spte, int level,
@@ -1689,8 +1688,10 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
 							iter.old_spte, dbit,
 							iter.level);
 
-		__handle_changed_spte(kvm, iter.as_id, iter.gfn, iter.old_spte,
-				      iter.old_spte & ~dbit, iter.level, false);
+		trace_kvm_tdp_mmu_spte_changed(iter.as_id, iter.gfn, iter.level,
+					       iter.old_spte,
+					       iter.old_spte & ~dbit);
+		kvm_set_pfn_dirty(spte_to_pfn(iter.old_spte));
 	}
 
 	rcu_read_unlock();
-- 
2.40.0.rc2.332.ga46443480c-goog
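[The shape of the fast path argued for above is: when the only possible
change is "one tracking bit cleared", perform just the side effects that
change can have (trace it, record the page as dirty) instead of funnelling
the update through a generic "something changed" handler. A small
stand-alone sketch of that shape follows; it is not KVM code, and the
PTE_DIRTY flag plus the trace/mark helpers are invented for the example.]

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_DIRTY (1ull << 9)

static void trace_pte_changed(uint64_t old, uint64_t new)
{
	printf("trace: %#llx -> %#llx\n", (unsigned long long)old,
	       (unsigned long long)new);
}

static void mark_page_dirty(uint64_t old_pte)
{
	printf("page for pte %#llx recorded as dirty\n", (unsigned long long)old_pte);
}

static void clear_dirty_fast(_Atomic uint64_t *ptep)
{
	/* Clear only the D-bit; any racing updates to other bits survive. */
	uint64_t old = atomic_fetch_and(ptep, ~PTE_DIRTY);

	if (!(old & PTE_DIRTY))
		return;

	/* The two side effects that clearing the D-bit can actually have. */
	trace_pte_changed(old, old & ~PTE_DIRTY);
	mark_page_dirty(old);
}

int main(void)
{
	_Atomic uint64_t pte = PTE_DIRTY;

	clear_dirty_fast(&pte);
	return 0;
}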
From: Sean Christopherson
Date: Tue, 21 Mar 2023 15:00:15 -0700
Subject: [PATCH v4 07/13] KVM: x86/mmu: Remove "record_dirty_log" in __tdp_mmu_set_spte()
Message-ID: <20230321220021.2119033-8-seanjc@google.com>
In-Reply-To: <20230321220021.2119033-1-seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma, David Matlack, Ben Gardon

From: Vipin Sharma

Remove bool parameter "record_dirty_log" from __tdp_mmu_set_spte() and
refactor the code as this variable is always set to true by its caller.

Signed-off-by: Vipin Sharma
Reviewed-by: David Matlack
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 24 +++++++++---------------
 1 file changed, 9 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 467931c43968..3cc81fa22b7f 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -708,18 +708,13 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
  *		      notifier for access tracking. Leaving record_acc_track
  *		      unset in that case prevents page accesses from being
  *		      double counted.
- * @record_dirty_log: Record the page as dirty in the dirty bitmap if
- *		      appropriate for the change being made. Should be set
- *		      unless performing certain dirty logging operations.
- *		      Leaving record_dirty_log unset in that case prevents page
- *		      writes from being double counted.
  *
  * Returns the old SPTE value, which _may_ be different than @old_spte if the
  * SPTE had voldatile bits.
  */
 static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
 			      u64 old_spte, u64 new_spte, gfn_t gfn, int level,
-			      bool record_acc_track, bool record_dirty_log)
+			      bool record_acc_track)
 {
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
@@ -738,35 +733,34 @@ static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
 
 	if (record_acc_track)
 		handle_changed_spte_acc_track(old_spte, new_spte, level);
-	if (record_dirty_log)
-		handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte,
-					      new_spte, level);
+
+	handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte, new_spte,
+				      level);
 	return old_spte;
 }
 
 static inline void _tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
-				     u64 new_spte, bool record_acc_track,
-				     bool record_dirty_log)
+				     u64 new_spte, bool record_acc_track)
 {
 	WARN_ON_ONCE(iter->yielded);
 
 	iter->old_spte = __tdp_mmu_set_spte(kvm, iter->as_id, iter->sptep,
 					    iter->old_spte, new_spte,
 					    iter->gfn, iter->level,
-					    record_acc_track, record_dirty_log);
+					    record_acc_track);
 }
 
 static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
 				    u64 new_spte)
 {
-	_tdp_mmu_set_spte(kvm, iter, new_spte, true, true);
+	_tdp_mmu_set_spte(kvm, iter, new_spte, true);
 }
 
 static inline void tdp_mmu_set_spte_no_acc_track(struct kvm *kvm,
 						 struct tdp_iter *iter,
						 u64 new_spte)
 {
-	_tdp_mmu_set_spte(kvm, iter, new_spte, false, true);
+	_tdp_mmu_set_spte(kvm, iter, new_spte, false);
 }
 
 #define tdp_root_for_each_pte(_iter, _root, _start, _end) \
@@ -916,7 +910,7 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 		return false;
 
 	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte, 0,
-			   sp->gfn, sp->role.level + 1, true, true);
+			   sp->gfn, sp->role.level + 1, true);
 
 	return true;
 }
-- 
2.40.0.rc2.332.ga46443480c-goog
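[The cleanup pattern here, dropping a flag that every caller passes as
true and making the flagged work unconditional, is shown below as a tiny
before/after sketch. It is not KVM code; the names set_pte_old(),
set_pte() and update_dirty_log() are invented for the illustration.]

#include <stdbool.h>
#include <stdint.h>

void update_dirty_log(uint64_t old, uint64_t new) { (void)old; (void)new; }

/* Before: the flag only obscures that the work always happens. */
uint64_t set_pte_old(uint64_t *ptep, uint64_t new, bool record_dirty_log)
{
	uint64_t old = *ptep;

	*ptep = new;
	if (record_dirty_log)
		update_dirty_log(old, new);
	return old;
}

/* After: one less parameter, identical behavior for every existing caller. */
uint64_t set_pte(uint64_t *ptep, uint64_t new)
{
	uint64_t old = *ptep;

	*ptep = new;
	update_dirty_log(old, new);
	return old;
}

int main(void)
{
	uint64_t pte = 0;

	set_pte_old(&pte, 1, true);
	set_pte(&pte, 2);
	return 0;
}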
From: Sean Christopherson
Date: Tue, 21 Mar 2023 15:00:16 -0700
Subject: [PATCH v4 08/13] KVM: x86/mmu: Clear only A-bit (if enabled) when aging TDP MMU SPTEs
Message-ID: <20230321220021.2119033-9-seanjc@google.com>
In-Reply-To: <20230321220021.2119033-1-seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma, David Matlack, Ben Gardon

From: Vipin Sharma

Use tdp_mmu_clear_spte_bits() when clearing the Accessed bit in TDP MMU
SPTEs so as to use an atomic-AND instead of XCHG to clear the A-bit.
Similar to the D-bit story, this will allow KVM to bypass
__handle_changed_spte() by ensuring only the A-bit is modified.

Link: https://lore.kernel.org/all/Y9HcHRBShQgjxsQb@google.com
Signed-off-by: Vipin Sharma
Reviewed-by: David Matlack
[sean: massage changelog]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 38 +++++++++++++++++++++-----------------
 1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 3cc81fa22b7f..adbdfed287cc 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -756,13 +756,6 @@ static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
 	_tdp_mmu_set_spte(kvm, iter, new_spte, true);
 }
 
-static inline void tdp_mmu_set_spte_no_acc_track(struct kvm *kvm,
-						 struct tdp_iter *iter,
-						 u64 new_spte)
-{
-	_tdp_mmu_set_spte(kvm, iter, new_spte, false);
-}
-
 #define tdp_root_for_each_pte(_iter, _root, _start, _end) \
 	for_each_tdp_pte(_iter, _root, _start, _end)
 
@@ -1248,33 +1241,44 @@ static __always_inline bool kvm_tdp_mmu_handle_gfn(struct kvm *kvm,
 /*
  * Mark the SPTEs range of GFNs [start, end) unaccessed and return non-zero
  * if any of the GFNs in the range have been accessed.
+ *
+ * No need to mark the corresponding PFN as accessed as this call is coming
+ * from the clear_young() or clear_flush_young() notifier, which uses the
+ * return value to determine if the page has been accessed.
  */
 static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,
 			  struct kvm_gfn_range *range)
 {
-	u64 new_spte = 0;
+	u64 new_spte;
 
 	/* If we have a non-accessed entry we don't need to change the pte. */
 	if (!is_accessed_spte(iter->old_spte))
 		return false;
 
-	new_spte = iter->old_spte;
-
-	if (spte_ad_enabled(new_spte)) {
-		new_spte &= ~shadow_accessed_mask;
+	if (spte_ad_enabled(iter->old_spte)) {
+		iter->old_spte = tdp_mmu_clear_spte_bits(iter->sptep,
+							 iter->old_spte,
+							 shadow_accessed_mask,
+							 iter->level);
+		new_spte = iter->old_spte & ~shadow_accessed_mask;
 	} else {
 		/*
 		 * Capture the dirty status of the page, so that it doesn't get
 		 * lost when the SPTE is marked for access tracking.
 		 */
-		if (is_writable_pte(new_spte))
-			kvm_set_pfn_dirty(spte_to_pfn(new_spte));
+		if (is_writable_pte(iter->old_spte))
+			kvm_set_pfn_dirty(spte_to_pfn(iter->old_spte));
 
-		new_spte = mark_spte_for_access_track(new_spte);
+		new_spte = mark_spte_for_access_track(iter->old_spte);
+		iter->old_spte = kvm_tdp_mmu_write_spte(iter->sptep,
+							iter->old_spte, new_spte,
+							iter->level);
 	}
 
-	tdp_mmu_set_spte_no_acc_track(kvm, iter, new_spte);
-
+	__handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
+			      new_spte, iter->level, false);
+	handle_changed_spte_dirty_log(kvm, iter->as_id, iter->gfn,
				      iter->old_spte, new_spte, iter->level);
 	return true;
 }
 
-- 
2.40.0.rc2.332.ga46443480c-goog
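[Aging has two shapes in the patch above: with hardware A/D bits only the
Accessed bit changes, so an atomic AND is enough; without them the entry is
rewritten into an "access tracked" form, and the dirty status must be
captured before the entry stops being writable. A simplified stand-alone
sketch of those two paths follows. It is not KVM code; the PTE_* flags,
the fake PTE_TRACKED encoding and age_pte() are invented for the example.]

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define PTE_WRITABLE  (1ull << 1)
#define PTE_ACCESSED  (1ull << 5)
#define PTE_TRACKED   (1ull << 62)	/* fake "marked for access tracking" bit */

static void mark_page_dirty(uint64_t pte) { (void)pte; }

static uint64_t age_pte(_Atomic uint64_t *ptep, bool ad_enabled)
{
	uint64_t old;

	if (ad_enabled) {
		/* Only the A-bit changes, so an atomic AND is sufficient. */
		old = atomic_fetch_and(ptep, ~PTE_ACCESSED);
	} else {
		/* Capture dirty state before the entry stops being writable. */
		old = atomic_load(ptep);
		if (old & PTE_WRITABLE)
			mark_page_dirty(old);

		/* Rewrite the whole entry into its access-tracked form. */
		old = atomic_exchange(ptep,
				      (old & ~(PTE_WRITABLE | PTE_ACCESSED)) | PTE_TRACKED);
	}
	return old;	/* the old value tells the caller whether the page was young */
}

int main(void)
{
	_Atomic uint64_t pte = PTE_ACCESSED | PTE_WRITABLE;

	age_pte(&pte, false);
	return 0;
}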
From: Sean Christopherson
Date: Tue, 21 Mar 2023 15:00:17 -0700
Subject: [PATCH v4 09/13] KVM: x86/mmu: Drop unnecessary dirty log checks when aging TDP MMU SPTEs
Message-ID: <20230321220021.2119033-10-seanjc@google.com>
In-Reply-To: <20230321220021.2119033-1-seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma, David Matlack, Ben Gardon

From: Vipin Sharma

Drop the unnecessary call to handle dirty log updates when aging TDP MMU
SPTEs, as neither clearing the Accessed bit nor marking a SPTE for access
tracking can _set_ the Writable bit, i.e. can't trigger marking a gfn
dirty in its memslot. The access tracking path can _clear_ the Writable
bit, e.g. if the XCHG races with fast_page_fault() and writes the stale
value without the Writable bit set, but clearing the Writable bit outside
of mmu_lock is not allowed, i.e. access tracking can't spuriously set the
Writable bit.

Signed-off-by: Vipin Sharma
[sean: split to separate patch, apply to dirty path, write changelog]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index adbdfed287cc..29bb97ff266e 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1277,8 +1277,6 @@ static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,
 
 	__handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
 			      new_spte, iter->level, false);
-	handle_changed_spte_dirty_log(kvm, iter->as_id, iter->gfn,
-				      iter->old_spte, new_spte, iter->level);
 	return true;
 }
 
-- 
2.40.0.rc2.332.ga46443480c-goog
From: Sean Christopherson
Date: Tue, 21 Mar 2023 15:00:18 -0700
Subject: [PATCH v4 10/13] KVM: x86/mmu: Bypass __handle_changed_spte() when aging TDP MMU SPTEs
Message-ID: <20230321220021.2119033-11-seanjc@google.com>
In-Reply-To: <20230321220021.2119033-1-seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma, David Matlack, Ben Gardon

From: Vipin Sharma

Drop everything except the "tdp_mmu_spte_changed" tracepoint part of
__handle_changed_spte() when aging SPTEs in the TDP MMU, as clearing the
accessed status doesn't affect the SPTE's shadow-present status, whether
or not the SPTE is a leaf, or change the PFN. I.e. none of the functional
updates handled by __handle_changed_spte() are relevant.

Losing __handle_changed_spte()'s sanity checks does mean that a bug could
theoretically go unnoticed, but that scenario is extremely unlikely, e.g.
would effectively require a misconfigured MMU or a locking bug elsewhere.

Link: https://lore.kernel.org/all/Y9HcHRBShQgjxsQb@google.com
Signed-off-by: Vipin Sharma
Reviewed-by: David Matlack
[sean: massage changelog]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 29bb97ff266e..cdfb67ef5800 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1275,8 +1275,8 @@ static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,
 					  iter->level);
 	}
 
-	__handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
-			      new_spte, iter->level, false);
+	trace_kvm_tdp_mmu_spte_changed(iter->as_id, iter->gfn, iter->level,
+				       iter->old_spte, new_spte);
 	return true;
 }
 
-- 
2.40.0.rc2.332.ga46443480c-goog
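[The reason the trimmed-down aging and dirty-clearing paths are safe to
run without the generic handler is that a narrow atomic update cannot lose
concurrent updates to other bits. The two-thread user-space check below
exercises that property with C11 atomics and pthreads (compile with
-pthread); it is not KVM code and the PTE_* names are stand-ins.]

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_ACCESSED (1ull << 5)
#define PTE_DIRTY    (1ull << 9)

static _Atomic uint64_t pte = PTE_ACCESSED;

static void *hardware(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++)
		atomic_fetch_or(&pte, PTE_DIRTY);	/* CPU sets the D-bit */
	return NULL;
}

static void *ager(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++)
		atomic_fetch_and(&pte, ~PTE_ACCESSED);	/* aging clears the A-bit */
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, hardware, NULL);
	pthread_create(&t2, NULL, ager, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	/* The D-bit must have survived and the A-bit must be gone. */
	printf("final pte = %#llx\n", (unsigned long long)atomic_load(&pte));
	return 0;
}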
From nobody Sun Feb 8 16:33:56 2026
Subject: [PATCH v4 11/13] KVM: x86/mmu: Remove "record_acc_track" in __tdp_mmu_set_spte()
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma, David Matlack, Ben Gardon
Date: Tue, 21 Mar 2023 15:00:19 -0700
Message-ID: <20230321220021.2119033-12-seanjc@google.com>
In-Reply-To: <20230321220021.2119033-1-seanjc@google.com>
References: <20230321220021.2119033-1-seanjc@google.com>

From: Vipin Sharma

Remove bool parameter "record_acc_track" from __tdp_mmu_set_spte() and
refactor the code.  This variable is always set to true by its caller.

Remove single and double underscore prefix from tdp_mmu_set_spte()
related APIs:
1. Change __tdp_mmu_set_spte() to tdp_mmu_set_spte()
2. Change _tdp_mmu_set_spte() to tdp_mmu_iter_set_spte()

Signed-off-by: Vipin Sharma
Reviewed-by: David Matlack
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 51 +++++++++++++-------------------------
 1 file changed, 17 insertions(+), 34 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index cdfb67ef5800..9649e0fe4302 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -695,7 +695,7 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
 
 
 /*
- * __tdp_mmu_set_spte - Set a TDP MMU SPTE and handle the associated bookkeeping
+ * tdp_mmu_set_spte - Set a TDP MMU SPTE and handle the associated bookkeeping
  * @kvm: KVM instance
  * @as_id: Address space ID, i.e. regular vs. SMM
  * @sptep: Pointer to the SPTE
@@ -703,18 +703,12 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
  * @new_spte: The new value that will be set for the SPTE
  * @gfn: The base GFN that was (or will be) mapped by the SPTE
  * @level: The level _containing_ the SPTE (its parent PT's level)
- * @record_acc_track: Notify the MM subsystem of changes to the accessed state
- *		      of the page.  Should be set unless handling an MMU
- *		      notifier for access tracking.  Leaving record_acc_track
- *		      unset in that case prevents page accesses from being
- *		      double counted.
  *
  * Returns the old SPTE value, which _may_ be different than @old_spte if the
  * SPTE had volatile bits.
  */
-static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
-			      u64 old_spte, u64 new_spte, gfn_t gfn, int level,
-			      bool record_acc_track)
+static u64 tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
+			    u64 old_spte, u64 new_spte, gfn_t gfn, int level)
 {
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
@@ -730,30 +724,19 @@ static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
 	old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte, new_spte, level);
 
 	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level, false);
-
-	if (record_acc_track)
-		handle_changed_spte_acc_track(old_spte, new_spte, level);
-
+	handle_changed_spte_acc_track(old_spte, new_spte, level);
 	handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte, new_spte,
 				      level);
 	return old_spte;
 }
 
-static inline void _tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
-				     u64 new_spte, bool record_acc_track)
+static inline void tdp_mmu_iter_set_spte(struct kvm *kvm, struct tdp_iter *iter,
+					 u64 new_spte)
 {
 	WARN_ON_ONCE(iter->yielded);
-
-	iter->old_spte = __tdp_mmu_set_spte(kvm, iter->as_id, iter->sptep,
-					    iter->old_spte, new_spte,
-					    iter->gfn, iter->level,
-					    record_acc_track);
-}
-
-static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
-				    u64 new_spte)
-{
-	_tdp_mmu_set_spte(kvm, iter, new_spte, true);
+	iter->old_spte = tdp_mmu_set_spte(kvm, iter->as_id, iter->sptep,
+					  iter->old_spte, new_spte,
+					  iter->gfn, iter->level);
 }
 
 #define tdp_root_for_each_pte(_iter, _root, _start, _end) \
@@ -845,7 +828,7 @@ static void __tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
 			continue;
 
 		if (!shared)
-			tdp_mmu_set_spte(kvm, &iter, 0);
+			tdp_mmu_iter_set_spte(kvm, &iter, 0);
 		else if (tdp_mmu_set_spte_atomic(kvm, &iter, 0))
 			goto retry;
 	}
@@ -902,8 +885,8 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 	if (WARN_ON_ONCE(!is_shadow_present_pte(old_spte)))
 		return false;
 
-	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte, 0,
-			   sp->gfn, sp->role.level + 1, true);
+	tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte, 0,
+			 sp->gfn, sp->role.level + 1);
 
 	return true;
 }
@@ -937,7 +920,7 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 		    !is_last_spte(iter.old_spte, iter.level))
 			continue;
 
-		tdp_mmu_set_spte(kvm, &iter, 0);
+		tdp_mmu_iter_set_spte(kvm, &iter, 0);
 		flush = true;
 	}
 
@@ -1107,7 +1090,7 @@ static int tdp_mmu_link_sp(struct kvm *kvm, struct tdp_iter *iter,
 		if (ret)
 			return ret;
 	} else {
-		tdp_mmu_set_spte(kvm, iter, spte);
+		tdp_mmu_iter_set_spte(kvm, iter, spte);
 	}
 
 	tdp_account_mmu_page(kvm, sp);
@@ -1314,13 +1297,13 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
 	 * invariant that the PFN of a present * leaf SPTE can never change.
 	 * See __handle_changed_spte().
 	 */
-	tdp_mmu_set_spte(kvm, iter, 0);
+	tdp_mmu_iter_set_spte(kvm, iter, 0);
 
 	if (!pte_write(range->pte)) {
 		new_spte = kvm_mmu_changed_pte_notifier_make_spte(iter->old_spte,
 								  pte_pfn(range->pte));
 
-		tdp_mmu_set_spte(kvm, iter, new_spte);
+		tdp_mmu_iter_set_spte(kvm, iter, new_spte);
 	}
 
 	return true;
@@ -1805,7 +1788,7 @@ static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root,
 		if (new_spte == iter.old_spte)
 			break;
 
-		tdp_mmu_set_spte(kvm, &iter, new_spte);
+		tdp_mmu_iter_set_spte(kvm, &iter, new_spte);
 		spte_set = true;
 	}
 
-- 
2.40.0.rc2.332.ga46443480c-goog
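The API that remains after this cleanup -- one setter that returns the old SPTE value, and a thin iterator wrapper that refreshes iter->old_spte -- can be modeled in a few lines of userspace C. All names, types and the "bookkeeping" hook below are placeholders for illustration only, not the kernel's.

/*
 * Schematic model of the two-level API: core setter + iterator wrapper.
 */
#include <stdint.h>
#include <stdio.h>

struct toy_iter {
	uint64_t *sptep;	/* pointer to the current entry */
	uint64_t old_spte;	/* cached value seen by the walker */
	int level;
};

static void handle_changed(uint64_t old_spte, uint64_t new_spte, int level)
{
	printf("level %d: %#llx -> %#llx\n", level,
	       (unsigned long long)old_spte, (unsigned long long)new_spte);
}

/* Core setter: writes the entry, runs bookkeeping, returns the old value. */
static uint64_t toy_set_spte(uint64_t *sptep, uint64_t old_spte,
			     uint64_t new_spte, int level)
{
	*sptep = new_spte;
	handle_changed(old_spte, new_spte, level);
	return old_spte;
}

/* Iterator wrapper: keeps iter->old_spte in sync with what was written. */
static void toy_iter_set_spte(struct toy_iter *iter, uint64_t new_spte)
{
	iter->old_spte = toy_set_spte(iter->sptep, iter->old_spte,
				      new_spte, iter->level);
}

int main(void)
{
	uint64_t spte = 0x5;
	struct toy_iter iter = { .sptep = &spte, .old_spte = spte, .level = 1 };

	toy_iter_set_spte(&iter, 0);	/* e.g. zapping a leaf entry */
	return 0;
}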
From nobody Sun Feb 8 16:33:56 2026
Subject: [PATCH v4 12/13] KVM: x86/mmu: Remove handle_changed_spte_dirty_log()
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma, David Matlack, Ben Gardon
Date: Tue, 21 Mar 2023 15:00:20 -0700
Message-ID: <20230321220021.2119033-13-seanjc@google.com>
In-Reply-To: <20230321220021.2119033-1-seanjc@google.com>
References: <20230321220021.2119033-1-seanjc@google.com>

From: Vipin Sharma

Remove handle_changed_spte_dirty_log() as there is no code flow which
sets a 4KiB SPTE writable and hits this path.  This function marks the
page dirty in a memslot only if the new SPTE is 4KiB in size and
writable.

Current users of handle_changed_spte_dirty_log() are:
1. set_spte_gfn() - Creates only non-writable SPTEs.
2. write_protect_gfn() - Changes an SPTE to non-writable.
3. zap leaf and roots APIs - Everything is 0.
4. handle_removed_pt() - Sets SPTEs to REMOVED_SPTE.
5. tdp_mmu_link_sp() - Makes non-leaf SPTEs.

There is also no path which creates a writable 4KiB SPTE without going
through make_spte(), and that function takes care of marking the SPTE
dirty in the memslot if it is PT_WRITABLE.

Signed-off-by: Vipin Sharma
Reviewed-by: David Matlack
[sean: add blurb to __handle_changed_spte()'s comment]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 26 +++-----------------------
 1 file changed, 3 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 9649e0fe4302..e8ee49b6da5b 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -345,24 +345,6 @@ static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level)
 		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
 }
 
-static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
-					  u64 old_spte, u64 new_spte, int level)
-{
-	bool pfn_changed;
-	struct kvm_memory_slot *slot;
-
-	if (level > PG_LEVEL_4K)
-		return;
-
-	pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
-
-	if ((!is_writable_pte(old_spte) || pfn_changed) &&
-	    is_writable_pte(new_spte)) {
-		slot = __gfn_to_memslot(__kvm_memslots(kvm, as_id), gfn);
-		mark_page_dirty_in_slot(kvm, slot, gfn);
-	}
-}
-
 static void tdp_account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	kvm_account_pgtable_pages((void *)sp->spt, +1);
@@ -516,7 +498,9 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
  *	    the MMU lock and the operation must synchronize with other
  *	    threads that might be modifying SPTEs.
  *
- * Handle bookkeeping that might result from the modification of a SPTE.
+ * Handle bookkeeping that might result from the modification of a SPTE.  Note,
+ * dirty logging updates are handled in common code, not here (see make_spte()
+ * and fast_pf_fix_direct_spte()).
  */
 static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 				  u64 old_spte, u64 new_spte, int level,
@@ -613,8 +597,6 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level,
 			      shared);
 	handle_changed_spte_acc_track(old_spte, new_spte, level);
-	handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte,
-				      new_spte, level);
 }
 
 /*
@@ -725,8 +707,6 @@ static u64 tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
 
 	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level, false);
 	handle_changed_spte_acc_track(old_spte, new_spte, level);
-	handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte, new_spte,
-				      level);
 	return old_spte;
 }
 
-- 
2.40.0.rc2.332.ga46443480c-goog
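The argument for dropping the helper -- dirty logging is handled at the single point that can create a writable SPTE, so a generic change handler has nothing left to do -- is illustrated by the toy model below. The bitmap, GFN layout and function names are invented for the example and are not KVM's.

/*
 * Toy model: if the only place that can make a writable mapping also
 * marks the dirty log, the generic change handler needs no dirty-log code.
 */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SPTE_WRITABLE_BIT (1ull << 1)	/* illustrative bit position */
#define NR_GFNS 64

static unsigned char dirty_bitmap[NR_GFNS / 8];

static void mark_gfn_dirty(uint64_t gfn)
{
	dirty_bitmap[gfn / 8] |= 1u << (gfn % 8);
}

/* The single choke point that can produce a writable SPTE. */
static uint64_t toy_make_spte(uint64_t gfn, int writable)
{
	uint64_t spte = gfn << 12;

	if (writable) {
		spte |= SPTE_WRITABLE_BIT;
		mark_gfn_dirty(gfn);	/* dirty logging handled here... */
	}
	return spte;
}

/* ...so the change handler has no dirty-log work left to do. */
static void toy_handle_changed_spte(uint64_t old_spte, uint64_t new_spte)
{
	(void)old_spte;
	(void)new_spte;
}

int main(void)
{
	memset(dirty_bitmap, 0, sizeof(dirty_bitmap));

	uint64_t spte = toy_make_spte(3, 1);
	toy_handle_changed_spte(0, spte);

	assert(dirty_bitmap[0] & (1u << 3));	/* gfn 3 was logged at creation */
	return 0;
}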
From nobody Sun Feb 8 16:33:56 2026
Subject: [PATCH v4 13/13] KVM: x86/mmu: Merge all handle_changed_pte*() functions
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma, David Matlack, Ben Gardon
Date: Tue, 21 Mar 2023 15:00:21 -0700
Message-ID: <20230321220021.2119033-14-seanjc@google.com>
In-Reply-To: <20230321220021.2119033-1-seanjc@google.com>
References: <20230321220021.2119033-1-seanjc@google.com>

From: Vipin Sharma

Merge __handle_changed_spte() and handle_changed_spte_acc_track() into
a single function, handle_changed_spte(), as the two are always used
together.  Remove the existing handle_changed_spte(), as it's just a
wrapper that calls __handle_changed_spte() and
handle_changed_spte_acc_track().

Signed-off-by: Vipin Sharma
Reviewed-by: Ben Gardon
Reviewed-by: David Matlack
[sean: massage changelog]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 42 +++++++++++---------------------------
 1 file changed, 12 insertions(+), 30 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index e8ee49b6da5b..b2fca11b91ff 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -334,17 +334,6 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 				u64 old_spte, u64 new_spte, int level,
 				bool shared);
 
-static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level)
-{
-	if (!is_shadow_present_pte(old_spte) || !is_last_spte(old_spte, level))
-		return;
-
-	if (is_accessed_spte(old_spte) &&
-	    (!is_shadow_present_pte(new_spte) || !is_accessed_spte(new_spte) ||
-	     spte_to_pfn(old_spte) != spte_to_pfn(new_spte)))
-		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
-}
-
 static void tdp_account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	kvm_account_pgtable_pages((void *)sp->spt, +1);
@@ -487,7 +476,7 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
 }
 
 /**
- * __handle_changed_spte - handle bookkeeping associated with an SPTE change
+ * handle_changed_spte - handle bookkeeping associated with an SPTE change
  * @kvm: kvm instance
 * @as_id: the address space of the paging structure the SPTE was a part of
 * @gfn: the base GFN that was mapped by the SPTE
@@ -502,9 +491,9 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
 * dirty logging updates are handled in common code, not here (see make_spte()
 * and fast_pf_fix_direct_spte()).
  */
-static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
-				  u64 old_spte, u64 new_spte, int level,
-				  bool shared)
+static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
+				u64 old_spte, u64 new_spte, int level,
+				bool shared)
 {
 	bool was_present = is_shadow_present_pte(old_spte);
 	bool is_present = is_shadow_present_pte(new_spte);
@@ -588,15 +577,10 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	if (was_present && !was_leaf && (is_leaf || !is_present ||
 	    WARN_ON_ONCE(pfn_changed)))
 		handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared);
-}
 
-static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
-				u64 old_spte, u64 new_spte, int level,
-				bool shared)
-{
-	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level,
-			      shared);
-	handle_changed_spte_acc_track(old_spte, new_spte, level);
+	if (was_leaf && is_accessed_spte(old_spte) &&
+	    (!is_present || !is_accessed_spte(new_spte) || pfn_changed))
+		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
 }
 
 /*
@@ -639,9 +623,8 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
 	if (!try_cmpxchg64(sptep, &iter->old_spte, new_spte))
 		return -EBUSY;
 
-	__handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
-			      new_spte, iter->level, true);
-	handle_changed_spte_acc_track(iter->old_spte, new_spte, iter->level);
+	handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
+			    new_spte, iter->level, true);
 
 	return 0;
 }
@@ -705,8 +688,7 @@ static u64 tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
 
 	old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte, new_spte, level);
 
-	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level, false);
-	handle_changed_spte_acc_track(old_spte, new_spte, level);
+	handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level, false);
 	return old_spte;
 }
 
@@ -1275,7 +1257,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
 	 * Note, when changing a read-only SPTE, it's not strictly necessary to
 	 * zero the SPTE before setting the new PFN, but doing so preserves the
 	 * invariant that the PFN of a present * leaf SPTE can never change.
-	 * See __handle_changed_spte().
+	 * See handle_changed_spte().
 	 */
 	tdp_mmu_iter_set_spte(kvm, iter, 0);
 
@@ -1300,7 +1282,7 @@ bool kvm_tdp_mmu_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	/*
 	 * No need to handle the remote TLB flush under RCU protection, the
 	 * target SPTE _must_ be a leaf SPTE, i.e. cannot result in freeing a
-	 * shadow page. See the WARN on pfn_changed in __handle_changed_spte().
+	 * shadow page. See the WARN on pfn_changed in handle_changed_spte().
 	 */
 	return kvm_tdp_mmu_handle_gfn(kvm, range, set_spte_gfn);
 }
-- 
2.40.0.rc2.332.ga46443480c-goog
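After the merge, the single handler ends with the accessed-bit bookkeeping as its final step. A compact userspace model of that structure is sketched below; bit positions and names are illustrative only, not KVM's.

/*
 * Compact model of the merged handler: structural bookkeeping first, then
 * propagate the Accessed bit of a disappearing/aged leaf entry to the
 * backing page.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SPTE_PRESENT_BIT  (1ull << 0)	/* illustrative bit positions */
#define SPTE_ACCESSED_BIT (1ull << 5)
#define SPTE_PFN_MASK     (~0ull << 12)

static void set_pfn_accessed(uint64_t pfn)
{
	printf("pfn %#llx marked accessed\n", (unsigned long long)pfn);
}

static void toy_handle_changed_spte(uint64_t old_spte, uint64_t new_spte,
				    bool was_leaf)
{
	bool is_present = new_spte & SPTE_PRESENT_BIT;
	bool pfn_changed = (old_spte & SPTE_PFN_MASK) != (new_spte & SPTE_PFN_MASK);

	/* ...structural bookkeeping (links, refcounts, etc.) would go here... */

	/* Folded-in accessed-bit handling, formerly a separate helper. */
	if (was_leaf && (old_spte & SPTE_ACCESSED_BIT) &&
	    (!is_present || !(new_spte & SPTE_ACCESSED_BIT) || pfn_changed))
		set_pfn_accessed((old_spte & SPTE_PFN_MASK) >> 12);
}

int main(void)
{
	uint64_t old_spte = SPTE_PRESENT_BIT | SPTE_ACCESSED_BIT | (0x42ull << 12);

	toy_handle_changed_spte(old_spte, 0, true);	/* zap an accessed leaf */
	return 0;
}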