From nobody Fri Sep 12 09:03:52 2025
Date: Fri, 10 Feb 2023 17:46:20 -0800
In-Reply-To: <20230211014626.3659152-1-vipinsh@google.com>
References: <20230211014626.3659152-1-vipinsh@google.com>
Message-ID: <20230211014626.3659152-2-vipinsh@google.com>
Subject: [Patch v3 1/7] KVM: x86/mmu: Add a helper function to check if an SPTE needs atomic write
From: Vipin Sharma <vipinsh@google.com>
To: seanjc@google.com, pbonzini@redhat.com, bgardon@google.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma <vipinsh@google.com>

Move the conditions in kvm_tdp_mmu_write_spte() that check whether an
SPTE should be written atomically into a separate function. This new
function, kvm_tdp_mmu_spte_need_atomic_write(), will be used in future
commits to optimize clearing bits in SPTEs.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
Reviewed-by: David Matlack <dmatlack@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
---
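(Aside, not part of the posted patch: the decision the new helper encodes
can be sketched as a standalone C11 program. The bit masks and the is_leaf
flag below are simplified stand-ins for is_shadow_present_pte(),
is_last_spte() and spte_has_volatile_bits(), not KVM's real definitions.)

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative bit layout, not KVM's. */
#define SPTE_PRESENT	(1ull << 0)
#define SPTE_WRITABLE	(1ull << 1)	/* settable by the fast page fault path */
#define SPTE_ACCESSED	(1ull << 5)	/* settable by the CPU */
#define SPTE_DIRTY	(1ull << 6)	/* settable by the CPU */

static bool spte_need_atomic_write(uint64_t spte, bool is_leaf)
{
	/* Present leaf SPTEs with volatile bits must be updated with an
	 * atomic read-modify-write so concurrent setters are not lost. */
	return (spte & SPTE_PRESENT) && is_leaf &&
	       (spte & (SPTE_WRITABLE | SPTE_ACCESSED | SPTE_DIRTY));
}

static uint64_t write_spte(_Atomic uint64_t *sptep, uint64_t old_spte,
			   uint64_t new_spte, bool is_leaf)
{
	if (spte_need_atomic_write(old_spte, is_leaf))
		/* xchg-style write: returns the value actually replaced,
		 * so Accessed/Dirty bits set concurrently are captured. */
		return atomic_exchange_explicit(sptep, new_spte,
						memory_order_relaxed);

	/* No concurrent setters are possible; a plain store suffices. */
	atomic_store_explicit(sptep, new_spte, memory_order_relaxed);
	return old_spte;
}

int main(void)
{
	_Atomic uint64_t spte = SPTE_PRESENT | SPTE_WRITABLE;
	uint64_t old = write_spte(&spte, atomic_load(&spte), 0, true);

	printf("replaced %#llx, SPTE is now %#llx\n",
	       (unsigned long long)old,
	       (unsigned long long)atomic_load(&spte));
	return 0;
}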
 arch/x86/kvm/mmu/tdp_iter.h | 34 ++++++++++++++++++++--------------
 1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index f0af385c56e0..c11c5d00b2c1 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -29,23 +29,29 @@ static inline void __kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 new_spte)
 	WRITE_ONCE(*rcu_dereference(sptep), new_spte);
 }
 
+/*
+ * SPTEs must be modified atomically if they are shadow-present, leaf
+ * SPTEs, and have volatile bits, i.e. has bits that can be set outside
+ * of mmu_lock. The Writable bit can be set by KVM's fast page fault
+ * handler, and Accessed and Dirty bits can be set by the CPU.
+ *
+ * Note, non-leaf SPTEs do have Accessed bits and those bits are
+ * technically volatile, but KVM doesn't consume the Accessed bit of
+ * non-leaf SPTEs, i.e. KVM doesn't care if it clobbers the bit. This
+ * logic needs to be reassessed if KVM were to use non-leaf Accessed
+ * bits, e.g. to skip stepping down into child SPTEs when aging SPTEs.
+ */
+static inline bool kvm_tdp_mmu_spte_need_atomic_write(u64 old_spte, int level)
+{
+	return is_shadow_present_pte(old_spte) &&
+	       is_last_spte(old_spte, level) &&
+	       spte_has_volatile_bits(old_spte);
+}
+
 static inline u64 kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 old_spte,
 					 u64 new_spte, int level)
 {
-	/*
-	 * Atomically write the SPTE if it is a shadow-present, leaf SPTE with
-	 * volatile bits, i.e. has bits that can be set outside of mmu_lock.
-	 * The Writable bit can be set by KVM's fast page fault handler, and
-	 * Accessed and Dirty bits can be set by the CPU.
-	 *
-	 * Note, non-leaf SPTEs do have Accessed bits and those bits are
-	 * technically volatile, but KVM doesn't consume the Accessed bit of
-	 * non-leaf SPTEs, i.e. KVM doesn't care if it clobbers the bit. This
-	 * logic needs to be reassessed if KVM were to use non-leaf Accessed
-	 * bits, e.g. to skip stepping down into child SPTEs when aging SPTEs.
-	 */
-	if (is_shadow_present_pte(old_spte) && is_last_spte(old_spte, level) &&
-	    spte_has_volatile_bits(old_spte))
+	if (kvm_tdp_mmu_spte_need_atomic_write(old_spte, level))
 		return kvm_tdp_mmu_write_spte_atomic(sptep, new_spte);
 
 	__kvm_tdp_mmu_write_spte(sptep, new_spte);
-- 
2.39.1.581.gbfd45094c4-goog
From nobody Fri Sep 12 09:03:52 2025
Date: Fri, 10 Feb 2023 17:46:21 -0800
In-Reply-To: <20230211014626.3659152-1-vipinsh@google.com>
References: <20230211014626.3659152-1-vipinsh@google.com>
Message-ID: <20230211014626.3659152-3-vipinsh@google.com>
Subject: [Patch v3 2/7] KVM: x86/mmu: Atomically clear SPTE dirty state in the clear-dirty-log flow
From: Vipin Sharma <vipinsh@google.com>
To: seanjc@google.com, pbonzini@redhat.com, bgardon@google.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma <vipinsh@google.com>

Do an atomic-AND to clear the dirty state of SPTEs. Optimize the
clear-dirty-log flow by avoiding __handle_changed_spte() and calling
kvm_set_pfn_dirty() directly. The atomic-AND fetches the latest value
of the SPTE, clears only its dirty state, and stores the new value,
which avoids the unnecessary checks performed by
__handle_changed_spte().

With the removal of tdp_mmu_set_spte_no_dirty_log(), the
"record_dirty_log" parameter in __tdp_mmu_set_spte() is now obsolete:
it is always set to true by its callers. This dead code will be
cleaned up in future commits.

Tested on a VM (160 vCPUs, 160 GB memory); the performance of the
clear dirty log stage of dirty_log_perf_test improved by ~40%.

Before optimization:
--------------------
Iteration 1 clear dirty log time: 3.638543593s
Iteration 2 clear dirty log time: 3.145032742s
Iteration 3 clear dirty log time: 3.142340358s
Clear dirty log over 3 iterations took 9.925916693s. (Avg 3.308638897s/iteration)

After optimization:
-------------------
Iteration 1 clear dirty log time: 2.318988110s
Iteration 2 clear dirty log time: 1.794470164s
Iteration 3 clear dirty log time: 1.791668628s
Clear dirty log over 3 iterations took 5.905126902s. (Avg 1.968375634s/iteration)

Reviewed-by: David Matlack <dmatlack@google.com>
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
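(Aside, not part of the posted patch: why a fetch-AND cannot lose
concurrent hardware Accessed/Dirty updates can be shown with a
self-contained C11 snippet. The masks are simplified stand-ins, and
atomic_fetch_and() plays the role of the kernel's atomic64_fetch_and().)

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define SPTE_ACCESSED	(1ull << 5)
#define SPTE_DIRTY	(1ull << 6)

int main(void)
{
	_Atomic uint64_t spte = SPTE_DIRTY;
	uint64_t old;

	/* Simulate the CPU setting Accessed between our read of the SPTE
	 * and our write back. */
	atomic_fetch_or(&spte, SPTE_ACCESSED);

	/* Clear only the Dirty bit. The returned value is the latest SPTE
	 * contents, so "was it dirty?" can be tested race-free. */
	old = atomic_fetch_and(&spte, ~SPTE_DIRTY);

	printf("old=%#llx now=%#llx\n", (unsigned long long)old,
	       (unsigned long long)atomic_load(&spte));
	/* old has Dirty|Accessed, now has only Accessed: the concurrent
	 * Accessed update survived, unlike with a plain read-modify-write. */
	return 0;
}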
 arch/x86/kvm/mmu/tdp_iter.h | 14 ++++++++++++++
 arch/x86/kvm/mmu/tdp_mmu.c  | 35 +++++++++++++++--------------------
 2 files changed, 29 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index c11c5d00b2c1..fae559559a80 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -58,6 +58,20 @@ static inline u64 kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 old_spte,
 	return old_spte;
 }
 
+static inline u64 tdp_mmu_clear_spte_bits(tdp_ptep_t sptep, u64 old_spte,
+					  u64 mask, int level)
+{
+	atomic64_t *sptep_atomic;
+
+	if (kvm_tdp_mmu_spte_need_atomic_write(old_spte, level)) {
+		sptep_atomic = (atomic64_t *)rcu_dereference(sptep);
+		return (u64)atomic64_fetch_and(~mask, sptep_atomic);
+	}
+
+	__kvm_tdp_mmu_write_spte(sptep, old_spte & ~mask);
+	return old_spte;
+}
+
 /*
  * A TDP iterator performs a pre-order walk over a TDP paging structure.
  */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index bba33aea0fb0..66ccbeb9d845 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -771,13 +771,6 @@ static inline void tdp_mmu_set_spte_no_acc_track(struct kvm *kvm,
 	_tdp_mmu_set_spte(kvm, iter, new_spte, false, true);
 }
 
-static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm,
-						 struct tdp_iter *iter,
-						 u64 new_spte)
-{
-	_tdp_mmu_set_spte(kvm, iter, new_spte, true, false);
-}
-
 #define tdp_root_for_each_pte(_iter, _root, _start, _end) \
 	for_each_tdp_pte(_iter, _root, _start, _end)
 
@@ -1677,8 +1670,13 @@ bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm,
 static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
 				  gfn_t gfn, unsigned long mask, bool wrprot)
 {
+	/*
+	 * Either all SPTEs in TDP MMU will need write protection or none. This
+	 * contract will not be modified for TDP MMU pages.
+	 */
+	u64 clear_bit = (wrprot || !kvm_ad_enabled()) ? PT_WRITABLE_MASK :
+							shadow_dirty_mask;
 	struct tdp_iter iter;
-	u64 new_spte;
 
 	rcu_read_lock();
 
@@ -1693,19 +1691,16 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
 
 		mask &= ~(1UL << (iter.gfn - gfn));
 
-		if (wrprot || spte_ad_need_write_protect(iter.old_spte)) {
-			if (is_writable_pte(iter.old_spte))
-				new_spte = iter.old_spte & ~PT_WRITABLE_MASK;
-			else
-				continue;
-		} else {
-			if (iter.old_spte & shadow_dirty_mask)
-				new_spte = iter.old_spte & ~shadow_dirty_mask;
-			else
-				continue;
-		}
+		if (!(iter.old_spte & clear_bit))
+			continue;
 
-		tdp_mmu_set_spte_no_dirty_log(kvm, &iter, new_spte);
+		iter.old_spte = tdp_mmu_clear_spte_bits(iter.sptep,
+							iter.old_spte,
+							clear_bit, iter.level);
+		trace_kvm_tdp_mmu_spte_changed(iter.as_id, iter.gfn, iter.level,
+					       iter.old_spte,
+					       iter.old_spte & ~clear_bit);
+		kvm_set_pfn_dirty(spte_to_pfn(iter.old_spte));
 	}
 
 	rcu_read_unlock();
-- 
2.39.1.581.gbfd45094c4-goog
From nobody Fri Sep 12 09:03:52 2025
Date: Fri, 10 Feb 2023 17:46:22 -0800
In-Reply-To: <20230211014626.3659152-1-vipinsh@google.com>
References: <20230211014626.3659152-1-vipinsh@google.com>
Message-ID: <20230211014626.3659152-4-vipinsh@google.com>
Subject: [Patch v3 3/7] KVM: x86/mmu: Remove "record_dirty_log" in __tdp_mmu_set_spte()
From: Vipin Sharma <vipinsh@google.com>
To: seanjc@google.com, pbonzini@redhat.com, bgardon@google.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma <vipinsh@google.com>

Remove the bool parameter "record_dirty_log" from __tdp_mmu_set_spte()
and refactor the code, as this variable is always set to true by its
callers.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
Reviewed-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 24 +++++++++---------------
 1 file changed, 9 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 66ccbeb9d845..c895560244de 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -710,18 +710,13 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
  *			      notifier for access tracking. Leaving record_acc_track
  *			      unset in that case prevents page accesses from being
  *			      double counted.
- * @record_dirty_log: Record the page as dirty in the dirty bitmap if
- *		      appropriate for the change being made. Should be set
- *		      unless performing certain dirty logging operations.
- *		      Leaving record_dirty_log unset in that case prevents page
- *		      writes from being double counted.
  *
  * Returns the old SPTE value, which _may_ be different than @old_spte if the
  * SPTE had voldatile bits.
  */
 static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
 			      u64 old_spte, u64 new_spte, gfn_t gfn, int level,
-			      bool record_acc_track, bool record_dirty_log)
+			      bool record_acc_track)
 {
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
@@ -740,35 +735,34 @@ static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
 
 	if (record_acc_track)
 		handle_changed_spte_acc_track(old_spte, new_spte, level);
-	if (record_dirty_log)
-		handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte,
-					      new_spte, level);
+
+	handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte, new_spte,
+				      level);
 	return old_spte;
 }
 
 static inline void _tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
-				     u64 new_spte, bool record_acc_track,
-				     bool record_dirty_log)
+				     u64 new_spte, bool record_acc_track)
 {
 	WARN_ON_ONCE(iter->yielded);
 
 	iter->old_spte = __tdp_mmu_set_spte(kvm, iter->as_id, iter->sptep,
 					    iter->old_spte, new_spte,
 					    iter->gfn, iter->level,
-					    record_acc_track, record_dirty_log);
+					    record_acc_track);
 }
 
 static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
 				    u64 new_spte)
 {
-	_tdp_mmu_set_spte(kvm, iter, new_spte, true, true);
+	_tdp_mmu_set_spte(kvm, iter, new_spte, true);
 }
 
 static inline void tdp_mmu_set_spte_no_acc_track(struct kvm *kvm,
 						 struct tdp_iter *iter,
 						 u64 new_spte)
 {
-	_tdp_mmu_set_spte(kvm, iter, new_spte, false, true);
+	_tdp_mmu_set_spte(kvm, iter, new_spte, false);
 }
 
 #define tdp_root_for_each_pte(_iter, _root, _start, _end) \
@@ -918,7 +912,7 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 		return false;
 
 	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte, 0,
-			   sp->gfn, sp->role.level + 1, true, true);
+			   sp->gfn, sp->role.level + 1, true);
 
 	return true;
 }
-- 
2.39.1.581.gbfd45094c4-goog
From nobody Fri Sep 12 09:03:52 2025
Date: Fri, 10 Feb 2023 17:46:23 -0800
In-Reply-To: <20230211014626.3659152-1-vipinsh@google.com>
References: <20230211014626.3659152-1-vipinsh@google.com>
Message-ID: <20230211014626.3659152-5-vipinsh@google.com>
Subject: [Patch v3 4/7] KVM: x86/mmu: Optimize SPTE change for aging gfn range
From: Vipin Sharma <vipinsh@google.com>
To: seanjc@google.com, pbonzini@redhat.com, bgardon@google.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma <vipinsh@google.com>

There is no need to check all of the conditions in
__handle_changed_spte(): aging a gfn range only resets the Accessed bit
or marks the SPTE for access tracking. Use an atomic operation to reset
just those bits, which avoids the many condition checks in
__handle_changed_spte(). Also, clean up the code by removing dead code
and obsolete API parameters.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
Reviewed-by: David Matlack <dmatlack@google.com>
---
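(Aside, not part of the posted patch: the two aging cases can be sketched
outside the kernel. mark_for_access_track() is a crude stand-in for
mark_spte_for_access_track(), and the plain store in the non-A/D branch
glosses over kvm_tdp_mmu_write_spte()'s atomic-vs-plain-write choice.)

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SPTE_PRESENT	(1ull << 0)
#define SPTE_WRITABLE	(1ull << 1)
#define SPTE_ACCESSED	(1ull << 5)
#define ACCESS_TRACK	(1ull << 60)	/* software-available marker bit */

/* Stand-in: real KVM stashes and clears the RWX bits so the next
 * access faults and can be observed. */
static uint64_t mark_for_access_track(uint64_t spte)
{
	return (spte | ACCESS_TRACK) & ~SPTE_PRESENT;
}

static bool age_spte(_Atomic uint64_t *sptep, bool ad_enabled)
{
	uint64_t old = atomic_load(sptep);

	if (ad_enabled) {
		if (!(old & SPTE_ACCESSED))
			return false;
		/* Accessed is hardware-volatile: clear just that bit with
		 * an atomic AND instead of a full read-modify-write. */
		atomic_fetch_and(sptep, ~SPTE_ACCESSED);
	} else {
		if (old & ACCESS_TRACK)
			return false;	/* already marked, i.e. not young */
		/* No A/D bits: remember the dirty state before Writable
		 * disappears, then make the next access fault. */
		if (old & SPTE_WRITABLE)
			printf("would call kvm_set_pfn_dirty()\n");
		atomic_store(sptep, mark_for_access_track(old));
	}

	return true;
}

int main(void)
{
	_Atomic uint64_t a = SPTE_PRESENT | SPTE_ACCESSED;
	_Atomic uint64_t b = SPTE_PRESENT | SPTE_WRITABLE;

	printf("A/D aging, was young: %d\n", age_spte(&a, true));
	printf("access-track aging, was young: %d\n", age_spte(&b, false));
	return 0;
}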
 arch/x86/kvm/mmu/tdp_mmu.c | 36 +++++++++++++++++++-----------------
 1 file changed, 19 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index c895560244de..5d6e77554797 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -758,13 +758,6 @@ static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
 	_tdp_mmu_set_spte(kvm, iter, new_spte, true);
 }
 
-static inline void tdp_mmu_set_spte_no_acc_track(struct kvm *kvm,
-						 struct tdp_iter *iter,
-						 u64 new_spte)
-{
-	_tdp_mmu_set_spte(kvm, iter, new_spte, false);
-}
-
 #define tdp_root_for_each_pte(_iter, _root, _start, _end) \
 	for_each_tdp_pte(_iter, _root, _start, _end)
 
@@ -1251,32 +1244,41 @@ static __always_inline bool kvm_tdp_mmu_handle_gfn(struct kvm *kvm,
 /*
  * Mark the SPTEs range of GFNs [start, end) unaccessed and return non-zero
  * if any of the GFNs in the range have been accessed.
+ *
+ * No need to mark corresponding PFN as accessed as this call is coming from
+ * the clear_young() or clear_flush_young() notifier, which uses the return
+ * value to determine if the page has been accessed.
  */
 static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,
 			  struct kvm_gfn_range *range)
 {
-	u64 new_spte = 0;
+	u64 new_spte;
 
 	/* If we have a non-accessed entry we don't need to change the pte. */
 	if (!is_accessed_spte(iter->old_spte))
 		return false;
 
-	new_spte = iter->old_spte;
-
-	if (spte_ad_enabled(new_spte)) {
-		new_spte &= ~shadow_accessed_mask;
+	if (spte_ad_enabled(iter->old_spte)) {
+		iter->old_spte = tdp_mmu_clear_spte_bits(iter->sptep,
+							 iter->old_spte,
+							 shadow_accessed_mask,
+							 iter->level);
+		new_spte = iter->old_spte & ~shadow_accessed_mask;
 	} else {
+		new_spte = mark_spte_for_access_track(iter->old_spte);
+		iter->old_spte = kvm_tdp_mmu_write_spte(iter->sptep,
+							iter->old_spte, new_spte,
+							iter->level);
 		/*
 		 * Capture the dirty status of the page, so that it doesn't get
 		 * lost when the SPTE is marked for access tracking.
 		 */
-		if (is_writable_pte(new_spte))
-			kvm_set_pfn_dirty(spte_to_pfn(new_spte));
-
-		new_spte = mark_spte_for_access_track(new_spte);
+		if (is_writable_pte(iter->old_spte))
+			kvm_set_pfn_dirty(spte_to_pfn(iter->old_spte));
 	}
 
-	tdp_mmu_set_spte_no_acc_track(kvm, iter, new_spte);
+	trace_kvm_tdp_mmu_spte_changed(iter->as_id, iter->gfn, iter->level,
+				       iter->old_spte, new_spte);
 
 	return true;
 }
-- 
2.39.1.581.gbfd45094c4-goog
From nobody Fri Sep 12 09:03:52 2025
Date: Fri, 10 Feb 2023 17:46:24 -0800
In-Reply-To: <20230211014626.3659152-1-vipinsh@google.com>
References: <20230211014626.3659152-1-vipinsh@google.com>
Message-ID: <20230211014626.3659152-6-vipinsh@google.com>
Subject: [Patch v3 5/7] KVM: x86/mmu: Remove "record_acc_track" in __tdp_mmu_set_spte()
From: Vipin Sharma <vipinsh@google.com>
To: seanjc@google.com, pbonzini@redhat.com, bgardon@google.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma <vipinsh@google.com>

Remove the bool parameter "record_acc_track" from __tdp_mmu_set_spte()
and refactor the code; this variable is always set to true by its
callers.

Remove the single and double underscore prefixes from the
tdp_mmu_set_spte() family of APIs:
1. Change __tdp_mmu_set_spte() to tdp_mmu_set_spte()
2. Change _tdp_mmu_set_spte() to tdp_mmu_iter_set_spte()

Signed-off-by: Vipin Sharma <vipinsh@google.com>
Reviewed-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 51 +++++++++++++-------------------------
 1 file changed, 17 insertions(+), 34 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 5d6e77554797..e50e869bf879 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -697,7 +697,7 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
 
 
 /*
- * __tdp_mmu_set_spte - Set a TDP MMU SPTE and handle the associated bookkeeping
+ * tdp_mmu_set_spte - Set a TDP MMU SPTE and handle the associated bookkeeping
  * @kvm: KVM instance
  * @as_id: Address space ID, i.e. regular vs. SMM
  * @sptep: Pointer to the SPTE
@@ -705,18 +705,12 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
  * @new_spte: The new value that will be set for the SPTE
  * @gfn: The base GFN that was (or will be) mapped by the SPTE
  * @level: The level _containing_ the SPTE (its parent PT's level)
- * @record_acc_track: Notify the MM subsystem of changes to the accessed state
- *		      of the page. Should be set unless handling an MMU
- *		      notifier for access tracking. Leaving record_acc_track
- *		      unset in that case prevents page accesses from being
- *		      double counted.
 *
 * Returns the old SPTE value, which _may_ be different than @old_spte if the
 * SPTE had voldatile bits.
 */
-static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
-			      u64 old_spte, u64 new_spte, gfn_t gfn, int level,
-			      bool record_acc_track)
+static u64 tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
+			    u64 old_spte, u64 new_spte, gfn_t gfn, int level)
 {
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
@@ -732,30 +726,19 @@ static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
 	old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte, new_spte, level);
 
 	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level, false);
-
-	if (record_acc_track)
-		handle_changed_spte_acc_track(old_spte, new_spte, level);
-
+	handle_changed_spte_acc_track(old_spte, new_spte, level);
 	handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte, new_spte,
 				      level);
 	return old_spte;
 }
 
-static inline void _tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
-				     u64 new_spte, bool record_acc_track)
+static inline void tdp_mmu_iter_set_spte(struct kvm *kvm, struct tdp_iter *iter,
+					 u64 new_spte)
 {
 	WARN_ON_ONCE(iter->yielded);
-
-	iter->old_spte = __tdp_mmu_set_spte(kvm, iter->as_id, iter->sptep,
-					    iter->old_spte, new_spte,
-					    iter->gfn, iter->level,
-					    record_acc_track);
-}
-
-static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
-				    u64 new_spte)
-{
-	_tdp_mmu_set_spte(kvm, iter, new_spte, true);
+	iter->old_spte = tdp_mmu_set_spte(kvm, iter->as_id, iter->sptep,
+					  iter->old_spte, new_spte,
+					  iter->gfn, iter->level);
 }
 
 #define tdp_root_for_each_pte(_iter, _root, _start, _end) \
@@ -847,7 +830,7 @@ static void __tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
 			continue;
 
 		if (!shared)
-			tdp_mmu_set_spte(kvm, &iter, 0);
+			tdp_mmu_iter_set_spte(kvm, &iter, 0);
 		else if (tdp_mmu_set_spte_atomic(kvm, &iter, 0))
 			goto retry;
 	}
@@ -904,8 +887,8 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 	if (WARN_ON_ONCE(!is_shadow_present_pte(old_spte)))
 		return false;
 
-	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte, 0,
-			   sp->gfn, sp->role.level + 1, true);
+	tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte, 0,
+			 sp->gfn, sp->role.level + 1);
 
 	return true;
 }
@@ -939,7 +922,7 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 		    !is_last_spte(iter.old_spte, iter.level))
 			continue;
 
-		tdp_mmu_set_spte(kvm, &iter, 0);
+		tdp_mmu_iter_set_spte(kvm, &iter, 0);
 		flush = true;
 	}
 
@@ -1110,7 +1093,7 @@ static int tdp_mmu_link_sp(struct kvm *kvm, struct tdp_iter *iter,
 		if (ret)
 			return ret;
 	} else {
-		tdp_mmu_set_spte(kvm, iter, spte);
+		tdp_mmu_iter_set_spte(kvm, iter, spte);
 	}
 
 	tdp_account_mmu_page(kvm, sp);
@@ -1317,13 +1300,13 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
 	 * invariant that the PFN of a present leaf SPTE can never change.
 	 * See __handle_changed_spte().
 	 */
-	tdp_mmu_set_spte(kvm, iter, 0);
+	tdp_mmu_iter_set_spte(kvm, iter, 0);
 
 	if (!pte_write(range->pte)) {
 		new_spte = kvm_mmu_changed_pte_notifier_make_spte(iter->old_spte,
 								  pte_pfn(range->pte));
 
-		tdp_mmu_set_spte(kvm, iter, new_spte);
+		tdp_mmu_iter_set_spte(kvm, iter, new_spte);
 	}
 
 	return true;
@@ -1814,7 +1797,7 @@ static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root,
 		if (new_spte == iter.old_spte)
 			break;
 
-		tdp_mmu_set_spte(kvm, &iter, new_spte);
+		tdp_mmu_iter_set_spte(kvm, &iter, new_spte);
 		spte_set = true;
 	}
 
-- 
2.39.1.581.gbfd45094c4-goog
From nobody Fri Sep 12 09:03:52 2025
Date: Fri, 10 Feb 2023 17:46:25 -0800
In-Reply-To: <20230211014626.3659152-1-vipinsh@google.com>
References: <20230211014626.3659152-1-vipinsh@google.com>
Message-ID: <20230211014626.3659152-7-vipinsh@google.com>
Subject: [Patch v3 6/7] KVM: x86/mmu: Remove handle_changed_spte_dirty_log()
From: Vipin Sharma <vipinsh@google.com>
To: seanjc@google.com, pbonzini@redhat.com, bgardon@google.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma <vipinsh@google.com>

Remove handle_changed_spte_dirty_log() as there is no code flow which
sets a 4KiB SPTE writable and hits this path. This function marks the
page dirty in a memslot only if the new SPTE is 4KiB in size and
writable.

Current users of handle_changed_spte_dirty_log() are:
1. set_spte_gfn() - Creates only non-writable SPTEs.
2. write_protect_gfn() - Changes an SPTE to non-writable.
3. zap leaf and roots APIs - Everything is 0.
4. handle_removed_pt() - Sets SPTEs to REMOVED_SPTE.
5. tdp_mmu_link_sp() - Makes non-leaf SPTEs.

There is also no path which creates a writable 4KiB SPTE without going
through make_spte(), and that function takes care of marking the SPTE
dirty in the memslot if it is PT_WRITABLE.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
Reviewed-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 22 ----------------------
 1 file changed, 22 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index e50e869bf879..0c031319e901 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -345,24 +345,6 @@ static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level)
 		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
 }
 
-static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
-					  u64 old_spte, u64 new_spte, int level)
-{
-	bool pfn_changed;
-	struct kvm_memory_slot *slot;
-
-	if (level > PG_LEVEL_4K)
-		return;
-
-	pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
-
-	if ((!is_writable_pte(old_spte) || pfn_changed) &&
-	    is_writable_pte(new_spte)) {
-		slot = __gfn_to_memslot(__kvm_memslots(kvm, as_id), gfn);
-		mark_page_dirty_in_slot(kvm, slot, gfn);
-	}
-}
-
 static void tdp_account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	kvm_account_pgtable_pages((void *)sp->spt, +1);
@@ -614,8 +596,6 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level,
 			      shared);
 	handle_changed_spte_acc_track(old_spte, new_spte, level);
-	handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte,
-				      new_spte, level);
 }
 
 /*
@@ -727,8 +707,6 @@ static u64 tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
 
 	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level, false);
 	handle_changed_spte_acc_track(old_spte, new_spte, level);
-	handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte, new_spte,
-				      level);
 	return old_spte;
 }
-- 
2.39.1.581.gbfd45094c4-goog
From nobody Fri Sep 12 09:03:52 2025
Date: Fri, 10 Feb 2023 17:46:26 -0800
In-Reply-To: <20230211014626.3659152-1-vipinsh@google.com>
References: <20230211014626.3659152-1-vipinsh@google.com>
Message-ID: <20230211014626.3659152-8-vipinsh@google.com>
Subject: [Patch v3 7/7] KVM: x86/mmu: Merge all handle_changed_pte* functions.
From: Vipin Sharma <vipinsh@google.com>
To: seanjc@google.com, pbonzini@redhat.com, bgardon@google.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma <vipinsh@google.com>

__handle_changed_spte() and handle_changed_spte_acc_track() are always
used together. Merge these two functions and name the new function
handle_changed_spte(). Remove the existing handle_changed_spte()
wrapper, which just called __handle_changed_spte() and
handle_changed_spte_acc_track(). This converges the SPTE change
handling code to a single place.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Reviewed-by: David Matlack <dmatlack@google.com>
---
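(Aside, not part of the posted patch: the net effect of the merge is that
the Accessed-bit handoff to the backing page now sits at the tail of the
single bookkeeping function. In isolation, and with a simplified SPTE
layout, that condition looks like this.)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SPTE_PRESENT	(1ull << 0)
#define SPTE_ACCESSED	(1ull << 5)
#define PFN_SHIFT	12

static uint64_t spte_to_pfn(uint64_t spte)
{
	return spte >> PFN_SHIFT;
}

/* Mirrors the tail of the merged handle_changed_spte(): the Accessed
 * state must be propagated to the backing page exactly when the old
 * leaf SPTE carried it and the new SPTE no longer does. */
static bool accessed_state_lost(uint64_t old_spte, uint64_t new_spte,
				bool was_leaf)
{
	bool is_present = new_spte & SPTE_PRESENT;
	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);

	return was_leaf && (old_spte & SPTE_ACCESSED) &&
	       (!is_present || !(new_spte & SPTE_ACCESSED) || pfn_changed);
}

int main(void)
{
	uint64_t old = SPTE_PRESENT | SPTE_ACCESSED | (0x1234ull << PFN_SHIFT);

	/* Zapping a young leaf SPTE must report the access. */
	if (accessed_state_lost(old, 0, true))
		printf("would call kvm_set_pfn_accessed(pfn=%#llx)\n",
		       (unsigned long long)spte_to_pfn(old));
	return 0;
}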
 arch/x86/kvm/mmu/tdp_mmu.c | 42 +++++++++++---------------------------
 1 file changed, 12 insertions(+), 30 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 0c031319e901..67538ec48ce5 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -334,17 +334,6 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 				u64 old_spte, u64 new_spte, int level,
 				bool shared);
 
-static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level)
-{
-	if (!is_shadow_present_pte(old_spte) || !is_last_spte(old_spte, level))
-		return;
-
-	if (is_accessed_spte(old_spte) &&
-	    (!is_shadow_present_pte(new_spte) || !is_accessed_spte(new_spte) ||
-	     spte_to_pfn(old_spte) != spte_to_pfn(new_spte)))
-		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
-}
-
 static void tdp_account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	kvm_account_pgtable_pages((void *)sp->spt, +1);
@@ -487,7 +476,7 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
 }
 
 /**
- * __handle_changed_spte - handle bookkeeping associated with an SPTE change
+ * handle_changed_spte - handle bookkeeping associated with an SPTE change
  * @kvm: kvm instance
  * @as_id: the address space of the paging structure the SPTE was a part of
  * @gfn: the base GFN that was mapped by the SPTE
@@ -501,9 +490,9 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
 * Handle bookkeeping that might result from the modification of a SPTE.
 * This function must be called for all TDP SPTE modifications.
 */
-static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
-				  u64 old_spte, u64 new_spte, int level,
-				  bool shared)
+static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
+				u64 old_spte, u64 new_spte, int level,
+				bool shared)
 {
 	bool was_present = is_shadow_present_pte(old_spte);
 	bool is_present = is_shadow_present_pte(new_spte);
@@ -587,15 +576,10 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	if (was_present && !was_leaf &&
 	    (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed)))
 		handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared);
-}
 
-static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
-				u64 old_spte, u64 new_spte, int level,
-				bool shared)
-{
-	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level,
-			      shared);
-	handle_changed_spte_acc_track(old_spte, new_spte, level);
+	if (was_leaf && is_accessed_spte(old_spte) &&
+	    (!is_present || !is_accessed_spte(new_spte) || pfn_changed))
+		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
 }
 
 /*
@@ -638,9 +622,8 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
 	if (!try_cmpxchg64(sptep, &iter->old_spte, new_spte))
 		return -EBUSY;
 
-	__handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
-			      new_spte, iter->level, true);
-	handle_changed_spte_acc_track(iter->old_spte, new_spte, iter->level);
+	handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
+			    new_spte, iter->level, true);
 
 	return 0;
 }
@@ -705,8 +688,7 @@ static u64 tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
 
 	old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte, new_spte, level);
 
-	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level, false);
-	handle_changed_spte_acc_track(old_spte, new_spte, level);
+	handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level, false);
 	return old_spte;
 }
 
@@ -1276,7 +1258,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
 	 * Note, when changing a read-only SPTE, it's not strictly necessary to
 	 * zero the SPTE before setting the new PFN, but doing so preserves the
 	 * invariant that the PFN of a present leaf SPTE can never change.
-	 * See __handle_changed_spte().
+	 * See handle_changed_spte().
 	 */
 	tdp_mmu_iter_set_spte(kvm, iter, 0);
 
@@ -1301,7 +1283,7 @@ bool kvm_tdp_mmu_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	/*
 	 * No need to handle the remote TLB flush under RCU protection, the
 	 * target SPTE _must_ be a leaf SPTE, i.e. cannot result in freeing a
-	 * shadow page. See the WARN on pfn_changed in __handle_changed_spte().
+	 * shadow page. See the WARN on pfn_changed in handle_changed_spte().
 	 */
 	return kvm_tdp_mmu_handle_gfn(kvm, range, set_spte_gfn);
 }
-- 
2.39.1.581.gbfd45094c4-goog