From: Zhou nan <zhounan@nfschina.com>
To: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com,
	wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com
Cc: linux-kernel@vger.kernel.org, Zhou nan <zhounan@nfschina.com>
Subject: [PATCH] x86: It seems possible to optimize the code format
Date: Mon, 9 May 2022 14:14:09 +0800
Message-Id: <20220509061409.29083-1-zhounan@nfschina.com>
X-Mailer: git-send-email 2.18.2
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add blank lines around the conditional blocks in __tdp_mmu_set_spte() and
tdp_mmu_link_sp() to improve readability. No functional change.

Signed-off-by: Zhou nan <zhounan@nfschina.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 922b06bf4b94..a4ab6aee4db2 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -743,6 +743,7 @@ static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
 
 	if (record_acc_track)
 		handle_changed_spte_acc_track(old_spte, new_spte, level);
+
 	if (record_dirty_log)
 		handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte,
 					      new_spte, level);
@@ -1149,8 +1150,10 @@ static int tdp_mmu_link_sp(struct kvm *kvm, struct tdp_iter *iter,
 
 	spin_lock(&kvm->arch.tdp_mmu_pages_lock);
 	list_add(&sp->link, &kvm->arch.tdp_mmu_pages);
+
 	if (account_nx)
 		account_huge_nx_page(kvm, sp);
+
 	spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
 
 	return 0;
-- 
2.18.2