From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
 erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
 Kai Huang, Zhi Wang, chen.bo@intel.com, linux-coco@lists.linux.dev,
 Chao Peng, Ackerley Tng, Vishal Annapurve, Michael Roth, Yuan Yao,
 Brijesh Singh, Ashish Kalra
Subject: [RFC PATCH v3 07/11] KVM: x86: Export the kvm_zap_gfn_range() for the SNP use
Date: Wed, 28 Jun 2023 15:43:06 -0700
Message-Id: <93778ff0e10657491f0a2906a251f8f68b774a15.1687991811.git.isaku.yamahata@intel.com>

From: Brijesh Singh

While resolving an RMP page fault, there may be cases where the page
level of the RMP entry and the TDP entry do not match: either the 2M
RMP entry must be split into 4K RMP entries, or a 2M TDP page needs to
be broken into multiple 4K pages. To keep the RMP and TDP page levels
in sync, zap the gfn range after splitting the pages in the RMP entry.
The zap forces the TDP mapping to be rebuilt at the new page level.
A sketch of a prospective caller appears after the diff below.
Signed-off-by: Brijesh Singh
Signed-off-by: Ashish Kalra
Signed-off-by: Michael Roth
Link: https://lore.kernel.org/r/20230612042559.375660-39-michael.roth@amd.com
---
Changes v2 -> v3:
- Newly added
---
 arch/x86/include/asm/kvm_host.h | 2 ++
 arch/x86/kvm/mmu.h              | 2 --
 arch/x86/kvm/mmu/mmu.c          | 1 +
 3 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 831bfd1e719a..bdf507797c73 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1842,6 +1842,8 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 void kvm_mmu_zap_all(struct kvm *kvm);
 void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
 void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long kvm_nr_mmu_pages);
+void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end);
+
 
 int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3);
 
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 92d5a1924fc1..963c734642f6 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -235,8 +235,6 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 	return -(u32)fault & errcode;
 }
 
-void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end);
-
 int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu);
 
 int kvm_mmu_post_init_vm(struct kvm *kvm);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 464c70b35383..5a80ec49bdcd 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6727,6 +6727,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 
 	return need_tlb_flush;
 }
+EXPORT_SYMBOL_GPL(kvm_zap_gfn_range);
 
 static void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm,
 					const struct kvm_memory_slot *slot)
-- 
2.25.1
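
For illustration only, not part of this patch as posted: a minimal sketch of
how an SNP RMP-fault handler might use the newly exported kvm_zap_gfn_range()
to keep the TDP page level in sync after splitting a 2M RMP entry. The
function name snp_split_rmp_and_resync_tdp() is made up for this sketch, and
the psmash() helper is assumed from the SEV-SNP host patches; only
kvm_zap_gfn_range() itself comes from this patch.

/*
 * Hypothetical caller (sketch): on a page-size mismatch, split the 2M RMP
 * entry via PSMASH, then zap the covering gfn range so KVM drops the 2M TDP
 * mapping and rebuilds it at 4K granularity on the next fault.
 */
static int snp_split_rmp_and_resync_tdp(struct kvm *kvm, gfn_t gfn, u64 pfn)
{
	gfn_t gfn_start, gfn_end;
	int ret;

	/* Split the 2M RMP entry backing this pfn into 512 4K RMP entries. */
	ret = psmash(pfn & ~(u64)(PTRS_PER_PMD - 1));	/* assumed SNP host helper */
	if (ret)
		return ret;

	/* Zap the 2M-aligned gfn range that the old large mapping covered. */
	gfn_start = gfn & ~(gfn_t)(KVM_PAGES_PER_HPAGE(PG_LEVEL_2M) - 1);
	gfn_end = gfn_start + KVM_PAGES_PER_HPAGE(PG_LEVEL_2M);
	kvm_zap_gfn_range(kvm, gfn_start, gfn_end);

	return 0;
}

The export itself only makes kvm_zap_gfn_range() reachable from module code
such as kvm-amd; the split-then-zap ordering above mirrors the description in
the commit message.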