From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, Sean Christopherson, David Matlack
Subject: [RFC PATCH v2 10/15] KVM: x86/tdp_mmu: Split the large page when zap leaf
Date: Thu, 8 Dec 2022 15:35:45 -0800
Message-Id: <7edb5526907c1d36e78647577cbf562e9155a76a.1670541736.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

When TDX is enabled, a large page cannot be zapped if it maps both private
and shared pages (a "mixed" range). In this case, the large page has to be
split before the leaf SPTE can be zapped.
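For illustration only (not part of the patch): a minimal, self-contained
user-space sketch of the decision the zap path below makes, using stand-in
types and a hypothetical range_is_mixed() helper in place of
kvm_mem_attr_is_mixed(). A large-page leaf is split rather than zapped
directly when its range is mixed or only partially covered by [start, end).

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	typedef uint64_t gfn_t;

	/* Stand-in: number of 4K pages mapped by a leaf at @level. */
	static gfn_t pages_per_hpage(int level)
	{
		return 1ULL << (9 * (level - 1));
	}

	/* Hypothetical stand-in for kvm_mem_attr_is_mixed(). */
	static bool range_is_mixed(gfn_t base, int level)
	{
		/* In KVM this consults the slot's lpage_info "mixed" bit. */
		return false;
	}

	/*
	 * Must the large-page leaf at @gfn/@level be split instead of
	 * zapped directly, given the zap range [start, end)?
	 */
	static bool need_split_before_zap(gfn_t gfn, int level,
					  gfn_t start, gfn_t end)
	{
		gfn_t nr = pages_per_hpage(level);
		gfn_t base = gfn & ~(nr - 1);

		return range_is_mixed(base, level) ||
		       base < start || end < base + nr;
	}

	int main(void)
	{
		/* A 2M leaf (level 2) at gfn 0x200, zap range [0x300, 0x400). */
		printf("%d\n", need_split_before_zap(0x200, 2, 0x300, 0x400));
		return 0;
	}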
Signed-off-by: Xiaoyao Li
---
 arch/x86/kvm/mmu/mmu.c          |  9 +++++
 arch/x86/kvm/mmu/mmu_internal.h |  2 ++
 arch/x86/kvm/mmu/tdp_mmu.c      | 62 +++++++++++++++++++++++++++++++--
 3 files changed, 71 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 02adc3c23627..7f56b1dd76fa 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7318,6 +7318,15 @@ static bool linfo_is_mixed(struct kvm_lpage_info *linfo)
 	return linfo->disallow_lpage & KVM_LPAGE_PRIVATE_SHARED_MIXED;
 }
 
+bool kvm_mem_attr_is_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level)
+{
+	struct kvm_lpage_info *linfo = lpage_info_slot(gfn & KVM_HPAGE_MASK(level),
+						       slot, level);
+
+	WARN_ON_ONCE(level == PG_LEVEL_4K);
+	return linfo_is_mixed(linfo);
+}
+
 static void linfo_set_mixed(gfn_t gfn, struct kvm_memory_slot *slot,
 			    int level, bool mixed)
 {
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 641afc4e90cb..2b7c16dfdf5e 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -435,6 +435,8 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
+bool kvm_mem_attr_is_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level);
+
 #ifndef CONFIG_HAVE_KVM_RESTRICTED_MEM
 static inline int kvm_restricted_mem_get_pfn(struct kvm_memory_slot *slot,
 					     gfn_t gfn, kvm_pfn_t *pfn, int *order)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index cb36089a40da..e9af8c95a3ae 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1102,6 +1102,14 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 	return true;
 }
 
+
+static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
+						       struct tdp_iter *iter,
+						       bool shared);
+
+static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
+				   struct kvm_mmu_page *sp, bool shared);
+
 /*
  * If can_yield is true, will release the MMU lock and reschedule if the
  * scheduler needs the CPU or there is contention on the MMU lock. If this
@@ -1113,6 +1121,7 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 			      gfn_t start, gfn_t end, bool can_yield, bool flush,
 			      bool zap_private)
 {
+	struct kvm_mmu_page *split_sp = NULL;
 	struct tdp_iter iter;
 
 	end = min(end, tdp_mmu_max_gfn_exclusive());
@@ -1144,12 +1153,63 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 		    !is_last_spte(iter.old_spte, iter.level))
 			continue;
 
+		if (kvm_gfn_shared_mask(kvm) && is_large_pte(iter.old_spte)) {
+			gfn_t gfn = iter.gfn & ~kvm_gfn_shared_mask(kvm);
+			gfn_t mask = KVM_PAGES_PER_HPAGE(iter.level) - 1;
+			struct kvm_memory_slot *slot;
+			struct kvm_mmu_page *sp;
+
+			slot = gfn_to_memslot(kvm, gfn);
+			if (kvm_mem_attr_is_mixed(slot, gfn, iter.level) ||
+			    (gfn & mask) < start ||
+			    end < (gfn & mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
+				WARN_ON_ONCE(!can_yield);
+				if (split_sp) {
+					sp = split_sp;
+					split_sp = NULL;
+				} else {
+					WARN_ON(iter.yielded);
+					if (flush) {
+						kvm_flush_remote_tlbs(kvm);
+						flush = false;
+					}
+					sp = tdp_mmu_alloc_sp_for_split(kvm, &iter, false);
+					if (iter.yielded) {
+						split_sp = sp;
+						continue;
+					}
+				}
+				KVM_BUG_ON(!sp, kvm);
+
+				if (tdp_mmu_split_huge_page(kvm, &iter, sp, false)) {
+					kvm_flush_remote_tlbs(kvm);
+					flush = false;
+					/* force retry on this gfn. */
+					iter.yielded = true;
+				} else
+					flush = true;
+				continue;
+			}
+		}
+
 		tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
 		flush = true;
 	}
 
 	rcu_read_unlock();
 
+	if (split_sp) {
+		WARN_ON(!can_yield);
+		if (flush) {
+			kvm_flush_remote_tlbs(kvm);
+			flush = false;
+		}
+
+		write_unlock(&kvm->mmu_lock);
+		tdp_mmu_free_sp(split_sp);
+		write_lock(&kvm->mmu_lock);
+	}
+
 	/*
 	 * Because this flow zaps _only_ leaf SPTEs, the caller doesn't need
 	 * to provide RCU protection as no 'struct kvm_mmu_page' will be freed.
@@ -1691,8 +1751,6 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 
 	KVM_BUG_ON(kvm_mmu_page_role_is_private(role) !=
 		   is_private_sptep(iter->sptep), kvm);
-	/* TODO: Large page isn't supported for private SPTE yet. */
-	KVM_BUG_ON(kvm_mmu_page_role_is_private(role), kvm);
 
 	/*
 	 * Since we are allocating while under the MMU lock we have to be
-- 
2.25.1