From nobody Wed Dec 17 07:59:15 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5657ACDB482 for ; Mon, 16 Oct 2023 16:55:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234003AbjJPQzv (ORCPT ); Mon, 16 Oct 2023 12:55:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44340 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231302AbjJPQze (ORCPT ); Mon, 16 Oct 2023 12:55:34 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8B3827EF7; Mon, 16 Oct 2023 09:23:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697473391; x=1729009391; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=a3eY264RkIk2rIxUpnKkNG2x5Y8gzvyqjfJuV3d5wUs=; b=chEkuZW4X8hN0GA26GY1G9Or4c/8yGuT5Bn1s/wX90eCba2vh390y0XV W0jHJYkj6Shw6WOTtHMXtF8s6AVPWL2Jo3ACkwnRGcat/05AoJGqQt5Z2 impATT9P76us5sCmuIu0tcu0KtyYIeXmEhOuApZHn9wfi/9hsjquF8YD7 LeS9eXuAkwvJgP1XUPDksIJa3igoByTEEjrEPKBjPEH84/5NEzUz6UO0/ bkOOww8leFqc48nR50BVv91A8HZNihnQwJgClAoWX3RfzRAhQYRkrXtb0 tjmiiqQcFIZJpNbaQlM75pO3LzRjTV/EcUQejR1rJire89uwSkTyGHmX0 A==; X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="471793109" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="471793109" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:21:11 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="899569212" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="899569212" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:19:11 -0700 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li Subject: [RFC PATCH v5 01/16] KVM: TDP_MMU: Go to next level if smaller private mapping exists Date: Mon, 16 Oct 2023 09:20:52 -0700 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Xiaoyao Li Cannot map a private page as large page if any smaller mapping exists. It has to wait for all the not-mapped smaller page to be mapped and promote it to larger mapping. 
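For illustration only (not part of this patch): a stand-alone model of the effect on the fault path. With this change a TD (any VM with a shared-bit GFN mask) always runs disallowed_hugepage_adjust(), so when a smaller private mapping already exists under the candidate huge-page range the walk drops to the next lower level instead of installing a large leaf; promotion back to a large mapping can only happen once every small page in the range is mapped. The helpers below (has_smaller_mapping, private_map_level) are hypothetical, not KVM symbols.

        #include <stdbool.h>
        #include <stdio.h>

        enum pg_level { PG_LEVEL_4K = 1, PG_LEVEL_2M, PG_LEVEL_1G };

        /* Hypothetical query: is some 4K private page inside the candidate
         * huge-page range already mapped?  A placeholder here; KVM derives
         * this from the existing SPTEs during the page-table walk. */
        static bool has_smaller_mapping(unsigned long gfn, int level)
        {
                return level > PG_LEVEL_4K && (gfn & 1);
        }

        /* Model of the adjustment: keep stepping down one level while a
         * smaller private mapping conflicts with the huge mapping. */
        static int private_map_level(unsigned long gfn, int want_level)
        {
                while (want_level > PG_LEVEL_4K && has_smaller_mapping(gfn, want_level))
                        want_level--;
                return want_level;
        }

        int main(void)
        {
                printf("gfn 0x100 maps at level %d\n", private_map_level(0x100, PG_LEVEL_2M));
                printf("gfn 0x101 maps at level %d\n", private_map_level(0x101, PG_LEVEL_2M));
                return 0;
        }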
Signed-off-by: Xiaoyao Li --- arch/x86/kvm/mmu/tdp_mmu.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 4f58edbb8c06..012f270cfb6f 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1414,7 +1414,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm= _page_fault *fault) tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) { int r; =20 - if (fault->nx_huge_page_workaround_enabled) + if (fault->nx_huge_page_workaround_enabled || + kvm_gfn_shared_mask(vcpu->kvm)) disallowed_hugepage_adjust(fault, iter.old_spte, iter.level); =20 /* --=20 2.25.1 From nobody Wed Dec 17 07:59:15 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 051F7CDB482 for ; Mon, 16 Oct 2023 16:49:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230101AbjJPQtz (ORCPT ); Mon, 16 Oct 2023 12:49:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41288 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234584AbjJPQhv (ORCPT ); Mon, 16 Oct 2023 12:37:51 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C37EA7EF8; Mon, 16 Oct 2023 09:23:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697473391; x=1729009391; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=wMYbVvjqR+ZSSbdZopCJtYPSlf9Ohr0kM34J0EtH7+E=; b=jBLJezOa/IOmSlSIQepous1W2qiwz07ZIvrb1Goxft0Ji/lbkMpILkwq /RKZs/R6xckEfVGM9aeDrAV+XTJS+VY7WE6YWNIRAhDO0ZIRjaj3vP+pg bT62FPgOHYzQJbMidDWt33hU71dkcV60DkTIRcFLZ63H3/mDf6GIYs8c5 /wzMPmvSVhFBO3z1ekE76zyEHfXcTbl/7pPnKR3tWPAVW9Hum+D3pcb5e nsOHd7jQF1lmJRpK+1D2HKMwX+W9gEE4SB1hHEx675kBT6MMWQKkhZO5Y kI3j5/W+LxR2gD25gKo9CQJSfQVrUJcxDCz/IS7LDYdvuNzQkXZWMiF9K g==; X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="471793114" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="471793114" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:21:11 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="899569215" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="899569215" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:19:12 -0700 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li Subject: [RFC PATCH v5 02/16] KVM: TDX: Pass page level to cache flush before TDX SEAMCALL Date: Mon, 16 Oct 2023 09:20:53 -0700 Message-Id: <4c5c2e204b9369d17988a988871b86c7c753cb7b.1697473009.git.isaku.yamahata@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Xiaoyao Li 
tdh_mem_page_aug() will support 2MB large page in the near future. Cache flush also needs to be 2MB instead of 4KB in such cases. Introduce a helper function to flush cache with page size info in preparation for large pages. Signed-off-by: Xiaoyao Li Signed-off-by: Isaku Yamahata --- arch/x86/kvm/vmx/tdx_ops.h | 22 ++++++++++++++-------- 1 file changed, 14 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h index c716e54be66a..c9de1b48a022 100644 --- a/arch/x86/kvm/vmx/tdx_ops.h +++ b/arch/x86/kvm/vmx/tdx_ops.h @@ -6,6 +6,7 @@ =20 #include =20 +#include #include #include #include @@ -62,6 +63,11 @@ static inline u64 tdx_seamcall(u64 op, u64 rcx, u64 rdx,= u64 r8, u64 r9, void pr_tdx_error(u64 op, u64 error_code, const struct tdx_module_args *ou= t); #endif =20 +static inline void tdx_clflush_page(hpa_t addr, enum pg_level level) +{ + clflush_cache_range(__va(addr), KVM_HPAGE_SIZE(level)); +} + /* * TDX module acquires its internal lock for resources. It doesn't spin t= o get * locks because of its restrictions of allowed execution time. Instead, = it @@ -94,21 +100,21 @@ static inline u64 tdx_seamcall_sept(u64 op, u64 rcx, u= 64 rdx, u64 r8, u64 r9, =20 static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr) { - clflush_cache_range(__va(addr), PAGE_SIZE); + tdx_clflush_page(addr, PG_LEVEL_4K); return tdx_seamcall(TDH_MNG_ADDCX, addr, tdr, 0, 0, NULL); } =20 static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, hpa_t hpa, hpa_t = source, struct tdx_module_args *out) { - clflush_cache_range(__va(hpa), PAGE_SIZE); + tdx_clflush_page(hpa, PG_LEVEL_4K); return tdx_seamcall_sept(TDH_MEM_PAGE_ADD, gpa, tdr, hpa, source, out); } =20 static inline u64 tdh_mem_sept_add(hpa_t tdr, gpa_t gpa, int level, hpa_t = page, struct tdx_module_args *out) { - clflush_cache_range(__va(page), PAGE_SIZE); + tdx_clflush_page(page, PG_LEVEL_4K); return tdx_seamcall_sept(TDH_MEM_SEPT_ADD, gpa | level, tdr, page, 0, out= ); } =20 @@ -126,21 +132,21 @@ static inline u64 tdh_mem_sept_remove(hpa_t tdr, gpa_= t gpa, int level, =20 static inline u64 tdh_vp_addcx(hpa_t tdvpr, hpa_t addr) { - clflush_cache_range(__va(addr), PAGE_SIZE); + tdx_clflush_page(addr, PG_LEVEL_4K); return tdx_seamcall(TDH_VP_ADDCX, addr, tdvpr, 0, 0, NULL); } =20 static inline u64 tdh_mem_page_relocate(hpa_t tdr, gpa_t gpa, hpa_t hpa, struct tdx_module_args *out) { - clflush_cache_range(__va(hpa), PAGE_SIZE); + tdx_clflush_page(hpa, PG_LEVEL_4K); return tdx_seamcall_sept(TDH_MEM_PAGE_RELOCATE, gpa, tdr, hpa, 0, out); } =20 static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, hpa_t hpa, struct tdx_module_args *out) { - clflush_cache_range(__va(hpa), PAGE_SIZE); + tdx_clflush_page(hpa, PG_LEVEL_4K); return tdx_seamcall_sept(TDH_MEM_PAGE_AUG, gpa, tdr, hpa, 0, out); } =20 @@ -157,13 +163,13 @@ static inline u64 tdh_mng_key_config(hpa_t tdr) =20 static inline u64 tdh_mng_create(hpa_t tdr, int hkid) { - clflush_cache_range(__va(tdr), PAGE_SIZE); + tdx_clflush_page(tdr, PG_LEVEL_4K); return tdx_seamcall(TDH_MNG_CREATE, tdr, hkid, 0, 0, NULL); } =20 static inline u64 tdh_vp_create(hpa_t tdr, hpa_t tdvpr) { - clflush_cache_range(__va(tdvpr), PAGE_SIZE); + tdx_clflush_page(tdvpr, PG_LEVEL_4K); return tdx_seamcall(TDH_VP_CREATE, tdvpr, tdr, 0, 0, NULL); } =20 --=20 2.25.1 From nobody Wed Dec 17 07:59:15 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by 
smtp.lore.kernel.org (Postfix) with ESMTP id B6657C41513 for ; Mon, 16 Oct 2023 16:38:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233982AbjJPQiG (ORCPT ); Mon, 16 Oct 2023 12:38:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52936 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234452AbjJPQhg (ORCPT ); Mon, 16 Oct 2023 12:37:36 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B5D1E824B; Mon, 16 Oct 2023 09:23:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697473394; x=1729009394; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=qraYcW5IY/UHw2621ZBIfhsYEJTaAoFyy8cv5ZLjX0M=; b=TLCFxMTdA9h91wreHTjSYrhDMkmJSjgWJkPeFTBb2qhfVpsqsgccihQe GoAJtUNLNuHD2POQD9gbaRtElT6Dzsfw/0HQAZUtC8GjRbhSzxPFiX5iM pD2p6pP18Y0PfvSg+cvJJQD1TeN+kRZ5CkE/AIULMoOMlSp2W+Z/2c9ai mpTjIzKyvPtAc1Q5MQN/ckqjk+N0hG9VNykn+2ZvLzQkvlXQBMfPzgfy9 5kIS6Pz+rxTStiw1Q39CA8F+duc1uaHl7YlrHBkpVUsdJrRi3b9gArOkv +kII9vsM8izw7U9wfMEuFIkvElTW1kX/sbGxlhZcm/hoWoT84Oauc2j3Z Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="471793120" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="471793120" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:21:11 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="899569218" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="899569218" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:19:12 -0700 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li Subject: [RFC PATCH v5 03/16] KVM: TDX: Pass KVM page level to tdh_mem_page_add() and tdh_mem_page_aug() Date: Mon, 16 Oct 2023 09:20:54 -0700 Message-Id: <792c88756f17b07459847b2846dcb9a80202c065.1697473009.git.isaku.yamahata@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Xiaoyao Li Level info is needed in tdh_clflush_page() to generate the correct page size. Besides, explicitly pass level info to SEAMCALL instead of assuming it's zero. It works naturally when 2MB support lands. 
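For illustration only (not part of this patch): the level conventions this and the previous patch rely on, rebuilt as a user-space sketch. KVM's enum pg_level starts at PG_LEVEL_4K == 1 while the TDX SEPT level starts at 0 for 4K pages, so the conversion is a +/-1; tdx_sept_level_to_pg_level() in the diff below is one direction, and pg_level_to_tdx_sept_level() here is assumed to be its inverse. The SEPT level is OR'ed into the low bits of the 4K-aligned GPA operand of the SEAMCALL, and the pre-SEAMCALL cache flush covers KVM_HPAGE_SIZE(level) bytes. The macros below mirror the kernel definitions so the example is self-contained.

        #include <stdint.h>
        #include <stdio.h>

        enum pg_level { PG_LEVEL_NONE, PG_LEVEL_4K, PG_LEVEL_2M, PG_LEVEL_1G };

        #define PAGE_SHIFT              12
        #define KVM_HPAGE_SHIFT(x)      (PAGE_SHIFT + ((x) - PG_LEVEL_4K) * 9)
        #define KVM_HPAGE_SIZE(x)       (1ULL << KVM_HPAGE_SHIFT(x))

        /* Assumed inverse of tdx_sept_level_to_pg_level(): PG_LEVEL_4K -> 0,
         * PG_LEVEL_2M -> 1, PG_LEVEL_1G -> 2. */
        static inline int pg_level_to_tdx_sept_level(enum pg_level level)
        {
                return level - 1;
        }

        int main(void)
        {
                uint64_t gpa = 0x200000;        /* 2M-aligned guest physical address */
                int tdx_level = pg_level_to_tdx_sept_level(PG_LEVEL_2M);

                /* prints 0x200001: the SEPT level rides in the low bits */
                printf("SEAMCALL gpa|level operand: 0x%llx\n",
                       (unsigned long long)(gpa | tdx_level));
                /* prints 2097152: a 2M mapping is flushed as 2 MiB, not 4 KiB */
                printf("cache flush size: %llu bytes\n",
                       (unsigned long long)KVM_HPAGE_SIZE(PG_LEVEL_2M));
                return 0;
        }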
Signed-off-by: Xiaoyao Li Signed-off-by: Isaku Yamahata --- arch/x86/kvm/vmx/tdx.c | 7 ++++--- arch/x86/kvm/vmx/tdx_ops.h | 19 ++++++++++++------- 2 files changed, 16 insertions(+), 10 deletions(-) diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c index 4e7bd884e972..471128946e63 100644 --- a/arch/x86/kvm/vmx/tdx.c +++ b/arch/x86/kvm/vmx/tdx.c @@ -1449,7 +1449,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t g= fn, union tdx_sept_entry entry; u64 err; =20 - err =3D tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, hpa, &out); + err =3D tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out); if (unlikely(err =3D=3D TDX_ERROR_SEPT_BUSY)) { tdx_unpin(kvm, pfn); return -EAGAIN; @@ -1496,6 +1496,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t g= fn, static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn, enum pg_level level, kvm_pfn_t pfn) { + int tdx_level =3D pg_level_to_tdx_sept_level(level); struct kvm_tdx *kvm_tdx =3D to_kvm_tdx(kvm); hpa_t hpa =3D pfn_to_hpa(pfn); gpa_t gpa =3D gfn_to_gpa(gfn); @@ -1530,8 +1531,8 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t g= fn, kvm_tdx->source_pa =3D INVALID_PAGE; =20 do { - err =3D tdh_mem_page_add(kvm_tdx->tdr_pa, gpa, hpa, source_pa, - &out); + err =3D tdh_mem_page_add(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, + source_pa, &out); /* * This path is executed during populating initial guest memory * image. i.e. before running any vcpu. Race is rare. diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h index c9de1b48a022..afc85e7ffb8e 100644 --- a/arch/x86/kvm/vmx/tdx_ops.h +++ b/arch/x86/kvm/vmx/tdx_ops.h @@ -63,6 +63,11 @@ static inline u64 tdx_seamcall(u64 op, u64 rcx, u64 rdx,= u64 r8, u64 r9, void pr_tdx_error(u64 op, u64 error_code, const struct tdx_module_args *ou= t); #endif =20 +static inline enum pg_level tdx_sept_level_to_pg_level(int tdx_level) +{ + return tdx_level + 1; +} + static inline void tdx_clflush_page(hpa_t addr, enum pg_level level) { clflush_cache_range(__va(addr), KVM_HPAGE_SIZE(level)); @@ -104,11 +109,11 @@ static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr) return tdx_seamcall(TDH_MNG_ADDCX, addr, tdr, 0, 0, NULL); } =20 -static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, hpa_t hpa, hpa_t = source, - struct tdx_module_args *out) +static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, int level, hpa_t = hpa, + hpa_t source, struct tdx_module_args *out) { - tdx_clflush_page(hpa, PG_LEVEL_4K); - return tdx_seamcall_sept(TDH_MEM_PAGE_ADD, gpa, tdr, hpa, source, out); + tdx_clflush_page(hpa, tdx_sept_level_to_pg_level(level)); + return tdx_seamcall_sept(TDH_MEM_PAGE_ADD, gpa | level, tdr, hpa, source,= out); } =20 static inline u64 tdh_mem_sept_add(hpa_t tdr, gpa_t gpa, int level, hpa_t = page, @@ -143,11 +148,11 @@ static inline u64 tdh_mem_page_relocate(hpa_t tdr, gp= a_t gpa, hpa_t hpa, return tdx_seamcall_sept(TDH_MEM_PAGE_RELOCATE, gpa, tdr, hpa, 0, out); } =20 -static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, hpa_t hpa, +static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, int level, hpa_t = hpa, struct tdx_module_args *out) { - tdx_clflush_page(hpa, PG_LEVEL_4K); - return tdx_seamcall_sept(TDH_MEM_PAGE_AUG, gpa, tdr, hpa, 0, out); + tdx_clflush_page(hpa, tdx_sept_level_to_pg_level(level)); + return tdx_seamcall_sept(TDH_MEM_PAGE_AUG, gpa | level, tdr, hpa, 0, out); } =20 static inline u64 tdh_mem_range_block(hpa_t tdr, gpa_t gpa, int level, --=20 2.25.1 From nobody Wed Dec 17 07:59:15 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 
(2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C5A36CDB474 for ; Mon, 16 Oct 2023 16:40:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234085AbjJPQk0 (ORCPT ); Mon, 16 Oct 2023 12:40:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41340 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343569AbjJPQjO (ORCPT ); Mon, 16 Oct 2023 12:39:14 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EB9948255; Mon, 16 Oct 2023 09:23:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697473396; x=1729009396; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=yRWbSVt3B/Gex4+EgIb9aWYiPKxKJtDXPk/HN1H82fE=; b=ncosRVKfHbHSV9DFV5dX/q5qGdowDuYe/dQQzRcOuCO/IEcOnHtlzWeB /VNszo3DLaNC+KPG7em0Koe1EnP/gzgKP+jpnMudWPgKDCEM6GuVzxbib fVBb4PJDWGIQ07GBXWqzia+V7u+6IBeK/0k/brhBbfaAWyxs9da8gp0cM deinIIqRGER21hVfR7uZT7hPMCIUwKk5UFEMZVQU7KAYbtDZAy91i6r+L FRyA/ewmjwVDXi3A+cN0KIP6pEXWJsxe+p2TeBqgUCY2Yh2X+jQlTIznQ JyyFAvL7S/XTTkLkMkS4YCXOKJruEq7zE3wFGzB8aD9o8aFor9485/Ulf A==; X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="471793127" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="471793127" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:21:12 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="899569223" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="899569223" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:19:12 -0700 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li Subject: [RFC PATCH v5 04/16] KVM: TDX: Pass size to tdx_measure_page() Date: Mon, 16 Oct 2023 09:20:55 -0700 Message-Id: <8c0aa0968cb1f995cbd4552e1f2cf79ac443475b.1697473009.git.isaku.yamahata@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Xiaoyao Li Extend tdx_measure_page() to pass size info so that it can measure large page as well. 
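For illustration only (not part of this patch): how the measurement loop scales once the size is passed in. The sketch assumes the 256-byte TDH.MR.EXTEND chunk size (TDX_EXTENDMR_CHUNKSIZE) used elsewhere in the TDX series, and do_tdh_mr_extend() is a hypothetical stand-in for the real SEAMCALL wrapper.

        #include <stdint.h>
        #include <stdio.h>

        #define TDX_EXTENDMR_CHUNKSIZE  256             /* assumed TDH.MR.EXTEND chunk size */
        #define SZ_4K                   (4ULL * 1024)
        #define SZ_2M                   (2ULL * 1024 * 1024)

        /* Stand-in for the SEAMCALL that extends the TD measurement by one chunk. */
        static void do_tdh_mr_extend(uint64_t gpa)
        {
                (void)gpa;
        }

        /* Same shape as the extended tdx_measure_page(): walk the whole page
         * in measurement-chunk steps instead of hard-coding PAGE_SIZE. */
        static unsigned long measure_page(uint64_t gpa, unsigned long long size)
        {
                unsigned long long i;
                unsigned long calls = 0;

                for (i = 0; i < size; i += TDX_EXTENDMR_CHUNKSIZE, calls++)
                        do_tdh_mr_extend(gpa + i);
                return calls;
        }

        int main(void)
        {
                printf("4K page: %lu TDH.MR.EXTEND calls\n", measure_page(0, SZ_4K));   /* 16 */
                printf("2M page: %lu TDH.MR.EXTEND calls\n", measure_page(0, SZ_2M));   /* 8192 */
                return 0;
        }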
Signed-off-by: Xiaoyao Li Signed-off-by: Isaku Yamahata --- arch/x86/kvm/vmx/tdx.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c index 471128946e63..bda2c8fa895c 100644 --- a/arch/x86/kvm/vmx/tdx.c +++ b/arch/x86/kvm/vmx/tdx.c @@ -1415,13 +1415,15 @@ void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t = root_hpa, int pgd_level) td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK); } =20 -static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa) +static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa, int size) { struct tdx_module_args out; u64 err; int i; =20 - for (i =3D 0; i < PAGE_SIZE; i +=3D TDX_EXTENDMR_CHUNKSIZE) { + WARN_ON_ONCE(size % TDX_EXTENDMR_CHUNKSIZE); + + for (i =3D 0; i < size; i +=3D TDX_EXTENDMR_CHUNKSIZE) { err =3D tdh_mr_extend(kvm_tdx->tdr_pa, gpa + i, &out); if (KVM_BUG_ON(err, &kvm_tdx->kvm)) { pr_tdx_error(TDH_MR_EXTEND, err, &out); @@ -1543,7 +1545,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t g= fn, tdx_unpin(kvm, pfn); return -EIO; } else if (measure) - tdx_measure_page(kvm_tdx, gpa); + tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level)); =20 return 0; =20 --=20 2.25.1 From nobody Wed Dec 17 07:59:15 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 10353CDB465 for ; Mon, 16 Oct 2023 16:40:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343530AbjJPQkf (ORCPT ); Mon, 16 Oct 2023 12:40:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41388 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343571AbjJPQjP (ORCPT ); Mon, 16 Oct 2023 12:39:15 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EBB1F8256; Mon, 16 Oct 2023 09:23:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697473396; x=1729009396; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=o5ZqZe5kh4rI7M8fMrhNtQT9rpjdbgf6u1T422GjqRE=; b=kPlrJBct9DpxhGfWuoHNfzvtlvvltu7tAa9isS9SLjJo+6Uhi0rjzEGx OBD6+Ta9EZ2PcoxbBhcQTBFpht4sGzD8ZUvbpogRvzrHLdWpaJl5IXsgt ysoBD8Pz7WZLNBSgexKjqiOqL9QdRhL9XViYkM80AsMCisPAA9TeMLmvD iKDbcUSezDabMoXd6dJ0UfucW0pZg875gVF9v2tqK0ld8dsjzead9v9vO tMAv9zDCUwpap2Bo46wWIIoADSF/ZbKLzN6+/7olqNuJ8sDRQMQfohOCd Xa7bXe+gKdf/WRqWuTk/0tfMyLNvL2HuT3ADMvOVoaHZ95aM5N3JdDCmE A==; X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="471793140" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="471793140" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:21:12 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="899569226" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="899569226" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:19:13 -0700 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai 
Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li Subject: [RFC PATCH v5 05/16] KVM: TDX: Pass size to reclaim_page() Date: Mon, 16 Oct 2023 09:20:56 -0700 Message-Id: <12cd734126366ea7d9b4334002a88be838f31afb.1697473009.git.isaku.yamahata@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Xiaoyao Li A 2MB large page can be tdh_mem_page_aug()'ed to TD directly. In this case, it needs to reclaim and clear the page as 2MB size. Signed-off-by: Xiaoyao Li Signed-off-by: Isaku Yamahata --- arch/x86/kvm/vmx/tdx.c | 27 +++++++++++++++------------ 1 file changed, 15 insertions(+), 12 deletions(-) diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c index bda2c8fa895c..72672b2c30a1 100644 --- a/arch/x86/kvm/vmx/tdx.c +++ b/arch/x86/kvm/vmx/tdx.c @@ -205,12 +205,13 @@ static void tdx_disassociate_vp_on_cpu(struct kvm_vcp= u *vcpu) smp_call_function_single(cpu, tdx_disassociate_vp_arg, vcpu, 1); } =20 -static void tdx_clear_page(unsigned long page_pa) +static void tdx_clear_page(unsigned long page_pa, int size) { const void *zero_page =3D (const void *) __va(page_to_phys(ZERO_PAGE(0))); void *page =3D __va(page_pa); unsigned long i; =20 + WARN_ON_ONCE(size % PAGE_SIZE); /* * When re-assign one page from old keyid to a new keyid, MOVDIR64B is * required to clear/write the page with new keyid to prevent integrity @@ -219,7 +220,7 @@ static void tdx_clear_page(unsigned long page_pa) * clflush doesn't flush cache with HKID set. The cache line could be * poisoned (even without MKTME-i), clear the poison bit. */ - for (i =3D 0; i < PAGE_SIZE; i +=3D 64) + for (i =3D 0; i < size; i +=3D 64) movdir64b(page + i, zero_page); /* * MOVDIR64B store uses WC buffer. Prevent following memory reads @@ -228,7 +229,7 @@ static void tdx_clear_page(unsigned long page_pa) __mb(); } =20 -static int __tdx_reclaim_page(hpa_t pa) +static int __tdx_reclaim_page(hpa_t pa, enum pg_level level) { struct tdx_module_args out; u64 err; @@ -246,17 +247,19 @@ static int __tdx_reclaim_page(hpa_t pa) pr_tdx_error(TDH_PHYMEM_PAGE_RECLAIM, err, &out); return -EIO; } + /* out.r8 =3D=3D tdx sept page level */ + WARN_ON_ONCE(out.r8 !=3D pg_level_to_tdx_sept_level(level)); =20 return 0; } =20 -static int tdx_reclaim_page(hpa_t pa) +static int tdx_reclaim_page(hpa_t pa, enum pg_level level) { int r; =20 - r =3D __tdx_reclaim_page(pa); + r =3D __tdx_reclaim_page(pa, level); if (!r) - tdx_clear_page(pa); + tdx_clear_page(pa, KVM_HPAGE_SIZE(level)); return r; } =20 @@ -270,7 +273,7 @@ static void tdx_reclaim_td_page(unsigned long td_page_p= a) * was already flushed by TDH.PHYMEM.CACHE.WB before here, So * cache doesn't need to be flushed again. */ - if (tdx_reclaim_page(td_page_pa)) + if (tdx_reclaim_page(td_page_pa, PG_LEVEL_4K)) /* * Leak the page on failure: * tdx_reclaim_page() returns an error if and only if there's an @@ -502,7 +505,7 @@ void tdx_vm_free(struct kvm *kvm) =20 if (!kvm_tdx->tdr_pa) return; - if (__tdx_reclaim_page(kvm_tdx->tdr_pa)) + if (__tdx_reclaim_page(kvm_tdx->tdr_pa, PG_LEVEL_4K)) return; /* * TDX module maps TDR with TDX global HKID. 
TDX module may access TDR @@ -515,7 +518,7 @@ void tdx_vm_free(struct kvm *kvm) pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL); return; } - tdx_clear_page(kvm_tdx->tdr_pa); + tdx_clear_page(kvm_tdx->tdr_pa, PAGE_SIZE); =20 free_page((unsigned long)__va(kvm_tdx->tdr_pa)); kvm_tdx->tdr_pa =3D 0; @@ -1596,7 +1599,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm= , gfn_t gfn, * The HKID assigned to this TD was already freed and cache * was already flushed. We don't have to flush again. */ - err =3D tdx_reclaim_page(hpa); + err =3D tdx_reclaim_page(hpa, level); if (KVM_BUG_ON(err, kvm)) return -EIO; tdx_unpin(kvm, pfn); @@ -1629,7 +1632,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm= , gfn_t gfn, pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL); return -EIO; } - tdx_clear_page(hpa); + tdx_clear_page(hpa, PAGE_SIZE); tdx_unpin(kvm, pfn); return 0; } @@ -1741,7 +1744,7 @@ static int tdx_sept_free_private_spt(struct kvm *kvm,= gfn_t gfn, * already flushed. We don't have to flush again. */ if (!is_hkid_assigned(kvm_tdx)) - return tdx_reclaim_page(__pa(private_spt)); + return tdx_reclaim_page(__pa(private_spt), PG_LEVEL_4K); =20 /* * free_private_spt() is (obviously) called when a shadow page is being --=20 2.25.1 From nobody Wed Dec 17 07:59:15 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 066CFCDB465 for ; Mon, 16 Oct 2023 16:40:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343538AbjJPQkh (ORCPT ); Mon, 16 Oct 2023 12:40:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55722 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343572AbjJPQjP (ORCPT ); Mon, 16 Oct 2023 12:39:15 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EC83C8258; Mon, 16 Oct 2023 09:23:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697473396; x=1729009396; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=y35oJUPopyeRbTd34DdLXBVXFicmHGF9J0ehvEF/xG0=; b=mKbBevXv1+UXK5+Lsljn5cYtREcta7uXcH6+Mw16+WEPw0I8zc5XF+S5 jypaqru9S3EyPfQ1TUz3tfyzKtEPhi821/k3792SeRJqLPrWmO/MARkrT 69/XrBgx5hVQQw/zbs+WIGGNtMI7XaxYOsoRIBVsmdXjRJh2S/99rFGck sEmXOXHGuMefECxPsX12bjtoU57eauEnNIV5DaBizMCjJGASzAtqT2WJM xCIe+dthNCitIY677Taq2lRfEx76583nArLvTLUELJCprTmTkIhc4Cb2D U6RSlwwyZukvRQOR8o1A1F9pyU/OPK4WaLxijCH9pry8l7N5uUiNllRUh g==; X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="471793146" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="471793146" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:21:13 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="899569231" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="899569231" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:19:14 -0700 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , 
Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li Subject: [RFC PATCH v5 06/16] KVM: TDX: Update tdx_sept_{set,drop}_private_spte() to support large page Date: Mon, 16 Oct 2023 09:20:57 -0700 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Xiaoyao Li Allow large page level AUG and REMOVE for TDX pages. Signed-off-by: Xiaoyao Li Signed-off-by: Isaku Yamahata --- arch/x86/kvm/vmx/tdx.c | 70 ++++++++++++++++++++++-------------------- 1 file changed, 36 insertions(+), 34 deletions(-) diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c index 72672b2c30a1..992cf3ed02f2 100644 --- a/arch/x86/kvm/vmx/tdx.c +++ b/arch/x86/kvm/vmx/tdx.c @@ -1435,11 +1435,12 @@ static void tdx_measure_page(struct kvm_tdx *kvm_td= x, hpa_t gpa, int size) } } =20 -static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn) +static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn, int level) { - struct page *page =3D pfn_to_page(pfn); + int i; =20 - put_page(page); + for (i =3D 0; i < KVM_PAGES_PER_HPAGE(level); i++) + put_page(pfn_to_page(pfn + i)); } =20 static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn, @@ -1456,7 +1457,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t g= fn, =20 err =3D tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out); if (unlikely(err =3D=3D TDX_ERROR_SEPT_BUSY)) { - tdx_unpin(kvm, pfn); + tdx_unpin(kvm, pfn, level); return -EAGAIN; } if (unlikely(err =3D=3D (TDX_EPT_ENTRY_NOT_FREE | TDX_OPERAND_ID_RCX))) { @@ -1472,7 +1473,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t g= fn, &tmpout); if (KVM_BUG_ON(tmp, kvm)) { pr_tdx_error(TDH_MEM_SEPT_RD, tmp, &tmpout); - tdx_unpin(kvm, pfn); + tdx_unpin(kvm, pfn, level); return -EIO; } pr_debug_ratelimited("gfn 0x%llx pg_level %d pfn 0x%llx entry 0x%llx lev= el_stat 0x%llx\n", @@ -1483,7 +1484,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t g= fn, if (level_state.level =3D=3D tdx_level && level_state.state =3D=3D TDX_SEPT_PENDING && entry.leaf && entry.pfn =3D=3D pfn && entry.sve) { - tdx_unpin(kvm, pfn); + tdx_unpin(kvm, pfn, level); WARN_ON_ONCE(!(to_kvm_tdx(kvm)->attributes & TDX_TD_ATTR_SEPT_VE_DISABLE)); return -EAGAIN; @@ -1491,7 +1492,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t g= fn, } if (KVM_BUG_ON(err, kvm)) { pr_tdx_error(TDH_MEM_PAGE_AUG, err, &out); - tdx_unpin(kvm, pfn); + tdx_unpin(kvm, pfn, level); return -EIO; } =20 @@ -1527,7 +1528,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t g= fn, * always uses vcpu 0's page table and protected by vcpu->mutex). */ if (KVM_BUG_ON(kvm_tdx->source_pa =3D=3D INVALID_PAGE, kvm)) { - tdx_unpin(kvm, pfn); + tdx_unpin(kvm, pfn, level); return -EINVAL; } =20 @@ -1545,7 +1546,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t g= fn, } while (unlikely(err =3D=3D TDX_ERROR_SEPT_BUSY)); if (KVM_BUG_ON(err, kvm)) { pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out); - tdx_unpin(kvm, pfn); + tdx_unpin(kvm, pfn, level); return -EIO; } else if (measure) tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level)); @@ -1558,10 +1559,7 @@ static int tdx_sept_set_private_spte(struct kvm *kvm= , gfn_t gfn, enum pg_level level, kvm_pfn_t pfn) { struct kvm_tdx *kvm_tdx =3D to_kvm_tdx(kvm); - - /* TODO: handle large pages. 
*/ - if (KVM_BUG_ON(level !=3D PG_LEVEL_4K, kvm)) - return -EINVAL; + int i; =20 /* * Because restricted mem doesn't support page migration with @@ -1571,7 +1569,8 @@ static int tdx_sept_set_private_spte(struct kvm *kvm,= gfn_t gfn, * TODO: Once restricted mem introduces callback on page migration, * implement it and remove get_page/put_page(). */ - get_page(pfn_to_page(pfn)); + for (i =3D 0; i < KVM_PAGES_PER_HPAGE(level); i++) + get_page(pfn_to_page(pfn + i)); =20 if (likely(is_td_finalized(kvm_tdx))) return tdx_sept_page_aug(kvm, gfn, level, pfn); @@ -1588,11 +1587,9 @@ static int tdx_sept_drop_private_spte(struct kvm *kv= m, gfn_t gfn, gpa_t gpa =3D gfn_to_gpa(gfn); hpa_t hpa =3D pfn_to_hpa(pfn); hpa_t hpa_with_hkid; + int r =3D 0; u64 err; - - /* TODO: handle large pages. */ - if (KVM_BUG_ON(level !=3D PG_LEVEL_4K, kvm)) - return -EINVAL; + int i; =20 if (unlikely(!is_hkid_assigned(kvm_tdx))) { /* @@ -1602,7 +1599,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm= , gfn_t gfn, err =3D tdx_reclaim_page(hpa, level); if (KVM_BUG_ON(err, kvm)) return -EIO; - tdx_unpin(kvm, pfn); + tdx_unpin(kvm, pfn, level); return 0; } =20 @@ -1619,22 +1616,27 @@ static int tdx_sept_drop_private_spte(struct kvm *k= vm, gfn_t gfn, return -EIO; } =20 - hpa_with_hkid =3D set_hkid_to_hpa(hpa, (u16)kvm_tdx->hkid); - do { - /* - * TDX_OPERAND_BUSY can happen on locking PAMT entry. Because - * this page was removed above, other thread shouldn't be - * repeatedly operating on this page. Just retry loop. - */ - err =3D tdh_phymem_page_wbinvd(hpa_with_hkid); - } while (unlikely(err =3D=3D (TDX_OPERAND_BUSY | TDX_OPERAND_ID_RCX))); - if (KVM_BUG_ON(err, kvm)) { - pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL); - return -EIO; + for (i =3D 0; i < KVM_PAGES_PER_HPAGE(level); i++) { + hpa_with_hkid =3D set_hkid_to_hpa(hpa, (u16)kvm_tdx->hkid); + do { + /* + * TDX_OPERAND_BUSY can happen on locking PAMT entry. + * Because this page was removed above, other thread + * shouldn't be repeatedly operating on this page. + * Simple retry should work. 
+ */ + err =3D tdh_phymem_page_wbinvd(hpa_with_hkid); + } while (unlikely(err =3D=3D (TDX_OPERAND_BUSY | TDX_OPERAND_ID_RCX))); + if (KVM_BUG_ON(err, kvm)) { + pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL); + r =3D -EIO; + } else { + tdx_clear_page(hpa, PAGE_SIZE); + tdx_unpin(kvm, pfn + i, PG_LEVEL_4K); + } + hpa +=3D PAGE_SIZE; } - tdx_clear_page(hpa, PAGE_SIZE); - tdx_unpin(kvm, pfn); - return 0; + return r; } =20 static int tdx_sept_link_private_spt(struct kvm *kvm, gfn_t gfn, --=20 2.25.1 From nobody Wed Dec 17 07:59:15 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 23AC5CDB474 for ; Mon, 16 Oct 2023 16:31:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233847AbjJPQb1 (ORCPT ); Mon, 16 Oct 2023 12:31:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41388 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233684AbjJPQa7 (ORCPT ); Mon, 16 Oct 2023 12:30:59 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0E4D0825F; Mon, 16 Oct 2023 09:23:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697473396; x=1729009396; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=76HvwvivJxanOflJfCGymJ5o3a4H7yOPOQNDRzjrHPY=; b=T66XB0Yt+lgIGrEwQMpHj46R6srEAT+gWjilhmLYYZ0C7HaZc5+dPsGB nFpN5UfNar/jiGzc8SVD9dMdqehjXKbo6cNEOrjyVqBU3plYxBDVsKxbf NzgYNIupZarSbfPTBptOmkywlK3dLub2DGhdOCXnYsKk0PSyYwCv6muaw PxOzIS0e28KU1qXnOjS36K2iO3t/AHPweq5BUgRfihdG1d4bhVTKS/Kcb RH0yrDi6gTHtX1kpy5sr4C7cWj8dqbEvL5zwoh+SkV+OD4/F4485tPgb5 khZJ02QRx0fp3dD2olTi/5/13+K3gShkij0Wez9VXIDvAUqqA/qLPxJC0 g==; X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="471793157" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="471793157" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:21:14 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="899569235" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="899569235" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:19:14 -0700 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li Subject: [RFC PATCH v5 07/16] KVM: MMU: Introduce level info in PFERR code Date: Mon, 16 Oct 2023 09:20:58 -0700 Message-Id: <194ae75b2eff5da9e1729e5f5711407ca56d7c0e.1697473009.git.isaku.yamahata@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Xiaoyao Li For TDX, EPT violation can happen when TDG.MEM.PAGE.ACCEPT. And TDG.MEM.PAGE.ACCEPT contains the desired accept page level of TD guest. 1. 
KVM can map it with 4KB page while TD guest wants to accept 2MB page. TD geust will get TDX_PAGE_SIZE_MISMATCH and it should try to accept 4KB size. 2. KVM can map it with 2MB page while TD guest wants to accept 4KB page. KVM needs to honor it because a) there is no way to tell guest KVM maps it as 2MB size. And b) guest accepts it in 4KB size since guest knows some other 4KB page in the same 2MB range will be used as shared page. For case 2, it need to pass desired page level to MMU's page_fault_handler. Use bit 29:31 of kvm PF error code for this purpose. Signed-off-by: Xiaoyao Li Signed-off-by: Isaku Yamahata --- arch/x86/include/asm/kvm_host.h | 3 +++ arch/x86/kvm/mmu/mmu.c | 5 +++++ arch/x86/kvm/vmx/common.h | 6 +++++- arch/x86/kvm/vmx/tdx.c | 15 ++++++++++++++- arch/x86/kvm/vmx/tdx.h | 19 +++++++++++++++++++ arch/x86/kvm/vmx/vmx.c | 2 +- 6 files changed, 47 insertions(+), 3 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index 8ac6f0555ac3..7bcdc2afe88c 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -254,6 +254,8 @@ enum x86_intercept_stage; #define PFERR_FETCH_BIT 4 #define PFERR_PK_BIT 5 #define PFERR_SGX_BIT 15 +#define PFERR_LEVEL_START_BIT 29 +#define PFERR_LEVEL_END_BIT 31 #define PFERR_GUEST_FINAL_BIT 32 #define PFERR_GUEST_PAGE_BIT 33 #define PFERR_GUEST_ENC_BIT 34 @@ -266,6 +268,7 @@ enum x86_intercept_stage; #define PFERR_FETCH_MASK BIT(PFERR_FETCH_BIT) #define PFERR_PK_MASK BIT(PFERR_PK_BIT) #define PFERR_SGX_MASK BIT(PFERR_SGX_BIT) +#define PFERR_LEVEL_MASK GENMASK_ULL(PFERR_LEVEL_END_BIT, PFERR_LEVEL_STAR= T_BIT) #define PFERR_GUEST_FINAL_MASK BIT_ULL(PFERR_GUEST_FINAL_BIT) #define PFERR_GUEST_PAGE_MASK BIT_ULL(PFERR_GUEST_PAGE_BIT) #define PFERR_GUEST_ENC_MASK BIT_ULL(PFERR_GUEST_ENC_BIT) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 26bad5c646fe..55574993a13a 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4593,6 +4593,11 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *v= cpu, =20 int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { + u8 err_level =3D (fault->error_code & PFERR_LEVEL_MASK) >> PFERR_LEVEL_ST= ART_BIT; + + if (err_level) + fault->max_level =3D min(fault->max_level, err_level); + /* * If the guest's MTRRs may be used to compute the "real" memtype, * restrict the mapping level to ensure KVM uses a consistent memtype diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h index 027aa4175d2c..bb00433932ee 100644 --- a/arch/x86/kvm/vmx/common.h +++ b/arch/x86/kvm/vmx/common.h @@ -67,7 +67,8 @@ static inline void vmx_handle_external_interrupt_irqoff(s= truct kvm_vcpu *vcpu, } =20 static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t = gpa, - unsigned long exit_qualification) + unsigned long exit_qualification, + int err_page_level) { u64 error_code; =20 @@ -90,6 +91,9 @@ static inline int __vmx_handle_ept_violation(struct kvm_v= cpu *vcpu, gpa_t gpa, if (kvm_is_private_gpa(vcpu->kvm, gpa)) error_code |=3D PFERR_GUEST_ENC_MASK; =20 + if (err_page_level > 0) + error_code |=3D (err_page_level << PFERR_LEVEL_START_BIT) & PFERR_LEVEL_= MASK; + return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0); } =20 diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c index 992cf3ed02f2..c37b66f9a52a 100644 --- a/arch/x86/kvm/vmx/tdx.c +++ b/arch/x86/kvm/vmx/tdx.c @@ -1802,7 +1802,20 @@ void tdx_deliver_interrupt(struct kvm_lapic *apic, i= nt delivery_mode, =20 static int 
tdx_handle_ept_violation(struct kvm_vcpu *vcpu) { + union tdx_ext_exit_qualification ext_exit_qual; unsigned long exit_qual; + int err_page_level =3D 0; + + ext_exit_qual.full =3D tdexit_ext_exit_qual(vcpu); + + if (ext_exit_qual.type >=3D NUM_EXT_EXIT_QUAL) { + pr_err("EPT violation at gpa 0x%lx, with invalid ext exit qualification = type 0x%x\n", + tdexit_gpa(vcpu), ext_exit_qual.type); + kvm_vm_bugged(vcpu->kvm); + return 0; + } else if (ext_exit_qual.type =3D=3D EXT_EXIT_QUAL_ACCEPT) { + err_page_level =3D tdx_sept_level_to_pg_level(ext_exit_qual.req_sept_lev= el); + } =20 if (kvm_is_private_gpa(vcpu->kvm, tdexit_gpa(vcpu))) { /* @@ -1829,7 +1842,7 @@ static int tdx_handle_ept_violation(struct kvm_vcpu *= vcpu) } =20 trace_kvm_page_fault(vcpu, tdexit_gpa(vcpu), exit_qual); - return __vmx_handle_ept_violation(vcpu, tdexit_gpa(vcpu), exit_qual); + return __vmx_handle_ept_violation(vcpu, tdexit_gpa(vcpu), exit_qual, err_= page_level); } =20 static int tdx_handle_ept_misconfig(struct kvm_vcpu *vcpu) diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h index 796ff0a4bcbf..b0bc3ee89e03 100644 --- a/arch/x86/kvm/vmx/tdx.h +++ b/arch/x86/kvm/vmx/tdx.h @@ -73,6 +73,25 @@ union tdx_exit_reason { u64 full; }; =20 +union tdx_ext_exit_qualification { + struct { + u64 type : 4; + u64 reserved0 : 28; + u64 req_sept_level : 3; + u64 err_sept_level : 3; + u64 err_sept_state : 8; + u64 err_sept_is_leaf : 1; + u64 reserved1 : 17; + }; + u64 full; +}; + +enum tdx_ext_exit_qualification_type { + EXT_EXIT_QUAL_NONE, + EXT_EXIT_QUAL_ACCEPT, + NUM_EXT_EXIT_QUAL, +}; + struct vcpu_tdx { struct kvm_vcpu vcpu; =20 diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 18b374b3f940..fadb89346635 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -5753,7 +5753,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu) if (unlikely(allow_smaller_maxphyaddr && kvm_vcpu_is_illegal_gpa(vcpu, gp= a))) return kvm_emulate_instruction(vcpu, 0); =20 - return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification); + return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, 0); } =20 static int handle_ept_misconfig(struct kvm_vcpu *vcpu) --=20 2.25.1 From nobody Wed Dec 17 07:59:15 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4F08ACDB482 for ; Mon, 16 Oct 2023 16:55:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233969AbjJPQz4 (ORCPT ); Mon, 16 Oct 2023 12:55:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38042 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234222AbjJPQzh (ORCPT ); Mon, 16 Oct 2023 12:55:37 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A280C8260; Mon, 16 Oct 2023 09:23:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697473396; x=1729009396; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=bH4TweNwAW5PDjRc4LuWzqFQTxFArgLwB4OYW4yKw2M=; b=K4wdlT7FQZqbDahzMoGqS3+xQL/CiW2/HTUE6etAe85A0tiklLvtnKJ9 ZS8cSe6tlAFkuIg4hmsVO1K5k4LiVlwdvSrvddpOpqZUFepOBHtje6BI/ Ar7TdvCPWAt3Cy3ctFjejrhjgz945BMMGSvb6o4EWzpbL+o0Fjnnx3SyL ptKcoXgFxkxbZl9ZduoZGS7q/X1o0Vo9QfTyfLIXMFcIJyOYfKQTLSV/k 
Q0CR4w9rFSV0oja8XgJ20O+Q3jOn/zqiWXqJOC3/6iNq1+1a7BzznkiTF qqOklVWNsyyH0jzjH/ixoV2nUDoYJsd8IzZ+eEZodPE/3KPwvNXENhQLS g==; X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="471793165" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="471793165" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:21:14 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="899569238" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="899569238" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:19:15 -0700 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li Subject: [RFC PATCH v5 08/16] KVM: TDX: Pin pages via get_page() right before ADD/AUG'ed to TDs Date: Mon, 16 Oct 2023 09:20:59 -0700 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Xiaoyao Li When kvm_faultin_pfn(), it doesn't have the info regarding which page level will the gfn be mapped at. Hence it doesn't know to pin a 4K page or a 2M page. Move the guest private pages pinning logic right before TDH_MEM_PAGE_ADD/AUG() since at that time it knows the page level info. Signed-off-by: Xiaoyao Li --- arch/x86/kvm/vmx/tdx.c | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c index c37b66f9a52a..0558faee5b19 100644 --- a/arch/x86/kvm/vmx/tdx.c +++ b/arch/x86/kvm/vmx/tdx.c @@ -1435,7 +1435,8 @@ static void tdx_measure_page(struct kvm_tdx *kvm_tdx,= hpa_t gpa, int size) } } =20 -static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn, int level) +static void tdx_unpin(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, + enum pg_level level) { int i; =20 @@ -1457,7 +1458,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t g= fn, =20 err =3D tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out); if (unlikely(err =3D=3D TDX_ERROR_SEPT_BUSY)) { - tdx_unpin(kvm, pfn, level); + tdx_unpin(kvm, gfn, pfn, level); return -EAGAIN; } if (unlikely(err =3D=3D (TDX_EPT_ENTRY_NOT_FREE | TDX_OPERAND_ID_RCX))) { @@ -1473,7 +1474,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t g= fn, &tmpout); if (KVM_BUG_ON(tmp, kvm)) { pr_tdx_error(TDH_MEM_SEPT_RD, tmp, &tmpout); - tdx_unpin(kvm, pfn, level); + tdx_unpin(kvm, gfn, pfn, level); return -EIO; } pr_debug_ratelimited("gfn 0x%llx pg_level %d pfn 0x%llx entry 0x%llx lev= el_stat 0x%llx\n", @@ -1484,7 +1485,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t g= fn, if (level_state.level =3D=3D tdx_level && level_state.state =3D=3D TDX_SEPT_PENDING && entry.leaf && entry.pfn =3D=3D pfn && entry.sve) { - tdx_unpin(kvm, pfn, level); + tdx_unpin(kvm, gfn, pfn, level); WARN_ON_ONCE(!(to_kvm_tdx(kvm)->attributes & TDX_TD_ATTR_SEPT_VE_DISABLE)); return -EAGAIN; @@ -1492,7 +1493,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t g= fn, } if (KVM_BUG_ON(err, kvm)) { pr_tdx_error(TDH_MEM_PAGE_AUG, err, &out); - tdx_unpin(kvm, pfn, level); + 
tdx_unpin(kvm, gfn, pfn, level); return -EIO; } =20 @@ -1528,7 +1529,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t g= fn, * always uses vcpu 0's page table and protected by vcpu->mutex). */ if (KVM_BUG_ON(kvm_tdx->source_pa =3D=3D INVALID_PAGE, kvm)) { - tdx_unpin(kvm, pfn, level); + tdx_unpin(kvm, gfn, pfn, level); return -EINVAL; } =20 @@ -1546,7 +1547,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t g= fn, } while (unlikely(err =3D=3D TDX_ERROR_SEPT_BUSY)); if (KVM_BUG_ON(err, kvm)) { pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out); - tdx_unpin(kvm, pfn, level); + tdx_unpin(kvm, gfn, pfn, level); return -EIO; } else if (measure) tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level)); @@ -1599,7 +1600,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm= , gfn_t gfn, err =3D tdx_reclaim_page(hpa, level); if (KVM_BUG_ON(err, kvm)) return -EIO; - tdx_unpin(kvm, pfn, level); + tdx_unpin(kvm, gfn, pfn, level); return 0; } =20 @@ -1632,7 +1633,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm= , gfn_t gfn, r =3D -EIO; } else { tdx_clear_page(hpa, PAGE_SIZE); - tdx_unpin(kvm, pfn + i, PG_LEVEL_4K); + tdx_unpin(kvm, gfn + i, pfn + i, PG_LEVEL_4K); } hpa +=3D PAGE_SIZE; } --=20 2.25.1 From nobody Wed Dec 17 07:59:15 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1E621CDB465 for ; Mon, 16 Oct 2023 16:39:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234432AbjJPQjo (ORCPT ); Mon, 16 Oct 2023 12:39:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41302 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234628AbjJPQh5 (ORCPT ); Mon, 16 Oct 2023 12:37:57 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CC7FE8264; Mon, 16 Oct 2023 09:23:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697473396; x=1729009396; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=dYyJaAe9NtV/9+vuz7+dWQkffsVxT4JQowmTNSzwnc4=; b=acnqcyAcsGH88RUJ6ZGHnaNbI1YDoePvdDBEfRAKxSnls7Ihf7nxNzmF IFq4iYIagjQ5vqFHs5ONGesV+zgWjFso0qUA3+bEp0IJ1ZbBppO/4QeT2 mUam914r1luNWhOQPfmrZF7dkELmFk6hHPe1PVsTrT5tnBsMpvcn7bScZ t7aQN22cdcwxPvPWRehNjcK9VRE8E/r9W1Q0463lSViatnNria3s/4TGA t9Kv7GQOyI9xIZsbvzBdgVgCNjw991EpxDy415SAY5MOdqMlaQZG967Fh gviyLVtXi1A4fkDQLRWn4HuOYPaAniYg36rVpc3JDItblQohOL/tMEifO A==; X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="471793172" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="471793172" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:21:14 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="899569242" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="899569242" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:19:15 -0700 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , 
chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li Subject: [RFC PATCH v5 09/16] KVM: TDX: Pass desired page level in err code for page fault handler Date: Mon, 16 Oct 2023 09:21:00 -0700 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Xiaoyao Li For TDX, EPT violation can happen when TDG.MEM.PAGE.ACCEPT. And TDG.MEM.PAGE.ACCEPT contains the desired accept page level of TD guest. 1. KVM can map it with 4KB page while TD guest wants to accept 2MB page. TD geust will get TDX_PAGE_SIZE_MISMATCH and it should try to accept 4KB size. 2. KVM can map it with 2MB page while TD guest wants to accept 4KB page. KVM needs to honor it because a) there is no way to tell guest KVM maps it as 2MB size. And b) guest accepts it in 4KB size since guest knows some other 4KB page in the same 2MB range will be used as shared page. For case 2, it need to pass desired page level to MMU's page_fault_handler. Use bit 29:31 of kvm PF error code for this purpose. Signed-off-by: Xiaoyao Li --- arch/x86/include/asm/kvm_host.h | 2 ++ arch/x86/kvm/vmx/common.h | 2 +- arch/x86/kvm/vmx/tdx.c | 7 ++++++- arch/x86/kvm/vmx/tdx.h | 19 ------------------- arch/x86/kvm/vmx/tdx_arch.h | 19 +++++++++++++++++++ arch/x86/kvm/vmx/vmx.c | 2 +- 6 files changed, 29 insertions(+), 22 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index 7bcdc2afe88c..bb2b4f8c0c57 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -278,6 +278,8 @@ enum x86_intercept_stage; PFERR_WRITE_MASK | \ PFERR_PRESENT_MASK) =20 +#define PFERR_LEVEL(err_code) (((err_code) & PFERR_LEVEL_MASK) >> PFERR_LE= VEL_START_BIT) + /* apic attention bits */ #define KVM_APIC_CHECK_VAPIC 0 /* diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h index bb00433932ee..787f59c44abc 100644 --- a/arch/x86/kvm/vmx/common.h +++ b/arch/x86/kvm/vmx/common.h @@ -91,7 +91,7 @@ static inline int __vmx_handle_ept_violation(struct kvm_v= cpu *vcpu, gpa_t gpa, if (kvm_is_private_gpa(vcpu->kvm, gpa)) error_code |=3D PFERR_GUEST_ENC_MASK; =20 - if (err_page_level > 0) + if (err_page_level > PG_LEVEL_NONE) error_code |=3D (err_page_level << PFERR_LEVEL_START_BIT) & PFERR_LEVEL_= MASK; =20 return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0); diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c index 0558faee5b19..2c760947ab21 100644 --- a/arch/x86/kvm/vmx/tdx.c +++ b/arch/x86/kvm/vmx/tdx.c @@ -2762,6 +2762,7 @@ static int tdx_init_mem_region(struct kvm *kvm, struc= t kvm_tdx_cmd *cmd) struct kvm_tdx_init_mem_region region; struct kvm_vcpu *vcpu; struct page *page; + u64 error_code; int idx, ret =3D 0; bool added =3D false; =20 @@ -2819,7 +2820,11 @@ static int tdx_init_mem_region(struct kvm *kvm, stru= ct kvm_tdx_cmd *cmd) kvm_tdx->source_pa =3D pfn_to_hpa(page_to_pfn(page)) | (cmd->flags & KVM_TDX_MEASURE_MEMORY_REGION); =20 - ret =3D kvm_mmu_map_tdp_page(vcpu, region.gpa, TDX_SEPT_PFERR, + /* TODO: large page support. 
*/ + error_code =3D TDX_SEPT_PFERR; + error_code |=3D (PG_LEVEL_4K << PFERR_LEVEL_START_BIT) & + PFERR_LEVEL_MASK; + ret =3D kvm_mmu_map_tdp_page(vcpu, region.gpa, error_code, PG_LEVEL_4K); put_page(page); if (ret) diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h index b0bc3ee89e03..796ff0a4bcbf 100644 --- a/arch/x86/kvm/vmx/tdx.h +++ b/arch/x86/kvm/vmx/tdx.h @@ -73,25 +73,6 @@ union tdx_exit_reason { u64 full; }; =20 -union tdx_ext_exit_qualification { - struct { - u64 type : 4; - u64 reserved0 : 28; - u64 req_sept_level : 3; - u64 err_sept_level : 3; - u64 err_sept_state : 8; - u64 err_sept_is_leaf : 1; - u64 reserved1 : 17; - }; - u64 full; -}; - -enum tdx_ext_exit_qualification_type { - EXT_EXIT_QUAL_NONE, - EXT_EXIT_QUAL_ACCEPT, - NUM_EXT_EXIT_QUAL, -}; - struct vcpu_tdx { struct kvm_vcpu vcpu; =20 diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h index 8d02a315724a..93934851610b 100644 --- a/arch/x86/kvm/vmx/tdx_arch.h +++ b/arch/x86/kvm/vmx/tdx_arch.h @@ -227,6 +227,25 @@ union tdx_sept_level_state { u64 raw; }; =20 +union tdx_ext_exit_qualification { + struct { + u64 type : 4; + u64 reserved0 : 28; + u64 req_sept_level : 3; + u64 err_sept_level : 3; + u64 err_sept_state : 8; + u64 err_sept_is_leaf : 1; + u64 reserved1 : 17; + }; + u64 full; +}; + +enum tdx_ext_exit_qualification_type { + EXT_EXIT_QUAL_NONE =3D 0, + EXT_EXIT_QUAL_ACCEPT, + NUM_EXT_EXIT_QUAL, +}; + #define TDX_MD_CLASS_GLOBAL_VERSION 8 =20 #define TDX_MD_FID_GLOBAL_FEATURES0 0x0A00000300000008 diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index fadb89346635..17b44731d0e7 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -5753,7 +5753,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu) if (unlikely(allow_smaller_maxphyaddr && kvm_vcpu_is_illegal_gpa(vcpu, gp= a))) return kvm_emulate_instruction(vcpu, 0); =20 - return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, 0); + return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, PG_LEVEL= _NONE); } =20 static int handle_ept_misconfig(struct kvm_vcpu *vcpu) --=20 2.25.1 From nobody Wed Dec 17 07:59:15 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B975DCDB474 for ; Mon, 16 Oct 2023 16:40:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233848AbjJPQkc (ORCPT ); Mon, 16 Oct 2023 12:40:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52998 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1343574AbjJPQjP (ORCPT ); Mon, 16 Oct 2023 12:39:15 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EE7A88267; Mon, 16 Oct 2023 09:23:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697473397; x=1729009397; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=wxBtarhP8Ri01XIHFhb3x0YgU9de0ypsG7SgSLsDpf4=; b=f1fO1IvobXcHTrbv3B32/7QNYXamf5OOiHODeVIdsnOiEZpohlcr7AOr 1zJTXd7px2EoCtV1IPLPepLPFD/I/1v/X72iPb94lVc4b9BRVZxohJGpk r9QzEP9tfFvt9qPUrpMO9EIJoLbCXAqQMdvXt6wudO9N60dkArFVQcU42 JbIFwyDBIQ/v/f/D6Vpj5DUbA6ZBkISauGWE0RAjuNFTWocMUAYTU66yJ YRERwPfYbbwo6W8r2HUBJvN2OS5Lu/h3Dt+U35gLjl/STafBt5ukq8dlR 
xV+8tP9Qii1TVhlNRAKIkwRkR+vqf01sC14Bjse3v1K4wBBZxLXJmtZHh w==; X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="471793176" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="471793176" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:21:15 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="899569245" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="899569245" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:19:15 -0700 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com Subject: [RFC PATCH v5 10/16] KVM: x86/tdp_mmu: Allocate private page table for large page split Date: Mon, 16 Oct 2023 09:21:01 -0700 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Isaku Yamahata Make tdp_mmu_alloc_sp_split() aware of private page table. Signed-off-by: Isaku Yamahata --- arch/x86/kvm/mmu/mmu_internal.h | 14 ++++++++++++++ arch/x86/kvm/mmu/tdp_mmu.c | 8 ++++++-- 2 files changed, 20 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_interna= l.h index 908504bcc2cd..641d4e777b73 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -203,6 +203,15 @@ static inline void kvm_mmu_alloc_private_spt(struct kv= m_vcpu *vcpu, struct kvm_m } } =20 +static inline int kvm_alloc_private_spt_for_split(struct kvm_mmu_page *sp,= gfp_t gfp) +{ + gfp &=3D ~__GFP_ZERO; + sp->private_spt =3D (void *)__get_free_page(gfp); + if (!sp->private_spt) + return -ENOMEM; + return 0; +} + static inline void kvm_mmu_free_private_spt(struct kvm_mmu_page *sp) { if (sp->private_spt) @@ -231,6 +240,11 @@ static inline void kvm_mmu_alloc_private_spt(struct kv= m_vcpu *vcpu, struct kvm_m { } =20 +static inline int kvm_alloc_private_spt_for_split(struct kvm_mmu_page *sp,= gfp_t gfp) +{ + return -ENOMEM; +} + static inline void kvm_mmu_free_private_spt(struct kvm_mmu_page *sp) { } diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 012f270cfb6f..75a2fec7a3fa 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1706,8 +1706,12 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_s= plit(gfp_t gfp, union kvm_mm =20 sp->role =3D role; sp->spt =3D (void *)__get_free_page(gfp); - /* TODO: large page support for private GPA. 
*/ - WARN_ON_ONCE(kvm_mmu_page_role_is_private(role)); + if (kvm_mmu_page_role_is_private(role)) { + if (kvm_alloc_private_spt_for_split(sp, gfp)) { + free_page((unsigned long)sp->spt); + sp->spt =3D NULL; + } + } if (!sp->spt) { kmem_cache_free(mmu_page_header_cache, sp); return NULL; --=20 2.25.1 From nobody Wed Dec 17 07:59:15 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D4CB7CDB465 for ; Mon, 16 Oct 2023 16:33:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234069AbjJPQdE (ORCPT ); Mon, 16 Oct 2023 12:33:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35474 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233853AbjJPQcZ (ORCPT ); Mon, 16 Oct 2023 12:32:25 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 29ABD8276; Mon, 16 Oct 2023 09:23:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697473397; x=1729009397; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=yPrJYT1RFsVP8N8TkF5b62qyZ832rTjB68o4Dhh8k6M=; b=kE5odJjZQZFdxaTFL1shqNytEmohyxDTSjoxbfLZh2Jxl5ECJWtPR8B8 r/VtTFxneuRcYABzfsBWtfjDeP/UIDWlcjNk1NzzZZHJToOnNpGn4AZZb REEaHlPAoDS45+7sUNIRrh5+EbORBNeMjQIVu0G4KmPNbCIiOMStHCrSU D4nlOLEl4lpXN+ulATgdphSZNAfSK2J4sghuoT+wnb1A3iQEdOazdAZzW QSAMfPls3pbElnSRlO6dZml37LsDFRyAhnJdEGHkyD/7pC2JabF1N20Qt gAAZDDAaBHgxJN/WjUapew7d/4qaLPucMCfDnkPFSXdpIl64sPuHT0gmo A==; X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="471793183" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="471793183" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:21:15 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="899569250" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="899569250" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:19:16 -0700 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li Subject: [RFC PATCH v5 11/16] KVM: x86/tdp_mmu: Split the large page when zap leaf Date: Mon, 16 Oct 2023 09:21:02 -0700 Message-Id: <583f8981191122b742bf2b77dab58d1a296da394.1697473009.git.isaku.yamahata@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Xiaoyao Li When TDX enabled, a large page cannot be zapped if it contains mixed pages. In this case, it has to split the large page. 
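To illustrate the zap-time decision described above in isolation: a 2MB private leaf can be dropped whole only when the zap range covers the entire 2MB region and the region is not a mix of private and shared 4KB pages; otherwise the large page has to be split first. The snippet below is a minimal user-space sketch of that intent; must_split_before_zap() and HPAGE_2M_PAGES are illustrative names standing in for the checks done with kvm_hugepage_test_mixed() and KVM_PAGES_PER_HPAGE() in tdp_mmu_zap_leafs(), not KVM code.

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t gfn_t;

#define HPAGE_2M_PAGES	512ULL	/* 4KB pages covered by one 2MB mapping */

static bool must_split_before_zap(gfn_t gfn, gfn_t start, gfn_t end,
				  bool range_is_mixed)
{
	gfn_t head = gfn & ~(HPAGE_2M_PAGES - 1);	/* head gfn of the 2MB page */

	if (range_is_mixed)
		return true;
	/* split unless [start, end) covers the whole 2MB range */
	return head < start || end < head + HPAGE_2M_PAGES;
}

int main(void)
{
	/* gfn 0x200 (2MB aligned) but the zap range ends early: must split */
	return must_split_before_zap(0x200, 0x200, 0x300, false) ? 0 : 1;
}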
Signed-off-by: Xiaoyao Li --- arch/x86/kvm/Kconfig | 1 + arch/x86/kvm/mmu/mmu.c | 6 +-- arch/x86/kvm/mmu/mmu_internal.h | 9 +++++ arch/x86/kvm/mmu/tdp_mmu.c | 68 +++++++++++++++++++++++++++++++-- 4 files changed, 78 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index e08cc4ad5749..8a2d52aba6f0 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -93,6 +93,7 @@ config KVM_INTEL tristate "KVM for Intel (and compatible) processors support" depends on KVM && IA32_FEAT_CTL select KVM_SW_PROTECTED_VM if INTEL_TDX_HOST + select KVM_GENERIC_MEMORY_ATTRIBUTES if INTEL_TDX_HOST select KVM_PRIVATE_MEM if INTEL_TDX_HOST help Provides support for KVM on processors equipped with Intel's VT diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 55574993a13a..a1a9b0bc4f1a 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -7455,8 +7455,8 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *k= vm, return kvm_unmap_gfn_range(kvm, range); } =20 -static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn, - int level) +bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn, + int level) { return lpage_info_slot(gfn, slot, level)->disallow_lpage & KVM_LPAGE_MIXE= D_FLAG; } @@ -7483,7 +7483,7 @@ static bool hugepage_has_attrs(struct kvm *kvm, struc= t kvm_memory_slot *slot, return kvm_range_has_memory_attributes(kvm, start, end, attrs); =20 for (gfn =3D start; gfn < end; gfn +=3D KVM_PAGES_PER_HPAGE(level - 1)) { - if (hugepage_test_mixed(slot, gfn, level - 1) || + if (kvm_hugepage_test_mixed(slot, gfn, level - 1) || attrs !=3D kvm_get_memory_attributes(kvm, gfn)) return false; } diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_interna= l.h index 641d4e777b73..099e8fe929c6 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -461,4 +461,13 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cac= he *mc); void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp); void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *s= p); =20 +#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES +bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int = level); +#else +static inline bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, g= fn_t gfn, int level) +{ + return false; +} +#endif + #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 75a2fec7a3fa..366f1b58af09 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1118,6 +1118,14 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_= mmu_page *sp) return true; } =20 + +static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm, + struct tdp_iter *iter, + bool shared); + +static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter, + struct kvm_mmu_page *sp, bool shared); + /* * If can_yield is true, will release the MMU lock and reschedule if the * scheduler needs the CPU or there is contention on the MMU lock. 
If this @@ -1129,13 +1137,15 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, stru= ct kvm_mmu_page *root, gfn_t start, gfn_t end, bool can_yield, bool flush, bool zap_private) { + bool is_private =3D is_private_sp(root); + struct kvm_mmu_page *split_sp =3D NULL; struct tdp_iter iter; =20 end =3D min(end, tdp_mmu_max_gfn_exclusive()); =20 lockdep_assert_held_write(&kvm->mmu_lock); =20 - WARN_ON_ONCE(zap_private && !is_private_sp(root)); + WARN_ON_ONCE(zap_private && !is_private); if (!zap_private && is_private_sp(root)) return false; =20 @@ -1160,12 +1170,66 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, stru= ct kvm_mmu_page *root, !is_last_spte(iter.old_spte, iter.level)) continue; =20 + if (is_private && kvm_gfn_shared_mask(kvm) && + is_large_pte(iter.old_spte)) { + gfn_t gfn =3D iter.gfn & ~kvm_gfn_shared_mask(kvm); + gfn_t mask =3D KVM_PAGES_PER_HPAGE(iter.level) - 1; + struct kvm_memory_slot *slot; + struct kvm_mmu_page *sp; + + slot =3D gfn_to_memslot(kvm, gfn); + if (kvm_hugepage_test_mixed(slot, gfn, iter.level) || + (gfn & mask) < start || + end < (gfn & mask) + KVM_PAGES_PER_HPAGE(iter.level)) { + WARN_ON_ONCE(!can_yield); + if (split_sp) { + sp =3D split_sp; + split_sp =3D NULL; + sp->role =3D tdp_iter_child_role(&iter); + } else { + WARN_ON(iter.yielded); + if (flush && can_yield) { + kvm_flush_remote_tlbs(kvm); + flush =3D false; + } + sp =3D tdp_mmu_alloc_sp_for_split(kvm, &iter, false); + if (iter.yielded) { + split_sp =3D sp; + continue; + } + } + KVM_BUG_ON(!sp, kvm); + + tdp_mmu_init_sp(sp, iter.sptep, iter.gfn); + if (tdp_mmu_split_huge_page(kvm, &iter, sp, false)) { + kvm_flush_remote_tlbs(kvm); + flush =3D false; + /* force retry on this gfn. */ + iter.yielded =3D true; + } else + flush =3D true; + continue; + } + } + tdp_mmu_iter_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE); flush =3D true; } =20 rcu_read_unlock(); =20 + if (split_sp) { + WARN_ON(!can_yield); + if (flush) { + kvm_flush_remote_tlbs(kvm); + flush =3D false; + } + + write_unlock(&kvm->mmu_lock); + tdp_mmu_free_sp(split_sp); + write_lock(&kvm->mmu_lock); + } + /* * Because this flow zaps _only_ leaf SPTEs, the caller doesn't need * to provide RCU protection as no 'struct kvm_mmu_page' will be freed. @@ -1729,8 +1793,6 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_spli= t(struct kvm *kvm, =20 KVM_BUG_ON(kvm_mmu_page_role_is_private(role) !=3D is_private_sptep(iter->sptep), kvm); - /* TODO: Large page isn't supported for private SPTE yet. 
*/ - KVM_BUG_ON(kvm_mmu_page_role_is_private(role), kvm); =20 /* * Since we are allocating while under the MMU lock we have to be --=20 2.25.1 From nobody Wed Dec 17 07:59:15 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 70533CDB465 for ; Mon, 16 Oct 2023 16:34:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234288AbjJPQek (ORCPT ); Mon, 16 Oct 2023 12:34:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33046 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233895AbjJPQdu (ORCPT ); Mon, 16 Oct 2023 12:33:50 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 46B728277; Mon, 16 Oct 2023 09:23:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697473397; x=1729009397; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=zjEOSTfiX1VxaI5ktnpSnlZHAx5ak9uNUJUeujoIXdY=; b=geQE2Y5O6/WZf8aYXVq+V97Bq9oNJkXGHpu9MzrGGZNMFYqSJp+5uhlu d77fhmeIIsCfY9WHZbvyLdPvZRf20YyU/22Iu4c1jXb4ZMZDugCQ1gwee ZyMNjwDBGWQzUxUrHjo6ZXr6JToEsDS4oi7LoaqiCJQ5D3GDT3HTwPTrd lc/5yZyzrYhVHLhx32drXajzsSYRzdRi+tPBeMTRXgf24Hs6ToghcvD1S 3u84H1HXKy4oVPxislKewzWFH+15GoyNqIMu2rFd4jEoVZheIVy2DsNmh A2qrMboGvk7spwTq/N6QMbYT6kwkhwIeBw2pRx6v6I7WEAUrnXbeO8/UO w==; X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="471793195" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="471793195" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:21:17 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="899569259" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="899569259" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:19:17 -0700 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li Subject: [RFC PATCH v5 12/16] KVM: x86/tdp_mmu, TDX: Split a large page when 4KB page within it converted to shared Date: Mon, 16 Oct 2023 09:21:03 -0700 Message-Id: <3606f99cd9b083cdf44b7bfc81a524ed5ab85031.1697473009.git.isaku.yamahata@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Xiaoyao Li When mapping the shared page for TDX, it needs to zap private alias. In the case that private page is mapped as large page (2MB), it can be removed directly only when the whole 2MB is converted to shared. Otherwise, it has to split 2MB page into 512 4KB page, and only remove the pages that converted to shared. When a present large leaf spte switches to present non-leaf spte, TDX needs to split the corresponding SEPT page to reflect it. 
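The ordering of the split matters: the 2MB private SPTE is first blocked, remote TLBs are flushed (which for TDX includes the TLB tracking step), and only then is the Secure EPT page demoted into 512 4KB entries. The fragment below is only a control-flow sketch of that sequence; the three helpers are placeholders for the SEAMCALL wrappers used by this patch (TDH.MEM.RANGE.BLOCK, TDH.MEM.TRACK, TDH.MEM.PAGE.DEMOTE), not real KVM functions.

#include <stdio.h>

static int block_private_2m(void)  { puts("TDH.MEM.RANGE.BLOCK");   return 0; }
static int track_and_flush(void)   { puts("TDH.MEM.TRACK + flush"); return 0; }
static int demote_private_2m(void) { puts("TDH.MEM.PAGE.DEMOTE");   return 0; }

/* Split one 2MB private mapping into 4KB mappings, in the required order. */
static int split_private_2m(void)
{
	int r;

	r = block_private_2m();
	if (r)
		return r;
	r = track_and_flush();
	if (r)
		return r;
	return demote_private_2m();
}

int main(void)
{
	return split_private_2m();
}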
Signed-off-by: Xiaoyao Li Signed-off-by: Isaku Yamahata --- arch/x86/include/asm/kvm-x86-ops.h | 1 + arch/x86/include/asm/kvm_host.h | 2 ++ arch/x86/kvm/mmu/tdp_mmu.c | 21 ++++++++++++++++----- arch/x86/kvm/vmx/tdx.c | 25 +++++++++++++++++++++++-- arch/x86/kvm/vmx/tdx_arch.h | 1 + arch/x86/kvm/vmx/tdx_ops.h | 7 +++++++ 6 files changed, 50 insertions(+), 7 deletions(-) diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-= x86-ops.h index ba74cb7199b3..d751c8b58c45 100644 --- a/arch/x86/include/asm/kvm-x86-ops.h +++ b/arch/x86/include/asm/kvm-x86-ops.h @@ -103,6 +103,7 @@ KVM_X86_OP_OPTIONAL_RET0(get_mt_mask) KVM_X86_OP(load_mmu_pgd) KVM_X86_OP_OPTIONAL(link_private_spt) KVM_X86_OP_OPTIONAL(free_private_spt) +KVM_X86_OP_OPTIONAL(split_private_spt) KVM_X86_OP_OPTIONAL(set_private_spte) KVM_X86_OP_OPTIONAL(remove_private_spte) KVM_X86_OP_OPTIONAL(zap_private_spte) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index bb2b4f8c0c57..d9ada89c0c55 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1745,6 +1745,8 @@ struct kvm_x86_ops { void *private_spt); int (*free_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level, void *private_spt); + int (*split_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level, + void *private_spt); int (*set_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level, kvm_pfn_t pfn); int (*remove_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level leve= l, diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 366f1b58af09..6828394cedec 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -647,23 +647,34 @@ static int __must_check __set_private_spte_present(st= ruct kvm *kvm, tdp_ptep_t s { bool was_present =3D is_shadow_present_pte(old_spte); bool is_present =3D is_shadow_present_pte(new_spte); + bool was_leaf =3D was_present && is_last_spte(old_spte, level); bool is_leaf =3D is_present && is_last_spte(new_spte, level); kvm_pfn_t new_pfn =3D spte_to_pfn(new_spte); + void *private_spt; int ret =3D 0; =20 lockdep_assert_held(&kvm->mmu_lock); - /* TDP MMU doesn't change present -> present */ - KVM_BUG_ON(was_present, kvm); =20 /* * Use different call to either set up middle level * private page table, or leaf. */ - if (is_leaf) + if (level > PG_LEVEL_4K && was_leaf && !is_leaf) { + /* + * splitting large page into 4KB. 
+ * tdp_mmu_split_huage_page() =3D> tdp_mmu_link_sp() + */ + private_spt =3D get_private_spt(gfn, new_spte, level); + KVM_BUG_ON(!private_spt, kvm); + ret =3D static_call(kvm_x86_zap_private_spte)(kvm, gfn, level); + kvm_flush_remote_tlbs(kvm); + if (!ret) + ret =3D static_call(kvm_x86_split_private_spt)(kvm, gfn, + level, private_spt); + } else if (is_leaf) ret =3D static_call(kvm_x86_set_private_spte)(kvm, gfn, level, new_pfn); else { - void *private_spt =3D get_private_spt(gfn, new_spte, level); - + private_spt =3D get_private_spt(gfn, new_spte, level); KVM_BUG_ON(!private_spt, kvm); ret =3D static_call(kvm_x86_link_private_spt)(kvm, gfn, level, private_s= pt); } diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c index 2c760947ab21..1db56696ad99 100644 --- a/arch/x86/kvm/vmx/tdx.c +++ b/arch/x86/kvm/vmx/tdx.c @@ -1661,6 +1661,28 @@ static int tdx_sept_link_private_spt(struct kvm *kvm= , gfn_t gfn, return 0; } =20 +static int tdx_sept_split_private_spt(struct kvm *kvm, gfn_t gfn, + enum pg_level level, void *private_spt) +{ + int tdx_level =3D pg_level_to_tdx_sept_level(level); + struct kvm_tdx *kvm_tdx =3D to_kvm_tdx(kvm); + gpa_t gpa =3D gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level); + hpa_t hpa =3D __pa(private_spt); + struct tdx_module_args out; + u64 err; + + /* See comment in tdx_sept_set_private_spte() */ + err =3D tdh_mem_page_demote(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out); + if (unlikely(err =3D=3D TDX_ERROR_SEPT_BUSY)) + return -EAGAIN; + if (KVM_BUG_ON(err, kvm)) { + pr_tdx_error(TDH_MEM_PAGE_DEMOTE, err, &out); + return -EIO; + } + + return 0; +} + static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn, enum pg_level level) { @@ -1674,8 +1696,6 @@ static int tdx_sept_zap_private_spte(struct kvm *kvm,= gfn_t gfn, if (unlikely(!is_hkid_assigned(kvm_tdx))) return 0; =20 - /* For now large page isn't supported yet. 
*/ - WARN_ON_ONCE(level !=3D PG_LEVEL_4K); err =3D tdh_mem_range_block(kvm_tdx->tdr_pa, gpa, tdx_level, &out); if (unlikely(err =3D=3D TDX_ERROR_SEPT_BUSY)) return -EAGAIN; @@ -3283,6 +3303,7 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86= _ops) =20 x86_ops->link_private_spt =3D tdx_sept_link_private_spt; x86_ops->free_private_spt =3D tdx_sept_free_private_spt; + x86_ops->split_private_spt =3D tdx_sept_split_private_spt; x86_ops->set_private_spte =3D tdx_sept_set_private_spte; x86_ops->remove_private_spte =3D tdx_sept_remove_private_spte; x86_ops->zap_private_spte =3D tdx_sept_zap_private_spte; diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h index 93934851610b..0c9823fcf829 100644 --- a/arch/x86/kvm/vmx/tdx_arch.h +++ b/arch/x86/kvm/vmx/tdx_arch.h @@ -21,6 +21,7 @@ #define TDH_MNG_CREATE 9 #define TDH_VP_CREATE 10 #define TDH_MNG_RD 11 +#define TDH_MEM_PAGE_DEMOTE 15 #define TDH_MR_EXTEND 16 #define TDH_MR_FINALIZE 17 #define TDH_VP_FLUSH 18 diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h index afc85e7ffb8e..7293510fa2e5 100644 --- a/arch/x86/kvm/vmx/tdx_ops.h +++ b/arch/x86/kvm/vmx/tdx_ops.h @@ -183,6 +183,13 @@ static inline u64 tdh_mng_rd(hpa_t tdr, u64 field, str= uct tdx_module_args *out) return tdx_seamcall(TDH_MNG_RD, tdr, field, 0, 0, out); } =20 +static inline u64 tdh_mem_page_demote(hpa_t tdr, gpa_t gpa, int level, hpa= _t page, + struct tdx_module_args *out) +{ + tdx_clflush_page(page, PG_LEVEL_4K); + return tdx_seamcall_sept(TDH_MEM_PAGE_DEMOTE, gpa | level, tdr, page, 0, = out); +} + static inline u64 tdh_mr_extend(hpa_t tdr, gpa_t gpa, struct tdx_module_args *out) { --=20 2.25.1 From nobody Wed Dec 17 07:59:15 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C624DCDB482 for ; Mon, 16 Oct 2023 16:38:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234407AbjJPQia (ORCPT ); Mon, 16 Oct 2023 12:38:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55752 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234100AbjJPQgU (ORCPT ); Mon, 16 Oct 2023 12:36:20 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9D6B2827C; Mon, 16 Oct 2023 09:23:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697473397; x=1729009397; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=finGYciZPBhc+6okZb2EHt/c3gmBHrbCne0oUELsQm0=; b=IcBbWMMWsQ8lT/mQJfqwO/37cPuKFIrZ/BI8X9g++zTIUjPIW+Nw6dyu 0/EYfk8LzbMZrKETtnW8DQ5w0Mie2+LCLE6MUP4TnD8SYZRtAft3x5zPc VNL5e0uHxMzPktGMbc8wKagyUkTE8UyP/DXHvmsf6wQTnb9Cy5NEMMRFy yka3IBP/5RhE7KTa4sXvgr+v4C2llZNVUY/vjGU/Pyqx8eC532V+BNnyR YV2HHiVgpmomOIeEi/AA22PHWzO4p1+cG1SpB66HM5f5ehpUbVt+5c0BR 1NpQSjrVjg3fVIeAY7Sp/WAAgcUZShCTEsP1duTPD+1vhd9VfL4RM6u6f g==; X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="471793202" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="471793202" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:21:17 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="899569264" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; 
d="scan'208";a="899569264" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:19:17 -0700 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com Subject: [RFC PATCH v5 13/16] KVM: x86/tdp_mmu: Try to merge pages into a large page Date: Mon, 16 Oct 2023 09:21:04 -0700 Message-Id: <0ef14edefb39ecfbd7ca72b0a68ff09a885e7b35.1697473009.git.isaku.yamahata@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Isaku Yamahata When a large page is passed to the KVM page fault handler and some of sub pages are already populated, try to merge sub pages into a large page. This situation can happen when the guest converts small pages into shared and convert it back into private. When a large page is passed to KVM mmu page fault handler and the spte corresponding to the page is non-leaf (one or more of sub pages are already populated at lower page level), the current kvm mmu zaps non-leaf spte at a large page level, and populate a leaf spte at that level. Thus small pages are converted into a large page. However, it doesn't work for TDX because zapping and re-populating results in zeroing page content. Instead, populate all small pages and merge them into a large page. Merging pages into a large page can fail when some sub pages are accepted and some are not. In such case, with the assumption that guest tries to accept at large page size for performance when possible, don't try to be smart to identify which page is still pending, map all pages at lower page level, and let vcpu re-execute. 
Signed-off-by: Isaku Yamahata --- arch/x86/include/asm/kvm-x86-ops.h | 2 + arch/x86/include/asm/kvm_host.h | 4 + arch/x86/kvm/mmu/tdp_iter.c | 37 +++++-- arch/x86/kvm/mmu/tdp_iter.h | 2 + arch/x86/kvm/mmu/tdp_mmu.c | 172 ++++++++++++++++++++++++++++- 5 files changed, 207 insertions(+), 10 deletions(-) diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-= x86-ops.h index d751c8b58c45..6be5e78f5c41 100644 --- a/arch/x86/include/asm/kvm-x86-ops.h +++ b/arch/x86/include/asm/kvm-x86-ops.h @@ -104,9 +104,11 @@ KVM_X86_OP(load_mmu_pgd) KVM_X86_OP_OPTIONAL(link_private_spt) KVM_X86_OP_OPTIONAL(free_private_spt) KVM_X86_OP_OPTIONAL(split_private_spt) +KVM_X86_OP_OPTIONAL(merge_private_spt) KVM_X86_OP_OPTIONAL(set_private_spte) KVM_X86_OP_OPTIONAL(remove_private_spte) KVM_X86_OP_OPTIONAL(zap_private_spte) +KVM_X86_OP_OPTIONAL(unzap_private_spte) KVM_X86_OP(has_wbinvd_exit) KVM_X86_OP(get_l2_tsc_offset) KVM_X86_OP(get_l2_tsc_multiplier) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index d9ada89c0c55..35232f5158b0 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -139,6 +139,7 @@ #define KVM_MAX_HUGEPAGE_LEVEL PG_LEVEL_1G #define KVM_NR_PAGE_SIZES (KVM_MAX_HUGEPAGE_LEVEL - PG_LEVEL_4K + 1) #define KVM_HPAGE_GFN_SHIFT(x) (((x) - 1) * 9) +#define KVM_HPAGE_GFN_MASK(x) (~((1UL << KVM_HPAGE_GFN_SHIFT(x)) - 1)) #define KVM_HPAGE_SHIFT(x) (PAGE_SHIFT + KVM_HPAGE_GFN_SHIFT(x)) #define KVM_HPAGE_SIZE(x) (1UL << KVM_HPAGE_SHIFT(x)) #define KVM_HPAGE_MASK(x) (~(KVM_HPAGE_SIZE(x) - 1)) @@ -1747,11 +1748,14 @@ struct kvm_x86_ops { void *private_spt); int (*split_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level, void *private_spt); + int (*merge_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level, + void *private_spt); int (*set_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level, kvm_pfn_t pfn); int (*remove_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level leve= l, kvm_pfn_t pfn); int (*zap_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level); + int (*unzap_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level= ); =20 bool (*has_wbinvd_exit)(void); =20 diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c index bd30ebfb2f2c..f33226fcd62a 100644 --- a/arch/x86/kvm/mmu/tdp_iter.c +++ b/arch/x86/kvm/mmu/tdp_iter.c @@ -71,6 +71,14 @@ tdp_ptep_t spte_to_child_pt(u64 spte, int level) return (tdp_ptep_t)__va(spte_to_pfn(spte) << PAGE_SHIFT); } =20 +static void step_down(struct tdp_iter *iter, tdp_ptep_t child_pt) +{ + iter->level--; + iter->pt_path[iter->level - 1] =3D child_pt; + iter->gfn =3D gfn_round_for_level(iter->next_last_level_gfn, iter->level); + tdp_iter_refresh_sptep(iter); +} + /* * Steps down one level in the paging structure towards the goal GFN. Retu= rns * true if the iterator was able to step down a level, false otherwise. @@ -92,14 +100,28 @@ static bool try_step_down(struct tdp_iter *iter) if (!child_pt) return false; =20 - iter->level--; - iter->pt_path[iter->level - 1] =3D child_pt; - iter->gfn =3D gfn_round_for_level(iter->next_last_level_gfn, iter->level); - tdp_iter_refresh_sptep(iter); - + step_down(iter, child_pt); return true; } =20 +/* Steps down for freezed spte. Don't re-read sptep because it was freeze= d. 
*/ +void tdp_iter_step_down(struct tdp_iter *iter, tdp_ptep_t child_pt) +{ + WARN_ON_ONCE(!child_pt); + WARN_ON_ONCE(iter->yielded); + WARN_ON_ONCE(iter->level =3D=3D iter->min_level); + + step_down(iter, child_pt); +} + +void tdp_iter_step_side(struct tdp_iter *iter) +{ + iter->gfn +=3D KVM_PAGES_PER_HPAGE(iter->level); + iter->next_last_level_gfn =3D iter->gfn; + iter->sptep++; + iter->old_spte =3D kvm_tdp_mmu_read_spte(iter->sptep); +} + /* * Steps to the next entry in the current page table, at the current page = table * level. The next entry could point to a page backing guest memory or ano= ther @@ -117,10 +139,7 @@ static bool try_step_side(struct tdp_iter *iter) (SPTE_ENT_PER_PAGE - 1)) return false; =20 - iter->gfn +=3D KVM_PAGES_PER_HPAGE(iter->level); - iter->next_last_level_gfn =3D iter->gfn; - iter->sptep++; - iter->old_spte =3D kvm_tdp_mmu_read_spte(iter->sptep); + tdp_iter_step_side(iter); =20 return true; } diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h index a9c9cd0db20a..ca00db799a50 100644 --- a/arch/x86/kvm/mmu/tdp_iter.h +++ b/arch/x86/kvm/mmu/tdp_iter.h @@ -134,6 +134,8 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_m= mu_page *root, int min_level, gfn_t next_last_level_gfn); void tdp_iter_next(struct tdp_iter *iter); void tdp_iter_restart(struct tdp_iter *iter); +void tdp_iter_step_side(struct tdp_iter *iter); +void tdp_iter_step_down(struct tdp_iter *iter, tdp_ptep_t child_pt); =20 static inline union kvm_mmu_page_role tdp_iter_child_role(struct tdp_iter = *iter) { diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 6828394cedec..9cb63613d831 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1351,6 +1351,176 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *k= vm, bool skip_private) rcu_read_unlock(); } =20 +static int tdp_mmu_iter_step_side(int i, struct tdp_iter *iter) +{ + i++; + + /* + * if i =3D SPTE_ENT_PER_PAGE, tdp_iter_step_side() results + * in reading the entry beyond the last entry. + */ + if (i < SPTE_ENT_PER_PAGE) + tdp_iter_step_side(iter); + + return i; +} + +static int tdp_mmu_merge_private_spt(struct kvm_vcpu *vcpu, + struct kvm_page_fault *fault, + struct tdp_iter *iter, u64 new_spte) +{ + u64 *sptep =3D rcu_dereference(iter->sptep); + u64 old_spte =3D iter->old_spte; + struct kvm_mmu_page *child_sp; + struct kvm *kvm =3D vcpu->kvm; + struct tdp_iter child_iter; + int level =3D iter->level; + gfn_t gfn =3D iter->gfn; + tdp_ptep_t child_pt; + u64 child_spte; + int ret =3D 0; + int i; + + /* + * TDX KVM supports only 2MB large page. It's not supported to merge + * 2MB pages into 1GB page at the moment. + */ + WARN_ON_ONCE(fault->goal_level !=3D PG_LEVEL_2M); + WARN_ON_ONCE(iter->level !=3D PG_LEVEL_2M); + WARN_ON_ONCE(!is_large_pte(new_spte)); + + /* Freeze the spte to prevent other threads from working spte. */ + if (!try_cmpxchg64(sptep, &iter->old_spte, REMOVED_SPTE)) + return -EBUSY; + + /* + * Step down to the child spte. Because tdp_iter_next() assumes the + * parent spte isn't freezed, do it manually. + */ + child_pt =3D spte_to_child_pt(iter->old_spte, iter->level); + child_sp =3D sptep_to_sp(child_pt); + WARN_ON_ONCE(child_sp->role.level !=3D PG_LEVEL_4K); + WARN_ON_ONCE(!kvm_mmu_page_role_is_private(child_sp->role)); + + /* Don't modify iter as the caller will use iter after this function. */ + child_iter =3D *iter; + /* Adjust the target gfn to the head gfn of the large page. 
*/ + child_iter.next_last_level_gfn &=3D -KVM_PAGES_PER_HPAGE(level); + tdp_iter_step_down(&child_iter, child_pt); + + /* + * All child pages are required to be populated for merging them into a + * large page. Populate all child spte. + */ + for (i =3D 0; i < SPTE_ENT_PER_PAGE; i =3D tdp_mmu_iter_step_side(i, &chi= ld_iter)) { + int tmp; + + WARN_ON_ONCE(child_iter.level !=3D PG_LEVEL_4K); + + if (is_shadow_present_pte(child_iter.old_spte)) { + /* TODO: relocate page for huge page. */ + if (WARN_ON_ONCE(spte_to_pfn(child_iter.old_spte) !=3D + spte_to_pfn(new_spte) + i)) { + if (!ret) + ret =3D -EAGAIN; + continue; + } + /* + * When SEPT_VE_DISABLE=3Dtrue and the page state is + * pending, this case can happen. Just resume the vcpu + * again with the expectation for other vcpu to accept + * this page. + */ + if (child_iter.gfn =3D=3D fault->gfn) { + if (!ret) + ret =3D -EAGAIN; + } + continue; + } + + child_spte =3D make_huge_page_split_spte(kvm, new_spte, child_sp->role, = i); + /* + * Because other thread may have started to operate on this spte + * before freezing the parent spte, Use atomic version to + * prevent race. + */ + tmp =3D tdp_mmu_set_spte_atomic(vcpu->kvm, &child_iter, child_spte); + if (tmp =3D=3D -EBUSY || tmp =3D=3D -EAGAIN) { + /* + * There was a race condition. Populate remaining 4K + * spte to resolve fault->gfn to guarantee the forward + * progress. + */ + if (!ret) + ret =3D tmp; + } else if (tmp) { + ret =3D tmp; + goto out; + } + } + if (ret) + goto out; + + /* Prevent the Secure-EPT entry from being used. */ + ret =3D static_call(kvm_x86_zap_private_spte)(kvm, gfn, level); + if (ret) + goto out; + kvm_flush_remote_tlbs_range(kvm, gfn & KVM_HPAGE_GFN_MASK(level), + KVM_PAGES_PER_HPAGE(level)); + + /* Merge pages into a large page. */ + ret =3D static_call(kvm_x86_merge_private_spt)(kvm, gfn, level, + kvm_mmu_private_spt(child_sp)); + /* + * Failed to merge pages because some pages are accepted and some are + * pending. Since the child page was mapped above, let vcpu run. + */ + if (ret) { + if (static_call(kvm_x86_unzap_private_spte)(kvm, gfn, level)) + old_spte =3D SHADOW_NONPRESENT_VALUE | + (spte_to_pfn(old_spte) << PAGE_SHIFT) | + PT_PAGE_SIZE_MASK; + goto out; + } + + /* Unfreeze spte. */ + iter->old_spte =3D new_spte; + __kvm_tdp_mmu_write_spte(sptep, new_spte); + + /* + * Free unused child sp. Secure-EPT page was already freed at TDX level + * by kvm_x86_merge_private_spt(). + */ + tdp_unaccount_mmu_page(kvm, child_sp); + tdp_mmu_free_sp(child_sp); + return -EAGAIN; + +out: + iter->old_spte =3D old_spte; + __kvm_tdp_mmu_write_spte(sptep, old_spte); + return ret; +} + +static int __tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, + struct kvm_page_fault *fault, + struct tdp_iter *iter, u64 new_spte) +{ + /* + * The private page has smaller-size pages. For example, the child + * pages was converted from shared to page, and now it can be mapped as + * a large page. Try to merge small pages into a large page. + */ + if (fault->slot && + kvm_gfn_shared_mask(vcpu->kvm) && + iter->level > PG_LEVEL_4K && + kvm_is_private_gpa(vcpu->kvm, fault->addr) && + is_shadow_present_pte(iter->old_spte) && + !is_large_pte(iter->old_spte)) + return tdp_mmu_merge_private_spt(vcpu, fault, iter, new_spte); + + return tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte); +} + /* * Installs a last-level SPTE to handle a TDP page fault. 
* (NPT/EPT violation/misconfiguration) @@ -1392,7 +1562,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm= _vcpu *vcpu, =20 if (new_spte =3D=3D iter->old_spte) ret =3D RET_PF_SPURIOUS; - else if (tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte)) + else if (__tdp_mmu_map_handle_target_level(vcpu, fault, iter, new_spte)) return RET_PF_RETRY; else if (is_shadow_present_pte(iter->old_spte) && !is_last_spte(iter->old_spte, iter->level)) --=20 2.25.1 From nobody Wed Dec 17 07:59:15 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5A697CDB465 for ; Mon, 16 Oct 2023 16:36:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234439AbjJPQgj (ORCPT ); Mon, 16 Oct 2023 12:36:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33006 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234366AbjJPQfq (ORCPT ); Mon, 16 Oct 2023 12:35:46 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 731F98279; Mon, 16 Oct 2023 09:23:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697473397; x=1729009397; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Kvkqy/cP7bWGs8zyVnGaL4oZutpS6cXKUkDMo9VqnfE=; b=QXuNZzLML+Q5d+7R0Qjk+5lOLit3ZMS3U+SL/p9fwrcAKUNQGdA8CXcI bigRbIJ4i4z1HW09c2tEx/FpbVce0YISfC13gewEIqEqq9Rgo9X7hNWC0 C0obkYsQn9Z+HFrzEUpbT5ELYwbteRAhl4spJjO2AiNHqOTTyzpiUWdRF nxJZCS/+I9XAdROhqgZntZkh76/NmxJOUJd6Jqa6EQhtA4Z73hCegfVRs F0Jy/Jy74W9mV/HcEJAlzVjurisGo6fuIKARBlOmIXrVd41AyMS7VHVc2 CQ4ojgC0QXvcYwClJ8iVcZWzPu8q3r1h6Jl3CIpXi/tv/ehzRMw+RmM5E w==; X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="471793210" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="471793210" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:21:17 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="899569269" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="899569269" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:19:18 -0700 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com Subject: [RFC PATCH v5 14/16] KVM: x86/tdp_mmu: TDX: Implement merge pages into a large page Date: Mon, 16 Oct 2023 09:21:05 -0700 Message-Id: <341dc50c854d078c91097dd3145a55fc2c50625c.1697473009.git.isaku.yamahata@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Isaku Yamahata Implement merge_private_stp callback. 
Signed-off-by: Isaku Yamahata --- arch/x86/kvm/vmx/tdx.c | 72 ++++++++++++++++++++++++++++++++++++ arch/x86/kvm/vmx/tdx_arch.h | 1 + arch/x86/kvm/vmx/tdx_errno.h | 2 + arch/x86/kvm/vmx/tdx_ops.h | 6 +++ 4 files changed, 81 insertions(+) diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c index 1db56696ad99..2627dcf240cc 100644 --- a/arch/x86/kvm/vmx/tdx.c +++ b/arch/x86/kvm/vmx/tdx.c @@ -1683,6 +1683,49 @@ static int tdx_sept_split_private_spt(struct kvm *kv= m, gfn_t gfn, return 0; } =20 +static int tdx_sept_merge_private_spt(struct kvm *kvm, gfn_t gfn, + enum pg_level level, void *private_spt) +{ + int tdx_level =3D pg_level_to_tdx_sept_level(level); + struct kvm_tdx *kvm_tdx =3D to_kvm_tdx(kvm); + struct tdx_module_args out; + gpa_t gpa =3D gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level); + u64 err; + + /* See comment in tdx_sept_set_private_spte() */ + err =3D tdh_mem_page_promote(kvm_tdx->tdr_pa, gpa, tdx_level, &out); + if (unlikely(err =3D=3D TDX_ERROR_SEPT_BUSY)) + return -EAGAIN; + if (unlikely(err =3D=3D (TDX_EPT_INVALID_PROMOTE_CONDITIONS | + TDX_OPERAND_ID_RCX))) + /* + * Some pages are accepted, some pending. Need to wait for TD + * to accept all pages. Tell it the caller. + */ + return -EAGAIN; + if (KVM_BUG_ON(err, kvm)) { + pr_tdx_error(TDH_MEM_PAGE_PROMOTE, err, &out); + return -EIO; + } + WARN_ON_ONCE(out.rcx !=3D __pa(private_spt)); + + /* + * TDH.MEM.PAGE.PROMOTE frees the Secure-EPT page for the lower level. + * Flush cache for reuse. + */ + do { + err =3D tdh_phymem_page_wbinvd(set_hkid_to_hpa(__pa(private_spt), + to_kvm_tdx(kvm)->hkid)); + } while (unlikely(err =3D=3D (TDX_OPERAND_BUSY | TDX_OPERAND_ID_RCX))); + if (WARN_ON_ONCE(err)) { + pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL); + return -EIO; + } + + tdx_clear_page(__pa(private_spt), PAGE_SIZE); + return 0; +} + static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn, enum pg_level level) { @@ -1757,6 +1800,33 @@ static void tdx_track(struct kvm *kvm) =20 } =20 +static int tdx_sept_unzap_private_spte(struct kvm *kvm, gfn_t gfn, + enum pg_level level) +{ + int tdx_level =3D pg_level_to_tdx_sept_level(level); + struct kvm_tdx *kvm_tdx =3D to_kvm_tdx(kvm); + gpa_t gpa =3D gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level); + struct tdx_module_args out; + u64 err; + + do { + err =3D tdh_mem_range_unblock(kvm_tdx->tdr_pa, gpa, tdx_level, &out); + + /* + * tdh_mem_range_block() is accompanied with tdx_track() via kvm + * remote tlb flush. Wait for the caller of + * tdh_mem_range_block() to complete TDX track. 
+ */ + } while (err =3D=3D (TDX_TLB_TRACKING_NOT_DONE | TDX_OPERAND_ID_SEPT)); + if (unlikely(err =3D=3D TDX_ERROR_SEPT_BUSY)) + return -EAGAIN; + if (KVM_BUG_ON(err, kvm)) { + pr_tdx_error(TDH_MEM_RANGE_UNBLOCK, err, &out); + return -EIO; + } + return 0; +} + static int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn, enum pg_level level, void *private_spt) { @@ -3304,9 +3374,11 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x8= 6_ops) x86_ops->link_private_spt =3D tdx_sept_link_private_spt; x86_ops->free_private_spt =3D tdx_sept_free_private_spt; x86_ops->split_private_spt =3D tdx_sept_split_private_spt; + x86_ops->merge_private_spt =3D tdx_sept_merge_private_spt; x86_ops->set_private_spte =3D tdx_sept_set_private_spte; x86_ops->remove_private_spte =3D tdx_sept_remove_private_spte; x86_ops->zap_private_spte =3D tdx_sept_zap_private_spte; + x86_ops->unzap_private_spte =3D tdx_sept_unzap_private_spte; =20 return 0; =20 diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h index 0c9823fcf829..aa9c927e4adc 100644 --- a/arch/x86/kvm/vmx/tdx_arch.h +++ b/arch/x86/kvm/vmx/tdx_arch.h @@ -29,6 +29,7 @@ #define TDH_MNG_KEY_FREEID 20 #define TDH_MNG_INIT 21 #define TDH_VP_INIT 22 +#define TDH_MEM_PAGE_PROMOTE 23 #define TDH_MEM_SEPT_RD 25 #define TDH_VP_RD 26 #define TDH_MNG_KEY_RECLAIMID 27 diff --git a/arch/x86/kvm/vmx/tdx_errno.h b/arch/x86/kvm/vmx/tdx_errno.h index dbee050b2356..99424336f534 100644 --- a/arch/x86/kvm/vmx/tdx_errno.h +++ b/arch/x86/kvm/vmx/tdx_errno.h @@ -23,6 +23,8 @@ #define TDX_FLUSHVP_NOT_DONE 0x8000082400000000ULL #define TDX_EPT_WALK_FAILED 0xC0000B0000000000ULL #define TDX_EPT_ENTRY_NOT_FREE 0xC0000B0200000000ULL +#define TDX_TLB_TRACKING_NOT_DONE 0xC0000B0800000000ULL +#define TDX_EPT_INVALID_PROMOTE_CONDITIONS 0xC0000B0900000000ULL =20 /* * TDG.VP.VMCALL Status Codes (returned in R10) diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h index 7293510fa2e5..3094008ba390 100644 --- a/arch/x86/kvm/vmx/tdx_ops.h +++ b/arch/x86/kvm/vmx/tdx_ops.h @@ -190,6 +190,12 @@ static inline u64 tdh_mem_page_demote(hpa_t tdr, gpa_t= gpa, int level, hpa_t pag return tdx_seamcall_sept(TDH_MEM_PAGE_DEMOTE, gpa | level, tdr, page, 0, = out); } =20 +static inline u64 tdh_mem_page_promote(hpa_t tdr, gpa_t gpa, int level, + struct tdx_module_args *out) +{ + return tdx_seamcall_sept(TDH_MEM_PAGE_PROMOTE, gpa | level, tdr, 0, 0, ou= t); +} + static inline u64 tdh_mr_extend(hpa_t tdr, gpa_t gpa, struct tdx_module_args *out) { --=20 2.25.1 From nobody Wed Dec 17 07:59:15 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 560FECDB465 for ; Mon, 16 Oct 2023 16:38:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234468AbjJPQis (ORCPT ); Mon, 16 Oct 2023 12:38:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52920 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234457AbjJPQhg (ORCPT ); Mon, 16 Oct 2023 12:37:36 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B5B8783C2; Mon, 16 Oct 2023 09:23:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697473398; x=1729009398; h=from:to:cc:subject:date:message-id:in-reply-to: 
references:mime-version:content-transfer-encoding; bh=plOWEmHcszC/a1uSmdrAZhY93JEutDgD1TMhrZRF+cs=; b=UklVa7/QuN42L2VjQe4Etd7jcGFaES/o1tShzuOpLGXN6GzbYeKDjOCS EmpMHpx4OzKkga7OU0yZ5yH/aUJl0tJ00/uNNlvTUWhTaQN5hfxB8giXL QoJgbe2wbq6r4VCI4s0nJPdu/zKaBFqW5m1wnfsxPVb7PvL9a83JJEUw8 XYEOTFuss2NPq8icpGKWPe9HBZFjEaOAmxGYd827TZdUvYbVg0fYI+Zyw XnOjnFwrse/pidg9+nR5mIMAWunKxRSnMMJ7EUY2bqpqcz6rK7Bz9rFV3 znZLQYHK/eBMmKERED2I5W36CKPgaqgeQm9s4fE0nI2Q/UgBUBLw2TPmU Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="471793214" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="471793214" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:21:17 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="899569273" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="899569273" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:19:18 -0700 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com Subject: [RFC PATCH v5 15/16] KVM: x86/mmu: Make kvm fault handler aware of large page of private memslot Date: Mon, 16 Oct 2023 09:21:06 -0700 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Isaku Yamahata struct kvm_page_fault.req_level is the page level which takes care of the faulted-in page size. For now its calculation is only for the conventional kvm memslot by host_pfn_mapping_level() that traverses page table. However, host_pfn_mapping_level() cannot be used for private kvm memslot because pages of private kvm memlost aren't mapped into user virtual address space. Instead page order is given when getting pfn. Remember it in struct kvm_page_fault and use it. 
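As a rough illustration of how the allocation order returned for a private gfn becomes a mapping level: order 9 (512 contiguous 4KB pages) allows a 2MB mapping, order 18 a 1GB mapping, and the result is then clamped by fault->max_level. The standalone sketch below only shows those thresholds and the clamp; the real kvm_max_level_for_order() referenced by this patch lives in mmu.c, and the enum values here mirror the kernel's pg_level definitions.

#include <stdint.h>

enum pg_level { PG_LEVEL_NONE, PG_LEVEL_4K, PG_LEVEL_2M, PG_LEVEL_1G };

static enum pg_level max_level_for_order(int order)
{
	if (order >= 18)	/* 2^18 4KB pages = 1GB */
		return PG_LEVEL_1G;
	if (order >= 9)		/* 2^9 4KB pages = 2MB */
		return PG_LEVEL_2M;
	return PG_LEVEL_4K;
}

static enum pg_level private_fault_level(int order, enum pg_level max_level)
{
	enum pg_level host_level = max_level_for_order(order);

	/* remember host_level for the fault, then clamp by max_level */
	return host_level < max_level ? host_level : max_level;
}

int main(void)
{
	/* a 2MB (order-9) gmem allocation with max_level = PG_LEVEL_2M */
	return private_fault_level(9, PG_LEVEL_2M) == PG_LEVEL_2M ? 0 : 1;
}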
Signed-off-by: Isaku Yamahata --- arch/x86/kvm/mmu/mmu.c | 34 +++++++++++++++++---------------- arch/x86/kvm/mmu/mmu_internal.h | 12 +++++++++++- arch/x86/kvm/mmu/tdp_mmu.c | 2 +- 3 files changed, 30 insertions(+), 18 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index a1a9b0bc4f1a..e77cfe133dfe 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3158,10 +3158,10 @@ static int host_pfn_mapping_level(struct kvm *kvm, = gfn_t gfn, =20 static int __kvm_mmu_max_mapping_level(struct kvm *kvm, const struct kvm_memory_slot *slot, - gfn_t gfn, int max_level, bool is_private) + gfn_t gfn, int max_level, int host_level, + bool is_private) { struct kvm_lpage_info *linfo; - int host_level; =20 max_level =3D min(max_level, max_huge_page_level); for ( ; max_level > PG_LEVEL_4K; max_level--) { @@ -3170,24 +3170,23 @@ static int __kvm_mmu_max_mapping_level(struct kvm *= kvm, break; } =20 - if (is_private) - return max_level; - if (max_level =3D=3D PG_LEVEL_4K) return PG_LEVEL_4K; =20 - host_level =3D host_pfn_mapping_level(kvm, gfn, slot); + if (!is_private) { + WARN_ON_ONCE(host_level !=3D PG_LEVEL_NONE); + host_level =3D host_pfn_mapping_level(kvm, gfn, slot); + } + WARN_ON_ONCE(host_level =3D=3D PG_LEVEL_NONE); return min(host_level, max_level); } =20 int kvm_mmu_max_mapping_level(struct kvm *kvm, const struct kvm_memory_slot *slot, gfn_t gfn, - int max_level) + int max_level, bool faultin_private) { - bool is_private =3D kvm_slot_can_be_private(slot) && - kvm_mem_is_private(kvm, gfn); - - return __kvm_mmu_max_mapping_level(kvm, slot, gfn, max_level, is_private); + return __kvm_mmu_max_mapping_level(kvm, slot, gfn, max_level, + PG_LEVEL_NONE, faultin_private); } =20 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault = *fault) @@ -3212,7 +3211,8 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, s= truct kvm_page_fault *fault */ fault->req_level =3D __kvm_mmu_max_mapping_level(vcpu->kvm, slot, fault->gfn, fault->max_level, - fault->is_private); + fault->host_level, + kvm_is_faultin_private(fault)); if (fault->req_level =3D=3D PG_LEVEL_4K || fault->huge_page_disallowed) return; =20 @@ -4328,6 +4328,7 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *v= cpu, struct kvm_page_fault *fault) { int max_order, r; + u8 max_level; =20 if (!kvm_slot_can_be_private(fault->slot)) { kvm_mmu_prepare_memory_fault_exit(vcpu, fault); @@ -4341,8 +4342,9 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *v= cpu, return r; } =20 - fault->max_level =3D min(kvm_max_level_for_order(max_order), - fault->max_level); + max_level =3D kvm_max_level_for_order(max_order); + fault->host_level =3D max_level; + fault->max_level =3D min(max_level, fault->max_level); fault->map_writable =3D !(fault->slot->flags & KVM_MEM_READONLY); =20 return RET_PF_CONTINUE; @@ -4392,7 +4394,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, s= truct kvm_page_fault *fault return -EFAULT; } =20 - if (fault->is_private) + if (kvm_is_faultin_private(fault)) return kvm_faultin_pfn_private(vcpu, fault); =20 async =3D false; @@ -6804,7 +6806,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *= kvm, */ if (sp->role.direct && sp->role.level < kvm_mmu_max_mapping_level(kvm, slot, sp->gfn, - PG_LEVEL_NUM)) { + PG_LEVEL_NUM, false)) { kvm_zap_one_rmap_spte(kvm, rmap_head, sptep); =20 if (kvm_available_flush_remote_tlbs_range()) diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_interna= l.h index 099e8fe929c6..813d405fc11e 100644 --- 
a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -358,6 +358,9 @@ struct kvm_page_fault { * is changing its own translation in the guest page tables. */ bool write_fault_to_shadow_pgtable; + + /* valid only for private memslot && private gfn */ + enum pg_level host_level; }; =20 int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault= ); @@ -452,7 +455,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu= *vcpu, gpa_t cr2_or_gpa, =20 int kvm_mmu_max_mapping_level(struct kvm *kvm, const struct kvm_memory_slot *slot, gfn_t gfn, - int max_level); + int max_level, bool faultin_private); void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault = *fault); void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, in= t cur_level); =20 @@ -470,4 +473,11 @@ static inline bool kvm_hugepage_test_mixed(struct kvm_= memory_slot *slot, gfn_t g } #endif =20 +static inline bool kvm_is_faultin_private(const struct kvm_page_fault *fau= lt) +{ + if (IS_ENABLED(CONFIG_KVM_GENERIC_PRIVATE_MEM)) + return fault->is_private && kvm_slot_can_be_private(fault->slot); + return false; +} + #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 9cb63613d831..cac48881c5f1 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -2302,7 +2302,7 @@ static void zap_collapsible_spte_range(struct kvm *kv= m, continue; =20 max_mapping_level =3D kvm_mmu_max_mapping_level(kvm, slot, - iter.gfn, PG_LEVEL_NUM); + iter.gfn, PG_LEVEL_NUM, false); if (max_mapping_level < iter.level) continue; =20 --=20 2.25.1 From nobody Wed Dec 17 07:59:15 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 25F9CCDB482 for ; Mon, 16 Oct 2023 16:38:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234179AbjJPQiM (ORCPT ); Mon, 16 Oct 2023 12:38:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41288 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234456AbjJPQhg (ORCPT ); Mon, 16 Oct 2023 12:37:36 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B3D1A83C0; Mon, 16 Oct 2023 09:23:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697473398; x=1729009398; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=B72YDEO70rKyCktJVODQJPRxRRKceKPZ5dkJZRqrFkA=; b=ChweWF400kKXogUbCsz/6iO2/3nfrcalzm3xrKzrfkdj9R4aM8sNJ7HC 0nOiTdoou5aFqcO4MIKVF+C6Msy/ooumSWx2q5ly4elB1xmH/3iSI5Fjz mMVo2qlgrsYeBqHxMzj8jyd5QvY1tg+klifW0JgR5bu5AvIlo2tb4w2Xw dQ7nzqiBYYlaShDXg/XxnO+9AQ1trhkSkEwcgmazxzFUPHmXAN1qfFeYz Qxm1TbyKg/q02BfYvDAHo0nQybqJjTv/mvnMyBwwVBhrXhVXoFx/L/uas 3vp6a8RjJ8yVOapqvBTAaJui1gqOm5hPM2mAxueiKMZPo9qLj7jXSOw0X g==; X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="471793226" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; d="scan'208";a="471793226" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:21:18 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10865"; a="899569280" X-IronPort-AV: E=Sophos;i="6.03,229,1694761200"; 
d="scan'208";a="899569280" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Oct 2023 09:19:18 -0700 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li Subject: [RFC PATCH v5 16/16] KVM: TDX: Allow 2MB large page for TD GUEST Date: Mon, 16 Oct 2023 09:21:07 -0700 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Xiaoyao Li Now that everything is there to support 2MB page for TD guest. Because TDX module TDH.MEM.PAGE.AUG supports 4KB page and 2MB page, set struct kvm_arch.tdp_max_page_level to 2MB page level. Signed-off-by: Xiaoyao Li Signed-off-by: Isaku Yamahata --- arch/x86/kvm/mmu/tdp_mmu.c | 9 ++------- arch/x86/kvm/vmx/tdx.c | 4 ++-- 2 files changed, 4 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index cac48881c5f1..4158ca4612fa 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1687,14 +1687,9 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kv= m_page_fault *fault) =20 sp->nx_huge_page_disallowed =3D fault->huge_page_disallowed; =20 - if (is_shadow_present_pte(iter.old_spte)) { - /* - * TODO: large page support. - * Doesn't support large page for TDX now - */ - KVM_BUG_ON(is_private_sptep(iter.sptep), vcpu->kvm); + if (is_shadow_present_pte(iter.old_spte)) r =3D tdp_mmu_split_huge_page(kvm, &iter, sp, true); - } else + else r =3D tdp_mmu_link_sp(kvm, &iter, sp, true); =20 /* diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c index 2627dcf240cc..648bd2636ff3 100644 --- a/arch/x86/kvm/vmx/tdx.c +++ b/arch/x86/kvm/vmx/tdx.c @@ -565,8 +565,8 @@ int tdx_vm_init(struct kvm *kvm) */ kvm_mmu_set_mmio_spte_value(kvm, 0); =20 - /* TODO: Enable 2mb and 1gb large page support. */ - kvm->arch.tdp_max_page_level =3D PG_LEVEL_4K; + /* TDH.MEM.PAGE.AUG supports up to 2MB page. */ + kvm->arch.tdp_max_page_level =3D PG_LEVEL_2M; =20 /* * This function initializes only KVM software construct. It doesn't --=20 2.25.1