From nobody Mon Sep 15 09:47:25 2025
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack, Xiaoyao Li
Subject: [RFC PATCH v3 01/16] KVM: TDP_MMU: Go to next level if smaller private mapping exists
Date: Thu, 12 Jan 2023 08:43:53 -0800
Message-Id: <588de42a732a068bdcff46494865c0345d289bb2.1673541292.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

A private page cannot be mapped as a large page if any smaller mapping
already exists inside its range. The fault handler has to fall through
to the next smaller level, wait until all of the not-yet-mapped smaller
pages are mapped, and only then promote them to the larger mapping. A
standalone sketch of this rule follows.
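As an illustration only (this is not KVM's code; smaller_mapping_exists()
is a hypothetical stand-in for the TDP iterator's check against already
present SPTEs), the fall-through rule can be sketched in standalone C:

#include <stdbool.h>

enum pg_level { PG_LEVEL_NONE, PG_LEVEL_4K, PG_LEVEL_2M, PG_LEVEL_1G };

/* Hypothetical stand-in: is anything already mapped at a smaller level
 * inside the candidate huge-page range? */
static bool smaller_mapping_exists(unsigned long gfn, enum pg_level level)
{
	(void)gfn;
	(void)level;
	return false;		/* stub, for illustration only */
}

/* Fall through to the next smaller level while a smaller private
 * mapping exists; promotion back to a large mapping can only happen
 * once the whole range has been mapped. */
static enum pg_level private_map_level(unsigned long gfn, enum pg_level req)
{
	enum pg_level level = req;

	while (level > PG_LEVEL_4K && smaller_mapping_exists(gfn, level))
		level--;
	return level;
}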
Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/mmu/tdp_mmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 69e202bd1897..5c2c7e8ea62e 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1394,7 +1394,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
 		int r;

-		if (fault->nx_huge_page_workaround_enabled)
+		if (fault->nx_huge_page_workaround_enabled ||
+		    kvm_gfn_shared_mask(vcpu->kvm))
 			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);

 		/*
-- 
2.25.1

From nobody Mon Sep 15 09:47:25 2025
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack, Xiaoyao Li
Subject: [RFC PATCH v3 02/16] KVM: TDX: Pass page level to cache flush before TDX SEAMCALL
Date: Thu, 12 Jan 2023 08:43:54 -0800
Message-Id: <76098021c9ff33c9b60ab0bff97b68a4e664fdaf.1673541292.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

tdh_mem_page_aug() will support 2MB large pages in the near future. The
cache flush issued before the SEAMCALL then also needs to cover 2MB
instead of 4KB. Introduce a helper function that flushes the cache
using the size implied by the page level, in preparation for large
pages; the size arithmetic is sketched below.
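For reference, the flush size follows KVM's KVM_HPAGE_SIZE() arithmetic
(4KB base pages, 9 address bits per paging level on x86); a minimal
standalone rendering:

#include <stdio.h>

#define PAGE_SHIFT		12
#define KVM_HPAGE_SHIFT(x)	(PAGE_SHIFT + ((x) - 1) * 9)
#define KVM_HPAGE_SIZE(x)	(1UL << KVM_HPAGE_SHIFT(x))

enum pg_level { PG_LEVEL_NONE, PG_LEVEL_4K, PG_LEVEL_2M, PG_LEVEL_1G };

int main(void)
{
	/* Prints "4096 2097152 1073741824": the number of bytes the
	 * flush has to cover at each page level. */
	printf("%lu %lu %lu\n", KVM_HPAGE_SIZE(PG_LEVEL_4K),
	       KVM_HPAGE_SIZE(PG_LEVEL_2M), KVM_HPAGE_SIZE(PG_LEVEL_1G));
	return 0;
}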
Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx_ops.h | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index 86330d0e4b22..4db983b1dc94 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -6,6 +6,7 @@

 #include

+#include
 #include
 #include
 #include
@@ -18,6 +19,11 @@

 void pr_tdx_error(u64 op, u64 error_code, const struct tdx_module_output *out);

+static inline void tdx_clflush_page(hpa_t addr, enum pg_level level)
+{
+	clflush_cache_range(__va(addr), KVM_HPAGE_SIZE(level));
+}
+
 /*
  * TDX module acquires its internal lock for resources. It doesn't spin to get
  * locks because of its restrictions of allowed execution time. Instead, it
@@ -50,21 +56,21 @@ static inline u64 seamcall_sept(u64 op, u64 rcx, u64 rdx, u64 r8, u64 r9,

 static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr)
 {
-	clflush_cache_range(__va(addr), PAGE_SIZE);
+	tdx_clflush_page(addr, PG_LEVEL_4K);
 	return __seamcall(TDH_MNG_ADDCX, addr, tdr, 0, 0, NULL);
 }

 static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, hpa_t hpa, hpa_t source,
				   struct tdx_module_output *out)
 {
-	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	tdx_clflush_page(hpa, PG_LEVEL_4K);
 	return seamcall_sept(TDH_MEM_PAGE_ADD, gpa, tdr, hpa, source, out);
 }

 static inline u64 tdh_mem_sept_add(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
				   struct tdx_module_output *out)
 {
-	clflush_cache_range(__va(page), PAGE_SIZE);
+	tdx_clflush_page(page, PG_LEVEL_4K);
 	return seamcall_sept(TDH_MEM_SEPT_ADD, gpa | level, tdr, page, 0, out);
 }

@@ -76,21 +82,21 @@ static inline u64 tdh_mem_sept_remove(hpa_t tdr, gpa_t gpa, int level,

 static inline u64 tdh_vp_addcx(hpa_t tdvpr, hpa_t addr)
 {
-	clflush_cache_range(__va(addr), PAGE_SIZE);
+	tdx_clflush_page(addr, PG_LEVEL_4K);
 	return __seamcall(TDH_VP_ADDCX, addr, tdvpr, 0, 0, NULL);
 }

 static inline u64 tdh_mem_page_relocate(hpa_t tdr, gpa_t gpa, hpa_t hpa,
					struct tdx_module_output *out)
 {
-	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	tdx_clflush_page(hpa, PG_LEVEL_4K);
 	return __seamcall(TDH_MEM_PAGE_RELOCATE, gpa, tdr, hpa, 0, out);
 }

 static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, hpa_t hpa,
				   struct tdx_module_output *out)
 {
-	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	tdx_clflush_page(hpa, PG_LEVEL_4K);
 	return seamcall_sept(TDH_MEM_PAGE_AUG, gpa, tdr, hpa, 0, out);
 }

@@ -107,13 +113,13 @@ static inline u64 tdh_mng_key_config(hpa_t tdr)

 static inline u64 tdh_mng_create(hpa_t tdr, int hkid)
 {
-	clflush_cache_range(__va(tdr), PAGE_SIZE);
+	tdx_clflush_page(tdr, PG_LEVEL_4K);
 	return __seamcall(TDH_MNG_CREATE, tdr, hkid, 0, 0, NULL);
 }

 static inline u64 tdh_vp_create(hpa_t tdr, hpa_t tdvpr)
 {
-	clflush_cache_range(__va(tdvpr), PAGE_SIZE);
+	tdx_clflush_page(tdvpr, PG_LEVEL_4K);
 	return __seamcall(TDH_VP_CREATE, tdvpr, tdr, 0, 0, NULL);
 }

-- 
2.25.1
From nobody Mon Sep 15 09:47:25 2025
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack, Xiaoyao Li
Subject: [RFC PATCH v3 03/16] KVM: TDX: Pass KVM page level to tdh_mem_page_add() and tdh_mem_page_aug()
Date: Thu, 12 Jan 2023 08:43:55 -0800

From: Xiaoyao Li

Level info is needed in tdx_clflush_page() to generate the correct
flush size. Besides, explicitly pass the level info to the SEAMCALL
instead of assuming it is zero (the 4KB level), so that the interface
works naturally when 2MB support lands. The correspondence between TDX
SEPT levels and KVM page levels is sketched below.
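The two helpers named in this series map between the zero-based TDX
SEPT level encoded into the SEAMCALL's GPA operand and KVM's one-based
enum pg_level; the correspondence is a simple off-by-one, shown here
standalone:

#include <assert.h>

enum pg_level { PG_LEVEL_NONE, PG_LEVEL_4K, PG_LEVEL_2M, PG_LEVEL_1G };

/* TDX SEPT levels are zero-based (0 = 4KB); KVM's are one-based. */
static int pg_level_to_tdx_sept_level(enum pg_level level)
{
	return level - 1;
}

static enum pg_level tdx_sept_level_to_pg_level(int tdx_level)
{
	return tdx_level + 1;
}

int main(void)
{
	assert(pg_level_to_tdx_sept_level(PG_LEVEL_4K) == 0);
	assert(pg_level_to_tdx_sept_level(PG_LEVEL_2M) == 1);
	assert(tdx_sept_level_to_pg_level(2) == PG_LEVEL_1G);
	return 0;
}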
Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx.c     |  7 ++++---
 arch/x86/kvm/vmx/tdx_ops.h | 19 ++++++++++++-------
 2 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 487ba90a0b7c..8959a019b87a 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1324,6 +1324,7 @@ static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn)
 static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
				     enum pg_level level, kvm_pfn_t pfn)
 {
+	int tdx_level = pg_level_to_tdx_sept_level(level);
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
 	hpa_t hpa = pfn_to_hpa(pfn);
 	gpa_t gpa = gfn_to_gpa(gfn);
@@ -1348,7 +1349,7 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
 		return -EINVAL;

-	err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, hpa, &out);
+	err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
 	if (err == TDX_ERROR_SEPT_BUSY) {
 		tdx_unpin(kvm, pfn);
 		return -EAGAIN;
@@ -1387,8 +1388,8 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	kvm_tdx->source_pa = INVALID_PAGE;

 	do {
-		err = tdh_mem_page_add(kvm_tdx->tdr_pa, gpa, hpa, source_pa,
-				       &out);
+		err = tdh_mem_page_add(kvm_tdx->tdr_pa, gpa, tdx_level, hpa,
+				       source_pa, &out);
 		/*
 		 * This path is executed during populating initial guest memory
 		 * image. i.e. before running any vcpu.  Race is rare.
diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index 4db983b1dc94..4b03acce5003 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -19,6 +19,11 @@

 void pr_tdx_error(u64 op, u64 error_code, const struct tdx_module_output *out);

+static inline enum pg_level tdx_sept_level_to_pg_level(int tdx_level)
+{
+	return tdx_level + 1;
+}
+
 static inline void tdx_clflush_page(hpa_t addr, enum pg_level level)
 {
 	clflush_cache_range(__va(addr), KVM_HPAGE_SIZE(level));
@@ -60,11 +65,11 @@ static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr)
 	return __seamcall(TDH_MNG_ADDCX, addr, tdr, 0, 0, NULL);
 }

-static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, hpa_t hpa, hpa_t source,
-				   struct tdx_module_output *out)
+static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, int level, hpa_t hpa,
+				   hpa_t source, struct tdx_module_output *out)
 {
-	tdx_clflush_page(hpa, PG_LEVEL_4K);
-	return seamcall_sept(TDH_MEM_PAGE_ADD, gpa, tdr, hpa, source, out);
+	tdx_clflush_page(hpa, tdx_sept_level_to_pg_level(level));
+	return seamcall_sept(TDH_MEM_PAGE_ADD, gpa | level, tdr, hpa, source, out);
 }

 static inline u64 tdh_mem_sept_add(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
@@ -93,11 +98,11 @@ static inline u64 tdh_mem_page_relocate(hpa_t tdr, gpa_t gpa, hpa_t hpa,
 	return __seamcall(TDH_MEM_PAGE_RELOCATE, gpa, tdr, hpa, 0, out);
 }

-static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, hpa_t hpa,
+static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, int level, hpa_t hpa,
				   struct tdx_module_output *out)
 {
-	tdx_clflush_page(hpa, PG_LEVEL_4K);
-	return seamcall_sept(TDH_MEM_PAGE_AUG, gpa, tdr, hpa, 0, out);
+	tdx_clflush_page(hpa, tdx_sept_level_to_pg_level(level));
+	return seamcall_sept(TDH_MEM_PAGE_AUG, gpa | level, tdr, hpa, 0, out);
 }

 static inline u64 tdh_mem_range_block(hpa_t tdr, gpa_t gpa, int level,
-- 
2.25.1
From nobody Mon Sep 15 09:47:25 2025
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack, Xiaoyao Li
Subject: [RFC PATCH v3 04/16] KVM: TDX: Pass size to tdx_measure_page()
Date: Thu, 12 Jan 2023 08:43:56 -0800

From: Xiaoyao Li

Extend tdx_measure_page() to take the size info so that it can measure
a large page as well; the chunk arithmetic is sketched below.
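A quick standalone check of what the size parameter implies, assuming
the 256-byte TDH.MR.EXTEND chunk that TDX_EXTENDMR_CHUNKSIZE denotes in
the TDX KVM code:

#include <stdio.h>

#define TDX_EXTENDMR_CHUNKSIZE	256UL	/* assumed chunk size */

int main(void)
{
	unsigned long sz_4k = 1UL << 12, sz_2m = 1UL << 21;

	/* Prints "16 8192": TDH.MR.EXTEND calls needed to measure a
	 * 4KB page versus a 2MB page. */
	printf("%lu %lu\n", sz_4k / TDX_EXTENDMR_CHUNKSIZE,
	       sz_2m / TDX_EXTENDMR_CHUNKSIZE);
	return 0;
}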
Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 8959a019b87a..1bc07dfe765a 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1299,13 +1299,15 @@ void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
 }

-static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa)
+static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa, int size)
 {
 	struct tdx_module_output out;
 	u64 err;
 	int i;

-	for (i = 0; i < PAGE_SIZE; i += TDX_EXTENDMR_CHUNKSIZE) {
+	WARN_ON_ONCE(size % TDX_EXTENDMR_CHUNKSIZE);
+
+	for (i = 0; i < size; i += TDX_EXTENDMR_CHUNKSIZE) {
 		err = tdh_mr_extend(kvm_tdx->tdr_pa, gpa + i, &out);
 		if (KVM_BUG_ON(err, &kvm_tdx->kvm)) {
 			pr_tdx_error(TDH_MR_EXTEND, err, &out);
@@ -1400,7 +1402,7 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 		tdx_unpin(kvm, pfn);
 		return -EIO;
 	} else if (measure)
-		tdx_measure_page(kvm_tdx, gpa);
+		tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level));

 	return 0;
 }
-- 
2.25.1
From nobody Mon Sep 15 09:47:25 2025
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack, Xiaoyao Li
Subject: [RFC PATCH v3 05/16] KVM: TDX: Pass size to reclaim_page()
Date: Thu, 12 Jan 2023 08:43:57 -0800

From: Xiaoyao Li

A 2MB large page can be tdh_mem_page_aug()'ed into a TD directly. In
that case, the page also needs to be reclaimed and cleared at 2MB size.
A sketch of the size-aware clearing loop follows.
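A minimal userspace sketch of the size-aware clearing loop (memset()
stands in for the MOVDIR64B primitive; the 64-byte stride matches the
one cache line each MOVDIR64B writes):

#include <stddef.h>
#include <string.h>

#define CLEAR_STRIDE	64UL	/* one cache line per store */

/* Clear 'size' bytes (a multiple of the page size) one cache line at
 * a time, mirroring the shape of the loop in tdx_clear_page(). */
static void clear_td_range(char *page, size_t size)
{
	size_t i;

	for (i = 0; i < size; i += CLEAR_STRIDE)
		memset(page + i, 0, CLEAR_STRIDE);
}

int main(void)
{
	static char buf[2 * 4096];

	clear_td_range(buf, sizeof(buf));
	return 0;
}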
Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 1bc07dfe765a..8bc8fd7f28eb 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -184,14 +184,17 @@ void tdx_hardware_disable(void)
 		tdx_disassociate_vp(&tdx->vcpu);
 }

-static void tdx_clear_page(unsigned long page_pa)
+static void tdx_clear_page(unsigned long page_pa, int size)
 {
 	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
 	void *page = __va(page_pa);
 	unsigned long i;

+	WARN_ON_ONCE(size % PAGE_SIZE);
+
 	if (!static_cpu_has(X86_FEATURE_MOVDIR64B)) {
-		clear_page(page);
+		for (i = 0; i < size; i += PAGE_SIZE)
+			clear_page(page + i);
 		return;
 	}

@@ -205,7 +208,7 @@ static void tdx_clear_page(unsigned long page_pa)
 	 * The cache line could be poisoned (even without MKTME-i), clear the
 	 * poison bit.
 	 */
-	for (i = 0; i < PAGE_SIZE; i += 64)
+	for (i = 0; i < size; i += 64)
 		movdir64b(page + i, zero_page);
 	/*
 	 * MOVDIR64B store uses WC buffer.  Prevent following memory reads
@@ -214,7 +217,8 @@ static void tdx_clear_page(unsigned long page_pa)
 	__mb();
 }

-static int tdx_reclaim_page(hpa_t pa, bool do_wb, u16 hkid)
+static int tdx_reclaim_page(hpa_t pa, enum pg_level level,
+			    bool do_wb, u16 hkid)
 {
 	struct tdx_module_output out;
 	u64 err;
@@ -232,8 +236,10 @@ static int tdx_reclaim_page(hpa_t pa, bool do_wb, u16 hkid)
 		pr_tdx_error(TDH_PHYMEM_PAGE_RECLAIM, err, &out);
 		return -EIO;
 	}
+	/* out.r8 == tdx sept page level */
+	WARN_ON_ONCE(out.r8 != pg_level_to_tdx_sept_level(level));

-	if (do_wb) {
+	if (do_wb && level == PG_LEVEL_4K) {
 		/*
 		 * Only TDR page gets into this path.  No contention is expected
 		 * because of the last page of TD.
@@ -245,7 +251,7 @@ static int tdx_reclaim_page(hpa_t pa, bool do_wb, u16 hkid)
 		}
 	}

-	tdx_clear_page(pa);
+	tdx_clear_page(pa, KVM_HPAGE_SIZE(level));
 	return 0;
 }

@@ -259,7 +265,7 @@ static void tdx_reclaim_td_page(unsigned long td_page_pa)
 	 * was already flushed by TDH.PHYMEM.CACHE.WB before here, So
 	 * cache doesn't need to be flushed again.
 	 */
-	if (WARN_ON(tdx_reclaim_page(td_page_pa, false, 0)))
+	if (WARN_ON(tdx_reclaim_page(td_page_pa, PG_LEVEL_4K, false, 0)))
 		/* If reclaim failed, leak the page. */
 		return;
 	free_page((unsigned long)__va(td_page_pa));
@@ -436,7 +442,7 @@ void tdx_vm_free(struct kvm *kvm)
 	 * while operating on TD (Especially reclaiming TDCS).  Cache flush with
 	 * TDX global HKID is needed.
 	 */
-	if (tdx_reclaim_page(kvm_tdx->tdr_pa, true, tdx_global_keyid))
+	if (tdx_reclaim_page(kvm_tdx->tdr_pa, PG_LEVEL_4K, true, tdx_global_keyid))
 		return;

 	free_page((unsigned long)__va(kvm_tdx->tdr_pa));
@@ -1427,7 +1433,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 	 * The HKID assigned to this TD was already freed and cache
 	 * was already flushed.  We don't have to flush again.
 	 */
-	err = tdx_reclaim_page(hpa, false, 0);
+	err = tdx_reclaim_page(hpa, level, false, 0);
 	if (KVM_BUG_ON(err, kvm))
 		return -EIO;
 	tdx_unpin(kvm, pfn);
@@ -1566,7 +1572,7 @@ static int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn,
 	 * already flushed.  We don't have to flush again.
 	 */
 	if (!is_hkid_assigned(kvm_tdx))
-		return tdx_reclaim_page(__pa(private_spt), false, 0);
+		return tdx_reclaim_page(__pa(private_spt), PG_LEVEL_4K, false, 0);

 	/*
 	 * free_private_spt() is (obviously) called when a shadow page is being
-- 
2.25.1

From nobody Mon Sep 15 09:47:25 2025
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack, Xiaoyao Li
Subject: [RFC PATCH v3 06/16] KVM: TDX: Update tdx_sept_{set,drop}_private_spte() to support large page
Date: Thu, 12 Jan 2023 08:43:58 -0800
Message-Id: <591f477bd44f0cfbc7475194be32e9f771c3f2f8.1673541292.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

Allow AUG and REMOVE to operate at large page level for TDX pages.
Pages stay pinned per 4KB page, so a large mapping is pinned and
unpinned as its constituent 4KB pages; the arithmetic is sketched
below.
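A standalone rendering of KVM's KVM_PAGES_PER_HPAGE() arithmetic, which
gives the number of 4KB get_page()/put_page() iterations per mapping
level:

#include <stdio.h>

#define PAGE_SHIFT		12
#define PAGE_SIZE		(1UL << PAGE_SHIFT)
#define KVM_HPAGE_SHIFT(x)	(PAGE_SHIFT + ((x) - 1) * 9)
#define KVM_HPAGE_SIZE(x)	(1UL << KVM_HPAGE_SHIFT(x))
#define KVM_PAGES_PER_HPAGE(x)	(KVM_HPAGE_SIZE(x) / PAGE_SIZE)

int main(void)
{
	/* Prints "1 512 262144": 4KB pages per 4KB, 2MB and 1GB
	 * mapping respectively. */
	printf("%lu %lu %lu\n", KVM_PAGES_PER_HPAGE(1),
	       KVM_PAGES_PER_HPAGE(2), KVM_PAGES_PER_HPAGE(3));
	return 0;
}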
Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx.c | 63 +++++++++++++++++++++---------------------
 1 file changed, 32 insertions(+), 31 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 8bc8fd7f28eb..d7be634edf3c 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1322,11 +1322,12 @@ static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa, int size)
 	}
 }

-static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn)
+static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn, int level)
 {
-	struct page *page = pfn_to_page(pfn);
+	int i;

-	put_page(page);
+	for (i = 0; i < KVM_PAGES_PER_HPAGE(level); i++)
+		put_page(pfn_to_page(pfn + i));
 }

 static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
@@ -1340,6 +1341,7 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	hpa_t source_pa;
 	bool measure;
 	u64 err;
+	int i;

 	/*
 	 * Because restricted mem doesn't support page migration with
@@ -1349,22 +1351,19 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	 * TODO: Once restricted mem introduces callback on page migration,
 	 * implement it and remove get_page/put_page().
 	 */
-	get_page(pfn_to_page(pfn));
+	for (i = 0; i < KVM_PAGES_PER_HPAGE(level); i++)
+		get_page(pfn_to_page(pfn + i));

 	/* Build-time faults are induced and handled via TDH_MEM_PAGE_ADD. */
 	if (likely(is_td_finalized(kvm_tdx))) {
-		/* TODO: handle large pages. */
-		if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
-			return -EINVAL;
-
 		err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
 		if (err == TDX_ERROR_SEPT_BUSY) {
-			tdx_unpin(kvm, pfn);
+			tdx_unpin(kvm, pfn, level);
 			return -EAGAIN;
 		}
 		if (KVM_BUG_ON(err, kvm)) {
 			pr_tdx_error(TDH_MEM_PAGE_AUG, err, &out);
-			tdx_unpin(kvm, pfn);
+			tdx_unpin(kvm, pfn, level);
 			return -EIO;
 		}
 		return 0;
@@ -1387,7 +1386,7 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	 * always uses vcpu 0's page table and protected by vcpu->mutex).
 	 */
 	if (KVM_BUG_ON(kvm_tdx->source_pa == INVALID_PAGE, kvm)) {
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return -EINVAL;
 	}

@@ -1405,7 +1404,7 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	} while (err == TDX_ERROR_SEPT_BUSY);
 	if (KVM_BUG_ON(err, kvm)) {
 		pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out);
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return -EIO;
 	} else if (measure)
 		tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level));
@@ -1422,11 +1421,9 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 	gpa_t gpa = gfn_to_gpa(gfn);
 	hpa_t hpa = pfn_to_hpa(pfn);
 	hpa_t hpa_with_hkid;
+	int r = 0;
 	u64 err;
-
-	/* TODO: handle large pages. */
-	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
-		return -EINVAL;
+	int i;

 	if (!is_hkid_assigned(kvm_tdx)) {
 		/*
@@ -1436,7 +1433,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 		err = tdx_reclaim_page(hpa, level, false, 0);
 		if (KVM_BUG_ON(err, kvm))
 			return -EIO;
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return 0;
 	}

@@ -1453,21 +1450,25 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 		return -EIO;
 	}

-	hpa_with_hkid = set_hkid_to_hpa(hpa, (u16)kvm_tdx->hkid);
-	do {
-		/*
-		 * TDX_OPERAND_BUSY can happen on locking PAMT entry.  Because
-		 * this page was removed above, other thread shouldn't be
-		 * repeatedly operating on this page.  Just retry loop.
-		 */
-		err = tdh_phymem_page_wbinvd(hpa_with_hkid);
-	} while (err == (TDX_OPERAND_BUSY | TDX_OPERAND_ID_RCX));
-	if (KVM_BUG_ON(err, kvm)) {
-		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
-		return -EIO;
+	for (i = 0; i < KVM_PAGES_PER_HPAGE(level); i++) {
+		hpa_with_hkid = set_hkid_to_hpa(hpa, (u16)kvm_tdx->hkid);
+		do {
+			/*
+			 * TDX_OPERAND_BUSY can happen on locking PAMT entry.
+			 * Because this page was removed above, other thread
+			 * shouldn't be repeatedly operating on this page.
+			 * Simple retry should work.
+			 */
+			err = tdh_phymem_page_wbinvd(hpa_with_hkid);
+		} while (err == (TDX_OPERAND_BUSY | TDX_OPERAND_ID_RCX));
+		if (KVM_BUG_ON(err, kvm)) {
+			pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
+			r = -EIO;
+		} else
+			tdx_unpin(kvm, pfn + i, PG_LEVEL_4K);
+		hpa += PAGE_SIZE;
 	}
-	tdx_unpin(kvm, pfn);
-	return 0;
+	return r;
 }

 static int tdx_sept_link_private_spt(struct kvm *kvm, gfn_t gfn,
-- 
2.25.1
From nobody Mon Sep 15 09:47:25 2025
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack, Xiaoyao Li
Subject: [RFC PATCH v3 07/16] KVM: MMU: Introduce level info in PFERR code
Date: Thu, 12 Jan 2023 08:43:59 -0800
Message-Id: <191d6612356e9ab9ac7b05abc4d1ab9d1c09e522.1673541292.git.isaku.yamahata@intel.com>
From: Xiaoyao Li

For TDX, an EPT violation can happen when the guest issues
TDG.MEM.PAGE.ACCEPT, and TDG.MEM.PAGE.ACCEPT carries the page level at
which the TD guest wants to accept the page.

1. KVM can map the page at 4KB while the TD guest wants to accept it at
   2MB. The TD guest will get TDX_PAGE_SIZE_MISMATCH and should retry
   the accept at 4KB size.

2. KVM can map the page at 2MB while the TD guest wants to accept it at
   4KB. KVM needs to honor the request, because a) there is no way to
   tell the guest that KVM mapped it at 2MB size, and b) the guest
   accepts at 4KB size when it knows some other 4KB page in the same
   2MB range will be used as a shared page.

For case 2, the desired page level needs to be passed to the MMU's page
fault handler. Use bits 29:31 of the KVM PF error code for this
purpose; an encode/decode sketch follows.
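A standalone round-trip of the encoding this patch defines (the macro
values match the diff below; GENMASK_ULL() is expanded by hand):

#include <assert.h>
#include <stdint.h>

#define PFERR_LEVEL_START_BIT	29
#define PFERR_LEVEL_END_BIT	31
/* GENMASK_ULL(31, 29): bits 29..31 set */
#define PFERR_LEVEL_MASK \
	(((1ULL << (PFERR_LEVEL_END_BIT - PFERR_LEVEL_START_BIT + 1)) - 1) \
	 << PFERR_LEVEL_START_BIT)

enum pg_level { PG_LEVEL_NONE, PG_LEVEL_4K, PG_LEVEL_2M, PG_LEVEL_1G };

static uint64_t pferr_set_level(uint64_t error_code, enum pg_level level)
{
	return error_code | (((uint64_t)level << PFERR_LEVEL_START_BIT) &
			     PFERR_LEVEL_MASK);
}

static enum pg_level pferr_get_level(uint64_t error_code)
{
	return (error_code & PFERR_LEVEL_MASK) >> PFERR_LEVEL_START_BIT;
}

int main(void)
{
	uint64_t ec = pferr_set_level(0, PG_LEVEL_2M);

	assert(pferr_get_level(ec) == PG_LEVEL_2M);	/* round-trips */
	assert(pferr_get_level(0) == PG_LEVEL_NONE);	/* 0 means unset */
	return 0;
}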
Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/mmu/mmu.c          |  5 +++++
 arch/x86/kvm/vmx/common.h       |  6 +++++-
 arch/x86/kvm/vmx/tdx.c          | 15 ++++++++++++++-
 arch/x86/kvm/vmx/tdx.h          | 19 +++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c          |  2 +-
 6 files changed, 47 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 75e53b2bb4af..92d935eec2f5 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -257,6 +257,8 @@ enum x86_intercept_stage;
 #define PFERR_FETCH_BIT 4
 #define PFERR_PK_BIT 5
 #define PFERR_SGX_BIT 15
+#define PFERR_LEVEL_START_BIT 29
+#define PFERR_LEVEL_END_BIT 31
 #define PFERR_GUEST_FINAL_BIT 32
 #define PFERR_GUEST_PAGE_BIT 33
 #define PFERR_IMPLICIT_ACCESS_BIT 48
@@ -268,6 +270,7 @@ enum x86_intercept_stage;
 #define PFERR_FETCH_MASK BIT(PFERR_FETCH_BIT)
 #define PFERR_PK_MASK BIT(PFERR_PK_BIT)
 #define PFERR_SGX_MASK BIT(PFERR_SGX_BIT)
+#define PFERR_LEVEL_MASK GENMASK_ULL(PFERR_LEVEL_END_BIT, PFERR_LEVEL_START_BIT)
 #define PFERR_GUEST_FINAL_MASK BIT_ULL(PFERR_GUEST_FINAL_BIT)
 #define PFERR_GUEST_PAGE_MASK BIT_ULL(PFERR_GUEST_PAGE_BIT)
 #define PFERR_IMPLICIT_ACCESS BIT_ULL(PFERR_IMPLICIT_ACCESS_BIT)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e9229e41eab2..10d7c46b3bf5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4503,6 +4503,11 @@ EXPORT_SYMBOL_GPL(kvm_handle_page_fault);

 int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
+	u8 err_level = (fault->error_code & PFERR_LEVEL_MASK) >> PFERR_LEVEL_START_BIT;
+
+	if (err_level)
+		fault->max_level = min(fault->max_level, err_level);
+
 	/*
 	 * If the guest's MTRRs may be used to compute the "real" memtype,
 	 * restrict the mapping level to ensure KVM uses a consistent memtype
diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index 65abda49debe..995cf22f47cf 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -78,7 +78,8 @@ static inline void vmx_handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu,
 }

 static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
-					     unsigned long exit_qualification)
+					     unsigned long exit_qualification,
+					     int err_page_level)
 {
 	u64 error_code;

@@ -98,6 +99,9 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
 	error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) != 0 ?
		      PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;

+	if (err_page_level > 0)
+		error_code |= (err_page_level << PFERR_LEVEL_START_BIT) & PFERR_LEVEL_MASK;
+
 	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
 }

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index d7be634edf3c..66a1f8534461 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1634,7 +1634,20 @@ void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,

 static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
 {
+	union tdx_ext_exit_qualification ext_exit_qual;
 	unsigned long exit_qual;
+	int err_page_level = 0;
+
+	ext_exit_qual.full = tdexit_ext_exit_qual(vcpu);
+
+	if (ext_exit_qual.type >= NUM_EXT_EXIT_QUAL) {
+		pr_err("EPT violation at gpa 0x%lx, with invalid ext exit qualification type 0x%x\n",
+		       tdexit_gpa(vcpu), ext_exit_qual.type);
+		kvm_vm_bugged(vcpu->kvm);
+		return 0;
+	} else if (ext_exit_qual.type == EXT_EXIT_QUAL_ACCEPT) {
+		err_page_level = ext_exit_qual.req_sept_level + 1;
+	}

 	if (kvm_is_private_gpa(vcpu->kvm, tdexit_gpa(vcpu))) {
 		/*
@@ -1661,7 +1674,7 @@ static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
 	}

 	trace_kvm_page_fault(vcpu, tdexit_gpa(vcpu), exit_qual);
-	return __vmx_handle_ept_violation(vcpu, tdexit_gpa(vcpu), exit_qual);
+	return __vmx_handle_ept_violation(vcpu, tdexit_gpa(vcpu), exit_qual, err_page_level);
 }

 static int tdx_handle_ept_misconfig(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 01e97d6886d5..a647cc36fcee 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -57,6 +57,25 @@ union tdx_exit_reason {
 	u64 full;
 };

+union tdx_ext_exit_qualification {
+	struct {
+		u64 type : 4;
+		u64 reserved0 : 28;
+		u64 req_sept_level : 3;
+		u64 err_sept_level : 3;
+		u64 err_sept_state : 8;
+		u64 err_sept_is_leaf : 1;
+		u64 reserved1 : 17;
+	};
+	u64 full;
+};
+
+enum tdx_ext_exit_qualification_type {
+	EXT_EXIT_QUAL_NONE,
+	EXT_EXIT_QUAL_ACCEPT,
+	NUM_EXT_EXIT_QUAL,
+};
+
 struct vcpu_tdx {
 	struct kvm_vcpu vcpu;

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 7c8522628dd3..16ef0f9844c7 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5791,7 +5791,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 	if (unlikely(allow_smaller_maxphyaddr && kvm_vcpu_is_illegal_gpa(vcpu, gpa)))
 		return kvm_emulate_instruction(vcpu, 0);

-	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification);
+	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, 0);
 }

 static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
-- 
2.25.1
From nobody Mon Sep 15 09:47:25 2025
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack, Xiaoyao Li
Subject: [RFC PATCH v3 08/16] KVM: TDX: Pin pages via get_page() right before ADD/AUG'ed to TDs
Date: Thu, 12 Jan 2023 08:44:00 -0800
Message-Id: <49633539246692ba834c812952dcaf8fecc7600b.1673541292.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

At kvm_faultin_pfn() time, KVM does not yet have the info about which
page level the gfn will be mapped at, hence it doesn't know whether to
pin a 4KB page or a 2MB page. Move the pinning logic for guest private
pages to right before TDH_MEM_PAGE_ADD/AUG, since at that point the
page level is known.
Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 66a1f8534461..177f98f7c9c2 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1322,7 +1322,8 @@ static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa, int size)
 	}
 }

-static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn, int level)
+static void tdx_unpin(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
+		      enum pg_level level)
 {
 	int i;

@@ -1358,12 +1359,12 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	if (likely(is_td_finalized(kvm_tdx))) {
 		err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
 		if (err == TDX_ERROR_SEPT_BUSY) {
-			tdx_unpin(kvm, pfn, level);
+			tdx_unpin(kvm, gfn, pfn, level);
 			return -EAGAIN;
 		}
 		if (KVM_BUG_ON(err, kvm)) {
 			pr_tdx_error(TDH_MEM_PAGE_AUG, err, &out);
-			tdx_unpin(kvm, pfn, level);
+			tdx_unpin(kvm, gfn, pfn, level);
 			return -EIO;
 		}
 		return 0;
@@ -1386,7 +1387,7 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	 * always uses vcpu 0's page table and protected by vcpu->mutex).
 	 */
 	if (KVM_BUG_ON(kvm_tdx->source_pa == INVALID_PAGE, kvm)) {
-		tdx_unpin(kvm, pfn, level);
+		tdx_unpin(kvm, gfn, pfn, level);
 		return -EINVAL;
 	}

@@ -1404,7 +1405,7 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	} while (err == TDX_ERROR_SEPT_BUSY);
 	if (KVM_BUG_ON(err, kvm)) {
 		pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out);
-		tdx_unpin(kvm, pfn, level);
+		tdx_unpin(kvm, gfn, pfn, level);
 		return -EIO;
 	} else if (measure)
 		tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level));
@@ -1433,7 +1434,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 		err = tdx_reclaim_page(hpa, level, false, 0);
 		if (KVM_BUG_ON(err, kvm))
 			return -EIO;
-		tdx_unpin(kvm, pfn, level);
+		tdx_unpin(kvm, gfn, pfn, level);
 		return 0;
 	}

@@ -1465,7 +1466,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 			pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
 			r = -EIO;
 		} else
-			tdx_unpin(kvm, pfn + i, PG_LEVEL_4K);
+			tdx_unpin(kvm, gfn + i, pfn + i, PG_LEVEL_4K);
 		hpa += PAGE_SIZE;
 	}
 	return r;
-- 
2.25.1
From nobody Mon Sep 15 09:47:25 2025
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack, Xiaoyao Li
Subject: [RFC PATCH v3 09/16] KVM: TDX: Pass desired page level in err code for page fault handler
Date: Thu, 12 Jan 2023 08:44:01 -0800
Message-Id: <630fb8898357d2cbb01e47ca1dff702653afeef4.1673541292.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

For TDX, an EPT violation can happen when the guest issues
TDG.MEM.PAGE.ACCEPT, and TDG.MEM.PAGE.ACCEPT carries the page level at
which the TD guest wants to accept the page.

1. KVM can map the page at 4KB while the TD guest wants to accept it at
   2MB. The TD guest will get TDX_PAGE_SIZE_MISMATCH and should retry
   the accept at 4KB size.

2. KVM can map the page at 2MB while the TD guest wants to accept it at
   4KB. KVM needs to honor the request, because a) there is no way to
   tell the guest that KVM mapped it at 2MB size, and b) the guest
   accepts at 4KB size when it knows some other 4KB page in the same
   2MB range will be used as a shared page.

For case 2, the desired page level needs to be passed to the MMU's page
fault handler. Use bits 29:31 of the KVM PF error code for this
purpose. The extended exit qualification that carries the requested
accept level is decoded in the sketch below.
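A standalone decode using the bitfield layout this patch moves into
tdx_arch.h (uint64_t stands in for the kernel's u64):

#include <assert.h>
#include <stdint.h>

union tdx_ext_exit_qualification {
	struct {
		uint64_t type : 4;
		uint64_t reserved0 : 28;
		uint64_t req_sept_level : 3;
		uint64_t err_sept_level : 3;
		uint64_t err_sept_state : 8;
		uint64_t err_sept_is_leaf : 1;
		uint64_t reserved1 : 17;
	};
	uint64_t full;
};

int main(void)
{
	union tdx_ext_exit_qualification q = { .full = 0 };

	q.type = 1;		/* EXT_EXIT_QUAL_ACCEPT */
	q.req_sept_level = 1;	/* zero-based TDX level: 1 == 2MB */

	/* KVM turns the zero-based SEPT level into a one-based
	 * pg_level, i.e. 2 == PG_LEVEL_2M here. */
	assert(q.req_sept_level + 1 == 2);
	return 0;
}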
Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/vmx/common.h       |  2 +-
 arch/x86/kvm/vmx/tdx.c          |  7 ++++++-
 arch/x86/kvm/vmx/tdx.h          | 19 -------------------
 arch/x86/kvm/vmx/tdx_arch.h     | 19 +++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c          |  2 +-
 6 files changed, 29 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 92d935eec2f5..9687d8c8031c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -279,6 +279,8 @@ enum x86_intercept_stage;
				 PFERR_WRITE_MASK |		\
				 PFERR_PRESENT_MASK)

+#define PFERR_LEVEL(err_code) (((err_code) & PFERR_LEVEL_MASK) >> PFERR_LEVEL_START_BIT)
+
 /* apic attention bits */
 #define KVM_APIC_CHECK_VAPIC 0
 /*
diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index 995cf22f47cf..69464ae0f7e8 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -99,7 +99,7 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
 	error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) != 0 ?
		      PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;

-	if (err_page_level > 0)
+	if (err_page_level > PG_LEVEL_NONE)
 		error_code |= (err_page_level << PFERR_LEVEL_START_BIT) & PFERR_LEVEL_MASK;

 	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 177f98f7c9c2..bdfcbd0db531 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -2360,6 +2360,7 @@ static int tdx_init_mem_region(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
 	struct kvm_tdx_init_mem_region region;
 	struct kvm_vcpu *vcpu;
 	struct page *page;
+	u64 error_code;
 	kvm_pfn_t pfn;
 	int idx, ret = 0;

@@ -2412,7 +2413,11 @@ static int tdx_init_mem_region(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
 		kvm_tdx->source_pa = pfn_to_hpa(page_to_pfn(page)) |
				     (cmd->flags & KVM_TDX_MEASURE_MEMORY_REGION);

-		pfn = kvm_mmu_map_tdp_page(vcpu, region.gpa, TDX_SEPT_PFERR,
+		/* TODO: large page support. */
+		error_code = TDX_SEPT_PFERR;
+		error_code |= (PG_LEVEL_4K << PFERR_LEVEL_START_BIT) &
+			PFERR_LEVEL_MASK;
+		pfn = kvm_mmu_map_tdp_page(vcpu, region.gpa, error_code,
					   PG_LEVEL_4K);
 		if (is_error_noslot_pfn(pfn) || kvm->vm_bugged)
 			ret = -EFAULT;
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index a647cc36fcee..01e97d6886d5 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -57,25 +57,6 @@ union tdx_exit_reason {
 	u64 full;
 };

-union tdx_ext_exit_qualification {
-	struct {
-		u64 type : 4;
-		u64 reserved0 : 28;
-		u64 req_sept_level : 3;
-		u64 err_sept_level : 3;
-		u64 err_sept_state : 8;
-		u64 err_sept_is_leaf : 1;
-		u64 reserved1 : 17;
-	};
-	u64 full;
-};
-
-enum tdx_ext_exit_qualification_type {
-	EXT_EXIT_QUAL_NONE,
-	EXT_EXIT_QUAL_ACCEPT,
-	NUM_EXT_EXIT_QUAL,
-};
-
 struct vcpu_tdx {
 	struct kvm_vcpu vcpu;

diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
index 18604734fb14..471a9f61fc81 100644
--- a/arch/x86/kvm/vmx/tdx_arch.h
+++ b/arch/x86/kvm/vmx/tdx_arch.h
@@ -163,4 +163,23 @@ struct td_params {
 #define TDX_MIN_TSC_FREQUENCY_KHZ		(100 * 1000)
 #define TDX_MAX_TSC_FREQUENCY_KHZ		(10 * 1000 * 1000)

+union tdx_ext_exit_qualification {
+	struct {
+		u64 type : 4;
+		u64 reserved0 : 28;
+		u64 req_sept_level : 3;
+		u64 err_sept_level : 3;
+		u64 err_sept_state : 8;
+		u64 err_sept_is_leaf : 1;
+		u64 reserved1 : 17;
+	};
+	u64 full;
+};
+
+enum tdx_ext_exit_qualification_type {
+	EXT_EXIT_QUAL_NONE = 0,
+	EXT_EXIT_QUAL_ACCEPT,
+	NUM_EXT_EXIT_QUAL,
+};
+
 #endif /* __KVM_X86_TDX_ARCH_H */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 16ef0f9844c7..b0f16c6f735b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5791,7 +5791,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 	if (unlikely(allow_smaller_maxphyaddr && kvm_vcpu_is_illegal_gpa(vcpu, gpa)))
 		return kvm_emulate_instruction(vcpu, 0);

-	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, 0);
+	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, PG_LEVEL_NONE);
 }

 static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
-- 
2.25.1
From nobody Mon Sep 15 09:47:25 2025
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack
Subject: [RFC PATCH v3 10/16] KVM: x86/tdp_mmu: Allocate private page table for large page split
Date: Thu, 12 Jan 2023 08:44:02 -0800

From: Isaku Yamahata

Make __tdp_mmu_alloc_sp_for_split() aware of the private page table.

Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/mmu/mmu_internal.h | 14 ++++++++++++++
 arch/x86/kvm/mmu/tdp_mmu.c      |  8 ++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 0ed802dc8627..e51fc5a5cabc 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -201,6 +201,15 @@ static inline void kvm_mmu_alloc_private_spt(struct kvm_vcpu *vcpu, struct kvm_m
 	}
 }

+static inline int kvm_alloc_private_spt_for_split(struct kvm_mmu_page *sp, gfp_t gfp)
+{
+	gfp &= ~__GFP_ZERO;
+	sp->private_spt = (void *)__get_free_page(gfp);
+	if (!sp->private_spt)
+		return -ENOMEM;
+	return 0;
+}
+
 static inline void kvm_mmu_free_private_spt(struct kvm_mmu_page *sp)
 {
 	if (sp->private_spt)
@@ -229,6 +238,11 @@ static inline void kvm_mmu_alloc_private_spt(struct kvm_vcpu *vcpu, struct kvm_m
 {
 }

+static inline int kvm_alloc_private_spt_for_split(struct kvm_mmu_page *sp, gfp_t gfp)
+{
+	return -ENOMEM;
+}
+
 static inline void kvm_mmu_free_private_spt(struct kvm_mmu_page *sp)
 {
 }
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 5c2c7e8ea62e..1a71aad62bd3 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1690,8 +1690,12 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp, union kvm_mm

 	sp->role = role;
 	sp->spt = (void *)__get_free_page(gfp);
-	/* TODO: large page support for private GPA. */
-	WARN_ON_ONCE(kvm_mmu_page_role_is_private(role));
+	if (kvm_mmu_page_role_is_private(role)) {
+		if (kvm_alloc_private_spt_for_split(sp, gfp)) {
+			free_page((unsigned long)sp->spt);
+			sp->spt = NULL;
+		}
+	}
 	if (!sp->spt) {
 		kmem_cache_free(mmu_page_header_cache, sp);
 		return NULL;
-- 
2.25.1
*/ - WARN_ON_ONCE(kvm_mmu_page_role_is_private(role)); + if (kvm_mmu_page_role_is_private(role)) { + if (kvm_alloc_private_spt_for_split(sp, gfp)) { + free_page((unsigned long)sp->spt); + sp->spt =3D NULL; + } + } if (!sp->spt) { kmem_cache_free(mmu_page_header_cache, sp); return NULL; --=20 2.25.1 From nobody Mon Sep 15 09:47:25 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 07802C61DB3 for ; Thu, 12 Jan 2023 17:15:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240349AbjALRPM (ORCPT ); Thu, 12 Jan 2023 12:15:12 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44396 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240599AbjALROO (ORCPT ); Thu, 12 Jan 2023 12:14:14 -0500 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9C4C78060D; Thu, 12 Jan 2023 08:49:21 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1673542161; x=1705078161; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=HuHoeUekyFSAUzPPvkzdyR3yLNoy8R+YlxBP1c0vyVQ=; b=ebi0eAwYjHXO/n1sVHug6cQXxkf1UEYfjsCADR2o9TU9oGA7ikecJfO4 aiG5yKxW0RYWJPvBpgcwZOtAcFzNJFxQ/8O+SeqxQzh4C1FoyQ2qAG/VI Mns/fVkqSK4MO5Wd8HgcdbVXOGhDD7UoxQ007v4Hha8RcHSi4sAGMnxwF ngJdvdtYuHqbzF/eWrv1APjn9MxGPoV9CItzpqYXFq7mWQOkGcD0VPAg3 /m1pCGOg8LaCbMnYwFxZeoBJaLshRowcIoZeZO6hrN3F97xohX4jt+o7f 9XVraTm2a9JLEbAh+8AhjlOsyGmSaK9qIsmhjDHr/jNk6G/RbnQH/jFTM Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10588"; a="323816326" X-IronPort-AV: E=Sophos;i="5.97,211,1669104000"; d="scan'208";a="323816326" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Jan 2023 08:44:18 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10588"; a="986658355" X-IronPort-AV: E=Sophos;i="5.97,211,1669104000"; d="scan'208";a="986658355" Received: from ls.sc.intel.com (HELO localhost) ([143.183.96.54]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Jan 2023 08:44:17 -0800 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Xiaoyao Li Subject: [RFC PATCH v3 11/16] KVM: x86/tdp_mmu: Split the large page when zap leaf Date: Thu, 12 Jan 2023 08:44:03 -0800 Message-Id: <1fccbab5db35fe1c9ea552fc24ef4da6eca0a393.1673541292.git.isaku.yamahata@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Xiaoyao Li When TDX enabled, a large page cannot be zapped if it contains mixed pages. In this case, it has to split the large page. 
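[Editor's note] A minimal sketch of the per-leaf decision described above, assuming the kvm_mem_attr_is_mixed() helper this patch adds and the standard KVM_PAGES_PER_HPAGE() macro. The helper name zap_needs_split() is hypothetical; the patch open-codes the equivalent checks inline in tdp_mmu_zap_leafs().

	static bool zap_needs_split(struct kvm_memory_slot *slot, gfn_t gfn,
				    int level, gfn_t start, gfn_t end)
	{
		gfn_t nr_pages = KVM_PAGES_PER_HPAGE(level);
		gfn_t base = gfn & ~(nr_pages - 1);	/* head gfn of the large page */

		/* A leaf with mixed private/shared sub-pages cannot be zapped whole. */
		if (kvm_mem_attr_is_mixed(slot, gfn, level))
			return true;

		/* A large page only partially covered by [start, end) must also split. */
		return base < start || end < base + nr_pages;
	}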
Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/mmu/mmu.c          |  9 +++++
 arch/x86/kvm/mmu/mmu_internal.h |  2 +
 arch/x86/kvm/mmu/tdp_mmu.c      | 68 +++++++++++++++++++++++++++++++--
 3 files changed, 76 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 10d7c46b3bf5..961e103e674a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7250,6 +7250,15 @@ static bool linfo_is_mixed(struct kvm_lpage_info *linfo)
 	return linfo->disallow_lpage & KVM_LPAGE_PRIVATE_SHARED_MIXED;
 }
 
+bool kvm_mem_attr_is_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level)
+{
+	struct kvm_lpage_info *linfo = lpage_info_slot(gfn & KVM_HPAGE_MASK(level),
+						       slot, level);
+
+	WARN_ON_ONCE(level == PG_LEVEL_4K);
+	return linfo_is_mixed(linfo);
+}
+
 static void linfo_set_mixed(gfn_t gfn, struct kvm_memory_slot *slot,
 			    int level, bool mixed)
 {
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index e51fc5a5cabc..b2774c164abb 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -434,6 +434,8 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
+bool kvm_mem_attr_is_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level);
+
 #ifndef CONFIG_HAVE_KVM_RESTRICTED_MEM
 static inline int kvm_restricted_mem_get_pfn(struct kvm_memory_slot *slot,
 					     gfn_t gfn, kvm_pfn_t *pfn, int *order)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 1a71aad62bd3..2e55454c3e51 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1121,6 +1121,14 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 	return true;
 }
 
+
+static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
+						       struct tdp_iter *iter,
+						       bool shared);
+
+static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
+				   struct kvm_mmu_page *sp, bool shared);
+
 /*
  * If can_yield is true, will release the MMU lock and reschedule if the
  * scheduler needs the CPU or there is contention on the MMU lock. If this
@@ -1132,13 +1140,15 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 			      gfn_t start, gfn_t end, bool can_yield, bool flush,
 			      bool zap_private)
 {
+	bool is_private = is_private_sp(root);
+	struct kvm_mmu_page *split_sp = NULL;
 	struct tdp_iter iter;
 
 	end = min(end, tdp_mmu_max_gfn_exclusive());
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
-	WARN_ON_ONCE(zap_private && !is_private_sp(root));
+	WARN_ON_ONCE(zap_private && !is_private);
 	if (!zap_private && is_private_sp(root))
 		return false;
 
@@ -1163,12 +1173,66 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 		    !is_last_spte(iter.old_spte, iter.level))
 			continue;
 
+		if (is_private && kvm_gfn_shared_mask(kvm) &&
+		    is_large_pte(iter.old_spte)) {
+			gfn_t gfn = iter.gfn & ~kvm_gfn_shared_mask(kvm);
+			gfn_t mask = KVM_PAGES_PER_HPAGE(iter.level) - 1;
+			struct kvm_memory_slot *slot;
+			struct kvm_mmu_page *sp;
+
+			slot = gfn_to_memslot(kvm, gfn);
+			if (kvm_mem_attr_is_mixed(slot, gfn, iter.level) ||
+			    (gfn & mask) < start ||
+			    end < (gfn & mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
+				WARN_ON_ONCE(!can_yield);
+				if (split_sp) {
+					sp = split_sp;
+					split_sp = NULL;
+					sp->role = tdp_iter_child_role(&iter);
+				} else {
+					WARN_ON(iter.yielded);
+					if (flush && can_yield) {
+						kvm_flush_remote_tlbs(kvm);
+						flush = false;
+					}
+					sp = tdp_mmu_alloc_sp_for_split(kvm, &iter, false);
+					if (iter.yielded) {
+						split_sp = sp;
+						continue;
+					}
+				}
+				KVM_BUG_ON(!sp, kvm);
+
+				tdp_mmu_init_sp(sp, iter.sptep, iter.gfn);
+				if (tdp_mmu_split_huge_page(kvm, &iter, sp, false)) {
+					kvm_flush_remote_tlbs(kvm);
+					flush = false;
+					/* force retry on this gfn. */
+					iter.yielded = true;
+				} else
+					flush = true;
+				continue;
+			}
+		}
+
 		tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
 		flush = true;
 	}
 
 	rcu_read_unlock();
 
+	if (split_sp) {
+		WARN_ON(!can_yield);
+		if (flush) {
+			kvm_flush_remote_tlbs(kvm);
+			flush = false;
+		}
+
+		write_unlock(&kvm->mmu_lock);
+		tdp_mmu_free_sp(split_sp);
+		write_lock(&kvm->mmu_lock);
+	}
+
 	/*
 	 * Because this flow zaps _only_ leaf SPTEs, the caller doesn't need
 	 * to provide RCU protection as no 'struct kvm_mmu_page' will be freed.
@@ -1713,8 +1777,6 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 
 	KVM_BUG_ON(kvm_mmu_page_role_is_private(role) !=
 		   is_private_sptep(iter->sptep), kvm);
-	/* TODO: Large page isn't supported for private SPTE yet. */
-	KVM_BUG_ON(kvm_mmu_page_role_is_private(role), kvm);
 
 	/*
 	 * Since we are allocating while under the MMU lock we have to be
--
2.25.1

From nobody Mon Sep 15 09:47:25 2025
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack, Xiaoyao Li
Subject: [RFC PATCH v3 12/16] KVM: x86/tdp_mmu, TDX: Split a large page when a 4KB page within it is converted to shared
Date: Thu, 12 Jan 2023 08:44:04 -0800
Message-Id: <3ecba4c846764482bb15e63ae3353b5f9f627982.1673541292.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

When mapping a shared page for TDX, KVM needs to zap the private alias.
If the private page is mapped as a large (2MB) page, it can be removed
directly only when the whole 2MB range is converted to shared. Otherwise,
the 2MB page has to be split into 512 4KB pages, and only the pages
converted to shared are removed.

When a present large leaf SPTE switches to a present non-leaf SPTE, TDX
needs to split the corresponding Secure-EPT page to reflect it.
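[Editor's note] A condensed sketch of the split sequence the handle_changed_private_spte() hunk below implements. The helper name split_private_large_page() is hypothetical and error paths are simplified; the real code performs these calls inline.

	static int split_private_large_page(struct kvm *kvm, gfn_t gfn,
					    enum pg_level level, void *private_spt)
	{
		int ret;

		/* Block the large Secure-EPT entry, then flush TLBs. */
		ret = static_call(kvm_x86_zap_private_spte)(kvm, gfn, level);
		kvm_flush_remote_tlbs(kvm);
		if (ret)
			return ret;

		/*
		 * Demote the leaf: TDH.MEM.PAGE.DEMOTE installs private_spt as
		 * the new 4KB-level Secure-EPT page under the blocked entry.
		 */
		return static_call(kvm_x86_split_private_spt)(kvm, gfn, level,
							      private_spt);
	}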
Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  2 ++
 arch/x86/kvm/mmu/tdp_mmu.c         | 24 +++++++++++++++---------
 arch/x86/kvm/vmx/tdx.c             | 25 +++++++++++++++++++++++--
 arch/x86/kvm/vmx/tdx_arch.h        |  1 +
 arch/x86/kvm/vmx/tdx_ops.h         |  7 +++++++
 6 files changed, 49 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 0cf928d12067..1e86542141f7 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -97,6 +97,7 @@ KVM_X86_OP_OPTIONAL_RET0(get_mt_mask)
 KVM_X86_OP(load_mmu_pgd)
 KVM_X86_OP_OPTIONAL(link_private_spt)
 KVM_X86_OP_OPTIONAL(free_private_spt)
+KVM_X86_OP_OPTIONAL(split_private_spt)
 KVM_X86_OP_OPTIONAL(set_private_spte)
 KVM_X86_OP_OPTIONAL(remove_private_spte)
 KVM_X86_OP_OPTIONAL(zap_private_spte)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9687d8c8031c..7c6f8380b7e8 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1695,6 +1695,8 @@ struct kvm_x86_ops {
 				void *private_spt);
 	int (*free_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
 				void *private_spt);
+	int (*split_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
+				 void *private_spt);
 	int (*set_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
 				kvm_pfn_t pfn);
 	int (*remove_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 2e55454c3e51..2fa6ec89a0fd 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -585,18 +585,24 @@ static int __must_check handle_changed_private_spte(struct kvm *kvm, gfn_t gfn,
 
 	lockdep_assert_held(&kvm->mmu_lock);
 	if (is_present) {
-		/* TDP MMU doesn't change present -> present */
-		KVM_BUG_ON(was_present, kvm);
+		void *private_spt;
 
-		/*
-		 * Use different call to either set up middle level
-		 * private page table, or leaf.
-		 */
-		if (is_leaf)
+		if (level > PG_LEVEL_4K && was_leaf && !is_leaf) {
+			/*
+			 * Splitting a large page into 4KB pages.
+			 * tdp_mmu_split_huge_page() => tdp_mmu_link_sp()
+			 */
+			private_spt = get_private_spt(gfn, new_spte, level);
+			KVM_BUG_ON(!private_spt, kvm);
+			ret = static_call(kvm_x86_zap_private_spte)(kvm, gfn, level);
+			kvm_flush_remote_tlbs(kvm);
+			if (!ret)
+				ret = static_call(kvm_x86_split_private_spt)(kvm, gfn,
+									     level, private_spt);
+		} else if (is_leaf)
 			ret = static_call(kvm_x86_set_private_spte)(kvm, gfn, level, new_pfn);
 		else {
-			void *private_spt = get_private_spt(gfn, new_spte, level);
-
+			private_spt = get_private_spt(gfn, new_spte, level);
 			KVM_BUG_ON(!private_spt, kvm);
 			ret = static_call(kvm_x86_link_private_spt)(kvm, gfn, level, private_spt);
 		}
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index bdfcbd0db531..3fb7eb0df3aa 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1493,6 +1493,28 @@ static int tdx_sept_link_private_spt(struct kvm *kvm, gfn_t gfn,
 	return 0;
 }
 
+static int tdx_sept_split_private_spt(struct kvm *kvm, gfn_t gfn,
+				      enum pg_level level, void *private_spt)
+{
+	int tdx_level = pg_level_to_tdx_sept_level(level);
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	gpa_t gpa = gfn_to_gpa(gfn);
+	hpa_t hpa = __pa(private_spt);
+	struct tdx_module_output out;
+	u64 err;
+
+	/* See comment in tdx_sept_set_private_spte() */
+	err = tdh_mem_page_demote(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
+	if (err == TDX_ERROR_SEPT_BUSY)
+		return -EAGAIN;
+	if (KVM_BUG_ON(err, kvm)) {
+		pr_tdx_error(TDH_MEM_PAGE_DEMOTE, err, &out);
+		return -EIO;
+	}
+
+	return 0;
+}
+
 static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
 				     enum pg_level level)
 {
@@ -1502,8 +1524,6 @@ static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
 	struct tdx_module_output out;
 	u64 err;
 
-	/* For now large page isn't supported yet. */
-	WARN_ON_ONCE(level != PG_LEVEL_4K);
 	err = tdh_mem_range_block(kvm_tdx->tdr_pa, gpa, tdx_level, &out);
 	if (err == TDX_ERROR_SEPT_BUSY)
 		return -EAGAIN;
@@ -2725,6 +2745,7 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
 
 	x86_ops->link_private_spt = tdx_sept_link_private_spt;
 	x86_ops->free_private_spt = tdx_sept_free_private_spt;
+	x86_ops->split_private_spt = tdx_sept_split_private_spt;
 	x86_ops->set_private_spte = tdx_sept_set_private_spte;
 	x86_ops->remove_private_spte = tdx_sept_remove_private_spte;
 	x86_ops->zap_private_spte = tdx_sept_zap_private_spte;
diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
index 471a9f61fc81..508d9a1139ce 100644
--- a/arch/x86/kvm/vmx/tdx_arch.h
+++ b/arch/x86/kvm/vmx/tdx_arch.h
@@ -21,6 +21,7 @@
 #define TDH_MNG_CREATE		9
 #define TDH_VP_CREATE		10
 #define TDH_MNG_RD		11
+#define TDH_MEM_PAGE_DEMOTE	15
 #define TDH_MR_EXTEND		16
 #define TDH_MR_FINALIZE		17
 #define TDH_VP_FLUSH		18
diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index 4b03acce5003..60cbc7f94b18 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -133,6 +133,13 @@ static inline u64 tdh_mng_rd(hpa_t tdr, u64 field, struct tdx_module_output *out
 	return __seamcall(TDH_MNG_RD, tdr, field, 0, 0, out);
 }
 
+static inline u64 tdh_mem_page_demote(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
+				      struct tdx_module_output *out)
+{
+	tdx_clflush_page(page, PG_LEVEL_4K);
+	return seamcall_sept(TDH_MEM_PAGE_DEMOTE, gpa | level, tdr, page, 0, out);
+}
+
 static inline u64 tdh_mr_extend(hpa_t tdr, gpa_t gpa,
 				struct tdx_module_output *out)
 {
--
2.25.1

From nobody Mon Sep 15 09:47:25 2025
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack
Subject: [RFC PATCH v3 13/16] KVM: x86/tdp_mmu: Try to merge pages into a large page
Date: Thu, 12 Jan 2023 08:44:05 -0800

From: Isaku Yamahata

When a large page is passed to the KVM page fault handler and some of its
sub-pages are already populated, try to merge the sub-pages into a large
page. This situation can happen when the guest converts small pages to
shared and then converts them back to private.

When a large page is passed to the KVM MMU page fault handler and the
SPTE corresponding to the page is non-leaf (one or more sub-pages are
already populated at a lower page level), the current KVM MMU zaps the
non-leaf SPTE at the large page level and populates a leaf SPTE at that
level, thereby converting the small pages into a large page. However,
this doesn't work for TDX because zapping and re-populating would zero
the page contents. Instead, populate all the small pages and merge them
into a large page.

Merging pages into a large page can fail when some sub-pages are accepted
and some are not. In that case, on the assumption that the guest tries to
accept pages at large page size for performance when possible, don't try
to be smart about identifying which page is still pending; map all pages
at the lower page level and let the vcpu re-execute.
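[Editor's note] The core of this patch is tdp_mmu_merge_private_spt() in the diff below. As a reading aid, its flow reduces to the following steps (a summary of the patch, not code from it):

	/*
	 * 1. Freeze the 2MB SPTE: cmpxchg it to REMOVED_SPTE so that
	 *    concurrent fault handlers back off (-EBUSY on a lost race).
	 * 2. Step down to the child page table and populate every missing
	 *    4KB SPTE so that all 512 entries are present.
	 * 3. zap_private_spte() plus a remote TLB flush to block the range.
	 * 4. merge_private_spt(), i.e. TDH.MEM.PAGE.PROMOTE, to fuse the
	 *    512 Secure-EPT leaves into one 2MB leaf.
	 * 5. On success, write the huge SPTE and free the now-unused child
	 *    page table; on -EAGAIN (pages still pending ACCEPT), unzap the
	 *    range and return RET_PF_RETRY so the vcpu re-executes.
	 */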
Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/kvm-x86-ops.h |   2 +
 arch/x86/include/asm/kvm_host.h    |   3 +
 arch/x86/kvm/mmu/tdp_iter.c        |  37 ++++++--
 arch/x86/kvm/mmu/tdp_iter.h        |   2 +
 arch/x86/kvm/mmu/tdp_mmu.c         | 140 ++++++++++++++++++++++++++++-
 5 files changed, 174 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 1e86542141f7..83f99a9fb3c2 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -98,9 +98,11 @@ KVM_X86_OP(load_mmu_pgd)
 KVM_X86_OP_OPTIONAL(link_private_spt)
 KVM_X86_OP_OPTIONAL(free_private_spt)
 KVM_X86_OP_OPTIONAL(split_private_spt)
+KVM_X86_OP_OPTIONAL(merge_private_spt)
 KVM_X86_OP_OPTIONAL(set_private_spte)
 KVM_X86_OP_OPTIONAL(remove_private_spte)
 KVM_X86_OP_OPTIONAL(zap_private_spte)
+KVM_X86_OP_OPTIONAL(unzap_private_spte)
 KVM_X86_OP(has_wbinvd_exit)
 KVM_X86_OP(get_l2_tsc_offset)
 KVM_X86_OP(get_l2_tsc_multiplier)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7c6f8380b7e8..9574d9907074 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1697,11 +1697,14 @@ struct kvm_x86_ops {
 				void *private_spt);
 	int (*split_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
 				 void *private_spt);
+	int (*merge_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
+				 void *private_spt);
 	int (*set_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
 				kvm_pfn_t pfn);
 	int (*remove_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
 				   kvm_pfn_t pfn);
 	int (*zap_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level);
+	int (*unzap_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level);
 
 	bool (*has_wbinvd_exit)(void);
 
diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index e26e744df1d1..5b83967fbd82 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -75,6 +75,14 @@ tdp_ptep_t spte_to_child_pt(u64 spte, int level)
 	return (tdp_ptep_t)__va(spte_to_pfn(spte) << PAGE_SHIFT);
 }
 
+static void step_down(struct tdp_iter *iter, tdp_ptep_t child_pt)
+{
+	iter->level--;
+	iter->pt_path[iter->level - 1] = child_pt;
+	iter->gfn = round_gfn_for_level(iter->next_last_level_gfn, iter->level);
+	tdp_iter_refresh_sptep(iter);
+}
+
 /*
  * Steps down one level in the paging structure towards the goal GFN. Returns
  * true if the iterator was able to step down a level, false otherwise.
@@ -96,14 +104,28 @@ static bool try_step_down(struct tdp_iter *iter)
 	if (!child_pt)
 		return false;
 
-	iter->level--;
-	iter->pt_path[iter->level - 1] = child_pt;
-	iter->gfn = round_gfn_for_level(iter->next_last_level_gfn, iter->level);
-	tdp_iter_refresh_sptep(iter);
-
+	step_down(iter, child_pt);
 	return true;
 }
 
+/* Steps down for a frozen spte. Don't re-read sptep because it was frozen. */
+void tdp_iter_step_down(struct tdp_iter *iter, tdp_ptep_t child_pt)
+{
+	WARN_ON_ONCE(!child_pt);
+	WARN_ON_ONCE(iter->yielded);
+	WARN_ON_ONCE(iter->level == iter->min_level);
+
+	step_down(iter, child_pt);
+}
+
+void tdp_iter_step_side(struct tdp_iter *iter)
+{
+	iter->gfn += KVM_PAGES_PER_HPAGE(iter->level);
+	iter->next_last_level_gfn = iter->gfn;
+	iter->sptep++;
+	iter->old_spte = kvm_tdp_mmu_read_spte(iter->sptep);
+}
+
 /*
  * Steps to the next entry in the current page table, at the current page table
  * level. The next entry could point to a page backing guest memory or another
@@ -121,10 +143,7 @@ static bool try_step_side(struct tdp_iter *iter)
 	    (SPTE_ENT_PER_PAGE - 1))
 		return false;
 
-	iter->gfn += KVM_PAGES_PER_HPAGE(iter->level);
-	iter->next_last_level_gfn = iter->gfn;
-	iter->sptep++;
-	iter->old_spte = kvm_tdp_mmu_read_spte(iter->sptep);
+	tdp_iter_step_side(iter);
 
 	return true;
 }
diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index eab62baf8549..27460aa677a3 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -114,6 +114,8 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root,
 		    int min_level, gfn_t next_last_level_gfn);
 void tdp_iter_next(struct tdp_iter *iter);
 void tdp_iter_restart(struct tdp_iter *iter);
+void tdp_iter_step_side(struct tdp_iter *iter);
+void tdp_iter_step_down(struct tdp_iter *iter, tdp_ptep_t child_pt);
 
 static inline union kvm_mmu_page_role tdp_iter_child_role(struct tdp_iter *iter)
 {
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 2fa6ec89a0fd..fc8f457292b9 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1332,6 +1332,144 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
 	}
 }
 
+static int tdp_mmu_merge_private_spt(struct kvm_vcpu *vcpu,
+				     struct kvm_page_fault *fault,
+				     struct tdp_iter *iter, u64 new_spte)
+{
+	u64 *sptep = rcu_dereference(iter->sptep);
+	struct kvm_mmu_page *child_sp;
+	struct kvm *kvm = vcpu->kvm;
+	struct tdp_iter child_iter;
+	bool ret_pf_retry = false;
+	int level = iter->level;
+	gfn_t gfn = iter->gfn;
+	u64 old_spte = *sptep;
+	tdp_ptep_t child_pt;
+	u64 child_spte;
+	int ret = 0;
+	int i;
+
+	/*
+	 * TDX KVM supports only 2MB large pages. Merging 2MB pages into a
+	 * 1GB page is not supported at the moment.
+	 */
+	WARN_ON_ONCE(fault->goal_level != PG_LEVEL_2M);
+	WARN_ON_ONCE(iter->level != PG_LEVEL_2M);
+	WARN_ON_ONCE(!is_large_pte(new_spte));
+
+	/* Freeze the spte to prevent other threads from working on it. */
+	if (!try_cmpxchg64(sptep, &iter->old_spte, REMOVED_SPTE))
+		return -EBUSY;
+
+	/*
+	 * Step down to the child spte. Because tdp_iter_next() assumes the
+	 * parent spte isn't frozen, do it manually.
+	 */
+	child_pt = spte_to_child_pt(iter->old_spte, iter->level);
+	child_sp = sptep_to_sp(child_pt);
+	WARN_ON_ONCE(child_sp->role.level != PG_LEVEL_4K);
+	WARN_ON_ONCE(!kvm_mmu_page_role_is_private(child_sp->role));
+
+	/* Don't modify iter as the caller will use iter after this function. */
+	child_iter = *iter;
+	/* Adjust the target gfn to the head gfn of the large page. */
+	child_iter.next_last_level_gfn &= -KVM_PAGES_PER_HPAGE(level);
+	tdp_iter_step_down(&child_iter, child_pt);
+
+	/*
+	 * All child pages are required to be populated for merging them into a
+	 * large page. Populate all child sptes.
+	 */
+	for (i = 0; i < SPTE_ENT_PER_PAGE; i++, tdp_iter_step_side(&child_iter)) {
+		WARN_ON_ONCE(child_iter.level != PG_LEVEL_4K);
+		if (is_shadow_present_pte(child_iter.old_spte)) {
+			/* TODO: relocate page for huge page. */
+			WARN_ON_ONCE(spte_to_pfn(child_iter.old_spte) != spte_to_pfn(new_spte) + i);
+			continue;
+		}
+
+		WARN_ON_ONCE(spte_to_pfn(child_iter.old_spte) != spte_to_pfn(new_spte) + i);
+		child_spte = make_huge_page_split_spte(kvm, new_spte, child_sp->role, i);
+		/*
+		 * Because another thread may have started to operate on this
+		 * spte before freezing the parent spte, use the atomic version
+		 * to prevent races.
+		 */
+		ret = tdp_mmu_set_spte_atomic(vcpu->kvm, &child_iter, child_spte);
+		if (ret == -EBUSY || ret == -EAGAIN)
+			/*
+			 * There was a race condition. Populate the remaining
+			 * 4K sptes to resolve fault->gfn and guarantee forward
+			 * progress.
+			 */
+			ret_pf_retry = true;
+		else if (ret)
+			goto out;
+	}
+	if (ret_pf_retry) {
+		ret = RET_PF_RETRY;
+		goto out;
+	}
+
+	/* Prevent the Secure-EPT entry from being used. */
+	ret = static_call(kvm_x86_zap_private_spte)(kvm, gfn, level);
+	if (ret)
+		goto out;
+	kvm_flush_remote_tlbs_with_address(kvm, gfn, KVM_PAGES_PER_HPAGE(level));
+
+	/* Merge pages into a large page. */
+	ret = static_call(kvm_x86_merge_private_spt)(kvm, gfn, level,
+						     kvm_mmu_private_spt(child_sp));
+	/*
+	 * Failed to merge pages because some pages are accepted and some are
+	 * pending. Since the child page was mapped above, let the vcpu run.
+	 */
+	if (ret == -EAGAIN)
+		ret = RET_PF_RETRY;
+	if (ret)
+		goto unzap;
+
+	/* Unfreeze spte. */
+	__kvm_tdp_mmu_write_spte(sptep, new_spte);
+
+	/*
+	 * Free the unused child sp. The Secure-EPT page was already freed at
+	 * the TDX level by kvm_x86_merge_private_spt().
+	 */
+	tdp_unaccount_mmu_page(kvm, child_sp);
+	tdp_mmu_free_sp(child_sp);
+	return RET_PF_RETRY;
+
+unzap:
+	if (static_call(kvm_x86_unzap_private_spte)(kvm, gfn, level))
+		old_spte = SHADOW_NONPRESENT_VALUE |
+			(spte_to_pfn(old_spte) << PAGE_SHIFT) |
+			PT_PAGE_SIZE_MASK;
+out:
+	__kvm_tdp_mmu_write_spte(sptep, old_spte);
+	return ret;
+}
+
+static int __tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
+					     struct kvm_page_fault *fault,
+					     struct tdp_iter *iter, u64 new_spte)
+{
+	/*
+	 * The private page has smaller-size pages. For example, the child
+	 * pages were converted from shared to private, and now the range can
+	 * be mapped as a large page. Try to merge the small pages into a
+	 * large page.
+	 */
+	if (fault->slot &&
+	    kvm_gfn_shared_mask(vcpu->kvm) &&
+	    iter->level > PG_LEVEL_4K &&
+	    kvm_is_private_gpa(vcpu->kvm, gfn_to_gpa(fault->gfn)) &&
+	    is_shadow_present_pte(iter->old_spte) &&
+	    !is_large_pte(iter->old_spte))
+		return tdp_mmu_merge_private_spt(vcpu, fault, iter, new_spte);
+
+	return tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte);
+}
+
 /*
  * Installs a last-level SPTE to handle a TDP page fault.
  * (NPT/EPT violation/misconfiguration)
@@ -1366,7 +1504,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 
 	if (new_spte == iter->old_spte)
 		ret = RET_PF_SPURIOUS;
-	else if (tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte))
+	else if (__tdp_mmu_map_handle_target_level(vcpu, fault, iter, new_spte))
 		return RET_PF_RETRY;
 	else if (is_shadow_present_pte(iter->old_spte) &&
 		 !is_last_spte(iter->old_spte, iter->level))
--
2.25.1

From nobody Mon Sep 15 09:47:25 2025
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack
Subject: [RFC PATCH v3 14/16] KVM: x86/tdp_mmu: TDX: Implement merge pages into a large page
Date: Thu, 12 Jan 2023 08:44:06 -0800
Message-Id: <149d3cb8ff72b1a87c37b0356a5729ec4900b9dd.1673541292.git.isaku.yamahata@intel.com>

From: Isaku Yamahata

Implement the merge_private_spt callback.
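[Editor's note] The SEAMCALL wrappers in the diff below pass "gpa | level" as a single operand: TDX Secure-EPT levels are EPT-style, one less than KVM's enum pg_level (0 = 4KB, 1 = 2MB, 2 = 1GB). A sketch of the conversion, consistent with how tdx.c uses pg_level_to_tdx_sept_level() throughout this series:

	static inline int pg_level_to_tdx_sept_level(enum pg_level level)
	{
		/* KVM's PG_LEVEL_4K is 1; the TDX SEPT level for 4KB is 0. */
		WARN_ON_ONCE(level == PG_LEVEL_NONE);
		return level - 1;
	}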
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx.c       | 70 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx_arch.h  |  1 +
 arch/x86/kvm/vmx/tdx_errno.h |  2 ++
 arch/x86/kvm/vmx/tdx_ops.h   |  6 ++++
 4 files changed, 79 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 3fb7eb0df3aa..4ed76ef46b0d 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1515,6 +1515,47 @@ static int tdx_sept_split_private_spt(struct kvm *kvm, gfn_t gfn,
 	return 0;
 }
 
+static int tdx_sept_merge_private_spt(struct kvm *kvm, gfn_t gfn,
+				      enum pg_level level, void *private_spt)
+{
+	int tdx_level = pg_level_to_tdx_sept_level(level);
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	struct tdx_module_output out;
+	gpa_t gpa = gfn_to_gpa(gfn);
+	u64 err;
+
+	/* See comment in tdx_sept_set_private_spte() */
+	err = tdh_mem_page_promote(kvm_tdx->tdr_pa, gpa, tdx_level, &out);
+	if (err == TDX_ERROR_SEPT_BUSY)
+		return -EAGAIN;
+	if (err == TDX_EPT_INVALID_PROMOTE_CONDITIONS)
+		/*
+		 * Some pages are accepted, some are pending. Wait for the TD
+		 * to accept all pages. Tell the caller.
+		 */
+		return -EAGAIN;
+	if (KVM_BUG_ON(err, kvm)) {
+		pr_tdx_error(TDH_MEM_PAGE_PROMOTE, err, &out);
+		return -EIO;
+	}
+	WARN_ON_ONCE(out.rcx != __pa(private_spt));
+
+	/*
+	 * TDH.MEM.PAGE.PROMOTE frees the Secure-EPT page for the lower level.
+	 * Flush the cache for reuse.
+	 */
+	do {
+		err = tdh_phymem_page_wbinvd(set_hkid_to_hpa(__pa(private_spt),
+							     to_kvm_tdx(kvm)->hkid));
+	} while (err == (TDX_OPERAND_BUSY | TDX_OPERAND_ID_RCX));
+	if (WARN_ON_ONCE(err)) {
+		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
+		return -EIO;
+	}
+
+	return 0;
+}
+
 static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
 				     enum pg_level level)
 {
@@ -1584,6 +1625,33 @@ static void tdx_track(struct kvm_tdx *kvm_tdx)
 
 }
 
+static int tdx_sept_unzap_private_spte(struct kvm *kvm, gfn_t gfn,
+				       enum pg_level level)
+{
+	int tdx_level = pg_level_to_tdx_sept_level(level);
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	gpa_t gpa = gfn_to_gpa(gfn);
+	struct tdx_module_output out;
+	u64 err;
+
+	do {
+		err = tdh_mem_range_unblock(kvm_tdx->tdr_pa, gpa, tdx_level, &out);
+
+		/*
+		 * tdh_mem_range_block() is accompanied by tdx_track() via the
+		 * kvm remote TLB flush. Wait for the caller of
+		 * tdh_mem_range_block() to complete the TDX track.
+		 */
+	} while (err == (TDX_TLB_TRACKING_NOT_DONE | TDX_OPERAND_ID_SEPT));
+	if (err == TDX_ERROR_SEPT_BUSY)
+		return -EAGAIN;
+	if (KVM_BUG_ON(err, kvm)) {
+		pr_tdx_error(TDH_MEM_RANGE_UNBLOCK, err, &out);
+		return -EIO;
+	}
+	return 0;
+}
+
 static int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn,
 				     enum pg_level level, void *private_spt)
 {
@@ -2746,9 +2814,11 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
 	x86_ops->link_private_spt = tdx_sept_link_private_spt;
 	x86_ops->free_private_spt = tdx_sept_free_private_spt;
 	x86_ops->split_private_spt = tdx_sept_split_private_spt;
+	x86_ops->merge_private_spt = tdx_sept_merge_private_spt;
 	x86_ops->set_private_spte = tdx_sept_set_private_spte;
 	x86_ops->remove_private_spte = tdx_sept_remove_private_spte;
 	x86_ops->zap_private_spte = tdx_sept_zap_private_spte;
+	x86_ops->unzap_private_spte = tdx_sept_unzap_private_spte;
 
 	return 0;
 }
diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
index 508d9a1139ce..3a3c9c608bf0 100644
--- a/arch/x86/kvm/vmx/tdx_arch.h
+++ b/arch/x86/kvm/vmx/tdx_arch.h
@@ -29,6 +29,7 @@
 #define TDH_MNG_KEY_FREEID	20
 #define TDH_MNG_INIT		21
 #define TDH_VP_INIT		22
+#define TDH_MEM_PAGE_PROMOTE	23
 #define TDH_VP_RD		26
 #define TDH_MNG_KEY_RECLAIMID	27
 #define TDH_PHYMEM_PAGE_RECLAIM	28
diff --git a/arch/x86/kvm/vmx/tdx_errno.h b/arch/x86/kvm/vmx/tdx_errno.h
index 389b1b53da25..74a5777c05f1 100644
--- a/arch/x86/kvm/vmx/tdx_errno.h
+++ b/arch/x86/kvm/vmx/tdx_errno.h
@@ -19,6 +19,8 @@
 #define TDX_KEY_CONFIGURED			0x0000081500000000ULL
 #define TDX_NO_HKID_READY_TO_WBCACHE		0x0000082100000000ULL
 #define TDX_EPT_WALK_FAILED			0xC0000B0000000000ULL
+#define TDX_TLB_TRACKING_NOT_DONE		0xC0000B0800000000ULL
+#define TDX_EPT_INVALID_PROMOTE_CONDITIONS	0xC0000B0900000000ULL
 
 /*
  * TDG.VP.VMCALL Status Codes (returned in R10)
diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index 60cbc7f94b18..5d2d0b1eed28 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -140,6 +140,12 @@ static inline u64 tdh_mem_page_demote(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
 	return seamcall_sept(TDH_MEM_PAGE_DEMOTE, gpa | level, tdr, page, 0, out);
 }
 
+static inline u64 tdh_mem_page_promote(hpa_t tdr, gpa_t gpa, int level,
+				       struct tdx_module_output *out)
+{
+	return seamcall_sept(TDH_MEM_PAGE_PROMOTE, gpa | level, tdr, 0, 0, out);
+}
+
 static inline u64 tdh_mr_extend(hpa_t tdr, gpa_t gpa,
 				struct tdx_module_output *out)
 {
--
2.25.1

From nobody Mon Sep 15 09:47:25 2025
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack
Subject: [RFC PATCH v3 15/16] KVM: x86/mmu: Make the KVM fault handler aware of large pages of private memslots
Date: Thu, 12 Jan 2023 08:44:07 -0800
Message-Id: <89fa5a971d80f6e2cfbb6859d6985156b671b39e.1673541292.git.isaku.yamahata@intel.com>

From: Isaku Yamahata

struct kvm_page_fault.req_level is the page level that reflects the
faulted-in page size. For now it is calculated only for conventional KVM
memslots, by host_pfn_mapping_level(), which traverses the host page
table. However, host_pfn_mapping_level() cannot be used for a private KVM
memslot because pages of a private memslot aren't mapped into the user
virtual address space. Instead, the page order is given when getting the
pfn. Remember it in struct kvm_page_fault and use it.
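[Editor's note] "The page order is given when getting the pfn" refers to the order returned by kvm_restricted_mem_get_pfn(). The conversion to a page level used by kvm_faultin_pfn_private() in the diff below looks approximately like this (a sketch consistent with the restricted-mem series this RFC builds on, not part of this patch):

	static u8 order_to_level(int order)
	{
		BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G);

		/* An order of 9 (512 base pages) or more permits a 2MB mapping, etc. */
		if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G))
			return PG_LEVEL_1G;
		if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
			return PG_LEVEL_2M;
		return PG_LEVEL_4K;
	}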
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/mmu/mmu.c          | 36 +++++++++++++++++++++------------
 arch/x86/kvm/mmu/mmu_internal.h |  8 ++++++++
 2 files changed, 31 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 961e103e674a..dc767125922e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3118,12 +3118,12 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
 	return level;
 }
 
-int kvm_mmu_max_mapping_level(struct kvm *kvm,
-			      const struct kvm_memory_slot *slot, gfn_t gfn,
-			      int max_level, bool is_private)
+static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
+				       const struct kvm_memory_slot *slot,
+				       gfn_t gfn, int max_level, int host_level,
+				       bool faultin_private)
 {
 	struct kvm_lpage_info *linfo;
-	int host_level;
 
 	max_level = min(max_level, max_huge_page_level);
 	for ( ; max_level > PG_LEVEL_4K; max_level--) {
@@ -3132,16 +3132,24 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			break;
 	}
 
-	if (is_private)
-		return max_level;
-
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
-	host_level = host_pfn_mapping_level(kvm, gfn, slot);
+	if (!faultin_private) {
+		WARN_ON_ONCE(host_level != PG_LEVEL_NONE);
+		host_level = host_pfn_mapping_level(kvm, gfn, slot);
+	}
+	WARN_ON_ONCE(host_level == PG_LEVEL_NONE);
 	return min(host_level, max_level);
 }
 
+int kvm_mmu_max_mapping_level(struct kvm *kvm,
+			      const struct kvm_memory_slot *slot, gfn_t gfn,
+			      int max_level, bool faultin_private)
+{
+	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, max_level, PG_LEVEL_NONE, faultin_private);
+}
+
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_memory_slot *slot = fault->slot;
@@ -3162,9 +3170,10 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 * Enforce the iTLB multihit workaround after capturing the requested
 	 * level, which will be used to do precise, accurate accounting.
 	 */
-	fault->req_level = kvm_mmu_max_mapping_level(vcpu->kvm, slot,
-						     fault->gfn, fault->max_level,
-						     fault->is_private);
+	fault->req_level = __kvm_mmu_max_mapping_level(vcpu->kvm, slot,
+						       fault->gfn, fault->max_level,
+						       fault->host_level,
+						       kvm_is_faultin_private(fault));
 	if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed)
 		return;
 
@@ -4294,7 +4303,8 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
 	if (kvm_restricted_mem_get_pfn(slot, fault->gfn, &fault->pfn, &order))
 		return RET_PF_RETRY;
 
-	fault->max_level = min(order_to_level(order), fault->max_level);
+	fault->host_level = order_to_level(order);
+	fault->max_level = min((u8)fault->host_level, fault->max_level);
 	fault->map_writable = !(slot->flags & KVM_MEM_READONLY);
 	return RET_PF_CONTINUE;
 }
@@ -4338,7 +4348,7 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	if (fault->is_private != kvm_mem_is_private(vcpu->kvm, fault->gfn))
 		return kvm_do_memory_fault_exit(vcpu, fault);
 
-	if (fault->is_private && kvm_slot_can_be_private(slot))
+	if (kvm_is_faultin_private(fault))
 		return kvm_faultin_pfn_private(vcpu, fault);
 
 	async = false;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index b2774c164abb..1e73faad6268 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -337,6 +337,7 @@ struct kvm_page_fault {
 	kvm_pfn_t pfn;
 	hva_t hva;
 	bool map_writable;
+	enum pg_level host_level; /* valid only for private memslot && private gfn */
 };
 
 int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
@@ -445,4 +446,11 @@ static inline int kvm_restricted_mem_get_pfn(struct kvm_memory_slot *slot,
 }
 #endif /* CONFIG_HAVE_KVM_RESTRICTED_MEM */
 
+static inline bool kvm_is_faultin_private(const struct kvm_page_fault *fault)
+{
+	if (IS_ENABLED(CONFIG_HAVE_KVM_RESTRICTED_MEM))
+		return fault->is_private && kvm_slot_can_be_private(fault->slot);
+	return false;
+}
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */
--
2.25.1

From nobody Mon Sep 15 09:47:25 2025
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack, Xiaoyao Li
Subject: [RFC PATCH v3 16/16] KVM: TDX: Allow 2MB large page for TD GUEST
Date: Thu, 12 Jan 2023 08:44:08 -0800

From: Xiaoyao Li

Now everything is in place to support 2MB pages for TD guests. Because
the TDX module's TDH.MEM.PAGE.AUG supports 4KB and 2MB pages, set struct
kvm_arch.tdp_max_page_level to the 2MB page level.

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/mmu/tdp_mmu.c | 9 ++-------
 arch/x86/kvm/vmx/tdx.c     | 4 ++--
 2 files changed, 4 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index fc8f457292b9..7091c45697ef 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1630,14 +1630,9 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 
 		sp->nx_huge_page_disallowed = fault->huge_page_disallowed;
 
-		if (is_shadow_present_pte(iter.old_spte)) {
-			/*
-			 * TODO: large page support.
-			 * Doesn't support large page for TDX now
-			 */
-			KVM_BUG_ON(is_private_sptep(iter.sptep), vcpu->kvm);
+		if (is_shadow_present_pte(iter.old_spte))
 			r = tdp_mmu_split_huge_page(kvm, &iter, sp, true);
-		} else
+		else
 			r = tdp_mmu_link_sp(kvm, &iter, sp, true);
 
 		/*
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 4ed76ef46b0d..3084fa846460 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -487,8 +487,8 @@ int tdx_vm_init(struct kvm *kvm)
 	 */
 	kvm_mmu_set_mmio_spte_value(kvm, 0);
 
-	/* TODO: Enable 2mb and 1gb large page support. */
-	kvm->arch.tdp_max_page_level = PG_LEVEL_4K;
+	/* TDH.MEM.PAGE.AUG supports up to 2MB page. */
+	kvm->arch.tdp_max_page_level = PG_LEVEL_2M;
 
 	/*
 	 * This function initializes only KVM software construct. It doesn't
--
2.25.1
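[Editor's note] A closing sketch of the series' net effect, under the assumptions stated in the message above: with kvm->arch.tdp_max_page_level raised to PG_LEVEL_2M, the TDP fault path may install one Secure-EPT leaf per 2MB region instead of 512 separate 4KB leaves.

	/* One 2MB leaf covers KVM_PAGES_PER_HPAGE(PG_LEVEL_2M) == 512 guest pages. */
	BUILD_BUG_ON(KVM_PAGES_PER_HPAGE(PG_LEVEL_2M) != 512);

1GB mappings remain disallowed because, per the commit message above, TDH.MEM.PAGE.AUG supports only 4KB and 2MB pages.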