From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar
Subject: [RFC PATCH 01/13] KVM: Update lpage info when private/shared memory are mixed
Date: Sun, 7 Aug 2022 15:18:34 -0700
Message-Id: <80242041681cff8c215329f3d7ad02581e3e7ca2.1659854957.git.isaku.yamahata@intel.com>

From: Chao Peng

Update lpage_info when the private/shared memory attribute is changed. If a
large-page region contains both private and shared pages, it cannot be mapped
as a large page. Reserve a bit in disallow_lpage to indicate that a large page
has private and shared pages mixed.

Signed-off-by: Chao Peng
Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/kvm_host.h |   8 ++
 arch/x86/kvm/mmu/mmu.c          | 152 +++++++++++++++++++++++++++++++-
 arch/x86/kvm/mmu/mmu_internal.h |   2 +
 include/linux/kvm_host.h        |  10 +++
 virt/kvm/kvm_main.c             |   9 +-
 5 files changed, 178 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d68130be5bf7..2bdb1de9bce0 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -37,6 +37,7 @@
 #include
 
 #define __KVM_HAVE_ARCH_VCPU_DEBUGFS
+#define __KVM_HAVE_ARCH_UPDATE_MEM_ATTR
 #define __KVM_HAVE_ZAP_GFN_RANGE
 
 #define KVM_MAX_VCPUS 1024
@@ -981,6 +982,13 @@ struct kvm_vcpu_arch {
 #endif
 };
 
+/*
+ * Use a bit in disallow_lpage to indicate private/shared pages mixed at the
+ * level. The remaining bits will be used as a reference count for other users.
+ */
+#define KVM_LPAGE_PRIVATE_SHARED_MIXED	(1U << 31)
+#define KVM_LPAGE_COUNT_MAX		((1U << 31) - 1)
+
 struct kvm_lpage_info {
 	int disallow_lpage;
 };
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c61fb6848d0d..a03aa609a0da 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -818,11 +818,16 @@ static void update_gfn_disallow_lpage_count(const struct kvm_memory_slot *slot,
 {
 	struct kvm_lpage_info *linfo;
 	int i;
+	int disallow_count;
 
 	for (i = PG_LEVEL_2M; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
 		linfo = lpage_info_slot(gfn, slot, i);
+
+		disallow_count = linfo->disallow_lpage & KVM_LPAGE_COUNT_MAX;
+		WARN_ON(disallow_count + count < 0 ||
+			disallow_count > KVM_LPAGE_COUNT_MAX - count);
+
 		linfo->disallow_lpage += count;
-		WARN_ON(linfo->disallow_lpage < 0);
 	}
 }
 
@@ -7236,3 +7241,148 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
 	if (kvm->arch.nx_lpage_recovery_thread)
 		kthread_stop(kvm->arch.nx_lpage_recovery_thread);
 }
+
+bool kvm_mem_attr_is_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level)
+{
+	gfn_t pages = KVM_PAGES_PER_HPAGE(level);
+	gfn_t mask = ~(pages - 1);
+	struct kvm_lpage_info *linfo = lpage_info_slot(gfn & mask, slot, level);
+
+	WARN_ON(level == PG_LEVEL_4K);
+	return linfo->disallow_lpage & KVM_LPAGE_PRIVATE_SHARED_MIXED;
+}
+
+static void update_mixed(struct kvm_lpage_info *linfo, bool mixed)
+{
+	if (mixed)
+		linfo->disallow_lpage |= KVM_LPAGE_PRIVATE_SHARED_MIXED;
+	else
+		linfo->disallow_lpage &= ~KVM_LPAGE_PRIVATE_SHARED_MIXED;
+}
+
+static bool __mem_attr_is_mixed(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	XA_STATE(xas, &kvm->mem_attr_array, start);
+	bool mixed = false;
+	gfn_t gfn = start;
+	void *s_entry;
+	void *entry;
+
+	rcu_read_lock();
+	s_entry = xas_load(&xas);
+	while (gfn < end) {
+		if (xas_retry(&xas, entry))
+			continue;
+
+		KVM_BUG_ON(gfn != xas.xa_index, kvm);
+
+		entry = xas_next(&xas);
+		if (entry != s_entry) {
+			mixed = true;
+			break;
+		}
+		gfn++;
+	}
+	rcu_read_unlock();
+	return mixed;
+}
+
+static bool mem_attr_is_mixed(struct kvm *kvm,
+			      struct kvm_memory_slot *slot, int level,
+			      gfn_t start, gfn_t end)
+{
+	struct kvm_lpage_info *child_linfo;
+	unsigned long child_pages;
+	bool mixed = false;
+	unsigned long gfn;
+	void *entry;
+
+	if (WARN_ON(level == PG_LEVEL_4K))
+		return false;
+
+	if (level == PG_LEVEL_2M)
+		return __mem_attr_is_mixed(kvm, start, end);
+
+	/* This assumes that level - 1 is already updated. */
+	rcu_read_lock();
+	child_pages = KVM_PAGES_PER_HPAGE(level - 1);
+	entry = xa_load(&kvm->mem_attr_array, start);
+	for (gfn = start; gfn < end; gfn += child_pages) {
+		child_linfo = lpage_info_slot(gfn, slot, level - 1);
+		if (child_linfo->disallow_lpage & KVM_LPAGE_PRIVATE_SHARED_MIXED) {
+			mixed = true;
+			break;
+		}
+		if (xa_load(&kvm->mem_attr_array, gfn) != entry) {
+			mixed = true;
+			break;
+		}
+	}
+	rcu_read_unlock();
+	return mixed;
+}
+
+static void update_mem_lpage_info(struct kvm *kvm,
+				  struct kvm_memory_slot *slot,
+				  unsigned int attr,
+				  gfn_t start, gfn_t end)
+{
+	unsigned long lpage_start, lpage_end;
+	unsigned long gfn, pages, mask;
+	int level;
+
+	for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
+		pages = KVM_PAGES_PER_HPAGE(level);
+		mask = ~(pages - 1);
+		lpage_start = start & mask;
+		lpage_end = (end - 1) & mask;
+
+		/*
+		 * We only need to scan the head and tail page, for middle pages
+		 * we know they are not mixed.
+		 */
+		update_mixed(lpage_info_slot(lpage_start, slot, level),
+			     mem_attr_is_mixed(kvm, slot, level,
+					       lpage_start, lpage_start + pages));
+
+		if (lpage_start == lpage_end)
+			return;
+
+		for (gfn = lpage_start + pages; gfn < lpage_end; gfn += pages) {
+			update_mixed(lpage_info_slot(gfn, slot, level), false);
+		}
+
+		update_mixed(lpage_info_slot(lpage_end, slot, level),
+			     mem_attr_is_mixed(kvm, slot, level,
+					       lpage_end, lpage_end + pages));
+	}
+}
+
+void kvm_arch_update_mem_attr(struct kvm *kvm, unsigned int attr,
+			      gfn_t start, gfn_t end)
+{
+	struct kvm_memory_slot *slot;
+	struct kvm_memslots *slots;
+	struct kvm_memslot_iter iter;
+	int idx;
+	int i;
+
+	WARN_ONCE(!(attr & (KVM_MEM_ATTR_PRIVATE | KVM_MEM_ATTR_SHARED)),
+		  "Unsupported mem attribute.\n");
+
+	idx = srcu_read_lock(&kvm->srcu);
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
+
+		kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) {
+			slot = iter.slot;
+			start = max(start, slot->base_gfn);
+			end = min(end, slot->base_gfn + slot->npages);
+			if (WARN_ON_ONCE(start >= end))
+				continue;
+
+			update_mem_lpage_info(kvm, slot, attr, start, end);
+		}
+	}
+	srcu_read_unlock(&kvm->srcu, idx);
+}
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 4b581209b3b9..e5d5fea29bfa 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -259,6 +259,8 @@ static inline gfn_t kvm_gfn_for_root(struct kvm *kvm, struct kvm_mmu_page *root,
 }
 #endif
 
+bool kvm_mem_attr_is_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level);
+
 static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
 {
 	/*
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 3c29e0eb754c..7e3d582cc1ba 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2295,6 +2295,16 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
 /* Max number of entries allowed for each kvm dirty ring */
 #define KVM_DIRTY_RING_MAX_ENTRIES  65536
 
+#ifdef __KVM_HAVE_ARCH_UPDATE_MEM_ATTR
+void kvm_arch_update_mem_attr(struct kvm *kvm, unsigned int attr,
+			      gfn_t start, gfn_t end);
+#else
+static inline void kvm_arch_update_mem_attr(struct kvm *kvm, unsigned int attr,
+					    gfn_t start, gfn_t end)
+{
+}
+#endif /* __KVM_HAVE_ARCH_UPDATE_MEM_ATTR */
+
 #ifdef CONFIG_HAVE_KVM_PRIVATE_MEM
 static inline int kvm_private_mem_get_pfn(struct kvm_memory_slot *slot,
					  gfn_t gfn, kvm_pfn_t *pfn, int *order)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2ec940354749..9f9b2c0e7afc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -943,6 +943,7 @@ EXPORT_SYMBOL_GPL(kvm_vm_reserve_mem_attr);
 int kvm_vm_set_mem_attr(struct kvm *kvm, int attr, gfn_t start, gfn_t end)
 {
 	void *entry;
+	int r;
 
 	/* By default, the entry is private. */
 	switch (attr) {
@@ -958,8 +959,12 @@ int kvm_vm_set_mem_attr(struct kvm *kvm, int attr, gfn_t start, gfn_t end)
 	}
 
 	WARN_ON(start >= end);
-	return xa_err(xa_store_range(&kvm->mem_attr_array, start, end - 1,
-			entry, GFP_KERNEL_ACCOUNT));
+	r = xa_err(xa_store_range(&kvm->mem_attr_array, start, end - 1,
+			entry, GFP_KERNEL_ACCOUNT));
+	if (r)
+		return r;
+	kvm_arch_update_mem_attr(kvm, attr, start, end);
+	return 0;
 }
 EXPORT_SYMBOL_GPL(kvm_vm_set_mem_attr);
 #endif /* CONFIG_HAVE_KVM_PRIVATE_MEM_ATTR */
-- 
2.25.1
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar
Subject: [RFC PATCH 02/13] KVM: TDP_MMU: Go to next level if smaller private mapping exists
Date: Sun, 7 Aug 2022 15:18:35 -0700
From: Xiaoyao Li

A private page cannot be mapped as a large page if any smaller mapping
exists. It has to wait until all of the not-yet-mapped smaller pages are
mapped, and then be promoted to the larger mapping.

Signed-off-by: Xiaoyao Li
---
 arch/x86/kvm/mmu/tdp_mmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index f2461deba2dc..faf278e0c740 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1322,7 +1322,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	}
 
 	tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
-		if (fault->nx_huge_page_workaround_enabled)
+		if (fault->nx_huge_page_workaround_enabled ||
+		    kvm_gfn_shared_mask(vcpu->kvm))
 			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);
 
 		if (iter.level == fault->goal_level)
-- 
2.25.1
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar
Subject: [RFC PATCH 03/13] KVM: TDX: Pass page level to cache flush before TDX SEAMCALL
Date: Sun, 7 Aug 2022 15:18:36 -0700
Message-Id: <01bfd080f85afffcffeba96692be726294faa18d.1659854957.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

tdh_mem_page_aug() will support 2MB large pages in the near future. The cache
flush then also needs to cover 2MB instead of 4KB. Introduce a helper function
that flushes the cache according to the page size, in preparation for large
pages.
Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx_ops.h | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index a50bc1445cc2..9accf2fe04ae 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -6,6 +6,7 @@
 
 #include
 
+#include
 #include
 #include
 #include
@@ -18,6 +19,11 @@
 
 void pr_tdx_error(u64 op, u64 error_code, const struct tdx_module_output *out);
 
+static inline void tdx_clflush_page(hpa_t addr, enum pg_level level)
+{
+	clflush_cache_range(__va(addr), KVM_HPAGE_SIZE(level));
+}
+
 /*
  * Although seamcal_lock protects seamcall to avoid contention inside the TDX
  * module, it doesn't protect TDH.VP.ENTER.  With zero-step attack mitigation,
@@ -40,21 +46,21 @@ static inline u64 seamcall_sept_retry(u64 op, u64 rcx, u64 rdx, u64 r8, u64 r9,
 
 static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr)
 {
-	clflush_cache_range(__va(addr), PAGE_SIZE);
+	tdx_clflush_page(addr, PG_LEVEL_4K);
 	return __seamcall(TDH_MNG_ADDCX, addr, tdr, 0, 0, NULL);
 }
 
 static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, hpa_t hpa, hpa_t source,
				   struct tdx_module_output *out)
 {
-	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	tdx_clflush_page(hpa, PG_LEVEL_4K);
 	return seamcall_sept_retry(TDH_MEM_PAGE_ADD, gpa, tdr, hpa, source, out);
 }
 
 static inline u64 tdh_mem_sept_add(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
				   struct tdx_module_output *out)
 {
-	clflush_cache_range(__va(page), PAGE_SIZE);
+	tdx_clflush_page(page, PG_LEVEL_4K);
 	return seamcall_sept_retry(TDH_MEM_SEPT_ADD, gpa | level, tdr, page, 0,
				   out);
 }
@@ -67,21 +73,21 @@ static inline u64 tdh_mem_sept_remove(hpa_t tdr, gpa_t gpa, int level,
 
 static inline u64 tdh_vp_addcx(hpa_t tdvpr, hpa_t addr)
 {
-	clflush_cache_range(__va(addr), PAGE_SIZE);
+	tdx_clflush_page(addr, PG_LEVEL_4K);
 	return __seamcall(TDH_VP_ADDCX, addr, tdvpr, 0, 0, NULL);
 }
 
 static inline u64 tdh_mem_page_relocate(hpa_t tdr, gpa_t gpa, hpa_t hpa,
					struct tdx_module_output *out)
 {
-	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	tdx_clflush_page(hpa, PG_LEVEL_4K);
 	return __seamcall(TDH_MEM_PAGE_RELOCATE, gpa, tdr, hpa, 0, out);
 }
 
 static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, hpa_t hpa,
				   struct tdx_module_output *out)
 {
-	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	tdx_clflush_page(hpa, PG_LEVEL_4K);
 	return seamcall_sept_retry(TDH_MEM_PAGE_AUG, gpa, tdr, hpa, 0, out);
 }
 
@@ -99,13 +105,13 @@ static inline u64 tdh_mng_key_config(hpa_t tdr)
 
 static inline u64 tdh_mng_create(hpa_t tdr, int hkid)
 {
-	clflush_cache_range(__va(tdr), PAGE_SIZE);
+	tdx_clflush_page(tdr, PG_LEVEL_4K);
 	return __seamcall(TDH_MNG_CREATE, tdr, hkid, 0, 0, NULL);
 }
 
 static inline u64 tdh_vp_create(hpa_t tdr, hpa_t tdvpr)
 {
-	clflush_cache_range(__va(tdvpr), PAGE_SIZE);
+	tdx_clflush_page(tdvpr, PG_LEVEL_4K);
 	return __seamcall(TDH_VP_CREATE, tdvpr, tdr, 0, 0, NULL);
 }
 
-- 
2.25.1
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar
Subject: [RFC PATCH 04/13] KVM: TDX: Pass KVM page level to tdh_mem_page_add() and tdh_mem_page_aug()
Date: Sun, 7 Aug 2022 15:18:37 -0700
Message-Id: <86bb6025f375587aae50a59c347fb16ea28efcd6.1659854957.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

Level info is needed in tdx_clflush_page() to flush the correct page size.
Besides, explicitly pass the level info to the SEAMCALL instead of assuming
it is zero, so the code works naturally when 2MB support lands.
Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx.c     |  7 ++++---
 arch/x86/kvm/vmx/tdx_ops.h | 21 ++++++++++++++-------
 2 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 2d34f0b41b26..b717d50ee4d3 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1443,6 +1443,7 @@ static void tdx_unpin_pfn(struct kvm *kvm, kvm_pfn_t pfn)
 static void __tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
					enum pg_level level, kvm_pfn_t pfn)
 {
+	int tdx_level = pg_level_to_tdx_sept_level(level);
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
 	hpa_t hpa = pfn_to_hpa(pfn);
 	gpa_t gpa = gfn_to_gpa(gfn);
@@ -1463,7 +1464,7 @@ static void __tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
 		return;
 
-	err = tdh_mem_page_aug(kvm_tdx->tdr.pa, gpa, hpa, &out);
+	err = tdh_mem_page_aug(kvm_tdx->tdr.pa, gpa, tdx_level, hpa, &out);
 	if (KVM_BUG_ON(err, kvm)) {
 		pr_tdx_error(TDH_MEM_PAGE_AUG, err, &out);
 		tdx_unpin_pfn(kvm, pfn);
@@ -1491,12 +1492,12 @@ static void __tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 
 	source_pa = kvm_tdx->source_pa & ~KVM_TDX_MEASURE_MEMORY_REGION;
 
-	err = tdh_mem_page_add(kvm_tdx->tdr.pa, gpa, hpa, source_pa, &out);
+	err = tdh_mem_page_add(kvm_tdx->tdr.pa, gpa, tdx_level, hpa, source_pa, &out);
 	if (KVM_BUG_ON(err, kvm)) {
 		pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out);
 		tdx_unpin_pfn(kvm, pfn);
 	} else if ((kvm_tdx->source_pa & KVM_TDX_MEASURE_MEMORY_REGION))
-		tdx_measure_page(kvm_tdx, gpa);
+		tdx_measure_page(kvm_tdx, gpa); /* TODO: handle page size > 4KB */
 
 	kvm_tdx->source_pa = INVALID_PAGE;
 }
diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index 9accf2fe04ae..da662aa46cd9 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -19,6 +19,11 @@
 
 void pr_tdx_error(u64 op, u64 error_code, const struct tdx_module_output *out);
 
+static inline enum pg_level tdx_sept_level_to_pg_level(int tdx_level)
+{
+	return tdx_level + 1;
+}
+
 static inline void tdx_clflush_page(hpa_t addr, enum pg_level level)
 {
 	clflush_cache_range(__va(addr), KVM_HPAGE_SIZE(level));
@@ -50,11 +55,12 @@ static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr)
 	return __seamcall(TDH_MNG_ADDCX, addr, tdr, 0, 0, NULL);
 }
 
-static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, hpa_t hpa, hpa_t source,
-				   struct tdx_module_output *out)
+static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, int level, hpa_t hpa,
+				   hpa_t source, struct tdx_module_output *out)
 {
-	tdx_clflush_page(hpa, PG_LEVEL_4K);
-	return seamcall_sept_retry(TDH_MEM_PAGE_ADD, gpa, tdr, hpa, source, out);
+	tdx_clflush_page(hpa, tdx_sept_level_to_pg_level(level));
+	return seamcall_sept_retry(TDH_MEM_PAGE_ADD, gpa | level, tdr, hpa,
+				   source, out);
 }
 
 static inline u64 tdh_mem_sept_add(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
@@ -84,11 +90,12 @@ static inline u64 tdh_mem_page_relocate(hpa_t tdr, gpa_t gpa, hpa_t hpa,
 	return __seamcall(TDH_MEM_PAGE_RELOCATE, gpa, tdr, hpa, 0, out);
 }
 
-static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, hpa_t hpa,
+static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, int level, hpa_t hpa,
				   struct tdx_module_output *out)
 {
-	tdx_clflush_page(hpa, PG_LEVEL_4K);
-	return seamcall_sept_retry(TDH_MEM_PAGE_AUG, gpa, tdr, hpa, 0, out);
+	tdx_clflush_page(hpa, tdx_sept_level_to_pg_level(level));
+	return seamcall_sept_retry(TDH_MEM_PAGE_AUG, gpa | level, tdr, hpa, 0,
+				   out);
 }
 
 static inline u64 tdh_mem_range_block(hpa_t tdr, gpa_t gpa, int level,
-- 
2.25.1

From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar
Subject: [RFC PATCH 05/13] KVM: TDX: Pass size to tdx_measure_page()
Date: Sun, 7 Aug 2022 15:18:38 -0700
Message-Id:
 <824f3a80ea74d1065ec5e2f8c123aa64e527f7f0.1659854957.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

Extend tdx_measure_page() to take size info so that it can measure a large
page as well.

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index b717d50ee4d3..b7a75c0adbfa 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1417,13 +1417,15 @@ void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
 }
 
-static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa)
+static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa, int size)
 {
 	struct tdx_module_output out;
 	u64 err;
 	int i;
 
-	for (i = 0; i < PAGE_SIZE; i += TDX_EXTENDMR_CHUNKSIZE) {
+	WARN_ON_ONCE(size % TDX_EXTENDMR_CHUNKSIZE);
+
+	for (i = 0; i < size; i += TDX_EXTENDMR_CHUNKSIZE) {
 		err = tdh_mr_extend(kvm_tdx->tdr.pa, gpa + i, &out);
 		if (KVM_BUG_ON(err, &kvm_tdx->kvm)) {
 			pr_tdx_error(TDH_MR_EXTEND, err, &out);
@@ -1497,7 +1499,7 @@ static void __tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 		pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out);
 		tdx_unpin_pfn(kvm, pfn);
 	} else if ((kvm_tdx->source_pa & KVM_TDX_MEASURE_MEMORY_REGION))
-		tdx_measure_page(kvm_tdx, gpa); /* TODO: handle page size > 4KB */
+		tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level));
 
 	kvm_tdx->source_pa = INVALID_PAGE;
 }
-- 
2.25.1
(vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BEF74C19F2A for ; Sun, 7 Aug 2022 22:32:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242397AbiHGWcU (ORCPT ); Sun, 7 Aug 2022 18:32:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35104 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242189AbiHGWbi (ORCPT ); Sun, 7 Aug 2022 18:31:38 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0502118361; Sun, 7 Aug 2022 15:18:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1659910730; x=1691446730; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=0eQxR6em7YwtSJ4aNxbKq4cfhsibGseChB89myr5XqY=; b=fBJAlYAqZKc//40OFVOSvjZV1LhaOR+Aw2peLMaCLaUsmH3JqS/s7Ahb FRe1D1e+hHAFjqdPzr0l22/H5LOxLWQQJywzKTReOgQEk3JQTG2cyjEfq KoCCjoFkEXsspqGTEOR0qcqwvaPzKOE60HCpuHbnsDLAM7f6erQDnaQ7x 8TR1ygDqCdgbdn+yGXf/WLWZfpapl02YpuYpxdY2b0m2mzfphiFw15EEH SuTul1Cwf/wIxxvZWQD2zLloMkhBIfQ89TkQxMM/OuGFpB3TzRdVDYgpI 4/cq5fPBCbG+u1nBMpFcoPqE44x+9vbVxVlPyxShfBxsK/SwldrreYo4r Q==; X-IronPort-AV: E=McAfee;i="6400,9594,10432"; a="270852834" X-IronPort-AV: E=Sophos;i="5.93,221,1654585200"; d="scan'208";a="270852834" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Aug 2022 15:18:49 -0700 X-IronPort-AV: E=Sophos;i="5.93,221,1654585200"; d="scan'208";a="632642317" Received: from ls.sc.intel.com (HELO localhost) ([143.183.96.54]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Aug 2022 15:18:49 -0700 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean 
Christopherson , Sagi Shahar
Subject: [RFC PATCH 06/13] KVM: TDX: Pass size to reclaim_page()
Date: Sun, 7 Aug 2022 15:18:39 -0700
Message-Id: <5dda5dbae9b4db639b1f1d4c66d64a164115c024.1659854957.git.isaku.yamahata@intel.com>
From: Xiaoyao Li

A 2MB large page can be tdh_mem_page_aug()'ed to a TD directly. In that case, the page needs to be reclaimed and cleared at 2MB granularity.

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx.c | 28 ++++++++++++++++++----------
 1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index b7a75c0adbfa..0b9f9075e1ea 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -189,11 +189,13 @@ void tdx_hardware_disable(void)
 	tdx_disassociate_vp(&tdx->vcpu);
 }
 
-static void tdx_clear_page(unsigned long page)
+static void tdx_clear_page(unsigned long page, int size)
 {
 	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
 	unsigned long i;
 
+	WARN_ON_ONCE(size % 64);
+
 	/*
 	 * Zeroing the page is only necessary for systems with MKTME-i:
 	 * when re-assign one page from old keyid to a new keyid, MOVDIR64B is
@@ -203,13 +205,14 @@ static void tdx_clear_page(unsigned long page)
 	if (!static_cpu_has(X86_FEATURE_MOVDIR64B))
 		return;
 
-	for (i = 0; i < 4096; i += 64)
+	for (i = 0; i < size; i += 64)
 		/* MOVDIR64B [rdx], es:rdi */
 		asm (".byte 0x66, 0x0f, 0x38, 0xf8, 0x3a"
 		     : : "d" (zero_page), "D" (page + i) : "memory");
 }
 
-static int tdx_reclaim_page(unsigned long va, hpa_t pa, bool do_wb, u16 hkid)
+static int tdx_reclaim_page(unsigned long va, hpa_t pa, enum pg_level level,
+			    bool do_wb, u16 hkid)
 {
 	struct tdx_module_output out;
 	u64 err;
@@ -219,8 +222,11 @@ static int 
tdx_reclaim_page(unsigned long va, hpa_t pa, bool do_wb, u16 hkid)
 		pr_tdx_error(TDH_PHYMEM_PAGE_RECLAIM, err, &out);
 		return -EIO;
 	}
+	/* out.r8 == tdx sept page level */
+	WARN_ON_ONCE(out.r8 != pg_level_to_tdx_sept_level(level));
 
-	if (do_wb) {
+	/* only TDR page gets into this path */
+	if (do_wb && level == PG_LEVEL_4K) {
 		err = tdh_phymem_page_wbinvd(set_hkid_to_hpa(pa, hkid));
 		if (WARN_ON_ONCE(err)) {
 			pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
@@ -228,7 +234,7 @@ static int tdx_reclaim_page(unsigned long va, hpa_t pa, bool do_wb, u16 hkid)
 		}
 	}
 
-	tdx_clear_page(va);
+	tdx_clear_page(va, KVM_HPAGE_SIZE(level));
 	return 0;
 }
 
@@ -257,7 +263,7 @@ static void tdx_reclaim_td_page(struct tdx_td_page *page)
 	 * was already flushed by TDH.PHYMEM.CACHE.WB before here, So
 	 * cache doesn't need to be flushed again.
 	 */
-	if (tdx_reclaim_page(page->va, page->pa, false, 0))
+	if (tdx_reclaim_page(page->va, page->pa, PG_LEVEL_4K, false, 0))
 		return;
 
 	page->added = false;
@@ -404,8 +410,8 @@ void tdx_vm_free(struct kvm *kvm)
 	 * TDX global HKID is needed.
 	 */
 	if (kvm_tdx->tdr.added &&
-	    tdx_reclaim_page(kvm_tdx->tdr.va, kvm_tdx->tdr.pa, true,
-			     tdx_global_keyid))
+	    tdx_reclaim_page(kvm_tdx->tdr.va, kvm_tdx->tdr.pa, PG_LEVEL_4K,
+			     true, tdx_global_keyid))
 		return;
 
 	free_page(kvm_tdx->tdr.va);
@@ -1548,7 +1554,8 @@ static void tdx_sept_drop_private_spte(
 	 * The HKID assigned to this TD was already freed and cache
 	 * was already flushed. We don't have to flush again.
 	 */
-	err = tdx_reclaim_page((unsigned long)__va(hpa), hpa, false, 0);
+	err = tdx_reclaim_page((unsigned long)__va(hpa), hpa, level,
+			       false, 0);
 
 unlock:
 	spin_unlock(&kvm_tdx->seamcall_lock);
@@ -1667,7 +1674,8 @@ static int tdx_sept_free_private_sp(struct kvm *kvm, gfn_t gfn, enum pg_level level
 	 * already flushed. We don't have to flush again.
 	 */
 	spin_lock(&kvm_tdx->seamcall_lock);
-	ret = tdx_reclaim_page((unsigned long)sept_page, __pa(sept_page), false, 0);
+	ret = tdx_reclaim_page((unsigned long)sept_page, __pa(sept_page),
+			       PG_LEVEL_4K, false, 0);
 	spin_unlock(&kvm_tdx->seamcall_lock);
 
 	return ret;
-- 
2.25.1

From nobody Sat Apr 11 19:32:16 2026
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar
Subject: [RFC PATCH 07/13] KVM: TDX: Update tdx_sept_{set,drop}_private_spte() to support large page
Date: Sun, 7 Aug 2022 15:18:40 -0700
From: Xiaoyao Li

Allow large page level AUG and REMOVE for TDX pages.

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx.c | 46 +++++++++++++++++++++++-----------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 0b9f9075e1ea..cdd421fb5024 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1458,20 +1458,18 @@ static void __tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	struct tdx_module_output out;
 	hpa_t source_pa;
 	u64 err;
+	int i;
 
 	if (WARN_ON_ONCE(is_error_noslot_pfn(pfn) ||
 			 !kvm_pfn_to_refcounted_page(pfn)))
 		return;
 
 	/* To prevent page migration, do nothing on mmu notifier. */
-	get_page(pfn_to_page(pfn));
+	for (i = 0; i < KVM_PAGES_PER_HPAGE(level); i++)
+		get_page(pfn_to_page(pfn + i));
 
 	/* Build-time faults are induced and handled via TDH_MEM_PAGE_ADD. */
 	if (likely(is_td_finalized(kvm_tdx))) {
-		/* TODO: handle large pages.
*/
-		if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
-			return;
-
 		err = tdh_mem_page_aug(kvm_tdx->tdr.pa, gpa, tdx_level, hpa, &out);
 		if (KVM_BUG_ON(err, kvm)) {
 			pr_tdx_error(TDH_MEM_PAGE_AUG, err, &out);
@@ -1530,38 +1528,40 @@ static void tdx_sept_drop_private_spte(
 	hpa_t hpa_with_hkid;
 	struct tdx_module_output out;
 	u64 err = 0;
+	int i;
 
-	/* TODO: handle large pages. */
-	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
-		return;
-
-	spin_lock(&kvm_tdx->seamcall_lock);
 	if (is_hkid_assigned(kvm_tdx)) {
+		spin_lock(&kvm_tdx->seamcall_lock);
 		err = tdh_mem_page_remove(kvm_tdx->tdr.pa, gpa, tdx_level, &out);
+		spin_unlock(&kvm_tdx->seamcall_lock);
 		if (KVM_BUG_ON(err, kvm)) {
 			pr_tdx_error(TDH_MEM_PAGE_REMOVE, err, &out);
-			goto unlock;
+			return;
 		}
 
-		hpa_with_hkid = set_hkid_to_hpa(hpa, (u16)kvm_tdx->hkid);
-		err = tdh_phymem_page_wbinvd(hpa_with_hkid);
-		if (WARN_ON_ONCE(err)) {
-			pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
-			goto unlock;
+		for (i = 0; i < KVM_PAGES_PER_HPAGE(level); i++) {
+			hpa_with_hkid = set_hkid_to_hpa(hpa, (u16)kvm_tdx->hkid);
+			spin_lock(&kvm_tdx->seamcall_lock);
+			err = tdh_phymem_page_wbinvd(hpa_with_hkid);
+			spin_unlock(&kvm_tdx->seamcall_lock);
+			if (WARN_ON_ONCE(err))
+				pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
+			else
+				tdx_unpin(kvm, gfn + i, pfn + i);
+			hpa += PAGE_SIZE;
 		}
-	} else
+	} else {
 		/*
 		 * The HKID assigned to this TD was already freed and cache
 		 * was already flushed. We don't have to flush again.
 		 */
+		spin_lock(&kvm_tdx->seamcall_lock);
 		err = tdx_reclaim_page((unsigned long)__va(hpa), hpa, level,
 				       false, 0);
-
-unlock:
-	spin_unlock(&kvm_tdx->seamcall_lock);
-
-	if (!err)
-		tdx_unpin_pfn(kvm, pfn);
+		spin_unlock(&kvm_tdx->seamcall_lock);
+		if (!err)
+			tdx_unpin(kvm, gfn, pfn);
+	}
 }
 
 static int tdx_sept_link_private_sp(struct kvm *kvm, gfn_t gfn,
-- 
2.25.1

From nobody Sat Apr 11 19:32:16 2026
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar
Subject: [RFC PATCH 08/13] KVM: MMU: Introduce level info in PFERR code
Date: Sun, 7 Aug 2022 15:18:41 -0700
From: Xiaoyao Li

For TDX, an EPT violation can occur on TDG.MEM.PAGE.ACCEPT, and TDG.MEM.PAGE.ACCEPT carries the page level at which the TD guest wants to accept the page.

1. KVM can map the page at 4KB while the TD guest wants to accept it at 2MB. The TD guest will get TDX_PAGE_SIZE_MISMATCH and should retry the accept at 4KB.

2. KVM can map the page at 2MB while the TD guest wants to accept it at 4KB. KVM needs to honor the guest's request because a) there is no way to tell the guest that KVM mapped it at 2MB, and b) the guest may accept at 4KB because it knows some other 4KB page in the same 2MB range will be used as a shared page.

For case 2, KVM needs to pass the desired page level to the MMU's page fault handler. Use bits 29:31 of the KVM PF error code for this purpose.
Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/kvm_host.h | 3 +++
 arch/x86/kvm/mmu/mmu.c          | 5 +++++
 arch/x86/kvm/vmx/common.h       | 6 +++++-
 arch/x86/kvm/vmx/tdx.c          | 15 ++++++++++++++-
 arch/x86/kvm/vmx/tdx.h          | 19 +++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c          | 2 +-
 6 files changed, 47 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2bdb1de9bce0..c01bde832de2 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -251,6 +251,8 @@ enum x86_intercept_stage;
 #define PFERR_FETCH_BIT 4
 #define PFERR_PK_BIT 5
 #define PFERR_SGX_BIT 15
+#define PFERR_LEVEL_START_BIT 29
+#define PFERR_LEVEL_END_BIT 31
 #define PFERR_GUEST_FINAL_BIT 32
 #define PFERR_GUEST_PAGE_BIT 33
 #define PFERR_IMPLICIT_ACCESS_BIT 48
@@ -262,6 +264,7 @@ enum x86_intercept_stage;
 #define PFERR_FETCH_MASK (1U << PFERR_FETCH_BIT)
 #define PFERR_PK_MASK (1U << PFERR_PK_BIT)
 #define PFERR_SGX_MASK (1U << PFERR_SGX_BIT)
+#define PFERR_LEVEL_MASK GENMASK_ULL(PFERR_LEVEL_END_BIT, PFERR_LEVEL_START_BIT)
 #define PFERR_GUEST_FINAL_MASK (1ULL << PFERR_GUEST_FINAL_BIT)
 #define PFERR_GUEST_PAGE_MASK (1ULL << PFERR_GUEST_PAGE_BIT)
 #define PFERR_IMPLICIT_ACCESS (1ULL << PFERR_IMPLICIT_ACCESS_BIT)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a03aa609a0da..ba21503fa46f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4451,6 +4451,11 @@ EXPORT_SYMBOL_GPL(kvm_handle_page_fault);
 
 int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
+	u8 err_level = (fault->error_code & PFERR_LEVEL_MASK) >> PFERR_LEVEL_START_BIT;
+
+	if (err_level)
+		fault->max_level = min(fault->max_level, err_level);
+
 	/*
 	 * If the guest's MTRRs may be used to compute the "real" memtype,
 	 * restrict the mapping level to ensure KVM uses a consistent memtype
diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index fd5ed3c0f894..f512eaa458a2 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -78,7 +78,8 @@ static inline void vmx_handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu,
 }
 
 static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
-					     unsigned long exit_qualification)
+					     unsigned long exit_qualification,
+					     int err_page_level)
 {
 	u64 error_code;
 
@@ -98,6 +99,9 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
 	error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) != 0 ?
 		      PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
 
+	if (err_page_level > 0)
+		error_code |= (err_page_level << PFERR_LEVEL_START_BIT) & PFERR_LEVEL_MASK;
+
 	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
 }
 
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index cdd421fb5024..81d88b1e63ac 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1765,7 +1765,20 @@ void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
 
 static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
 {
+	union tdx_ext_exit_qualification ext_exit_qual;
 	unsigned long exit_qual;
+	int err_page_level = 0;
+
+	ext_exit_qual.full = tdexit_ext_exit_qual(vcpu);
+
+	if (ext_exit_qual.type >= NUM_EXT_EXIT_QUAL) {
+		pr_err("EPT violation at gpa 0x%lx, with invalid ext exit qualification type 0x%x\n",
+		       tdexit_gpa(vcpu), ext_exit_qual.type);
+		kvm_vm_bugged(vcpu->kvm);
+		return 0;
+	} else if (ext_exit_qual.type == EXT_EXIT_QUAL_ACCEPT) {
+		err_page_level = ext_exit_qual.req_sept_level + 1;
+	}
 
 	if (kvm_is_private_gpa(vcpu->kvm, tdexit_gpa(vcpu))) {
 		/*
@@ -1792,7 +1805,7 @@ static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
 	}
 
 	trace_kvm_page_fault(tdexit_gpa(vcpu), exit_qual);
-	return __vmx_handle_ept_violation(vcpu, tdexit_gpa(vcpu), exit_qual);
+	return __vmx_handle_ept_violation(vcpu, tdexit_gpa(vcpu), exit_qual, err_page_level);
 }
 
 static int 
tdx_handle_ept_misconfig(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 8284cce0d385..3400563a2254 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -79,6 +79,25 @@ union tdx_exit_reason {
 	u64 full;
 };
 
+union tdx_ext_exit_qualification {
+	struct {
+		u64 type		: 4;
+		u64 reserved0		: 28;
+		u64 req_sept_level	: 3;
+		u64 err_sept_level	: 3;
+		u64 err_sept_state	: 8;
+		u64 err_sept_is_leaf	: 1;
+		u64 reserved1		: 17;
+	};
+	u64 full;
+};
+
+enum tdx_ext_exit_qualification_type {
+	EXT_EXIT_QUAL_NONE,
+	EXT_EXIT_QUAL_ACCEPT,
+	NUM_EXT_EXIT_QUAL,
+};
+
 struct vcpu_tdx {
 	struct kvm_vcpu vcpu;
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e5aa805f6db4..6ba3eded55a7 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5646,7 +5646,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 	if (unlikely(allow_smaller_maxphyaddr && kvm_vcpu_is_illegal_gpa(vcpu, gpa)))
 		return kvm_emulate_instruction(vcpu, 0);
 
-	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification);
+	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, 0);
 }
 
 static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
-- 
2.25.1

From nobody Sat Apr 11 19:32:16 2026
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar
Subject: [RFC PATCH 09/13] KVM: TDX: Pin pages via get_page() right before ADD/AUG'ed to TDs
Date: Sun, 7 Aug 2022 15:18:42 -0700
Message-Id: <8fa629b2c53c0578ed4025770190e6f960558b9c.1659854957.git.isaku.yamahata@intel.com>
From: Xiaoyao Li

At kvm_faultin_pfn() time, KVM doesn't yet know at which page level the gfn will be mapped, so it can't tell whether to pin a 4KB page or a 2MB page. Move the pinning of guest private pages to right before TDH_MEM_PAGE_ADD/AUG(), where the page level is known.

Signed-off-by: Xiaoyao Li
---
 arch/x86/kvm/vmx/tdx.c | 28 +++++++++++++++++++---------
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 81d88b1e63ac..2fdf3aa70c57 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1440,12 +1440,22 @@ static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa, int size)
 	}
 }
 
-static void tdx_unpin_pfn(struct kvm *kvm, kvm_pfn_t pfn)
+static void tdx_unpin(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
+		      enum pg_level level)
 {
-	struct page *page = pfn_to_page(pfn);
+	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
+	int i;
+
+	for (i = 0; i < KVM_PAGES_PER_HPAGE(level); i++) {
+		struct page *page = pfn_to_page(pfn + i);
 
-	put_page(page);
-	WARN_ON(!page_count(page) && to_kvm_tdx(kvm)->hkid > 0);
+		put_page(page);
+		WARN_ON(!page_count(page) && to_kvm_tdx(kvm)->hkid > 0);
+	}
+	if (kvm_slot_can_be_private(slot)) {
+		/* Private slot case */
+		return;
+	}
 }
 
 static void __tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
@@ -1473,7 +1483,7 @@ static void __tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 		err = tdh_mem_page_aug(kvm_tdx->tdr.pa, gpa, tdx_level, hpa, &out);
 		if (KVM_BUG_ON(err, kvm)) {
 			pr_tdx_error(TDH_MEM_PAGE_AUG, err, &out);
-			tdx_unpin_pfn(kvm, pfn);
+			tdx_unpin(kvm, gfn, pfn, level);
 		}
 		return;
 	}
@@ -1492,7 +1502,7 @@ static void __tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	 * always uses vcpu 0's page table and protected by vcpu->mutex).
 	 */
 	if (KVM_BUG_ON(kvm_tdx->source_pa == INVALID_PAGE, kvm)) {
-		tdx_unpin_pfn(kvm, pfn);
+		tdx_unpin(kvm, gfn, pfn, level);
 		return;
 	}
 
@@ -1501,7 +1511,7 @@ static void __tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	err = tdh_mem_page_add(kvm_tdx->tdr.pa, gpa, tdx_level, hpa, source_pa, &out);
 	if (KVM_BUG_ON(err, kvm)) {
 		pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out);
-		tdx_unpin_pfn(kvm, pfn);
+		tdx_unpin(kvm, gfn, pfn, level);
 	} else if ((kvm_tdx->source_pa & KVM_TDX_MEASURE_MEMORY_REGION))
 		tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level));
 
@@ -1547,7 +1557,7 @@ static void tdx_sept_drop_private_spte(
 			if (WARN_ON_ONCE(err))
 				pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
 			else
-				tdx_unpin(kvm, gfn + i, pfn + i);
+				tdx_unpin(kvm, gfn + i, pfn + i, PG_LEVEL_4K);
 			hpa += PAGE_SIZE;
 		}
 	} else {
@@ -1560,7 +1570,7 @@ static void tdx_sept_drop_private_spte(
 				       false, 0);
 		spin_unlock(&kvm_tdx->seamcall_lock);
 		if (!err)
-			tdx_unpin(kvm, gfn, pfn);
+			tdx_unpin(kvm, gfn, pfn, level);
 	}
 }
 
-- 
2.25.1

From nobody Sat Apr 11 19:32:16 2026
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar
Subject: [RFC PATCH 10/13] KVM: MMU: Pass desired page level in err code for page fault handler
Date: Sun, 7 Aug 2022 15:18:43 -0700
Message-Id: <6bddb15cc5913c27330faead819d68b69c84d0a0.1659854957.git.isaku.yamahata@intel.com>
From: Xiaoyao Li

For TDX, an EPT violation can occur on TDG.MEM.PAGE.ACCEPT, and TDG.MEM.PAGE.ACCEPT carries the page level at which the TD guest wants to accept the page.

1. KVM can map the page at 4KB while the TD guest wants to accept it at 2MB.
The TD guest will get TDX_PAGE_SIZE_MISMATCH and should retry the accept at 4KB.

2. KVM can map the page at 2MB while the TD guest wants to accept it at 4KB. KVM needs to honor the guest's request because a) there is no way to tell the guest that KVM mapped it at 2MB, and b) the guest may accept at 4KB because it knows some other 4KB page in the same 2MB range will be used as a shared page.

For case 2, KVM needs to pass the desired page level to the MMU's page fault handler. Use bits 29:31 of the KVM PF error code for this purpose.

Signed-off-by: Xiaoyao Li
---
 arch/x86/include/asm/kvm_host.h | 2 ++
 arch/x86/kvm/vmx/common.h       | 2 +-
 arch/x86/kvm/vmx/tdx.c          | 9 +++++++--
 arch/x86/kvm/vmx/tdx.h          | 19 -------------------
 arch/x86/kvm/vmx/tdx_arch.h     | 19 +++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c          | 2 +-
 6 files changed, 30 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c01bde832de2..a6bfcabcbbd7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -273,6 +273,8 @@ enum x86_intercept_stage;
 				 PFERR_WRITE_MASK |		\
 				 PFERR_PRESENT_MASK)
 
+#define PFERR_LEVEL(err_code) (((err_code) & PFERR_LEVEL_MASK) >> PFERR_LEVEL_START_BIT)
+
 /* apic attention bits */
 #define KVM_APIC_CHECK_VAPIC 0
 /*
diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index f512eaa458a2..0835ea975250 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -99,7 +99,7 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
 	error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) != 0 ?
 		      PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
 
-	if (err_page_level > 0)
+	if (err_page_level > PG_LEVEL_NONE)
 		error_code |= (err_page_level << PFERR_LEVEL_START_BIT) & PFERR_LEVEL_MASK;
 
 	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 2fdf3aa70c57..e4e193b1a758 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1803,7 +1803,7 @@ static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
 #define TDX_SEPT_VIOLATION_EXIT_QUAL	EPT_VIOLATION_ACC_WRITE
 		exit_qual = TDX_SEPT_VIOLATION_EXIT_QUAL;
 	} else {
-		exit_qual = tdexit_exit_qual(vcpu);;
+		exit_qual = tdexit_exit_qual(vcpu);
 		if (exit_qual & EPT_VIOLATION_ACC_INSTR) {
 			pr_warn("kvm: TDX instr fetch to shared GPA = 0x%lx @ RIP = 0x%lx\n",
 				tdexit_gpa(vcpu), kvm_rip_read(vcpu));
@@ -2303,6 +2303,7 @@ static int tdx_init_mem_region(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
 	struct kvm_tdx_init_mem_region region;
 	struct kvm_vcpu *vcpu;
 	struct page *page;
+	u64 error_code;
 	kvm_pfn_t pfn;
 	int idx, ret = 0;
 
@@ -2356,7 +2357,11 @@ static int tdx_init_mem_region(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
 		kvm_tdx->source_pa = pfn_to_hpa(page_to_pfn(page)) |
 				     (cmd->flags & KVM_TDX_MEASURE_MEMORY_REGION);
 
-		pfn = kvm_mmu_map_tdp_page(vcpu, region.gpa, TDX_SEPT_PFERR,
+		/* TODO: large page support.
*/
+		error_code = TDX_SEPT_PFERR;
+		error_code |= (PG_LEVEL_4K << PFERR_LEVEL_START_BIT) &
+			      PFERR_LEVEL_MASK;
+		pfn = kvm_mmu_map_tdp_page(vcpu, region.gpa, error_code,
 					   PG_LEVEL_4K);
 		if (is_error_noslot_pfn(pfn) || kvm->vm_bugged)
 			ret = -EFAULT;
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 3400563a2254..8284cce0d385 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -79,25 +79,6 @@ union tdx_exit_reason {
 	u64 full;
 };
 
-union tdx_ext_exit_qualification {
-	struct {
-		u64 type		: 4;
-		u64 reserved0		: 28;
-		u64 req_sept_level	: 3;
-		u64 err_sept_level	: 3;
-		u64 err_sept_state	: 8;
-		u64 err_sept_is_leaf	: 1;
-		u64 reserved1		: 17;
-	};
-	u64 full;
-};
-
-enum tdx_ext_exit_qualification_type {
-	EXT_EXIT_QUAL_NONE,
-	EXT_EXIT_QUAL_ACCEPT,
-	NUM_EXT_EXIT_QUAL,
-};
-
 struct vcpu_tdx {
 	struct kvm_vcpu vcpu;
 
diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
index 94258056d742..fbf334bc18c9 100644
--- a/arch/x86/kvm/vmx/tdx_arch.h
+++ b/arch/x86/kvm/vmx/tdx_arch.h
@@ -154,4 +154,23 @@ struct td_params {
 #define TDX_MIN_TSC_FREQUENCY_KHZ (100 * 1000)
 #define TDX_MAX_TSC_FREQUENCY_KHZ (10 * 1000 * 1000)
 
+union tdx_ext_exit_qualification {
+	struct {
+		u64 type		: 4;
+		u64 reserved0		: 28;
+		u64 req_sept_level	: 3;
+		u64 err_sept_level	: 3;
+		u64 err_sept_state	: 8;
+		u64 err_sept_is_leaf	: 1;
+		u64 reserved1		: 17;
+	};
+	u64 full;
+};
+
+enum tdx_ext_exit_qualification_type {
+	EXT_EXIT_QUAL_NONE = 0,
+	EXT_EXIT_QUAL_ACCEPT,
+	NUM_EXT_EXIT_QUAL,
+};
+
 #endif /* __KVM_X86_TDX_ARCH_H */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 6ba3eded55a7..bb493ce80fa9 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5646,7 +5646,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 	if (unlikely(allow_smaller_maxphyaddr && kvm_vcpu_is_illegal_gpa(vcpu, gpa)))
 		return kvm_emulate_instruction(vcpu, 0);
 
-	return __vmx_handle_ept_violation(vcpu, gpa, 
exit_qualification, 0);
+	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, PG_LEVEL_NONE);
 }
 
 static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
-- 
2.25.1

From nobody Sat Apr 11 19:32:16 2026
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar
Subject: [RFC PATCH 11/13] KVM: TDP_MMU: Split the large page when zapping a leaf
Date: Sun, 7 Aug 2022 15:18:44 -0700

From: Xiaoyao Li

When TDX is enabled, a large page cannot be zapped if it contains mixed
private and shared pages. In this case, the large page has to be split
first.

Signed-off-by: Xiaoyao Li
---
 arch/x86/kvm/mmu/tdp_mmu.c | 28 ++++++++++++++++++++++++++--
 1 file changed, 26 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index faf278e0c740..e5d31242677a 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1033,6 +1033,14 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 	return true;
 }
 
+static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
+						       struct tdp_iter *iter,
+						       bool shared);
+
+static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
+				   struct kvm_mmu_page *sp, bool shared);
+
 /*
  * If can_yield is true, will release the MMU lock and reschedule if the
  * scheduler needs the CPU or there is contention on the MMU lock.
 * If this
@@ -1075,6 +1083,24 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 		    !is_last_spte(iter.old_spte, iter.level))
 			continue;
 
+		if (kvm_gfn_shared_mask(kvm) && is_large_pte(iter.old_spte)) {
+			gfn_t gfn = iter.gfn & ~kvm_gfn_shared_mask(kvm);
+			gfn_t mask = KVM_PAGES_PER_HPAGE(iter.level) - 1;
+			struct kvm_memory_slot *slot;
+			struct kvm_mmu_page *sp;
+
+			slot = gfn_to_memslot(kvm, gfn);
+			if (kvm_mem_attr_is_mixed(slot, gfn, iter.level) ||
+			    (gfn & mask) < start ||
+			    end < (gfn & mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
+				sp = tdp_mmu_alloc_sp_for_split(kvm, &iter, false);
+				WARN_ON(!sp);
+
+				tdp_mmu_split_huge_page(kvm, &iter, sp, false);
+				continue;
+			}
+		}
+
 		tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
 		flush = true;
 	}
@@ -1642,8 +1668,6 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 
 	WARN_ON(kvm_mmu_page_role_is_private(role) !=
 		is_private_sptep(iter->sptep));
-	/* TODO: Large page isn't supported for private SPTE yet. */
-	WARN_ON(kvm_mmu_page_role_is_private(role));
 
 	/*
 	 * Since we are allocating while under the MMU lock we have to be
-- 
2.25.1
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar
Subject: [RFC PATCH 12/13] KVM: TDX: Split a large page when a 4KB page within it is converted to shared
Date: Sun, 7 Aug 2022 15:18:45 -0700
Message-Id: <5831f5fad935edd3c3e0f198c0c3668f33eb65b3.1659854957.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

When mapping a shared page for TDX, its private alias needs to be
zapped first. If the private page is mapped as a large (2MB) page, it
can be removed directly only when the whole 2MB range is converted to
shared. Otherwise, the 2MB page has to be split into 512 4KB pages, and
only the pages converted to shared are removed.

When a present large leaf SPTE switches to a present non-leaf SPTE, TDX
needs to split the corresponding SEPT page to reflect it.
Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx.c      | 36 +++++++++++++++++++++++++++---------
 arch/x86/kvm/vmx/tdx_arch.h |  1 +
 arch/x86/kvm/vmx/tdx_ops.h  |  7 +++++++
 3 files changed, 35 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index e4e193b1a758..a340caeb9c62 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1595,6 +1595,28 @@ static int tdx_sept_link_private_sp(struct kvm *kvm, gfn_t gfn,
 	return 0;
 }
 
+static int tdx_sept_split_private_spte(struct kvm *kvm, gfn_t gfn,
+				       enum pg_level level, void *sept_page)
+{
+	int tdx_level = pg_level_to_tdx_sept_level(level);
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	gpa_t gpa = gfn << PAGE_SHIFT;
+	hpa_t hpa = __pa(sept_page);
+	struct tdx_module_output out;
+	u64 err;
+
+	/* See comment in tdx_sept_set_private_spte() */
+	spin_lock(&kvm_tdx->seamcall_lock);
+	err = tdh_mem_page_demote(kvm_tdx->tdr.pa, gpa, tdx_level, hpa, &out);
+	spin_unlock(&kvm_tdx->seamcall_lock);
+	if (KVM_BUG_ON(err, kvm)) {
+		pr_tdx_error(TDH_MEM_PAGE_DEMOTE, err, &out);
+		return -EIO;
+	}
+
+	return 0;
+}
+
 static void tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
 				      enum pg_level level)
 {
@@ -1604,8 +1626,6 @@ static void tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
 	struct tdx_module_output out;
 	u64 err;
 
-	/* For now large page isn't supported yet. */
-	WARN_ON_ONCE(level != PG_LEVEL_4K);
 	spin_lock(&kvm_tdx->seamcall_lock);
 	err = tdh_mem_range_block(kvm_tdx->tdr.pa, gpa, tdx_level, &out);
 	spin_unlock(&kvm_tdx->seamcall_lock);
@@ -1717,13 +1737,11 @@ static void tdx_handle_changed_private_spte(
 	lockdep_assert_held(&kvm->mmu_lock);
 
 	if (change->new.is_present) {
-		/* TDP MMU doesn't change present -> present */
-		WARN_ON(change->old.is_present);
-		/*
-		 * Use different call to either set up middle level
-		 * private page table, or leaf.
-		 */
-		if (is_leaf)
+		if (level > PG_LEVEL_4K && was_leaf && !is_leaf) {
+			tdx_sept_zap_private_spte(kvm, gfn, level);
+			tdx_sept_tlb_remote_flush(kvm);
+			tdx_sept_split_private_spte(kvm, gfn, level, change->sept_page);
+		} else if (is_leaf)
 			tdx_sept_set_private_spte(
 				kvm, gfn, level, change->new.pfn);
 		else {
diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
index fbf334bc18c9..5970416e95b2 100644
--- a/arch/x86/kvm/vmx/tdx_arch.h
+++ b/arch/x86/kvm/vmx/tdx_arch.h
@@ -21,6 +21,7 @@
 #define TDH_MNG_CREATE			9
 #define TDH_VP_CREATE			10
 #define TDH_MNG_RD			11
+#define TDH_MEM_PAGE_DEMOTE		15
 #define TDH_MR_EXTEND			16
 #define TDH_MR_FINALIZE			17
 #define TDH_VP_FLUSH			18
diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index da662aa46cd9..3b7373272d61 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -127,6 +127,13 @@ static inline u64 tdh_mng_rd(hpa_t tdr, u64 field, struct tdx_module_output *out)
 	return __seamcall(TDH_MNG_RD, tdr, field, 0, 0, out);
 }
 
+static inline u64 tdh_mem_page_demote(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
+				      struct tdx_module_output *out)
+{
+	return seamcall_sept_retry(TDH_MEM_PAGE_DEMOTE, gpa | level, tdr, page,
+				   0, out);
+}
+
 static inline u64 tdh_mr_extend(hpa_t tdr, gpa_t gpa,
 				struct tdx_module_output *out)
 {
-- 
2.25.1
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar
Subject: [RFC PATCH 13/13] KVM: x86: remove struct kvm_arch.tdp_max_page_level
Date: Sun, 7 Aug 2022 15:18:46 -0700
Message-Id: <1469a0a4aabcaf51f67ed4b4e25155267e07bfd1.1659854957.git.isaku.yamahata@intel.com>

From:
Xiaoyao Li

Now that everything is in place to support large pages for TD guests,
remove tdp_max_page_level from struct kvm_arch, which limited the page
size.

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/kvm_host.h | 1 -
 arch/x86/kvm/mmu/mmu.c          | 1 -
 arch/x86/kvm/mmu/mmu_internal.h | 2 +-
 arch/x86/kvm/vmx/tdx.c          | 3 ---
 4 files changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a6bfcabcbbd7..80f2bc3fbf0c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1190,7 +1190,6 @@ struct kvm_arch {
 	unsigned long n_requested_mmu_pages;
 	unsigned long n_max_mmu_pages;
 	unsigned int indirect_shadow_pages;
-	int tdp_max_page_level;
 	u8 mmu_valid_gen;
 	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
 	struct list_head active_mmu_pages;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ba21503fa46f..0cbd52c476d7 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6232,7 +6232,6 @@ int kvm_mmu_init_vm(struct kvm *kvm)
 	kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
 	kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
 
-	kvm->arch.tdp_max_page_level = KVM_MAX_HUGEPAGE_LEVEL;
 	return 0;
 }
 
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index e5d5fea29bfa..82b220c4d1bd 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -395,7 +395,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 			is_nx_huge_page_enabled(vcpu->kvm),
 		.is_private = kvm_is_private_gpa(vcpu->kvm, cr2_or_gpa),
 
-		.max_level = vcpu->kvm->arch.tdp_max_page_level,
+		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
 		.req_level = PG_LEVEL_4K,
 		.goal_level = PG_LEVEL_4K,
 	};
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index a340caeb9c62..72f21f5f78af 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -460,9 +460,6 @@ int tdx_vm_init(struct kvm *kvm)
 	 */
 	kvm_mmu_set_mmio_spte_mask(kvm, 0, VMX_EPT_RWX_MASK);
 
-	/* TODO: Enable 2mb and 1gb large page support. */
-	kvm->arch.tdp_max_page_level = PG_LEVEL_4K;
-
 	/* vCPUs can't be created until after KVM_TDX_INIT_VM. */
 	kvm->max_vcpus = 0;
 
-- 
2.25.1