From nobody Mon Apr 6 15:50:55 2026
From: Yan Zhao <yan.y.zhao@intel.com>
To: seanjc@google.com, pbonzini@redhat.com, dave.hansen@linux.intel.com
Cc: tglx@kernel.org, mingo@redhat.com, bp@alien8.de, kas@kernel.org,
	x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	linux-coco@lists.linux.dev, kai.huang@intel.com,
	rick.p.edgecombe@intel.com, yan.y.zhao@intel.com,
	yilun.xu@linux.intel.com, vannapurve@google.com,
	ackerleytng@google.com, sagis@google.com, binbin.wu@linux.intel.com,
	xiaoyao.li@intel.com, isaku.yamahata@intel.com
Subject: [PATCH 1/2] x86/virt/tdx: Use PFN directly for mapping guest private memory
Date: Thu, 19 Mar 2026 08:57:03 +0800
Message-ID: <20260319005703.8983-1-yan.y.zhao@intel.com>
In-Reply-To: <20260319005605.8965-1-yan.y.zhao@intel.com>
References: <20260319005605.8965-1-yan.y.zhao@intel.com>
From: Sean Christopherson <seanjc@google.com>

Remove the completely unnecessary assumption that memory mapped into a
TDX guest is backed by refcounted struct page memory.  From KVM's point
of view, TDH_MEM_PAGE_ADD and TDH_MEM_PAGE_AUG are glorified writes to
PTEs, so they have no business placing requirements on how KVM and
guest_memfd manage memory.

Rip out the misguided struct page assumptions/constraints and instead
have the two SEAMCALL wrapper APIs take a PFN directly.  This ensures
that for future huge page support in the S-EPT, the kernel doesn't pick
up even worse assumptions like "a hugepage must be contained in a
single folio".

Use "kvm_pfn_t pfn" for type safety.  Using this KVM type is
appropriate since tdh_mem_page_add() and tdh_mem_page_aug() are
exported only to KVM.

[ Yan: Replace "u64 pfn" with "kvm_pfn_t pfn" ]

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
---
 arch/x86/include/asm/tdx.h  |  5 +++--
 arch/x86/kvm/vmx/tdx.c      |  7 +++----
 arch/x86/virt/vmx/tdx/tdx.c | 20 +++++++++++++-------
 3 files changed, 19 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index a149740b24e8..f3f0b1872176 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include <linux/kvm_types.h>
 
 #include
 #include
@@ -195,10 +196,10 @@ static inline int pg_level_to_tdx_sept_level(enum pg_level level)
 
 u64 tdh_vp_enter(struct tdx_vp *vp, struct tdx_module_args *args);
 u64 tdh_mng_addcx(struct tdx_td *td, struct page *tdcs_page);
-u64 tdh_mem_page_add(struct tdx_td *td, u64 gpa, struct page *page, struct page *source, u64 *ext_err1, u64 *ext_err2);
+u64 tdh_mem_page_add(struct tdx_td *td, u64 gpa, kvm_pfn_t pfn, struct page *source, u64 *ext_err1, u64 *ext_err2);
 u64 tdh_mem_sept_add(struct tdx_td *td, u64 gpa, int level, struct page *page, u64 *ext_err1, u64 *ext_err2);
 u64 tdh_vp_addcx(struct tdx_vp *vp, struct page *tdcx_page);
-u64 tdh_mem_page_aug(struct tdx_td *td, u64 gpa, int level, struct page *page, u64 *ext_err1, u64 *ext_err2);
+u64 tdh_mem_page_aug(struct tdx_td *td, u64 gpa, int level, kvm_pfn_t pfn, u64 *ext_err1, u64 *ext_err2);
 u64 tdh_mem_range_block(struct tdx_td *td, u64 gpa, int level, u64 *ext_err1, u64 *ext_err2);
 u64 tdh_mng_key_config(struct tdx_td *td);
 u64 tdh_mng_create(struct tdx_td *td, u16 hkid);
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 1e47c194af53..1f1abc5b5655 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1624,8 +1624,8 @@ static int tdx_mem_page_add(struct kvm *kvm, gfn_t gfn, enum pg_level level,
 	    KVM_BUG_ON(!kvm_tdx->page_add_src, kvm))
 		return -EIO;
 
-	err = tdh_mem_page_add(&kvm_tdx->td, gpa, pfn_to_page(pfn),
-			       kvm_tdx->page_add_src, &entry, &level_state);
+	err = tdh_mem_page_add(&kvm_tdx->td, gpa, pfn, kvm_tdx->page_add_src,
+			       &entry, &level_state);
 	if (unlikely(tdx_operand_busy(err)))
 		return -EBUSY;
 
@@ -1640,12 +1640,11 @@ static int tdx_mem_page_aug(struct kvm *kvm, gfn_t gfn,
 {
 	int tdx_level = pg_level_to_tdx_sept_level(level);
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
-	struct page *page = pfn_to_page(pfn);
 	gpa_t gpa = gfn_to_gpa(gfn);
 	u64 entry, level_state;
 	u64 err;
 
-	err = tdh_mem_page_aug(&kvm_tdx->td, gpa, tdx_level, page, &entry, &level_state);
+	err = tdh_mem_page_aug(&kvm_tdx->td, gpa, tdx_level, pfn, &entry, &level_state);
 	if (unlikely(tdx_operand_busy(err)))
 		return -EBUSY;
 
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index cb9b3210ab71..a9dd75190c67 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -30,7 +30,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -1568,6 +1567,11 @@ static void tdx_clflush_page(struct page *page)
 	clflush_cache_range(page_to_virt(page), PAGE_SIZE);
 }
 
+static void tdx_clflush_pfn(kvm_pfn_t pfn)
+{
+	clflush_cache_range(__va(PFN_PHYS(pfn)), PAGE_SIZE);
+}
+
 noinstr u64 tdh_vp_enter(struct tdx_vp *td, struct tdx_module_args *args)
 {
 	args->rcx = td->tdvpr_pa;
@@ -1588,17 +1592,18 @@ u64 tdh_mng_addcx(struct tdx_td *td, struct page *tdcs_page)
 }
 EXPORT_SYMBOL_FOR_KVM(tdh_mng_addcx);
 
-u64 tdh_mem_page_add(struct tdx_td *td, u64 gpa, struct page *page, struct page *source, u64 *ext_err1, u64 *ext_err2)
+u64 tdh_mem_page_add(struct tdx_td *td, u64 gpa, kvm_pfn_t pfn, struct page *source,
+		     u64 *ext_err1, u64 *ext_err2)
 {
 	struct tdx_module_args args = {
 		.rcx = gpa,
 		.rdx = tdx_tdr_pa(td),
-		.r8 = page_to_phys(page),
+		.r8 = PFN_PHYS(pfn),
 		.r9 = page_to_phys(source),
 	};
 	u64 ret;
 
-	tdx_clflush_page(page);
+	tdx_clflush_pfn(pfn);
 	ret = seamcall_ret(TDH_MEM_PAGE_ADD, &args);
 
 	*ext_err1 = args.rcx;
@@ -1639,16 +1644,17 @@ u64 tdh_vp_addcx(struct tdx_vp *vp, struct page *tdcx_page)
 }
 EXPORT_SYMBOL_FOR_KVM(tdh_vp_addcx);
 
-u64 tdh_mem_page_aug(struct tdx_td *td, u64 gpa, int level, struct page *page, u64 *ext_err1, u64 *ext_err2)
+u64 tdh_mem_page_aug(struct tdx_td *td, u64 gpa, int level, kvm_pfn_t pfn,
+		     u64 *ext_err1, u64 *ext_err2)
 {
 	struct tdx_module_args args = {
 		.rcx = gpa | level,
 		.rdx = tdx_tdr_pa(td),
-		.r8 = page_to_phys(page),
+		.r8 = PFN_PHYS(pfn),
 	};
 	u64 ret;
 
-	tdx_clflush_page(page);
+	tdx_clflush_pfn(pfn);
 	ret = seamcall_ret(TDH_MEM_PAGE_AUG, &args);
 
 	*ext_err1 = args.rcx;
-- 
2.43.2
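As an aside on why dropping the struct page round trip in the patch above is safe: a minimal userspace sketch (struct page, mem_map, and PAGE_SHIFT here are mocked stand-ins, not the kernel's definitions) shows that page_to_phys(pfn_to_page(pfn)) reduces to a plain shift, i.e. PFN_PHYS(pfn), so the struct page detour adds nothing when only the physical address is needed:

```c
#include <stdint.h>

#define PAGE_SHIFT 12                                /* mocked, x86 4K pages */
#define PFN_PHYS(pfn) ((uint64_t)(pfn) << PAGE_SHIFT)

struct page { char pad; };                           /* mocked struct page   */
static struct page mem_map[16];                      /* mocked FLATMEM map   */

/* In a flat memory model, pfn_to_page() is just an array index... */
static struct page *pfn_to_page(uint64_t pfn)
{
	return &mem_map[pfn];
}

/* ...and page_to_phys() inverts it, then shifts: the round trip is a no-op. */
static uint64_t page_to_phys(struct page *p)
{
	return (uint64_t)(p - mem_map) << PAGE_SHIFT;
}
```

Under this mock, page_to_phys(pfn_to_page(pfn)) == PFN_PHYS(pfn) for every in-range pfn, which is exactly the identity the patch exploits.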
From nobody Mon Apr 6 15:50:55 2026
From: Yan Zhao <yan.y.zhao@intel.com>
To: seanjc@google.com, pbonzini@redhat.com, dave.hansen@linux.intel.com
Cc: tglx@kernel.org, mingo@redhat.com, bp@alien8.de, kas@kernel.org,
	x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	linux-coco@lists.linux.dev, kai.huang@intel.com,
	rick.p.edgecombe@intel.com, yan.y.zhao@intel.com,
	yilun.xu@linux.intel.com, vannapurve@google.com,
	ackerleytng@google.com, sagis@google.com, binbin.wu@linux.intel.com,
	xiaoyao.li@intel.com, isaku.yamahata@intel.com
Subject: [PATCH 2/2] x86/virt/tdx: Use PFN directly for unmapping guest private memory
Date: Thu, 19 Mar 2026 08:58:08 +0800
Message-ID: <20260319005808.9013-1-yan.y.zhao@intel.com>
In-Reply-To: <20260319005605.8965-1-yan.y.zhao@intel.com>
References: <20260319005605.8965-1-yan.y.zhao@intel.com>

From: Sean Christopherson <seanjc@google.com>

Remove the completely unnecessary assumption that memory unmapped from
a TDX guest is backed by refcounted struct page memory.

The APIs tdh_phymem_page_wbinvd_hkid() and tdx_quirk_reset_page() are
used when unmapping guest private memory from the S-EPT.  Since mapping
guest private memory places no requirements on how KVM and guest_memfd
manage memory, neither does unmapping it.
Rip out the misguided struct page assumptions/constraints by having the
two APIs take a PFN directly.  This ensures that for future huge page
support in the S-EPT, the kernel doesn't pick up even worse assumptions
like "a hugepage must be contained in a single folio".

Use "kvm_pfn_t pfn" for type safety.  Using this KVM type is
appropriate since tdh_phymem_page_wbinvd_hkid() and
tdx_quirk_reset_page() are exported only to KVM.

Update mk_keyed_paddr(), which is invoked by
tdh_phymem_page_wbinvd_hkid(), to take a PFN parameter accordingly.
Opportunistically move mk_keyed_paddr() from tdx.h to tdx.c, since it
has no external users.

Have tdx_reclaim_page() keep taking a struct page parameter, since it
is not yet used for removing guest private memory.

[Yan: Use kvm_pfn_t, drop reclaim API param update, move mk_keyed_paddr()]

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
---
 arch/x86/include/asm/tdx.h  | 15 ++-------------
 arch/x86/kvm/vmx/tdx.c      | 10 +++++-----
 arch/x86/virt/vmx/tdx/tdx.c | 16 +++++++++++-----
 3 files changed, 18 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index f3f0b1872176..6ceb4cd9ff21 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -153,7 +153,7 @@ int tdx_guest_keyid_alloc(void);
 u32 tdx_get_nr_guest_keyids(void);
 void tdx_guest_keyid_free(unsigned int keyid);
 
-void tdx_quirk_reset_page(struct page *page);
+void tdx_quirk_reset_page(kvm_pfn_t pfn);
 
 struct tdx_td {
 	/* TD root structure: */
@@ -177,17 +177,6 @@ struct tdx_vp {
 	struct page **tdcx_pages;
 };
 
-static inline u64 mk_keyed_paddr(u16 hkid, struct page *page)
-{
-	u64 ret;
-
-	ret = page_to_phys(page);
-	/* KeyID bits are just above the physical address bits: */
-	ret |= (u64)hkid << boot_cpu_data.x86_phys_bits;
-
-	return ret;
-}
-
 static inline int pg_level_to_tdx_sept_level(enum pg_level level)
 {
 	WARN_ON_ONCE(level == PG_LEVEL_NONE);
@@ -219,7 +208,7 @@ u64 tdh_mem_track(struct tdx_td *tdr);
 u64 tdh_mem_page_remove(struct tdx_td *td, u64 gpa, u64 level, u64 *ext_err1, u64 *ext_err2);
 u64 tdh_phymem_cache_wb(bool resume);
 u64 tdh_phymem_page_wbinvd_tdr(struct tdx_td *td);
-u64 tdh_phymem_page_wbinvd_hkid(u64 hkid, struct page *page);
+u64 tdh_phymem_page_wbinvd_hkid(u64 hkid, kvm_pfn_t pfn);
 #else
 static inline void tdx_init(void) { }
 static inline u32 tdx_get_nr_guest_keyids(void) { return 0; }
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 1f1abc5b5655..75ad3debcd84 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -343,7 +343,7 @@ static int tdx_reclaim_page(struct page *page)
 
 	r = __tdx_reclaim_page(page);
 	if (!r)
-		tdx_quirk_reset_page(page);
+		tdx_quirk_reset_page(page_to_pfn(page));
 	return r;
 }
 
@@ -597,7 +597,7 @@ static void tdx_reclaim_td_control_pages(struct kvm *kvm)
 	if (TDX_BUG_ON(err, TDH_PHYMEM_PAGE_WBINVD, kvm))
 		return;
 
-	tdx_quirk_reset_page(kvm_tdx->td.tdr_page);
+	tdx_quirk_reset_page(page_to_pfn(kvm_tdx->td.tdr_page));
 
 	__free_page(kvm_tdx->td.tdr_page);
 	kvm_tdx->td.tdr_page = NULL;
@@ -1776,9 +1776,9 @@ static int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn,
 static void tdx_sept_remove_private_spte(struct kvm *kvm, gfn_t gfn,
 					 enum pg_level level, u64 mirror_spte)
 {
-	struct page *page = pfn_to_page(spte_to_pfn(mirror_spte));
 	int tdx_level = pg_level_to_tdx_sept_level(level);
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	kvm_pfn_t pfn = spte_to_pfn(mirror_spte);
 	gpa_t gpa = gfn_to_gpa(gfn);
 	u64 err, entry, level_state;
 
@@ -1817,11 +1817,11 @@ static void tdx_sept_remove_private_spte(struct kvm *kvm, gfn_t gfn,
 	if (TDX_BUG_ON_2(err, TDH_MEM_PAGE_REMOVE, entry, level_state, kvm))
 		return;
 
-	err = tdh_phymem_page_wbinvd_hkid((u16)kvm_tdx->hkid, page);
+	err = tdh_phymem_page_wbinvd_hkid((u16)kvm_tdx->hkid, pfn);
 	if (TDX_BUG_ON(err, TDH_PHYMEM_PAGE_WBINVD, kvm))
 		return;
 
-	tdx_quirk_reset_page(page);
+	tdx_quirk_reset_page(pfn);
 }
 
 void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index a9dd75190c67..2f9d07ad1a9a 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -730,9 +730,9 @@ static void tdx_quirk_reset_paddr(unsigned long base, unsigned long size)
 	mb();
 }
 
-void tdx_quirk_reset_page(struct page *page)
+void tdx_quirk_reset_page(kvm_pfn_t pfn)
 {
-	tdx_quirk_reset_paddr(page_to_phys(page), PAGE_SIZE);
+	tdx_quirk_reset_paddr(PFN_PHYS(pfn), PAGE_SIZE);
 }
 EXPORT_SYMBOL_FOR_KVM(tdx_quirk_reset_page);
 
@@ -1907,21 +1907,27 @@ u64 tdh_phymem_cache_wb(bool resume)
 }
 EXPORT_SYMBOL_FOR_KVM(tdh_phymem_cache_wb);
 
+static inline u64 mk_keyed_paddr(u16 hkid, kvm_pfn_t pfn)
+{
+	/* KeyID bits are just above the physical address bits. */
+	return PFN_PHYS(pfn) | ((u64)hkid << boot_cpu_data.x86_phys_bits);
+}
+
 u64 tdh_phymem_page_wbinvd_tdr(struct tdx_td *td)
 {
 	struct tdx_module_args args = {};
 
-	args.rcx = mk_keyed_paddr(tdx_global_keyid, td->tdr_page);
+	args.rcx = mk_keyed_paddr(tdx_global_keyid, page_to_pfn(td->tdr_page));
 
 	return seamcall(TDH_PHYMEM_PAGE_WBINVD, &args);
 }
 EXPORT_SYMBOL_FOR_KVM(tdh_phymem_page_wbinvd_tdr);
 
-u64 tdh_phymem_page_wbinvd_hkid(u64 hkid, struct page *page)
+u64 tdh_phymem_page_wbinvd_hkid(u64 hkid, kvm_pfn_t pfn)
 {
 	struct tdx_module_args args = {};
 
-	args.rcx = mk_keyed_paddr(hkid, page);
+	args.rcx = mk_keyed_paddr(hkid, pfn);
 
 	return seamcall(TDH_PHYMEM_PAGE_WBINVD, &args);
 }
-- 
2.43.2
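For illustration, the keyed-physical-address construction that patch 2 moves into tdx.c can be sketched in userspace. This is a hedged mock: PAGE_SHIFT, kvm_pfn_t, and the x86_phys_bits value are stand-ins for the kernel's definitions (the kernel reads the physical-address width from boot_cpu_data at runtime), and 46 below is just an example width, not a guaranteed hardware value:

```c
#include <stdint.h>

#define PAGE_SHIFT 12                                /* mocked, x86 4K pages */
#define PFN_PHYS(pfn) ((uint64_t)(pfn) << PAGE_SHIFT)

typedef uint64_t kvm_pfn_t;                          /* mocked KVM type */

/*
 * Mirrors the patched mk_keyed_paddr(): the TDX KeyID occupies the bits
 * immediately above the platform's physical address bits, so the keyed
 * address is simply (pfn << PAGE_SHIFT) | (hkid << phys_bits).
 */
static uint64_t mk_keyed_paddr(uint16_t hkid, kvm_pfn_t pfn,
			       unsigned int x86_phys_bits)
{
	return PFN_PHYS(pfn) | ((uint64_t)hkid << x86_phys_bits);
}
```

With hkid == 0 the result degenerates to the plain physical address, which is why tdh_phymem_page_wbinvd_tdr() and tdh_phymem_page_wbinvd_hkid() can share the helper.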