From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, chen.bo@intel.com, hang.yuan@intel.com,
    tina.zhang@intel.com, Rick Edgecombe
Subject: [PATCH v15 029/115] KVM: x86/mmu: Add address conversion functions for TDX shared bit of GPA
Date: Tue, 25 Jul 2023 15:13:40 -0700
Message-Id: <8f3242ea8a9f825f56ce31e6015799866d3dfe58.1690322424.git.isaku.yamahata@intel.com>
X-Mailer: git-send-email 2.25.1

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX repurposes one GPA bit (bit 51 or bit 47, depending on configuration) to
indicate whether a GPA is private (if cleared) or shared (if set) with the
VMM.  If the GPA.shared bit is set, the GPA is covered by the existing
conventional EPT pointed to by the EPTP.  If the GPA.shared bit is cleared,
the GPA is covered by the TDX module, and the VMM has to issue SEAMCALLs to
operate on it.

Add a member to remember the GPA shared bit for each guest TD, add address
conversion functions between private GPA and shared GPA, and add a helper to
test whether a GPA is private.  Because struct kvm_arch (or rather struct
kvm, which contains struct kvm_arch; see kvm_arch_alloc_vm(), which passes
__GFP_ZERO) is zero-cleared when allocated, the new member that remembers
the GPA shared bit is guaranteed to be zero with this patch unless it is
initialized explicitly.

Co-developed-by: Rick Edgecombe
Signed-off-by: Rick Edgecombe
Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/kvm_host.h |  4 ++++
 arch/x86/kvm/mmu.h              | 27 +++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx.c          |  5 +++++
 3 files changed, 36 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b265e4507a1e..a39d88d2f6fc 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1447,6 +1447,10 @@ struct kvm_arch {
 	 */
 #define SPLIT_DESC_CACHE_MIN_NR_OBJECTS (SPTE_ENT_PER_PAGE + 1)
 	struct kvm_mmu_memory_cache split_desc_cache;
+
+#ifdef CONFIG_KVM_MMU_PRIVATE
+	gfn_t gfn_shared_mask;
+#endif
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 963c734642f6..919fa5109e8c 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -300,4 +300,31 @@ static inline gpa_t kvm_translate_gpa(struct kvm_vcpu *vcpu,
 		return gpa;
 	return translate_nested_gpa(vcpu, gpa, access, exception);
 }
+
+static inline gfn_t kvm_gfn_shared_mask(const struct kvm *kvm)
+{
+#ifdef CONFIG_KVM_MMU_PRIVATE
+	return kvm->arch.gfn_shared_mask;
+#else
+	return 0;
+#endif
+}
+
+static inline gfn_t kvm_gfn_to_shared(const struct kvm *kvm, gfn_t gfn)
+{
+	return gfn | kvm_gfn_shared_mask(kvm);
+}
+
+static inline gfn_t kvm_gfn_to_private(const struct kvm *kvm, gfn_t gfn)
+{
+	return gfn & ~kvm_gfn_shared_mask(kvm);
+}
+
+static inline bool kvm_is_private_gpa(const struct kvm *kvm, gpa_t gpa)
+{
+	gfn_t mask = kvm_gfn_shared_mask(kvm);
+
+	return mask && !(gpa_to_gfn(gpa) & mask);
+}
+
 #endif
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 488fefad1833..a10caf87e4fb 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -863,6 +863,11 @@ static int tdx_td_init(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
 	kvm_tdx->attributes = td_params->attributes;
 	kvm_tdx->xfam = td_params->xfam;
 
+	if (td_params->exec_controls & TDX_EXEC_CONTROL_MAX_GPAW)
+		kvm->arch.gfn_shared_mask = gpa_to_gfn(BIT_ULL(51));
+	else
+		kvm->arch.gfn_shared_mask = gpa_to_gfn(BIT_ULL(47));
+
 out:
 	/* kfree() accepts NULL. */
 	kfree(init_vm);
-- 
2.25.1
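
P.S. (not part of the patch): below is a minimal, self-contained user-space
C sketch that models the helpers added above, for anyone who wants to
sanity-check the bit manipulation without building the kernel.  gfn_t/gpa_t
are modeled as plain uint64_t, gpa_to_gfn()/gfn_to_gpa() as shifts by a
4 KiB PAGE_SHIFT, and the mask is set the way tdx_td_init() sets
kvm->arch.gfn_shared_mask; the main() driver and the simplified helper
names are illustrative only, not kernel API.

/*
 * Standalone model of the GPA shared-bit helpers from this patch.
 * NOT kernel code: gfn_t/gpa_t are plain uint64_t here and
 * gpa_to_gfn()/gfn_to_gpa() are shifts by a 4 KiB PAGE_SHIFT.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

typedef uint64_t gfn_t;
typedef uint64_t gpa_t;

static gfn_t gpa_to_gfn(gpa_t gpa) { return gpa >> PAGE_SHIFT; }
static gpa_t gfn_to_gpa(gfn_t gfn) { return gfn << PAGE_SHIFT; }

/*
 * Stands in for kvm->arch.gfn_shared_mask; tdx_td_init() sets it to
 * gpa_to_gfn(BIT_ULL(51)) or gpa_to_gfn(BIT_ULL(47)).
 */
static gfn_t gfn_shared_mask;

static gfn_t gfn_to_shared(gfn_t gfn)  { return gfn | gfn_shared_mask; }
static gfn_t gfn_to_private(gfn_t gfn) { return gfn & ~gfn_shared_mask; }

static bool is_private_gpa(gpa_t gpa)
{
	/* Same logic as kvm_is_private_gpa(): mask configured and shared bit clear. */
	return gfn_shared_mask && !(gpa_to_gfn(gpa) & gfn_shared_mask);
}

int main(void)
{
	gfn_t gfn, shared;

	/* TDX_EXEC_CONTROL_MAX_GPAW set: the shared bit is GPA bit 51. */
	gfn_shared_mask = gpa_to_gfn(1ULL << 51);

	gfn = 0x1234;			/* arbitrary private GFN */
	shared = gfn_to_shared(gfn);

	printf("shared GFN       : %#llx\n", (unsigned long long)shared);
	printf("private again    : %#llx\n", (unsigned long long)gfn_to_private(shared));
	printf("private GPA?     : %d\n", is_private_gpa(gfn_to_gpa(gfn)));
	printf("shared GPA priv? : %d\n", is_private_gpa(gfn_to_gpa(shared)));
	return 0;
}

Built with a plain "cc -Wall", this should show the private/shared round
trip (0x1234 gains GFN bit 39, i.e. GPA bit 51, and gets it stripped again)
and that the private-GPA test reports true only when the mask is configured
and the shared bit is clear.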