From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, chen.bo@intel.com, hang.yuan@intel.com,
    tina.zhang@intel.com
Subject: [PATCH v15 042/115] KVM: x86/mmu: Add a new is_private member for
 union kvm_mmu_page_role
Date: Tue, 25 Jul 2023 15:13:53 -0700

From: Isaku Yamahata <isaku.yamahata@intel.com>

Because TDX support introduces private mappings, add a new is_private
member to union kvm_mmu_page_role, along with accessor functions to
check and set it.
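
For illustration, a minimal sketch of how a consumer of the new helpers
might look. This is hypothetical (example_child_role() is a made-up name
for this sketch); the real users of these accessors arrive in later
patches of the series:

  static union kvm_mmu_page_role
  example_child_role(struct kvm_mmu_page *parent_sp)
  {
  	union kvm_mmu_page_role role = parent_sp->role;

  	/*
  	 * Propagate privateness from the parent page.  With
  	 * CONFIG_KVM_MMU_PRIVATE=n, kvm_mmu_page_role_set_private()
  	 * WARNs, so only call it when the bit can legitimately be set.
  	 */
  	if (is_private_sp(parent_sp))
  		kvm_mmu_page_role_set_private(&role);

  	return role;
  }

The same query is also available from a SPTE pointer via
is_private_sptep(), which resolves the owning struct kvm_mmu_page with
sptep_to_sp().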
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 27 +++++++++++++++++++++++++++
 arch/x86/kvm/mmu/mmu_internal.h |  5 +++++
 arch/x86/kvm/mmu/spte.h         |  6 ++++++
 3 files changed, 38 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0bc53c942c6c..56f9297b1bb8 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -340,7 +340,12 @@ union kvm_mmu_page_role {
 		unsigned ad_disabled:1;
 		unsigned guest_mode:1;
 		unsigned passthrough:1;
+#ifdef CONFIG_KVM_MMU_PRIVATE
+		unsigned is_private:1;
+		unsigned :4;
+#else
 		unsigned :5;
+#endif

 		/*
 		 * This is left at the top of the word so that
@@ -352,6 +357,28 @@ union kvm_mmu_page_role {
 	};
 };

+#ifdef CONFIG_KVM_MMU_PRIVATE
+static inline bool kvm_mmu_page_role_is_private(union kvm_mmu_page_role role)
+{
+	return !!role.is_private;
+}
+
+static inline void kvm_mmu_page_role_set_private(union kvm_mmu_page_role *role)
+{
+	role->is_private = 1;
+}
+#else
+static inline bool kvm_mmu_page_role_is_private(union kvm_mmu_page_role role)
+{
+	return false;
+}
+
+static inline void kvm_mmu_page_role_set_private(union kvm_mmu_page_role *role)
+{
+	WARN_ON_ONCE(1);
+}
+#endif
+
 /*
  * kvm_mmu_extended_role complements kvm_mmu_page_role, tracking properties
  * relevant to the current MMU configuration.  When loading CR0, CR4, or EFER,
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 76fa38da74f1..2409c4dca208 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -143,6 +143,11 @@ static inline int kvm_mmu_page_as_id(struct kvm_mmu_page *sp)
 	return kvm_mmu_role_as_id(sp->role);
 }

+static inline bool is_private_sp(const struct kvm_mmu_page *sp)
+{
+	return kvm_mmu_page_role_is_private(sp->role);
+}
+
 static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
 {
 	/*
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index a8418fd8ae9e..41973fe6bc22 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -251,6 +251,12 @@ static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
 	return to_shadow_page(__pa(sptep));
 }

+static inline bool is_private_sptep(u64 *sptep)
+{
+	WARN_ON_ONCE(!sptep);
+	return is_private_sp(sptep_to_sp(sptep));
+}
+
 static inline bool is_mmio_spte(struct kvm *kvm, u64 spte)
 {
 	return (spte & shadow_mmio_mask) == kvm->arch.shadow_mmio_value &&
-- 
2.25.1