From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    Jim Mattson, erdemaktas@google.com, Connor Kuehl,
    Sean Christopherson
Subject: [RFC PATCH v5 100/104] KVM: TDX: Silently discard SMI request
Date: Fri, 4 Mar 2022 11:49:56 -0800

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX doesn't support system-management mode (SMM) or the system-management
interrupt (SMI) in guest TDs.  Because guest state (vCPU state and memory)
is protected, any change to it, such as injecting an SMI or switching a
vCPU into SMM, would have to go through the TDX module APIs.  The TDX
module provides no way for the VMM to inject an SMI into a guest TD, nor
to switch a guest vCPU into SMM.

When KVM or the device model (e.g. QEMU) requests SMM or an SMI for a
guest TD, there are two options: 1) silently ignore the request or
2) return a meaningful error.  For simplicity, implement option 1).
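To see why clearing smi_pending amounts to silently dropping the request,
below is a simplified, illustrative sketch of how KVM's generic event
injection path (arch/x86/kvm/x86.c) drives these hooks.  The function name
and exact control flow are approximations for illustration, not the
verbatim kernel code:

/*
 * Illustrative sketch only; approximates the pending-SMI handling in
 * KVM's event injection path (arch/x86/kvm/x86.c).
 */
static void sketch_handle_pending_smi(struct kvm_vcpu *vcpu)
{
        if (!vcpu->arch.smi_pending)
                return;

        if (static_call(kvm_x86_smi_allowed)(vcpu, true)) {
                /* VMX case: emulate SMM entry now. */
                vcpu->arch.smi_pending = false;
                enter_smm(vcpu);
        } else {
                /*
                 * Injection is blocked; ask vendor code to open an SMI
                 * window.  tdx_smi_allowed() always returns false, and
                 * tdx_enable_smi_window() clears smi_pending instead of
                 * requesting a window, so for a TD the SMI is dropped
                 * here and never retried.
                 */
                static_call(kvm_x86_enable_smi_window)(vcpu);
        }
}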
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/main.c    | 43 ++++++++++++++++++++++++++++++++++----
 arch/x86/kvm/vmx/tdx.c     | 25 ++++++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h |  8 +++++++
 3 files changed, 72 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index a528cfdbce54..478aa63acefa 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -193,6 +193,41 @@ static int vt_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
         return vmx_get_msr(vcpu, msr_info);
 }
 
+static int vt_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+{
+        if (is_td_vcpu(vcpu))
+                return tdx_smi_allowed(vcpu, for_injection);
+
+        return vmx_smi_allowed(vcpu, for_injection);
+}
+
+static int vt_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
+{
+        if (unlikely(is_td_vcpu(vcpu)))
+                return tdx_enter_smm(vcpu, smstate);
+
+        return vmx_enter_smm(vcpu, smstate);
+}
+
+static int vt_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
+{
+        if (unlikely(is_td_vcpu(vcpu)))
+                return tdx_leave_smm(vcpu, smstate);
+
+        return vmx_leave_smm(vcpu, smstate);
+}
+
+static void vt_enable_smi_window(struct kvm_vcpu *vcpu)
+{
+        if (is_td_vcpu(vcpu)) {
+                tdx_enable_smi_window(vcpu);
+                return;
+        }
+
+        /* RSM will cause a vmexit anyway. */
+        vmx_enable_smi_window(vcpu);
+}
+
 static void vt_apicv_post_state_restore(struct kvm_vcpu *vcpu)
 {
         if (is_td_vcpu(vcpu))
@@ -539,10 +574,10 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 
         .setup_mce = vmx_setup_mce,
 
-        .smi_allowed = vmx_smi_allowed,
-        .enter_smm = vmx_enter_smm,
-        .leave_smm = vmx_leave_smm,
-        .enable_smi_window = vmx_enable_smi_window,
+        .smi_allowed = vt_smi_allowed,
+        .enter_smm = vt_enter_smm,
+        .leave_smm = vt_leave_smm,
+        .enable_smi_window = vt_enable_smi_window,
 
         .can_emulate_instruction = vmx_can_emulate_instruction,
         .apic_init_signal_blocked = vmx_apic_init_signal_blocked,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index e5eccec0e24d..7bbf6271967b 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1692,6 +1692,31 @@ int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
         return 1;
 }
 
+int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+{
+        /* SMI isn't supported for TDX. */
+        return false;
+}
+
+int tdx_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
+{
+        /* smi_allowed() is always false for TDX as above. */
+        WARN_ON_ONCE(1);
+        return 0;
+}
+
+int tdx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
+{
+        WARN_ON_ONCE(1);
+        return 0;
+}
+
+void tdx_enable_smi_window(struct kvm_vcpu *vcpu)
+{
+        /* SMI isn't supported for TDX. Silently discard SMI request. */
+        vcpu->arch.smi_pending = false;
+}
+
 static int tdx_capabilities(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
 {
         struct kvm_tdx_capabilities __user *user_caps;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index dcaa5806802e..19d793609cc4 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -159,6 +159,10 @@ void tdx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
 bool tdx_is_emulated_msr(u32 index, bool write);
 int tdx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr);
 int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr);
+int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection);
+int tdx_enter_smm(struct kvm_vcpu *vcpu, char *smstate);
+int tdx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate);
+void tdx_enable_smi_window(struct kvm_vcpu *vcpu);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -199,6 +203,10 @@ static inline void tdx_get_exit_info(
 static inline bool tdx_is_emulated_msr(u32 index, bool write) { return false; }
 static inline int tdx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) { return 1; }
 static inline int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) { return 1; }
+static inline int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection) { return false; }
+static inline int tdx_enter_smm(struct kvm_vcpu *vcpu, char *smstate) { return 0; }
+static inline int tdx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate) { return 0; }
+static inline void tdx_enable_smi_window(struct kvm_vcpu *vcpu) {}
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
-- 
2.25.1