From: Yang Weijiang <weijiang.yang@intel.com>
To: seanjc@google.com, pbonzini@redhat.com, dave.hansen@intel.com,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, chao.gao@intel.com, rick.p.edgecombe@intel.com,
	mlevitsk@redhat.com, john.allen@amd.com, weijiang.yang@intel.com
Subject: [PATCH v7 15/26] KVM: x86: Load guest FPU state when accessing XSAVE-managed MSRs
Date: Fri, 24 Nov 2023 00:53:19 -0500
Message-Id: <20231124055330.138870-16-weijiang.yang@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20231124055330.138870-1-weijiang.yang@intel.com>
References: <20231124055330.138870-1-weijiang.yang@intel.com>

From: Sean Christopherson <seanjc@google.com>

Load the guest's FPU state if userspace is accessing MSRs whose values
are managed by XSAVES. Introduce two helpers, kvm_{get,set}_xstate_msr(),
to facilitate access to such MSRs.

If MSRs supported in kvm_caps.supported_xss are passed through to the
guest, the guest MSRs are swapped with the host's before the vCPU exits
to userspace and restored after it reenters the kernel, before the next
VM-entry.

Because the modified code is also used for the KVM_GET_MSRS device
ioctl(), explicitly check that @vcpu is non-null before attempting to
load guest state.
The XSAVE-managed MSRs cannot be retrieved via the device ioctl() without
loading guest FPU state (which doesn't exist).

Note that guest_cpuid_has() is not queried, as host userspace is allowed
to access MSRs that have not been exposed to the guest, e.g. it might do
KVM_SET_MSRS prior to KVM_SET_CPUID2.

The two helpers are placed in x86.h to make it explicit that accessing
XSAVE-managed MSRs requires special checks and handling to guarantee that
reads and writes of the MSRs are correct.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Co-developed-by: Yang Weijiang <weijiang.yang@intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/x86.c | 35 ++++++++++++++++++++++++++++++++++-
 arch/x86/kvm/x86.h | 24 ++++++++++++++++++++++++
 2 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 607082aca80d..fd48b825510c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -133,6 +133,9 @@ static int __set_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
 static void __get_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
 
 static DEFINE_MUTEX(vendor_module_lock);
+static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
+static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
+
 struct kvm_x86_ops kvm_x86_ops __read_mostly;
 
 #define KVM_X86_OP(func)	\
@@ -4482,6 +4485,21 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 }
 EXPORT_SYMBOL_GPL(kvm_get_msr_common);
 
+/*
+ * Returns true if the MSR in question is managed via XSTATE, i.e. is context
+ * switched with the rest of guest FPU state.
+ */
+static bool is_xstate_managed_msr(u32 index)
+{
+	switch (index) {
+	case MSR_IA32_U_CET:
+	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
+		return true;
+	default:
+		return false;
+	}
+}
+
 /*
  * Read or write a bunch of msrs. All parameters are kernel addresses.
  *
@@ -4492,11 +4510,26 @@ static int __msr_io(struct kvm_vcpu *vcpu, struct kvm_msrs *msrs,
 		    int (*do_msr)(struct kvm_vcpu *vcpu,
 				  unsigned index, u64 *data))
 {
+	bool fpu_loaded = false;
 	int i;
 
-	for (i = 0; i < msrs->nmsrs; ++i)
+	for (i = 0; i < msrs->nmsrs; ++i) {
+		/*
+		 * If userspace is accessing one or more XSTATE-managed MSRs,
+		 * temporarily load the guest's FPU state so that the guest's
+		 * MSR value(s) is resident in hardware, i.e. so that KVM can
+		 * get/set the MSR via RDMSR/WRMSR.
+		 */
+		if (vcpu && !fpu_loaded && kvm_caps.supported_xss &&
+		    is_xstate_managed_msr(entries[i].index)) {
+			kvm_load_guest_fpu(vcpu);
+			fpu_loaded = true;
+		}
 		if (do_msr(vcpu, entries[i].index, &entries[i].data))
 			break;
+	}
+	if (fpu_loaded)
+		kvm_put_guest_fpu(vcpu);
 
 	return i;
 }
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 5184fde1dc54..6e42ede335f5 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -541,4 +541,28 @@ int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
 			 unsigned int port, void *data, unsigned int count,
 			 int in);
 
+/*
+ * Lock and/or reload guest FPU and access xstate MSRs. For accesses initiated
+ * by host, guest FPU is loaded in __msr_io(). For accesses initiated by guest,
+ * guest FPU should have been loaded already.
+ */
+
+static inline void kvm_get_xstate_msr(struct kvm_vcpu *vcpu,
+				       struct msr_data *msr_info)
+{
+	KVM_BUG_ON(!vcpu->arch.guest_fpu.fpstate->in_use, vcpu->kvm);
+	kvm_fpu_get();
+	rdmsrl(msr_info->index, msr_info->data);
+	kvm_fpu_put();
+}
+
+static inline void kvm_set_xstate_msr(struct kvm_vcpu *vcpu,
+				       struct msr_data *msr_info)
+{
+	KVM_BUG_ON(!vcpu->arch.guest_fpu.fpstate->in_use, vcpu->kvm);
+	kvm_fpu_get();
+	wrmsrl(msr_info->index, msr_info->data);
+	kvm_fpu_put();
+}
+
 #endif
-- 
2.27.0
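
For context, a minimal userspace sketch (not part of the patch) of the path
this change affects: reading an XSTATE-managed MSR through KVM_GET_MSRS on a
vCPU fd, which is what now triggers the temporary guest FPU load in
__msr_io(). The vcpu_fd setup is assumed (a prior KVM_CREATE_VCPU), and the
MSR index below is taken from the SDM value for IA32_PL0_SSP; treat both as
illustrative assumptions.

/* Illustrative sketch; assumes vcpu_fd came from KVM_CREATE_VCPU. */
#include <linux/kvm.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>

#define MSR_IA32_PL0_SSP	0x6a4	/* per the SDM; assumption for illustration */

static unsigned long long read_pl0_ssp(int vcpu_fd)
{
	/* struct kvm_msrs ends in a flexible array, so wrap header + one entry. */
	struct {
		struct kvm_msrs hdr;
		struct kvm_msr_entry entry;
	} req;

	memset(&req, 0, sizeof(req));
	req.hdr.nmsrs = 1;
	req.entry.index = MSR_IA32_PL0_SSP;

	/*
	 * KVM walks the requested MSRs in __msr_io(); for an XSTATE-managed
	 * index it temporarily loads the guest FPU, reads the MSR, then puts
	 * the guest FPU back before returning to userspace.  KVM_GET_MSRS
	 * returns the number of MSRs successfully read.
	 */
	if (ioctl(vcpu_fd, KVM_GET_MSRS, &req) != 1) {
		fprintf(stderr, "KVM_GET_MSRS failed for IA32_PL0_SSP\n");
		exit(1);
	}

	return req.entry.data;
}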