From: Yang Weijiang
To: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, like.xu.linux@gmail.com, vkuznets@redhat.com, wei.w.wang@intel.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Sean Christopherson, Yang Weijiang
Subject: [PATCH v9 03/17] KVM: x86: Load guest fpu state when accessing MSRs managed by XSAVES
Date: Tue, 15 Feb 2022 16:25:30 -0500
Message-Id: <20220215212544.51666-4-weijiang.yang@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220215212544.51666-1-weijiang.yang@intel.com>
References: <20220215212544.51666-1-weijiang.yang@intel.com>

From: Sean Christopherson

If new feature MSRs are supported in XSS and passed through to the guest,
they are saved and restored by XSAVES/XRSTORS, i.e. in the guest's FPU
state.

Load the guest's FPU state if userspace is accessing MSRs whose values
are managed by XSAVES, so that the MSR helpers, e.g.
kvm_{get,set}_xsave_msr(), can simply do {RD,WR}MSR to access the
guest's value.

Because __msr_io() is also used for the KVM_GET_MSRS device ioctl(),
explicitly check that @vcpu is non-null before attempting to load guest
state. The XSS-supporting MSRs cannot be retrieved via the device
ioctl() without loading guest FPU state (which doesn't exist).

MSRs that are switched through XSAVES are especially annoying due to the
possibility of the kernel's FPU being used in IRQ context.
Disable IRQs and ensure the guest's FPU state is loaded when accessing
such MSRs.

Note that guest_cpuid_has() is not queried, as host userspace is allowed
to access MSRs that have not been exposed to the guest, e.g. it might do
KVM_SET_MSRS prior to KVM_SET_CPUID2.

Signed-off-by: Sean Christopherson
Co-developed-by: Yang Weijiang
Signed-off-by: Yang Weijiang
---
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/x86.c              | 44 ++++++++++++++++++++++++++++++++-
 2 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9d59dfc27e5a..0c3a6feb41eb 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1731,6 +1731,9 @@ int kvm_emulate_xsetbv(struct kvm_vcpu *vcpu);
 int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr);
 int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr);
 
+void kvm_get_xsave_msr(struct msr_data *msr_info);
+void kvm_set_xsave_msr(struct msr_data *msr_info);
+
 unsigned long kvm_get_rflags(struct kvm_vcpu *vcpu);
 void kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
 int kvm_emulate_rdpmc(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8397e5bc4ed5..8891329d594c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -125,6 +125,9 @@ static int kvm_vcpu_do_singlestep(struct kvm_vcpu *vcpu);
 static int __set_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
 static void __get_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
 
+static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
+static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
+
 struct kvm_x86_ops kvm_x86_ops __read_mostly;
 EXPORT_SYMBOL_GPL(kvm_x86_ops);
 
@@ -4072,6 +4075,36 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 }
 EXPORT_SYMBOL_GPL(kvm_get_msr_common);
 
+void __maybe_unused kvm_get_xsave_msr(struct msr_data *msr_info)
+{
+	local_irq_disable();
+	if (test_thread_flag(TIF_NEED_FPU_LOAD))
+		switch_fpu_return();
+	rdmsrl(msr_info->index, msr_info->data);
+	local_irq_enable();
+}
+EXPORT_SYMBOL_GPL(kvm_get_xsave_msr);
+
+void __maybe_unused kvm_set_xsave_msr(struct msr_data *msr_info)
+{
+	local_irq_disable();
+	if (test_thread_flag(TIF_NEED_FPU_LOAD))
+		switch_fpu_return();
+	wrmsrl(msr_info->index, msr_info->data);
+	local_irq_enable();
+}
+EXPORT_SYMBOL_GPL(kvm_set_xsave_msr);
+
+/*
+ * If new features pass through XSS-managed MSRs to the guest, add
+ * separate checks here so that feature-dependent guest MSRs are
+ * loaded before they are accessed.
+ */
+static bool is_xsaves_msr(u32 index)
+{
+	return false;
+}
+
 /*
  * Read or write a bunch of msrs. All parameters are kernel addresses.
  *
@@ -4082,11 +4115,20 @@ static int __msr_io(struct kvm_vcpu *vcpu, struct kvm_msrs *msrs,
 		    int (*do_msr)(struct kvm_vcpu *vcpu,
 				  unsigned index, u64 *data))
 {
+	bool fpu_loaded = false;
 	int i;
 
-	for (i = 0; i < msrs->nmsrs; ++i)
+	for (i = 0; i < msrs->nmsrs; ++i) {
+		if (vcpu && !fpu_loaded && supported_xss &&
+		    is_xsaves_msr(entries[i].index)) {
+			kvm_load_guest_fpu(vcpu);
+			fpu_loaded = true;
+		}
 		if (do_msr(vcpu, entries[i].index, &entries[i].data))
 			break;
+	}
+	if (fpu_loaded)
+		kvm_put_guest_fpu(vcpu);
 
 	return i;
 }
-- 
2.27.0
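
The helpers above are deliberately added without callers (hence the
__maybe_unused annotations); later patches in the series are expected to
route feature MSR accesses through them. A minimal sketch of such a
caller follows, assuming a hypothetical XSS-managed MSR MSR_XSS_EXAMPLE
and feature bit X86_FEATURE_EXAMPLE (both illustrative names, not from
this series), and assuming is_xsaves_msr() has been taught to return
true for that index so __msr_io() loads guest FPU state first:

	/*
	 * Illustrative sketch only, not part of this patch:
	 * MSR_XSS_EXAMPLE stands in for a feature MSR saved/restored by
	 * XSAVES. By the time this runs for a vCPU ioctl, __msr_io()
	 * has loaded the guest's FPU state, so the helper's RDMSR reads
	 * the guest's value, not the host's.
	 */
	static int example_get_xss_msr(struct kvm_vcpu *vcpu,
				       struct msr_data *msr_info)
	{
		if (msr_info->index != MSR_XSS_EXAMPLE)
			return 1;	/* unhandled; caller injects #GP */

		/*
		 * Host userspace may legally access MSRs not yet exposed
		 * via CPUID (e.g. KVM_SET_MSRS before KVM_SET_CPUID2),
		 * hence the !host_initiated qualifier on the CPUID check.
		 */
		if (!msr_info->host_initiated &&
		    !guest_cpuid_has(vcpu, X86_FEATURE_EXAMPLE))
			return 1;

		kvm_get_xsave_msr(msr_info);
		return 0;
	}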