From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Borislav Petkov, Dave Hansen, linux-kernel@vger.kernel.org, Wanpeng Li, Maxim Levitsky, Ingo Molnar, Sean Christopherson, x86@kernel.org, Jim Mattson, Kees Cook, Thomas Gleixner, "H. Peter Anvin", Joerg Roedel, Vitaly Kuznetsov, Paolo Bonzini
Subject: [PATCH v3 01/13] bug: introduce ASSERT_STRUCT_OFFSET
Date: Wed, 3 Aug 2022 18:49:59 +0300
Message-Id: <20220803155011.43721-2-mlevitsk@redhat.com>
In-Reply-To: <20220803155011.43721-1-mlevitsk@redhat.com>

ASSERT_STRUCT_OFFSET allows asserting, at build time, that a field in a
struct has an expected offset. KVM used to have such a macro, but there
is almost nothing KVM-specific in it, so move it to build_bug.h so that
it can be used in other places in KVM.
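
For illustration, a minimal sketch of how the relocated macro is meant
to be used by other code (the struct and offsets below are
hypothetical, not part of this patch):

	#include <linux/build_bug.h>
	#include <linux/types.h>

	/* A hypothetical ABI-stable structure whose layout must not drift. */
	struct example_abi {
		u32 version;	/* expected at offset 0 */
		u32 flags;	/* expected at offset 4 */
		u64 payload;	/* expected at offset 8 */
	} __packed;

	static inline void example_check_offsets(void)
	{
		/* Each assertion fails the build if its field ever moves. */
		ASSERT_STRUCT_OFFSET(struct example_abi, version, 0);
		ASSERT_STRUCT_OFFSET(struct example_abi, flags, 4);
		ASSERT_STRUCT_OFFSET(struct example_abi, payload, 8);
	}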
Signed-off-by: Maxim Levitsky
Tested-by: Thomas Lamprecht
---
 arch/x86/kvm/vmx/vmcs12.h | 5 ++---
 include/linux/build_bug.h | 9 +++++++++
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmcs12.h b/arch/x86/kvm/vmx/vmcs12.h
index 746129ddd5ae02..01936013428b5c 100644
--- a/arch/x86/kvm/vmx/vmcs12.h
+++ b/arch/x86/kvm/vmx/vmcs12.h
@@ -208,9 +208,8 @@ struct __packed vmcs12 {
 /*
  * For save/restore compatibility, the vmcs12 field offsets must not change.
  */
-#define CHECK_OFFSET(field, loc) \
-	BUILD_BUG_ON_MSG(offsetof(struct vmcs12, field) != (loc), \
-			 "Offset of " #field " in struct vmcs12 has changed.")
+#define CHECK_OFFSET(field, loc) \
+	ASSERT_STRUCT_OFFSET(struct vmcs12, field, loc)
 
 static inline void vmx_check_vmcs12_offsets(void)
 {
diff --git a/include/linux/build_bug.h b/include/linux/build_bug.h
index e3a0be2c90ad98..3aa3640f8c181f 100644
--- a/include/linux/build_bug.h
+++ b/include/linux/build_bug.h
@@ -77,4 +77,13 @@
 #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
 #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
 
+
+/*
+ * Compile time check that field has an expected offset
+ */
+#define ASSERT_STRUCT_OFFSET(type, field, expected_offset) \
+	BUILD_BUG_ON_MSG(offsetof(type, field) != (expected_offset), \
+		"Offset of " #field " in " #type " has changed.")
+
+
 #endif /* _LINUX_BUILD_BUG_H */
-- 
2.26.3
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Borislav Petkov, Dave Hansen, linux-kernel@vger.kernel.org, Wanpeng Li, Maxim Levitsky, Ingo Molnar, Sean Christopherson, x86@kernel.org, Jim Mattson, Kees Cook, Thomas Gleixner, "H. Peter Anvin", Joerg Roedel, Vitaly Kuznetsov, Paolo Bonzini
Subject: [PATCH v3 02/13] KVM: x86: emulator: em_sysexit should update ctxt->mode
Date: Wed, 3 Aug 2022 18:50:00 +0300
Message-Id: <20220803155011.43721-3-mlevitsk@redhat.com>
In-Reply-To: <20220803155011.43721-1-mlevitsk@redhat.com>

SYSEXIT is one of the instructions that can change the processor mode,
so em_sysexit() must update ctxt->mode. Note that this is likely a
benign bug, because the only problematic mode change is from 32 bit to
64 bit, which can lead to truncation of RIP, and that cannot happen
with SYSEXIT: when executed in 32 bit mode, SYSEXIT is limited to its
32 bit variant.

Signed-off-by: Maxim Levitsky
Tested-by: Thomas Lamprecht
---
 arch/x86/kvm/emulate.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 047c583596bb86..7bdc495710bd0e 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2888,6 +2888,7 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt)
 	ops->set_segment(ctxt, ss_sel, &ss, 0, VCPU_SREG_SS);
 
 	ctxt->_eip = rdx;
+	ctxt->mode = usermode;
 	*reg_write(ctxt, VCPU_REGS_RSP) = rcx;
 
 	return X86EMUL_CONTINUE;
-- 
2.26.3
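
A minimal illustration of the failure mode the note above describes
(plain standalone C, not KVM code): if the emulator's mode field is
stale and still says 32 bit, the instruction pointer written back on
exit loses its upper 32 bits.

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t rip = 0x100001234ULL;	/* target above 4 GiB */
		uint32_t eip = (uint32_t)rip;	/* 32 bit writeback */

		printf("0x%llx truncated to 0x%x\n",
		       (unsigned long long)rip, eip);
		return 0;
	}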
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Borislav Petkov, Dave Hansen, linux-kernel@vger.kernel.org, Wanpeng Li, Maxim Levitsky, Ingo Molnar, Sean Christopherson, x86@kernel.org, Jim Mattson, Kees Cook, Thomas Gleixner, "H. Peter Anvin", Joerg Roedel, Vitaly Kuznetsov, Paolo Bonzini
Subject: [PATCH v3 03/13] KVM: x86: emulator: introduce emulator_recalc_and_set_mode
Date: Wed, 3 Aug 2022 18:50:01 +0300
Message-Id: <20220803155011.43721-4-mlevitsk@redhat.com>
In-Reply-To: <20220803155011.43721-1-mlevitsk@redhat.com>

Some instructions update the CPU execution mode, which then requires
updating the emulation mode. Extract this code into a new helper, and
make assign_eip_far() use it. assign_eip_far() now reads CS itself
instead of getting it via a parameter; this is fine because callers
always assign CS to the same value before calling it.

No functional change is intended.

Signed-off-by: Maxim Levitsky
Tested-by: Thomas Lamprecht
---
 arch/x86/kvm/emulate.c | 85 ++++++++++++++++++++++++++++--------------
 1 file changed, 57 insertions(+), 28 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 7bdc495710bd0e..bc70caf403c2b4 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -805,8 +805,7 @@ static int linearize(struct x86_emulate_ctxt *ctxt,
 			   ctxt->mode, linear);
 }
 
-static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst,
-			     enum x86emul_mode mode)
+static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst)
 {
 	ulong linear;
 	int rc;
@@ -816,41 +815,71 @@ static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst,
 
 	if (ctxt->op_bytes != sizeof(unsigned long))
 		addr.ea = dst & ((1UL << (ctxt->op_bytes << 3)) - 1);
-	rc = __linearize(ctxt, addr, &max_size, 1, false, true, mode, &linear);
+	rc = __linearize(ctxt, addr, &max_size, 1, false, true, ctxt->mode, &linear);
 	if (rc == X86EMUL_CONTINUE)
 		ctxt->_eip = addr.ea;
 	return rc;
 }
 
+static inline int emulator_recalc_and_set_mode(struct x86_emulate_ctxt *ctxt)
+{
+	u64 efer;
+	struct desc_struct cs;
+	u16 selector;
+	u32 base3;
+
+	ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
+
+	if (!(ctxt->ops->get_cr(ctxt, 0) & X86_CR0_PE)) {
+		/* Real mode. cpu must not have long mode active */
+		if (efer & EFER_LMA)
+			return X86EMUL_UNHANDLEABLE;
+		ctxt->mode = X86EMUL_MODE_REAL;
+		return X86EMUL_CONTINUE;
+	}
+
+	if (ctxt->eflags & X86_EFLAGS_VM) {
+		/* Protected/VM86 mode. cpu must not have long mode active */
+		if (efer & EFER_LMA)
+			return X86EMUL_UNHANDLEABLE;
+		ctxt->mode = X86EMUL_MODE_VM86;
+		return X86EMUL_CONTINUE;
+	}
+
+	if (!ctxt->ops->get_segment(ctxt, &selector, &cs, &base3, VCPU_SREG_CS))
+		return X86EMUL_UNHANDLEABLE;
+
+	if (efer & EFER_LMA) {
+		if (cs.l) {
+			/* Proper long mode */
+			ctxt->mode = X86EMUL_MODE_PROT64;
+		} else if (cs.d) {
+			/* 32 bit compatibility mode */
+			ctxt->mode = X86EMUL_MODE_PROT32;
+		} else {
+			ctxt->mode = X86EMUL_MODE_PROT16;
+		}
+	} else {
+		/* Legacy 32 bit / 16 bit mode */
+		ctxt->mode = cs.d ? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16;
+	}
+
+	return X86EMUL_CONTINUE;
+}
+
 static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst)
 {
-	return assign_eip(ctxt, dst, ctxt->mode);
+	return assign_eip(ctxt, dst);
 }
 
-static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst,
-			  const struct desc_struct *cs_desc)
+static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst)
 {
-	enum x86emul_mode mode = ctxt->mode;
-	int rc;
+	int rc = emulator_recalc_and_set_mode(ctxt);
 
-#ifdef CONFIG_X86_64
-	if (ctxt->mode >= X86EMUL_MODE_PROT16) {
-		if (cs_desc->l) {
-			u64 efer = 0;
+	if (rc != X86EMUL_CONTINUE)
+		return rc;
 
-			ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
-			if (efer & EFER_LMA)
-				mode = X86EMUL_MODE_PROT64;
-		} else
-			mode = X86EMUL_MODE_PROT32; /* temporary value */
-	}
-#endif
-	if (mode == X86EMUL_MODE_PROT16 || mode == X86EMUL_MODE_PROT32)
-		mode = cs_desc->d ? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16;
-	rc = assign_eip(ctxt, dst, mode);
-	if (rc == X86EMUL_CONTINUE)
-		ctxt->mode = mode;
-	return rc;
+	return assign_eip(ctxt, dst);
 }
 
 static inline int jmp_rel(struct x86_emulate_ctxt *ctxt, int rel)
@@ -2184,7 +2213,7 @@ static int em_jmp_far(struct x86_emulate_ctxt *ctxt)
 	if (rc != X86EMUL_CONTINUE)
 		return rc;
 
-	rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
+	rc = assign_eip_far(ctxt, ctxt->src.val);
 	/* Error handling is not implemented. */
 	if (rc != X86EMUL_CONTINUE)
 		return X86EMUL_UNHANDLEABLE;
@@ -2262,7 +2291,7 @@ static int em_ret_far(struct x86_emulate_ctxt *ctxt)
 				       &new_desc);
 	if (rc != X86EMUL_CONTINUE)
 		return rc;
-	rc = assign_eip_far(ctxt, eip, &new_desc);
+	rc = assign_eip_far(ctxt, eip);
 	/* Error handling is not implemented. */
 	if (rc != X86EMUL_CONTINUE)
 		return X86EMUL_UNHANDLEABLE;
@@ -3482,7 +3511,7 @@ static int em_call_far(struct x86_emulate_ctxt *ctxt)
 	if (rc != X86EMUL_CONTINUE)
 		return rc;
 
-	rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
+	rc = assign_eip_far(ctxt, ctxt->src.val);
 	if (rc != X86EMUL_CONTINUE)
 		goto fail;
 
-- 
2.26.3
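
For reference, the mode selection implemented by the new helper above
reduces to the following decision table (derived directly from the
code; any "must be clear" violation, or a failure to read CS, returns
X86EMUL_UNHANDLEABLE):

	CR0.PE = 0                        -> REAL   (EFER.LMA must be clear)
	EFLAGS.VM = 1                     -> VM86   (EFER.LMA must be clear)
	EFER.LMA = 1, CS.L = 1            -> PROT64
	EFER.LMA = 1, CS.L = 0, CS.D = 1  -> PROT32 (compatibility mode)
	EFER.LMA = 1, CS.L = 0, CS.D = 0  -> PROT16
	EFER.LMA = 0                      -> CS.D ? PROT32 : PROT16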
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Borislav Petkov, Dave Hansen, linux-kernel@vger.kernel.org, Wanpeng Li, Maxim Levitsky, Ingo Molnar, Sean Christopherson, x86@kernel.org, Jim Mattson, Kees Cook, Thomas Gleixner, "H. Peter Anvin", Joerg Roedel, Vitaly Kuznetsov, Paolo Bonzini
Subject: [PATCH v3 04/13] KVM: x86: emulator: update the emulation mode after rsm
Date: Wed, 3 Aug 2022 18:50:02 +0300
Message-Id: <20220803155011.43721-5-mlevitsk@redhat.com>
In-Reply-To: <20220803155011.43721-1-mlevitsk@redhat.com>

Update the emulation mode after RSM. This ensures that RIP will be
correctly written back, because the RSM instruction can switch the CPU
mode from 32 bit (or less) to 64 bit.

This fixes a guest crash in case the #SMI is received while the guest
is running code at an address that does not fit in 32 bits.

Signed-off-by: Maxim Levitsky
Tested-by: Thomas Lamprecht
---
 arch/x86/kvm/emulate.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index bc70caf403c2b4..5e91b26cc1d8aa 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2666,6 +2666,11 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
 	if (ret != X86EMUL_CONTINUE)
 		goto emulate_shutdown;
 
+
+	ret = emulator_recalc_and_set_mode(ctxt);
+	if (ret != X86EMUL_CONTINUE)
+		goto emulate_shutdown;
+
 	/*
 	 * Note, the ctxt->ops callbacks are responsible for handling side
 	 * effects when writing MSRs and CRs, e.g. MMU context resets, CPUID
-- 
2.26.3
Peter Anvin" , Joerg Roedel , Vitaly Kuznetsov , Paolo Bonzini Subject: [PATCH v3 05/13] KVM: x86: emulator: update the emulation mode after CR0 write Date: Wed, 3 Aug 2022 18:50:03 +0300 Message-Id: <20220803155011.43721-6-mlevitsk@redhat.com> In-Reply-To: <20220803155011.43721-1-mlevitsk@redhat.com> References: <20220803155011.43721-1-mlevitsk@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.78 on 10.11.54.3 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" CR0.PE toggles real/protected mode, thus its update should update the emulation mode. This is likely a benign bug because there is no writeback of state, other than the RIP increment, and when toggling CR0.PE, the CPU has to execute code from a very low memory address. Also CR0.PG toggle when EFER.LMA is set, toggles the long mode. Signed-off-by: Maxim Levitsky Tested-by: Thomas Lamprecht --- arch/x86/kvm/emulate.c | 14 +++++++++++++- 1 file changed, 13 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index 5e91b26cc1d8aa..765ec65b2861ba 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -3658,11 +3658,23 @@ static int em_movbe(struct x86_emulate_ctxt *ctxt) =20 static int em_cr_write(struct x86_emulate_ctxt *ctxt) { - if (ctxt->ops->set_cr(ctxt, ctxt->modrm_reg, ctxt->src.val)) + int cr_num =3D ctxt->modrm_reg; + int r; + + if (ctxt->ops->set_cr(ctxt, cr_num, ctxt->src.val)) return emulate_gp(ctxt, 0); =20 /* Disable writeback. */ ctxt->dst.type =3D OP_NONE; + + if (cr_num =3D=3D 0) { + /* CR0 write might have updated CR0.PE and/or CR0.PG + * which can affect the cpu execution mode */ + r =3D emulator_recalc_and_set_mode(ctxt); + if (r !=3D X86EMUL_CONTINUE) + return r; + } + return X86EMUL_CONTINUE; } =20 --=20 2.26.3 From nobody Fri Dec 19 19:14:30 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 85F34C19F28 for ; Wed, 3 Aug 2022 15:51:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238065AbiHCPvY (ORCPT ); Wed, 3 Aug 2022 11:51:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49270 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238330AbiHCPu6 (ORCPT ); Wed, 3 Aug 2022 11:50:58 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 0EAC65582 for ; Wed, 3 Aug 2022 08:50:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1659541847; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=VtSzKXeXuo8UPe5oGViC3ZV9LBnnbbE6LXAjjaS3w3c=; b=JA8/p8Bph9p43AkK3xlxW73lqMnVBhOoLtBZCgvyZGiS+78ADDHNfP11mLxQ7GuVVjHJz7 u1oczODo1FHMMgaUX/L+PlSE+V+CidyDt7Akr1pte9yPIVVblALjeMcWYUohj5WRFYJQdh vlcywwmueVusZRCMn8amdQe7bQggdIY= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-30-dM8-sA2IPP-VlC_VoGipsQ-1; Wed, 03 Aug 2022 11:50:43 -0400 
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Borislav Petkov, Dave Hansen, linux-kernel@vger.kernel.org, Wanpeng Li, Maxim Levitsky, Ingo Molnar, Sean Christopherson, x86@kernel.org, Jim Mattson, Kees Cook, Thomas Gleixner, "H. Peter Anvin", Joerg Roedel, Vitaly Kuznetsov, Paolo Bonzini
Subject: [PATCH v3 06/13] KVM: x86: emulator/smm: number of GPRs in the SMRAM image depends on the image format
Date: Wed, 3 Aug 2022 18:50:04 +0300
Message-Id: <20220803155011.43721-7-mlevitsk@redhat.com>
In-Reply-To: <20220803155011.43721-1-mlevitsk@redhat.com>

On a 64 bit host, if the guest doesn't have X86_FEATURE_LM, KVM will
still access 16 GPRs when loading the 32-bit SMRAM image, causing an
out-of-bounds RAM access: the 32-bit image only contains 8 GPRs at
offsets 0x7fd0 + i * 4, so with NR_EMULATOR_GPRS == 16 the reads for
i >= 12 fall past 0x7fff, the last valid offset of the 512-byte
state-save area.

On a 32 bit host, rsm_load_state_64()/enter_smm_save_state_64() are
compiled out, so the access overflow can't happen there.

Fixes: b443183a25ab61 ("KVM: x86: Reduce the number of emulator GPRs to '8' for 32-bit KVM")
Signed-off-by: Maxim Levitsky
Reviewed-by: Sean Christopherson
Tested-by: Thomas Lamprecht
---
 arch/x86/kvm/emulate.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 765ec65b2861ba..18551611cb13af 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2473,7 +2473,7 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
 	ctxt->eflags = GET_SMSTATE(u32, smstate, 0x7ff4) | X86_EFLAGS_FIXED;
 	ctxt->_eip = GET_SMSTATE(u32, smstate, 0x7ff0);
 
-	for (i = 0; i < NR_EMULATOR_GPRS; i++)
+	for (i = 0; i < 8; i++)
 		*reg_write(ctxt, i) = GET_SMSTATE(u32, smstate, 0x7fd0 + i * 4);
 
 	val = GET_SMSTATE(u32, smstate, 0x7fcc);
@@ -2530,7 +2530,7 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
 	u16 selector;
 	int i, r;
 
-	for (i = 0; i < NR_EMULATOR_GPRS; i++)
+	for (i = 0; i < 16; i++)
 		*reg_write(ctxt, i) = GET_SMSTATE(u64, smstate, 0x7ff8 - i * 8);
 
 	ctxt->_eip = GET_SMSTATE(u64, smstate, 0x7f78);
-- 
2.26.3
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Borislav Petkov, Dave Hansen, linux-kernel@vger.kernel.org, Wanpeng Li, Maxim Levitsky, Ingo Molnar, Sean Christopherson, x86@kernel.org, Jim Mattson, Kees Cook, Thomas Gleixner, "H. Peter Anvin", Joerg Roedel, Vitaly Kuznetsov, Paolo Bonzini
Subject: [PATCH v3 07/13] KVM: x86: emulator/smm: add structs for KVM's smram layout
Date: Wed, 3 Aug 2022 18:50:05 +0300
Message-Id: <20220803155011.43721-8-mlevitsk@redhat.com>
In-Reply-To: <20220803155011.43721-1-mlevitsk@redhat.com>

These structs will be used to read/write the SMRAM state image. Also
document the differences between KVM's SMRAM layout and the SMRAM
layout used by real Intel/AMD CPUs.

Signed-off-by: Maxim Levitsky
Tested-by: Thomas Lamprecht
---
 arch/x86/kvm/emulate.c     |   6 +
 arch/x86/kvm/kvm_emulate.h | 218 +++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c         |   1 +
 3 files changed, 225 insertions(+)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 18551611cb13af..55d9328e6074a2 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -5864,3 +5864,9 @@ bool emulator_can_use_gpa(struct x86_emulate_ctxt *ctxt)
 
 	return true;
 }
+
+void __init kvm_emulator_init(void)
+{
+	__check_smram32_offsets();
+	__check_smram64_offsets();
+}
diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
index 89246446d6aa9d..dd0ae61e44a116 100644
--- a/arch/x86/kvm/kvm_emulate.h
+++ b/arch/x86/kvm/kvm_emulate.h
@@ -13,6 +13,7 @@
 #define _ASM_X86_KVM_X86_EMULATE_H
 
 #include <asm/desc_defs.h>
+#include <linux/build_bug.h>
 #include "fpu.h"
 
 struct x86_emulate_ctxt;
@@ -503,6 +504,223 @@ enum x86_intercept {
 	nr_x86_intercepts
 };
 
+
+/* 32 bit KVM's emulated SMM layout. Loosely based on Intel's layout */
+
+struct kvm_smm_seg_state_32 {
+	u32 flags;
+	u32 limit;
+	u32 base;
+} __packed;
+
+struct kvm_smram_state_32 {
+	u32 reserved1[62];
+	u32 smbase;
+	u32 smm_revision;
+	u32 reserved2[5];
+	u32 cr4; /* CR4 is not present in Intel/AMD SMRAM image */
+	u32 reserved3[5];
+
+	/*
+	 * Segment state is not present/documented in the Intel/AMD SMRAM image
+	 * Instead this area on Intel/AMD contains IO/HLT restart flags.
+	 */
+	struct kvm_smm_seg_state_32 ds;
+	struct kvm_smm_seg_state_32 fs;
+	struct kvm_smm_seg_state_32 gs;
+	struct kvm_smm_seg_state_32 idtr; /* IDTR has only base and limit */
+	struct kvm_smm_seg_state_32 tr;
+	u32 reserved;
+	struct kvm_smm_seg_state_32 gdtr; /* GDTR has only base and limit */
+	struct kvm_smm_seg_state_32 ldtr;
+	struct kvm_smm_seg_state_32 es;
+	struct kvm_smm_seg_state_32 cs;
+	struct kvm_smm_seg_state_32 ss;
+
+	u32 es_sel;
+	u32 cs_sel;
+	u32 ss_sel;
+	u32 ds_sel;
+	u32 fs_sel;
+	u32 gs_sel;
+	u32 ldtr_sel;
+	u32 tr_sel;
+
+	u32 dr7;
+	u32 dr6;
+	u32 gprs[8]; /* GPRS in the "natural" X86 order (EAX/ECX/EDX.../EDI) */
+	u32 eip;
+	u32 eflags;
+	u32 cr3;
+	u32 cr0;
+} __packed;
+
+
+static inline void __check_smram32_offsets(void)
+{
+#define __CHECK_SMRAM32_OFFSET(field, offset) \
+	ASSERT_STRUCT_OFFSET(struct kvm_smram_state_32, field, offset - 0xFE00)
+
+	__CHECK_SMRAM32_OFFSET(reserved1,	0xFE00);
+	__CHECK_SMRAM32_OFFSET(smbase,		0xFEF8);
+	__CHECK_SMRAM32_OFFSET(smm_revision,	0xFEFC);
+	__CHECK_SMRAM32_OFFSET(reserved2,	0xFF00);
+	__CHECK_SMRAM32_OFFSET(cr4,		0xFF14);
+	__CHECK_SMRAM32_OFFSET(reserved3,	0xFF18);
+	__CHECK_SMRAM32_OFFSET(ds,		0xFF2C);
+	__CHECK_SMRAM32_OFFSET(fs,		0xFF38);
+	__CHECK_SMRAM32_OFFSET(gs,		0xFF44);
+	__CHECK_SMRAM32_OFFSET(idtr,		0xFF50);
+	__CHECK_SMRAM32_OFFSET(tr,		0xFF5C);
+	__CHECK_SMRAM32_OFFSET(gdtr,		0xFF6C);
+	__CHECK_SMRAM32_OFFSET(ldtr,		0xFF78);
+	__CHECK_SMRAM32_OFFSET(es,		0xFF84);
+	__CHECK_SMRAM32_OFFSET(cs,		0xFF90);
+	__CHECK_SMRAM32_OFFSET(ss,		0xFF9C);
+	__CHECK_SMRAM32_OFFSET(es_sel,		0xFFA8);
+	__CHECK_SMRAM32_OFFSET(cs_sel,		0xFFAC);
+	__CHECK_SMRAM32_OFFSET(ss_sel,		0xFFB0);
+	__CHECK_SMRAM32_OFFSET(ds_sel,		0xFFB4);
+	__CHECK_SMRAM32_OFFSET(fs_sel,		0xFFB8);
+	__CHECK_SMRAM32_OFFSET(gs_sel,		0xFFBC);
+	__CHECK_SMRAM32_OFFSET(ldtr_sel,	0xFFC0);
+	__CHECK_SMRAM32_OFFSET(tr_sel,		0xFFC4);
+	__CHECK_SMRAM32_OFFSET(dr7,		0xFFC8);
+	__CHECK_SMRAM32_OFFSET(dr6,		0xFFCC);
+	__CHECK_SMRAM32_OFFSET(gprs,		0xFFD0);
+	__CHECK_SMRAM32_OFFSET(eip,		0xFFF0);
+	__CHECK_SMRAM32_OFFSET(eflags,		0xFFF4);
+	__CHECK_SMRAM32_OFFSET(cr3,		0xFFF8);
+	__CHECK_SMRAM32_OFFSET(cr0,		0xFFFC);
+#undef __CHECK_SMRAM32_OFFSET
+}
+
+
+/* 64 bit KVM's emulated SMM layout. Based on AMD64 layout */
+
+struct kvm_smm_seg_state_64 {
+	u16 selector;
+	u16 attributes;
+	u32 limit;
+	u64 base;
+};
+
+struct kvm_smram_state_64 {
+
+	struct kvm_smm_seg_state_64 es;
+	struct kvm_smm_seg_state_64 cs;
+	struct kvm_smm_seg_state_64 ss;
+	struct kvm_smm_seg_state_64 ds;
+	struct kvm_smm_seg_state_64 fs;
+	struct kvm_smm_seg_state_64 gs;
+	struct kvm_smm_seg_state_64 gdtr; /* GDTR has only base and limit */
+	struct kvm_smm_seg_state_64 ldtr;
+	struct kvm_smm_seg_state_64 idtr; /* IDTR has only base and limit */
+	struct kvm_smm_seg_state_64 tr;
+
+	/* I/O restart and auto halt restart are not implemented by KVM */
+	u64 io_restart_rip;
+	u64 io_restart_rcx;
+	u64 io_restart_rsi;
+	u64 io_restart_rdi;
+	u32 io_restart_dword;
+	u32 reserved1;
+	u8 io_inst_restart;
+	u8 auto_hlt_restart;
+	u8 reserved2[6];
+
+	u64 efer;
+
+	/*
+	 * Two fields below are implemented on AMD only, to store
+	 * SVM guest vmcb address if the #SMI was received while in the guest mode.
+	 */
+	u64 svm_guest_flag;
+	u64 svm_guest_vmcb_gpa;
+	u64 svm_guest_virtual_int; /* unknown purpose, not implemented */
+
+	u32 reserved3[3];
+	u32 smm_revison;
+	u32 smbase;
+	u32 reserved4[5];
+
+	/* ssp and svm_* fields below are not implemented by KVM */
+	u64 ssp;
+	u64 svm_guest_pat;
+	u64 svm_host_efer;
+	u64 svm_host_cr4;
+	u64 svm_host_cr3;
+	u64 svm_host_cr0;
+
+	u64 cr4;
+	u64 cr3;
+	u64 cr0;
+	u64 dr7;
+	u64 dr6;
+	u64 rflags;
+	u64 rip;
+	u64 gprs[16]; /* GPRS in a reversed "natural" X86 order (R15/R14/../RCX/RAX.) */
+};
+
+
+static inline void __check_smram64_offsets(void)
+{
+#define __CHECK_SMRAM64_OFFSET(field, offset) \
+	ASSERT_STRUCT_OFFSET(struct kvm_smram_state_64, field, offset - 0xFE00)
+
+	__CHECK_SMRAM64_OFFSET(es,			0xFE00);
+	__CHECK_SMRAM64_OFFSET(cs,			0xFE10);
+	__CHECK_SMRAM64_OFFSET(ss,			0xFE20);
+	__CHECK_SMRAM64_OFFSET(ds,			0xFE30);
+	__CHECK_SMRAM64_OFFSET(fs,			0xFE40);
+	__CHECK_SMRAM64_OFFSET(gs,			0xFE50);
+	__CHECK_SMRAM64_OFFSET(gdtr,			0xFE60);
+	__CHECK_SMRAM64_OFFSET(ldtr,			0xFE70);
+	__CHECK_SMRAM64_OFFSET(idtr,			0xFE80);
+	__CHECK_SMRAM64_OFFSET(tr,			0xFE90);
+	__CHECK_SMRAM64_OFFSET(io_restart_rip,		0xFEA0);
+	__CHECK_SMRAM64_OFFSET(io_restart_rcx,		0xFEA8);
+	__CHECK_SMRAM64_OFFSET(io_restart_rsi,		0xFEB0);
+	__CHECK_SMRAM64_OFFSET(io_restart_rdi,		0xFEB8);
+	__CHECK_SMRAM64_OFFSET(io_restart_dword,	0xFEC0);
+	__CHECK_SMRAM64_OFFSET(reserved1,		0xFEC4);
+	__CHECK_SMRAM64_OFFSET(io_inst_restart,		0xFEC8);
+	__CHECK_SMRAM64_OFFSET(auto_hlt_restart,	0xFEC9);
+	__CHECK_SMRAM64_OFFSET(reserved2,		0xFECA);
+	__CHECK_SMRAM64_OFFSET(efer,			0xFED0);
+	__CHECK_SMRAM64_OFFSET(svm_guest_flag,		0xFED8);
+	__CHECK_SMRAM64_OFFSET(svm_guest_vmcb_gpa,	0xFEE0);
+	__CHECK_SMRAM64_OFFSET(svm_guest_virtual_int,	0xFEE8);
+	__CHECK_SMRAM64_OFFSET(reserved3,		0xFEF0);
+	__CHECK_SMRAM64_OFFSET(smm_revison,		0xFEFC);
+	__CHECK_SMRAM64_OFFSET(smbase,			0xFF00);
+	__CHECK_SMRAM64_OFFSET(reserved4,		0xFF04);
+	__CHECK_SMRAM64_OFFSET(ssp,			0xFF18);
+	__CHECK_SMRAM64_OFFSET(svm_guest_pat,		0xFF20);
+	__CHECK_SMRAM64_OFFSET(svm_host_efer,		0xFF28);
+	__CHECK_SMRAM64_OFFSET(svm_host_cr4,		0xFF30);
+	__CHECK_SMRAM64_OFFSET(svm_host_cr3,		0xFF38);
+	__CHECK_SMRAM64_OFFSET(svm_host_cr0,		0xFF40);
+	__CHECK_SMRAM64_OFFSET(cr4,			0xFF48);
+	__CHECK_SMRAM64_OFFSET(cr3,			0xFF50);
+	__CHECK_SMRAM64_OFFSET(cr0,			0xFF58);
+	__CHECK_SMRAM64_OFFSET(dr7,			0xFF60);
+	__CHECK_SMRAM64_OFFSET(dr6,			0xFF68);
+	__CHECK_SMRAM64_OFFSET(rflags,			0xFF70);
+	__CHECK_SMRAM64_OFFSET(rip,			0xFF78);
+	__CHECK_SMRAM64_OFFSET(gprs,			0xFF80);
+#undef __CHECK_SMRAM64_OFFSET
+}
+
+union kvm_smram {
+	struct kvm_smram_state_64 smram64;
+	struct kvm_smram_state_32 smram32;
+	u8 bytes[512];
+};
+
+void __init kvm_emulator_init(void);
+
+
 /* Host execution mode. */
 #if defined(CONFIG_X86_32)
 #define X86EMUL_MODE_HOST X86EMUL_MODE_PROT32
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 33560bfa0cac6e..bea7e5015d592e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13355,6 +13355,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_vmgexit_msr_protocol_exit);
 static int __init kvm_x86_init(void)
 {
 	kvm_mmu_x86_module_init();
+	kvm_emulator_init();
 	return 0;
 }
 module_init(kvm_x86_init);
-- 
2.26.3
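
Taken together with the following patches, the structs above replace
raw offset arithmetic with typed, build-time-checked field access. A
brief illustrative sketch (the old-style line is quoted from the later
patches; the surrounding code is hypothetical):

	union kvm_smram smram;

	/* old style: a magic offset into a raw byte buffer */
	ctxt->_eip = GET_SMSTATE(u32, buf, 0x7ff0);

	/* new style: a named field whose offset is asserted at build time */
	ctxt->_eip = smram.smram32.eip;

The union additionally lets common code pass a single 512-byte object
around, while the 32 bit and 64 bit paths view it through the layout
that matches the guest.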
Peter Anvin" , Joerg Roedel , Vitaly Kuznetsov , Paolo Bonzini Subject: [PATCH v3 08/13] KVM: x86: emulator/smm: use smram structs in the common code Date: Wed, 3 Aug 2022 18:50:06 +0300 Message-Id: <20220803155011.43721-9-mlevitsk@redhat.com> In-Reply-To: <20220803155011.43721-1-mlevitsk@redhat.com> References: <20220803155011.43721-1-mlevitsk@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.78 on 10.11.54.3 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Switch from using a raw array to 'union kvm_smram'. Signed-off-by: Maxim Levitsky Tested-by: Thomas Lamprecht --- arch/x86/include/asm/kvm_host.h | 5 +++-- arch/x86/kvm/emulate.c | 12 +++++++----- arch/x86/kvm/kvm_emulate.h | 3 ++- arch/x86/kvm/svm/svm.c | 8 ++++++-- arch/x86/kvm/vmx/vmx.c | 4 ++-- arch/x86/kvm/x86.c | 16 ++++++++-------- 6 files changed, 28 insertions(+), 20 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index e8281d64a4315a..d752fabde94ad2 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -204,6 +204,7 @@ typedef enum exit_fastpath_completion fastpath_t; =20 struct x86_emulate_ctxt; struct x86_exception; +union kvm_smram; enum x86_intercept; enum x86_intercept_stage; =20 @@ -1600,8 +1601,8 @@ struct kvm_x86_ops { void (*setup_mce)(struct kvm_vcpu *vcpu); =20 int (*smi_allowed)(struct kvm_vcpu *vcpu, bool for_injection); - int (*enter_smm)(struct kvm_vcpu *vcpu, char *smstate); - int (*leave_smm)(struct kvm_vcpu *vcpu, const char *smstate); + int (*enter_smm)(struct kvm_vcpu *vcpu, union kvm_smram *smram); + int (*leave_smm)(struct kvm_vcpu *vcpu, const union kvm_smram *smram); void (*enable_smi_window)(struct kvm_vcpu *vcpu); =20 int (*mem_enc_ioctl)(struct kvm *kvm, void __user *argp); diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index 55d9328e6074a2..610978d00b52b0 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -2594,16 +2594,18 @@ static int rsm_load_state_64(struct x86_emulate_ctx= t *ctxt, static int em_rsm(struct x86_emulate_ctxt *ctxt) { unsigned long cr0, cr4, efer; - char buf[512]; + const union kvm_smram smram; u64 smbase; int ret; =20 + BUILD_BUG_ON(sizeof(smram) !=3D 512); + if ((ctxt->ops->get_hflags(ctxt) & X86EMUL_SMM_MASK) =3D=3D 0) return emulate_ud(ctxt); =20 smbase =3D ctxt->ops->get_smbase(ctxt); =20 - ret =3D ctxt->ops->read_phys(ctxt, smbase + 0xfe00, buf, sizeof(buf)); + ret =3D ctxt->ops->read_phys(ctxt, smbase + 0xfe00, (void *)&smram, sizeo= f(smram)); if (ret !=3D X86EMUL_CONTINUE) return X86EMUL_UNHANDLEABLE; =20 @@ -2653,15 +2655,15 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt) * state (e.g. enter guest mode) before loading state from the SMM * state-save area. 
*/ - if (ctxt->ops->leave_smm(ctxt, buf)) + if (ctxt->ops->leave_smm(ctxt, &smram)) goto emulate_shutdown; =20 #ifdef CONFIG_X86_64 if (emulator_has_longmode(ctxt)) - ret =3D rsm_load_state_64(ctxt, buf); + ret =3D rsm_load_state_64(ctxt, (const char *)&smram); else #endif - ret =3D rsm_load_state_32(ctxt, buf); + ret =3D rsm_load_state_32(ctxt, (const char *)&smram); =20 if (ret !=3D X86EMUL_CONTINUE) goto emulate_shutdown; diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h index dd0ae61e44a116..76c0b8e7890b5d 100644 --- a/arch/x86/kvm/kvm_emulate.h +++ b/arch/x86/kvm/kvm_emulate.h @@ -19,6 +19,7 @@ struct x86_emulate_ctxt; enum x86_intercept; enum x86_intercept_stage; +union kvm_smram; =20 struct x86_exception { u8 vector; @@ -236,7 +237,7 @@ struct x86_emulate_ops { =20 unsigned (*get_hflags)(struct x86_emulate_ctxt *ctxt); void (*exiting_smm)(struct x86_emulate_ctxt *ctxt); - int (*leave_smm)(struct x86_emulate_ctxt *ctxt, const char *smstate); + int (*leave_smm)(struct x86_emulate_ctxt *ctxt, const union kvm_smram *sm= ram); void (*triple_fault)(struct x86_emulate_ctxt *ctxt); int (*set_xcr)(struct x86_emulate_ctxt *ctxt, u32 index, u64 xcr); }; diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 38f873cb6f2c14..688315d1dfabd1 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -4433,12 +4433,14 @@ static int svm_smi_allowed(struct kvm_vcpu *vcpu, b= ool for_injection) return 1; } =20 -static int svm_enter_smm(struct kvm_vcpu *vcpu, char *smstate) +static int svm_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram) { struct vcpu_svm *svm =3D to_svm(vcpu); struct kvm_host_map map_save; int ret; =20 + char *smstate =3D (char *)smram; + if (!is_guest_mode(vcpu)) return 0; =20 @@ -4480,7 +4482,7 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, char = *smstate) return 0; } =20 -static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate) +static int svm_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smr= am) { struct vcpu_svm *svm =3D to_svm(vcpu); struct kvm_host_map map, map_save; @@ -4488,6 +4490,8 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const= char *smstate) struct vmcb *vmcb12; int ret; =20 + const char *smstate =3D (const char *)smram; + if (!guest_cpuid_has(vcpu, X86_FEATURE_LM)) return 0; =20 diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index d7f8331d6f7e72..fdb7e9280e9150 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -7919,7 +7919,7 @@ static int vmx_smi_allowed(struct kvm_vcpu *vcpu, boo= l for_injection) return !is_smm(vcpu); } =20 -static int vmx_enter_smm(struct kvm_vcpu *vcpu, char *smstate) +static int vmx_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram) { struct vcpu_vmx *vmx =3D to_vmx(vcpu); =20 @@ -7940,7 +7940,7 @@ static int vmx_enter_smm(struct kvm_vcpu *vcpu, char = *smstate) return 0; } =20 -static int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate) +static int vmx_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smr= am) { struct vcpu_vmx *vmx =3D to_vmx(vcpu); int ret; diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index bea7e5015d592e..cbbe49bdc58787 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8054,9 +8054,9 @@ static void emulator_exiting_smm(struct x86_emulate_c= txt *ctxt) } =20 static int emulator_leave_smm(struct x86_emulate_ctxt *ctxt, - const char *smstate) + const union kvm_smram *smram) { - return static_call(kvm_x86_leave_smm)(emul_to_vcpu(ctxt), smstate); + return 
static_call(kvm_x86_leave_smm)(emul_to_vcpu(ctxt), smram); } =20 static void emulator_triple_fault(struct x86_emulate_ctxt *ctxt) @@ -9979,25 +9979,25 @@ static void enter_smm(struct kvm_vcpu *vcpu) struct kvm_segment cs, ds; struct desc_ptr dt; unsigned long cr0; - char buf[512]; + union kvm_smram smram; =20 - memset(buf, 0, 512); + memset(smram.bytes, 0, sizeof(smram.bytes)); #ifdef CONFIG_X86_64 if (guest_cpuid_has(vcpu, X86_FEATURE_LM)) - enter_smm_save_state_64(vcpu, buf); + enter_smm_save_state_64(vcpu, (char *)&smram); else #endif - enter_smm_save_state_32(vcpu, buf); + enter_smm_save_state_32(vcpu, (char *)&smram); =20 /* * Give enter_smm() a chance to make ISA-specific changes to the vCPU * state (e.g. leave guest mode) after we've saved the state into the * SMM state-save area. */ - static_call(kvm_x86_enter_smm)(vcpu, buf); + static_call(kvm_x86_enter_smm)(vcpu, &smram); =20 kvm_smm_changed(vcpu, true); - kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, buf, sizeof(buf)); + kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, &smram, sizeof(smr= am)); =20 if (static_call(kvm_x86_get_nmi_mask)(vcpu)) vcpu->arch.hflags |=3D HF_SMM_INSIDE_NMI_MASK; --=20 2.26.3 From nobody Fri Dec 19 19:14:30 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 35D31C19F2C for ; Wed, 3 Aug 2022 15:52:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237150AbiHCPwA (ORCPT ); Wed, 3 Aug 2022 11:52:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49906 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238341AbiHCPvc (ORCPT ); Wed, 3 Aug 2022 11:51:32 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id B53E648E94 for ; Wed, 3 Aug 2022 08:51:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1659541862; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=pf511JrR0Yvb5ZevB6nKDQJWx2WgRXzliG96oOf4kx4=; b=dFu8Sb+UbOD2pWK7uQI5/+/6sGSTHIM0cbjRws+83saKVIWVEIM7Ui1ENQVnibwsQ47UXc P4SmxjtxOAvkAJK7jnTHoMEqAos1Pox9UG4DsX+/1gnwptOS8GzqhRfYM0sfqiceomB7ix ud/D10ef6wOSBjBGcMZJj8vGT6utLh4= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-119-xxSHHQ6FP92ECrzMzO7Ppw-1; Wed, 03 Aug 2022 11:50:55 -0400 X-MC-Unique: xxSHHQ6FP92ECrzMzO7Ppw-1 Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com [10.11.54.3]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 732F18041B5; Wed, 3 Aug 2022 15:50:54 +0000 (UTC) Received: from localhost.localdomain (unknown [10.40.194.242]) by smtp.corp.redhat.com (Postfix) with ESMTP id CECA51121315; Wed, 3 Aug 2022 15:50:50 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Borislav Petkov , Dave Hansen , linux-kernel@vger.kernel.org, Wanpeng Li , Maxim Levitsky , Ingo Molnar , Sean Christopherson , x86@kernel.org, 
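
The pattern this patch relies on can be shown in a small
self-contained sketch (simplified types, not KVM code): a union
overlays the typed 32 bit and 64 bit views on one fixed-size byte
buffer, and a build-time check keeps the raw-buffer users and the
typed users in sync, mirroring the BUILD_BUG_ON(sizeof(smram) != 512)
added to em_rsm() above.

	#include <stdint.h>

	struct state32 { uint32_t words[128]; };	/* 512 bytes */
	struct state64 { uint64_t quads[64]; };		/* 512 bytes */

	union smram_like {
		struct state64 s64;
		struct state32 s32;
		uint8_t bytes[512];
	};

	/* Compilation fails if any view changes the union's size. */
	_Static_assert(sizeof(union smram_like) == 512,
		       "SMRAM image must stay 512 bytes");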
Jim Mattson, Kees Cook, Thomas Gleixner, "H. Peter Anvin",
	Joerg Roedel, Vitaly Kuznetsov, Paolo Bonzini
Subject: [PATCH v3 09/13] KVM: x86: emulator/smm: use smram struct for 32
 bit smram load/restore
Date: Wed, 3 Aug 2022 18:50:07 +0300
Message-Id: <20220803155011.43721-10-mlevitsk@redhat.com>
In-Reply-To: <20220803155011.43721-1-mlevitsk@redhat.com>
References: <20220803155011.43721-1-mlevitsk@redhat.com>

Use the kvm_smram_state_32 struct to save/restore the 32 bit SMM state
(used when X86_FEATURE_LM is not present in the guest CPUID).

Signed-off-by: Maxim Levitsky
Tested-by: Thomas Lamprecht
---
 arch/x86/kvm/emulate.c | 81 +++++++++++++++---------------------------
 arch/x86/kvm/x86.c     | 75 +++++++++++++++++---------------------
 2 files changed, 60 insertions(+), 96 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 610978d00b52b0..3339d542a25439 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2371,25 +2371,17 @@ static void rsm_set_desc_flags(struct desc_struct *desc, u32 flags)
 	desc->type = (flags >> 8) & 15;
 }
 
-static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, const char *smstate,
+static void rsm_load_seg_32(struct x86_emulate_ctxt *ctxt,
+			    const struct kvm_smm_seg_state_32 *state,
+			    u16 selector,
 			   int n)
 {
 	struct desc_struct desc;
-	int offset;
-	u16 selector;
-
-	selector = GET_SMSTATE(u32, smstate, 0x7fa8 + n * 4);
-
-	if (n < 3)
-		offset = 0x7f84 + n * 12;
-	else
-		offset = 0x7f2c + (n - 3) * 12;
 
-	set_desc_base(&desc,  GET_SMSTATE(u32, smstate, offset + 8));
-	set_desc_limit(&desc, GET_SMSTATE(u32, smstate, offset + 4));
-	rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, offset));
+	set_desc_base(&desc,  state->base);
+	set_desc_limit(&desc, state->limit);
+	rsm_set_desc_flags(&desc, state->flags);
 	ctxt->ops->set_segment(ctxt, selector, &desc, 0, n);
-	return X86EMUL_CONTINUE;
 }
 
 #ifdef CONFIG_X86_64
@@ -2460,63 +2452,46 @@ static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt,
 }
 
 static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
-			     const char *smstate)
+			     const struct kvm_smram_state_32 *smstate)
 {
-	struct desc_struct desc;
 	struct desc_ptr dt;
-	u16 selector;
-	u32 val, cr0, cr3, cr4;
 	int i;
 
-	cr0 = GET_SMSTATE(u32, smstate, 0x7ffc);
-	cr3 = GET_SMSTATE(u32, smstate, 0x7ff8);
-	ctxt->eflags = GET_SMSTATE(u32, smstate, 0x7ff4) | X86_EFLAGS_FIXED;
-	ctxt->_eip = GET_SMSTATE(u32, smstate, 0x7ff0);
+	ctxt->eflags = smstate->eflags | X86_EFLAGS_FIXED;
+	ctxt->_eip = smstate->eip;
 
 	for (i = 0; i < 8; i++)
-		*reg_write(ctxt, i) = GET_SMSTATE(u32, smstate, 0x7fd0 + i * 4);
-
-	val = GET_SMSTATE(u32, smstate, 0x7fcc);
+		*reg_write(ctxt, i) = smstate->gprs[i];
 
-	if (ctxt->ops->set_dr(ctxt, 6, val))
+	if (ctxt->ops->set_dr(ctxt, 6, smstate->dr6))
 		return X86EMUL_UNHANDLEABLE;
-
-	val = GET_SMSTATE(u32, smstate, 0x7fc8);
-
-	if (ctxt->ops->set_dr(ctxt, 7, val))
+	if (ctxt->ops->set_dr(ctxt, 7, smstate->dr7))
 		return X86EMUL_UNHANDLEABLE;
 
-	selector = GET_SMSTATE(u32, smstate, 0x7fc4);
-	set_desc_base(&desc,  GET_SMSTATE(u32, smstate, 0x7f64));
-	set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7f60));
-	rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7f5c));
-	ctxt->ops->set_segment(ctxt, selector, &desc, 0, VCPU_SREG_TR);
+	rsm_load_seg_32(ctxt, &smstate->tr, smstate->tr_sel, VCPU_SREG_TR);
+	rsm_load_seg_32(ctxt, &smstate->ldtr, smstate->ldtr_sel, VCPU_SREG_LDTR);
 
-	selector = GET_SMSTATE(u32, smstate, 0x7fc0);
-	set_desc_base(&desc,  GET_SMSTATE(u32, smstate, 0x7f80));
-	set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7f7c));
-	rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7f78));
-	ctxt->ops->set_segment(ctxt, selector, &desc, 0, VCPU_SREG_LDTR);
 
-	dt.address = GET_SMSTATE(u32, smstate, 0x7f74);
-	dt.size = GET_SMSTATE(u32, smstate, 0x7f70);
+	dt.address = smstate->gdtr.base;
+	dt.size = smstate->gdtr.limit;
 	ctxt->ops->set_gdt(ctxt, &dt);
 
-	dt.address = GET_SMSTATE(u32, smstate, 0x7f58);
-	dt.size = GET_SMSTATE(u32, smstate, 0x7f54);
+	dt.address = smstate->idtr.base;
+	dt.size = smstate->idtr.limit;
 	ctxt->ops->set_idt(ctxt, &dt);
 
-	for (i = 0; i < 6; i++) {
-		int r = rsm_load_seg_32(ctxt, smstate, i);
-		if (r != X86EMUL_CONTINUE)
-			return r;
-	}
+	rsm_load_seg_32(ctxt, &smstate->es, smstate->es_sel, VCPU_SREG_ES);
+	rsm_load_seg_32(ctxt, &smstate->cs, smstate->cs_sel, VCPU_SREG_CS);
+	rsm_load_seg_32(ctxt, &smstate->ss, smstate->ss_sel, VCPU_SREG_SS);
 
-	cr4 = GET_SMSTATE(u32, smstate, 0x7f14);
+	rsm_load_seg_32(ctxt, &smstate->ds, smstate->ds_sel, VCPU_SREG_DS);
+	rsm_load_seg_32(ctxt, &smstate->fs, smstate->fs_sel, VCPU_SREG_FS);
+	rsm_load_seg_32(ctxt, &smstate->gs, smstate->gs_sel, VCPU_SREG_GS);
 
-	ctxt->ops->set_smbase(ctxt, GET_SMSTATE(u32, smstate, 0x7ef8));
+	ctxt->ops->set_smbase(ctxt, smstate->smbase);
 
-	return rsm_enter_protected_mode(ctxt, cr0, cr3, cr4);
+	return rsm_enter_protected_mode(ctxt, smstate->cr0,
+					smstate->cr3, smstate->cr4);
 }
 
 #ifdef CONFIG_X86_64
@@ -2663,7 +2638,7 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
 		ret = rsm_load_state_64(ctxt, (const char *)&smram);
 	else
 #endif
-		ret = rsm_load_state_32(ctxt, (const char *)&smram);
+		ret = rsm_load_state_32(ctxt, &smram.smram32);
 
 	if (ret != X86EMUL_CONTINUE)
 		goto emulate_shutdown;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cbbe49bdc58787..6abe35f7687e2c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9833,22 +9833,18 @@ static u32 enter_smm_get_segment_flags(struct kvm_segment *seg)
 	return flags;
 }
 
-static void enter_smm_save_seg_32(struct kvm_vcpu *vcpu, char *buf, int n)
+static void enter_smm_save_seg_32(struct kvm_vcpu *vcpu,
+				  struct kvm_smm_seg_state_32 *state,
+				  u32 *selector,
+				  int n)
 {
 	struct kvm_segment seg;
-	int offset;
 
 	kvm_get_segment(vcpu, &seg, n);
-	put_smstate(u32, buf, 0x7fa8 + n * 4, seg.selector);
-
-	if (n < 3)
-		offset = 0x7f84 + n * 12;
-	else
-		offset = 0x7f2c + (n - 3) * 12;
-
-	put_smstate(u32, buf, offset + 8, seg.base);
-	put_smstate(u32, buf, offset + 4, seg.limit);
-	put_smstate(u32, buf, offset, enter_smm_get_segment_flags(&seg));
+	*selector = seg.selector;
+	state->base = seg.base;
+	state->limit = seg.limit;
+	state->flags = enter_smm_get_segment_flags(&seg);
 }
 
 #ifdef CONFIG_X86_64
@@ -9869,54 +9865,47 @@ static void enter_smm_save_seg_64(struct kvm_vcpu *vcpu, char *buf, int n)
 }
 #endif
 
-static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, char *buf)
+static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, struct kvm_smram_state_32 *smram)
 {
 	struct desc_ptr dt;
-	struct kvm_segment seg;
 	unsigned long val;
 	int i;
 
-	put_smstate(u32, buf, 0x7ffc, kvm_read_cr0(vcpu));
-	put_smstate(u32, buf, 0x7ff8, kvm_read_cr3(vcpu));
-	put_smstate(u32, buf, 0x7ff4, kvm_get_rflags(vcpu));
-	put_smstate(u32, buf, 0x7ff0, kvm_rip_read(vcpu));
+	smram->cr0    = kvm_read_cr0(vcpu);
+	smram->cr3    = kvm_read_cr3(vcpu);
+	smram->eflags = kvm_get_rflags(vcpu);
+	smram->eip    = kvm_rip_read(vcpu);
 
 	for (i = 0; i < 8; i++)
-		put_smstate(u32, buf, 0x7fd0 + i * 4, kvm_register_read_raw(vcpu, i));
+		smram->gprs[i] = kvm_register_read_raw(vcpu, i);
 
 	kvm_get_dr(vcpu, 6, &val);
-	put_smstate(u32, buf, 0x7fcc, (u32)val);
+	smram->dr6    = (u32)val;
 	kvm_get_dr(vcpu, 7, &val);
-	put_smstate(u32, buf, 0x7fc8, (u32)val);
+	smram->dr7    = (u32)val;
 
-	kvm_get_segment(vcpu, &seg, VCPU_SREG_TR);
-	put_smstate(u32, buf, 0x7fc4, seg.selector);
-	put_smstate(u32, buf, 0x7f64, seg.base);
-	put_smstate(u32, buf, 0x7f60, seg.limit);
-	put_smstate(u32, buf, 0x7f5c, enter_smm_get_segment_flags(&seg));
-
-	kvm_get_segment(vcpu, &seg, VCPU_SREG_LDTR);
-	put_smstate(u32, buf, 0x7fc0, seg.selector);
-	put_smstate(u32, buf, 0x7f80, seg.base);
-	put_smstate(u32, buf, 0x7f7c, seg.limit);
-	put_smstate(u32, buf, 0x7f78, enter_smm_get_segment_flags(&seg));
+	enter_smm_save_seg_32(vcpu, &smram->tr, &smram->tr_sel, VCPU_SREG_TR);
+	enter_smm_save_seg_32(vcpu, &smram->ldtr, &smram->ldtr_sel, VCPU_SREG_LDTR);
 
 	static_call(kvm_x86_get_gdt)(vcpu, &dt);
-	put_smstate(u32, buf, 0x7f74, dt.address);
-	put_smstate(u32, buf, 0x7f70, dt.size);
+	smram->gdtr.base  = dt.address;
+	smram->gdtr.limit = dt.size;
 
 	static_call(kvm_x86_get_idt)(vcpu, &dt);
-	put_smstate(u32, buf, 0x7f58, dt.address);
-	put_smstate(u32, buf, 0x7f54, dt.size);
+	smram->idtr.base  = dt.address;
+	smram->idtr.limit = dt.size;
 
-	for (i = 0; i < 6; i++)
-		enter_smm_save_seg_32(vcpu, buf, i);
+	enter_smm_save_seg_32(vcpu, &smram->es, &smram->es_sel, VCPU_SREG_ES);
+	enter_smm_save_seg_32(vcpu, &smram->cs, &smram->cs_sel, VCPU_SREG_CS);
+	enter_smm_save_seg_32(vcpu, &smram->ss, &smram->ss_sel, VCPU_SREG_SS);
 
-	put_smstate(u32, buf, 0x7f14, kvm_read_cr4(vcpu));
+	enter_smm_save_seg_32(vcpu, &smram->ds, &smram->ds_sel, VCPU_SREG_DS);
+	enter_smm_save_seg_32(vcpu, &smram->fs, &smram->fs_sel, VCPU_SREG_FS);
+	enter_smm_save_seg_32(vcpu, &smram->gs, &smram->gs_sel, VCPU_SREG_GS);
 
-	/* revision id */
-	put_smstate(u32, buf, 0x7efc, 0x00020000);
-	put_smstate(u32, buf, 0x7ef8, vcpu->arch.smbase);
+	smram->cr4 = kvm_read_cr4(vcpu);
+	smram->smm_revision = 0x00020000;
+	smram->smbase = vcpu->arch.smbase;
 }
 
 #ifdef CONFIG_X86_64
@@ -9987,7 +9976,7 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 		enter_smm_save_state_64(vcpu, (char *)&smram);
 	else
 #endif
-		enter_smm_save_state_32(vcpu, (char *)&smram);
+		enter_smm_save_state_32(vcpu, &smram.smram32);
 
 	/*
 	 * Give enter_smm() a chance to make ISA-specific changes to the vCPU
-- 
2.26.3
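The gist of this conversion can be sketched in standalone C (a toy model,
not KVM code: demo_state and its two fields are made-up stand-ins for
kvm_smram_state_32, and x86's little-endian byte order is assumed):

#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Old style: a raw byte buffer indexed with magic offsets. */
static uint32_t get_u32(const uint8_t *buf, size_t off)
{
	uint32_t v;

	memcpy(&v, buf + off, sizeof(v));
	return v;
}

/* New style: a typed view of the same bytes; the layout records each
 * offset exactly once instead of at every call site. */
struct demo_state {
	uint32_t eip;		/* at offset 0 */
	uint32_t eflags;	/* at offset 4 */
} __attribute__((packed));

int main(void)
{
	uint8_t buf[8] = { 0x00, 0x80, 0x00, 0x00,	/* eip    = 0x8000 */
			   0x02, 0x00, 0x00, 0x00 };	/* eflags = 0x2 */
	struct demo_state st;

	memcpy(&st, buf, sizeof(st));
	printf("raw:    eip=%#" PRIx32 " eflags=%#" PRIx32 "\n",
	       get_u32(buf, 0), get_u32(buf, 4));
	printf("struct: eip=%#" PRIx32 " eflags=%#" PRIx32 "\n",
	       st.eip, st.eflags);
	return 0;
}

The struct-based form wins because each offset is written down once, in
the layout itself, where the series' build-time offset checks can police
it, rather than being repeated as a magic number at every access.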
From nobody Fri Dec 19 19:14:30 2025
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Borislav Petkov, Dave Hansen, linux-kernel@vger.kernel.org,
	Wanpeng Li, Maxim Levitsky, Ingo Molnar, Sean Christopherson,
	x86@kernel.org, Jim Mattson, Kees Cook, Thomas Gleixner,
	"H. Peter Anvin", Joerg Roedel, Vitaly Kuznetsov, Paolo Bonzini
Subject: [PATCH v3 10/13] KVM: x86: emulator/smm: use smram struct for 64
 bit smram load/restore
Date: Wed, 3 Aug 2022 18:50:08 +0300
Message-Id: <20220803155011.43721-11-mlevitsk@redhat.com>
In-Reply-To: <20220803155011.43721-1-mlevitsk@redhat.com>
References: <20220803155011.43721-1-mlevitsk@redhat.com>

Use the kvm_smram_state_64 struct to save/restore the 64 bit SMM state
(used when X86_FEATURE_LM is present in the guest CPUID, regardless of
the 32-bitness of the guest).
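One detail worth calling out in the diff below: rsm_load_seg_64() splits
the 64 bit segment base, feeding the low half to set_desc_base() and
passing the high half as the base3 argument of set_segment(). A minimal
standalone sketch of that split (the base value is an arbitrary example,
not taken from the patch):

#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t base  = 0x00000123deadbeefULL;	 /* example segment base */
	uint32_t low   = (uint32_t)base;	 /* what set_desc_base() gets */
	uint32_t base3 = (uint32_t)(base >> 32); /* the separate base3 half */

	/* The consumer recombines the two halves losslessly. */
	assert((((uint64_t)base3 << 32) | low) == base);

	printf("base=%#" PRIx64 " low=%#" PRIx32 " base3=%#" PRIx32 "\n",
	       base, low, base3);
	return 0;
}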
Signed-off-by: Maxim Levitsky
Tested-by: Thomas Lamprecht
---
 arch/x86/kvm/emulate.c | 88 ++++++++++++++----------------------------
 arch/x86/kvm/x86.c     | 75 ++++++++++++++++-------------------
 2 files changed, 62 insertions(+), 101 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 3339d542a25439..4bdbc5893a1657 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2385,24 +2385,16 @@ static void rsm_load_seg_32(struct x86_emulate_ctxt *ctxt,
 }
 
 #ifdef CONFIG_X86_64
-static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, const char *smstate,
-			   int n)
+static void rsm_load_seg_64(struct x86_emulate_ctxt *ctxt,
+			    const struct kvm_smm_seg_state_64 *state,
+			    int n)
 {
 	struct desc_struct desc;
-	int offset;
-	u16 selector;
-	u32 base3;
-
-	offset = 0x7e00 + n * 16;
-
-	selector = GET_SMSTATE(u16, smstate, offset);
-	rsm_set_desc_flags(&desc, GET_SMSTATE(u16, smstate, offset + 2) << 8);
-	set_desc_limit(&desc,     GET_SMSTATE(u32, smstate, offset + 4));
-	set_desc_base(&desc,      GET_SMSTATE(u32, smstate, offset + 8));
-	base3 = GET_SMSTATE(u32, smstate, offset + 12);
 
-	ctxt->ops->set_segment(ctxt, selector, &desc, base3, n);
-	return X86EMUL_CONTINUE;
+	rsm_set_desc_flags(&desc, state->attributes << 8);
+	set_desc_limit(&desc, state->limit);
+	set_desc_base(&desc, (u32)state->base);
+	ctxt->ops->set_segment(ctxt, state->selector, &desc, state->base >> 32, n);
 }
 #endif
 
@@ -2496,71 +2488,49 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
 
 #ifdef CONFIG_X86_64
 static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
-			     const char *smstate)
+			     const struct kvm_smram_state_64 *smstate)
 {
-	struct desc_struct desc;
 	struct desc_ptr dt;
-	u64 val, cr0, cr3, cr4;
-	u32 base3;
-	u16 selector;
 	int i, r;
 
 	for (i = 0; i < 16; i++)
-		*reg_write(ctxt, i) = GET_SMSTATE(u64, smstate, 0x7ff8 - i * 8);
+		*reg_write(ctxt, i) = smstate->gprs[15 - i];
 
-	ctxt->_eip   = GET_SMSTATE(u64, smstate, 0x7f78);
-	ctxt->eflags = GET_SMSTATE(u32, smstate, 0x7f70) | X86_EFLAGS_FIXED;
+	ctxt->_eip   = smstate->rip;
+	ctxt->eflags = smstate->rflags | X86_EFLAGS_FIXED;
 
-	val = GET_SMSTATE(u64, smstate, 0x7f68);
-
-	if (ctxt->ops->set_dr(ctxt, 6, val))
+	if (ctxt->ops->set_dr(ctxt, 6, smstate->dr6))
 		return X86EMUL_UNHANDLEABLE;
-
-	val = GET_SMSTATE(u64, smstate, 0x7f60);
-
-	if (ctxt->ops->set_dr(ctxt, 7, val))
+	if (ctxt->ops->set_dr(ctxt, 7, smstate->dr7))
 		return X86EMUL_UNHANDLEABLE;
 
-	cr0 = GET_SMSTATE(u64, smstate, 0x7f58);
-	cr3 = GET_SMSTATE(u64, smstate, 0x7f50);
-	cr4 = GET_SMSTATE(u64, smstate, 0x7f48);
-	ctxt->ops->set_smbase(ctxt, GET_SMSTATE(u32, smstate, 0x7f00));
-	val = GET_SMSTATE(u64, smstate, 0x7ed0);
+	ctxt->ops->set_smbase(ctxt, smstate->smbase);
 
-	if (ctxt->ops->set_msr(ctxt, MSR_EFER, val & ~EFER_LMA))
+	if (ctxt->ops->set_msr(ctxt, MSR_EFER, smstate->efer & ~EFER_LMA))
 		return X86EMUL_UNHANDLEABLE;
 
-	selector = GET_SMSTATE(u32, smstate, 0x7e90);
-	rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7e92) << 8);
-	set_desc_limit(&desc,     GET_SMSTATE(u32, smstate, 0x7e94));
-	set_desc_base(&desc,      GET_SMSTATE(u32, smstate, 0x7e98));
-	base3 = GET_SMSTATE(u32, smstate, 0x7e9c);
-	ctxt->ops->set_segment(ctxt, selector, &desc, base3, VCPU_SREG_TR);
+	rsm_load_seg_64(ctxt, &smstate->tr, VCPU_SREG_TR);
 
-	dt.size =    GET_SMSTATE(u32, smstate, 0x7e84);
-	dt.address = GET_SMSTATE(u64, smstate, 0x7e88);
+	dt.size = smstate->idtr.limit;
+	dt.address = smstate->idtr.base;
 	ctxt->ops->set_idt(ctxt, &dt);
 
-	selector = GET_SMSTATE(u32, smstate, 0x7e70);
-	rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7e72) << 8);
-	set_desc_limit(&desc,     GET_SMSTATE(u32, smstate, 0x7e74));
-	set_desc_base(&desc,      GET_SMSTATE(u32, smstate, 0x7e78));
-	base3 = GET_SMSTATE(u32, smstate, 0x7e7c);
-	ctxt->ops->set_segment(ctxt, selector, &desc, base3, VCPU_SREG_LDTR);
+	rsm_load_seg_64(ctxt, &smstate->ldtr, VCPU_SREG_LDTR);
 
-	dt.size =    GET_SMSTATE(u32, smstate, 0x7e64);
-	dt.address = GET_SMSTATE(u64, smstate, 0x7e68);
+	dt.size = smstate->gdtr.limit;
+	dt.address = smstate->gdtr.base;
 	ctxt->ops->set_gdt(ctxt, &dt);
 
-	r = rsm_enter_protected_mode(ctxt, cr0, cr3, cr4);
+	r = rsm_enter_protected_mode(ctxt, smstate->cr0, smstate->cr3, smstate->cr4);
 	if (r != X86EMUL_CONTINUE)
 		return r;
 
-	for (i = 0; i < 6; i++) {
-		r = rsm_load_seg_64(ctxt, smstate, i);
-		if (r != X86EMUL_CONTINUE)
-			return r;
-	}
+	rsm_load_seg_64(ctxt, &smstate->es, VCPU_SREG_ES);
+	rsm_load_seg_64(ctxt, &smstate->cs, VCPU_SREG_CS);
+	rsm_load_seg_64(ctxt, &smstate->ss, VCPU_SREG_SS);
+	rsm_load_seg_64(ctxt, &smstate->ds, VCPU_SREG_DS);
+	rsm_load_seg_64(ctxt, &smstate->fs, VCPU_SREG_FS);
+	rsm_load_seg_64(ctxt, &smstate->gs, VCPU_SREG_GS);
 
 	return X86EMUL_CONTINUE;
 }
@@ -2635,7 +2605,7 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
 
 #ifdef CONFIG_X86_64
 	if (emulator_has_longmode(ctxt))
-		ret = rsm_load_state_64(ctxt, (const char *)&smram);
+		ret = rsm_load_state_64(ctxt, &smram.smram64);
 	else
 #endif
 		ret = rsm_load_state_32(ctxt, &smram.smram32);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6abe35f7687e2c..4e3ef63baf83df 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9848,20 +9848,17 @@ static void enter_smm_save_seg_32(struct kvm_vcpu *vcpu,
 }
 
 #ifdef CONFIG_X86_64
-static void enter_smm_save_seg_64(struct kvm_vcpu *vcpu, char *buf, int n)
+static void enter_smm_save_seg_64(struct kvm_vcpu *vcpu,
+				  struct kvm_smm_seg_state_64 *state,
+				  int n)
 {
 	struct kvm_segment seg;
-	int offset;
-	u16 flags;
 
 	kvm_get_segment(vcpu, &seg, n);
-	offset = 0x7e00 + n * 16;
-
-	flags = enter_smm_get_segment_flags(&seg) >> 8;
-	put_smstate(u16, buf, offset, seg.selector);
-	put_smstate(u16, buf, offset + 2, flags);
-	put_smstate(u32, buf, offset + 4, seg.limit);
-	put_smstate(u64, buf, offset + 8, seg.base);
+	state->selector = seg.selector;
+	state->attributes = enter_smm_get_segment_flags(&seg) >> 8;
+	state->limit = seg.limit;
+	state->base = seg.base;
 }
 #endif
 
@@ -9909,57 +9906,51 @@ static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, struct kvm_smram_stat
 }
 
 #ifdef CONFIG_X86_64
-static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf)
+static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, struct kvm_smram_state_64 *smram)
 {
 	struct desc_ptr dt;
-	struct kvm_segment seg;
 	unsigned long val;
 	int i;
 
 	for (i = 0; i < 16; i++)
-		put_smstate(u64, buf, 0x7ff8 - i * 8, kvm_register_read_raw(vcpu, i));
+		smram->gprs[15 - i] = kvm_register_read_raw(vcpu, i);
+
+	smram->rip    = kvm_rip_read(vcpu);
+	smram->rflags = kvm_get_rflags(vcpu);
 
-	put_smstate(u64, buf, 0x7f78, kvm_rip_read(vcpu));
-	put_smstate(u32, buf, 0x7f70, kvm_get_rflags(vcpu));
 
 	kvm_get_dr(vcpu, 6, &val);
-	put_smstate(u64, buf, 0x7f68, val);
+	smram->dr6 = val;
 	kvm_get_dr(vcpu, 7, &val);
-	put_smstate(u64, buf, 0x7f60, val);
-
-	put_smstate(u64, buf, 0x7f58, kvm_read_cr0(vcpu));
-	put_smstate(u64, buf, 0x7f50, kvm_read_cr3(vcpu));
-	put_smstate(u64, buf, 0x7f48, kvm_read_cr4(vcpu));
+	smram->dr7 = val;
 
-	put_smstate(u32, buf, 0x7f00, vcpu->arch.smbase);
+	smram->cr0 = kvm_read_cr0(vcpu);
+	smram->cr3 = kvm_read_cr3(vcpu);
+	smram->cr4 = kvm_read_cr4(vcpu);
 
-	/* revision id */
-	put_smstate(u32, buf, 0x7efc, 0x00020064);
+	smram->smbase = vcpu->arch.smbase;
+	smram->smm_revison = 0x00020064;
 
-	put_smstate(u64, buf, 0x7ed0, vcpu->arch.efer);
+	smram->efer = vcpu->arch.efer;
 
-	kvm_get_segment(vcpu, &seg, VCPU_SREG_TR);
-	put_smstate(u16, buf, 0x7e90, seg.selector);
-	put_smstate(u16, buf, 0x7e92, enter_smm_get_segment_flags(&seg) >> 8);
-	put_smstate(u32, buf, 0x7e94, seg.limit);
-	put_smstate(u64, buf, 0x7e98, seg.base);
+	enter_smm_save_seg_64(vcpu, &smram->tr, VCPU_SREG_TR);
 
 	static_call(kvm_x86_get_idt)(vcpu, &dt);
-	put_smstate(u32, buf, 0x7e84, dt.size);
-	put_smstate(u64, buf, 0x7e88, dt.address);
+	smram->idtr.limit = dt.size;
+	smram->idtr.base  = dt.address;
 
-	kvm_get_segment(vcpu, &seg, VCPU_SREG_LDTR);
-	put_smstate(u16, buf, 0x7e70, seg.selector);
-	put_smstate(u16, buf, 0x7e72, enter_smm_get_segment_flags(&seg) >> 8);
-	put_smstate(u32, buf, 0x7e74, seg.limit);
-	put_smstate(u64, buf, 0x7e78, seg.base);
+	enter_smm_save_seg_64(vcpu, &smram->ldtr, VCPU_SREG_LDTR);
 
 	static_call(kvm_x86_get_gdt)(vcpu, &dt);
-	put_smstate(u32, buf, 0x7e64, dt.size);
-	put_smstate(u64, buf, 0x7e68, dt.address);
+	smram->gdtr.limit = dt.size;
+	smram->gdtr.base  = dt.address;
 
-	for (i = 0; i < 6; i++)
-		enter_smm_save_seg_64(vcpu, buf, i);
+	enter_smm_save_seg_64(vcpu, &smram->es, VCPU_SREG_ES);
+	enter_smm_save_seg_64(vcpu, &smram->cs, VCPU_SREG_CS);
+	enter_smm_save_seg_64(vcpu, &smram->ss, VCPU_SREG_SS);
+	enter_smm_save_seg_64(vcpu, &smram->ds, VCPU_SREG_DS);
+	enter_smm_save_seg_64(vcpu, &smram->fs, VCPU_SREG_FS);
+	enter_smm_save_seg_64(vcpu, &smram->gs, VCPU_SREG_GS);
 }
 #endif
 
@@ -9973,7 +9964,7 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 	memset(smram.bytes, 0, sizeof(smram.bytes));
#ifdef CONFIG_X86_64
 	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
-		enter_smm_save_state_64(vcpu, (char *)&smram);
+		enter_smm_save_state_64(vcpu, &smram.smram64);
 	else
 #endif
 		enter_smm_save_state_32(vcpu, &smram.smram32);
-- 
2.26.3
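The gprs[15 - i] indexing in the patch above encodes the SMRAM layout
convention that the save area keeps the GPRs in reverse of KVM's register
numbering, so RAX..R15 land in gprs[15]..gprs[0]. A standalone sketch of
the mapping (register values are invented for illustration):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t regs[16], gprs[16];
	int i;

	/* Invented values; index 0 is RAX, index 15 is R15, matching
	 * KVM's register numbering. */
	for (i = 0; i < 16; i++)
		regs[i] = 0x1000 + i;

	/* Save direction, as in enter_smm_save_state_64(). */
	for (i = 0; i < 16; i++)
		gprs[15 - i] = regs[i];

	/* RAX lands at the end of the array, R15 at the start. */
	assert(gprs[15] == regs[0]);
	assert(gprs[0] == regs[15]);

	/* Restore direction, as in rsm_load_state_64(): the same
	 * transform is its own inverse. */
	for (i = 0; i < 16; i++)
		assert(regs[i] == gprs[15 - i]);

	printf("gprs[15]=%#llx (RAX), gprs[0]=%#llx (R15)\n",
	       (unsigned long long)gprs[15], (unsigned long long)gprs[0]);
	return 0;
}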
From nobody Fri Dec 19 19:14:30 2025
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Borislav Petkov, Dave Hansen, linux-kernel@vger.kernel.org,
	Wanpeng Li, Maxim Levitsky, Ingo Molnar, Sean Christopherson,
	x86@kernel.org, Jim Mattson, Kees Cook, Thomas Gleixner,
	"H. Peter Anvin", Joerg Roedel, Vitaly Kuznetsov, Paolo Bonzini
Subject: [PATCH v3 11/13] KVM: x86: SVM: use smram structs
Date: Wed, 3 Aug 2022 18:50:09 +0300
Message-Id: <20220803155011.43721-12-mlevitsk@redhat.com>
In-Reply-To: <20220803155011.43721-1-mlevitsk@redhat.com>
References: <20220803155011.43721-1-mlevitsk@redhat.com>

Using the smram structs in the SVM code removes the last user of
put_smstate/GET_SMSTATE, so remove these macros as well.

Also add a sanity check that we don't attempt to enter SMM with a
running nested guest on a guest CPU that is not long mode capable.
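Roughly, the svm_guest_flag/svm_guest_vmcb_gpa pair used in the diff below
forms a handshake across the SMM window. A toy userspace model of that
handshake (struct smram64_demo, struct vcpu_demo and both helpers are
invented for illustration; they are not the KVM API):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Invented miniature of the two smram64 fields used by the SVM code. */
struct smram64_demo {
	uint64_t svm_guest_flag;	/* non-zero: SMI interrupted a nested guest */
	uint64_t svm_guest_vmcb_gpa;	/* where to find vmcb12 again on RSM */
};

struct vcpu_demo {
	bool in_guest_mode;
	uint64_t vmcb12_gpa;
};

static void demo_enter_smm(struct vcpu_demo *v, struct smram64_demo *s)
{
	if (!v->in_guest_mode)
		return;
	/* Stash "we were in a nested guest" plus the VMCB address in SMRAM. */
	s->svm_guest_flag = 1;
	s->svm_guest_vmcb_gpa = v->vmcb12_gpa;
	v->in_guest_mode = false;	/* the SMM handler runs outside L2 */
}

static void demo_leave_smm(struct vcpu_demo *v, const struct smram64_demo *s)
{
	if (!s->svm_guest_flag)
		return;	/* SMI arrived outside a nested guest: nothing to redo */
	/* Re-enter the nested guest from the saved VMCB address. */
	v->vmcb12_gpa = s->svm_guest_vmcb_gpa;
	v->in_guest_mode = true;
}

int main(void)
{
	struct smram64_demo smram = { 0, 0 };
	struct vcpu_demo vcpu = { true, 0xfeed0000 };

	demo_enter_smm(&vcpu, &smram);
	printf("in SMM:    guest_mode=%d flag=%llu\n",
	       vcpu.in_guest_mode, (unsigned long long)smram.svm_guest_flag);
	demo_leave_smm(&vcpu, &smram);
	printf("after RSM: guest_mode=%d vmcb12_gpa=%#llx\n",
	       vcpu.in_guest_mode, (unsigned long long)vcpu.vmcb12_gpa);
	return 0;
}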
Signed-off-by: Maxim Levitsky
Tested-by: Thomas Lamprecht
---
 arch/x86/include/asm/kvm_host.h |  6 ------
 arch/x86/kvm/svm/svm.c          | 21 ++++++---------------
 2 files changed, 6 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d752fabde94ad2..d570ec522ebb55 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2077,12 +2077,6 @@ static inline int kvm_cpu_get_apicid(int mps_cpu)
 #endif
 }
 
-#define put_smstate(type, buf, offset, val)                      \
-	*(type *)((buf) + (offset) - 0x7e00) = val
-
-#define GET_SMSTATE(type, buf, offset)		\
-	(*(type *)((buf) + (offset) - 0x7e00))
-
 int kvm_cpu_dirty_log_size(void);
 
 int memslot_rmap_alloc(struct kvm_memory_slot *slot, unsigned long npages);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 688315d1dfabd1..7ca5e06878e19a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4439,15 +4439,11 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
 	struct kvm_host_map map_save;
 	int ret;
 
-	char *smstate = (char *)smram;
-
 	if (!is_guest_mode(vcpu))
 		return 0;
 
-	/* FED8h - SVM Guest */
-	put_smstate(u64, smstate, 0x7ed8, 1);
-	/* FEE0h - SVM Guest VMCB Physical Address */
-	put_smstate(u64, smstate, 0x7ee0, svm->nested.vmcb12_gpa);
+	smram->smram64.svm_guest_flag = 1;
+	smram->smram64.svm_guest_vmcb_gpa = svm->nested.vmcb12_gpa;
 
 	svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
 	svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
@@ -4486,28 +4482,23 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct kvm_host_map map, map_save;
-	u64 saved_efer, vmcb12_gpa;
 	struct vmcb *vmcb12;
 	int ret;
 
-	const char *smstate = (const char *)smram;
-
 	if (!guest_cpuid_has(vcpu, X86_FEATURE_LM))
 		return 0;
 
 	/* Non-zero if SMI arrived while vCPU was in guest mode. */
-	if (!GET_SMSTATE(u64, smstate, 0x7ed8))
+	if (!smram->smram64.svm_guest_flag)
 		return 0;
 
 	if (!guest_cpuid_has(vcpu, X86_FEATURE_SVM))
 		return 1;
 
-	saved_efer = GET_SMSTATE(u64, smstate, 0x7ed0);
-	if (!(saved_efer & EFER_SVME))
+	if (!(smram->smram64.efer & EFER_SVME))
 		return 1;
 
-	vmcb12_gpa = GET_SMSTATE(u64, smstate, 0x7ee0);
-	if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(smram->smram64.svm_guest_vmcb_gpa), &map) == -EINVAL)
 		return 1;
 
 	ret = 1;
@@ -4533,7 +4524,7 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
 	vmcb12 = map.hva;
 	nested_copy_vmcb_control_to_cache(svm, &vmcb12->control);
 	nested_copy_vmcb_save_to_cache(svm, &vmcb12->save);
-	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, false);
+	ret = enter_svm_guest_mode(vcpu, smram->smram64.svm_guest_vmcb_gpa, vmcb12, false);
 
 	if (ret)
 		goto unmap_save;
-- 
2.26.3
Peter Anvin" , Joerg Roedel , Vitaly Kuznetsov , Paolo Bonzini Subject: [PATCH v3 12/13] KVM: x86: SVM: don't save SVM state to SMRAM when VM is not long mode capable Date: Wed, 3 Aug 2022 18:50:10 +0300 Message-Id: <20220803155011.43721-13-mlevitsk@redhat.com> In-Reply-To: <20220803155011.43721-1-mlevitsk@redhat.com> References: <20220803155011.43721-1-mlevitsk@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.78 on 10.11.54.3 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" When the guest CPUID doesn't have support for long mode, 32 bit SMRAM layout is used and it has no support for preserving EFER and/or SVM state. Note that this isn't relevant to running 32 bit guests on VM which is long mode capable - such VM can still run 32 bit guests in compatibility mode. Signed-off-by: Maxim Levitsky Tested-by: Thomas Lamprecht --- arch/x86/kvm/svm/svm.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 7ca5e06878e19a..64cfd26bc5e7a6 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -4442,6 +4442,15 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, unio= n kvm_smram *smram) if (!is_guest_mode(vcpu)) return 0; =20 + /* + * 32 bit SMRAM format doesn't preserve EFER and SVM state. + * SVM should not be enabled by the userspace without marking + * the CPU as at least long mode capable. + */ + + if (!guest_cpuid_has(vcpu, X86_FEATURE_LM)) + return 1; + smram->smram64.svm_guest_flag =3D 1; smram->smram64.svm_guest_vmcb_gpa =3D svm->nested.vmcb12_gpa; =20 --=20 2.26.3 From nobody Fri Dec 19 19:14:30 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6EEABC19F28 for ; Wed, 3 Aug 2022 15:52:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238251AbiHCPwg (ORCPT ); Wed, 3 Aug 2022 11:52:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50494 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236310AbiHCPvx (ORCPT ); Wed, 3 Aug 2022 11:51:53 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 598C54E613 for ; Wed, 3 Aug 2022 08:51:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1659541873; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=PD5y4cQGbEfNPI2pCjHVa112SuT0OnV1h5+PPA1jp5U=; b=aY9BVgqiA35uZwfPMnM1WKViMfyOGSYBNBTCt4nY7n3mLaJXyXIQNwUmqDg5YfzNGFlBAo gboFcu8oklJNr5TQWsyBtNfd0iG/MHU4L3bDtXgm7gAl/uX8gFpv7DOK68i6dI6TUukyUb 9u+3bSnODj+OaViSJ5LNtpvh33sO9pk= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-634-ENQY38BNMcuj35Ofl0mepA-1; Wed, 03 Aug 2022 11:51:11 -0400 X-MC-Unique: ENQY38BNMcuj35Ofl0mepA-1 Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com [10.11.54.3]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client 
From nobody Fri Dec 19 19:14:30 2025
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Borislav Petkov, Dave Hansen, linux-kernel@vger.kernel.org,
	Wanpeng Li, Maxim Levitsky, Ingo Molnar, Sean Christopherson,
	x86@kernel.org, Jim Mattson, Kees Cook, Thomas Gleixner,
	"H. Peter Anvin", Joerg Roedel, Vitaly Kuznetsov, Paolo Bonzini
Subject: [PATCH v3 13/13] KVM: x86: emulator/smm: preserve interrupt shadow
 in SMRAM
Date: Wed, 3 Aug 2022 18:50:11 +0300
Message-Id: <20220803155011.43721-14-mlevitsk@redhat.com>
In-Reply-To: <20220803155011.43721-1-mlevitsk@redhat.com>
References: <20220803155011.43721-1-mlevitsk@redhat.com>

When #SMI is asserted, the CPU can be in an interrupt shadow due to sti
or mov ss. The Intel/AMD PRM does not mandate that #SMI be blocked
during the shadow, and on top of that, since neither SVM nor VMX has
true support for an SMI window, waiting for one more instruction would
mean single stepping the guest.

Instead, allow #SMI in this case, but both reset the interrupt shadow
and stash its value in SMRAM, to restore it on exit from SMM.

This fixes rare failures seen mostly on Windows guests on VMX, when
#SMI falls on the sti instruction, which manifests as a VM entry
failure: EFLAGS.IF is not set, but the STI interrupt window is still
set in the VMCS.

Signed-off-by: Maxim Levitsky
Tested-by: Thomas Lamprecht
---
 arch/x86/kvm/emulate.c     | 17 ++++++++++++++---
 arch/x86/kvm/kvm_emulate.h | 10 ++++++----
 arch/x86/kvm/x86.c         | 12 ++++++++++++
 3 files changed, 32 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 4bdbc5893a1657..b4bc45cec3249d 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2447,7 +2447,7 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
 			     const struct kvm_smram_state_32 *smstate)
 {
 	struct desc_ptr dt;
-	int i;
+	int i, r;
 
 	ctxt->eflags = smstate->eflags | X86_EFLAGS_FIXED;
 	ctxt->_eip = smstate->eip;
@@ -2482,8 +2482,16 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
 
 	ctxt->ops->set_smbase(ctxt, smstate->smbase);
 
-	return rsm_enter_protected_mode(ctxt, smstate->cr0,
-					smstate->cr3, smstate->cr4);
+	r = rsm_enter_protected_mode(ctxt, smstate->cr0,
+				     smstate->cr3, smstate->cr4);
+
+	if (r != X86EMUL_CONTINUE)
+		return r;
+
+	ctxt->ops->set_int_shadow(ctxt, 0);
+	ctxt->interruptibility = (u8)smstate->int_shadow;
+
+	return X86EMUL_CONTINUE;
 }
 
 #ifdef CONFIG_X86_64
@@ -2532,6 +2540,9 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
 	rsm_load_seg_64(ctxt, &smstate->fs, VCPU_SREG_FS);
 	rsm_load_seg_64(ctxt, &smstate->gs, VCPU_SREG_GS);
 
+	ctxt->ops->set_int_shadow(ctxt, 0);
+	ctxt->interruptibility = (u8)smstate->int_shadow;
+
 	return X86EMUL_CONTINUE;
 }
 #endif
diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
index 76c0b8e7890b5d..a7313add0f2a58 100644
--- a/arch/x86/kvm/kvm_emulate.h
+++ b/arch/x86/kvm/kvm_emulate.h
@@ -234,6 +234,7 @@ struct x86_emulate_ops {
 	bool (*guest_has_rdpid)(struct x86_emulate_ctxt *ctxt);
 
 	void (*set_nmi_mask)(struct x86_emulate_ctxt *ctxt, bool masked);
+	void (*set_int_shadow)(struct x86_emulate_ctxt *ctxt, u8 shadow);
 
 	unsigned (*get_hflags)(struct x86_emulate_ctxt *ctxt);
 	void (*exiting_smm)(struct x86_emulate_ctxt *ctxt);
@@ -518,7 +519,8 @@ struct kvm_smram_state_32 {
 	u32 reserved1[62];
 	u32 smbase;
 	u32 smm_revision;
-	u32 reserved2[5];
+	u32 reserved2[4];
+	u32 int_shadow; /* KVM extension */
 	u32 cr4; /* CR4 is not present in Intel/AMD SMRAM image */
 	u32 reserved3[5];
 
@@ -566,6 +568,7 @@ static inline void __check_smram32_offsets(void)
 	__CHECK_SMRAM32_OFFSET(smbase,		0xFEF8);
 	__CHECK_SMRAM32_OFFSET(smm_revision,	0xFEFC);
 	__CHECK_SMRAM32_OFFSET(reserved2,	0xFF00);
+	__CHECK_SMRAM32_OFFSET(int_shadow,	0xFF10);
 	__CHECK_SMRAM32_OFFSET(cr4,		0xFF14);
 	__CHECK_SMRAM32_OFFSET(reserved3,	0xFF18);
 	__CHECK_SMRAM32_OFFSET(ds,		0xFF2C);
@@ -625,7 +628,7 @@ struct kvm_smram_state_64 {
 	u64 io_restart_rsi;
 	u64 io_restart_rdi;
 	u32 io_restart_dword;
-	u32 reserved1;
+	u32 int_shadow;
 	u8 io_inst_restart;
 	u8 auto_hlt_restart;
 	u8 reserved2[6];
@@ -663,7 +666,6 @@ struct kvm_smram_state_64 {
 	u64 gprs[16]; /* GPRS in a reversed "natural" X86 order (R15/R14/../RCX/RAX.) */
 };
 
-
 static inline void __check_smram64_offsets(void)
 {
 #define __CHECK_SMRAM64_OFFSET(field, offset) \
@@ -684,7 +686,7 @@ static inline void __check_smram64_offsets(void)
 	__CHECK_SMRAM64_OFFSET(io_restart_rsi,		0xFEB0);
 	__CHECK_SMRAM64_OFFSET(io_restart_rdi,		0xFEB8);
 	__CHECK_SMRAM64_OFFSET(io_restart_dword,	0xFEC0);
-	__CHECK_SMRAM64_OFFSET(reserved1,		0xFEC4);
+	__CHECK_SMRAM64_OFFSET(int_shadow,		0xFEC4);
 	__CHECK_SMRAM64_OFFSET(io_inst_restart,		0xFEC8);
 	__CHECK_SMRAM64_OFFSET(auto_hlt_restart,	0xFEC9);
 	__CHECK_SMRAM64_OFFSET(reserved2,		0xFECA);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4e3ef63baf83df..ae4c20cec7a9fc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8041,6 +8041,11 @@ static void emulator_set_nmi_mask(struct x86_emulate_ctxt *ctxt, bool masked)
 	static_call(kvm_x86_set_nmi_mask)(emul_to_vcpu(ctxt), masked);
 }
 
+static void emulator_set_int_shadow(struct x86_emulate_ctxt *ctxt, u8 shadow)
+{
+	static_call(kvm_x86_set_interrupt_shadow)(emul_to_vcpu(ctxt), shadow);
+}
+
 static unsigned emulator_get_hflags(struct x86_emulate_ctxt *ctxt)
 {
 	return emul_to_vcpu(ctxt)->arch.hflags;
@@ -8121,6 +8126,7 @@ static const struct x86_emulate_ops emulate_ops = {
 	.guest_has_fxsr      = emulator_guest_has_fxsr,
 	.guest_has_rdpid     = emulator_guest_has_rdpid,
 	.set_nmi_mask        = emulator_set_nmi_mask,
+	.set_int_shadow      = emulator_set_int_shadow,
 	.get_hflags          = emulator_get_hflags,
 	.exiting_smm         = emulator_exiting_smm,
 	.leave_smm           = emulator_leave_smm,
@@ -9903,6 +9909,8 @@ static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, struct kvm_smram_stat
 	smram->cr4 = kvm_read_cr4(vcpu);
 	smram->smm_revision = 0x00020000;
 	smram->smbase = vcpu->arch.smbase;
+
+	smram->int_shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
 }
 
 #ifdef CONFIG_X86_64
@@ -9951,6 +9959,8 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, struct kvm_smram_stat
 	enter_smm_save_seg_64(vcpu, &smram->ds, VCPU_SREG_DS);
 	enter_smm_save_seg_64(vcpu, &smram->fs, VCPU_SREG_FS);
 	enter_smm_save_seg_64(vcpu, &smram->gs, VCPU_SREG_GS);
+
+	smram->int_shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
 }
 #endif
 
@@ -9987,6 +9997,8 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 	kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
 	kvm_rip_write(vcpu, 0x8000);
 
+	static_call(kvm_x86_set_interrupt_shadow)(vcpu, 0);
+
 	cr0 = vcpu->arch.cr0 & ~(X86_CR0_PE | X86_CR0_EM | X86_CR0_TS | X86_CR0_PG);
 	static_call(kvm_x86_set_cr0)(vcpu, cr0);
 	vcpu->arch.cr0 = cr0;
-- 
2.26.3
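Summing up the interrupt shadow handling this last patch adds, here is a
standalone model of the stash-and-restore sequence (struct shadow_demo
and both helpers are invented for illustration; in KVM the saved value
travels through smram->int_shadow):

#include <stdint.h>
#include <stdio.h>

/* Invented miniature of the vCPU and SMRAM state the patch touches. */
struct shadow_demo {
	uint8_t cpu_int_shadow;		/* live interrupt shadow */
	uint32_t smram_int_shadow;	/* copy stashed in SMRAM */
};

static void demo_enter_smm(struct shadow_demo *s)
{
	/* Save the shadow to SMRAM, then clear it so the SMM handler
	 * starts from a clean state. */
	s->smram_int_shadow = s->cpu_int_shadow;
	s->cpu_int_shadow = 0;
}

static void demo_leave_smm(struct shadow_demo *s)
{
	/* On RSM, bring the saved shadow back. */
	s->cpu_int_shadow = (uint8_t)s->smram_int_shadow;
}

int main(void)
{
	/* #SMI lands right after an sti: the shadow is armed. */
	struct shadow_demo s = { 1, 0 };

	demo_enter_smm(&s);
	printf("in SMM:    shadow=%u (saved %u)\n",
	       (unsigned)s.cpu_int_shadow, (unsigned)s.smram_int_shadow);
	demo_leave_smm(&s);
	printf("after RSM: shadow=%u\n", (unsigned)s.cpu_int_shadow);
	return 0;
}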