From nobody Sun Apr 26 22:56:59 2026
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Sean Christopherson, x86@kernel.org, Kees Cook, Dave Hansen, linux-kernel@vger.kernel.org, "H. Peter Anvin", Borislav Petkov, Joerg Roedel, Ingo Molnar, Paolo Bonzini, Thomas Gleixner, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Maxim Levitsky
Subject: [PATCH v2 01/11] KVM: x86: emulator: em_sysexit should update ctxt->mode
Date: Tue, 21 Jun 2022 18:08:52 +0300
Message-Id: <20220621150902.46126-2-mlevitsk@redhat.com>
In-Reply-To: <20220621150902.46126-1-mlevitsk@redhat.com>

SYSEXIT is one of the instructions that can change the processor mode, so em_sysexit() should update ctxt->mode.

Note that this is likely a benign bug: the only problematic mode change is from 32-bit to 64-bit mode, which can lead to truncation of RIP, and that cannot happen with SYSEXIT, because SYSEXIT executed in 32-bit mode is limited to its 32-bit variant.
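To make the truncation concern concrete, here is a minimal standalone sketch (a hypothetical helper modeled on the emulator's IP assignment, not the actual KVM code) showing how assigning a new instruction pointer under a too-small operand size discards the upper RIP bits:

```c
#include <stdint.h>

/* Hypothetical model: when the effective operand size is smaller than
 * 8 bytes, the destination is masked down to op_bytes * 8 bits, which
 * truncates bits 63:32 of a 64-bit target if the emulator still
 * believes the CPU is in 32-bit mode. */
static uint64_t assign_ip(uint64_t dst, unsigned int op_bytes)
{
        if (op_bytes != sizeof(uint64_t))
                dst &= (1ULL << (op_bytes * 8)) - 1;
        return dst;
}
```

With op_bytes == 4, a 64-bit target loses its high half; with op_bytes == 8 it is preserved unchanged.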
Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/emulate.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 39ea9138224c62..5aeb343ca8b007 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2888,6 +2888,7 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt)
         ops->set_segment(ctxt, ss_sel, &ss, 0, VCPU_SREG_SS);

         ctxt->_eip = rdx;
+        ctxt->mode = usermode;
         *reg_write(ctxt, VCPU_REGS_RSP) = rcx;

         return X86EMUL_CONTINUE;
--
2.26.3

From nobody Sun Apr 26 22:56:59 2026
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Sean Christopherson, x86@kernel.org, Kees Cook, Dave Hansen, linux-kernel@vger.kernel.org, "H. Peter Anvin", Borislav Petkov, Joerg Roedel, Ingo Molnar, Paolo Bonzini, Thomas Gleixner, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Maxim Levitsky
Subject: [PATCH v2 02/11] KVM: x86: emulator: introduce update_emulation_mode
Date: Tue, 21 Jun 2022 18:08:53 +0300
Message-Id: <20220621150902.46126-3-mlevitsk@redhat.com>
In-Reply-To: <20220621150902.46126-1-mlevitsk@redhat.com>

Some instructions update the CPU execution mode, which in turn requires updating the emulation mode. Extract this logic into a new helper, update_emulation_mode(), and make assign_eip_far() use it.

assign_eip_far() now reads CS itself instead of receiving it via a parameter, which is fine because its callers always load CS with the same value before calling it.

No functional change is intended.
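The decision tree the new helper implements can be restated as a standalone sketch (hypothetical names, simplified from the patch; the real code also handles the failed-segment-read and EFER.LMA consistency error paths):

```c
#include <stdbool.h>

/* Simplified mode decision mirroring update_emulation_mode():
 * CR0.PE clear       -> real mode
 * EFLAGS.VM set      -> virtual-8086 mode
 * EFER.LMA and CS.L  -> 64-bit long mode
 * otherwise          -> 16/32-bit protected (or compatibility) mode,
 *                       chosen by the CS.D default-size bit. */
enum emul_mode { MODE_REAL, MODE_VM86, MODE_PROT16, MODE_PROT32, MODE_PROT64 };

static enum emul_mode decide_mode(bool cr0_pe, bool eflags_vm,
                                  bool efer_lma, bool cs_l, bool cs_d)
{
        if (!cr0_pe)
                return MODE_REAL;
        if (eflags_vm)
                return MODE_VM86;
        if (efer_lma && cs_l)
                return MODE_PROT64;
        return cs_d ? MODE_PROT32 : MODE_PROT16;
}
```

This is only the happy path; the patch additionally returns X86EMUL_UNHANDLEABLE when long mode is active in real or VM86 mode, or when CS cannot be read.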
Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/emulate.c | 85 ++++++++++++++++++++++++++++--------------
 1 file changed, 57 insertions(+), 28 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 5aeb343ca8b007..2c0087df2d7e6a 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -804,8 +804,7 @@ static int linearize(struct x86_emulate_ctxt *ctxt,
                            ctxt->mode, linear);
 }

-static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst,
-                             enum x86emul_mode mode)
+static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst)
 {
         ulong linear;
         int rc;
@@ -815,41 +814,71 @@ static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst,

         if (ctxt->op_bytes != sizeof(unsigned long))
                 addr.ea = dst & ((1UL << (ctxt->op_bytes << 3)) - 1);
-        rc = __linearize(ctxt, addr, &max_size, 1, false, true, mode, &linear);
+        rc = __linearize(ctxt, addr, &max_size, 1, false, true, ctxt->mode, &linear);
         if (rc == X86EMUL_CONTINUE)
                 ctxt->_eip = addr.ea;
         return rc;
 }

+static inline int update_emulation_mode(struct x86_emulate_ctxt *ctxt)
+{
+        u64 efer;
+        struct desc_struct cs;
+        u16 selector;
+        u32 base3;
+
+        ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
+
+        if (!(ctxt->ops->get_cr(ctxt, 0) & X86_CR0_PE)) {
+                /* Real mode. CPU must not have long mode active */
+                if (efer & EFER_LMA)
+                        return X86EMUL_UNHANDLEABLE;
+                ctxt->mode = X86EMUL_MODE_REAL;
+                return X86EMUL_CONTINUE;
+        }
+
+        if (ctxt->eflags & X86_EFLAGS_VM) {
+                /* Protected/VM86 mode. CPU must not have long mode active */
+                if (efer & EFER_LMA)
+                        return X86EMUL_UNHANDLEABLE;
+                ctxt->mode = X86EMUL_MODE_VM86;
+                return X86EMUL_CONTINUE;
+        }
+
+        if (!ctxt->ops->get_segment(ctxt, &selector, &cs, &base3, VCPU_SREG_CS))
+                return X86EMUL_UNHANDLEABLE;
+
+        if (efer & EFER_LMA) {
+                if (cs.l) {
+                        /* Proper long mode */
+                        ctxt->mode = X86EMUL_MODE_PROT64;
+                } else if (cs.d) {
+                        /* 32 bit compatibility mode */
+                        ctxt->mode = X86EMUL_MODE_PROT32;
+                } else {
+                        ctxt->mode = X86EMUL_MODE_PROT16;
+                }
+        } else {
+                /* Legacy 32 bit / 16 bit mode */
+                ctxt->mode = cs.d ? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16;
+        }
+
+        return X86EMUL_CONTINUE;
+}
+
 static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst)
 {
-        return assign_eip(ctxt, dst, ctxt->mode);
+        return assign_eip(ctxt, dst);
 }

-static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst,
-                          const struct desc_struct *cs_desc)
+static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst)
 {
-        enum x86emul_mode mode = ctxt->mode;
-        int rc;
+        int rc = update_emulation_mode(ctxt);

-#ifdef CONFIG_X86_64
-        if (ctxt->mode >= X86EMUL_MODE_PROT16) {
-                if (cs_desc->l) {
-                        u64 efer = 0;
+        if (rc != X86EMUL_CONTINUE)
+                return rc;

-                        ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
-                        if (efer & EFER_LMA)
-                                mode = X86EMUL_MODE_PROT64;
-                } else
-                        mode = X86EMUL_MODE_PROT32; /* temporary value */
-        }
-#endif
-        if (mode == X86EMUL_MODE_PROT16 || mode == X86EMUL_MODE_PROT32)
-                mode = cs_desc->d ? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16;
-        rc = assign_eip(ctxt, dst, mode);
-        if (rc == X86EMUL_CONTINUE)
-                ctxt->mode = mode;
-        return rc;
+        return assign_eip(ctxt, dst);
 }

 static inline int jmp_rel(struct x86_emulate_ctxt *ctxt, int rel)
@@ -2184,7 +2213,7 @@ static int em_jmp_far(struct x86_emulate_ctxt *ctxt)
         if (rc != X86EMUL_CONTINUE)
                 return rc;

-        rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
+        rc = assign_eip_far(ctxt, ctxt->src.val);
         /* Error handling is not implemented. */
         if (rc != X86EMUL_CONTINUE)
                 return X86EMUL_UNHANDLEABLE;
@@ -2262,7 +2291,7 @@ static int em_ret_far(struct x86_emulate_ctxt *ctxt)
                        &new_desc);
         if (rc != X86EMUL_CONTINUE)
                 return rc;
-        rc = assign_eip_far(ctxt, eip, &new_desc);
+        rc = assign_eip_far(ctxt, eip);
         /* Error handling is not implemented. */
         if (rc != X86EMUL_CONTINUE)
                 return X86EMUL_UNHANDLEABLE;
@@ -3482,7 +3511,7 @@ static int em_call_far(struct x86_emulate_ctxt *ctxt)
         if (rc != X86EMUL_CONTINUE)
                 return rc;

-        rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
+        rc = assign_eip_far(ctxt, ctxt->src.val);
         if (rc != X86EMUL_CONTINUE)
                 goto fail;

--
2.26.3

From nobody Sun Apr 26 22:56:59 2026
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Sean Christopherson, x86@kernel.org, Kees Cook, Dave Hansen, linux-kernel@vger.kernel.org, "H. Peter Anvin", Borislav Petkov, Joerg Roedel, Ingo Molnar, Paolo Bonzini, Thomas Gleixner, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Maxim Levitsky
Subject: [PATCH v2 03/11] KVM: x86: emulator: remove assign_eip_near/far
Date: Tue, 21 Jun 2022 18:08:54 +0300
Message-Id: <20220621150902.46126-4-mlevitsk@redhat.com>
In-Reply-To: <20220621150902.46126-1-mlevitsk@redhat.com>

Now that assign_eip_far() just updates the emulation mode in addition to updating RIP, it doesn't make sense to keep it as a separate function. Move the mode update to the callers and remove both assign_eip_near() and assign_eip_far().

No functional change is intended.

Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/emulate.c | 47 +++++++++++++++++++++---------------------
 1 file changed, 24 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 2c0087df2d7e6a..334a06e6c9b093 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -866,24 +866,9 @@ static inline int update_emulation_mode(struct x86_emulate_ctxt *ctxt)
         return X86EMUL_CONTINUE;
 }

-static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst)
-{
-        return assign_eip(ctxt, dst);
-}
-
-static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst)
-{
-        int rc = update_emulation_mode(ctxt);
-
-        if (rc != X86EMUL_CONTINUE)
-                return rc;
-
-        return assign_eip(ctxt, dst);
-}
-
 static inline int jmp_rel(struct x86_emulate_ctxt *ctxt, int rel)
 {
-        return assign_eip_near(ctxt, ctxt->_eip + rel);
+        return assign_eip(ctxt, ctxt->_eip + rel);
 }

 static int linear_read_system(struct x86_emulate_ctxt *ctxt, ulong linear,
@@ -2213,7 +2198,12 @@ static int em_jmp_far(struct x86_emulate_ctxt *ctxt)
         if (rc != X86EMUL_CONTINUE)
                 return rc;

-        rc = assign_eip_far(ctxt, ctxt->src.val);
+        rc = update_emulation_mode(ctxt);
+        if (rc != X86EMUL_CONTINUE)
+                return rc;
+
+        rc = assign_eip(ctxt, ctxt->src.val);
+
         /* Error handling is not implemented. */
         if (rc != X86EMUL_CONTINUE)
                 return X86EMUL_UNHANDLEABLE;
@@ -2223,7 +2213,7 @@ static int em_jmp_far(struct x86_emulate_ctxt *ctxt)

 static int em_jmp_abs(struct x86_emulate_ctxt *ctxt)
 {
-        return assign_eip_near(ctxt, ctxt->src.val);
+        return assign_eip(ctxt, ctxt->src.val);
 }

 static int em_call_near_abs(struct x86_emulate_ctxt *ctxt)
@@ -2232,7 +2222,7 @@ static int em_call_near_abs(struct x86_emulate_ctxt *ctxt)
         long int old_eip;

         old_eip = ctxt->_eip;
-        rc = assign_eip_near(ctxt, ctxt->src.val);
+        rc = assign_eip(ctxt, ctxt->src.val);
         if (rc != X86EMUL_CONTINUE)
                 return rc;
         ctxt->src.val = old_eip;
@@ -2270,7 +2260,7 @@ static int em_ret(struct x86_emulate_ctxt *ctxt)
         if (rc != X86EMUL_CONTINUE)
                 return rc;

-        return assign_eip_near(ctxt, eip);
+        return assign_eip(ctxt, eip);
 }

 static int em_ret_far(struct x86_emulate_ctxt *ctxt)
@@ -2291,7 +2281,13 @@ static int em_ret_far(struct x86_emulate_ctxt *ctxt)
                        &new_desc);
         if (rc != X86EMUL_CONTINUE)
                 return rc;
-        rc = assign_eip_far(ctxt, eip);
+
+        rc = update_emulation_mode(ctxt);
+        if (rc != X86EMUL_CONTINUE)
+                return rc;
+
+        rc = assign_eip(ctxt, eip);
+
         /* Error handling is not implemented. */
         if (rc != X86EMUL_CONTINUE)
                 return X86EMUL_UNHANDLEABLE;
@@ -3511,7 +3507,12 @@ static int em_call_far(struct x86_emulate_ctxt *ctxt)
         if (rc != X86EMUL_CONTINUE)
                 return rc;

-        rc = assign_eip_far(ctxt, ctxt->src.val);
+        rc = update_emulation_mode(ctxt);
+        if (rc != X86EMUL_CONTINUE)
+                return rc;
+
+        rc = assign_eip(ctxt, ctxt->src.val);
+
         if (rc != X86EMUL_CONTINUE)
                 goto fail;

@@ -3544,7 +3545,7 @@ static int em_ret_near_imm(struct x86_emulate_ctxt *ctxt)
         rc = emulate_pop(ctxt, &eip, ctxt->op_bytes);
         if (rc != X86EMUL_CONTINUE)
                 return rc;
-        rc = assign_eip_near(ctxt, eip);
+        rc = assign_eip(ctxt, eip);
         if (rc != X86EMUL_CONTINUE)
                 return rc;
         rsp_increment(ctxt, ctxt->src.val);
--
2.26.3

From nobody Sun Apr 26 22:56:59 2026
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Sean Christopherson, x86@kernel.org, Kees Cook, Dave Hansen, linux-kernel@vger.kernel.org, "H. Peter Anvin", Borislav Petkov, Joerg Roedel, Ingo Molnar, Paolo Bonzini, Thomas Gleixner, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Maxim Levitsky
Subject: [PATCH v2 04/11] KVM: x86: emulator: update the emulation mode after rsm
Date: Tue, 21 Jun 2022 18:08:55 +0300
Message-Id: <20220621150902.46126-5-mlevitsk@redhat.com>
In-Reply-To: <20220621150902.46126-1-mlevitsk@redhat.com>

This ensures that RIP is correctly written back, because the RSM instruction can switch the CPU mode from 32-bit (or less) to 64-bit.

This fixes a guest crash in case a #SMI is received while the guest runs code at an address that does not fit in 32 bits.
Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/emulate.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 334a06e6c9b093..6f4632babc4cd8 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2662,6 +2662,11 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
         if (ret != X86EMUL_CONTINUE)
                 goto emulate_shutdown;

+
+        ret = update_emulation_mode(ctxt);
+        if (ret != X86EMUL_CONTINUE)
+                goto emulate_shutdown;
+
         /*
          * Note, the ctxt->ops callbacks are responsible for handling side
          * effects when writing MSRs and CRs, e.g. MMU context resets, CPUID
--
2.26.3

From nobody Sun Apr 26 22:56:59 2026
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Sean Christopherson, x86@kernel.org, Kees Cook, Dave Hansen, linux-kernel@vger.kernel.org, "H. Peter Anvin", Borislav Petkov, Joerg Roedel, Ingo Molnar, Paolo Bonzini, Thomas Gleixner, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Maxim Levitsky
Subject: [PATCH v2 05/11] KVM: x86: emulator: update the emulation mode after CR0 write
Date: Tue, 21 Jun 2022 18:08:56 +0300
Message-Id: <20220621150902.46126-6-mlevitsk@redhat.com>
In-Reply-To: <20220621150902.46126-1-mlevitsk@redhat.com>

CR0.PE toggles real/protected mode, so a write to CR0 should update the emulation mode.

This is likely a benign bug, because no state other than the RIP increment is written back, and when toggling CR0.PE the CPU has to be executing code from a very low memory address anyway.
Signed-off-by: Maxim Levitsky --- arch/x86/kvm/emulate.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index 6f4632babc4cd8..002687d17f9364 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -3659,11 +3659,22 @@ static int em_movbe(struct x86_emulate_ctxt *ctxt) =20 static int em_cr_write(struct x86_emulate_ctxt *ctxt) { - if (ctxt->ops->set_cr(ctxt, ctxt->modrm_reg, ctxt->src.val)) + int cr_num =3D ctxt->modrm_reg; + int r; + + if (ctxt->ops->set_cr(ctxt, cr_num, ctxt->src.val)) return emulate_gp(ctxt, 0); =20 /* Disable writeback. */ ctxt->dst.type =3D OP_NONE; + + if (cr_num =3D=3D 0) { + /* CR0 write might have updated CR0.PE */ + r =3D update_emulation_mode(ctxt); + if (r !=3D X86EMUL_CONTINUE) + return r; + } + return X86EMUL_CONTINUE; } =20 --=20 2.26.3 From nobody Sun Apr 26 22:56:59 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C896DC433EF for ; Tue, 21 Jun 2022 15:09:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352349AbiFUPJ4 (ORCPT ); Tue, 21 Jun 2022 11:09:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44102 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352291AbiFUPJp (ORCPT ); Tue, 21 Jun 2022 11:09:45 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 83A6A28983 for ; Tue, 21 Jun 2022 08:09:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1655824182; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: 
content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=NYiOoA3CHr2nOmHO8vU5aX4tOn3kv3w+htprdLNd+uI=; b=iNM0NVZxCgqzkjVjnMPFYG/Xi2n4EnkiTDX1NFqnsCE8y9nL2yAe1EeODCpVKOkPnqZ+kr 3gDH0uYGNn7nEC77NhyiAanIZIRUpCNdStxjLjfU5pm4bjKJ13qbn7tpaq73AH6pxewYZF f3O/l2xmsC8zZ/N5jkv9OR5YrpCHBeE= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-119-sBrtEnZDMfi6RFbwg89pdw-1; Tue, 21 Jun 2022 11:09:32 -0400 X-MC-Unique: sBrtEnZDMfi6RFbwg89pdw-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 250BA1C08966; Tue, 21 Jun 2022 15:09:31 +0000 (UTC) Received: from localhost.localdomain (unknown [10.40.194.180]) by smtp.corp.redhat.com (Postfix) with ESMTP id A6D4610725; Tue, 21 Jun 2022 15:09:27 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Sean Christopherson , x86@kernel.org, Kees Cook , Dave Hansen , linux-kernel@vger.kernel.org, "H. 
Peter Anvin" , Borislav Petkov , Joerg Roedel , Ingo Molnar , Paolo Bonzini , Thomas Gleixner , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Maxim Levitsky Subject: [PATCH v2 06/11] KVM: x86: emulator/smm: number of GPRs in the SMRAM image depends on the image format Date: Tue, 21 Jun 2022 18:08:57 +0300 Message-Id: <20220621150902.46126-7-mlevitsk@redhat.com> In-Reply-To: <20220621150902.46126-1-mlevitsk@redhat.com> References: <20220621150902.46126-1-mlevitsk@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" On 64 bit host, if the guest doesn't have X86_FEATURE_LM, we would access 16 gprs to 32-bit smram image, causing out-ouf-bound ram access. On 32 bit host, the rsm_load_state_64/enter_smm_save_state_64 is compiled out, thus access overflow can't happen. Fixes: b443183a25ab61 ("KVM: x86: Reduce the number of emulator GPRs to '8'= for 32-bit KVM") Signed-off-by: Maxim Levitsky Reviewed-by: Sean Christopherson --- arch/x86/kvm/emulate.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index 002687d17f9364..ce186aebca8e83 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -2469,7 +2469,7 @@ static int rsm_load_state_32(struct x86_emulate_ctxt = *ctxt, ctxt->eflags =3D GET_SMSTATE(u32, smstate, 0x7ff4) | X86_EFLA= GS_FIXED; ctxt->_eip =3D GET_SMSTATE(u32, smstate, 0x7ff0); =20 - for (i =3D 0; i < NR_EMULATOR_GPRS; i++) + for (i =3D 0; i < 8; i++) *reg_write(ctxt, i) =3D GET_SMSTATE(u32, smstate, 0x7fd0 + i * 4); =20 val =3D GET_SMSTATE(u32, smstate, 0x7fcc); @@ -2526,7 +2526,7 @@ static int rsm_load_state_64(struct x86_emulate_ctxt = *ctxt, u16 selector; int i, r; =20 - for (i =3D 0; i < NR_EMULATOR_GPRS; i++) + for (i =3D 0; i < 16; i++) *reg_write(ctxt, i) =3D GET_SMSTATE(u64, smstate, 0x7ff8 - i * 
8); =20 ctxt->_eip =3D GET_SMSTATE(u64, smstate, 0x7f78); --=20 2.26.3 From nobody Sun Apr 26 22:56:59 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 381AECCA473 for ; Tue, 21 Jun 2022 15:09:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351168AbiFUPJu (ORCPT ); Tue, 21 Jun 2022 11:09:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44090 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352308AbiFUPJo (ORCPT ); Tue, 21 Jun 2022 11:09:44 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 27D61286E2 for ; Tue, 21 Jun 2022 08:09:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1655824178; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=d9bdeuqQcUBSdjmQo8dMHBeTNo5swwTJ8mubljrij5I=; b=YGE4BpLCgYqicIk3n/EGvfplysa6SebhLQd9ctuL69MEfej6f+cNOk1yRnaON6vBOvp5VU +eliJg71PDXbJw1pZHHzlSaNwKgI5AgVERlwK/QUQk5T0SaMJn5Hj1TzEhDcamji6436Yq 670oDFrugbQwpUY1zBYd3ApQEOiW5vc= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-168-shJJ2STeMgyU3YMrYn421g-1; Tue, 21 Jun 2022 11:09:36 -0400 X-MC-Unique: shJJ2STeMgyU3YMrYn421g-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by 
mimecast-mx02.redhat.com (Postfix) with ESMTPS id F142C85A580; Tue, 21 Jun 2022 15:09:34 +0000 (UTC) Received: from localhost.localdomain (unknown [10.40.194.180]) by smtp.corp.redhat.com (Postfix) with ESMTP id 802EF18EA3; Tue, 21 Jun 2022 15:09:31 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Sean Christopherson , x86@kernel.org, Kees Cook , Dave Hansen , linux-kernel@vger.kernel.org, "H. Peter Anvin" , Borislav Petkov , Joerg Roedel , Ingo Molnar , Paolo Bonzini , Thomas Gleixner , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Maxim Levitsky Subject: [PATCH v2 07/11] KVM: x86: emulator/smm: add structs for KVM's smram layout Date: Tue, 21 Jun 2022 18:08:58 +0300 Message-Id: <20220621150902.46126-8-mlevitsk@redhat.com> In-Reply-To: <20220621150902.46126-1-mlevitsk@redhat.com> References: <20220621150902.46126-1-mlevitsk@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" These structs will be used to read/write the SMRAM state image. Also document the differences between KVM's SMRAM layout and the SMRAM layout used by real Intel/AMD CPUs. 
Signed-off-by: Maxim Levitsky --- arch/x86/kvm/kvm_emulate.h | 139 +++++++++++++++++++++++++++++++++++++ 1 file changed, 139 insertions(+) diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h index 89246446d6aa9d..7015728da36d5f 100644 --- a/arch/x86/kvm/kvm_emulate.h +++ b/arch/x86/kvm/kvm_emulate.h @@ -503,6 +503,145 @@ enum x86_intercept { nr_x86_intercepts }; =20 + +/* + * 32 bit KVM's emulated SMM layout + * Loosely based on Intel's layout + */ + +struct kvm_smm_seg_state_32 { + u32 flags; + u32 limit; + u32 base; +} __packed; + +struct kvm_smram_state_32 { + + u32 reserved1[62]; /* FE00 - FEF7 */ + u32 smbase; /* FEF8 */ + u32 smm_revision; /* FEFC */ + u32 reserved2[5]; /* FF00-FF13 */ + /* CR4 is not present in Intel/AMD SMRAM image*/ + u32 cr4; /* FF14 */ + u32 reserved3[5]; /* FF18 */ + + /* + * Segment state is not present/documented in the + * Intel/AMD SMRAM image + */ + struct kvm_smm_seg_state_32 ds; /* FF2C */ + struct kvm_smm_seg_state_32 fs; /* FF38 */ + struct kvm_smm_seg_state_32 gs; /* FF44 */ + /* idtr has only base and limit*/ + struct kvm_smm_seg_state_32 idtr; /* FF50 */ + struct kvm_smm_seg_state_32 tr; /* FF5C */ + u32 reserved; /* FF68 */ + /* gdtr has only base and limit*/ + struct kvm_smm_seg_state_32 gdtr; /* FF6C */ + struct kvm_smm_seg_state_32 ldtr; /* FF78 */ + struct kvm_smm_seg_state_32 es; /* FF84 */ + struct kvm_smm_seg_state_32 cs; /* FF90 */ + struct kvm_smm_seg_state_32 ss; /* FF9C */ + + u32 es_sel; /* FFA8 */ + u32 cs_sel; /* FFAC */ + u32 ss_sel; /* FFB0 */ + u32 ds_sel; /* FFB4 */ + u32 fs_sel; /* FFB8 */ + u32 gs_sel; /* FFBC */ + u32 ldtr_sel; /* FFC0 */ + u32 tr_sel; /* FFC4 */ + + u32 dr7; /* FFC8 */ + u32 dr6; /* FFCC */ + + /* GPRS in the "natural" X86 order (RAX/RCX/RDX.../RDI)*/ + u32 gprs[8]; /* FFD0-FFEC */ + + u32 eip; /* FFF0 */ + u32 eflags; /* FFF4 */ + u32 cr3; /* FFF8 */ + u32 cr0; /* FFFC */ +} __packed; + +/* + * 64 bit KVM's emulated SMM layout + * Based on AMD64 layout + */ + +struct 
kvm_smm_seg_state_64 { + u16 selector; + u16 attributes; + u32 limit; + u64 base; +}; + +struct kvm_smram_state_64 { + struct kvm_smm_seg_state_64 es; /* FE00 (R/O) */ + struct kvm_smm_seg_state_64 cs; /* FE10 (R/O) */ + struct kvm_smm_seg_state_64 ss; /* FE20 (R/O) */ + struct kvm_smm_seg_state_64 ds; /* FE30 (R/O) */ + struct kvm_smm_seg_state_64 fs; /* FE40 (R/O) */ + struct kvm_smm_seg_state_64 gs; /* FE50 (R/O) */ + + /* gdtr has only base and limit*/ + struct kvm_smm_seg_state_64 gdtr; /* FE60 (R/O) */ + struct kvm_smm_seg_state_64 ldtr; /* FE70 (R/O) */ + + /* idtr has only base and limit*/ + struct kvm_smm_seg_state_64 idtr; /* FE80 (R/O) */ + struct kvm_smm_seg_state_64 tr; /* FE90 (R/O) */ + + /* I/O restart and auto halt restart are not implemented by KVM */ + u64 io_restart_rip; /* FEA0 (R/O) */ + u64 io_restart_rcx; /* FEA8 (R/O) */ + u64 io_restart_rsi; /* FEB0 (R/O) */ + u64 io_restart_rdi; /* FEB8 (R/O) */ + u32 io_restart_dword; /* FEC0 (R/O) */ + u32 reserved1; /* FEC4 */ + u8 io_instruction_restart; /* FEC8 (R/W) */ + u8 auto_halt_restart; /* FEC9 (R/W) */ + u8 reserved2[6]; /* FECA-FECF */ + + u64 efer; /* FED0 (R/O) */ + + /* + * Implemented on AMD only, to store current SVM guest address. + * svm_guest_virtual_int has unknown purpose, not implemented. 
+ */ + + u64 svm_guest_flag; /* FED8 (R/O) */ + u64 svm_guest_vmcb_gpa; /* FEE0 (R/O) */ + u64 svm_guest_virtual_int; /* FEE8 (R/O) */ + + u32 reserved3[3]; /* FEF0-FEFB */ + u32 smm_revison; /* FEFC (R/O) */ + u32 smbase; /* FF00 (R/W) */ + u32 reserved4[5]; /* FF04-FF17 */ + + /* SSP and SVM fields below are not implemented by KVM */ + u64 ssp; /* FF18 (R/W) */ + u64 svm_guest_pat; /* FF20 (R/O) */ + u64 svm_host_efer; /* FF28 (R/O) */ + u64 svm_host_cr4; /* FF30 (R/O) */ + u64 svm_host_cr3; /* FF38 (R/O) */ + u64 svm_host_cr0; /* FF40 (R/O) */ + + u64 cr4; /* FF48 (R/O) */ + u64 cr3; /* FF50 (R/O) */ + u64 cr0; /* FF58 (R/O) */ + + u64 dr7; /* FF60 (R/O) */ + u64 dr6; /* FF68 (R/O) */ + + u64 rflags; /* FF70 (R/W) */ + u64 rip; /* FF78 (R/W) */ + + /* GPRS in a reversed "natural" X86 order (R15/R14/../RCX/RAX.) */ + u64 gprs[16]; /* FF80-FFFF (R/W) */ +}; + + /* Host execution mode. */ #if defined(CONFIG_X86_32) #define X86EMUL_MODE_HOST X86EMUL_MODE_PROT32 --=20 2.26.3 From nobody Sun Apr 26 22:56:59 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1610BC433EF for ; Tue, 21 Jun 2022 15:09:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352312AbiFUPJy (ORCPT ); Tue, 21 Jun 2022 11:09:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44082 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352311AbiFUPJp (ORCPT ); Tue, 21 Jun 2022 11:09:45 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 70E9E2715C for ; Tue, 21 Jun 2022 08:09:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1655824182; 
h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ZWGkZqIjT7aQTWcqZ+1ciTjgVqpXFyF9QMnvNGdcu9c=; b=QOfaUIlWvTBV+HRPCYgt63EUbY+2SdgjuDHyPJb2wLPbADfGNephLb1qAPX3wctNVKCe67 GalqLvJeeALo9XGHS6DH0O9uqnbF1JTajE9Ze6tBOtOuHNRRyk//fk/htkVpDhWx4bsoOW Jwjdzv802ukC1vPt1Ts5WQ/lywB2fxY= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-504-4-zpjV5XP2-RhKlAqRrl_A-1; Tue, 21 Jun 2022 11:09:39 -0400 X-MC-Unique: 4-zpjV5XP2-RhKlAqRrl_A-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id C70593C10229; Tue, 21 Jun 2022 15:09:38 +0000 (UTC) Received: from localhost.localdomain (unknown [10.40.194.180]) by smtp.corp.redhat.com (Postfix) with ESMTP id 582909D7F; Tue, 21 Jun 2022 15:09:35 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Sean Christopherson , x86@kernel.org, Kees Cook , Dave Hansen , linux-kernel@vger.kernel.org, "H. 
Peter Anvin" , Borislav Petkov , Joerg Roedel , Ingo Molnar , Paolo Bonzini , Thomas Gleixner , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Maxim Levitsky Subject: [PATCH v2 08/11] KVM: x86: emulator/smm: use smram struct for 32 bit smram load/restore Date: Tue, 21 Jun 2022 18:08:59 +0300 Message-Id: <20220621150902.46126-9-mlevitsk@redhat.com> In-Reply-To: <20220621150902.46126-1-mlevitsk@redhat.com> References: <20220621150902.46126-1-mlevitsk@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Use kvm_smram_state_32 struct to save/restore 32 bit SMM state (used when X86_FEATURE_LM is not present in the guest CPUID). Signed-off-by: Maxim Levitsky --- arch/x86/kvm/emulate.c | 81 +++++++++++++++--------------------------- arch/x86/kvm/x86.c | 75 +++++++++++++++++--------------------- 2 files changed, 60 insertions(+), 96 deletions(-) diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index ce186aebca8e83..6d263906054689 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -2367,25 +2367,17 @@ static void rsm_set_desc_flags(struct desc_struct *= desc, u32 flags) desc->type =3D (flags >> 8) & 15; } =20 -static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, const char *smst= ate, +static void rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, + struct kvm_smm_seg_state_32 *state, + u16 selector, int n) { struct desc_struct desc; - int offset; - u16 selector; - - selector =3D GET_SMSTATE(u32, smstate, 0x7fa8 + n * 4); - - if (n < 3) - offset =3D 0x7f84 + n * 12; - else - offset =3D 0x7f2c + (n - 3) * 12; =20 - set_desc_base(&desc, GET_SMSTATE(u32, smstate, offset + 8)); - set_desc_limit(&desc, GET_SMSTATE(u32, smstate, offset + 4)); - rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, offset)); + set_desc_base(&desc, state->base); + set_desc_limit(&desc, 
state->limit); + rsm_set_desc_flags(&desc, state->flags); ctxt->ops->set_segment(ctxt, selector, &desc, 0, n); - return X86EMUL_CONTINUE; } =20 #ifdef CONFIG_X86_64 @@ -2456,63 +2448,46 @@ static int rsm_enter_protected_mode(struct x86_emul= ate_ctxt *ctxt, } =20 static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt, - const char *smstate) + struct kvm_smram_state_32 *smstate) { - struct desc_struct desc; struct desc_ptr dt; - u16 selector; - u32 val, cr0, cr3, cr4; int i; =20 - cr0 =3D GET_SMSTATE(u32, smstate, 0x7ffc); - cr3 =3D GET_SMSTATE(u32, smstate, 0x7ff8); - ctxt->eflags =3D GET_SMSTATE(u32, smstate, 0x7ff4) | X86_EFLA= GS_FIXED; - ctxt->_eip =3D GET_SMSTATE(u32, smstate, 0x7ff0); + ctxt->eflags =3D smstate->eflags | X86_EFLAGS_FIXED; + ctxt->_eip =3D smstate->eip; =20 for (i =3D 0; i < 8; i++) - *reg_write(ctxt, i) =3D GET_SMSTATE(u32, smstate, 0x7fd0 + i * 4); - - val =3D GET_SMSTATE(u32, smstate, 0x7fcc); + *reg_write(ctxt, i) =3D smstate->gprs[i]; =20 - if (ctxt->ops->set_dr(ctxt, 6, val)) + if (ctxt->ops->set_dr(ctxt, 6, smstate->dr6)) return X86EMUL_UNHANDLEABLE; - - val =3D GET_SMSTATE(u32, smstate, 0x7fc8); - - if (ctxt->ops->set_dr(ctxt, 7, val)) + if (ctxt->ops->set_dr(ctxt, 7, smstate->dr7)) return X86EMUL_UNHANDLEABLE; =20 - selector =3D GET_SMSTATE(u32, smstate, 0x7fc4); - set_desc_base(&desc, GET_SMSTATE(u32, smstate, 0x7f64)); - set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7f60)); - rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7f5c)); - ctxt->ops->set_segment(ctxt, selector, &desc, 0, VCPU_SREG_TR); + rsm_load_seg_32(ctxt, &smstate->tr, smstate->tr_sel, VCPU_SREG_TR); + rsm_load_seg_32(ctxt, &smstate->ldtr, smstate->ldtr_sel, VCPU_SREG_LDTR); =20 - selector =3D GET_SMSTATE(u32, smstate, 0x7fc0); - set_desc_base(&desc, GET_SMSTATE(u32, smstate, 0x7f80)); - set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7f7c)); - rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7f78)); - ctxt->ops->set_segment(ctxt, selector, &desc, 
0, VCPU_SREG_LDTR); =20 - dt.address =3D GET_SMSTATE(u32, smstate, 0x7f74); - dt.size =3D GET_SMSTATE(u32, smstate, 0x7f70); + dt.address =3D smstate->gdtr.base; + dt.size =3D smstate->gdtr.limit; ctxt->ops->set_gdt(ctxt, &dt); =20 - dt.address =3D GET_SMSTATE(u32, smstate, 0x7f58); - dt.size =3D GET_SMSTATE(u32, smstate, 0x7f54); + dt.address =3D smstate->idtr.base; + dt.size =3D smstate->idtr.limit; ctxt->ops->set_idt(ctxt, &dt); =20 - for (i =3D 0; i < 6; i++) { - int r =3D rsm_load_seg_32(ctxt, smstate, i); - if (r !=3D X86EMUL_CONTINUE) - return r; - } + rsm_load_seg_32(ctxt, &smstate->es, smstate->es_sel, VCPU_SREG_ES); + rsm_load_seg_32(ctxt, &smstate->cs, smstate->cs_sel, VCPU_SREG_CS); + rsm_load_seg_32(ctxt, &smstate->ss, smstate->ss_sel, VCPU_SREG_SS); =20 - cr4 =3D GET_SMSTATE(u32, smstate, 0x7f14); + rsm_load_seg_32(ctxt, &smstate->ds, smstate->ds_sel, VCPU_SREG_DS); + rsm_load_seg_32(ctxt, &smstate->fs, smstate->fs_sel, VCPU_SREG_FS); + rsm_load_seg_32(ctxt, &smstate->gs, smstate->gs_sel, VCPU_SREG_GS); =20 - ctxt->ops->set_smbase(ctxt, GET_SMSTATE(u32, smstate, 0x7ef8)); + ctxt->ops->set_smbase(ctxt, smstate->smbase); =20 - return rsm_enter_protected_mode(ctxt, cr0, cr3, cr4); + return rsm_enter_protected_mode(ctxt, smstate->cr0, + smstate->cr3, smstate->cr4); } =20 #ifdef CONFIG_X86_64 @@ -2657,7 +2632,7 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt) ret =3D rsm_load_state_64(ctxt, buf); else #endif - ret =3D rsm_load_state_32(ctxt, buf); + ret =3D rsm_load_state_32(ctxt, (struct kvm_smram_state_32 *)buf); =20 if (ret !=3D X86EMUL_CONTINUE) goto emulate_shutdown; diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 00e23dc518e091..a1bbf2ed520769 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -9674,22 +9674,18 @@ static u32 enter_smm_get_segment_flags(struct kvm_s= egment *seg) return flags; } =20 -static void enter_smm_save_seg_32(struct kvm_vcpu *vcpu, char *buf, int n) +static void enter_smm_save_seg_32(struct kvm_vcpu 
*vcpu, + struct kvm_smm_seg_state_32 *state, + u32 *selector, + int n) { struct kvm_segment seg; - int offset; =20 kvm_get_segment(vcpu, &seg, n); - put_smstate(u32, buf, 0x7fa8 + n * 4, seg.selector); - - if (n < 3) - offset =3D 0x7f84 + n * 12; - else - offset =3D 0x7f2c + (n - 3) * 12; - - put_smstate(u32, buf, offset + 8, seg.base); - put_smstate(u32, buf, offset + 4, seg.limit); - put_smstate(u32, buf, offset, enter_smm_get_segment_flags(&seg)); + *selector =3D seg.selector; + state->base =3D seg.base; + state->limit =3D seg.limit; + state->flags =3D enter_smm_get_segment_flags(&seg); } =20 #ifdef CONFIG_X86_64 @@ -9710,54 +9706,47 @@ static void enter_smm_save_seg_64(struct kvm_vcpu *= vcpu, char *buf, int n) } #endif =20 -static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, char *buf) +static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, struct kvm_smra= m_state_32 *smram) { struct desc_ptr dt; - struct kvm_segment seg; unsigned long val; int i; =20 - put_smstate(u32, buf, 0x7ffc, kvm_read_cr0(vcpu)); - put_smstate(u32, buf, 0x7ff8, kvm_read_cr3(vcpu)); - put_smstate(u32, buf, 0x7ff4, kvm_get_rflags(vcpu)); - put_smstate(u32, buf, 0x7ff0, kvm_rip_read(vcpu)); + smram->cr0 =3D kvm_read_cr0(vcpu); + smram->cr3 =3D kvm_read_cr3(vcpu); + smram->eflags =3D kvm_get_rflags(vcpu); + smram->eip =3D kvm_rip_read(vcpu); =20 for (i =3D 0; i < 8; i++) - put_smstate(u32, buf, 0x7fd0 + i * 4, kvm_register_read_raw(vcpu, i)); + smram->gprs[i] =3D kvm_register_read_raw(vcpu, i); =20 kvm_get_dr(vcpu, 6, &val); - put_smstate(u32, buf, 0x7fcc, (u32)val); + smram->dr6 =3D (u32)val; kvm_get_dr(vcpu, 7, &val); - put_smstate(u32, buf, 0x7fc8, (u32)val); + smram->dr7 =3D (u32)val; =20 - kvm_get_segment(vcpu, &seg, VCPU_SREG_TR); - put_smstate(u32, buf, 0x7fc4, seg.selector); - put_smstate(u32, buf, 0x7f64, seg.base); - put_smstate(u32, buf, 0x7f60, seg.limit); - put_smstate(u32, buf, 0x7f5c, enter_smm_get_segment_flags(&seg)); - - kvm_get_segment(vcpu, &seg, 
VCPU_SREG_LDTR); - put_smstate(u32, buf, 0x7fc0, seg.selector); - put_smstate(u32, buf, 0x7f80, seg.base); - put_smstate(u32, buf, 0x7f7c, seg.limit); - put_smstate(u32, buf, 0x7f78, enter_smm_get_segment_flags(&seg)); + enter_smm_save_seg_32(vcpu, &smram->tr, &smram->tr_sel, VCPU_SREG_TR); + enter_smm_save_seg_32(vcpu, &smram->ldtr, &smram->ldtr_sel, VCPU_SREG_LDT= R); =20 static_call(kvm_x86_get_gdt)(vcpu, &dt); - put_smstate(u32, buf, 0x7f74, dt.address); - put_smstate(u32, buf, 0x7f70, dt.size); + smram->gdtr.base =3D dt.address; + smram->gdtr.limit =3D dt.size; =20 static_call(kvm_x86_get_idt)(vcpu, &dt); - put_smstate(u32, buf, 0x7f58, dt.address); - put_smstate(u32, buf, 0x7f54, dt.size); + smram->idtr.base =3D dt.address; + smram->idtr.limit =3D dt.size; =20 - for (i =3D 0; i < 6; i++) - enter_smm_save_seg_32(vcpu, buf, i); + enter_smm_save_seg_32(vcpu, &smram->es, &smram->es_sel, VCPU_SREG_ES); + enter_smm_save_seg_32(vcpu, &smram->cs, &smram->cs_sel, VCPU_SREG_CS); + enter_smm_save_seg_32(vcpu, &smram->ss, &smram->ss_sel, VCPU_SREG_SS); =20 - put_smstate(u32, buf, 0x7f14, kvm_read_cr4(vcpu)); + enter_smm_save_seg_32(vcpu, &smram->ds, &smram->ds_sel, VCPU_SREG_DS); + enter_smm_save_seg_32(vcpu, &smram->fs, &smram->fs_sel, VCPU_SREG_FS); + enter_smm_save_seg_32(vcpu, &smram->gs, &smram->gs_sel, VCPU_SREG_GS); =20 - /* revision id */ - put_smstate(u32, buf, 0x7efc, 0x00020000); - put_smstate(u32, buf, 0x7ef8, vcpu->arch.smbase); + smram->cr4 =3D kvm_read_cr4(vcpu); + smram->smm_revision =3D 0x00020000; + smram->smbase =3D vcpu->arch.smbase; } =20 #ifdef CONFIG_X86_64 @@ -9828,7 +9817,7 @@ static void enter_smm(struct kvm_vcpu *vcpu) enter_smm_save_state_64(vcpu, buf); else #endif - enter_smm_save_state_32(vcpu, buf); + enter_smm_save_state_32(vcpu, (struct kvm_smram_state_32 *)buf); =20 /* * Give enter_smm() a chance to make ISA-specific changes to the vCPU --=20 2.26.3 From nobody Sun Apr 26 22:56:59 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 
3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 51544C43334 for ; Tue, 21 Jun 2022 15:10:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352383AbiFUPKE (ORCPT ); Tue, 21 Jun 2022 11:10:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44086 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352318AbiFUPJq (ORCPT ); Tue, 21 Jun 2022 11:09:46 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id B3BB0140A1 for ; Tue, 21 Jun 2022 08:09:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1655824184; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=v4me+rx5lprMbOUkm8ynaUjPa/ULejOXXSA3qQv2brc=; b=fR9cpnrm2LywGoThIMusro8IL8ONQekWaD02H0GqKFUNzsTwzEOB6J6Dq6REIQwuJ2bxVN fLz5es9Mot+5yojwZMRDxErsvGMdp0Yi85jx74A7iLKK+pPw7GAxu1eD9ToHBz6wlTUzBU L9tZOUZ7RD1ErNNTUXQvSjoEgZI9o0c= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-247-5awzXp5tNrKrwepOWkhN-g-1; Tue, 21 Jun 2022 11:09:43 -0400 X-MC-Unique: 5awzXp5tNrKrwepOWkhN-g-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id A1962802D1F; Tue, 21 Jun 2022 15:09:42 +0000 (UTC) Received: from localhost.localdomain (unknown [10.40.194.180]) by 
smtp.corp.redhat.com (Postfix) with ESMTP id 2DEB518EA3; Tue, 21 Jun 2022 15:09:38 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Sean Christopherson , x86@kernel.org, Kees Cook , Dave Hansen , linux-kernel@vger.kernel.org, "H. Peter Anvin" , Borislav Petkov , Joerg Roedel , Ingo Molnar , Paolo Bonzini , Thomas Gleixner , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Maxim Levitsky Subject: [PATCH v2 09/11] KVM: x86: emulator/smm: use smram struct for 64 bit smram load/restore Date: Tue, 21 Jun 2022 18:09:00 +0300 Message-Id: <20220621150902.46126-10-mlevitsk@redhat.com> In-Reply-To: <20220621150902.46126-1-mlevitsk@redhat.com> References: <20220621150902.46126-1-mlevitsk@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.79 on 10.11.54.5 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Use kvm_smram_state_64 struct to save/restore the 64 bit SMM state (used when X86_FEATURE_LM is present in the guest CPUID, regardless of 32-bitness of the guest). 
Signed-off-by: Maxim Levitsky --- arch/x86/kvm/emulate.c | 88 ++++++++++++++---------------------------- arch/x86/kvm/x86.c | 75 ++++++++++++++++------------------- 2 files changed, 62 insertions(+), 101 deletions(-) diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index 6d263906054689..7a3a042d6b862a 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -2381,24 +2381,16 @@ static void rsm_load_seg_32(struct x86_emulate_ctxt= *ctxt, } =20 #ifdef CONFIG_X86_64 -static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, const char *smst= ate, - int n) +static void rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, + struct kvm_smm_seg_state_64 *state, + int n) { struct desc_struct desc; - int offset; - u16 selector; - u32 base3; - - offset =3D 0x7e00 + n * 16; - - selector =3D GET_SMSTATE(u16, smstate, offset); - rsm_set_desc_flags(&desc, GET_SMSTATE(u16, smstate, offset + 2) << 8); - set_desc_limit(&desc, GET_SMSTATE(u32, smstate, offset + 4)); - set_desc_base(&desc, GET_SMSTATE(u32, smstate, offset + 8)); - base3 =3D GET_SMSTATE(u32, smstate, offset + 12); =20 - ctxt->ops->set_segment(ctxt, selector, &desc, base3, n); - return X86EMUL_CONTINUE; + rsm_set_desc_flags(&desc, state->attributes << 8); + set_desc_limit(&desc, state->limit); + set_desc_base(&desc, (u32)state->base); + ctxt->ops->set_segment(ctxt, state->selector, &desc, state->base >> 32, n= ); } #endif =20 @@ -2492,71 +2484,49 @@ static int rsm_load_state_32(struct x86_emulate_ctx= t *ctxt, =20 #ifdef CONFIG_X86_64 static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, - const char *smstate) + struct kvm_smram_state_64 *smstate) { - struct desc_struct desc; struct desc_ptr dt; - u64 val, cr0, cr3, cr4; - u32 base3; - u16 selector; int i, r; =20 for (i =3D 0; i < 16; i++) - *reg_write(ctxt, i) =3D GET_SMSTATE(u64, smstate, 0x7ff8 - i * 8); + *reg_write(ctxt, i) =3D smstate->gprs[15 - i]; =20 - ctxt->_eip =3D GET_SMSTATE(u64, smstate, 0x7f78); - ctxt->eflags =3D GET_SMSTATE(u32, 
smstate, 0x7f70) | X86_EFLAGS_FIXED; + ctxt->_eip =3D smstate->rip; + ctxt->eflags =3D smstate->rflags | X86_EFLAGS_FIXED; =20 - val =3D GET_SMSTATE(u64, smstate, 0x7f68); - - if (ctxt->ops->set_dr(ctxt, 6, val)) + if (ctxt->ops->set_dr(ctxt, 6, smstate->dr6)) return X86EMUL_UNHANDLEABLE; - - val =3D GET_SMSTATE(u64, smstate, 0x7f60); - - if (ctxt->ops->set_dr(ctxt, 7, val)) + if (ctxt->ops->set_dr(ctxt, 7, smstate->dr7)) return X86EMUL_UNHANDLEABLE; =20 - cr0 =3D GET_SMSTATE(u64, smstate, 0x7f58); - cr3 =3D GET_SMSTATE(u64, smstate, 0x7f50); - cr4 =3D GET_SMSTATE(u64, smstate, 0x7f48); - ctxt->ops->set_smbase(ctxt, GET_SMSTATE(u32, smstate, 0x7f00)); - val =3D GET_SMSTATE(u64, smstate, 0x7ed0); + ctxt->ops->set_smbase(ctxt, smstate->smbase); =20 - if (ctxt->ops->set_msr(ctxt, MSR_EFER, val & ~EFER_LMA)) + if (ctxt->ops->set_msr(ctxt, MSR_EFER, smstate->efer & ~EFER_LMA)) return X86EMUL_UNHANDLEABLE; =20 - selector =3D GET_SMSTATE(u32, smstate, 0x7e90); - rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7e92) << 8); - set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7e94)); - set_desc_base(&desc, GET_SMSTATE(u32, smstate, 0x7e98)); - base3 =3D GET_SMSTATE(u32, smstate, 0x7e9c); - ctxt->ops->set_segment(ctxt, selector, &desc, base3, VCPU_SREG_TR); + rsm_load_seg_64(ctxt, &smstate->tr, VCPU_SREG_TR); =20 - dt.size =3D GET_SMSTATE(u32, smstate, 0x7e84); - dt.address =3D GET_SMSTATE(u64, smstate, 0x7e88); + dt.size =3D smstate->idtr.limit; + dt.address =3D smstate->idtr.base; ctxt->ops->set_idt(ctxt, &dt); =20 - selector =3D GET_SMSTATE(u32, smstate, 0x7e70); - rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7e72) << 8); - set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7e74)); - set_desc_base(&desc, GET_SMSTATE(u32, smstate, 0x7e78)); - base3 =3D GET_SMSTATE(u32, smstate, 0x7e7c); - ctxt->ops->set_segment(ctxt, selector, &desc, base3, VCPU_SREG_LDTR); + rsm_load_seg_64(ctxt, &smstate->ldtr, VCPU_SREG_LDTR); =20 - dt.size =3D GET_SMSTATE(u32, smstate, 
0x7e64); - dt.address =3D GET_SMSTATE(u64, smstate, 0x7e68); + dt.size =3D smstate->gdtr.limit; + dt.address =3D smstate->gdtr.base; ctxt->ops->set_gdt(ctxt, &dt); =20 - r =3D rsm_enter_protected_mode(ctxt, cr0, cr3, cr4); + r =3D rsm_enter_protected_mode(ctxt, smstate->cr0, smstate->cr3, smstate-= >cr4); if (r !=3D X86EMUL_CONTINUE) return r; =20 - for (i =3D 0; i < 6; i++) { - r =3D rsm_load_seg_64(ctxt, smstate, i); - if (r !=3D X86EMUL_CONTINUE) - return r; - } + rsm_load_seg_64(ctxt, &smstate->es, VCPU_SREG_ES); + rsm_load_seg_64(ctxt, &smstate->cs, VCPU_SREG_CS); + rsm_load_seg_64(ctxt, &smstate->ss, VCPU_SREG_SS); + rsm_load_seg_64(ctxt, &smstate->ds, VCPU_SREG_DS); + rsm_load_seg_64(ctxt, &smstate->fs, VCPU_SREG_FS); + rsm_load_seg_64(ctxt, &smstate->gs, VCPU_SREG_GS); =20 return X86EMUL_CONTINUE; } @@ -2629,7 +2599,7 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt) =20 #ifdef CONFIG_X86_64 if (emulator_has_longmode(ctxt)) - ret =3D rsm_load_state_64(ctxt, buf); + ret =3D rsm_load_state_64(ctxt, (struct kvm_smram_state_64 *)buf); else #endif ret =3D rsm_load_state_32(ctxt, (struct kvm_smram_state_32 *)buf); diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index a1bbf2ed520769..a1b138f0815d30 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -9689,20 +9689,17 @@ static void enter_smm_save_seg_32(struct kvm_vcpu *= vcpu, } =20 #ifdef CONFIG_X86_64 -static void enter_smm_save_seg_64(struct kvm_vcpu *vcpu, char *buf, int n) +static void enter_smm_save_seg_64(struct kvm_vcpu *vcpu, + struct kvm_smm_seg_state_64 *state, + int n) { struct kvm_segment seg; - int offset; - u16 flags; =20 kvm_get_segment(vcpu, &seg, n); - offset =3D 0x7e00 + n * 16; - - flags =3D enter_smm_get_segment_flags(&seg) >> 8; - put_smstate(u16, buf, offset, seg.selector); - put_smstate(u16, buf, offset + 2, flags); - put_smstate(u32, buf, offset + 4, seg.limit); - put_smstate(u64, buf, offset + 8, seg.base); + state->selector =3D seg.selector; + state->attributes =3D 
enter_smm_get_segment_flags(&seg) >> 8; + state->limit =3D seg.limit; + state->base =3D seg.base; } #endif =20 @@ -9750,57 +9747,51 @@ static void enter_smm_save_state_32(struct kvm_vcpu= *vcpu, struct kvm_smram_stat } =20 #ifdef CONFIG_X86_64 -static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf) +static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, struct kvm_smra= m_state_64 *smram) { struct desc_ptr dt; - struct kvm_segment seg; unsigned long val; int i; =20 for (i =3D 0; i < 16; i++) - put_smstate(u64, buf, 0x7ff8 - i * 8, kvm_register_read_raw(vcpu, i)); + smram->gprs[15 - i] =3D kvm_register_read_raw(vcpu, i); + + smram->rip =3D kvm_rip_read(vcpu); + smram->rflags =3D kvm_get_rflags(vcpu); =20 - put_smstate(u64, buf, 0x7f78, kvm_rip_read(vcpu)); - put_smstate(u32, buf, 0x7f70, kvm_get_rflags(vcpu)); =20 kvm_get_dr(vcpu, 6, &val); - put_smstate(u64, buf, 0x7f68, val); + smram->dr6 =3D val; kvm_get_dr(vcpu, 7, &val); - put_smstate(u64, buf, 0x7f60, val); - - put_smstate(u64, buf, 0x7f58, kvm_read_cr0(vcpu)); - put_smstate(u64, buf, 0x7f50, kvm_read_cr3(vcpu)); - put_smstate(u64, buf, 0x7f48, kvm_read_cr4(vcpu)); + smram->dr7 =3D val; =20 - put_smstate(u32, buf, 0x7f00, vcpu->arch.smbase); + smram->cr0 =3D kvm_read_cr0(vcpu); + smram->cr3 =3D kvm_read_cr3(vcpu); + smram->cr4 =3D kvm_read_cr4(vcpu); =20 - /* revision id */ - put_smstate(u32, buf, 0x7efc, 0x00020064); + smram->smbase =3D vcpu->arch.smbase; + smram->smm_revison =3D 0x00020064; =20 - put_smstate(u64, buf, 0x7ed0, vcpu->arch.efer); + smram->efer =3D vcpu->arch.efer; =20 - kvm_get_segment(vcpu, &seg, VCPU_SREG_TR); - put_smstate(u16, buf, 0x7e90, seg.selector); - put_smstate(u16, buf, 0x7e92, enter_smm_get_segment_flags(&seg) >> 8); - put_smstate(u32, buf, 0x7e94, seg.limit); - put_smstate(u64, buf, 0x7e98, seg.base); + enter_smm_save_seg_64(vcpu, &smram->tr, VCPU_SREG_TR); =20 static_call(kvm_x86_get_idt)(vcpu, &dt); - put_smstate(u32, buf, 0x7e84, dt.size); - put_smstate(u64, 
buf, 0x7e88, dt.address);
+	smram->idtr.limit = dt.size;
+	smram->idtr.base = dt.address;
 
-	kvm_get_segment(vcpu, &seg, VCPU_SREG_LDTR);
-	put_smstate(u16, buf, 0x7e70, seg.selector);
-	put_smstate(u16, buf, 0x7e72, enter_smm_get_segment_flags(&seg) >> 8);
-	put_smstate(u32, buf, 0x7e74, seg.limit);
-	put_smstate(u64, buf, 0x7e78, seg.base);
+	enter_smm_save_seg_64(vcpu, &smram->ldtr, VCPU_SREG_LDTR);
 
 	static_call(kvm_x86_get_gdt)(vcpu, &dt);
-	put_smstate(u32, buf, 0x7e64, dt.size);
-	put_smstate(u64, buf, 0x7e68, dt.address);
+	smram->gdtr.limit = dt.size;
+	smram->gdtr.base = dt.address;
 
-	for (i = 0; i < 6; i++)
-		enter_smm_save_seg_64(vcpu, buf, i);
+	enter_smm_save_seg_64(vcpu, &smram->es, VCPU_SREG_ES);
+	enter_smm_save_seg_64(vcpu, &smram->cs, VCPU_SREG_CS);
+	enter_smm_save_seg_64(vcpu, &smram->ss, VCPU_SREG_SS);
+	enter_smm_save_seg_64(vcpu, &smram->ds, VCPU_SREG_DS);
+	enter_smm_save_seg_64(vcpu, &smram->fs, VCPU_SREG_FS);
+	enter_smm_save_seg_64(vcpu, &smram->gs, VCPU_SREG_GS);
 }
 #endif
 
@@ -9814,7 +9805,7 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 	memset(buf, 0, 512);
 #ifdef CONFIG_X86_64
 	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
-		enter_smm_save_state_64(vcpu, buf);
+		enter_smm_save_state_64(vcpu, (struct kvm_smram_state_64 *)buf);
 	else
 #endif
 		enter_smm_save_state_32(vcpu, (struct kvm_smram_state_32 *)buf);
-- 
2.26.3

From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Sean Christopherson, x86@kernel.org, Kees Cook, Dave Hansen, linux-kernel@vger.kernel.org, "H. Peter Anvin", Borislav Petkov, Joerg Roedel, Ingo Molnar, Paolo Bonzini, Thomas Gleixner, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Maxim Levitsky
Subject: [PATCH v2 10/11] KVM: x86: SVM: use smram structs
Date: Tue, 21 Jun 2022 18:09:01 +0300
Message-Id: <20220621150902.46126-11-mlevitsk@redhat.com>
In-Reply-To: <20220621150902.46126-1-mlevitsk@redhat.com>
References: <20220621150902.46126-1-mlevitsk@redhat.com>

This removes the last users of put_smstate/GET_SMSTATE, so remove these
macros as well.

Also add a sanity check that we don't attempt to enter SMM on a guest CPU
that is not long-mode capable while a nested guest is running.

Signed-off-by: Maxim Levitsky
---
 arch/x86/include/asm/kvm_host.h |  6 ------
 arch/x86/kvm/svm/svm.c          | 28 +++++++++++++++++-----------
 2 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1038ccb7056a39..9e8467be96b4e6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2057,12 +2057,6 @@ static inline int kvm_cpu_get_apicid(int mps_cpu)
 #endif
 }
 
-#define put_smstate(type, buf, offset, val) \
-	*(type *)((buf) + (offset) - 0x7e00) = val
-
-#define GET_SMSTATE(type, buf, offset) \
-	(*(type *)((buf) + (offset) - 0x7e00))
-
 int kvm_cpu_dirty_log_size(void);
 
 int memslot_rmap_alloc(struct kvm_memory_slot *slot, unsigned long npages);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 136298cfb3fb57..8dcbbe839bef36 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4399,6 +4399,7 @@ static int svm_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 
 static int svm_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 {
+	struct kvm_smram_state_64 *smram = (struct kvm_smram_state_64 *)smstate;
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct kvm_host_map map_save;
 	int ret;
@@ -4406,10 +4407,17 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 	if (!is_guest_mode(vcpu))
 		return 0;
 
-	/* FED8h - SVM Guest */
-	put_smstate(u64, smstate, 0x7ed8, 1);
-	/* FEE0h - SVM Guest VMCB Physical Address */
-	put_smstate(u64, smstate, 0x7ee0, svm->nested.vmcb12_gpa);
+	/*
+	 * 32 bit SMRAM format doesn't preserve EFER and SVM state.
+	 * SVM should not be enabled by the userspace without marking
+	 * the CPU as at least long mode capable.
+	 */
+
+	if (!guest_cpuid_has(vcpu, X86_FEATURE_SVM))
+		return 1;
+
+	smram->svm_guest_flag = 1;
+	smram->svm_guest_vmcb_gpa = svm->nested.vmcb12_gpa;
 
 	svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
 	svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
@@ -4446,9 +4454,9 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 
 static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 {
+	struct kvm_smram_state_64 *smram = (struct kvm_smram_state_64 *)smstate;
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct kvm_host_map map, map_save;
-	u64 saved_efer, vmcb12_gpa;
 	struct vmcb *vmcb12;
 	int ret;
 
@@ -4456,18 +4464,16 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 		return 0;
 
 	/* Non-zero if SMI arrived while vCPU was in guest mode.
	 */
-	if (!GET_SMSTATE(u64, smstate, 0x7ed8))
+	if (!smram->svm_guest_flag)
 		return 0;
 
 	if (!guest_cpuid_has(vcpu, X86_FEATURE_SVM))
 		return 1;
 
-	saved_efer = GET_SMSTATE(u64, smstate, 0x7ed0);
-	if (!(saved_efer & EFER_SVME))
+	if (!(smram->efer & EFER_SVME))
 		return 1;
 
-	vmcb12_gpa = GET_SMSTATE(u64, smstate, 0x7ee0);
-	if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(smram->svm_guest_vmcb_gpa), &map) == -EINVAL)
 		return 1;
 
 	ret = 1;
@@ -4493,7 +4499,7 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 	vmcb12 = map.hva;
 	nested_copy_vmcb_control_to_cache(svm, &vmcb12->control);
 	nested_copy_vmcb_save_to_cache(svm, &vmcb12->save);
-	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, false);
+	ret = enter_svm_guest_mode(vcpu, smram->svm_guest_vmcb_gpa, vmcb12, false);
 
 	if (ret)
 		goto unmap_save;
-- 
2.26.3

From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Sean Christopherson, x86@kernel.org, Kees Cook, Dave Hansen, linux-kernel@vger.kernel.org, "H. Peter Anvin", Borislav Petkov, Joerg Roedel, Ingo Molnar, Paolo Bonzini, Thomas Gleixner, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Maxim Levitsky
Subject: [PATCH v2 11/11] KVM: x86: emulator/smm: preserve interrupt shadow in SMRAM
Date: Tue, 21 Jun 2022 18:09:02 +0300
Message-Id: <20220621150902.46126-12-mlevitsk@redhat.com>
In-Reply-To: <20220621150902.46126-1-mlevitsk@redhat.com>
References: <20220621150902.46126-1-mlevitsk@redhat.com>

When #SMI is asserted, the CPU can be in an interrupt shadow due to sti or
mov ss. The Intel and AMD manuals do not require #SMI to be blocked during
the shadow, and on top of that, since neither SVM nor VMX has true support
for an SMI window, waiting for one more instruction would mean
single-stepping the guest.

Instead, allow #SMI in this case, but both reset the interrupt shadow and
stash its value in SMRAM to restore it on exit from SMM.

This fixes rare failures seen mostly on Windows guests on VMX, when #SMI
falls on the sti instruction, which manifest as a VM entry failure due to
EFLAGS.IF not being set while the STI interrupt window is still set in the
VMCS.
Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/emulate.c     | 17 ++++++++++++++---
 arch/x86/kvm/kvm_emulate.h | 13 ++++++++++---
 arch/x86/kvm/x86.c         | 12 ++++++++++++
 3 files changed, 36 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 7a3a042d6b862a..d4ede5216491ad 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2443,7 +2443,7 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
 			     struct kvm_smram_state_32 *smstate)
 {
 	struct desc_ptr dt;
-	int i;
+	int i, r;
 
 	ctxt->eflags = smstate->eflags | X86_EFLAGS_FIXED;
 	ctxt->_eip = smstate->eip;
@@ -2478,8 +2478,16 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
 
 	ctxt->ops->set_smbase(ctxt, smstate->smbase);
 
-	return rsm_enter_protected_mode(ctxt, smstate->cr0,
-					smstate->cr3, smstate->cr4);
+	r = rsm_enter_protected_mode(ctxt, smstate->cr0,
+				     smstate->cr3, smstate->cr4);
+
+	if (r != X86EMUL_CONTINUE)
+		return r;
+
+	ctxt->ops->set_int_shadow(ctxt, 0);
+	ctxt->interruptibility = (u8)smstate->int_shadow;
+
+	return X86EMUL_CONTINUE;
 }
 
 #ifdef CONFIG_X86_64
@@ -2528,6 +2536,9 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
 	rsm_load_seg_64(ctxt, &smstate->fs, VCPU_SREG_FS);
 	rsm_load_seg_64(ctxt, &smstate->gs, VCPU_SREG_GS);
 
+	ctxt->ops->set_int_shadow(ctxt, 0);
+	ctxt->interruptibility = (u8)smstate->int_shadow;
+
 	return X86EMUL_CONTINUE;
 }
 #endif
diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
index 7015728da36d5f..11928306439c77 100644
--- a/arch/x86/kvm/kvm_emulate.h
+++ b/arch/x86/kvm/kvm_emulate.h
@@ -232,6 +232,7 @@ struct x86_emulate_ops {
 	bool (*guest_has_rdpid)(struct x86_emulate_ctxt *ctxt);
 
 	void (*set_nmi_mask)(struct x86_emulate_ctxt *ctxt, bool masked);
+	void (*set_int_shadow)(struct x86_emulate_ctxt *ctxt, u8 shadow);
 
 	unsigned (*get_hflags)(struct x86_emulate_ctxt *ctxt);
 	void (*exiting_smm)(struct x86_emulate_ctxt *ctxt);
@@ -520,7 +521,9 @@ struct kvm_smram_state_32 {
 	u32 reserved1[62];		/* FE00 - FEF7 */
 	u32 smbase;			/* FEF8 */
 	u32 smm_revision;		/* FEFC */
-	u32 reserved2[5];		/* FF00-FF13 */
+	u32 reserved2[4];		/* FF00-FF0F */
+	/* int_shadow is a KVM extension */
+	u32 int_shadow;			/* FF10 */
 	/* CR4 is not present in Intel/AMD SMRAM image */
 	u32 cr4;			/* FF14 */
 	u32 reserved3[5];		/* FF18 */
@@ -592,13 +595,17 @@ struct kvm_smram_state_64 {
 	struct kvm_smm_seg_state_64 idtr;	/* FE80 (R/O) */
 	struct kvm_smm_seg_state_64 tr;		/* FE90 (R/O) */
 
-	/* I/O restart and auto halt restart are not implemented by KVM */
+	/*
+	 * I/O restart and auto halt restart are not implemented by KVM.
+	 * int_shadow is a KVM extension.
+	 */
 	u64 io_restart_rip;		/* FEA0 (R/O) */
 	u64 io_restart_rcx;		/* FEA8 (R/O) */
 	u64 io_restart_rsi;		/* FEB0 (R/O) */
 	u64 io_restart_rdi;		/* FEB8 (R/O) */
 	u32 io_restart_dword;		/* FEC0 (R/O) */
-	u32 reserved1;			/* FEC4 */
+	u32 int_shadow;			/* FEC4 (R/O) */
 	u8 io_instruction_restart;	/* FEC8 (R/W) */
 	u8 auto_halt_restart;		/* FEC9 (R/W) */
 	u8 reserved2[6];		/* FECA-FECF */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a1b138f0815d30..665134b1096b25 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7887,6 +7887,11 @@ static void emulator_set_nmi_mask(struct x86_emulate_ctxt *ctxt, bool masked)
 	static_call(kvm_x86_set_nmi_mask)(emul_to_vcpu(ctxt), masked);
 }
 
+static void emulator_set_int_shadow(struct x86_emulate_ctxt *ctxt, u8 shadow)
+{
+	static_call(kvm_x86_set_interrupt_shadow)(emul_to_vcpu(ctxt), shadow);
+}
+
 static unsigned emulator_get_hflags(struct x86_emulate_ctxt *ctxt)
 {
 	return emul_to_vcpu(ctxt)->arch.hflags;
@@ -7967,6 +7972,7 @@ static const struct x86_emulate_ops emulate_ops = {
 	.guest_has_fxsr = emulator_guest_has_fxsr,
 	.guest_has_rdpid = emulator_guest_has_rdpid,
 	.set_nmi_mask = emulator_set_nmi_mask,
+	.set_int_shadow = emulator_set_int_shadow,
 	.get_hflags = emulator_get_hflags,
 	.exiting_smm = emulator_exiting_smm,
 	.leave_smm = emulator_leave_smm,
@@ -9744,6 +9750,8 @@ static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, struct kvm_smram_stat
 	smram->cr4 = kvm_read_cr4(vcpu);
 	smram->smm_revision = 0x00020000;
 	smram->smbase = vcpu->arch.smbase;
+
+	smram->int_shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
 }
 
 #ifdef CONFIG_X86_64
@@ -9792,6 +9800,8 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, struct kvm_smram_stat
 	enter_smm_save_seg_64(vcpu, &smram->ds, VCPU_SREG_DS);
 	enter_smm_save_seg_64(vcpu, &smram->fs, VCPU_SREG_FS);
 	enter_smm_save_seg_64(vcpu, &smram->gs, VCPU_SREG_GS);
+
+	smram->int_shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
}
 #endif
 
@@ -9828,6 +9838,8 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 	kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
 	kvm_rip_write(vcpu, 0x8000);
 
+	static_call(kvm_x86_set_interrupt_shadow)(vcpu, 0);
+
 	cr0 = vcpu->arch.cr0 & ~(X86_CR0_PE | X86_CR0_EM | X86_CR0_TS | X86_CR0_PG);
 	static_call(kvm_x86_set_cr0)(vcpu, cr0);
 	vcpu->arch.cr0 = cr0;
-- 
2.26.3