From nobody Sat Apr 27 16:08:04 2024
From: Sergio Andres Gomez Del Real
To: qemu-devel@nongnu.org
Date: Mon, 4 Sep 2017 22:54:44 -0500
Message-Id: <20170905035457.3753-2-Sergio.G.DelReal@gmail.com>
In-Reply-To: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com>
References: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com>
Subject: [Qemu-devel] [PATCH v3 01/14] hvf: add support for Hypervisor.framework in the configure script
Cc: Sergio Andres Gomez Del Real , pbonzini@redhat.com, stefanha@gmail.com

This patch adds to the configure script the code to support the
--enable-hvf argument. If the OS is Darwin, it checks for the presence of
HVF on the system. The patch also adds strings related to HVF to the file
qemu-options.hx. QEMU will only support the modern syntax style
'-M accel=hvf' to enable hvf; the legacy '-enable-hvf' will not be
supported.

Signed-off-by: Sergio Andres Gomez Del Real --- configure | 38 ++++++++++++++++++++++++++++++++++++++ qemu-options.hx | 10 +++++----- 2 files changed, 43 insertions(+), 5 deletions(-) diff --git a/configure b/configure index dd73cce62f..5d9152b80a 100755 --- a/configure +++ b/configure @@ -211,6 +211,17 @@ supported_xen_target() { return 1 } =20 +supported_hvf_target() { + test "$hvf" =3D "yes" || return 1 + glob "$1" "*-softmmu" || return 1 + case "${1%-softmmu}" in + x86_64) + return 0 + ;; + esac + return 1 +} + supported_target() { case "$1" in *-softmmu) @@ -236,6 +247,7 @@ supported_target() { supported_kvm_target "$1" && return 0 supported_xen_target "$1" && return 0 supported_hax_target "$1" && return 0 + supported_hvf_target "$1" && return 0 print_error "TCG disabled, but hardware accelerator not available for = '$target'" return 1 } @@ -309,6 +321,7 @@ vhost_vsock=3D"no" vhost_user=3D"" kvm=3D"no" hax=3D"no" +hvf=3D"no" rdma=3D"" gprof=3D"no" debug_tcg=3D"no" @@ -727,6 +740,7 @@ Darwin) bsd=3D"yes" darwin=3D"yes" hax=3D"yes" + hvf=3D"yes" LDFLAGS_SHARED=3D"-bundle -undefined dynamic_lookup" if [ "$cpu" =3D "x86_64" ] ; then QEMU_CFLAGS=3D"-arch x86_64 $QEMU_CFLAGS" @@ -1027,6 +1041,10 @@ for opt do ;; --enable-hax) hax=3D"yes" ;; + --disable-hvf) hvf=3D"no" + ;; + --enable-hvf) hvf=3D"yes" + ;; --disable-tcg-interpreter) tcg_interpreter=3D"no" ;; --enable-tcg-interpreter) tcg_interpreter=3D"yes" @@ -1499,6 +1517,7 @@ disabled with --disable-FEATURE, default is enabled i= f available: bluez bluez stack connectivity kvm KVM acceleration support hax HAX acceleration support + hvf Hypervisor.framework acceleration support rdma RDMA-based migration support vde support for vde network netmap support for netmap network @@ -4900,6 +4919,21 @@ then fi =20 =20 +################################################# +# Check to see if we have the Hypervisor framework +if [ "$darwin" =3D=3D "yes" ] ; then + cat > $TMPC << EOF +#include <Hypervisor/hv.h> +int main() { return 0;} +EOF + if ! 
compile_object ""; then + hvf=3D'no' + else + hvf=3D'yes' + LDFLAGS=3D"-framework Hypervisor $LDFLAGS" + fi +fi + ################################################# # Sparc implicitly links with --relax, which is # incompatible with -r, so --no-relax should be @@ -5356,6 +5390,7 @@ if test "$tcg" =3D "yes" ; then echo "TCG debug enabled $debug_tcg" echo "TCG interpreter $tcg_interpreter" fi +echo "HVF support $hvf" echo "RDMA support $rdma" echo "fdt support $fdt" echo "preadv support $preadv" @@ -6388,6 +6423,9 @@ fi if supported_hax_target $target; then echo "CONFIG_HAX=3Dy" >> $config_target_mak fi +if supported_hvf_target $target; then + echo "CONFIG_HVF=3Dy" >> $config_target_mak +fi if test "$target_bigendian" =3D "yes" ; then echo "TARGET_WORDS_BIGENDIAN=3Dy" >> $config_target_mak fi diff --git a/qemu-options.hx b/qemu-options.hx index 9f6e2adfff..bcb44420ee 100644 --- a/qemu-options.hx +++ b/qemu-options.hx @@ -31,7 +31,7 @@ DEF("machine", HAS_ARG, QEMU_OPTION_machine, \ "-machine [type=3D]name[,prop[=3Dvalue][,...]]\n" " selects emulated machine ('-machine help' for list)\n" " property accel=3Daccel1[:accel2[:...]] selects accele= rator\n" - " supported accelerators are kvm, xen, hax or tcg (defa= ult: tcg)\n" + " supported accelerators are kvm, xen, hax, hvf or tcg = (default: tcg)\n" " kernel_irqchip=3Don|off|split controls accelerated ir= qchip support (default=3Doff)\n" " vmport=3Don|off|auto controls emulation of vmport (de= fault: auto)\n" " kvm_shadow_mem=3Dsize of KVM shadow MMU in bytes\n" @@ -66,7 +66,7 @@ Supported machine properties are: @table @option @item accel=3D@var{accels1}[:@var{accels2}[:...]] This is used to enable an accelerator. Depending on the target architectur= e, -kvm, xen, hax or tcg can be available. By default, tcg is used. If there is +kvm, xen, hax, hvf or tcg can be available. By default, tcg is used. If th= ere is more than one accelerator specified, the next one is used if the previous = one fails to initialize. @item kernel_irqchip=3Don|off @@ -120,13 +120,13 @@ ETEXI =20 DEF("accel", HAS_ARG, QEMU_OPTION_accel, "-accel [accel=3D]accelerator[,thread=3Dsingle|multi]\n" - " select accelerator (kvm, xen, hax or tcg; use 'help' = for a list)\n" - " thread=3Dsingle|multi (enable multi-threaded TCG)\n",= QEMU_ARCH_ALL) + " select accelerator (kvm, xen, hax, hvf or tcg; use 'h= elp' for a list)\n" + " thread=3Dsingle|multi (enable multi-threaded TCG)", Q= EMU_ARCH_ALL) STEXI @item -accel @var{name}[,prop=3D@var{value}[,...]] @findex -accel This is used to enable an accelerator. Depending on the target architectur= e, -kvm, xen, hax or tcg can be available. By default, tcg is used. If there is +kvm, xen, hax, hvf or tcg can be available. By default, tcg is used. If th= ere is more than one accelerator specified, the next one is used if the previous = one fails to initialize. 
@table @option --=20 2.14.1

From nobody Sat Apr 27 16:08:04 2024
From: Sergio Andres Gomez Del Real
To: qemu-devel@nongnu.org
Date: Mon, 4 Sep 2017 22:54:45 -0500
Message-Id: <20170905035457.3753-3-Sergio.G.DelReal@gmail.com>
In-Reply-To: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com>
References: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com>
Subject: [Qemu-devel] [PATCH v3 02/14] hvf: add code base from Google's QEMU repository
Cc: Sergio Andres Gomez Del Real , pbonzini@redhat.com, stefanha@gmail.com

This patch begins tracking the files that will form the code base for HVF
support in QEMU. This code base is part of Google's QEMU fork used by their
Android Emulator, and can be found at
https://android.googlesource.com/platform/external/qemu/+/emu-master-dev

This code is based on Veertu Inc's vdhh (Veertu Desktop Hosted Hypervisor),
found at https://github.com/veertuinc/vdhh. Everything is appropriately
licensed under GPL v2-or-later, except for the code inside x86_task.c and
x86_task.h, which, deriving from KVM (the Linux kernel), is licensed GPL
v2-only.

This code base already implements a great deal of functionality, although
Google's version removed Veertu's support for the APIC page and the
hyperv-related code. According to the Android Emulator Release Notes,
Revision 26.1.3 (August 2017), "Hypervisor.framework is now enabled by
default on macOS for 32-bit x86 images to improve performance and macOS
compatibility", although it should be used with caution, since the same
revision warns: "If you experience issues with it specifically, please file
a bug report...". The code hasn't seen much change in the last 5 months, so
I think we can develop it further, occasionally checking Google's repository
for any updates.

The code's style isn't aligned to QEMU's standards; this will be fixed in a
subsequent patch in this series. On top of this code base we are implementing
the following features: fixing the code that passes the CPUID features to the
guest, implementing dirty page tracking for the VGA memory region,
reimplementing the event injection mechanism for exception injection, and
many other minor fixes/refactorings that are documented in their respective
patches.

It is important to note that this patch still doesn't add rules for compiling
these files, because some glue code in cpus.c and other issues need to be
handled before the HVF files compile cleanly. This will be done in subsequent
patches.
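For orientation, the heart of the imported code base is an exit-driven vCPU
loop: hvf-all.c enters the guest with hv_vcpu_run(), reads the exit reason
out of the VMCS and dispatches to handlers that rely on the decoder, emulator
and MMU helpers under hvf-utils/. The fragment below is only a simplified,
illustrative sketch of that shape, not code from this series (the function
name hvf_run_once is made up here); it assumes the rvmcs()/rreg() accessors,
macvm_set_rip() and assert_hvf_ok() provided by the files added in this
patch. The real loop is hvf_vcpu_exec() in hvf-all.c below, which also takes
care of register synchronization, interrupt injection and the iothread lock:

    /* Illustrative sketch only; the actual loop is hvf_vcpu_exec(). */
    static int hvf_run_once(CPUState *cpu)
    {
        /* Enter the guest; returns when the vCPU exits back to the host. */
        assert_hvf_ok(hv_vcpu_run(cpu->hvf_fd));

        uint64_t exit_reason = rvmcs(cpu->hvf_fd, VMCS_EXIT_REASON);
        uint64_t rip = rreg(cpu->hvf_fd, HV_X86_RIP);
        uint32_t ins_len = (uint32_t)rvmcs(cpu->hvf_fd,
                                           VMCS_EXIT_INSTRUCTION_LENGTH);

        switch (exit_reason) {
        case EXIT_REASON_HLT:           /* guest executed HLT */
            macvm_set_rip(cpu, rip + ins_len);
            return EXCP_HLT;
        case EXIT_REASON_EPT_FAULT:     /* MMIO or unmapped guest memory */
            /* decode_instruction()/exec_instruction() emulate the access */
            return 0;
        default:                        /* CPUID, port I/O, MSRs, CRs, ... */
            return 0;
        }
    }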
Signed-off-by: Sergio Andres Gomez Del Real --- cpus.c | 42 + include/sysemu/hvf.h | 99 +++ target/i386/hvf-all.c | 1000 +++++++++++++++++++++ target/i386/hvf-i386.h | 48 + target/i386/hvf-utils/Makefile.objs | 1 + target/i386/hvf-utils/README.md | 7 + target/i386/hvf-utils/vmcs.h | 368 ++++++++ target/i386/hvf-utils/vmx.h | 200 +++++ target/i386/hvf-utils/x86.c | 174 ++++ target/i386/hvf-utils/x86.h | 470 ++++++++++ target/i386/hvf-utils/x86_cpuid.c | 270 ++++++ target/i386/hvf-utils/x86_cpuid.h | 51 ++ target/i386/hvf-utils/x86_decode.c | 1659 +++++++++++++++++++++++++++++++= ++++ target/i386/hvf-utils/x86_decode.h | 314 +++++++ target/i386/hvf-utils/x86_descr.c | 124 +++ target/i386/hvf-utils/x86_descr.h | 40 + target/i386/hvf-utils/x86_emu.c | 1466 +++++++++++++++++++++++++++++++ target/i386/hvf-utils/x86_emu.h | 33 + target/i386/hvf-utils/x86_flags.c | 317 +++++++ target/i386/hvf-utils/x86_flags.h | 218 +++++ target/i386/hvf-utils/x86_gen.h | 53 ++ target/i386/hvf-utils/x86_mmu.c | 254 ++++++ target/i386/hvf-utils/x86_mmu.h | 45 + target/i386/hvf-utils/x86hvf.c | 501 +++++++++++ target/i386/hvf-utils/x86hvf.h | 36 + 25 files changed, 7790 insertions(+) create mode 100644 include/sysemu/hvf.h create mode 100644 target/i386/hvf-all.c create mode 100644 target/i386/hvf-i386.h create mode 100644 target/i386/hvf-utils/Makefile.objs create mode 100644 target/i386/hvf-utils/README.md create mode 100644 target/i386/hvf-utils/vmcs.h create mode 100644 target/i386/hvf-utils/vmx.h create mode 100644 target/i386/hvf-utils/x86.c create mode 100644 target/i386/hvf-utils/x86.h create mode 100644 target/i386/hvf-utils/x86_cpuid.c create mode 100644 target/i386/hvf-utils/x86_cpuid.h create mode 100644 target/i386/hvf-utils/x86_decode.c create mode 100644 target/i386/hvf-utils/x86_decode.h create mode 100644 target/i386/hvf-utils/x86_descr.c create mode 100644 target/i386/hvf-utils/x86_descr.h create mode 100644 target/i386/hvf-utils/x86_emu.c create mode 100644 target/i386/hvf-utils/x86_emu.h create mode 100644 target/i386/hvf-utils/x86_flags.c create mode 100644 target/i386/hvf-utils/x86_flags.h create mode 100644 target/i386/hvf-utils/x86_gen.h create mode 100644 target/i386/hvf-utils/x86_mmu.c create mode 100644 target/i386/hvf-utils/x86_mmu.h create mode 100644 target/i386/hvf-utils/x86hvf.c create mode 100644 target/i386/hvf-utils/x86hvf.h diff --git a/cpus.c b/cpus.c index 9bed61eefc..a2cd9dfa5d 100644 --- a/cpus.c +++ b/cpus.c @@ -1434,6 +1434,48 @@ static void *qemu_hax_cpu_thread_fn(void *arg) return NULL; } =20 +/* The HVF-specific vCPU thread function. This one should only run when th= e host + * CPU supports the VMX "unrestricted guest" feature. 
*/ +static void *qemu_hvf_cpu_thread_fn(void *arg) +{ + CPUState *cpu =3D arg; + + int r; + + assert(hvf_enabled()); + + rcu_register_thread(); + + qemu_mutex_lock_iothread(); + qemu_thread_get_self(cpu->thread); + + cpu->thread_id =3D qemu_get_thread_id(); + cpu->can_do_io =3D 1; + current_cpu =3D cpu; + + hvf_init_vcpu(cpu); + + /* signal CPU creation */ + cpu->created =3D true; + qemu_cond_signal(&qemu_cpu_cond); + + do { + if (cpu_can_run(cpu)) { + r =3D hvf_vcpu_exec(cpu); + if (r =3D=3D EXCP_DEBUG) { + cpu_handle_guest_debug(cpu); + } + } + qemu_hvf_wait_io_event(cpu); + } while (!cpu->unplug || cpu_can_run(cpu)); + + hvf_vcpu_destroy(cpu); + cpu->created =3D false; + qemu_cond_signal(&qemu_cpu_cond); + qemu_mutex_unlock_iothread(); + return NULL; +} + #ifdef _WIN32 static void CALLBACK dummy_apc_func(ULONG_PTR unused) { diff --git a/include/sysemu/hvf.h b/include/sysemu/hvf.h new file mode 100644 index 0000000000..752a78eaa4 --- /dev/null +++ b/include/sysemu/hvf.h @@ -0,0 +1,99 @@ +/* + * QEMU Hypervisor.framework (HVF) support + * + * Copyright Google Inc., 2017 + * + * This work is licensed under the terms of the GNU GPL, version 2 or late= r. + * See the COPYING file in the top-level directory. + * + */ + +/* header to be included in non-HVF-specific code */ +#ifndef _HVF_H +#define _HVF_H + +#include "config-host.h" +#include "qemu/osdep.h" +#include "qemu-common.h" +#include "hw/hw.h" +#include "target/i386/cpu.h" +#include "qemu/bitops.h" +#include "exec/memory.h" +#include "sysemu/accel.h" +#include +#include +#include + + +typedef struct hvf_slot { + uint64_t start; + uint64_t size; + uint8_t *mem; + int slot_id; +} hvf_slot; + +struct hvf_vcpu_caps { + uint64_t vmx_cap_pinbased; + uint64_t vmx_cap_procbased; + uint64_t vmx_cap_procbased2; + uint64_t vmx_cap_entry; + uint64_t vmx_cap_exit; + uint64_t vmx_cap_preemption_timer; +}; + +int __hvf_set_memory(hvf_slot *); +typedef struct HVFState { + AccelState parent; + hvf_slot slots[32]; + int num_slots; + + struct hvf_vcpu_caps *hvf_caps; +} HVFState; +extern HVFState *hvf_state; + +void hvf_set_phys_mem(MemoryRegionSection *, bool); +void hvf_handle_io(CPUArchState *, uint16_t, void *, + int, int, int); +hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t); + +/* Returns 1 if HVF is available and enabled, 0 otherwise. */ +int hvf_enabled(void); + +/* Disable HVF if |disable| is 1, otherwise, enable it iff it is supported= by the host CPU. + * Use hvf_enabled() after this to get the result. */ +void hvf_disable(int disable); + +/* Returns non-0 if the host CPU supports the VMX "unrestricted guest" fea= ture which + * allows the virtual CPU to directly run in "real mode". If true, this al= lows QEMU to run + * several vCPU threads in parallel (see cpus.c). Otherwise, only a a sing= le TCG thread + * can run, and it will call HVF to run the current instructions, except i= n case of + * "real mode" (paging disabled, typically at boot time), or MMIO operatio= ns. 
*/ +// int hvf_ug_platform(void); does not apply to HVF; assume we must be in = UG mode + +int hvf_sync_vcpus(void); + +int hvf_init_vcpu(CPUState *); +int hvf_vcpu_exec(CPUState *); +int hvf_smp_cpu_exec(CPUState *); +void hvf_cpu_synchronize_state(CPUState *); +void hvf_cpu_synchronize_post_reset(CPUState *); +void hvf_cpu_synchronize_post_init(CPUState *); +void _hvf_cpu_synchronize_post_init(CPUState *, run_on_cpu_data); + +void hvf_vcpu_destroy(CPUState *); +void hvf_raise_event(CPUState *); +// void hvf_reset_vcpu_state(void *opaque); +void vmx_reset_vcpu(CPUState *); +void __hvf_cpu_synchronize_state(CPUState *, run_on_cpu_data); +void __hvf_cpu_synchronize_post_reset(CPUState *, run_on_cpu_data); +void vmx_update_tpr(CPUState *); +void update_apic_tpr(CPUState *); +int apic_get_highest_priority_irr(DeviceState *); +int hvf_put_registers(CPUState *); + +#define TYPE_HVF_ACCEL ACCEL_CLASS_NAME("hvf") + +#define HVF_STATE(obj) \ + OBJECT_CHECK(HVFState, (obj), TYPE_HVF_ACCEL) + +#endif diff --git a/target/i386/hvf-all.c b/target/i386/hvf-all.c new file mode 100644 index 0000000000..d5e18faa68 --- /dev/null +++ b/target/i386/hvf-all.c @@ -0,0 +1,1000 @@ +// Copyright 2008 IBM Corporation +// 2008 Red Hat, Inc. +// Copyright 2011 Intel Corporation +// Copyright 2016 Veertu, Inc. +// Copyright 2017 The Android Open Source Project +//=20 +// QEMU Hypervisor.framework support +//=20 +// This software is licensed under the terms of the GNU General Public +// License version 2, as published by the Free Software Foundation, and +// may be copied, distributed, and modified under those terms. +//=20 +// This program is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU General Public License for more details. 
+#include "qemu/osdep.h" +#include "qemu-common.h" +#include "qemu/error-report.h" + +#include "sysemu/hvf.h" +#include "hvf-i386.h" +#include "hvf-utils/vmcs.h" +#include "hvf-utils/vmx.h" +#include "hvf-utils/x86.h" +#include "hvf-utils/x86_descr.h" +#include "hvf-utils/x86_mmu.h" +#include "hvf-utils/x86_decode.h" +#include "hvf-utils/x86_emu.h" +#include "hvf-utils/x86_cpuid.h" +#include "hvf-utils/x86hvf.h" + +#include +#include + +#include "exec/address-spaces.h" +#include "exec/exec-all.h" +#include "exec/ioport.h" +#include "hw/i386/apic_internal.h" +#include "hw/boards.h" +#include "qemu/main-loop.h" +#include "strings.h" +#include "trace.h" +#include "sysemu/accel.h" +#include "sysemu/sysemu.h" +#include "target/i386/cpu.h" + +pthread_rwlock_t mem_lock =3D PTHREAD_RWLOCK_INITIALIZER; +HVFState *hvf_state; +static int hvf_disabled =3D 1; + +static void assert_hvf_ok(hv_return_t ret) +{ + if (ret =3D=3D HV_SUCCESS) + return; + + switch (ret) { + case HV_ERROR: + fprintf(stderr, "Error: HV_ERROR\n"); + break; + case HV_BUSY: + fprintf(stderr, "Error: HV_BUSY\n"); + break; + case HV_BAD_ARGUMENT: + fprintf(stderr, "Error: HV_BAD_ARGUMENT\n"); + break; + case HV_NO_RESOURCES: + fprintf(stderr, "Error: HV_NO_RESOURCES\n"); + break; + case HV_NO_DEVICE: + fprintf(stderr, "Error: HV_NO_DEVICE\n"); + break; + case HV_UNSUPPORTED: + fprintf(stderr, "Error: HV_UNSUPPORTED\n"); + break; + default: + fprintf(stderr, "Unknown Error\n"); + } + + abort(); +} + +// Memory slots///////////////////////////////////////////////////////////= ////// + +hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t end) { + hvf_slot *slot; + int x; + for (x =3D 0; x < hvf_state->num_slots; ++x) { + slot =3D &hvf_state->slots[x]; + if (slot->size && start < (slot->start + slot->size) && end > slot= ->start) + return slot; + } + return NULL; +} + +struct mac_slot { + int present; + uint64_t size; + uint64_t gpa_start; + uint64_t gva; +}; + +struct mac_slot mac_slots[32]; +#define ALIGN(x, y) (((x)+(y)-1) & ~((y)-1)) + +int __hvf_set_memory(hvf_slot *slot) +{ + struct mac_slot *macslot; + hv_memory_flags_t flags; + pthread_rwlock_wrlock(&mem_lock); + hv_return_t ret; + + macslot =3D &mac_slots[slot->slot_id]; + + if (macslot->present) { + if (macslot->size !=3D slot->size) { + macslot->present =3D 0; + ret =3D hv_vm_unmap(macslot->gpa_start, macslot->size); + assert_hvf_ok(ret); + } + } + + if (!slot->size) { + pthread_rwlock_unlock(&mem_lock); + return 0; + } + + flags =3D HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC; + + macslot->present =3D 1; + macslot->gpa_start =3D slot->start; + macslot->size =3D slot->size; + ret =3D hv_vm_map((hv_uvaddr_t)slot->mem, slot->start, slot->size, fla= gs); + assert_hvf_ok(ret); + pthread_rwlock_unlock(&mem_lock); + return 0; +} + +void hvf_set_phys_mem(MemoryRegionSection* section, bool add) +{ + hvf_slot *mem; + MemoryRegion *area =3D section->mr; + + if (!memory_region_is_ram(area)) return; + + mem =3D hvf_find_overlap_slot( + section->offset_within_address_space, + section->offset_within_address_space + int128_get64(section->s= ize)); + + if (mem && add) { + if (mem->size =3D=3D int128_get64(section->size) && + mem->start =3D=3D section->offset_within_address_space && + mem->mem =3D=3D (memory_region_get_ram_ptr(area) + section= ->offset_within_region)) + return; // Same region was attempted to register, go away. + } + + // Region needs to be reset. set the size to 0 and remap it. 
+ if (mem) { + mem->size =3D 0; + if (__hvf_set_memory(mem)) { + fprintf(stderr, "Failed to reset overlapping slot\n"); + abort(); + } + } + + if (!add) return; + + // Now make a new slot. + int x; + + for (x =3D 0; x < hvf_state->num_slots; ++x) { + mem =3D &hvf_state->slots[x]; + if (!mem->size) + break; + } + + if (x =3D=3D hvf_state->num_slots) { + fprintf(stderr, "No free slots\n"); + abort(); + } + + mem->size =3D int128_get64(section->size); + mem->mem =3D memory_region_get_ram_ptr(area) + section->offset_within_= region; + mem->start =3D section->offset_within_address_space; + + if (__hvf_set_memory(mem)) { + fprintf(stderr, "Error registering new memory slot\n"); + abort(); + } +} + +/* return -1 if no bit is set */ +static int get_highest_priority_int(uint32_t *tab) +{ + int i; + for (i =3D 7; i >=3D 0; i--) { + if (tab[i] !=3D 0) { + return i * 32 + apic_fls_bit(tab[i]); + } + } + return -1; +} + +void vmx_update_tpr(CPUState *cpu) +{ + // TODO: need integrate APIC handling + X86CPU *x86_cpu =3D X86_CPU(cpu); + int tpr =3D cpu_get_apic_tpr(x86_cpu->apic_state) << 4; + int irr =3D apic_get_highest_priority_irr(x86_cpu->apic_state); + + wreg(cpu->hvf_fd, HV_X86_TPR, tpr); + if (irr =3D=3D -1) + wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, 0); + else + wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, (irr > tpr) ? tpr >> 4 : ir= r >> 4); +} + +void update_apic_tpr(CPUState *cpu) +{ + X86CPU *x86_cpu =3D X86_CPU(cpu); + int tpr =3D rreg(cpu->hvf_fd, HV_X86_TPR) >> 4; + cpu_set_apic_tpr(x86_cpu->apic_state, tpr); +} + +#define VECTORING_INFO_VECTOR_MASK 0xff + +// TODO: taskswitch handling +static void save_state_to_tss32(CPUState *cpu, struct x86_tss_segment32 *t= ss) +{ + /* CR3 and ldt selector are not saved intentionally */ + tss->eip =3D EIP(cpu); + tss->eflags =3D EFLAGS(cpu); + tss->eax =3D EAX(cpu); + tss->ecx =3D ECX(cpu); + tss->edx =3D EDX(cpu); + tss->ebx =3D EBX(cpu); + tss->esp =3D ESP(cpu); + tss->ebp =3D EBP(cpu); + tss->esi =3D ESI(cpu); + tss->edi =3D EDI(cpu); + + tss->es =3D vmx_read_segment_selector(cpu, REG_SEG_ES).sel; + tss->cs =3D vmx_read_segment_selector(cpu, REG_SEG_CS).sel; + tss->ss =3D vmx_read_segment_selector(cpu, REG_SEG_SS).sel; + tss->ds =3D vmx_read_segment_selector(cpu, REG_SEG_DS).sel; + tss->fs =3D vmx_read_segment_selector(cpu, REG_SEG_FS).sel; + tss->gs =3D vmx_read_segment_selector(cpu, REG_SEG_GS).sel; +} + +static void load_state_from_tss32(CPUState *cpu, struct x86_tss_segment32 = *tss) +{ + wvmcs(cpu->hvf_fd, VMCS_GUEST_CR3, tss->cr3); + + RIP(cpu) =3D tss->eip; + EFLAGS(cpu) =3D tss->eflags | 2; + + /* General purpose registers */ + RAX(cpu) =3D tss->eax; + RCX(cpu) =3D tss->ecx; + RDX(cpu) =3D tss->edx; + RBX(cpu) =3D tss->ebx; + RSP(cpu) =3D tss->esp; + RBP(cpu) =3D tss->ebp; + RSI(cpu) =3D tss->esi; + RDI(cpu) =3D tss->edi; + + vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->ldt}}, RE= G_SEG_LDTR); + vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->es}}, REG= _SEG_ES); + vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->cs}}, REG= _SEG_CS); + vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->ss}}, REG= _SEG_SS); + vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->ds}}, REG= _SEG_DS); + vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->fs}}, REG= _SEG_FS); + vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->gs}}, REG= _SEG_GS); + +#if 0 + load_segment(cpu, REG_SEG_LDTR, tss->ldt); + load_segment(cpu, REG_SEG_ES, tss->es); + load_segment(cpu, REG_SEG_CS, tss->cs); + 
load_segment(cpu, REG_SEG_SS, tss->ss); + load_segment(cpu, REG_SEG_DS, tss->ds); + load_segment(cpu, REG_SEG_FS, tss->fs); + load_segment(cpu, REG_SEG_GS, tss->gs); +#endif +} + +static int task_switch_32(CPUState *cpu, x68_segment_selector tss_sel, x68= _segment_selector old_tss_sel, + uint64_t old_tss_base, struct x86_segment_descri= ptor *new_desc) +{ + struct x86_tss_segment32 tss_seg; + uint32_t new_tss_base =3D x86_segment_base(new_desc); + uint32_t eip_offset =3D offsetof(struct x86_tss_segment32, eip); + uint32_t ldt_sel_offset =3D offsetof(struct x86_tss_segment32, ldt); + + vmx_read_mem(cpu, &tss_seg, old_tss_base, sizeof(tss_seg)); + save_state_to_tss32(cpu, &tss_seg); + + vmx_write_mem(cpu, old_tss_base + eip_offset, &tss_seg.eip, ldt_sel_of= fset - eip_offset); + vmx_read_mem(cpu, &tss_seg, new_tss_base, sizeof(tss_seg)); + + if (old_tss_sel.sel !=3D 0xffff) { + tss_seg.prev_tss =3D old_tss_sel.sel; + + vmx_write_mem(cpu, new_tss_base, &tss_seg.prev_tss, sizeof(tss_seg= .prev_tss)); + } + load_state_from_tss32(cpu, &tss_seg); + return 0; +} + +static void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss= _sel, int reason, bool gate_valid, uint8_t gate, uint64_t gate_type) +{ + uint64_t rip =3D rreg(cpu->hvf_fd, HV_X86_RIP); + if (!gate_valid || (gate_type !=3D VMCS_INTR_T_HWEXCEPTION && + gate_type !=3D VMCS_INTR_T_HWINTR && + gate_type !=3D VMCS_INTR_T_NMI)) { + int ins_len =3D rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRUCTION_LENGTH); + macvm_set_rip(cpu, rip + ins_len); + return; + } + + load_regs(cpu); + + struct x86_segment_descriptor curr_tss_desc, next_tss_desc; + int ret; + x68_segment_selector old_tss_sel =3D vmx_read_segment_selector(cpu, RE= G_SEG_TR); + uint64_t old_tss_base =3D vmx_read_segment_base(cpu, REG_SEG_TR); + uint32_t desc_limit; + struct x86_call_gate task_gate_desc; + struct vmx_segment vmx_seg; + + x86_read_segment_descriptor(cpu, &next_tss_desc, tss_sel); + x86_read_segment_descriptor(cpu, &curr_tss_desc, old_tss_sel); + + if (reason =3D=3D TSR_IDT_GATE && gate_valid) { + int dpl; + + ret =3D x86_read_call_gate(cpu, &task_gate_desc, gate); + + dpl =3D task_gate_desc.dpl; + x68_segment_selector cs =3D vmx_read_segment_selector(cpu, REG_SEG= _CS); + if (tss_sel.rpl > dpl || cs.rpl > dpl) + ;//DPRINTF("emulate_gp"); + } + + desc_limit =3D x86_segment_limit(&next_tss_desc); + if (!next_tss_desc.p || ((desc_limit < 0x67 && (next_tss_desc.type & 8= )) || desc_limit < 0x2b)) { + VM_PANIC("emulate_ts"); + } + + if (reason =3D=3D TSR_IRET || reason =3D=3D TSR_JMP) { + curr_tss_desc.type &=3D ~(1 << 1); /* clear busy flag */ + x86_write_segment_descriptor(cpu, &curr_tss_desc, old_tss_sel); + } + + if (reason =3D=3D TSR_IRET) + EFLAGS(cpu) &=3D ~RFLAGS_NT; + + if (reason !=3D TSR_CALL && reason !=3D TSR_IDT_GATE) + old_tss_sel.sel =3D 0xffff; + + if (reason !=3D TSR_IRET) { + next_tss_desc.type |=3D (1 << 1); /* set busy flag */ + x86_write_segment_descriptor(cpu, &next_tss_desc, tss_sel); + } + + if (next_tss_desc.type & 8) + ret =3D task_switch_32(cpu, tss_sel, old_tss_sel, old_tss_base, &n= ext_tss_desc); + else + //ret =3D task_switch_16(cpu, tss_sel, old_tss_sel, old_tss_base, = &next_tss_desc); + VM_PANIC("task_switch_16"); + + macvm_set_cr0(cpu->hvf_fd, rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0) | CR0_TS= ); + x86_segment_descriptor_to_vmx(cpu, tss_sel, &next_tss_desc, &vmx_seg); + vmx_write_segment_descriptor(cpu, &vmx_seg, REG_SEG_TR); + + store_regs(cpu); + + hv_vcpu_invalidate_tlb(cpu->hvf_fd); + hv_vcpu_flush(cpu->hvf_fd); +} + +static void 
hvf_handle_interrupt(CPUState * cpu, int mask) +{ + cpu->interrupt_request |=3D mask; + if (!qemu_cpu_is_self(cpu)) { + qemu_cpu_kick(cpu); + } +} + +void hvf_handle_io(CPUArchState * env, uint16_t port, void* buffer, + int direction, int size, int count) +{ + int i; + uint8_t *ptr =3D buffer; + + for (i =3D 0; i < count; i++) { + address_space_rw(&address_space_io, port, MEMTXATTRS_UNSPECIFIED, + ptr, size, + direction); + ptr +=3D size; + } +} +// +// TODO: synchronize vcpu state +void __hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg) +{ + CPUState *cpu_state =3D cpu;//(CPUState *)data; + if (cpu_state->hvf_vcpu_dirty =3D=3D 0) + hvf_get_registers(cpu_state); + + cpu_state->hvf_vcpu_dirty =3D 1; +} + +void hvf_cpu_synchronize_state(CPUState *cpu_state) +{ + if (cpu_state->hvf_vcpu_dirty =3D=3D 0) + run_on_cpu(cpu_state, __hvf_cpu_synchronize_state, RUN_ON_CPU_NULL= ); +} + +void __hvf_cpu_synchronize_post_reset(CPUState *cpu, run_on_cpu_data arg) +{ + CPUState *cpu_state =3D cpu; + hvf_put_registers(cpu_state); + cpu_state->hvf_vcpu_dirty =3D false; +} + +void hvf_cpu_synchronize_post_reset(CPUState *cpu_state) +{ + run_on_cpu(cpu_state, __hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NUL= L); +} + +void _hvf_cpu_synchronize_post_init(CPUState *cpu, run_on_cpu_data arg) +{ + CPUState *cpu_state =3D cpu; + hvf_put_registers(cpu_state); + cpu_state->hvf_vcpu_dirty =3D false; +} + +void hvf_cpu_synchronize_post_init(CPUState *cpu_state) +{ + run_on_cpu(cpu_state, _hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL); +} +=20 +// TODO: ept fault handlig +void vmx_clear_int_window_exiting(CPUState *cpu); +static bool ept_emulation_fault(uint64_t ept_qual) +{ + int read, write; + + /* EPT fault on an instruction fetch doesn't make sense here */ + if (ept_qual & EPT_VIOLATION_INST_FETCH) + return false; + + /* EPT fault must be a read fault or a write fault */ + read =3D ept_qual & EPT_VIOLATION_DATA_READ ? 1 : 0; + write =3D ept_qual & EPT_VIOLATION_DATA_WRITE ? 1 : 0; + if ((read | write) =3D=3D 0) + return false; + + /* + * The EPT violation must have been caused by accessing a + * guest-physical address that is a translation of a guest-linear + * address. 
+ */ + if ((ept_qual & EPT_VIOLATION_GLA_VALID) =3D=3D 0 || + (ept_qual & EPT_VIOLATION_XLAT_VALID) =3D=3D 0) { + return false; + } + + return true; +} + +static void hvf_region_add(MemoryListener * listener, + MemoryRegionSection * section) +{ + hvf_set_phys_mem(section, true); +} + +static void hvf_region_del(MemoryListener * listener, + MemoryRegionSection * section) +{ + hvf_set_phys_mem(section, false); +} + +static MemoryListener hvf_memory_listener =3D { + .priority =3D 10, + .region_add =3D hvf_region_add, + .region_del =3D hvf_region_del, +}; + +static MemoryListener hvf_io_listener =3D { + .priority =3D 10, +}; + +void vmx_reset_vcpu(CPUState *cpu) { + + wvmcs(cpu->hvf_fd, VMCS_ENTRY_CTLS, 0); + wvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER, 0); + macvm_set_cr0(cpu->hvf_fd, 0x60000010); + + wvmcs(cpu->hvf_fd, VMCS_CR4_MASK, CR4_VMXE_MASK); + wvmcs(cpu->hvf_fd, VMCS_CR4_SHADOW, 0x0); + wvmcs(cpu->hvf_fd, VMCS_GUEST_CR4, CR4_VMXE_MASK); + + // set VMCS guest state fields + wvmcs(cpu->hvf_fd, VMCS_GUEST_CS_SELECTOR, 0xf000); + wvmcs(cpu->hvf_fd, VMCS_GUEST_CS_LIMIT, 0xffff); + wvmcs(cpu->hvf_fd, VMCS_GUEST_CS_ACCESS_RIGHTS, 0x9b); + wvmcs(cpu->hvf_fd, VMCS_GUEST_CS_BASE, 0xffff0000); + + wvmcs(cpu->hvf_fd, VMCS_GUEST_DS_SELECTOR, 0); + wvmcs(cpu->hvf_fd, VMCS_GUEST_DS_LIMIT, 0xffff); + wvmcs(cpu->hvf_fd, VMCS_GUEST_DS_ACCESS_RIGHTS, 0x93); + wvmcs(cpu->hvf_fd, VMCS_GUEST_DS_BASE, 0); + + wvmcs(cpu->hvf_fd, VMCS_GUEST_ES_SELECTOR, 0); + wvmcs(cpu->hvf_fd, VMCS_GUEST_ES_LIMIT, 0xffff); + wvmcs(cpu->hvf_fd, VMCS_GUEST_ES_ACCESS_RIGHTS, 0x93); + wvmcs(cpu->hvf_fd, VMCS_GUEST_ES_BASE, 0); + + wvmcs(cpu->hvf_fd, VMCS_GUEST_FS_SELECTOR, 0); + wvmcs(cpu->hvf_fd, VMCS_GUEST_FS_LIMIT, 0xffff); + wvmcs(cpu->hvf_fd, VMCS_GUEST_FS_ACCESS_RIGHTS, 0x93); + wvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE, 0); + + wvmcs(cpu->hvf_fd, VMCS_GUEST_GS_SELECTOR, 0); + wvmcs(cpu->hvf_fd, VMCS_GUEST_GS_LIMIT, 0xffff); + wvmcs(cpu->hvf_fd, VMCS_GUEST_GS_ACCESS_RIGHTS, 0x93); + wvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE, 0); + + wvmcs(cpu->hvf_fd, VMCS_GUEST_SS_SELECTOR, 0); + wvmcs(cpu->hvf_fd, VMCS_GUEST_SS_LIMIT, 0xffff); + wvmcs(cpu->hvf_fd, VMCS_GUEST_SS_ACCESS_RIGHTS, 0x93); + wvmcs(cpu->hvf_fd, VMCS_GUEST_SS_BASE, 0); + + wvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_SELECTOR, 0); + wvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT, 0); + wvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_ACCESS_RIGHTS, 0x10000); + wvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE, 0); + + wvmcs(cpu->hvf_fd, VMCS_GUEST_TR_SELECTOR, 0); + wvmcs(cpu->hvf_fd, VMCS_GUEST_TR_LIMIT, 0); + wvmcs(cpu->hvf_fd, VMCS_GUEST_TR_ACCESS_RIGHTS, 0x83); + wvmcs(cpu->hvf_fd, VMCS_GUEST_TR_BASE, 0); + + wvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT, 0); + wvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE, 0); + + wvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_LIMIT, 0); + wvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_BASE, 0); + + //wvmcs(cpu->hvf_fd, VMCS_GUEST_CR2, 0x0); + wvmcs(cpu->hvf_fd, VMCS_GUEST_CR3, 0x0); + + wreg(cpu->hvf_fd, HV_X86_RIP, 0xfff0); + wreg(cpu->hvf_fd, HV_X86_RDX, 0x623); + wreg(cpu->hvf_fd, HV_X86_RFLAGS, 0x2); + wreg(cpu->hvf_fd, HV_X86_RSP, 0x0); + wreg(cpu->hvf_fd, HV_X86_RAX, 0x0); + wreg(cpu->hvf_fd, HV_X86_RBX, 0x0); + wreg(cpu->hvf_fd, HV_X86_RCX, 0x0); + wreg(cpu->hvf_fd, HV_X86_RSI, 0x0); + wreg(cpu->hvf_fd, HV_X86_RDI, 0x0); + wreg(cpu->hvf_fd, HV_X86_RBP, 0x0); + + for (int i =3D 0; i < 8; i++) + wreg(cpu->hvf_fd, HV_X86_R8+i, 0x0); + + hv_vm_sync_tsc(0); + cpu->halted =3D 0; + hv_vcpu_invalidate_tlb(cpu->hvf_fd); + hv_vcpu_flush(cpu->hvf_fd); +} + +void hvf_vcpu_destroy(CPUState* cpu)=20 +{ + 
hv_return_t ret =3D hv_vcpu_destroy((hv_vcpuid_t)cpu->hvf_fd); + assert_hvf_ok(ret); +} + +static void dummy_signal(int sig) +{ +} + +int hvf_init_vcpu(CPUState * cpu) { + + X86CPU *x86cpu; + =20 + // init cpu signals + sigset_t set; + struct sigaction sigact; + + memset(&sigact, 0, sizeof(sigact)); + sigact.sa_handler =3D dummy_signal; + sigaction(SIG_IPI, &sigact, NULL); + + pthread_sigmask(SIG_BLOCK, NULL, &set); + sigdelset(&set, SIG_IPI); + + int r; + init_emu(cpu); + init_decoder(cpu); + init_cpuid(cpu); + + hvf_state->hvf_caps =3D g_new0(struct hvf_vcpu_caps, 1); + env->hvf_emul =3D g_new0(HVFX86EmulatorState, 1); + + r =3D hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT); + cpu->hvf_vcpu_dirty =3D 1; + assert_hvf_ok(r); + + if (hv_vmx_read_capability(HV_VMX_CAP_PINBASED, &cpu->hvf_caps->vmx_cap_p= inbased)) + abort(); + if (hv_vmx_read_capability(HV_VMX_CAP_PROCBASED, &cpu->hvf_caps->vmx_cap_= procbased)) + abort(); + if (hv_vmx_read_capability(HV_VMX_CAP_PROCBASED2, &cpu->hvf_caps->vmx_cap= _procbased2)) + abort(); + if (hv_vmx_read_capability(HV_VMX_CAP_ENTRY, &cpu->hvf_caps->vmx_cap_entr= y)) + abort(); + + /* set VMCS control fields */ + wvmcs(cpu->hvf_fd, VMCS_PIN_BASED_CTLS, cap2ctrl(cpu->hvf_caps->vmx_ca= p_pinbased, 0)); + wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, cap2ctrl(cpu->hvf_caps->v= mx_cap_procbased, + VMCS_PRI_PROC_BASED_CTL= S_HLT | + VMCS_PRI_PROC_BASED_CTL= S_MWAIT | + VMCS_PRI_PROC_BASED_CTL= S_TSC_OFFSET | + VMCS_PRI_PROC_BASED_CTL= S_TPR_SHADOW) | + VMCS_PRI_PROC_BASED_CTL= S_SEC_CONTROL); + wvmcs(cpu->hvf_fd, VMCS_SEC_PROC_BASED_CTLS, + cap2ctrl(cpu->hvf_caps->vmx_cap_procbased2,VMCS_PRI_PROC_BASED2_= CTLS_APIC_ACCESSES)); + + wvmcs(cpu->hvf_fd, VMCS_ENTRY_CTLS, cap2ctrl(cpu->hvf_caps->vmx_cap_entry= , 0)); + wvmcs(cpu->hvf_fd, VMCS_EXCEPTION_BITMAP, 0); /* Double fault */ + + wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, 0); + + vmx_reset_vcpu(cpu); + + x86cpu =3D X86_CPU(cpu); + x86cpu->env.kvm_xsave_buf =3D qemu_memalign(4096, sizeof(struct hvf_xs= ave_buf)); + + hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_STAR, 1); + hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_LSTAR, 1); + hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_CSTAR, 1); + hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_FMASK, 1); + hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_FSBASE, 1); + hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_GSBASE, 1); + hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_KERNELGSBASE, 1); + hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_TSC_AUX, 1); + //hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_TSC, 1); + hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_CS, 1); + hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_EIP, 1); + hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_ESP, 1); + + return 0; +} + +int hvf_enabled() { return !hvf_disabled; } +void hvf_disable(int shouldDisable) { + hvf_disabled =3D shouldDisable; +} + +int hvf_vcpu_exec(CPUState* cpu) { + X86CPU *x86_cpu =3D X86_CPU(cpu); + CPUX86State *env =3D &x86_cpu->env; + int ret =3D 0; + uint64_t rip =3D 0; + + cpu->halted =3D 0; + + if (hvf_process_events(cpu)) { + return EXCP_HLT; + } + + do { + if (cpu->hvf_vcpu_dirty) { + hvf_put_registers(cpu); + cpu->hvf_vcpu_dirty =3D false; + } + + cpu->hvf_x86->interruptable =3D + !(rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) & + (VMCS_INTERRUPTIBILITY_STI_BLOCKING | VMCS_INTERRUPTIBILITY_MO= VSS_BLOCKING)); + + hvf_inject_interrupts(cpu); + vmx_update_tpr(cpu); + + + qemu_mutex_unlock_iothread(); + if (!cpu_is_bsp(X86_CPU(cpu)) && cpu->halted) { + 
qemu_mutex_lock_iothread(); + return EXCP_HLT; + } + + hv_return_t r =3D hv_vcpu_run(cpu->hvf_fd); + assert_hvf_ok(r); + + /* handle VMEXIT */ + uint64_t exit_reason =3D rvmcs(cpu->hvf_fd, VMCS_EXIT_REASON); + uint64_t exit_qual =3D rvmcs(cpu->hvf_fd, VMCS_EXIT_QUALIFICATION); + uint32_t ins_len =3D (uint32_t)rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRU= CTION_LENGTH); + uint64_t idtvec_info =3D rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_INF= O); + rip =3D rreg(cpu->hvf_fd, HV_X86_RIP); + RFLAGS(cpu) =3D rreg(cpu->hvf_fd, HV_X86_RFLAGS); + env->eflags =3D RFLAGS(cpu); + + trace_hvf_vm_exit(exit_reason, exit_qual); + + qemu_mutex_lock_iothread(); + + update_apic_tpr(cpu); + current_cpu =3D cpu; + + ret =3D 0; + switch (exit_reason) { + case EXIT_REASON_HLT: { + macvm_set_rip(cpu, rip + ins_len); + if (!((cpu->interrupt_request & CPU_INTERRUPT_HARD) && (EF= LAGS(cpu) & IF_MASK)) + && !(cpu->interrupt_request & CPU_INTERRUPT_NMI) && + !(idtvec_info & VMCS_IDT_VEC_VALID)) { + cpu->halted =3D 1; + ret =3D EXCP_HLT; + } + ret =3D EXCP_INTERRUPT; + break; + } + case EXIT_REASON_MWAIT: { + ret =3D EXCP_INTERRUPT; + break; + } + /* Need to check if MMIO or unmmaped fault */ + case EXIT_REASON_EPT_FAULT: + { + hvf_slot *slot; + addr_t gpa =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_PHYSICAL_ADDR= ESS); + trace_hvf_vm_exit_gpa(gpa); + + if ((idtvec_info & VMCS_IDT_VEC_VALID) =3D=3D 0 && (exit_q= ual & EXIT_QUAL_NMIUDTI) !=3D 0) + vmx_set_nmi_blocking(cpu); + + slot =3D hvf_find_overlap_slot(gpa, gpa); + // mmio + if (ept_emulation_fault(exit_qual) && !slot) { + struct x86_decode decode; + + load_regs(cpu); + cpu->hvf_x86->fetch_rip =3D rip; + + decode_instruction(cpu, &decode); + exec_instruction(cpu, &decode); + store_regs(cpu); + break; + } +#ifdef DIRTY_VGA_TRACKING + if (slot) { + bool read =3D exit_qual & EPT_VIOLATION_DATA_READ ? 1 = : 0; + bool write =3D exit_qual & EPT_VIOLATION_DATA_WRITE ? 
= 1 : 0; + if (!read && !write) + break; + int flags =3D HV_MEMORY_READ | HV_MEMORY_EXEC; + if (write) flags |=3D HV_MEMORY_WRITE; + + pthread_rwlock_wrlock(&mem_lock); + if (write) + mark_slot_page_dirty(slot, gpa); + hv_vm_protect(gpa & ~0xfff, 4096, flags); + pthread_rwlock_unlock(&mem_lock); + } +#endif + break; + } + case EXIT_REASON_INOUT: + { + uint32_t in =3D (exit_qual & 8) !=3D 0; + uint32_t size =3D (exit_qual & 7) + 1; + uint32_t string =3D (exit_qual & 16) !=3D 0; + uint32_t port =3D exit_qual >> 16; + //uint32_t rep =3D (exit_qual & 0x20) !=3D 0; + +#if 1 + if (!string && in) { + uint64_t val =3D 0; + load_regs(cpu); + hvf_handle_io(env, port, &val, 0, size, 1); + if (size =3D=3D 1) AL(cpu) =3D val; + else if (size =3D=3D 2) AX(cpu) =3D val; + else if (size =3D=3D 4) RAX(cpu) =3D (uint32_t)val; + else VM_PANIC("size"); + RIP(cpu) +=3D ins_len; + store_regs(cpu); + break; + } else if (!string && !in) { + RAX(cpu) =3D rreg(cpu->hvf_fd, HV_X86_RAX); + hvf_handle_io(env, port, &RAX(cpu), 1, size, 1); + macvm_set_rip(cpu, rip + ins_len); + break; + } +#endif + struct x86_decode decode; + + load_regs(cpu); + cpu->hvf_x86->fetch_rip =3D rip; + + decode_instruction(cpu, &decode); + VM_PANIC_ON(ins_len !=3D decode.len); + exec_instruction(cpu, &decode); + store_regs(cpu); + + break; + } + case EXIT_REASON_CPUID: { + uint32_t rax =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX); + uint32_t rbx =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RBX); + uint32_t rcx =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX); + uint32_t rdx =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX); + + get_cpuid_func(cpu, rax, rcx, &rax, &rbx, &rcx, &rdx); + + wreg(cpu->hvf_fd, HV_X86_RAX, rax); + wreg(cpu->hvf_fd, HV_X86_RBX, rbx); + wreg(cpu->hvf_fd, HV_X86_RCX, rcx); + wreg(cpu->hvf_fd, HV_X86_RDX, rdx); + + macvm_set_rip(cpu, rip + ins_len); + break; + } + case EXIT_REASON_XSETBV: { + X86CPU *x86_cpu =3D X86_CPU(cpu); + CPUX86State *env =3D &x86_cpu->env; + uint32_t eax =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX); + uint32_t ecx =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX); + uint32_t edx =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX); + + if (ecx) { + macvm_set_rip(cpu, rip + ins_len); + break; + } + env->xcr0 =3D ((uint64_t)edx << 32) | eax; + wreg(cpu->hvf_fd, HV_X86_XCR0, env->xcr0 | 1); + macvm_set_rip(cpu, rip + ins_len); + break; + } + case EXIT_REASON_INTR_WINDOW: + vmx_clear_int_window_exiting(cpu); + ret =3D EXCP_INTERRUPT; + break; + case EXIT_REASON_NMI_WINDOW: + vmx_clear_nmi_window_exiting(cpu); + ret =3D EXCP_INTERRUPT; + break; + case EXIT_REASON_EXT_INTR: + /* force exit and allow io handling */ + ret =3D EXCP_INTERRUPT; + break; + case EXIT_REASON_RDMSR: + case EXIT_REASON_WRMSR: + { + load_regs(cpu); + if (exit_reason =3D=3D EXIT_REASON_RDMSR) + simulate_rdmsr(cpu); + else + simulate_wrmsr(cpu); + RIP(cpu) +=3D rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRUCTION_LEN= GTH); + store_regs(cpu); + break; + } + case EXIT_REASON_CR_ACCESS: { + int cr; + int reg; + + load_regs(cpu); + cr =3D exit_qual & 15; + reg =3D (exit_qual >> 8) & 15; + + switch (cr) { + case 0x0: { + macvm_set_cr0(cpu->hvf_fd, RRX(cpu, reg)); + break; + } + case 4: { + macvm_set_cr4(cpu->hvf_fd, RRX(cpu, reg)); + break; + } + case 8: { + X86CPU *x86_cpu =3D X86_CPU(cpu); + if (exit_qual & 0x10) { + RRX(cpu, reg) =3D cpu_get_apic_tpr(x86_cpu->ap= ic_state); + } + else { + int tpr =3D RRX(cpu, reg); + cpu_set_apic_tpr(x86_cpu->apic_state, tpr); + ret =3D EXCP_INTERRUPT; + } + break; + } + default: + fprintf(stderr, "Unrecognized CR %d\n", cr); + abort(); + } + 
RIP(cpu) +=3D ins_len; + store_regs(cpu); + break; + } + case EXIT_REASON_APIC_ACCESS: { // TODO + struct x86_decode decode; + + load_regs(cpu); + cpu->hvf_x86->fetch_rip =3D rip; + + decode_instruction(cpu, &decode); + exec_instruction(cpu, &decode); + store_regs(cpu); + break; + } + case EXIT_REASON_TPR: { + ret =3D 1; + break; + } + case EXIT_REASON_TASK_SWITCH: { + uint64_t vinfo =3D rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_I= NFO); + x68_segment_selector sel =3D {.sel =3D exit_qual & 0xffff}; + vmx_handle_task_switch(cpu, sel, (exit_qual >> 30) & 0x3, + vinfo & VMCS_INTR_VALID, vinfo & VECTORING_INFO_VECTOR_MA= SK, vinfo & VMCS_INTR_T_MASK); + break; + } + case EXIT_REASON_TRIPLE_FAULT: { + //addr_t gpa =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_PHYSICAL_AD= DRESS); + qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET); + usleep(1000 * 100); + ret =3D EXCP_INTERRUPT; + break; + } + case EXIT_REASON_RDPMC: + wreg(cpu->hvf_fd, HV_X86_RAX, 0); + wreg(cpu->hvf_fd, HV_X86_RDX, 0); + macvm_set_rip(cpu, rip + ins_len); + break; + case VMX_REASON_VMCALL: + // TODO: maybe just take this out? + // if (g_hypervisor_iface) { + // load_regs(cpu); + // g_hypervisor_iface->hypercall_handler(cpu); + // RIP(cpu) +=3D rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRUCT= ION_LENGTH); + // store_regs(cpu); + // } + break; + default: + fprintf(stderr, "%llx: unhandled exit %llx\n", rip, exit_r= eason); + } + } while (ret =3D=3D 0); + + return ret; +} + +static bool hvf_allowed; + +static int hvf_accel_init(MachineState *ms) +{ + int x; + hv_return_t ret; + HVFState *s; + + hvf_disable(0); + ret =3D hv_vm_create(HV_VM_DEFAULT); + assert_hvf_ok(ret); + + s =3D g_new0(HVFState, 1); +=20 + s->num_slots =3D 32; + for (x =3D 0; x < s->num_slots; ++x) { + s->slots[x].size =3D 0; + s->slots[x].slot_id =3D x; + } + =20 + hvf_state =3D s; + cpu_interrupt_handler =3D hvf_handle_interrupt; + memory_listener_register(&hvf_memory_listener, &address_space_memory); + memory_listener_register(&hvf_io_listener, &address_space_io); + return 0; +} + +static void hvf_accel_class_init(ObjectClass *oc, void *data) +{ + AccelClass *ac =3D ACCEL_CLASS(oc); + ac->name =3D "HVF"; + ac->init_machine =3D hvf_accel_init; + ac->allowed =3D &hvf_allowed; +} + +static const TypeInfo hvf_accel_type =3D { + .name =3D TYPE_HVF_ACCEL, + .parent =3D TYPE_ACCEL, + .class_init =3D hvf_accel_class_init, +}; + +static void hvf_type_init(void) +{ + type_register_static(&hvf_accel_type); +} + +type_init(hvf_type_init); diff --git a/target/i386/hvf-i386.h b/target/i386/hvf-i386.h new file mode 100644 index 0000000000..f3f958058a --- /dev/null +++ b/target/i386/hvf-i386.h @@ -0,0 +1,48 @@ +/* + * QEMU Hypervisor.framework (HVF) support + * + * Copyright 2017 Google Inc + * + * Adapted from target-i386/hax-i386.h: + * Copyright (c) 2011 Intel Corporation + * Written by: + * Jiang Yunhong + * + * This work is licensed under the terms of the GNU GPL, version 2 or late= r. + * See the COPYING file in the top-level directory. 
+ * + */ + +#ifndef _HVF_I386_H +#define _HVF_I386_H + +#include "sysemu/hvf.h" +#include "cpu.h" +#include "hvf-utils/x86.h" + +#define HVF_MAX_VCPU 0x10 +#define MAX_VM_ID 0x40 +#define MAX_VCPU_ID 0x40 + +extern struct hvf_state hvf_global; + +struct hvf_vm { + int id; + struct hvf_vcpu_state *vcpus[HVF_MAX_VCPU]; +}; + +struct hvf_state { + uint32_t version; + struct hvf_vm *vm; + uint64_t mem_quota; +}; + +#ifdef NEED_CPU_H +/* Functions exported to host specific mode */ + +/* Host specific functions */ +int hvf_inject_interrupt(CPUArchState * env, int vector); +int hvf_vcpu_run(struct hvf_vcpu_state *vcpu); +#endif + +#endif diff --git a/target/i386/hvf-utils/Makefile.objs b/target/i386/hvf-utils/Ma= kefile.objs new file mode 100644 index 0000000000..7df219ad9c --- /dev/null +++ b/target/i386/hvf-utils/Makefile.objs @@ -0,0 +1 @@ +obj-y +=3D x86.o x86_cpuid.o x86_decode.o x86_descr.o x86_emu.o x86_flags.= o x86_mmu.o x86hvf.o diff --git a/target/i386/hvf-utils/README.md b/target/i386/hvf-utils/README= .md new file mode 100644 index 0000000000..0d27a0d52b --- /dev/null +++ b/target/i386/hvf-utils/README.md @@ -0,0 +1,7 @@ +# OS X Hypervisor.framework support in QEMU + +These sources (and ../hvf-all.c) are adapted from Veertu Inc's vdhh (Veert= u Desktop Hosted Hypervisor) (last known location: https://github.com/veert= uinc/vdhh) with some minor changes, the most significant of which were: + +1. Adapt to our current QEMU's `CPUState` structure and `address_space_rw`= API; many struct members have been moved around (emulated x86 state, kvm_x= save_buf) due to historical differences + QEMU needing to handle more emula= tion targets. +2. Removal of `apic_page` and hyperv-related functionality. +3. More relaxed use of `qemu_mutex_lock_iothread`. diff --git a/target/i386/hvf-utils/vmcs.h b/target/i386/hvf-utils/vmcs.h new file mode 100644 index 0000000000..6f7ccb361a --- /dev/null +++ b/target/i386/hvf-utils/vmcs.h @@ -0,0 +1,368 @@ +/*- + * Copyright (c) 2011 NetApp, Inc. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY NETAPP, INC ``AS IS'' AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURP= OSE + * ARE DISCLAIMED. IN NO EVENT SHALL NETAPP, INC OR CONTRIBUTORS BE LIABLE + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENT= IAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STR= ICT + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY W= AY + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + * SUCH DAMAGE. 
+ * + * $FreeBSD$ + */ + +#ifndef _VMCS_H_ +#define _VMCS_H_ + +#include +#include + +#define VMCS_INITIAL 0xffffffffffffffff + +#define VMCS_IDENT(encoding) ((encoding) | 0x80000000) +/* + * VMCS field encodings from Appendix H, Intel Architecture Manual Vol3B. + */ +#define VMCS_INVALID_ENCODING 0xffffffff + +/* 16-bit control fields */ +#define VMCS_VPID 0x00000000 +#define VMCS_PIR_VECTOR 0x00000002 + +/* 16-bit guest-state fields */ +#define VMCS_GUEST_ES_SELECTOR 0x00000800 +#define VMCS_GUEST_CS_SELECTOR 0x00000802 +#define VMCS_GUEST_SS_SELECTOR 0x00000804 +#define VMCS_GUEST_DS_SELECTOR 0x00000806 +#define VMCS_GUEST_FS_SELECTOR 0x00000808 +#define VMCS_GUEST_GS_SELECTOR 0x0000080A +#define VMCS_GUEST_LDTR_SELECTOR 0x0000080C +#define VMCS_GUEST_TR_SELECTOR 0x0000080E +#define VMCS_GUEST_INTR_STATUS 0x00000810 + +/* 16-bit host-state fields */ +#define VMCS_HOST_ES_SELECTOR 0x00000C00 +#define VMCS_HOST_CS_SELECTOR 0x00000C02 +#define VMCS_HOST_SS_SELECTOR 0x00000C04 +#define VMCS_HOST_DS_SELECTOR 0x00000C06 +#define VMCS_HOST_FS_SELECTOR 0x00000C08 +#define VMCS_HOST_GS_SELECTOR 0x00000C0A +#define VMCS_HOST_TR_SELECTOR 0x00000C0C + +/* 64-bit control fields */ +#define VMCS_IO_BITMAP_A 0x00002000 +#define VMCS_IO_BITMAP_B 0x00002002 +#define VMCS_MSR_BITMAP 0x00002004 +#define VMCS_EXIT_MSR_STORE 0x00002006 +#define VMCS_EXIT_MSR_LOAD 0x00002008 +#define VMCS_ENTRY_MSR_LOAD 0x0000200A +#define VMCS_EXECUTIVE_VMCS 0x0000200C +#define VMCS_TSC_OFFSET 0x00002010 +#define VMCS_VIRTUAL_APIC 0x00002012 +#define VMCS_APIC_ACCESS 0x00002014 +#define VMCS_PIR_DESC 0x00002016 +#define VMCS_EPTP 0x0000201A +#define VMCS_EOI_EXIT0 0x0000201C +#define VMCS_EOI_EXIT1 0x0000201E +#define VMCS_EOI_EXIT2 0x00002020 +#define VMCS_EOI_EXIT3 0x00002022 +#define VMCS_EOI_EXIT(vector) (VMCS_EOI_EXIT0 + ((vector) / 64) * 2) + +/* 64-bit read-only fields */ +#define VMCS_GUEST_PHYSICAL_ADDRESS 0x00002400 + +/* 64-bit guest-state fields */ +#define VMCS_LINK_POINTER 0x00002800 +#define VMCS_GUEST_IA32_DEBUGCTL 0x00002802 +#define VMCS_GUEST_IA32_PAT 0x00002804 +#define VMCS_GUEST_IA32_EFER 0x00002806 +#define VMCS_GUEST_IA32_PERF_GLOBAL_CTRL 0x00002808 +#define VMCS_GUEST_PDPTE0 0x0000280A +#define VMCS_GUEST_PDPTE1 0x0000280C +#define VMCS_GUEST_PDPTE2 0x0000280E +#define VMCS_GUEST_PDPTE3 0x00002810 + +/* 64-bit host-state fields */ +#define VMCS_HOST_IA32_PAT 0x00002C00 +#define VMCS_HOST_IA32_EFER 0x00002C02 +#define VMCS_HOST_IA32_PERF_GLOBAL_CTRL 0x00002C04 + +/* 32-bit control fields */ +#define VMCS_PIN_BASED_CTLS 0x00004000 +#define VMCS_PRI_PROC_BASED_CTLS 0x00004002 +#define VMCS_EXCEPTION_BITMAP 0x00004004 +#define VMCS_PF_ERROR_MASK 0x00004006 +#define VMCS_PF_ERROR_MATCH 0x00004008 +#define VMCS_CR3_TARGET_COUNT 0x0000400A +#define VMCS_EXIT_CTLS 0x0000400C +#define VMCS_EXIT_MSR_STORE_COUNT 0x0000400E +#define VMCS_EXIT_MSR_LOAD_COUNT 0x00004010 +#define VMCS_ENTRY_CTLS 0x00004012 +#define VMCS_ENTRY_MSR_LOAD_COUNT 0x00004014 +#define VMCS_ENTRY_INTR_INFO 0x00004016 +#define VMCS_ENTRY_EXCEPTION_ERROR 0x00004018 +#define VMCS_ENTRY_INST_LENGTH 0x0000401A +#define VMCS_TPR_THRESHOLD 0x0000401C +#define VMCS_SEC_PROC_BASED_CTLS 0x0000401E +#define VMCS_PLE_GAP 0x00004020 +#define VMCS_PLE_WINDOW 0x00004022 + +/* 32-bit read-only data fields */ +#define VMCS_INSTRUCTION_ERROR 0x00004400 +#define VMCS_EXIT_REASON 0x00004402 +#define VMCS_EXIT_INTR_INFO 0x00004404 +#define VMCS_EXIT_INTR_ERRCODE 0x00004406 +#define VMCS_IDT_VECTORING_INFO 0x00004408 +#define VMCS_IDT_VECTORING_ERROR 0x0000440A 
+#define VMCS_EXIT_INSTRUCTION_LENGTH 0x0000440C +#define VMCS_EXIT_INSTRUCTION_INFO 0x0000440E + +/* 32-bit guest-state fields */ +#define VMCS_GUEST_ES_LIMIT 0x00004800 +#define VMCS_GUEST_CS_LIMIT 0x00004802 +#define VMCS_GUEST_SS_LIMIT 0x00004804 +#define VMCS_GUEST_DS_LIMIT 0x00004806 +#define VMCS_GUEST_FS_LIMIT 0x00004808 +#define VMCS_GUEST_GS_LIMIT 0x0000480A +#define VMCS_GUEST_LDTR_LIMIT 0x0000480C +#define VMCS_GUEST_TR_LIMIT 0x0000480E +#define VMCS_GUEST_GDTR_LIMIT 0x00004810 +#define VMCS_GUEST_IDTR_LIMIT 0x00004812 +#define VMCS_GUEST_ES_ACCESS_RIGHTS 0x00004814 +#define VMCS_GUEST_CS_ACCESS_RIGHTS 0x00004816 +#define VMCS_GUEST_SS_ACCESS_RIGHTS 0x00004818 +#define VMCS_GUEST_DS_ACCESS_RIGHTS 0x0000481A +#define VMCS_GUEST_FS_ACCESS_RIGHTS 0x0000481C +#define VMCS_GUEST_GS_ACCESS_RIGHTS 0x0000481E +#define VMCS_GUEST_LDTR_ACCESS_RIGHTS 0x00004820 +#define VMCS_GUEST_TR_ACCESS_RIGHTS 0x00004822 +#define VMCS_GUEST_INTERRUPTIBILITY 0x00004824 +#define VMCS_GUEST_ACTIVITY 0x00004826 +#define VMCS_GUEST_SMBASE 0x00004828 +#define VMCS_GUEST_IA32_SYSENTER_CS 0x0000482A +#define VMCS_PREEMPTION_TIMER_VALUE 0x0000482E + +/* 32-bit host state fields */ +#define VMCS_HOST_IA32_SYSENTER_CS 0x00004C00 + +/* Natural Width control fields */ +#define VMCS_CR0_MASK 0x00006000 +#define VMCS_CR4_MASK 0x00006002 +#define VMCS_CR0_SHADOW 0x00006004 +#define VMCS_CR4_SHADOW 0x00006006 +#define VMCS_CR3_TARGET0 0x00006008 +#define VMCS_CR3_TARGET1 0x0000600A +#define VMCS_CR3_TARGET2 0x0000600C +#define VMCS_CR3_TARGET3 0x0000600E + +/* Natural Width read-only fields */ +#define VMCS_EXIT_QUALIFICATION 0x00006400 +#define VMCS_IO_RCX 0x00006402 +#define VMCS_IO_RSI 0x00006404 +#define VMCS_IO_RDI 0x00006406 +#define VMCS_IO_RIP 0x00006408 +#define VMCS_GUEST_LINEAR_ADDRESS 0x0000640A + +/* Natural Width guest-state fields */ +#define VMCS_GUEST_CR0 0x00006800 +#define VMCS_GUEST_CR3 0x00006802 +#define VMCS_GUEST_CR4 0x00006804 +#define VMCS_GUEST_ES_BASE 0x00006806 +#define VMCS_GUEST_CS_BASE 0x00006808 +#define VMCS_GUEST_SS_BASE 0x0000680A +#define VMCS_GUEST_DS_BASE 0x0000680C +#define VMCS_GUEST_FS_BASE 0x0000680E +#define VMCS_GUEST_GS_BASE 0x00006810 +#define VMCS_GUEST_LDTR_BASE 0x00006812 +#define VMCS_GUEST_TR_BASE 0x00006814 +#define VMCS_GUEST_GDTR_BASE 0x00006816 +#define VMCS_GUEST_IDTR_BASE 0x00006818 +#define VMCS_GUEST_DR7 0x0000681A +#define VMCS_GUEST_RSP 0x0000681C +#define VMCS_GUEST_RIP 0x0000681E +#define VMCS_GUEST_RFLAGS 0x00006820 +#define VMCS_GUEST_PENDING_DBG_EXCEPTIONS 0x00006822 +#define VMCS_GUEST_IA32_SYSENTER_ESP 0x00006824 +#define VMCS_GUEST_IA32_SYSENTER_EIP 0x00006826 + +/* Natural Width host-state fields */ +#define VMCS_HOST_CR0 0x00006C00 +#define VMCS_HOST_CR3 0x00006C02 +#define VMCS_HOST_CR4 0x00006C04 +#define VMCS_HOST_FS_BASE 0x00006C06 +#define VMCS_HOST_GS_BASE 0x00006C08 +#define VMCS_HOST_TR_BASE 0x00006C0A +#define VMCS_HOST_GDTR_BASE 0x00006C0C +#define VMCS_HOST_IDTR_BASE 0x00006C0E +#define VMCS_HOST_IA32_SYSENTER_ESP 0x00006C10 +#define VMCS_HOST_IA32_SYSENTER_EIP 0x00006C12 +#define VMCS_HOST_RSP 0x00006C14 +#define VMCS_HOST_RIP 0x00006c16 + +/* + * VM instruction error numbers + */ +#define VMRESUME_WITH_NON_LAUNCHED_VMCS 5 + +/* + * VMCS exit reasons + */ +#define EXIT_REASON_EXCEPTION 0 +#define EXIT_REASON_EXT_INTR 1 +#define EXIT_REASON_TRIPLE_FAULT 2 +#define EXIT_REASON_INIT 3 +#define EXIT_REASON_SIPI 4 +#define EXIT_REASON_IO_SMI 5 +#define EXIT_REASON_SMI 6 +#define EXIT_REASON_INTR_WINDOW 7 +#define EXIT_REASON_NMI_WINDOW 8 
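As a hedged illustration of how the read-only fields and exit-reason numbers in this header are typically consumed (this sketch is not part of the patch; it assumes the Hypervisor.framework hv_vcpuid_t type and the rvmcs() accessor that this patch adds later in vmx.h):

    /* Illustration only: the basic exit reason lives in bits 15:0 of
     * VMCS_EXIT_REASON; VMCS_EXIT_QUALIFICATION carries reason-specific data. */
    static void example_handle_exit(hv_vcpuid_t vcpu)
    {
        uint64_t reason = rvmcs(vcpu, VMCS_EXIT_REASON) & 0xffff;
        uint64_t qual   = rvmcs(vcpu, VMCS_EXIT_QUALIFICATION);

        switch (reason) {
        case EXIT_REASON_CPUID:      /* defined just below */
            /* emulate CPUID, then advance RIP by VMCS_EXIT_INSTRUCTION_LENGTH */
            break;
        case EXIT_REASON_EPT_FAULT:  /* qual holds the EPT_VIOLATION_* bits below */
            break;
        default:
            break;
        }
        (void)qual;
    }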
+#define EXIT_REASON_TASK_SWITCH 9 +#define EXIT_REASON_CPUID 10 +#define EXIT_REASON_GETSEC 11 +#define EXIT_REASON_HLT 12 +#define EXIT_REASON_INVD 13 +#define EXIT_REASON_INVLPG 14 +#define EXIT_REASON_RDPMC 15 +#define EXIT_REASON_RDTSC 16 +#define EXIT_REASON_RSM 17 +#define EXIT_REASON_VMCALL 18 +#define EXIT_REASON_VMCLEAR 19 +#define EXIT_REASON_VMLAUNCH 20 +#define EXIT_REASON_VMPTRLD 21 +#define EXIT_REASON_VMPTRST 22 +#define EXIT_REASON_VMREAD 23 +#define EXIT_REASON_VMRESUME 24 +#define EXIT_REASON_VMWRITE 25 +#define EXIT_REASON_VMXOFF 26 +#define EXIT_REASON_VMXON 27 +#define EXIT_REASON_CR_ACCESS 28 +#define EXIT_REASON_DR_ACCESS 29 +#define EXIT_REASON_INOUT 30 +#define EXIT_REASON_RDMSR 31 +#define EXIT_REASON_WRMSR 32 +#define EXIT_REASON_INVAL_VMCS 33 +#define EXIT_REASON_INVAL_MSR 34 +#define EXIT_REASON_MWAIT 36 +#define EXIT_REASON_MTF 37 +#define EXIT_REASON_MONITOR 39 +#define EXIT_REASON_PAUSE 40 +#define EXIT_REASON_MCE_DURING_ENTRY 41 +#define EXIT_REASON_TPR 43 +#define EXIT_REASON_APIC_ACCESS 44 +#define EXIT_REASON_VIRTUALIZED_EOI 45 +#define EXIT_REASON_GDTR_IDTR 46 +#define EXIT_REASON_LDTR_TR 47 +#define EXIT_REASON_EPT_FAULT 48 +#define EXIT_REASON_EPT_MISCONFIG 49 +#define EXIT_REASON_INVEPT 50 +#define EXIT_REASON_RDTSCP 51 +#define EXIT_REASON_VMX_PREEMPT 52 +#define EXIT_REASON_INVVPID 53 +#define EXIT_REASON_WBINVD 54 +#define EXIT_REASON_XSETBV 55 +#define EXIT_REASON_APIC_WRITE 56 + +/* + * NMI unblocking due to IRET. + * + * Applies to VM-exits due to hardware exception or EPT fault. + */ +#define EXIT_QUAL_NMIUDTI (1 << 12) +/* + * VMCS interrupt information fields + */ +#define VMCS_INTR_VALID (1U << 31) +#define VMCS_INTR_T_MASK 0x700 /* Interruption-info type */ +#define VMCS_INTR_T_HWINTR (0 << 8) +#define VMCS_INTR_T_NMI (2 << 8) +#define VMCS_INTR_T_HWEXCEPTION (3 << 8) +#define VMCS_INTR_T_SWINTR (4 << 8) +#define VMCS_INTR_T_PRIV_SWEXCEPTION (5 << 8) +#define VMCS_INTR_T_SWEXCEPTION (6 << 8) +#define VMCS_INTR_DEL_ERRCODE (1 << 11) + +/* + * VMCS IDT-Vectoring information fields + */ +#define VMCS_IDT_VEC_VALID (1U << 31) +#define VMCS_IDT_VEC_TYPE 0x700 +#define VMCS_IDT_VEC_ERRCODE_VALID (1U << 11) +#define VMCS_IDT_VEC_HWINTR (0 << 8) +#define VMCS_IDT_VEC_NMI (2 << 8) +#define VMCS_IDT_VEC_HWEXCEPTION (3 << 8) +#define VMCS_IDT_VEC_SWINTR (4 << 8) + +/* + * VMCS Guest interruptibility field + */ +#define VMCS_INTERRUPTIBILITY_STI_BLOCKING (1 << 0) +#define VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING (1 << 1) +#define VMCS_INTERRUPTIBILITY_SMI_BLOCKING (1 << 2) +#define VMCS_INTERRUPTIBILITY_NMI_BLOCKING (1 << 3) + +/* + * Exit qualification for EXIT_REASON_INVAL_VMCS + */ +#define EXIT_QUAL_NMI_WHILE_STI_BLOCKING 3 + +/* + * Exit qualification for EPT violation + */ +#define EPT_VIOLATION_DATA_READ (1UL << 0) +#define EPT_VIOLATION_DATA_WRITE (1UL << 1) +#define EPT_VIOLATION_INST_FETCH (1UL << 2) +#define EPT_VIOLATION_GPA_READABLE (1UL << 3) +#define EPT_VIOLATION_GPA_WRITEABLE (1UL << 4) +#define EPT_VIOLATION_GPA_EXECUTABLE (1UL << 5) +#define EPT_VIOLATION_GLA_VALID (1UL << 7) +#define EPT_VIOLATION_XLAT_VALID (1UL << 8) + +/* + * Exit qualification for APIC-access VM exit + */ +#define APIC_ACCESS_OFFSET(qual) ((qual) & 0xFFF) +#define APIC_ACCESS_TYPE(qual) (((qual) >> 12) & 0xF) + +/* + * Exit qualification for APIC-write VM exit + */ +#define APIC_WRITE_OFFSET(qual) ((qual) & 0xFFF) + + +#define VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING (1 << 2) +#define VMCS_PRI_PROC_BASED_CTLS_TSC_OFFSET (1 << 3) +#define 
VMCS_PRI_PROC_BASED_CTLS_HLT (1 << 7) +#define VMCS_PRI_PROC_BASED_CTLS_MWAIT (1 << 10) +#define VMCS_PRI_PROC_BASED_CTLS_TSC (1 << 12) +#define VMCS_PRI_PROC_BASED_CTLS_CR8_LOAD (1 << 19) +#define VMCS_PRI_PROC_BASED_CTLS_CR8_STORE (1 << 20) +#define VMCS_PRI_PROC_BASED_CTLS_TPR_SHADOW (1 << 21) +#define VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING (1 << 22) +#define VMCS_PRI_PROC_BASED_CTLS_SEC_CONTROL (1 << 31) + +#define VMCS_PRI_PROC_BASED2_CTLS_APIC_ACCESSES (1 << 0) +#define VMCS_PRI_PROC_BASED2_CTLS_X2APIC (1 << 4) + +enum task_switch_reason { + TSR_CALL, + TSR_IRET, + TSR_JMP, + TSR_IDT_GATE, /* task gate in IDT */ +}; + +#endif diff --git a/target/i386/hvf-utils/vmx.h b/target/i386/hvf-utils/vmx.h new file mode 100644 index 0000000000..8a080e6777 --- /dev/null +++ b/target/i386/hvf-utils/vmx.h @@ -0,0 +1,200 @@ +/* + * Copyright (C) 2016 Veertu Inc, + * Copyright (C) 2017 Google Inc, + * Based on Veertu vddh/vmm/vmx.h + * + * Interfaces to Hypervisor.framework to read/write X86 registers and VMCS. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 or + * (at your option) version 3 of the License. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see . + */ + +#ifndef VMX_H +#define VMX_H + +#include +#include +#include +#include "vmcs.h" +#include "cpu.h" +#include "x86.h" + +#include "exec/address-spaces.h" + +static uint64_t inline rreg(hv_vcpuid_t vcpu, hv_x86_reg_t reg) +{ + uint64_t v; + + if (hv_vcpu_read_register(vcpu, reg, &v)) { + abort(); + } + + return v; +} + +/* write GPR */ +static void inline wreg(hv_vcpuid_t vcpu, hv_x86_reg_t reg, uint64_t v) +{ + if (hv_vcpu_write_register(vcpu, reg, v)) { + abort(); + } +} + +/* read VMCS field */ +static uint64_t inline rvmcs(hv_vcpuid_t vcpu, uint32_t field) +{ + uint64_t v; + + hv_vmx_vcpu_read_vmcs(vcpu, field, &v); + + return v; +} + +/* write VMCS field */ +static void inline wvmcs(hv_vcpuid_t vcpu, uint32_t field, uint64_t v) +{ + hv_vmx_vcpu_write_vmcs(vcpu, field, v); +} + +/* desired control word constrained by hardware/hypervisor capabilities */ +static uint64_t inline cap2ctrl(uint64_t cap, uint64_t ctrl) +{ + return (ctrl | (cap & 0xffffffff)) & (cap >> 32); +} + +#define VM_ENTRY_GUEST_LMA (1LL << 9) + +#define AR_TYPE_ACCESSES_MASK 1 +#define AR_TYPE_READABLE_MASK (1 << 1) +#define AR_TYPE_WRITEABLE_MASK (1 << 2) +#define AR_TYPE_CODE_MASK (1 << 3) +#define AR_TYPE_MASK 0x0f +#define AR_TYPE_BUSY_64_TSS 11 +#define AR_TYPE_BUSY_32_TSS 11 +#define AR_TYPE_BUSY_16_TSS 3 +#define AR_TYPE_LDT 2 + +static void enter_long_mode(hv_vcpuid_t vcpu, uint64_t cr0, uint64_t efer) +{ + uint64_t entry_ctls; + + efer |=3D EFER_LMA; + wvmcs(vcpu, VMCS_GUEST_IA32_EFER, efer); + entry_ctls =3D rvmcs(vcpu, VMCS_ENTRY_CTLS); + wvmcs(vcpu, VMCS_ENTRY_CTLS, rvmcs(vcpu, VMCS_ENTRY_CTLS) | VM_ENTRY_G= UEST_LMA); + + uint64_t guest_tr_ar =3D rvmcs(vcpu, VMCS_GUEST_TR_ACCESS_RIGHTS); + if ((efer & EFER_LME) && (guest_tr_ar & AR_TYPE_MASK) !=3D AR_TYPE_BUS= Y_64_TSS) { + wvmcs(vcpu, VMCS_GUEST_TR_ACCESS_RIGHTS, (guest_tr_ar & ~AR_TYPE_M= ASK) | AR_TYPE_BUSY_64_TSS); + } +} + +static 
void exit_long_mode(hv_vcpuid_t vcpu, uint64_t cr0, uint64_t efer) +{ + uint64_t entry_ctls; + + entry_ctls =3D rvmcs(vcpu, VMCS_ENTRY_CTLS); + wvmcs(vcpu, VMCS_ENTRY_CTLS, entry_ctls & ~VM_ENTRY_GUEST_LMA); + + efer &=3D ~EFER_LMA; + wvmcs(vcpu, VMCS_GUEST_IA32_EFER, efer); +} + +static void inline macvm_set_cr0(hv_vcpuid_t vcpu, uint64_t cr0) +{ + int i; + uint64_t pdpte[4] =3D {0, 0, 0, 0}; + uint64_t efer =3D rvmcs(vcpu, VMCS_GUEST_IA32_EFER); + uint64_t old_cr0 =3D rvmcs(vcpu, VMCS_GUEST_CR0); + + if ((cr0 & CR0_PG) && (rvmcs(vcpu, VMCS_GUEST_CR4) & CR4_PAE) && !(efe= r & EFER_LME)) + address_space_rw(&address_space_memory, rvmcs(vcpu, VMCS_GUEST_CR3= ) & ~0x1f, + MEMTXATTRS_UNSPECIFIED, + (uint8_t *)pdpte, 32, 0); + + for (i =3D 0; i < 4; i++) + wvmcs(vcpu, VMCS_GUEST_PDPTE0 + i * 2, pdpte[i]); + + wvmcs(vcpu, VMCS_CR0_MASK, CR0_CD | CR0_NE | CR0_PG); + wvmcs(vcpu, VMCS_CR0_SHADOW, cr0); + + cr0 &=3D ~CR0_CD; + wvmcs(vcpu, VMCS_GUEST_CR0, cr0 | CR0_NE| CR0_ET); + + if (efer & EFER_LME) { + if (!(old_cr0 & CR0_PG) && (cr0 & CR0_PG)) + enter_long_mode(vcpu, cr0, efer); + if (/*(old_cr0 & CR0_PG) &&*/ !(cr0 & CR0_PG)) + exit_long_mode(vcpu, cr0, efer); + } + + hv_vcpu_invalidate_tlb(vcpu); + hv_vcpu_flush(vcpu); +} + +static void inline macvm_set_cr4(hv_vcpuid_t vcpu, uint64_t cr4) +{ + uint64_t guest_cr4 =3D cr4 | CR4_VMXE; + + wvmcs(vcpu, VMCS_GUEST_CR4, guest_cr4); + wvmcs(vcpu, VMCS_CR4_SHADOW, cr4); + + hv_vcpu_invalidate_tlb(vcpu); + hv_vcpu_flush(vcpu); +} + +static void inline macvm_set_rip(CPUState *cpu, uint64_t rip) +{ + uint64_t val; + + /* BUG, should take considering overlap.. */ + wreg(cpu->hvf_fd, HV_X86_RIP, rip); + + /* after moving forward in rip, we need to clean INTERRUPTABILITY */ + val =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY); + if (val & (VMCS_INTERRUPTIBILITY_STI_BLOCKING | VMCS_INTERRUPTIBILITY_M= OVSS_BLOCKING)) + wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, + val & ~(VMCS_INTERRUPTIBILITY_STI_BLOCKING | VMCS_INTERRUPTI= BILITY_MOVSS_BLOCKING)); +} + +static void inline vmx_clear_nmi_blocking(CPUState *cpu) +{ + uint32_t gi =3D (uint32_t) rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBIL= ITY); + gi &=3D ~VMCS_INTERRUPTIBILITY_NMI_BLOCKING; + wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi); +} + +static void inline vmx_set_nmi_blocking(CPUState *cpu) +{ + uint32_t gi =3D (uint32_t)rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILI= TY); + gi |=3D VMCS_INTERRUPTIBILITY_NMI_BLOCKING; + wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi); +} + +static void inline vmx_set_nmi_window_exiting(CPUState *cpu) +{ + uint64_t val; + val =3D rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS); + wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val | VMCS_PRI_PROC_BASED= _CTLS_NMI_WINDOW_EXITING); + +} + +static void inline vmx_clear_nmi_window_exiting(CPUState *cpu) +{ + + uint64_t val; + val =3D rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS); + wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val & ~VMCS_PRI_PROC_BASE= D_CTLS_NMI_WINDOW_EXITING); +} + +#endif diff --git a/target/i386/hvf-utils/x86.c b/target/i386/hvf-utils/x86.c new file mode 100644 index 0000000000..e3db2c9c8b --- /dev/null +++ b/target/i386/hvf-utils/x86.c @@ -0,0 +1,174 @@ +/* + * Copyright (C) 2016 Veertu Inc, + * Copyright (C) 2017 Google Inc, + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 or + * (at your option) version 3 of the License. 
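A worked example of what cap2ctrl() above computes may help: for a VMX capability MSR, the low 32 bits report the control bits that must be 1 and the high 32 bits the bits that are allowed to be 1, so the helper forces the former on and filters the request by the latter. The snippet below is illustrative only and not part of the patch; the numbers are invented.

    /* Illustrative only: cap2ctrl() semantics, same expression as in vmx.h. */
    #include <assert.h>
    #include <stdint.h>

    static uint64_t demo_cap2ctrl(uint64_t cap, uint64_t ctrl)
    {
        return (ctrl | (cap & 0xffffffff)) & (cap >> 32);
    }

    int main(void)
    {
        uint64_t cap  = ((uint64_t)0x0000021e << 32) | 0x00000016; /* allowed-1 : must-be-1 */
        uint64_t ctrl = 0x00000308;      /* caller asks for bits 3, 8 and 9 */
        /* bits 1, 2, 4 are forced on; bit 8 is not allowed and is cleared  */
        assert(demo_cap2ctrl(cap, ctrl) == 0x0000021e);
        return 0;
    }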
+ * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see . + */ + +#include "qemu/osdep.h" + +#include "qemu-common.h" +#include "x86_decode.h" +#include "x86_emu.h" +#include "vmcs.h" +#include "vmx.h" +#include "x86_mmu.h" +#include "x86_descr.h" + +static uint32_t x86_segment_access_rights(struct x86_segment_descriptor *v= ar) +{ + uint32_t ar; + + if (!var->p) { + ar =3D 1 << 16; + return ar; + } + + ar =3D var->type & 15; + ar |=3D (var->s & 1) << 4; + ar |=3D (var->dpl & 3) << 5; + ar |=3D (var->p & 1) << 7; + ar |=3D (var->avl & 1) << 12; + ar |=3D (var->l & 1) << 13; + ar |=3D (var->db & 1) << 14; + ar |=3D (var->g & 1) << 15; + return ar; +} + +bool x86_read_segment_descriptor(struct CPUState *cpu, struct x86_segment_= descriptor *desc, x68_segment_selector sel) +{ + addr_t base; + uint32_t limit; + + ZERO_INIT(*desc); + // valid gdt descriptors start from index 1 + if (!sel.index && GDT_SEL =3D=3D sel.ti) + return false; + + if (GDT_SEL =3D=3D sel.ti) { + base =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE); + limit =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT); + } else { + base =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE); + limit =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT); + } + + if (sel.index * 8 >=3D limit) + return false; + + vmx_read_mem(cpu, desc, base + sel.index * 8, sizeof(*desc)); + return true; +} + +bool x86_write_segment_descriptor(struct CPUState *cpu, struct x86_segment= _descriptor *desc, x68_segment_selector sel) +{ + addr_t base; + uint32_t limit; + =20 + if (GDT_SEL =3D=3D sel.ti) { + base =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE); + limit =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT); + } else { + base =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE); + limit =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT); + } + =20 + if (sel.index * 8 >=3D limit) { + printf("%s: gdt limit\n", __FUNCTION__); + return false; + } + vmx_write_mem(cpu, base + sel.index * 8, desc, sizeof(*desc)); + return true; +} + +bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_de= sc, int gate) +{ + addr_t base =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_BASE); + uint32_t limit =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_LIMIT); + + ZERO_INIT(*idt_desc); + if (gate * 8 >=3D limit) { + printf("%s: idt limit\n", __FUNCTION__); + return false; + } + + vmx_read_mem(cpu, idt_desc, base + gate * 8, sizeof(*idt_desc)); + return true; +} + +bool x86_is_protected(struct CPUState *cpu) +{ + uint64_t cr0 =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0); + return cr0 & CR0_PE; +} + +bool x86_is_real(struct CPUState *cpu) +{ + return !x86_is_protected(cpu); +} + +bool x86_is_v8086(struct CPUState *cpu) +{ + return (x86_is_protected(cpu) && (RFLAGS(cpu) & RFLAGS_VM)); +} + +bool x86_is_long_mode(struct CPUState *cpu) +{ + return rvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER) & EFER_LMA; +} + +bool x86_is_long64_mode(struct CPUState *cpu) +{ + struct vmx_segment desc; + vmx_read_segment_descriptor(cpu, &desc, REG_SEG_CS); + + return x86_is_long_mode(cpu) && ((desc.ar >> 13) & 1); +} + +bool x86_is_paging_mode(struct CPUState *cpu) +{ + uint64_t cr0 =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0); + return cr0 & CR0_PG; +} + +bool x86_is_pae_enabled(struct CPUState *cpu) +{ + uint64_t cr4 =3D rvmcs(cpu->hvf_fd, 
VMCS_GUEST_CR4); + return cr4 & CR4_PAE; +} + +addr_t linear_addr(struct CPUState *cpu, addr_t addr, x86_reg_segment seg) +{ + return vmx_read_segment_base(cpu, seg) + addr; +} + +addr_t linear_addr_size(struct CPUState *cpu, addr_t addr, int size, x86_r= eg_segment seg) +{ + switch (size) { + case 2: + addr =3D (uint16_t)addr; + break; + case 4: + addr =3D (uint32_t)addr; + break; + default: + break; + } + return linear_addr(cpu, addr, seg); +} + +addr_t linear_rip(struct CPUState *cpu, addr_t rip) +{ + return linear_addr(cpu, rip, REG_SEG_CS); +} diff --git a/target/i386/hvf-utils/x86.h b/target/i386/hvf-utils/x86.h new file mode 100644 index 0000000000..5dffdd6568 --- /dev/null +++ b/target/i386/hvf-utils/x86.h @@ -0,0 +1,470 @@ +/* + * Copyright (C) 2016 Veertu Inc, + * Copyright (C) 2017 Veertu Inc, + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 or + * (at your option) version 3 of the License. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see . + */ + +#pragma once + +#include +#include +#include +#include +#include "qemu-common.h" +#include "x86_flags.h" + +// exceptions +typedef enum x86_exception { + EXCEPTION_DE, // divide error + EXCEPTION_DB, // debug fault + EXCEPTION_NMI, // non-maskable interrupt + EXCEPTION_BP, // breakpoint trap + EXCEPTION_OF, // overflow trap + EXCEPTION_BR, // boundary range exceeded fault + EXCEPTION_UD, // undefined opcode + EXCEPTION_NM, // device not available + EXCEPTION_DF, // double fault + EXCEPTION_RSVD, // not defined + EXCEPTION_TS, // invalid TSS fault + EXCEPTION_NP, // not present fault + EXCEPTION_GP, // general protection fault + EXCEPTION_PF, // page fault + EXCEPTION_RSVD2, // not defined +} x86_exception; + +// general purpose regs +typedef enum x86_reg_name { + REG_RAX =3D 0, + REG_RCX =3D 1, + REG_RDX =3D 2, + REG_RBX =3D 3, + REG_RSP =3D 4, + REG_RBP =3D 5, + REG_RSI =3D 6, + REG_RDI =3D 7, + REG_R8 =3D 8, + REG_R9 =3D 9, + REG_R10 =3D 10, + REG_R11 =3D 11, + REG_R12 =3D 12, + REG_R13 =3D 13, + REG_R14 =3D 14, + REG_R15 =3D 15, +} x86_reg_name; + +// segment regs +typedef enum x86_reg_segment { + REG_SEG_ES =3D 0, + REG_SEG_CS =3D 1, + REG_SEG_SS =3D 2, + REG_SEG_DS =3D 3, + REG_SEG_FS =3D 4, + REG_SEG_GS =3D 5, + REG_SEG_LDTR =3D 6, + REG_SEG_TR =3D 7, +} x86_reg_segment; + +typedef struct x86_register +{ + union { + struct { + uint64_t rrx; // full 64 bit + }; + struct { + uint32_t erx; // low 32 bit part + uint32_t hi32_unused1; + }; + struct { + uint16_t rx; // low 16 bit part + uint16_t hi16_unused1; + uint32_t hi32_unused2; + }; + struct { + uint8_t lx; // low 8 bit part + uint8_t hx; // high 8 bit + uint16_t hi16_unused2; + uint32_t hi32_unused3; + }; + }; +} __attribute__ ((__packed__)) x86_register; + +typedef enum x86_rflags { + RFLAGS_CF =3D (1L << 0), + RFLAGS_PF =3D (1L << 2), + RFLAGS_AF =3D (1L << 4), + RFLAGS_ZF =3D (1L << 6), + RFLAGS_SF =3D (1L << 7), + RFLAGS_TF =3D (1L << 8), + RFLAGS_IF =3D (1L << 9), + RFLAGS_DF =3D (1L << 10), + RFLAGS_OF =3D (1L << 11), + RFLAGS_IOPL =3D (3L << 12), + RFLAGS_NT =3D (1L << 14), + RFLAGS_RF =3D (1L << 16), + 
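The linear_addr_size() helper above only masks the effective address down to the selected address size before adding the segment base; a minimal standalone illustration (not part of the patch, values invented):

    /* Illustrative only: 16/32-bit truncation applied by linear_addr_size(). */
    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t seg_base = 0x10000;     /* hypothetical segment base */
        uint64_t addr     = 0xdeadbeef;  /* effective address         */

        assert(seg_base + (uint16_t)addr == 0x1beefULL);              /* size == 2 */
        assert(seg_base + (uint32_t)addr == 0xdeadbeefULL + 0x10000); /* size == 4 */
        return 0;
    }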
RFLAGS_VM =3D (1L << 17), + RFLAGS_AC =3D (1L << 18), + RFLAGS_VIF =3D (1L << 19), + RFLAGS_VIP =3D (1L << 20), + RFLAGS_ID =3D (1L << 21), +} x86_rflags; + +// rflags register +typedef struct x86_reg_flags { + union { + struct { + uint64_t rflags; + }; + struct { + uint32_t eflags; + uint32_t hi32_unused1; + }; + struct { + uint32_t cf:1; + uint32_t unused1:1; + uint32_t pf:1; + uint32_t unused2:1; + uint32_t af:1; + uint32_t unused3:1; + uint32_t zf:1; + uint32_t sf:1; + uint32_t tf:1; + uint32_t ief:1; + uint32_t df:1; + uint32_t of:1; + uint32_t iopl:2; + uint32_t nt:1; + uint32_t unused4:1; + uint32_t rf:1; + uint32_t vm:1; + uint32_t ac:1; + uint32_t vif:1; + uint32_t vip:1; + uint32_t id:1; + uint32_t unused5:10; + uint32_t hi32_unused2; + }; + }; +} __attribute__ ((__packed__)) x86_reg_flags; + +typedef enum x86_reg_efer { + EFER_SCE =3D (1L << 0), + EFER_LME =3D (1L << 8), + EFER_LMA =3D (1L << 10), + EFER_NXE =3D (1L << 11), + EFER_SVME =3D (1L << 12), + EFER_FXSR =3D (1L << 14), +} x86_reg_efer; + +typedef struct x86_efer { + uint64_t efer; +} __attribute__ ((__packed__)) x86_efer; + +typedef enum x86_reg_cr0 { + CR0_PE =3D (1L << 0), + CR0_MP =3D (1L << 1), + CR0_EM =3D (1L << 2), + CR0_TS =3D (1L << 3), + CR0_ET =3D (1L << 4), + CR0_NE =3D (1L << 5), + CR0_WP =3D (1L << 16), + CR0_AM =3D (1L << 18), + CR0_NW =3D (1L << 29), + CR0_CD =3D (1L << 30), + CR0_PG =3D (1L << 31), +} x86_reg_cr0; + +typedef enum x86_reg_cr4 { + CR4_VME =3D (1L << 0), + CR4_PVI =3D (1L << 1), + CR4_TSD =3D (1L << 2), + CR4_DE =3D (1L << 3), + CR4_PSE =3D (1L << 4), + CR4_PAE =3D (1L << 5), + CR4_MSE =3D (1L << 6), + CR4_PGE =3D (1L << 7), + CR4_PCE =3D (1L << 8), + CR4_OSFXSR =3D (1L << 9), + CR4_OSXMMEXCPT =3D (1L << 10), + CR4_VMXE =3D (1L << 13), + CR4_SMXE =3D (1L << 14), + CR4_FSGSBASE =3D (1L << 16), + CR4_PCIDE =3D (1L << 17), + CR4_OSXSAVE =3D (1L << 18), + CR4_SMEP =3D (1L << 20), +} x86_reg_cr4; + +// 16 bit Task State Segment +typedef struct x86_tss_segment16 { + uint16_t link; + uint16_t sp0; + uint16_t ss0; + uint32_t sp1; + uint16_t ss1; + uint32_t sp2; + uint16_t ss2; + uint16_t ip; + uint16_t flags; + uint16_t ax; + uint16_t cx; + uint16_t dx; + uint16_t bx; + uint16_t sp; + uint16_t bp; + uint16_t si; + uint16_t di; + uint16_t es; + uint16_t cs; + uint16_t ss; + uint16_t ds; + uint16_t ldtr; +} __attribute__((packed)) x86_tss_segment16; + +// 32 bit Task State Segment +typedef struct x86_tss_segment32 +{ + uint32_t prev_tss; + uint32_t esp0; + uint32_t ss0; + uint32_t esp1; + uint32_t ss1; + uint32_t esp2; + uint32_t ss2; + uint32_t cr3; + uint32_t eip; + uint32_t eflags; + uint32_t eax; + uint32_t ecx; + uint32_t edx; + uint32_t ebx; + uint32_t esp; + uint32_t ebp; + uint32_t esi; + uint32_t edi; + uint32_t es; + uint32_t cs; + uint32_t ss; + uint32_t ds; + uint32_t fs; + uint32_t gs; + uint32_t ldt; + uint16_t trap; + uint16_t iomap_base; +} __attribute__ ((__packed__)) x86_tss_segment32; + +// 64 bit Task State Segment +typedef struct x86_tss_segment64 +{ + uint32_t unused; + uint64_t rsp0; + uint64_t rsp1; + uint64_t rsp2; + uint64_t unused1; + uint64_t ist1; + uint64_t ist2; + uint64_t ist3; + uint64_t ist4; + uint64_t ist5; + uint64_t ist6; + uint64_t ist7; + uint64_t unused2; + uint16_t unused3; + uint16_t iomap_base; +} __attribute__ ((__packed__)) x86_tss_segment64; + +// segment descriptors +typedef struct x86_segment_descriptor { + uint64_t limit0:16; + uint64_t base0:16; + uint64_t base1:8; + uint64_t type:4; + uint64_t s:1; + uint64_t dpl:2; + uint64_t p:1; + uint64_t 
limit1:4; + uint64_t avl:1; + uint64_t l:1; + uint64_t db:1; + uint64_t g:1; + uint64_t base2:8; +} __attribute__ ((__packed__)) x86_segment_descriptor; + +static inline uint32_t x86_segment_base(x86_segment_descriptor *desc) +{ + return (uint32_t)((desc->base2 << 24) | (desc->base1 << 16) | desc->ba= se0); +} + +static inline void x86_set_segment_base(x86_segment_descriptor *desc, uint= 32_t base) +{ + desc->base2 =3D base >> 24; + desc->base1 =3D (base >> 16) & 0xff; + desc->base0 =3D base & 0xffff; +} + +static inline uint32_t x86_segment_limit(x86_segment_descriptor *desc) +{ + uint32_t limit =3D (uint32_t)((desc->limit1 << 16) | desc->limit0); + if (desc->g) + return (limit << 12) | 0xfff; + return limit; +} + +static inline void x86_set_segment_limit(x86_segment_descriptor *desc, uin= t32_t limit) +{ + desc->limit0 =3D limit & 0xffff; + desc->limit1 =3D limit >> 16; +} + +typedef struct x86_call_gate { + uint64_t offset0:16; + uint64_t selector:16; + uint64_t param_count:4; + uint64_t reserved:3; + uint64_t type:4; + uint64_t dpl:1; + uint64_t p:1; + uint64_t offset1:16; +} __attribute__ ((__packed__)) x86_call_gate; + +static inline uint32_t x86_call_gate_offset(x86_call_gate *gate) +{ + return (uint32_t)((gate->offset1 << 16) | gate->offset0); +} + +#define LDT_SEL 0 +#define GDT_SEL 1 + +typedef struct x68_segment_selector { + union { + uint16_t sel; + struct { + uint16_t rpl:3; + uint16_t ti:1; + uint16_t index:12; + }; + }; +} __attribute__ ((__packed__)) x68_segment_selector; + +// Definition of hvf_x86_state is here +struct hvf_x86_state { + int hlt; + uint64_t init_tsc; + =20 + int interruptable; + uint64_t exp_rip; + uint64_t fetch_rip; + uint64_t rip; + struct x86_register regs[16]; + struct x86_reg_flags rflags; + struct lazy_flags lflags; + struct x86_efer efer; + uint8_t mmio_buf[4096]; + uint8_t* apic_page; +}; + +/* +* hvf xsave area +*/ +struct hvf_xsave_buf { + uint32_t data[1024]; +}; + +// useful register access macros +#define RIP(cpu) (cpu->hvf_x86->rip) +#define EIP(cpu) ((uint32_t)cpu->hvf_x86->rip) +#define RFLAGS(cpu) (cpu->hvf_x86->rflags.rflags) +#define EFLAGS(cpu) (cpu->hvf_x86->rflags.eflags) + +#define RRX(cpu, reg) (cpu->hvf_x86->regs[reg].rrx) +#define RAX(cpu) RRX(cpu, REG_RAX) +#define RCX(cpu) RRX(cpu, REG_RCX) +#define RDX(cpu) RRX(cpu, REG_RDX) +#define RBX(cpu) RRX(cpu, REG_RBX) +#define RSP(cpu) RRX(cpu, REG_RSP) +#define RBP(cpu) RRX(cpu, REG_RBP) +#define RSI(cpu) RRX(cpu, REG_RSI) +#define RDI(cpu) RRX(cpu, REG_RDI) +#define R8(cpu) RRX(cpu, REG_R8) +#define R9(cpu) RRX(cpu, REG_R9) +#define R10(cpu) RRX(cpu, REG_R10) +#define R11(cpu) RRX(cpu, REG_R11) +#define R12(cpu) RRX(cpu, REG_R12) +#define R13(cpu) RRX(cpu, REG_R13) +#define R14(cpu) RRX(cpu, REG_R14) +#define R15(cpu) RRX(cpu, REG_R15) + +#define ERX(cpu, reg) (cpu->hvf_x86->regs[reg].erx) +#define EAX(cpu) ERX(cpu, REG_RAX) +#define ECX(cpu) ERX(cpu, REG_RCX) +#define EDX(cpu) ERX(cpu, REG_RDX) +#define EBX(cpu) ERX(cpu, REG_RBX) +#define ESP(cpu) ERX(cpu, REG_RSP) +#define EBP(cpu) ERX(cpu, REG_RBP) +#define ESI(cpu) ERX(cpu, REG_RSI) +#define EDI(cpu) ERX(cpu, REG_RDI) + +#define RX(cpu, reg) (cpu->hvf_x86->regs[reg].rx) +#define AX(cpu) RX(cpu, REG_RAX) +#define CX(cpu) RX(cpu, REG_RCX) +#define DX(cpu) RX(cpu, REG_RDX) +#define BP(cpu) RX(cpu, REG_RBP) +#define SP(cpu) RX(cpu, REG_RSP) +#define BX(cpu) RX(cpu, REG_RBX) +#define SI(cpu) RX(cpu, REG_RSI) +#define DI(cpu) RX(cpu, REG_RDI) + +#define RL(cpu, reg) (cpu->hvf_x86->regs[reg].lx) +#define AL(cpu) RL(cpu, REG_RAX) 
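The accessor macros above rely on the x86_register union aliasing a single 64-bit GPR at several widths. A standalone sketch of that aliasing follows; it is illustrative only, not part of the patch, and assumes a little-endian host (which x86 is) plus C11 anonymous structs as already used in this header.

    /* Illustrative only: RAX/EAX/AX/AL/AH aliasing as modelled by x86_register. */
    #include <assert.h>
    #include <stdint.h>

    typedef union demo_reg {
        uint64_t rrx;                    /* RAX              */
        uint32_t erx;                    /* EAX, low 32 bits */
        uint16_t rx;                     /* AX,  low 16 bits */
        struct { uint8_t lx, hx; };      /* AL and AH        */
    } demo_reg;

    int main(void)
    {
        demo_reg r = { .rrx = 0x1122334455667788ULL };
        assert(r.erx == 0x55667788);
        assert(r.rx  == 0x7788);
        assert(r.lx  == 0x88 && r.hx == 0x77);
        return 0;
    }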
+#define CL(cpu) RL(cpu, REG_RCX) +#define DL(cpu) RL(cpu, REG_RDX) +#define BL(cpu) RL(cpu, REG_RBX) + +#define RH(cpu, reg) (cpu->hvf_x86->regs[reg].hx) +#define AH(cpu) RH(cpu, REG_RAX) +#define CH(cpu) RH(cpu, REG_RCX) +#define DH(cpu) RH(cpu, REG_RDX) +#define BH(cpu) RH(cpu, REG_RBX) + +// deal with GDT/LDT descriptors in memory +bool x86_read_segment_descriptor(struct CPUState *cpu, struct x86_segment_= descriptor *desc, x68_segment_selector sel); +bool x86_write_segment_descriptor(struct CPUState *cpu, struct x86_segment= _descriptor *desc, x68_segment_selector sel); + +bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_de= sc, int gate); + +// helpers +bool x86_is_protected(struct CPUState *cpu); +bool x86_is_real(struct CPUState *cpu); +bool x86_is_v8086(struct CPUState *cpu); +bool x86_is_long_mode(struct CPUState *cpu); +bool x86_is_long64_mode(struct CPUState *cpu); +bool x86_is_paging_mode(struct CPUState *cpu); +bool x86_is_pae_enabled(struct CPUState *cpu); + +addr_t linear_addr(struct CPUState *cpu, addr_t addr, x86_reg_segment seg); +addr_t linear_addr_size(struct CPUState *cpu, addr_t addr, int size, x86_r= eg_segment seg); +addr_t linear_rip(struct CPUState *cpu, addr_t rip); + +static inline uint64_t rdtscp(void) +{ + uint64_t tsc; + __asm__ __volatile__("rdtscp; " // serializing read of tsc + "shl $32,%%rdx; " // shift higher 32 bits stored= in rdx up + "or %%rdx,%%rax" // and or onto rax + : "=3Da"(tsc) // output to tsc variable + : + : "%rcx", "%rdx"); // rcx and rdx are clobbered + =20 + return tsc; +} + diff --git a/target/i386/hvf-utils/x86_cpuid.c b/target/i386/hvf-utils/x86_= cpuid.c new file mode 100644 index 0000000000..e496cf001c --- /dev/null +++ b/target/i386/hvf-utils/x86_cpuid.c @@ -0,0 +1,270 @@ +/* + * i386 CPUID helper functions + * + * Copyright (c) 2003 Fabrice Bellard + * Copyright (c) 2017 Google Inc. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, see . 
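get_cpuid_func() below leans on a host_cpuid() helper that is not part of this file. Purely as an illustration of what such a helper does (an assumption-laden sketch, not the patch's implementation), on GCC/Clang it could be written with the compiler's <cpuid.h>:

    /* Illustrative sketch only: raw host CPUID via the compiler's cpuid.h. */
    #include <cpuid.h>
    #include <stdint.h>

    static void host_cpuid_sketch(uint32_t func, uint32_t cnt, uint32_t *eax,
                                  uint32_t *ebx, uint32_t *ecx, uint32_t *edx)
    {
        uint32_t a, b, c, d;
        __cpuid_count(func, cnt, a, b, c, d);   /* executes CPUID on the host */
        *eax = a; *ebx = b; *ecx = c; *edx = d;
    }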
+ * + * cpuid + */ + +#include "qemu/osdep.h" +#include "x86_cpuid.h" +#include "x86.h" +#include "vmx.h" + +#define PPRO_FEATURES (CPUID_FP87 | CPUID_DE | CPUID_PSE | CPUID_TSC | \ + CPUID_MSR | CPUID_MCE | CPUID_CX8 | CPUID_PGE | CPUID_CMOV | \ + CPUID_PAT | CPUID_FXSR | CPUID_MMX | CPUID_SSE | CPUID_SSE2 | \ + CPUID_PAE | CPUID_SEP | CPUID_APIC) + +struct x86_cpuid builtin_cpus[] =3D { + { + .name =3D "vmx32", + .vendor1 =3D CPUID_VENDOR_INTEL_1, + .vendor2 =3D CPUID_VENDOR_INTEL_2, + .vendor3 =3D CPUID_VENDOR_INTEL_3, + .level =3D 4, + .family =3D 6, + .model =3D 3, + .stepping =3D 3, + .features =3D PPRO_FEATURES, + .ext_features =3D /*CPUID_EXT_SSE3 |*/ CPUID_EXT_POPCNT, CPUID_MTR= R | CPUID_CLFLUSH, + CPUID_PSE36, + .ext2_features =3D CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2= _NX, + .ext3_features =3D 0,//CPUID_EXT3_LAHF_LM, + .xlevel =3D 0x80000004, + .model_id =3D "vmx32", + }, + { + .name =3D "core2duo", + .vendor1 =3D CPUID_VENDOR_INTEL_1, + .vendor2 =3D CPUID_VENDOR_INTEL_2, + .vendor3 =3D CPUID_VENDOR_INTEL_3, + .level =3D 10, + .family =3D 6, + .model =3D 15, + .stepping =3D 11, + .features =3D PPRO_FEATURES | + CPUID_MTRR | CPUID_CLFLUSH | CPUID_MCA | + CPUID_PSE36 | CPUID_VME | CPUID_DTS | CPUID_ACPI | CPUID_SS | + CPUID_HT | CPUID_TM | CPUID_PBE, + .ext_features =3D CPUID_EXT_SSE3 | CPUID_EXT_SSSE3 |=20 + CPUID_EXT_DTES64 | CPUID_EXT_DSCPL | + CPUID_EXT_CX16 | CPUID_EXT_XTPR | CPUID_EXT_PDCM | CPUID_EXT_HYPER= VISOR, + .ext2_features =3D CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2= _NX, + .ext3_features =3D CPUID_EXT3_LAHF_LM, + .xlevel =3D 0x80000008, + .model_id =3D "Intel(R) Core(TM)2 Duo GETCPU T7700 @ 2.40GHz", + }, + { + .name =3D "vmX", + .vendor1 =3D CPUID_VENDOR_INTEL_1, + .vendor2 =3D CPUID_VENDOR_INTEL_2, + .vendor3 =3D CPUID_VENDOR_INTEL_3, + .level =3D 0xd, + .family =3D 6, + .model =3D 15, + .stepping =3D 11, + .features =3D PPRO_FEATURES | + CPUID_MTRR | CPUID_CLFLUSH | CPUID_MCA | + CPUID_PSE36 | CPUID_VME | CPUID_DTS | CPUID_ACPI | CPUID_SS | + CPUID_HT | CPUID_TM | CPUID_PBE, + .ext_features =3D CPUID_EXT_SSE3 | CPUID_EXT_SSSE3 | + CPUID_EXT_DTES64 | CPUID_EXT_DSCPL | + CPUID_EXT_CX16 | CPUID_EXT_XTPR | CPUID_EXT_PDCM | CPUID_EXT_HYPER= VISOR, + .ext2_features =3D CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2= _NX, + .ext3_features =3D CPUID_EXT3_LAHF_LM, + .xlevel =3D 0x80000008, + .model_id =3D "Common vmX processor", + }, +}; + +static struct x86_cpuid *_cpuid =3D NULL; + +void init_cpuid(struct CPUState* cpu) +{ + _cpuid =3D &builtin_cpus[2]; // core2duo +} + +void get_cpuid_func(struct CPUState* cpu, int func, int cnt, uint32_t *eax= , uint32_t *ebx, uint32_t *ecx, uint32_t *edx) +{ + uint32_t h_rax, h_rbx, h_rcx, h_rdx; + host_cpuid(func, cnt, &h_rax, &h_rbx, &h_rcx, &h_rdx); + uint32_t apic_id =3D X86_CPU(cpu)->apic_id; + + + *eax =3D *ebx =3D *ecx =3D *edx =3D 0; + switch(func) { + case 0: + *eax =3D _cpuid->level; + *ebx =3D _cpuid->vendor1; + *edx =3D _cpuid->vendor2; + *ecx =3D _cpuid->vendor3; + break; + case 1: + *eax =3D h_rax;//_cpuid->stepping | (_cpuid->model << 3) | (_c= puid->family << 6); + *ebx =3D (apic_id << 24) | (h_rbx & 0x00ffffff); + *ecx =3D h_rcx; + *edx =3D h_rdx; + + if (cpu->nr_cores * cpu->nr_threads > 1) { + *ebx |=3D (cpu->nr_cores * cpu->nr_threads) << 16; + *edx |=3D 1 << 28; /* Enable Hyper-Threading */ + } + + *ecx =3D *ecx & ~(CPUID_EXT_OSXSAVE | CPUID_EXT_MONITOR | CPUI= D_EXT_X2APIC | + CPUID_EXT_VMX | CPUID_EXT_TSC_DEADLINE_TIMER | CPU= ID_EXT_TM2 | CPUID_EXT_PCID | + CPUID_EXT_EST | CPUID_EXT_SSE42 
| CPUID_EXT_SSE41); + *ecx |=3D CPUID_EXT_HYPERVISOR; + break; + case 2: + /* cache info: needed for Pentium Pro compatibility */ + *eax =3D h_rax; + *ebx =3D h_rbx; + *ecx =3D h_rcx; + *edx =3D h_rdx; + break; + case 4: + /* cache info: needed for Core compatibility */ + *eax =3D h_rax; + *ebx =3D h_rbx; + *ecx =3D h_rcx; + *edx =3D h_rdx; + break; + case 5: + /* mwait info: needed for Core compatibility */ + *eax =3D h_rax; + *ebx =3D h_rbx; + *ecx =3D h_rcx; + *edx =3D h_rdx; + break; + case 6: + /* Thermal and Power Leaf */ + *eax =3D 0; + *ebx =3D 0; + *ecx =3D 0; + *edx =3D 0; + break; + case 7: + *eax =3D h_rax; + *ebx =3D h_rbx & ~(CPUID_7_0_EBX_AVX512F | CPUID_7_0_EBX_AVX51= 2PF | CPUID_7_0_EBX_AVX512ER | CPUID_7_0_EBX_AVX512CD | + CPUID_7_0_EBX_AVX512BW | CPUID_7_0_EBX_AVX512= VL | CPUID_7_0_EBX_MPX | CPUID_7_0_EBX_INVPCID); + *ecx =3D h_rcx & ~(CPUID_7_0_ECX_AVX512BMI); + *edx =3D h_rdx; + break; + case 9: + /* Direct Cache Access Information Leaf */ + *eax =3D h_rax; + *ebx =3D h_rbx; + *ecx =3D h_rcx; + *edx =3D h_rdx; + break; + case 0xA: + /* Architectural Performance Monitoring Leaf */ + *eax =3D 0; + *ebx =3D 0; + *ecx =3D 0; + *edx =3D 0; + break; + case 0xB: + /* CPU Topology Leaf */ + *eax =3D 0; + *ebx =3D 0; /* Means that we don't support this leaf */ + *ecx =3D 0; + *edx =3D 0; + break; + case 0xD: + *eax =3D h_rax; + if (!cnt) + *eax &=3D (XSTATE_FP_MASK | XSTATE_SSE_MASK | XSTATE_YMM_M= ASK); + if (1 =3D=3D cnt) + *eax &=3D (CPUID_XSAVE_XSAVEOPT | CPUID_XSAVE_XSAVEC); + *ebx =3D h_rbx; + *ecx =3D h_rcx; + *edx =3D h_rdx; + break; + case 0x80000000: + *eax =3D _cpuid->xlevel; + *ebx =3D _cpuid->vendor1; + *edx =3D _cpuid->vendor2; + *ecx =3D _cpuid->vendor3; + break; + case 0x80000001: + *eax =3D h_rax;//_cpuid->stepping | (_cpuid->model << 3) | (_c= puid->family << 6); + *ebx =3D 0; + *ecx =3D _cpuid->ext3_features & h_rcx; + *edx =3D _cpuid->ext2_features & h_rdx; + break; + case 0x80000002: + case 0x80000003: + case 0x80000004: + *eax =3D h_rax; + *ebx =3D h_rbx; + *ecx =3D h_rcx; + *edx =3D h_rdx; + break; + case 0x80000005: + /* cache info (L1 cache) */ + *eax =3D h_rax; + *ebx =3D h_rbx; + *ecx =3D h_rcx; + *edx =3D h_rdx; + break; + case 0x80000006: + /* cache info (L2 cache) */ + *eax =3D h_rax; + *ebx =3D h_rbx; + *ecx =3D h_rcx; + *edx =3D h_rdx; + break; + case 0x80000007: + *eax =3D 0; + *ebx =3D 0; + *ecx =3D 0; + *edx =3D 0; /* Note - We disable invariant TSC (bit 8) in pu= rpose */ + break; + case 0x80000008: + /* virtual & phys address size in low 2 bytes. */ + *eax =3D h_rax; + *ebx =3D 0; + *ecx =3D 0; + *edx =3D 0; + break; + case 0x8000000A: + *eax =3D 0; + *ebx =3D 0; + *ecx =3D 0; + *edx =3D 0; + break; + case 0x80000019: + *eax =3D h_rax; + *ebx =3D h_rbx; + *ecx =3D 0; + *edx =3D 0; + case 0xC0000000: + *eax =3D _cpuid->xlevel2; + *ebx =3D 0; + *ecx =3D 0; + *edx =3D 0; + break; + default: + *eax =3D 0; + *ebx =3D 0; + *ecx =3D 0; + *edx =3D 0; + break; + } +} diff --git a/target/i386/hvf-utils/x86_cpuid.h b/target/i386/hvf-utils/x86_= cpuid.h new file mode 100644 index 0000000000..02f2f115b0 --- /dev/null +++ b/target/i386/hvf-utils/x86_cpuid.h @@ -0,0 +1,51 @@ +/* + * Copyright (C) 2016 Veertu Inc, + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 or + * (at your option) version 3 of the License. 
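One detail of the leaf-1 handling above that is easy to miss: EBX is repacked so that bits 31:24 carry the vCPU's APIC ID and, for SMP guests, bits 23:16 carry the logical processor count. A tiny standalone check of that packing (illustrative only, not part of the patch, values invented):

    /* Illustrative only: CPUID.1:EBX packing as done in get_cpuid_func(). */
    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t host_ebx = 0x00000800;  /* hypothetical host value */
        uint32_t apic_id  = 3, ncpus = 4;

        uint32_t ebx = (apic_id << 24) | (host_ebx & 0x00ffffff);
        ebx |= ncpus << 16;              /* only when nr_cores * nr_threads > 1 */

        assert((ebx >> 24) == apic_id);
        assert(((ebx >> 16) & 0xff) == ncpus);
        return 0;
    }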
+ * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see . + */ +#ifndef __CPUID_H__ +#define __CPUID_H__ + +#include +#include +#include +#include +#include "qemu-common.h" +#include "x86_flags.h" + +struct x86_cpuid { + const char *name; + uint32_t level; + uint32_t vendor1, vendor2, vendor3; + int family; + int model; + int stepping; + int tsc_khz; + uint32_t features, ext_features, ext2_features, ext3_features; + uint32_t kvm_features, svm_features; + uint32_t xlevel; + char model_id[48]; + int vendor_override; + uint32_t flags; + uint32_t xlevel2; + uint32_t cpuid_7_0_ebx_features; +}; + +struct CPUState; + +void init_cpuid(struct CPUState* cpu); +void get_cpuid_func(struct CPUState *cpu, int func, int cnt, uint32_t *eax= , uint32_t *ebx, uint32_t *ecx, uint32_t *edx); + +#endif /* __CPUID_H__ */ + diff --git a/target/i386/hvf-utils/x86_decode.c b/target/i386/hvf-utils/x86= _decode.c new file mode 100644 index 0000000000..b4d8e22449 --- /dev/null +++ b/target/i386/hvf-utils/x86_decode.c @@ -0,0 +1,1659 @@ +/* + * Copyright (C) 2016 Veertu Inc, + * Copyright (C) 2017 Google Inc, + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 or + * (at your option) version 3 of the License. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see . 
+ */ + +#include "qemu/osdep.h" + +#include "x86_decode.h" +#include "string.h" +#include "vmx.h" +#include "x86_gen.h" +#include "x86_mmu.h" +#include "x86_descr.h" + +#define OPCODE_ESCAPE 0xf + +static void decode_invalid(CPUState *cpu, struct x86_decode *decode) +{ + printf("%llx: failed to decode instruction ", cpu->hvf_x86->fetch_rip = - decode->len); + for (int i =3D 0; i < decode->opcode_len; i++) + printf("%x ", decode->opcode[i]); + printf("\n"); + VM_PANIC("decoder failed\n"); +} + +uint64_t sign(uint64_t val, int size) +{ + switch (size) { + case 1: + val =3D (int8_t)val; + break; + case 2: + val =3D (int16_t)val; + break; + case 4: + val =3D (int32_t)val; + break; + case 8: + val =3D (int64_t)val; + break; + default: + VM_PANIC_EX("%s invalid size %d\n", __FUNCTION__, size); + break; + } + return val; +} + +static inline uint64_t decode_bytes(CPUState *cpu, struct x86_decode *deco= de, int size) +{ + addr_t val =3D 0; + =20 + switch (size) { + case 1: + case 2: + case 4: + case 8: + break; + default: + VM_PANIC_EX("%s invalid size %d\n", __FUNCTION__, size); + break; + } + addr_t va =3D linear_rip(cpu, RIP(cpu)) + decode->len; + vmx_read_mem(cpu, &val, va, size); + decode->len +=3D size; + =20 + return val; +} + +static inline uint8_t decode_byte(CPUState *cpu, struct x86_decode *decode) +{ + return (uint8_t)decode_bytes(cpu, decode, 1); +} + +static inline uint16_t decode_word(CPUState *cpu, struct x86_decode *decod= e) +{ + return (uint16_t)decode_bytes(cpu, decode, 2); +} + +static inline uint32_t decode_dword(CPUState *cpu, struct x86_decode *deco= de) +{ + return (uint32_t)decode_bytes(cpu, decode, 4); +} + +static inline uint64_t decode_qword(CPUState *cpu, struct x86_decode *deco= de) +{ + return decode_bytes(cpu, decode, 8); +} + +static void decode_modrm_rm(CPUState *cpu, struct x86_decode *decode, stru= ct x86_decode_op *op) +{ + op->type =3D X86_VAR_RM; +} + +static void decode_modrm_reg(CPUState *cpu, struct x86_decode *decode, str= uct x86_decode_op *op) +{ + op->type =3D X86_VAR_REG; + op->reg =3D decode->modrm.reg; + op->ptr =3D get_reg_ref(cpu, op->reg, decode->rex.r, decode->operand_s= ize); +} + +static void decode_rax(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op) +{ + op->type =3D X86_VAR_REG; + op->reg =3D REG_RAX; + op->ptr =3D get_reg_ref(cpu, op->reg, 0, decode->operand_size); +} + +static inline void decode_immediate(CPUState *cpu, struct x86_decode *deco= de, struct x86_decode_op *var, int size) +{ + var->type =3D X86_VAR_IMMEDIATE; + var->size =3D size; + switch (size) { + case 1: + var->val =3D decode_byte(cpu, decode); + break; + case 2: + var->val =3D decode_word(cpu, decode); + break; + case 4: + var->val =3D decode_dword(cpu, decode); + break; + case 8: + var->val =3D decode_qword(cpu, decode); + break; + default: + VM_PANIC_EX("bad size %d\n", size); + } +} + +static void decode_imm8(CPUState *cpu, struct x86_decode *decode, struct x= 86_decode_op *op) +{ + decode_immediate(cpu, decode, op, 1); + op->type =3D X86_VAR_IMMEDIATE; +} + +static void decode_imm8_signed(CPUState *cpu, struct x86_decode *decode, s= truct x86_decode_op *op) +{ + decode_immediate(cpu, decode, op, 1); + op->val =3D sign(op->val, 1); + op->type =3D X86_VAR_IMMEDIATE; +} + +static void decode_imm16(CPUState *cpu, struct x86_decode *decode, struct = x86_decode_op *op) +{ + decode_immediate(cpu, decode, op, 2); + op->type =3D X86_VAR_IMMEDIATE; +} + + +static void decode_imm(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op) +{ + if 
(8 =3D=3D decode->operand_size) { + decode_immediate(cpu, decode, op, 4); + op->val =3D sign(op->val, decode->operand_size); + } else { + decode_immediate(cpu, decode, op, decode->operand_size); + } + op->type =3D X86_VAR_IMMEDIATE; +} + +static void decode_imm_signed(CPUState *cpu, struct x86_decode *decode, st= ruct x86_decode_op *op) +{ + decode_immediate(cpu, decode, op, decode->operand_size); + op->val =3D sign(op->val, decode->operand_size); + op->type =3D X86_VAR_IMMEDIATE; +} + +static void decode_imm_1(CPUState *cpu, struct x86_decode *decode, struct = x86_decode_op *op) +{ + op->type =3D X86_VAR_IMMEDIATE; + op->val =3D 1; +} + +static void decode_imm_0(CPUState *cpu, struct x86_decode *decode, struct = x86_decode_op *op) +{ + op->type =3D X86_VAR_IMMEDIATE; + op->val =3D 0; +} + + +static void decode_pushseg(CPUState *cpu, struct x86_decode *decode) +{ + uint8_t op =3D (decode->opcode_len > 1) ? decode->opcode[1] : decode->= opcode[0]; + =20 + decode->op[0].type =3D X86_VAR_REG; + switch (op) { + case 0xe: + decode->op[0].reg =3D REG_SEG_CS; + break; + case 0x16: + decode->op[0].reg =3D REG_SEG_SS; + break; + case 0x1e: + decode->op[0].reg =3D REG_SEG_DS; + break; + case 0x06: + decode->op[0].reg =3D REG_SEG_ES; + break; + case 0xa0: + decode->op[0].reg =3D REG_SEG_FS; + break; + case 0xa8: + decode->op[0].reg =3D REG_SEG_GS; + break; + } +} + +static void decode_popseg(CPUState *cpu, struct x86_decode *decode) +{ + uint8_t op =3D (decode->opcode_len > 1) ? decode->opcode[1] : decode->= opcode[0]; + =20 + decode->op[0].type =3D X86_VAR_REG; + switch (op) { + case 0xf: + decode->op[0].reg =3D REG_SEG_CS; + break; + case 0x17: + decode->op[0].reg =3D REG_SEG_SS; + break; + case 0x1f: + decode->op[0].reg =3D REG_SEG_DS; + break; + case 0x07: + decode->op[0].reg =3D REG_SEG_ES; + break; + case 0xa1: + decode->op[0].reg =3D REG_SEG_FS; + break; + case 0xa9: + decode->op[0].reg =3D REG_SEG_GS; + break; + } +} + +static void decode_incgroup(CPUState *cpu, struct x86_decode *decode) +{ + decode->op[0].type =3D X86_VAR_REG; + decode->op[0].reg =3D decode->opcode[0] - 0x40; + decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, decode->operand_size); +} + +static void decode_decgroup(CPUState *cpu, struct x86_decode *decode) +{ + decode->op[0].type =3D X86_VAR_REG; + decode->op[0].reg =3D decode->opcode[0] - 0x48; + decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, decode->operand_size); +} + +static void decode_incgroup2(CPUState *cpu, struct x86_decode *decode) +{ + if (!decode->modrm.reg) + decode->cmd =3D X86_DECODE_CMD_INC; + else if (1 =3D=3D decode->modrm.reg) + decode->cmd =3D X86_DECODE_CMD_DEC; +} + +static void decode_pushgroup(CPUState *cpu, struct x86_decode *decode) +{ + decode->op[0].type =3D X86_VAR_REG; + decode->op[0].reg =3D decode->opcode[0] - 0x50; + decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, decode->operand_size); +} + +static void decode_popgroup(CPUState *cpu, struct x86_decode *decode) +{ + decode->op[0].type =3D X86_VAR_REG; + decode->op[0].reg =3D decode->opcode[0] - 0x58; + decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, decode->operand_size); +} + +static void decode_jxx(CPUState *cpu, struct x86_decode *decode) +{ + decode->displacement =3D decode_bytes(cpu, decode, decode->operand_siz= e); + decode->displacement_size =3D decode->operand_size; +} + +static void decode_farjmp(CPUState *cpu, struct x86_decode *decode) +{ + decode->op[0].type =3D 
X86_VAR_IMMEDIATE; + decode->op[0].val =3D decode_bytes(cpu, decode, decode->operand_size); + decode->displacement =3D decode_word(cpu, decode); +} + +static void decode_addgroup(CPUState *cpu, struct x86_decode *decode) +{ + enum x86_decode_cmd group[] =3D { + X86_DECODE_CMD_ADD, + X86_DECODE_CMD_OR, + X86_DECODE_CMD_ADC, + X86_DECODE_CMD_SBB, + X86_DECODE_CMD_AND, + X86_DECODE_CMD_SUB, + X86_DECODE_CMD_XOR, + X86_DECODE_CMD_CMP + }; + decode->cmd =3D group[decode->modrm.reg]; +} + +static void decode_rotgroup(CPUState *cpu, struct x86_decode *decode) +{ + enum x86_decode_cmd group[] =3D { + X86_DECODE_CMD_ROL, + X86_DECODE_CMD_ROR, + X86_DECODE_CMD_RCL, + X86_DECODE_CMD_RCR, + X86_DECODE_CMD_SHL, + X86_DECODE_CMD_SHR, + X86_DECODE_CMD_SHL, + X86_DECODE_CMD_SAR + }; + decode->cmd =3D group[decode->modrm.reg]; +} + +static void decode_f7group(CPUState *cpu, struct x86_decode *decode) +{ + enum x86_decode_cmd group[] =3D { + X86_DECODE_CMD_TST, + X86_DECODE_CMD_TST, + X86_DECODE_CMD_NOT, + X86_DECODE_CMD_NEG, + X86_DECODE_CMD_MUL, + X86_DECODE_CMD_IMUL_1, + X86_DECODE_CMD_DIV, + X86_DECODE_CMD_IDIV + }; + decode->cmd =3D group[decode->modrm.reg]; + decode_modrm_rm(cpu, decode, &decode->op[0]); + + switch (decode->modrm.reg) { + case 0: + case 1: + decode_imm(cpu, decode, &decode->op[1]); + break; + case 2: + break; + case 3: + decode->op[1].type =3D X86_VAR_IMMEDIATE; + decode->op[1].val =3D 0; + break; + default: + break; + } +} + +static void decode_xchgroup(CPUState *cpu, struct x86_decode *decode) +{ + decode->op[0].type =3D X86_VAR_REG; + decode->op[0].reg =3D decode->opcode[0] - 0x90; + decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, decode->operand_size); +} + +static void decode_movgroup(CPUState *cpu, struct x86_decode *decode) +{ + decode->op[0].type =3D X86_VAR_REG; + decode->op[0].reg =3D decode->opcode[0] - 0xb8; + decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, decode->operand_size); + decode_immediate(cpu, decode, &decode->op[1], decode->operand_size); +} + +static void fetch_moffs(CPUState *cpu, struct x86_decode *decode, struct x= 86_decode_op *op) +{ + op->type =3D X86_VAR_OFFSET; + op->ptr =3D decode_bytes(cpu, decode, decode->addressing_size); +} + +static void decode_movgroup8(CPUState *cpu, struct x86_decode *decode) +{ + decode->op[0].type =3D X86_VAR_REG; + decode->op[0].reg =3D decode->opcode[0] - 0xb0; + decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, decode->operand_size); + decode_immediate(cpu, decode, &decode->op[1], decode->operand_size); +} + +static void decode_rcx(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op) +{ + op->type =3D X86_VAR_REG; + op->reg =3D REG_RCX; + op->ptr =3D get_reg_ref(cpu, op->reg, decode->rex.b, decode->operand_s= ize); +} + +struct decode_tbl { + uint8_t opcode; + enum x86_decode_cmd cmd; + uint8_t operand_size; + bool is_modrm; + void (*decode_op1)(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op1); + void (*decode_op2)(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op2); + void (*decode_op3)(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op3); + void (*decode_op4)(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op4); + void (*decode_postfix)(CPUState *cpu, struct x86_decode *decode); + addr_t flags_mask; +}; + +struct decode_x87_tbl { + uint8_t opcode; + uint8_t modrm_reg; + uint8_t modrm_mod; + enum x86_decode_cmd cmd; + uint8_t operand_size; + bool rev; 
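    /*
     * Illustrative note, not part of the patch: decode_x87_ins() below picks an
     * entry from _decode_tbl3[] with
     *     index = ((opcode[0] & 0xf) << 4) | (mode << 3) | modrm.reg,
     *     where mode = (modrm.mod == 3) ? 1 : 0.
     * For example, opcode 0xd9 with ModRM 0xe0 (mod = 3, reg = 4, i.e. FCHS)
     * gives (0x9 << 4) | (1 << 3) | 4 = 0x9c.
     */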
+ bool pop; + void (*decode_op1)(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op1); + void (*decode_op2)(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op2); + void (*decode_postfix)(CPUState *cpu, struct x86_decode *decode); + addr_t flags_mask; +}; + +struct decode_tbl invl_inst =3D {0x0, 0, 0, false, NULL, NULL, NULL, NULL,= decode_invalid}; + +struct decode_tbl _decode_tbl1[255]; +struct decode_tbl _decode_tbl2[255]; +struct decode_x87_tbl _decode_tbl3[255]; + +static void decode_x87_ins(CPUState *cpu, struct x86_decode *decode) +{ + struct decode_x87_tbl *decoder; + =20 + decode->is_fpu =3D true; + int mode =3D decode->modrm.mod =3D=3D 3 ? 1 : 0; + int index =3D ((decode->opcode[0] & 0xf) << 4) | (mode << 3) | decode-= >modrm.reg; + =20 + decoder =3D &_decode_tbl3[index]; + =20 + decode->cmd =3D decoder->cmd; + if (decoder->operand_size) + decode->operand_size =3D decoder->operand_size; + decode->flags_mask =3D decoder->flags_mask; + decode->fpop_stack =3D decoder->pop; + decode->frev =3D decoder->rev; + =20 + if (decoder->decode_op1) + decoder->decode_op1(cpu, decode, &decode->op[0]); + if (decoder->decode_op2) + decoder->decode_op2(cpu, decode, &decode->op[1]); + if (decoder->decode_postfix) + decoder->decode_postfix(cpu, decode); + =20 + VM_PANIC_ON_EX(!decode->cmd, "x87 opcode %x %x (%x %x) not decoded\n",= decode->opcode[0], decode->modrm.modrm, decoder->modrm_reg, decoder->modrm= _mod); +} + +static void decode_ffgroup(CPUState *cpu, struct x86_decode *decode) +{ + enum x86_decode_cmd group[] =3D { + X86_DECODE_CMD_INC, + X86_DECODE_CMD_DEC, + X86_DECODE_CMD_CALL_NEAR_ABS_INDIRECT, + X86_DECODE_CMD_CALL_FAR_ABS_INDIRECT, + X86_DECODE_CMD_JMP_NEAR_ABS_INDIRECT, + X86_DECODE_CMD_JMP_FAR_ABS_INDIRECT, + X86_DECODE_CMD_PUSH, + X86_DECODE_CMD_INVL, + X86_DECODE_CMD_INVL + }; + decode->cmd =3D group[decode->modrm.reg]; + if (decode->modrm.reg > 2) + decode->flags_mask =3D 0; +} + +static void decode_sldtgroup(CPUState *cpu, struct x86_decode *decode) +{ + enum x86_decode_cmd group[] =3D { + X86_DECODE_CMD_SLDT, + X86_DECODE_CMD_STR, + X86_DECODE_CMD_LLDT, + X86_DECODE_CMD_LTR, + X86_DECODE_CMD_VERR, + X86_DECODE_CMD_VERW, + X86_DECODE_CMD_INVL, + X86_DECODE_CMD_INVL + }; + decode->cmd =3D group[decode->modrm.reg]; + printf("%llx: decode_sldtgroup: %d\n", cpu->hvf_x86->fetch_rip, decode= ->modrm.reg); +} + +static void decode_lidtgroup(CPUState *cpu, struct x86_decode *decode) +{ + enum x86_decode_cmd group[] =3D { + X86_DECODE_CMD_SGDT, + X86_DECODE_CMD_SIDT, + X86_DECODE_CMD_LGDT, + X86_DECODE_CMD_LIDT, + X86_DECODE_CMD_SMSW, + X86_DECODE_CMD_LMSW, + X86_DECODE_CMD_LMSW, + X86_DECODE_CMD_INVLPG + }; + decode->cmd =3D group[decode->modrm.reg]; + if (0xf9 =3D=3D decode->modrm.modrm) { + decode->opcode[decode->len++] =3D decode->modrm.modrm; + decode->cmd =3D X86_DECODE_CMD_RDTSCP; + } +} + +static void decode_btgroup(CPUState *cpu, struct x86_decode *decode) +{ + enum x86_decode_cmd group[] =3D { + X86_DECODE_CMD_INVL, + X86_DECODE_CMD_INVL, + X86_DECODE_CMD_INVL, + X86_DECODE_CMD_INVL, + X86_DECODE_CMD_BT, + X86_DECODE_CMD_BTS, + X86_DECODE_CMD_BTR, + X86_DECODE_CMD_BTC + }; + decode->cmd =3D group[decode->modrm.reg]; +} + +static void decode_x87_general(CPUState *cpu, struct x86_decode *decode) +{ + decode->is_fpu =3D true; +} + +static void decode_x87_modrm_floatp(CPUState *cpu, struct x86_decode *deco= de, struct x86_decode_op *op) +{ + op->type =3D X87_VAR_FLOATP; +} + +static void decode_x87_modrm_intp(CPUState *cpu, struct x86_decode 
*decode= , struct x86_decode_op *op) +{ + op->type =3D X87_VAR_INTP; +} + +static void decode_x87_modrm_bytep(CPUState *cpu, struct x86_decode *decod= e, struct x86_decode_op *op) +{ + op->type =3D X87_VAR_BYTEP; +} + +static void decode_x87_modrm_st0(CPUState *cpu, struct x86_decode *decode,= struct x86_decode_op *op) +{ + op->type =3D X87_VAR_REG; + op->reg =3D 0; +} + +static void decode_decode_x87_modrm_st0(CPUState *cpu, struct x86_decode *= decode, struct x86_decode_op *op) +{ + op->type =3D X87_VAR_REG; + op->reg =3D decode->modrm.modrm & 7; +} + + +static void decode_aegroup(CPUState *cpu, struct x86_decode *decode) +{ + decode->is_fpu =3D true; + switch (decode->modrm.reg) { + case 0: + decode->cmd =3D X86_DECODE_CMD_FXSAVE; + decode_x87_modrm_bytep(cpu, decode, &decode->op[0]); + break; + case 1: + decode_x87_modrm_bytep(cpu, decode, &decode->op[0]); + decode->cmd =3D X86_DECODE_CMD_FXRSTOR; + break; + case 5: + if (decode->modrm.modrm =3D=3D 0xe8) { + decode->cmd =3D X86_DECODE_CMD_LFENCE; + } else { + VM_PANIC("xrstor"); + } + break; + case 6: + VM_PANIC_ON(decode->modrm.modrm !=3D 0xf0); + decode->cmd =3D X86_DECODE_CMD_MFENCE; + break; + case 7: + if (decode->modrm.modrm =3D=3D 0xf8) { + decode->cmd =3D X86_DECODE_CMD_SFENCE; + } else { + decode->cmd =3D X86_DECODE_CMD_CLFLUSH; + } + break; + default: + VM_PANIC_ON_EX(1, "0xae: reg %d\n", decode->modrm.reg); + break; + } +} + +static void decode_bswap(CPUState *cpu, struct x86_decode *decode) +{ + decode->op[0].type =3D X86_VAR_REG; + decode->op[0].reg =3D decode->opcode[1] - 0xc8; + decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, decode->operand_size); +} + +static void decode_d9_4(CPUState *cpu, struct x86_decode *decode) +{ + switch(decode->modrm.modrm) { + case 0xe0: + // FCHS + decode->cmd =3D X86_DECODE_CMD_FCHS; + break; + case 0xe1: + decode->cmd =3D X86_DECODE_CMD_FABS; + break; + case 0xe4: + VM_PANIC_ON_EX(1, "FTST"); + break; + case 0xe5: + // FXAM + decode->cmd =3D X86_DECODE_CMD_FXAM; + break; + default: + VM_PANIC_ON_EX(1, "FLDENV"); + break; + } +} + +static void decode_db_4(CPUState *cpu, struct x86_decode *decode) +{ + switch (decode->modrm.modrm) { + case 0xe0: + VM_PANIC_ON_EX(1, "unhandled FNENI: %x %x\n", decode->opcode[0= ], decode->modrm.modrm); + break; + case 0xe1: + VM_PANIC_ON_EX(1, "unhandled FNDISI: %x %x\n", decode->opcode[= 0], decode->modrm.modrm); + break; + case 0xe2: + VM_PANIC_ON_EX(1, "unhandled FCLEX: %x %x\n", decode->opcode[0= ], decode->modrm.modrm); + break; + case 0xe3: + decode->cmd =3D X86_DECODE_CMD_FNINIT; + break; + case 0xe4: + decode->cmd =3D X86_DECODE_CMD_FNSETPM; + break; + default: + VM_PANIC_ON_EX(1, "unhandled fpu opcode: %x %x\n", decode->opc= ode[0], decode->modrm.modrm); + break; + } +} + + +#define RFLAGS_MASK_NONE 0 +#define RFLAGS_MASK_OSZAPC (RFLAGS_OF | RFLAGS_SF | RFLAGS_ZF | RFLAGS_AF= | RFLAGS_PF | RFLAGS_CF) +#define RFLAGS_MASK_LAHF (RFLAGS_SF | RFLAGS_ZF | RFLAGS_AF | RFLAGS_PF= | RFLAGS_CF) +#define RFLAGS_MASK_CF (RFLAGS_CF) +#define RFLAGS_MASK_IF (RFLAGS_IF) +#define RFLAGS_MASK_TF (RFLAGS_TF) +#define RFLAGS_MASK_DF (RFLAGS_DF) +#define RFLAGS_MASK_ZF (RFLAGS_ZF) + +struct decode_tbl _1op_inst[] =3D +{ + {0x0, X86_DECODE_CMD_ADD, 1, true, decode_modrm_rm, decode_modrm_reg, = NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x1, X86_DECODE_CMD_ADD, 0, true, decode_modrm_rm, decode_modrm_reg, = NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x2, X86_DECODE_CMD_ADD, 1, true, decode_modrm_reg, decode_modrm_rm, = NULL, NULL, NULL, 
RFLAGS_MASK_OSZAPC}, + {0x3, X86_DECODE_CMD_ADD, 0, true, decode_modrm_reg, decode_modrm_rm, = NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x4, X86_DECODE_CMD_ADD, 1, false, decode_rax, decode_imm8, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, + {0x5, X86_DECODE_CMD_ADD, 0, false, decode_rax, decode_imm, NULL, NULL= , NULL, RFLAGS_MASK_OSZAPC}, + {0x6, X86_DECODE_CMD_PUSH_SEG, 0, false, false, NULL, NULL, NULL, deco= de_pushseg, RFLAGS_MASK_NONE}, + {0x7, X86_DECODE_CMD_POP_SEG, 0, false, false, NULL, NULL, NULL, decod= e_popseg, RFLAGS_MASK_NONE}, + {0x8, X86_DECODE_CMD_OR, 1, true, decode_modrm_rm, decode_modrm_reg, N= ULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x9, X86_DECODE_CMD_OR, 0, true, decode_modrm_rm, decode_modrm_reg, N= ULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xa, X86_DECODE_CMD_OR, 1, true, decode_modrm_reg, decode_modrm_rm, N= ULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xb, X86_DECODE_CMD_OR, 0, true, decode_modrm_reg, decode_modrm_rm, N= ULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xc, X86_DECODE_CMD_OR, 1, false, decode_rax, decode_imm8, NULL, NULL= , NULL, RFLAGS_MASK_OSZAPC}, + {0xd, X86_DECODE_CMD_OR, 0, false, decode_rax, decode_imm, NULL, NULL,= NULL, RFLAGS_MASK_OSZAPC}, + =20 + {0xe, X86_DECODE_CMD_PUSH_SEG, 0, false, false, NULL, NULL, NULL, deco= de_pushseg, RFLAGS_MASK_NONE}, + {0xf, X86_DECODE_CMD_POP_SEG, 0, false, false, NULL, NULL, NULL, decod= e_popseg, RFLAGS_MASK_NONE}, + =20 + {0x10, X86_DECODE_CMD_ADC, 1, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x11, X86_DECODE_CMD_ADC, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x12, X86_DECODE_CMD_ADC, 1, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x13, X86_DECODE_CMD_ADC, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x14, X86_DECODE_CMD_ADC, 1, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, + {0x15, X86_DECODE_CMD_ADC, 0, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, + =20 + {0x16, X86_DECODE_CMD_PUSH_SEG, 0, false, false, NULL, NULL, NULL, dec= ode_pushseg, RFLAGS_MASK_NONE}, + {0x17, X86_DECODE_CMD_POP_SEG, 0, false, false, NULL, NULL, NULL, deco= de_popseg, RFLAGS_MASK_NONE}, + =20 + {0x18, X86_DECODE_CMD_SBB, 1, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x19, X86_DECODE_CMD_SBB, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x1a, X86_DECODE_CMD_SBB, 1, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x1b, X86_DECODE_CMD_SBB, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x1c, X86_DECODE_CMD_SBB, 1, false, decode_rax, decode_imm8, NULL, NU= LL, NULL, RFLAGS_MASK_OSZAPC}, + {0x1d, X86_DECODE_CMD_SBB, 0, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, + =20 + {0x1e, X86_DECODE_CMD_PUSH_SEG, 0, false, false, NULL, NULL, NULL, dec= ode_pushseg, RFLAGS_MASK_NONE}, + {0x1f, X86_DECODE_CMD_POP_SEG, 0, false, false, NULL, NULL, NULL, deco= de_popseg, RFLAGS_MASK_NONE}, + =20 + {0x20, X86_DECODE_CMD_AND, 1, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x21, X86_DECODE_CMD_AND, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x22, X86_DECODE_CMD_AND, 1, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x23, X86_DECODE_CMD_AND, 
0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x24, X86_DECODE_CMD_AND, 1, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, + {0x25, X86_DECODE_CMD_AND, 0, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, + {0x28, X86_DECODE_CMD_SUB, 1, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x29, X86_DECODE_CMD_SUB, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x2a, X86_DECODE_CMD_SUB, 1, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x2b, X86_DECODE_CMD_SUB, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x2c, X86_DECODE_CMD_SUB, 1, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, + {0x2d, X86_DECODE_CMD_SUB, 0, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, + {0x2f, X86_DECODE_CMD_DAS, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_OSZAPC}, + {0x30, X86_DECODE_CMD_XOR, 1, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x31, X86_DECODE_CMD_XOR, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x32, X86_DECODE_CMD_XOR, 1, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x33, X86_DECODE_CMD_XOR, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x34, X86_DECODE_CMD_XOR, 1, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, + {0x35, X86_DECODE_CMD_XOR, 0, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, + =20 + {0x38, X86_DECODE_CMD_CMP, 1, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x39, X86_DECODE_CMD_CMP, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x3a, X86_DECODE_CMD_CMP, 1, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x3b, X86_DECODE_CMD_CMP, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x3c, X86_DECODE_CMD_CMP, 1, false, decode_rax, decode_imm8, NULL, NU= LL, NULL, RFLAGS_MASK_OSZAPC}, + {0x3d, X86_DECODE_CMD_CMP, 0, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, + =20 + {0x3f, X86_DECODE_CMD_AAS, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_OSZAPC}, + =20 + {0x40, X86_DECODE_CMD_INC, 0, false, NULL, NULL, NULL, NULL, decode_in= cgroup, RFLAGS_MASK_OSZAPC}, + {0x41, X86_DECODE_CMD_INC, 0, false, NULL, NULL, NULL, NULL, decode_in= cgroup, RFLAGS_MASK_OSZAPC}, + {0x42, X86_DECODE_CMD_INC, 0, false, NULL, NULL, NULL, NULL, decode_in= cgroup, RFLAGS_MASK_OSZAPC}, + {0x43, X86_DECODE_CMD_INC, 0, false, NULL, NULL, NULL, NULL, decode_in= cgroup, RFLAGS_MASK_OSZAPC}, + {0x44, X86_DECODE_CMD_INC, 0, false, NULL, NULL, NULL, NULL, decode_in= cgroup, RFLAGS_MASK_OSZAPC}, + {0x45, X86_DECODE_CMD_INC, 0, false, NULL, NULL, NULL, NULL, decode_in= cgroup, RFLAGS_MASK_OSZAPC}, + {0x46, X86_DECODE_CMD_INC, 0, false, NULL, NULL, NULL, NULL, decode_in= cgroup, RFLAGS_MASK_OSZAPC}, + {0x47, X86_DECODE_CMD_INC, 0, false, NULL, NULL, NULL, NULL, decode_in= cgroup, RFLAGS_MASK_OSZAPC}, + =20 + {0x48, X86_DECODE_CMD_DEC, 0, false, NULL, NULL, NULL, NULL, decode_de= cgroup, RFLAGS_MASK_OSZAPC}, + {0x49, X86_DECODE_CMD_DEC, 0, false, NULL, NULL, NULL, NULL, decode_de= cgroup, RFLAGS_MASK_OSZAPC}, + {0x4a, X86_DECODE_CMD_DEC, 0, false, 
NULL, NULL, NULL, NULL, decode_de= cgroup, RFLAGS_MASK_OSZAPC}, + {0x4b, X86_DECODE_CMD_DEC, 0, false, NULL, NULL, NULL, NULL, decode_de= cgroup, RFLAGS_MASK_OSZAPC}, + {0x4c, X86_DECODE_CMD_DEC, 0, false, NULL, NULL, NULL, NULL, decode_de= cgroup, RFLAGS_MASK_OSZAPC}, + {0x4d, X86_DECODE_CMD_DEC, 0, false, NULL, NULL, NULL, NULL, decode_de= cgroup, RFLAGS_MASK_OSZAPC}, + {0x4e, X86_DECODE_CMD_DEC, 0, false, NULL, NULL, NULL, NULL, decode_de= cgroup, RFLAGS_MASK_OSZAPC}, + {0x4f, X86_DECODE_CMD_DEC, 0, false, NULL, NULL, NULL, NULL, decode_de= cgroup, RFLAGS_MASK_OSZAPC}, + =20 + {0x50, X86_DECODE_CMD_PUSH, 0, false, NULL, NULL, NULL, NULL, decode_p= ushgroup, RFLAGS_MASK_NONE}, + {0x51, X86_DECODE_CMD_PUSH, 0, false, NULL, NULL, NULL, NULL, decode_p= ushgroup, RFLAGS_MASK_NONE}, + {0x52, X86_DECODE_CMD_PUSH, 0, false, NULL, NULL, NULL, NULL, decode_p= ushgroup, RFLAGS_MASK_NONE}, + {0x53, X86_DECODE_CMD_PUSH, 0, false, NULL, NULL, NULL, NULL, decode_p= ushgroup, RFLAGS_MASK_NONE}, + {0x54, X86_DECODE_CMD_PUSH, 0, false, NULL, NULL, NULL, NULL, decode_p= ushgroup, RFLAGS_MASK_NONE}, + {0x55, X86_DECODE_CMD_PUSH, 0, false, NULL, NULL, NULL, NULL, decode_p= ushgroup, RFLAGS_MASK_NONE}, + {0x56, X86_DECODE_CMD_PUSH, 0, false, NULL, NULL, NULL, NULL, decode_p= ushgroup, RFLAGS_MASK_NONE}, + {0x57, X86_DECODE_CMD_PUSH, 0, false, NULL, NULL, NULL, NULL, decode_p= ushgroup, RFLAGS_MASK_NONE}, + =20 + {0x58, X86_DECODE_CMD_POP, 0, false, NULL, NULL, NULL, NULL, decode_po= pgroup, RFLAGS_MASK_NONE}, + {0x59, X86_DECODE_CMD_POP, 0, false, NULL, NULL, NULL, NULL, decode_po= pgroup, RFLAGS_MASK_NONE}, + {0x5a, X86_DECODE_CMD_POP, 0, false, NULL, NULL, NULL, NULL, decode_po= pgroup, RFLAGS_MASK_NONE}, + {0x5b, X86_DECODE_CMD_POP, 0, false, NULL, NULL, NULL, NULL, decode_po= pgroup, RFLAGS_MASK_NONE}, + {0x5c, X86_DECODE_CMD_POP, 0, false, NULL, NULL, NULL, NULL, decode_po= pgroup, RFLAGS_MASK_NONE}, + {0x5d, X86_DECODE_CMD_POP, 0, false, NULL, NULL, NULL, NULL, decode_po= pgroup, RFLAGS_MASK_NONE}, + {0x5e, X86_DECODE_CMD_POP, 0, false, NULL, NULL, NULL, NULL, decode_po= pgroup, RFLAGS_MASK_NONE}, + {0x5f, X86_DECODE_CMD_POP, 0, false, NULL, NULL, NULL, NULL, decode_po= pgroup, RFLAGS_MASK_NONE}, + =20 + {0x60, X86_DECODE_CMD_PUSHA, 0, false, NULL, NULL, NULL, NULL, NULL, R= FLAGS_MASK_NONE}, + {0x61, X86_DECODE_CMD_POPA, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, + =20 + {0x68, X86_DECODE_CMD_PUSH, 0, false, decode_imm, NULL, NULL, NULL, NU= LL, RFLAGS_MASK_NONE}, + {0x6a, X86_DECODE_CMD_PUSH, 0, false, decode_imm8_signed, NULL, NULL, = NULL, NULL, RFLAGS_MASK_NONE}, + {0x69, X86_DECODE_CMD_IMUL_3, 0, true, decode_modrm_reg, decode_modrm_= rm, decode_imm, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x6b, X86_DECODE_CMD_IMUL_3, 0, true, decode_modrm_reg, decode_modrm_= rm, decode_imm8_signed, NULL, NULL, RFLAGS_MASK_OSZAPC}, + =20 + {0x6c, X86_DECODE_CMD_INS, 1, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_NONE}, + {0x6d, X86_DECODE_CMD_INS, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_NONE}, + {0x6e, X86_DECODE_CMD_OUTS, 1, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, + {0x6f, X86_DECODE_CMD_OUTS, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, + =20 + {0x70, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x71, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x72, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x73, 
X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x74, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x75, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x76, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x77, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x78, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x79, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x7a, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x7b, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x7c, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x7d, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x7e, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x7f, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + =20 + {0x80, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, decode_imm8, NUL= L, NULL, decode_addgroup, RFLAGS_MASK_OSZAPC}, + {0x81, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_imm, NULL= , NULL, decode_addgroup, RFLAGS_MASK_OSZAPC}, + {0x82, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, decode_imm8, NUL= L, NULL, decode_addgroup, RFLAGS_MASK_OSZAPC}, + {0x83, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_imm8_sign= ed, NULL, NULL, decode_addgroup, RFLAGS_MASK_OSZAPC}, + {0x84, X86_DECODE_CMD_TST, 1, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x85, X86_DECODE_CMD_TST, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x86, X86_DECODE_CMD_XCHG, 1, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x87, X86_DECODE_CMD_XCHG, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x88, X86_DECODE_CMD_MOV, 1, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x89, X86_DECODE_CMD_MOV, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x8a, X86_DECODE_CMD_MOV, 1, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x8b, X86_DECODE_CMD_MOV, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x8c, X86_DECODE_CMD_MOV_FROM_SEG, 0, true, decode_modrm_rm, decode_m= odrm_reg, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x8d, X86_DECODE_CMD_LEA, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x8e, X86_DECODE_CMD_MOV_TO_SEG, 0, true, decode_modrm_reg, decode_mo= drm_rm, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x8f, X86_DECODE_CMD_POP, 0, true, decode_modrm_rm, NULL, NULL, NULL,= NULL, RFLAGS_MASK_NONE}, + =20 + {0x90, X86_DECODE_CMD_NOP, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_NONE}, + {0x91, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, NULL, NULL, de= code_xchgroup, RFLAGS_MASK_NONE}, + {0x92, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, NULL, NULL, de= code_xchgroup, RFLAGS_MASK_NONE}, + {0x93, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, NULL, NULL, de= code_xchgroup, RFLAGS_MASK_NONE}, + {0x94, X86_DECODE_CMD_XCHG, 0, false, NULL, 
decode_rax, NULL, NULL, de= code_xchgroup, RFLAGS_MASK_NONE}, + {0x95, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, NULL, NULL, de= code_xchgroup, RFLAGS_MASK_NONE}, + {0x96, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, NULL, NULL, de= code_xchgroup, RFLAGS_MASK_NONE}, + {0x97, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, NULL, NULL, de= code_xchgroup, RFLAGS_MASK_NONE}, + =20 + {0x98, X86_DECODE_CMD_CBW, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_NONE}, + {0x99, X86_DECODE_CMD_CWD, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_NONE}, + =20 + {0x9a, X86_DECODE_CMD_CALL_FAR, 0, false, NULL, NULL, NULL, NULL, deco= de_farjmp, RFLAGS_MASK_NONE}, + =20 + {0x9c, X86_DECODE_CMD_PUSHF, 0, false, NULL, NULL, NULL, NULL, NULL, R= FLAGS_MASK_NONE}, + //{0x9d, X86_DECODE_CMD_POPF, 0, false, NULL, NULL, NULL, NULL, NULL, = RFLAGS_MASK_POPF}, + {0x9e, X86_DECODE_CMD_SAHF, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, + {0x9f, X86_DECODE_CMD_LAHF, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_LAHF}, + =20 + {0xa0, X86_DECODE_CMD_MOV, 1, false, decode_rax, fetch_moffs, NULL, NU= LL, NULL, RFLAGS_MASK_NONE}, + {0xa1, X86_DECODE_CMD_MOV, 0, false, decode_rax, fetch_moffs, NULL, NU= LL, NULL, RFLAGS_MASK_NONE}, + {0xa2, X86_DECODE_CMD_MOV, 1, false, fetch_moffs, decode_rax, NULL, NU= LL, NULL, RFLAGS_MASK_NONE}, + {0xa3, X86_DECODE_CMD_MOV, 0, false, fetch_moffs, decode_rax, NULL, NU= LL, NULL, RFLAGS_MASK_NONE}, + =20 + {0xa4, X86_DECODE_CMD_MOVS, 1, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, + {0xa5, X86_DECODE_CMD_MOVS, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, + {0xa6, X86_DECODE_CMD_CMPS, 1, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_OSZAPC}, + {0xa7, X86_DECODE_CMD_CMPS, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_OSZAPC}, + {0xaa, X86_DECODE_CMD_STOS, 1, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, + {0xab, X86_DECODE_CMD_STOS, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, + {0xac, X86_DECODE_CMD_LODS, 1, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, + {0xad, X86_DECODE_CMD_LODS, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, + {0xae, X86_DECODE_CMD_SCAS, 1, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_OSZAPC}, + {0xaf, X86_DECODE_CMD_SCAS, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_OSZAPC}, + =20 + {0xa8, X86_DECODE_CMD_TST, 1, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, + {0xa9, X86_DECODE_CMD_TST, 0, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, + =20 + {0xb0, X86_DECODE_CMD_MOV, 1, false, NULL, NULL, NULL, NULL, decode_mo= vgroup8, RFLAGS_MASK_NONE}, + {0xb1, X86_DECODE_CMD_MOV, 1, false, NULL, NULL, NULL, NULL, decode_mo= vgroup8, RFLAGS_MASK_NONE}, + {0xb2, X86_DECODE_CMD_MOV, 1, false, NULL, NULL, NULL, NULL, decode_mo= vgroup8, RFLAGS_MASK_NONE}, + {0xb3, X86_DECODE_CMD_MOV, 1, false, NULL, NULL, NULL, NULL, decode_mo= vgroup8, RFLAGS_MASK_NONE}, + {0xb4, X86_DECODE_CMD_MOV, 1, false, NULL, NULL, NULL, NULL, decode_mo= vgroup8, RFLAGS_MASK_NONE}, + {0xb5, X86_DECODE_CMD_MOV, 1, false, NULL, NULL, NULL, NULL, decode_mo= vgroup8, RFLAGS_MASK_NONE}, + {0xb6, X86_DECODE_CMD_MOV, 1, false, NULL, NULL, NULL, NULL, decode_mo= vgroup8, RFLAGS_MASK_NONE}, + {0xb7, X86_DECODE_CMD_MOV, 1, false, NULL, NULL, NULL, NULL, decode_mo= vgroup8, RFLAGS_MASK_NONE}, + =20 + {0xb8, X86_DECODE_CMD_MOV, 0, false, NULL, NULL, NULL, NULL, decode_mo= vgroup, RFLAGS_MASK_NONE}, + {0xb9, 
X86_DECODE_CMD_MOV, 0, false, NULL, NULL, NULL, NULL, decode_mo= vgroup, RFLAGS_MASK_NONE}, + {0xba, X86_DECODE_CMD_MOV, 0, false, NULL, NULL, NULL, NULL, decode_mo= vgroup, RFLAGS_MASK_NONE}, + {0xbb, X86_DECODE_CMD_MOV, 0, false, NULL, NULL, NULL, NULL, decode_mo= vgroup, RFLAGS_MASK_NONE}, + {0xbc, X86_DECODE_CMD_MOV, 0, false, NULL, NULL, NULL, NULL, decode_mo= vgroup, RFLAGS_MASK_NONE}, + {0xbd, X86_DECODE_CMD_MOV, 0, false, NULL, NULL, NULL, NULL, decode_mo= vgroup, RFLAGS_MASK_NONE}, + {0xbe, X86_DECODE_CMD_MOV, 0, false, NULL, NULL, NULL, NULL, decode_mo= vgroup, RFLAGS_MASK_NONE}, + {0xbf, X86_DECODE_CMD_MOV, 0, false, NULL, NULL, NULL, NULL, decode_mo= vgroup, RFLAGS_MASK_NONE}, + =20 + {0xc0, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, decode_imm8, NUL= L, NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, + {0xc1, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_imm8, NUL= L, NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, + =20 + {0xc2, X86_DECODE_RET_NEAR, 0, false, decode_imm16, NULL, NULL, NULL, = NULL, RFLAGS_MASK_NONE}, + {0xc3, X86_DECODE_RET_NEAR, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, + =20 + {0xc4, X86_DECODE_CMD_LES, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xc5, X86_DECODE_CMD_LDS, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_NONE}, + =20 + {0xc6, X86_DECODE_CMD_MOV, 1, true, decode_modrm_rm, decode_imm8, NULL= , NULL, NULL, RFLAGS_MASK_NONE}, + {0xc7, X86_DECODE_CMD_MOV, 0, true, decode_modrm_rm, decode_imm, NULL,= NULL, NULL, RFLAGS_MASK_NONE}, + =20 + {0xc8, X86_DECODE_CMD_ENTER, 0, false, decode_imm16, decode_imm8, NULL= , NULL, NULL, RFLAGS_MASK_NONE}, + {0xc9, X86_DECODE_CMD_LEAVE, 0, false, NULL, NULL, NULL, NULL, NULL, R= FLAGS_MASK_NONE}, + {0xca, X86_DECODE_RET_FAR, 0, false, decode_imm16, NULL, NULL, NULL, N= ULL, RFLAGS_MASK_NONE}, + {0xcb, X86_DECODE_RET_FAR, 0, false, decode_imm_0, NULL, NULL, NULL, N= ULL, RFLAGS_MASK_NONE}, + {0xcd, X86_DECODE_CMD_INT, 0, false, decode_imm8, NULL, NULL, NULL, NU= LL, RFLAGS_MASK_NONE}, + //{0xcf, X86_DECODE_CMD_IRET, 0, false, NULL, NULL, NULL, NULL, NULL, = RFLAGS_MASK_IRET}, + =20 + {0xd0, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, decode_imm_1, NU= LL, NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, + {0xd1, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_imm_1, NU= LL, NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, + {0xd2, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, decode_rcx, NULL= , NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, + {0xd3, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_rcx, NULL= , NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, + =20 + {0xd4, X86_DECODE_CMD_AAM, 0, false, decode_imm8, NULL, NULL, NULL, NU= LL, RFLAGS_MASK_OSZAPC}, + {0xd5, X86_DECODE_CMD_AAD, 0, false, decode_imm8, NULL, NULL, NULL, NU= LL, RFLAGS_MASK_OSZAPC}, + =20 + {0xd7, X86_DECODE_CMD_XLAT, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, + =20 + {0xd8, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_x8= 7_ins, RFLAGS_MASK_NONE}, + {0xd9, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_x8= 7_ins, RFLAGS_MASK_NONE}, + {0xda, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_x8= 7_ins, RFLAGS_MASK_NONE}, + {0xdb, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_x8= 7_ins, RFLAGS_MASK_NONE}, + {0xdc, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_x8= 7_ins, RFLAGS_MASK_NONE}, + {0xdd, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_x8= 
7_ins, RFLAGS_MASK_NONE}, + {0xde, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_x8= 7_ins, RFLAGS_MASK_NONE}, + {0xdf, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_x8= 7_ins, RFLAGS_MASK_NONE}, + =20 + {0xe0, X86_DECODE_CMD_LOOP, 0, false, decode_imm8_signed, NULL, NULL, = NULL, NULL, RFLAGS_MASK_NONE}, + {0xe1, X86_DECODE_CMD_LOOP, 0, false, decode_imm8_signed, NULL, NULL, = NULL, NULL, RFLAGS_MASK_NONE}, + {0xe2, X86_DECODE_CMD_LOOP, 0, false, decode_imm8_signed, NULL, NULL, = NULL, NULL, RFLAGS_MASK_NONE}, + =20 + {0xe3, X86_DECODE_CMD_JCXZ, 1, false, NULL, NULL, NULL, NULL, decode_j= xx, RFLAGS_MASK_NONE}, + =20 + {0xe4, X86_DECODE_CMD_IN, 1, false, decode_imm8, NULL, NULL, NULL, NUL= L, RFLAGS_MASK_NONE}, + {0xe5, X86_DECODE_CMD_IN, 0, false, decode_imm8, NULL, NULL, NULL, NUL= L, RFLAGS_MASK_NONE}, + {0xe6, X86_DECODE_CMD_OUT, 1, false, decode_imm8, NULL, NULL, NULL, NU= LL, RFLAGS_MASK_NONE}, + {0xe7, X86_DECODE_CMD_OUT, 0, false, decode_imm8, NULL, NULL, NULL, NU= LL, RFLAGS_MASK_NONE}, + {0xe8, X86_DECODE_CMD_CALL_NEAR, 0, false, decode_imm_signed, NULL, NU= LL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xe9, X86_DECODE_CMD_JMP_NEAR, 0, false, decode_imm_signed, NULL, NUL= L, NULL, NULL, RFLAGS_MASK_NONE}, + {0xea, X86_DECODE_CMD_JMP_FAR, 0, false, NULL, NULL, NULL, NULL, decod= e_farjmp, RFLAGS_MASK_NONE}, + {0xeb, X86_DECODE_CMD_JMP_NEAR, 1, false, decode_imm8_signed, NULL, NU= LL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xec, X86_DECODE_CMD_IN, 1, false, NULL, NULL, NULL, NULL, NULL, RFLA= GS_MASK_NONE}, + {0xed, X86_DECODE_CMD_IN, 0, false, NULL, NULL, NULL, NULL, NULL, RFLA= GS_MASK_NONE}, + {0xee, X86_DECODE_CMD_OUT, 1, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_NONE}, + {0xef, X86_DECODE_CMD_OUT, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_NONE}, + =20 + {0xf4, X86_DECODE_CMD_HLT, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_NONE}, + =20 + {0xf5, X86_DECODE_CMD_CMC, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_CF}, + =20 + {0xf6, X86_DECODE_CMD_INVL, 1, true, NULL, NULL, NULL, NULL, decode_f7= group, RFLAGS_MASK_OSZAPC}, + {0xf7, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_f7= group, RFLAGS_MASK_OSZAPC}, + =20 + {0xf8, X86_DECODE_CMD_CLC, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_CF}, + {0xf9, X86_DECODE_CMD_STC, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_CF}, + =20 + {0xfa, X86_DECODE_CMD_CLI, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_IF}, + {0xfb, X86_DECODE_CMD_STI, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_IF}, + {0xfc, X86_DECODE_CMD_CLD, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_DF}, + {0xfd, X86_DECODE_CMD_STD, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_DF}, + {0xfe, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, NULL, NULL, NULL= , decode_incgroup2, RFLAGS_MASK_OSZAPC}, + {0xff, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, NULL, NULL, NULL= , decode_ffgroup, RFLAGS_MASK_OSZAPC}, +}; + +struct decode_tbl _2op_inst[] =3D +{ + {0x0, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, NULL, NULL, NULL,= decode_sldtgroup, RFLAGS_MASK_NONE}, + {0x1, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, NULL, NULL, NULL,= decode_lidtgroup, RFLAGS_MASK_NONE}, + {0x6, X86_DECODE_CMD_CLTS, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_TF}, + {0x9, X86_DECODE_CMD_WBINVD, 0, false, NULL, NULL, NULL, NULL, NULL, R= FLAGS_MASK_NONE}, + {0x18, X86_DECODE_CMD_PREFETCH, 0, true, NULL, NULL, NULL, NULL, decod= e_x87_general, RFLAGS_MASK_NONE}, + 
{0x1f, X86_DECODE_CMD_NOP, 0, true, decode_modrm_rm, NULL, NULL, NULL,= NULL, RFLAGS_MASK_NONE}, + {0x20, X86_DECODE_CMD_MOV_FROM_CR, 0, true, decode_modrm_rm, decode_mo= drm_reg, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x21, X86_DECODE_CMD_MOV_FROM_DR, 0, true, decode_modrm_rm, decode_mo= drm_reg, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x22, X86_DECODE_CMD_MOV_TO_CR, 0, true, decode_modrm_reg, decode_mod= rm_rm, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x23, X86_DECODE_CMD_MOV_TO_DR, 0, true, decode_modrm_reg, decode_mod= rm_rm, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x30, X86_DECODE_CMD_WRMSR, 0, false, NULL, NULL, NULL, NULL, NULL, R= FLAGS_MASK_NONE}, + {0x31, X86_DECODE_CMD_RDTSC, 0, false, NULL, NULL, NULL, NULL, NULL, R= FLAGS_MASK_NONE}, + {0x32, X86_DECODE_CMD_RDMSR, 0, false, NULL, NULL, NULL, NULL, NULL, R= FLAGS_MASK_NONE}, + {0x40, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x41, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x42, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x43, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x44, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x45, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x46, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x47, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x48, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x49, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x4a, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x4b, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x4c, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x4d, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x4e, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x4f, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x77, X86_DECODE_CMD_EMMS, 0, false, NULL, NULL, NULL, NULL, decode_x= 87_general, RFLAGS_MASK_NONE}, + {0x82, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x83, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x84, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x85, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x86, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x87, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x88, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x89, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x8a, 
X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x8b, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x8c, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x8d, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x8e, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x8f, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, + {0x90, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, + {0x91, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, + {0x92, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, + {0x93, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, + {0x94, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, + {0x95, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, + {0x96, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, + {0x97, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, + {0x98, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, + {0x99, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, + {0x9a, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, + {0x9b, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, + {0x9c, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, + {0x9d, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, + {0x9e, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, + {0x9f, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, + =20 + {0xb0, X86_DECODE_CMD_CMPXCHG, 1, true, decode_modrm_rm, decode_modrm_= reg, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xb1, X86_DECODE_CMD_CMPXCHG, 0, true, decode_modrm_rm, decode_modrm_= reg, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + =20 + {0xb6, X86_DECODE_CMD_MOVZX, 0, true, decode_modrm_reg, decode_modrm_r= m, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xb7, X86_DECODE_CMD_MOVZX, 0, true, decode_modrm_reg, decode_modrm_r= m, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xb8, X86_DECODE_CMD_POPCNT, 0, true, decode_modrm_reg, decode_modrm_= rm, NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xbe, X86_DECODE_CMD_MOVSX, 0, true, decode_modrm_reg, decode_modrm_r= m, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xbf, X86_DECODE_CMD_MOVSX, 0, true, decode_modrm_reg, decode_modrm_r= m, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xa0, X86_DECODE_CMD_PUSH_SEG, 0, false, false, NULL, NULL, NULL, dec= ode_pushseg, RFLAGS_MASK_NONE}, + {0xa1, X86_DECODE_CMD_POP_SEG, 0, false, false, NULL, NULL, NULL, deco= de_popseg, RFLAGS_MASK_NONE}, + {0xa2, X86_DECODE_CMD_CPUID, 0, false, NULL, NULL, NULL, NULL, NULL, R= FLAGS_MASK_NONE}, + {0xa3, X86_DECODE_CMD_BT, 0, true, decode_modrm_rm, decode_modrm_reg, = NULL, NULL, NULL, RFLAGS_MASK_CF}, + {0xa4, X86_DECODE_CMD_SHLD, 0, true, decode_modrm_rm, decode_modrm_reg= , decode_imm8, NULL, NULL, RFLAGS_MASK_OSZAPC}, + 
{0xa5, X86_DECODE_CMD_SHLD, 0, true, decode_modrm_rm, decode_modrm_reg= , decode_rcx, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xa8, X86_DECODE_CMD_PUSH_SEG, 0, false, false, NULL, NULL, NULL, dec= ode_pushseg, RFLAGS_MASK_NONE}, + {0xa9, X86_DECODE_CMD_POP_SEG, 0, false, false, NULL, NULL, NULL, deco= de_popseg, RFLAGS_MASK_NONE}, + {0xab, X86_DECODE_CMD_BTS, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_CF}, + {0xac, X86_DECODE_CMD_SHRD, 0, true, decode_modrm_rm, decode_modrm_reg= , decode_imm8, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xad, X86_DECODE_CMD_SHRD, 0, true, decode_modrm_rm, decode_modrm_reg= , decode_rcx, NULL, NULL, RFLAGS_MASK_OSZAPC}, + =20 + {0xae, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, NULL, NULL, NULL= , decode_aegroup, RFLAGS_MASK_NONE}, + =20 + {0xaf, X86_DECODE_CMD_IMUL_2, 0, true, decode_modrm_reg, decode_modrm_= rm, NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xb2, X86_DECODE_CMD_LSS, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xb3, X86_DECODE_CMD_BTR, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xba, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_imm8, NUL= L, NULL, decode_btgroup, RFLAGS_MASK_OSZAPC}, + {0xbb, X86_DECODE_CMD_BTC, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xbc, X86_DECODE_CMD_BSF, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xbd, X86_DECODE_CMD_BSR, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + =20 + {0xc1, X86_DECODE_CMD_XADD, 0, true, decode_modrm_rm, decode_modrm_reg= , NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + =20 + {0xc7, X86_DECODE_CMD_CMPXCHG8B, 0, true, decode_modrm_rm, NULL, NULL,= NULL, NULL, RFLAGS_MASK_ZF}, + =20 + {0xc8, X86_DECODE_CMD_BSWAP, 0, false, NULL, NULL, NULL, NULL, decode_= bswap, RFLAGS_MASK_NONE}, + {0xc9, X86_DECODE_CMD_BSWAP, 0, false, NULL, NULL, NULL, NULL, decode_= bswap, RFLAGS_MASK_NONE}, + {0xca, X86_DECODE_CMD_BSWAP, 0, false, NULL, NULL, NULL, NULL, decode_= bswap, RFLAGS_MASK_NONE}, + {0xcb, X86_DECODE_CMD_BSWAP, 0, false, NULL, NULL, NULL, NULL, decode_= bswap, RFLAGS_MASK_NONE}, + {0xcc, X86_DECODE_CMD_BSWAP, 0, false, NULL, NULL, NULL, NULL, decode_= bswap, RFLAGS_MASK_NONE}, + {0xcd, X86_DECODE_CMD_BSWAP, 0, false, NULL, NULL, NULL, NULL, decode_= bswap, RFLAGS_MASK_NONE}, + {0xce, X86_DECODE_CMD_BSWAP, 0, false, NULL, NULL, NULL, NULL, decode_= bswap, RFLAGS_MASK_NONE}, + {0xcf, X86_DECODE_CMD_BSWAP, 0, false, NULL, NULL, NULL, NULL, decode_= bswap, RFLAGS_MASK_NONE}, +}; + +struct decode_x87_tbl invl_inst_x87 =3D {0x0, 0, 0, 0, 0, false, false, NU= LL, NULL, decode_invalid, 0}; + +struct decode_x87_tbl _x87_inst[] =3D +{ + {0xd8, 0, 3, X86_DECODE_CMD_FADD, 10, false, false, decode_x87_modrm_s= t0, decode_decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xd8, 0, 0, X86_DECODE_CMD_FADD, 4, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, + {0xd8, 1, 3, X86_DECODE_CMD_FMUL, 10, false, false, decode_x87_modrm_s= t0, decode_decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xd8, 1, 0, X86_DECODE_CMD_FMUL, 4, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, + {0xd8, 4, 3, X86_DECODE_CMD_FSUB, 10, false, false, decode_x87_modrm_s= t0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xd8, 4, 0, X86_DECODE_CMD_FSUB, 4, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_floatp, NULL, 
RFLAGS_MASK_NONE}, + {0xd8, 5, 3, X86_DECODE_CMD_FSUB, 10, true, false, decode_x87_modrm_st= 0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xd8, 5, 0, X86_DECODE_CMD_FSUB, 4, true, false, decode_x87_modrm_st0= , decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, + {0xd8, 6, 3, X86_DECODE_CMD_FDIV, 10, false, false, decode_x87_modrm_s= t0,decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xd8, 6, 0, X86_DECODE_CMD_FDIV, 4, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, + {0xd8, 7, 3, X86_DECODE_CMD_FDIV, 10, true, false, decode_x87_modrm_st= 0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xd8, 7, 0, X86_DECODE_CMD_FDIV, 4, true, false, decode_x87_modrm_st0= , decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, + =20 + {0xd9, 0, 3, X86_DECODE_CMD_FLD, 10, false, false, decode_x87_modrm_st= 0, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 0, 0, X86_DECODE_CMD_FLD, 4, false, false, decode_x87_modrm_flo= atp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 1, 3, X86_DECODE_CMD_FXCH, 10, false, false, decode_x87_modrm_s= t0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xd9, 1, 0, X86_DECODE_CMD_INVL, 10, false, false, decode_x87_modrm_s= t0, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 2, 3, X86_DECODE_CMD_INVL, 10, false, false, decode_x87_modrm_s= t0, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 2, 0, X86_DECODE_CMD_FST, 4, false, false, decode_x87_modrm_flo= atp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 3, 3, X86_DECODE_CMD_INVL, 10, false, false, decode_x87_modrm_s= t0, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 3, 0, X86_DECODE_CMD_FST, 4, false, true, decode_x87_modrm_floa= tp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 4, 3, X86_DECODE_CMD_INVL, 10, false, false, decode_x87_modrm_s= t0, NULL, decode_d9_4, RFLAGS_MASK_NONE}, + {0xd9, 4, 0, X86_DECODE_CMD_INVL, 4, false, false, decode_x87_modrm_by= tep, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 5, 3, X86_DECODE_CMD_FLDxx, 10, false, false, NULL, NULL, NULL,= RFLAGS_MASK_NONE}, + {0xd9, 5, 0, X86_DECODE_CMD_FLDCW, 2, false, false, decode_x87_modrm_b= ytep, NULL, NULL, RFLAGS_MASK_NONE}, + // + {0xd9, 7, 3, X86_DECODE_CMD_FNSTCW, 2, false, false, decode_x87_modrm_= bytep, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 7, 0, X86_DECODE_CMD_FNSTCW, 2, false, false, decode_x87_modrm_= bytep, NULL, NULL, RFLAGS_MASK_NONE}, + =20 + {0xda, 0, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xda, 0, 0, X86_DECODE_CMD_FADD, 4, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xda, 1, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, decode_decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xda, 1, 0, X86_DECODE_CMD_FMUL, 4, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xda, 2, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xda, 3, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xda, 4, 3, X86_DECODE_CMD_INVL, 10, false, false, NULL, NULL, NULL, = RFLAGS_MASK_NONE}, + {0xda, 4, 0, X86_DECODE_CMD_FSUB, 4, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xda, 5, 3, X86_DECODE_CMD_FUCOM, 10, false, true, decode_x87_modrm_s= t0, decode_decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xda, 5, 0, X86_DECODE_CMD_FSUB, 4, true, false, decode_x87_modrm_st0= , decode_x87_modrm_intp, NULL, 
RFLAGS_MASK_NONE}, + {0xda, 6, 3, X86_DECODE_CMD_INVL, 10, false, false, NULL, NULL, NULL, = RFLAGS_MASK_NONE}, + {0xda, 6, 0, X86_DECODE_CMD_FDIV, 4, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xda, 7, 3, X86_DECODE_CMD_INVL, 10, false, false, NULL, NULL, NULL, = RFLAGS_MASK_NONE}, + {0xda, 7, 0, X86_DECODE_CMD_FDIV, 4, true, false, decode_x87_modrm_st0= , decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + =20 + {0xdb, 0, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdb, 0, 0, X86_DECODE_CMD_FLD, 4, false, false, decode_x87_modrm_int= p, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdb, 1, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdb, 2, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdb, 2, 0, X86_DECODE_CMD_FST, 4, false, false, decode_x87_modrm_int= p, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdb, 3, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdb, 3, 0, X86_DECODE_CMD_FST, 4, false, true, decode_x87_modrm_intp= , NULL, NULL, RFLAGS_MASK_NONE}, + {0xdb, 4, 3, X86_DECODE_CMD_INVL, 10, false, false, NULL, NULL, decode= _db_4, RFLAGS_MASK_NONE}, + {0xdb, 4, 0, X86_DECODE_CMD_INVL, 10, false, false, NULL, NULL, NULL, = RFLAGS_MASK_NONE}, + {0xdb, 5, 3, X86_DECODE_CMD_FUCOMI, 10, false, false, decode_x87_modrm= _st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdb, 5, 0, X86_DECODE_CMD_FLD, 10, false, false, decode_x87_modrm_fl= oatp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdb, 7, 0, X86_DECODE_CMD_FST, 10, false, true, decode_x87_modrm_flo= atp, NULL, NULL, RFLAGS_MASK_NONE}, + =20 + {0xdc, 0, 3, X86_DECODE_CMD_FADD, 10, false, false, decode_x87_modrm_s= t0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdc, 0, 0, X86_DECODE_CMD_FADD, 8, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, + {0xdc, 1, 3, X86_DECODE_CMD_FMUL, 10, false, false, decode_x87_modrm_s= t0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdc, 1, 0, X86_DECODE_CMD_FMUL, 8, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, + {0xdc, 4, 3, X86_DECODE_CMD_FSUB, 10, true, false, decode_x87_modrm_st= 0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdc, 4, 0, X86_DECODE_CMD_FSUB, 8, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, + {0xdc, 5, 3, X86_DECODE_CMD_FSUB, 10, false, false, decode_x87_modrm_s= t0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdc, 5, 0, X86_DECODE_CMD_FSUB, 8, true, false, decode_x87_modrm_st0= , decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, + {0xdc, 6, 3, X86_DECODE_CMD_FDIV, 10, true, false, decode_x87_modrm_st= 0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdc, 6, 0, X86_DECODE_CMD_FDIV, 8, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, + {0xdc, 7, 3, X86_DECODE_CMD_FDIV, 10, false, false, decode_x87_modrm_s= t0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdc, 7, 0, X86_DECODE_CMD_FDIV, 8, true, false, decode_x87_modrm_st0= , decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, + =20 + {0xdd, 0, 0, X86_DECODE_CMD_FLD, 8, false, false, decode_x87_modrm_flo= atp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdd, 1, 3, X86_DECODE_CMD_FXCH, 10, false, false, decode_x87_modrm_s= t0, 
decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdd, 2, 3, X86_DECODE_CMD_FST, 10, false, false, decode_x87_modrm_st= 0, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdd, 2, 0, X86_DECODE_CMD_FST, 8, false, false, decode_x87_modrm_flo= atp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdd, 3, 3, X86_DECODE_CMD_FST, 10, false, true, decode_x87_modrm_st0= , NULL, NULL, RFLAGS_MASK_NONE}, + {0xdd, 3, 0, X86_DECODE_CMD_FST, 8, false, true, decode_x87_modrm_floa= tp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdd, 4, 3, X86_DECODE_CMD_FUCOM, 10, false, false, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdd, 4, 0, X86_DECODE_CMD_FRSTOR, 8, false, false, decode_x87_modrm_= bytep, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdd, 5, 3, X86_DECODE_CMD_FUCOM, 10, false, true, decode_x87_modrm_s= t0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdd, 7, 0, X86_DECODE_CMD_FNSTSW, 0, false, false, decode_x87_modrm_= bytep, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdd, 7, 3, X86_DECODE_CMD_FNSTSW, 0, false, false, decode_x87_modrm_= bytep, NULL, NULL, RFLAGS_MASK_NONE}, + =20 + {0xde, 0, 3, X86_DECODE_CMD_FADD, 10, false, true, decode_x87_modrm_st= 0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xde, 0, 0, X86_DECODE_CMD_FADD, 2, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xde, 1, 3, X86_DECODE_CMD_FMUL, 10, false, true, decode_x87_modrm_st= 0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xde, 1, 0, X86_DECODE_CMD_FMUL, 2, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xde, 4, 3, X86_DECODE_CMD_FSUB, 10, true, true, decode_x87_modrm_st0= , decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xde, 4, 0, X86_DECODE_CMD_FSUB, 2, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xde, 5, 3, X86_DECODE_CMD_FSUB, 10, false, true, decode_x87_modrm_st= 0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xde, 5, 0, X86_DECODE_CMD_FSUB, 2, true, false, decode_x87_modrm_st0= , decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xde, 6, 3, X86_DECODE_CMD_FDIV, 10, true, true, decode_x87_modrm_st0= , decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xde, 6, 0, X86_DECODE_CMD_FDIV, 2, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xde, 7, 3, X86_DECODE_CMD_FDIV, 10, false, true, decode_x87_modrm_st= 0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xde, 7, 0, X86_DECODE_CMD_FDIV, 2, true, false, decode_x87_modrm_st0= , decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + =20 + {0xdf, 0, 0, X86_DECODE_CMD_FLD, 2, false, false, decode_x87_modrm_int= p, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdf, 1, 3, X86_DECODE_CMD_FXCH, 10, false, false, decode_x87_modrm_s= t0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdf, 2, 3, X86_DECODE_CMD_FST, 10, false, true, decode_x87_modrm_st0= , decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdf, 2, 0, X86_DECODE_CMD_FST, 2, false, false, decode_x87_modrm_int= p, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdf, 3, 3, X86_DECODE_CMD_FST, 10, false, true, decode_x87_modrm_st0= , decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdf, 3, 0, X86_DECODE_CMD_FST, 2, false, true, decode_x87_modrm_intp= , NULL, NULL, RFLAGS_MASK_NONE}, + {0xdf, 4, 3, X86_DECODE_CMD_FNSTSW, 2, false, true, decode_x87_modrm_b= ytep, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdf, 5, 3, X86_DECODE_CMD_FUCOMI, 10, false, true, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdf, 5, 0, X86_DECODE_CMD_FLD, 8, false, false, 
decode_x87_modrm_int= p, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdf, 7, 0, X86_DECODE_CMD_FST, 8, false, true, decode_x87_modrm_intp= , NULL, NULL, RFLAGS_MASK_NONE}, +}; + +void calc_modrm_operand16(CPUState *cpu, struct x86_decode *decode, struct= x86_decode_op *op) +{ + addr_t ptr =3D 0; + x86_reg_segment seg =3D REG_SEG_DS; + + if (!decode->modrm.mod && 6 =3D=3D decode->modrm.rm) { + op->ptr =3D (uint16_t)decode->displacement; + goto calc_addr; + } + + if (decode->displacement_size) + ptr =3D sign(decode->displacement, decode->displacement_size); + + switch (decode->modrm.rm) { + case 0: + ptr +=3D BX(cpu) + SI(cpu); + break; + case 1: + ptr +=3D BX(cpu) + DI(cpu); + break; + case 2: + ptr +=3D BP(cpu) + SI(cpu); + seg =3D REG_SEG_SS; + break; + case 3: + ptr +=3D BP(cpu) + DI(cpu); + seg =3D REG_SEG_SS; + break; + case 4: + ptr +=3D SI(cpu); + break; + case 5: + ptr +=3D DI(cpu); + break; + case 6: + ptr +=3D BP(cpu); + seg =3D REG_SEG_SS; + break; + case 7: + ptr +=3D BX(cpu); + break; + } +calc_addr: + if (X86_DECODE_CMD_LEA =3D=3D decode->cmd) + op->ptr =3D (uint16_t)ptr; + else + op->ptr =3D decode_linear_addr(cpu, decode, (uint16_t)ptr, seg); +} + +addr_t get_reg_ref(CPUState *cpu, int reg, int is_extended, int size) +{ + addr_t ptr =3D 0; + int which =3D 0; + + if (is_extended) + reg |=3D REG_R8; + + + switch (size) { + case 1: + if (is_extended || reg < 4) { + which =3D 1; + ptr =3D (addr_t)&RL(cpu, reg); + } else { + which =3D 2; + ptr =3D (addr_t)&RH(cpu, reg - 4); + } + break; + default: + which =3D 3; + ptr =3D (addr_t)&RRX(cpu, reg); + break; + } + return ptr; +} + +addr_t get_reg_val(CPUState *cpu, int reg, int is_extended, int size) +{ + addr_t val =3D 0; + memcpy(&val, (void*)get_reg_ref(cpu, reg, is_extended, size), size); + return val; +} + +static addr_t get_sib_val(CPUState *cpu, struct x86_decode *decode, x86_re= g_segment *sel) +{ + addr_t base =3D 0; + addr_t scaled_index =3D 0; + int addr_size =3D decode->addressing_size; + int base_reg =3D decode->sib.base; + int index_reg =3D decode->sib.index; + + *sel =3D REG_SEG_DS; + + if (decode->modrm.mod || base_reg !=3D REG_RBP) { + if (decode->rex.b) + base_reg |=3D REG_R8; + if (REG_RSP =3D=3D base_reg || REG_RBP =3D=3D base_reg) + *sel =3D REG_SEG_SS; + base =3D get_reg_val(cpu, decode->sib.base, decode->rex.b, addr_si= ze); + } + + if (decode->rex.x) + index_reg |=3D REG_R8; + + if (index_reg !=3D REG_RSP) + scaled_index =3D get_reg_val(cpu, index_reg, decode->rex.x, addr_s= ize) << decode->sib.scale; + return base + scaled_index; +} + +void calc_modrm_operand32(CPUState *cpu, struct x86_decode *decode, struct= x86_decode_op *op) +{ + x86_reg_segment seg =3D REG_SEG_DS; + addr_t ptr =3D 0; + int addr_size =3D decode->addressing_size; + + if (decode->displacement_size) + ptr =3D sign(decode->displacement, decode->displacement_size); + + if (4 =3D=3D decode->modrm.rm) { + ptr +=3D get_sib_val(cpu, decode, &seg); + } + else if (!decode->modrm.mod && 5 =3D=3D decode->modrm.rm) { + if (x86_is_long_mode(cpu)) + ptr +=3D RIP(cpu) + decode->len; + else + ptr =3D decode->displacement; + } + else { + if (REG_RBP =3D=3D decode->modrm.rm || REG_RSP =3D=3D decode->modr= m.rm) + seg =3D REG_SEG_SS; + ptr +=3D get_reg_val(cpu, decode->modrm.rm, decode->rex.b, addr_si= ze); + } + + if (X86_DECODE_CMD_LEA =3D=3D decode->cmd) + op->ptr =3D (uint32_t)ptr; + else + op->ptr =3D decode_linear_addr(cpu, decode, (uint32_t)ptr, seg); +} + +void calc_modrm_operand64(CPUState *cpu, struct x86_decode *decode, struct= x86_decode_op *op) +{ + 
x86_reg_segment seg =3D REG_SEG_DS; + int32_t offset =3D 0; + int mod =3D decode->modrm.mod; + int rm =3D decode->modrm.rm; + addr_t ptr; + int src =3D decode->modrm.rm; + =20 + if (decode->displacement_size) + offset =3D sign(decode->displacement, decode->displacement_size); + + if (4 =3D=3D rm) + ptr =3D get_sib_val(cpu, decode, &seg) + offset; + else if (0 =3D=3D mod && 5 =3D=3D rm) + ptr =3D RIP(cpu) + decode->len + (int32_t) offset; + else + ptr =3D get_reg_val(cpu, src, decode->rex.b, 8) + (int64_t) offset; + =20 + if (X86_DECODE_CMD_LEA =3D=3D decode->cmd) + op->ptr =3D ptr; + else + op->ptr =3D decode_linear_addr(cpu, decode, ptr, seg); +} + + +void calc_modrm_operand(CPUState *cpu, struct x86_decode *decode, struct x= 86_decode_op *op) +{ + if (3 =3D=3D decode->modrm.mod) { + op->reg =3D decode->modrm.reg; + op->type =3D X86_VAR_REG; + op->ptr =3D get_reg_ref(cpu, decode->modrm.rm, decode->rex.b, deco= de->operand_size); + return; + } + + switch (decode->addressing_size) { + case 2: + calc_modrm_operand16(cpu, decode, op); + break; + case 4: + calc_modrm_operand32(cpu, decode, op); + break; + case 8: + calc_modrm_operand64(cpu, decode, op); + break; + default: + VM_PANIC_EX("unsupported address size %d\n", decode->addressin= g_size); + break; + } +} + +static void decode_prefix(CPUState *cpu, struct x86_decode *decode) +{ + while (1) { + uint8_t byte =3D decode_byte(cpu, decode); + switch (byte) { + case PREFIX_LOCK: + decode->lock =3D byte; + break; + case PREFIX_REPN: + case PREFIX_REP: + decode->rep =3D byte; + break; + case PREFIX_CS_SEG_OVEERIDE: + case PREFIX_SS_SEG_OVEERIDE: + case PREFIX_DS_SEG_OVEERIDE: + case PREFIX_ES_SEG_OVEERIDE: + case PREFIX_FS_SEG_OVEERIDE: + case PREFIX_GS_SEG_OVEERIDE: + decode->segment_override =3D byte; + break; + case PREFIX_OP_SIZE_OVERRIDE: + decode->op_size_override =3D byte; + break; + case PREFIX_ADDR_SIZE_OVERRIDE: + decode->addr_size_override =3D byte; + break; + case PREFIX_REX ... 
(PREFIX_REX + 0xf): + if (x86_is_long_mode(cpu)) { + decode->rex.rex =3D byte; + break; + } + // fall through when not in long mode + default: + decode->len--; + return; + } + } +} + +void set_addressing_size(CPUState *cpu, struct x86_decode *decode) +{ + decode->addressing_size =3D -1; + if (x86_is_real(cpu) || x86_is_v8086(cpu)) { + if (decode->addr_size_override) + decode->addressing_size =3D 4; + else + decode->addressing_size =3D 2; + } + else if (!x86_is_long_mode(cpu)) { + // protected + struct vmx_segment cs; + vmx_read_segment_descriptor(cpu, &cs, REG_SEG_CS); + // check db + if ((cs.ar >> 14) & 1) { + if (decode->addr_size_override) + decode->addressing_size =3D 2; + else + decode->addressing_size =3D 4; + } else { + if (decode->addr_size_override) + decode->addressing_size =3D 4; + else + decode->addressing_size =3D 2; + } + } else { + // long + if (decode->addr_size_override) + decode->addressing_size =3D 4; + else + decode->addressing_size =3D 8; + } +} + +void set_operand_size(CPUState *cpu, struct x86_decode *decode) +{ + decode->operand_size =3D -1; + if (x86_is_real(cpu) || x86_is_v8086(cpu)) { + if (decode->op_size_override) + decode->operand_size =3D 4; + else + decode->operand_size =3D 2; + } + else if (!x86_is_long_mode(cpu)) { + // protected + struct vmx_segment cs; + vmx_read_segment_descriptor(cpu, &cs, REG_SEG_CS); + // check db + if ((cs.ar >> 14) & 1) { + if (decode->op_size_override) + decode->operand_size =3D 2; + else + decode->operand_size =3D 4; + } else { + if (decode->op_size_override) + decode->operand_size =3D 4; + else + decode->operand_size =3D 2; + } + } else { + // long + if (decode->op_size_override) + decode->operand_size =3D 2; + else + decode->operand_size =3D 4; + + if (decode->rex.w) + decode->operand_size =3D 8; + } +} + +static void decode_sib(CPUState *cpu, struct x86_decode *decode) +{ + if ((decode->modrm.mod !=3D 3) && (4 =3D=3D decode->modrm.rm) && (deco= de->addressing_size !=3D 2)) { + decode->sib.sib =3D decode_byte(cpu, decode); + decode->sib_present =3D true; + } +} + +/* 16 bit modrm + * mod R/M + * 00 [BX+SI] [BX+DI] [BP+SI] [BP+DI] [SI]= [DI] [disp16] [BX] + * 01 [BX+SI+disp8] [BX+DI+disp8] [BP+SI+disp8] [BP+DI+disp8] [SI+disp8] [= DI+disp8] [BP+disp8] [BX+disp8] + * 10 [BX+SI+disp16] [BX+DI+disp16] [BP+SI+disp16] [BP+DI+disp16] [SI+disp= 16] [DI+disp16] [BP+disp16] [BX+disp16] + * 11 - - - - = - - - - + */ +int disp16_tbl[4][8] =3D + {{0, 0, 0, 0, 0, 0, 2, 0}, + {1, 1, 1, 1, 1, 1, 1, 1}, + {2, 2, 2, 2, 2, 2, 2, 2}, + {0, 0, 0, 0, 0, 0, 0, 0}}; + +/* + 32/64-bit modrm + Mod + 00 [r/m] [r/m] [r/m] [r/m] [SIB] [= RIP/EIP1,2+disp32] [r/m] [r/m] + 01 [r/m+disp8] [r/m+disp8] [r/m+disp8] [r/m+disp8] [SIB+disp8] [= r/m+disp8] [SIB+disp8] [r/m+disp8] + 10 [r/m+disp32] [r/m+disp32] [r/m+disp32] [r/m+disp32] [SIB+disp32] [= r/m+disp32] [SIB+disp32] [r/m+disp32] + 11 - - - - - -= - - + */ +int disp32_tbl[4][8] =3D + {{0, 0, 0, 0, -1, 4, 0, 0}, + {1, 1, 1, 1, 1, 1, 1, 1}, + {4, 4, 4, 4, 4, 4, 4, 4}, + {0, 0, 0, 0, 0, 0, 0, 0}}; + +static inline void decode_displacement(CPUState *cpu, struct x86_decode *d= ecode) +{ + int addressing_size =3D decode->addressing_size; + int mod =3D decode->modrm.mod; + int rm =3D decode->modrm.rm; + =20 + decode->displacement_size =3D 0; + switch (addressing_size) { + case 2: + decode->displacement_size =3D disp16_tbl[mod][rm]; + if (decode->displacement_size) + decode->displacement =3D (uint16_t)decode_bytes(cpu, decod= e, decode->displacement_size); + break; + case 4: + case 8: + if (-1 =3D=3D 
disp32_tbl[mod][rm]) {
+            if (5 == decode->sib.base)
+                decode->displacement_size = 4;
+        }
+        else
+            decode->displacement_size = disp32_tbl[mod][rm];
+
+        if (decode->displacement_size)
+            decode->displacement = (uint32_t)decode_bytes(cpu, decode, decode->displacement_size);
+        break;
+    }
+}
+
+static inline void decode_modrm(CPUState *cpu, struct x86_decode *decode)
+{
+    decode->modrm.modrm = decode_byte(cpu, decode);
+    decode->is_modrm = true;
+
+    decode_sib(cpu, decode);
+    decode_displacement(cpu, decode);
+}
+
+static inline void decode_opcode_general(CPUState *cpu, struct x86_decode *decode, uint8_t opcode, struct decode_tbl *inst_decoder)
+{
+    decode->cmd = inst_decoder->cmd;
+    if (inst_decoder->operand_size)
+        decode->operand_size = inst_decoder->operand_size;
+    decode->flags_mask = inst_decoder->flags_mask;
+
+    if (inst_decoder->is_modrm)
+        decode_modrm(cpu, decode);
+    if (inst_decoder->decode_op1)
+        inst_decoder->decode_op1(cpu, decode, &decode->op[0]);
+    if (inst_decoder->decode_op2)
+        inst_decoder->decode_op2(cpu, decode, &decode->op[1]);
+    if (inst_decoder->decode_op3)
+        inst_decoder->decode_op3(cpu, decode, &decode->op[2]);
+    if (inst_decoder->decode_op4)
+        inst_decoder->decode_op4(cpu, decode, &decode->op[3]);
+    if (inst_decoder->decode_postfix)
+        inst_decoder->decode_postfix(cpu, decode);
+}
+
+static inline void decode_opcode_1(CPUState *cpu, struct x86_decode *decode, uint8_t opcode)
+{
+    struct decode_tbl *inst_decoder = &_decode_tbl1[opcode];
+    decode_opcode_general(cpu, decode, opcode, inst_decoder);
+}
+
+
+static inline void decode_opcode_2(CPUState *cpu, struct x86_decode *decode, uint8_t opcode)
+{
+    struct decode_tbl *inst_decoder = &_decode_tbl2[opcode];
+    decode_opcode_general(cpu, decode, opcode, inst_decoder);
+}
+
+static void decode_opcodes(CPUState *cpu, struct x86_decode *decode)
+{
+    uint8_t opcode;
+
+    opcode = decode_byte(cpu, decode);
+    decode->opcode[decode->opcode_len++] = opcode;
+    if (opcode != OPCODE_ESCAPE) {
+        decode_opcode_1(cpu, decode, opcode);
+    } else {
+        opcode = decode_byte(cpu, decode);
+        decode->opcode[decode->opcode_len++] = opcode;
+        decode_opcode_2(cpu, decode, opcode);
+    }
+}
+
+uint32_t decode_instruction(CPUState *cpu, struct x86_decode *decode)
+{
+    ZERO_INIT(*decode);
+
+    decode_prefix(cpu, decode);
+    set_addressing_size(cpu, decode);
+    set_operand_size(cpu, decode);
+
+    decode_opcodes(cpu, decode);
+
+    return decode->len;
+}
+
+void init_decoder(CPUState *cpu)
+{
+    int i;
+
+    for (i = 0; i < ARRAY_SIZE(_decode_tbl1); i++)
+        memcpy(&_decode_tbl1[i], &invl_inst, sizeof(invl_inst));
+    for (i = 0; i < ARRAY_SIZE(_decode_tbl2); i++)
+        memcpy(&_decode_tbl2[i], &invl_inst, sizeof(invl_inst));
+    for (i = 0; i < ARRAY_SIZE(_decode_tbl3); i++)
+        memcpy(&_decode_tbl3[i], &invl_inst_x87, sizeof(invl_inst_x87));
+
+    for (i = 0; i < ARRAY_SIZE(_1op_inst); i++) {
+        _decode_tbl1[_1op_inst[i].opcode] = _1op_inst[i];
+    }
+    for (i = 0; i < ARRAY_SIZE(_2op_inst); i++) {
+        _decode_tbl2[_2op_inst[i].opcode] = _2op_inst[i];
+    }
+    for (i = 0; i < ARRAY_SIZE(_x87_inst); i++) {
+        int index = ((_x87_inst[i].opcode & 0xf) << 4) | ((_x87_inst[i].modrm_mod & 1) << 3) | _x87_inst[i].modrm_reg;
+        _decode_tbl3[index] = _x87_inst[i];
+    }
+}
+
+
+const char *decode_cmd_to_string(enum x86_decode_cmd cmd)
+{
+    static const char *cmds[] = {"INVL", "PUSH", "PUSH_SEG", "POP", "POP_SEG", "MOV", "MOVSX", "MOVZX", "CALL_NEAR",
+        "CALL_NEAR_ABS_INDIRECT", "CALL_FAR_ABS_INDIRECT", "CMD_CALL_FAR",=
"RET_NEAR", "RET_FAR", "ADD", "OR", + "ADC", "SBB", "AND", "SUB", "XOR", "CMP", "INC", "DEC", "TST", "NO= T", "NEG", "JMP_NEAR", "JMP_NEAR_ABS_INDIRECT", + "JMP_FAR", "JMP_FAR_ABS_INDIRECT", "LEA", "JXX", + "JCXZ", "SETXX", "MOV_TO_SEG", "MOV_FROM_SEG", "CLI", "STI", "CLD"= , "STD", "STC", + "CLC", "OUT", "IN", "INS", "OUTS", "LIDT", "SIDT", "LGDT", "SGDT",= "SMSW", "LMSW", "RDTSCP", "INVLPG", "MOV_TO_CR", + "MOV_FROM_CR", "MOV_TO_DR", "MOV_FROM_DR", "PUSHF", "POPF", "CPUID= ", "ROL", "ROR", "RCL", "RCR", "SHL", "SAL", + "SHR","SHRD", "SHLD", "SAR", "DIV", "IDIV", "MUL", "IMUL_3", "IMUL= _2", "IMUL_1", "MOVS", "CMPS", "SCAS", + "LODS", "STOS", "BSWAP", "XCHG", "RDTSC", "RDMSR", "WRMSR", "ENTER= ", "LEAVE", "BT", "BTS", "BTC", "BTR", "BSF", + "BSR", "IRET", "INT", "POPA", "PUSHA", "CWD", "CBW", "DAS", "AAD",= "AAM", "AAS", "LOOP", "SLDT", "STR", "LLDT", + "LTR", "VERR", "VERW", "SAHF", "LAHF", "WBINVD", "LDS", "LSS", "LE= S", "LGS", "LFS", "CMC", "XLAT", "NOP", "CMOV", + "CLTS", "XADD", "HLT", "CMPXCHG8B", "CMPXCHG", "POPCNT", + "FNINIT", "FLD", "FLDxx", "FNSTCW", "FNSTSW", "FNSETPM", "FSAVE", = "FRSTOR", "FXSAVE", "FXRSTOR", "FDIV", "FMUL", + "FSUB", "FADD", "EMMS", "MFENCE", "SFENCE", "LFENCE", "PREFETCH", = "FST", "FABS", "FUCOM", "FUCOMI", "FLDCW", + "FXCH", "FCHS", "FCMOV", "FRNDINT", "FXAM", "LAST"}; + return cmds[cmd]; +} + +addr_t decode_linear_addr(struct CPUState *cpu, struct x86_decode *decode,= addr_t addr, x86_reg_segment seg) +{ + switch (decode->segment_override) { + case PREFIX_CS_SEG_OVEERIDE: + seg =3D REG_SEG_CS; + break; + case PREFIX_SS_SEG_OVEERIDE: + seg =3D REG_SEG_SS; + break; + case PREFIX_DS_SEG_OVEERIDE: + seg =3D REG_SEG_DS; + break; + case PREFIX_ES_SEG_OVEERIDE: + seg =3D REG_SEG_ES; + break; + case PREFIX_FS_SEG_OVEERIDE: + seg =3D REG_SEG_FS; + break; + case PREFIX_GS_SEG_OVEERIDE: + seg =3D REG_SEG_GS; + break; + default: + break; + } + return linear_addr_size(cpu, addr, decode->addressing_size, seg); +} diff --git a/target/i386/hvf-utils/x86_decode.h b/target/i386/hvf-utils/x86= _decode.h new file mode 100644 index 0000000000..3a22d7d1a5 --- /dev/null +++ b/target/i386/hvf-utils/x86_decode.h @@ -0,0 +1,314 @@ +/* + * Copyright (C) 2016 Veertu Inc, + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 or + * (at your option) version 3 of the License. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see . 
+ */ + +#pragma once + +#include +#include +#include +#include +#include "qemu-common.h" +#include "x86.h" + +typedef enum x86_prefix { + // group 1 + PREFIX_LOCK =3D 0xf0, + PREFIX_REPN =3D 0xf2, + PREFIX_REP =3D 0xf3, + // group 2 + PREFIX_CS_SEG_OVEERIDE =3D 0x2e, + PREFIX_SS_SEG_OVEERIDE =3D 0x36, + PREFIX_DS_SEG_OVEERIDE =3D 0x3e, + PREFIX_ES_SEG_OVEERIDE =3D 0x26, + PREFIX_FS_SEG_OVEERIDE =3D 0x64, + PREFIX_GS_SEG_OVEERIDE =3D 0x65, + // group 3 + PREFIX_OP_SIZE_OVERRIDE =3D 0x66, + // group 4 + PREFIX_ADDR_SIZE_OVERRIDE =3D 0x67, + + PREFIX_REX =3D 0x40, +} x86_prefix; + +enum x86_decode_cmd { + X86_DECODE_CMD_INVL =3D 0, + =20 + X86_DECODE_CMD_PUSH, + X86_DECODE_CMD_PUSH_SEG, + X86_DECODE_CMD_POP, + X86_DECODE_CMD_POP_SEG, + X86_DECODE_CMD_MOV, + X86_DECODE_CMD_MOVSX, + X86_DECODE_CMD_MOVZX, + X86_DECODE_CMD_CALL_NEAR, + X86_DECODE_CMD_CALL_NEAR_ABS_INDIRECT, + X86_DECODE_CMD_CALL_FAR_ABS_INDIRECT, + X86_DECODE_CMD_CALL_FAR, + X86_DECODE_RET_NEAR, + X86_DECODE_RET_FAR, + X86_DECODE_CMD_ADD, + X86_DECODE_CMD_OR, + X86_DECODE_CMD_ADC, + X86_DECODE_CMD_SBB, + X86_DECODE_CMD_AND, + X86_DECODE_CMD_SUB, + X86_DECODE_CMD_XOR, + X86_DECODE_CMD_CMP, + X86_DECODE_CMD_INC, + X86_DECODE_CMD_DEC, + X86_DECODE_CMD_TST, + X86_DECODE_CMD_NOT, + X86_DECODE_CMD_NEG, + X86_DECODE_CMD_JMP_NEAR, + X86_DECODE_CMD_JMP_NEAR_ABS_INDIRECT, + X86_DECODE_CMD_JMP_FAR, + X86_DECODE_CMD_JMP_FAR_ABS_INDIRECT, + X86_DECODE_CMD_LEA, + X86_DECODE_CMD_JXX, + X86_DECODE_CMD_JCXZ, + X86_DECODE_CMD_SETXX, + X86_DECODE_CMD_MOV_TO_SEG, + X86_DECODE_CMD_MOV_FROM_SEG, + X86_DECODE_CMD_CLI, + X86_DECODE_CMD_STI, + X86_DECODE_CMD_CLD, + X86_DECODE_CMD_STD, + X86_DECODE_CMD_STC, + X86_DECODE_CMD_CLC, + X86_DECODE_CMD_OUT, + X86_DECODE_CMD_IN, + X86_DECODE_CMD_INS, + X86_DECODE_CMD_OUTS, + X86_DECODE_CMD_LIDT, + X86_DECODE_CMD_SIDT, + X86_DECODE_CMD_LGDT, + X86_DECODE_CMD_SGDT, + X86_DECODE_CMD_SMSW, + X86_DECODE_CMD_LMSW, + X86_DECODE_CMD_RDTSCP, + X86_DECODE_CMD_INVLPG, + X86_DECODE_CMD_MOV_TO_CR, + X86_DECODE_CMD_MOV_FROM_CR, + X86_DECODE_CMD_MOV_TO_DR, + X86_DECODE_CMD_MOV_FROM_DR, + X86_DECODE_CMD_PUSHF, + X86_DECODE_CMD_POPF, + X86_DECODE_CMD_CPUID, + X86_DECODE_CMD_ROL, + X86_DECODE_CMD_ROR, + X86_DECODE_CMD_RCL, + X86_DECODE_CMD_RCR, + X86_DECODE_CMD_SHL, + X86_DECODE_CMD_SAL, + X86_DECODE_CMD_SHR, + X86_DECODE_CMD_SHRD, + X86_DECODE_CMD_SHLD, + X86_DECODE_CMD_SAR, + X86_DECODE_CMD_DIV, + X86_DECODE_CMD_IDIV, + X86_DECODE_CMD_MUL, + X86_DECODE_CMD_IMUL_3, + X86_DECODE_CMD_IMUL_2, + X86_DECODE_CMD_IMUL_1, + X86_DECODE_CMD_MOVS, + X86_DECODE_CMD_CMPS, + X86_DECODE_CMD_SCAS, + X86_DECODE_CMD_LODS, + X86_DECODE_CMD_STOS, + X86_DECODE_CMD_BSWAP, + X86_DECODE_CMD_XCHG, + X86_DECODE_CMD_RDTSC, + X86_DECODE_CMD_RDMSR, + X86_DECODE_CMD_WRMSR, + X86_DECODE_CMD_ENTER, + X86_DECODE_CMD_LEAVE, + X86_DECODE_CMD_BT, + X86_DECODE_CMD_BTS, + X86_DECODE_CMD_BTC, + X86_DECODE_CMD_BTR, + X86_DECODE_CMD_BSF, + X86_DECODE_CMD_BSR, + X86_DECODE_CMD_IRET, + X86_DECODE_CMD_INT, + X86_DECODE_CMD_POPA, + X86_DECODE_CMD_PUSHA, + X86_DECODE_CMD_CWD, + X86_DECODE_CMD_CBW, + X86_DECODE_CMD_DAS, + X86_DECODE_CMD_AAD, + X86_DECODE_CMD_AAM, + X86_DECODE_CMD_AAS, + X86_DECODE_CMD_LOOP, + X86_DECODE_CMD_SLDT, + X86_DECODE_CMD_STR, + X86_DECODE_CMD_LLDT, + X86_DECODE_CMD_LTR, + X86_DECODE_CMD_VERR, + X86_DECODE_CMD_VERW, + X86_DECODE_CMD_SAHF, + X86_DECODE_CMD_LAHF, + X86_DECODE_CMD_WBINVD, + X86_DECODE_CMD_LDS, + X86_DECODE_CMD_LSS, + X86_DECODE_CMD_LES, + X86_DECODE_XMD_LGS, + X86_DECODE_CMD_LFS, + X86_DECODE_CMD_CMC, + X86_DECODE_CMD_XLAT, + 
X86_DECODE_CMD_NOP, + X86_DECODE_CMD_CMOV, + X86_DECODE_CMD_CLTS, + X86_DECODE_CMD_XADD, + X86_DECODE_CMD_HLT, + X86_DECODE_CMD_CMPXCHG8B, + X86_DECODE_CMD_CMPXCHG, + X86_DECODE_CMD_POPCNT, + =20 + X86_DECODE_CMD_FNINIT, + X86_DECODE_CMD_FLD, + X86_DECODE_CMD_FLDxx, + X86_DECODE_CMD_FNSTCW, + X86_DECODE_CMD_FNSTSW, + X86_DECODE_CMD_FNSETPM, + X86_DECODE_CMD_FSAVE, + X86_DECODE_CMD_FRSTOR, + X86_DECODE_CMD_FXSAVE, + X86_DECODE_CMD_FXRSTOR, + X86_DECODE_CMD_FDIV, + X86_DECODE_CMD_FMUL, + X86_DECODE_CMD_FSUB, + X86_DECODE_CMD_FADD, + X86_DECODE_CMD_EMMS, + X86_DECODE_CMD_MFENCE, + X86_DECODE_CMD_SFENCE, + X86_DECODE_CMD_LFENCE, + X86_DECODE_CMD_PREFETCH, + X86_DECODE_CMD_CLFLUSH, + X86_DECODE_CMD_FST, + X86_DECODE_CMD_FABS, + X86_DECODE_CMD_FUCOM, + X86_DECODE_CMD_FUCOMI, + X86_DECODE_CMD_FLDCW, + X86_DECODE_CMD_FXCH, + X86_DECODE_CMD_FCHS, + X86_DECODE_CMD_FCMOV, + X86_DECODE_CMD_FRNDINT, + X86_DECODE_CMD_FXAM, + + X86_DECODE_CMD_LAST, +}; + +const char *decode_cmd_to_string(enum x86_decode_cmd cmd); + +typedef struct x86_modrm { + union { + uint8_t modrm; + struct { + uint8_t rm:3; + uint8_t reg:3; + uint8_t mod:2; + }; + }; +} __attribute__ ((__packed__)) x86_modrm; + +typedef struct x86_sib { + union { + uint8_t sib; + struct { + uint8_t base:3; + uint8_t index:3; + uint8_t scale:2; + }; + }; +} __attribute__ ((__packed__)) x86_sib; + +typedef struct x86_rex { + union { + uint8_t rex; + struct { + uint8_t b:1; + uint8_t x:1; + uint8_t r:1; + uint8_t w:1; + uint8_t unused:4; + }; + }; +} __attribute__ ((__packed__)) x86_rex; + +typedef enum x86_var_type { + X86_VAR_IMMEDIATE, + X86_VAR_OFFSET, + X86_VAR_REG, + X86_VAR_RM, + + // for floating point computations + X87_VAR_REG, + X87_VAR_FLOATP, + X87_VAR_INTP, + X87_VAR_BYTEP, +} x86_var_type; + +typedef struct x86_decode_op { + enum x86_var_type type; + int size; + + int reg; + addr_t val; + + addr_t ptr; +} x86_decode_op; + +typedef struct x86_decode { + int len; + uint8_t opcode[4]; + uint8_t opcode_len; + enum x86_decode_cmd cmd; + int addressing_size; + int operand_size; + int lock; + int rep; + int op_size_override; + int addr_size_override; + int segment_override; + int control_change_inst; + bool fwait; + bool fpop_stack; + bool frev; + + uint32_t displacement; + uint8_t displacement_size; + struct x86_rex rex; + bool is_modrm; + bool sib_present; + struct x86_sib sib; + struct x86_modrm modrm; + struct x86_decode_op op[4]; + bool is_fpu; + addr_t flags_mask; + +} x86_decode; + +uint64_t sign(uint64_t val, int size); + +uint32_t decode_instruction(CPUState *cpu, struct x86_decode *decode); + +addr_t get_reg_ref(CPUState *cpu, int reg, int is_extended, int size); +addr_t get_reg_val(CPUState *cpu, int reg, int is_extended, int size); +void calc_modrm_operand(CPUState *cpu, struct x86_decode *decode, struct x= 86_decode_op *op); +addr_t decode_linear_addr(struct CPUState *cpu, struct x86_decode *decode,= addr_t addr, x86_reg_segment seg); + +void init_decoder(CPUState* cpu); diff --git a/target/i386/hvf-utils/x86_descr.c b/target/i386/hvf-utils/x86_= descr.c new file mode 100644 index 0000000000..c3b089aaa8 --- /dev/null +++ b/target/i386/hvf-utils/x86_descr.c @@ -0,0 +1,124 @@ +/* + * Copyright (C) 2016 Veertu Inc, + * Copyright (C) 2017 Google Inc, + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 or + * (at your option) version 3 of the License. 
+ * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see . + */ + +#include "qemu/osdep.h" + +#include "vmx.h" +#include "x86_descr.h" + +#define VMX_SEGMENT_FIELD(seg) \ + [REG_SEG_##seg] =3D { \ + .selector =3D VMCS_GUEST_##seg##_SELECTOR, \ + .base =3D VMCS_GUEST_##seg##_BASE, \ + .limit =3D VMCS_GUEST_##seg##_LIMIT, \ + .ar_bytes =3D VMCS_GUEST_##seg##_ACCESS_RIGHTS, \ +} + +static const struct vmx_segment_field { + int selector; + int base; + int limit; + int ar_bytes; +} vmx_segment_fields[] =3D { + VMX_SEGMENT_FIELD(ES), + VMX_SEGMENT_FIELD(CS), + VMX_SEGMENT_FIELD(SS), + VMX_SEGMENT_FIELD(DS), + VMX_SEGMENT_FIELD(FS), + VMX_SEGMENT_FIELD(GS), + VMX_SEGMENT_FIELD(LDTR), + VMX_SEGMENT_FIELD(TR), +}; + +uint32_t vmx_read_segment_limit(CPUState *cpu, x86_reg_segment seg) +{ + return (uint32_t)rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].limit); +} + +uint32_t vmx_read_segment_ar(CPUState *cpu, x86_reg_segment seg) +{ + return (uint32_t)rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].ar_bytes); +} + +uint64_t vmx_read_segment_base(CPUState *cpu, x86_reg_segment seg) +{ + return rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].base); +} + +x68_segment_selector vmx_read_segment_selector(CPUState *cpu, x86_reg_segm= ent seg) +{ + x68_segment_selector sel; + sel.sel =3D rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector); + return sel; +} + +void vmx_write_segment_selector(struct CPUState *cpu, x68_segment_selector= selector, x86_reg_segment seg) +{ + wvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector, selector.sel); +} + +void vmx_read_segment_descriptor(struct CPUState *cpu, struct vmx_segment = *desc, x86_reg_segment seg) +{ + desc->sel =3D rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector); + desc->base =3D rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].base); + desc->limit =3D rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].limit); + desc->ar =3D rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].ar_bytes); +} + +void vmx_write_segment_descriptor(CPUState *cpu, struct vmx_segment *desc,= x86_reg_segment seg) +{ + const struct vmx_segment_field *sf =3D &vmx_segment_fields[seg]; + + wvmcs(cpu->hvf_fd, sf->base, desc->base); + wvmcs(cpu->hvf_fd, sf->limit, desc->limit); + wvmcs(cpu->hvf_fd, sf->selector, desc->sel); + wvmcs(cpu->hvf_fd, sf->ar_bytes, desc->ar); +} + +void x86_segment_descriptor_to_vmx(struct CPUState *cpu, x68_segment_selec= tor selector, struct x86_segment_descriptor *desc, struct vmx_segment *vmx_= desc) +{ + vmx_desc->sel =3D selector.sel; + vmx_desc->base =3D x86_segment_base(desc); + vmx_desc->limit =3D x86_segment_limit(desc); + + vmx_desc->ar =3D (selector.sel ? 
0 : 1) << 16 | + desc->g << 15 | + desc->db << 14 | + desc->l << 13 | + desc->avl << 12 | + desc->p << 7 | + desc->dpl << 5 | + desc->s << 4 | + desc->type; +} + +void vmx_segment_to_x86_descriptor(struct CPUState *cpu, struct vmx_segmen= t *vmx_desc, struct x86_segment_descriptor *desc) +{ + x86_set_segment_limit(desc, vmx_desc->limit); + x86_set_segment_base(desc, vmx_desc->base); + =20 + desc->type =3D vmx_desc->ar & 15; + desc->s =3D (vmx_desc->ar >> 4) & 1; + desc->dpl =3D (vmx_desc->ar >> 5) & 3; + desc->p =3D (vmx_desc->ar >> 7) & 1; + desc->avl =3D (vmx_desc->ar >> 12) & 1; + desc->l =3D (vmx_desc->ar >> 13) & 1; + desc->db =3D (vmx_desc->ar >> 14) & 1; + desc->g =3D (vmx_desc->ar >> 15) & 1; +} + diff --git a/target/i386/hvf-utils/x86_descr.h b/target/i386/hvf-utils/x86_= descr.h new file mode 100644 index 0000000000..78fb1bc420 --- /dev/null +++ b/target/i386/hvf-utils/x86_descr.h @@ -0,0 +1,40 @@ +/* + * Copyright (C) 2016 Veertu Inc, + * Copyright (C) 2017 Google Inc, + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 or + * (at your option) version 3 of the License. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see . + */ + +#pragma once + +#include "x86.h" + +typedef struct vmx_segment { + uint16_t sel; + uint64_t base; + uint64_t limit; + uint64_t ar; +} vmx_segment; + +// deal with vmstate descriptors +void vmx_read_segment_descriptor(struct CPUState *cpu, struct vmx_segment = *desc, x86_reg_segment seg); +void vmx_write_segment_descriptor(CPUState *cpu, struct vmx_segment *desc,= x86_reg_segment seg); + +x68_segment_selector vmx_read_segment_selector(struct CPUState *cpu, x86_r= eg_segment seg); +void vmx_write_segment_selector(struct CPUState *cpu, x68_segment_selector= selector, x86_reg_segment seg); + +uint64_t vmx_read_segment_base(struct CPUState *cpu, x86_reg_segment seg); +void vmx_write_segment_base(struct CPUState *cpu, x86_reg_segment seg, uin= t64_t base); + +void x86_segment_descriptor_to_vmx(struct CPUState *cpu, x68_segment_selec= tor selector, struct x86_segment_descriptor *desc, struct vmx_segment *vmx_= desc); diff --git a/target/i386/hvf-utils/x86_emu.c b/target/i386/hvf-utils/x86_em= u.c new file mode 100644 index 0000000000..8b5efc76f0 --- /dev/null +++ b/target/i386/hvf-utils/x86_emu.c @@ -0,0 +1,1466 @@ +/* + * Copyright (C) 2016 Veertu Inc, + * Copyright (C) 2017 Google Inc, + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 or + * (at your option) version 3 of the License. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see . 
+ */ + +///////////////////////////////////////////////////////////////////////// +// +// Copyright (C) 2001-2012 The Bochs Project +// +// This library is free software; you can redistribute it and/or +// modify it under the terms of the GNU Lesser General Public +// License as published by the Free Software Foundation; either +// version 2 of the License, or (at your option) any later version. +// +// This library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +// Lesser General Public License for more details. +// +// You should have received a copy of the GNU Lesser General Public +// License along with this library; if not, write to the Free Software +// Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA B 02110-1301= USA +///////////////////////////////////////////////////////////////////////// + +#include "qemu/osdep.h" + +#include "qemu-common.h" +#include "x86_decode.h" +#include "x86.h" +#include "x86_emu.h" +#include "x86_mmu.h" +#include "vmcs.h" +#include "vmx.h" + +static void print_debug(struct CPUState *cpu); +void hvf_handle_io(struct CPUState *cpu, uint16_t port, void *data, int di= rection, int size, uint32_t count); + +#define EXEC_2OP_LOGIC_CMD(cpu, decode, cmd, FLAGS_FUNC, save_res) \ +{ \ + fetch_operands(cpu, decode, 2, true, true, false); \ + switch (decode->operand_size) { \ + case 1: \ + { \ + uint8_t v1 =3D (uint8_t)decode->op[0].val; \ + uint8_t v2 =3D (uint8_t)decode->op[1].val; \ + uint8_t diff =3D v1 cmd v2; \ + if (save_res) \ + write_val_ext(cpu, decode->op[0].ptr, diff, 1); \ + FLAGS_FUNC##_8(diff); \ + break; \ + } \ + case 2: \ + { \ + uint16_t v1 =3D (uint16_t)decode->op[0].val; \ + uint16_t v2 =3D (uint16_t)decode->op[1].val; \ + uint16_t diff =3D v1 cmd v2; \ + if (save_res) \ + write_val_ext(cpu, decode->op[0].ptr, diff, 2); \ + FLAGS_FUNC##_16(diff); \ + break; \ + } \ + case 4: \ + { \ + uint32_t v1 =3D (uint32_t)decode->op[0].val; \ + uint32_t v2 =3D (uint32_t)decode->op[1].val; \ + uint32_t diff =3D v1 cmd v2; \ + if (save_res) \ + write_val_ext(cpu, decode->op[0].ptr, diff, 4); \ + FLAGS_FUNC##_32(diff); \ + break; \ + } \ + default: \ + VM_PANIC("bad size\n"); \ + } \ +} \ + + +#define EXEC_2OP_ARITH_CMD(cpu, decode, cmd, FLAGS_FUNC, save_res) \ +{ \ + fetch_operands(cpu, decode, 2, true, true, false); \ + switch (decode->operand_size) { \ + case 1: \ + { \ + uint8_t v1 =3D (uint8_t)decode->op[0].val; \ + uint8_t v2 =3D (uint8_t)decode->op[1].val; \ + uint8_t diff =3D v1 cmd v2; \ + if (save_res) \ + write_val_ext(cpu, decode->op[0].ptr, diff, 1); \ + FLAGS_FUNC##_8(v1, v2, diff); \ + break; \ + } \ + case 2: \ + { \ + uint16_t v1 =3D (uint16_t)decode->op[0].val; \ + uint16_t v2 =3D (uint16_t)decode->op[1].val; \ + uint16_t diff =3D v1 cmd v2; \ + if (save_res) \ + write_val_ext(cpu, decode->op[0].ptr, diff, 2); \ + FLAGS_FUNC##_16(v1, v2, diff); \ + break; \ + } \ + case 4: \ + { \ + uint32_t v1 =3D (uint32_t)decode->op[0].val; \ + uint32_t v2 =3D (uint32_t)decode->op[1].val; \ + uint32_t diff =3D v1 cmd v2; \ + if (save_res) \ + write_val_ext(cpu, decode->op[0].ptr, diff, 4); \ + FLAGS_FUNC##_32(v1, v2, diff); \ + break; \ + } \ + default: \ + VM_PANIC("bad size\n"); \ + } \ +} + +addr_t read_reg(struct CPUState* cpu, int reg, int size) +{ + switch (size) { + case 1: + return cpu->hvf_x86->regs[reg].lx; + case 2: + return cpu->hvf_x86->regs[reg].rx; + case 4: + return cpu->hvf_x86->regs[reg].erx; 
+ case 8: + return cpu->hvf_x86->regs[reg].rrx; + default: + VM_PANIC_ON("read_reg size"); + } + return 0; +} + +void write_reg(struct CPUState* cpu, int reg, addr_t val, int size) +{ + switch (size) { + case 1: + cpu->hvf_x86->regs[reg].lx =3D val; + break; + case 2: + cpu->hvf_x86->regs[reg].rx =3D val; + break; + case 4: + cpu->hvf_x86->regs[reg].rrx =3D (uint32_t)val; + break; + case 8: + cpu->hvf_x86->regs[reg].rrx =3D val; + break; + default: + VM_PANIC_ON("write_reg size"); + } +} + +addr_t read_val_from_reg(addr_t reg_ptr, int size) +{ + addr_t val; + =20 + switch (size) { + case 1: + val =3D *(uint8_t*)reg_ptr; + break; + case 2: + val =3D *(uint16_t*)reg_ptr; + break; + case 4: + val =3D *(uint32_t*)reg_ptr; + break; + case 8: + val =3D *(uint64_t*)reg_ptr; + break; + default: + VM_PANIC_ON_EX(1, "read_val: Unknown size %d\n", size); + break; + } + return val; +} + +void write_val_to_reg(addr_t reg_ptr, addr_t val, int size) +{ + switch (size) { + case 1: + *(uint8_t*)reg_ptr =3D val; + break; + case 2: + *(uint16_t*)reg_ptr =3D val; + break; + case 4: + *(uint64_t*)reg_ptr =3D (uint32_t)val; + break; + case 8: + *(uint64_t*)reg_ptr =3D val; + break; + default: + VM_PANIC("write_val: Unknown size\n"); + break; + } +} + +static bool is_host_reg(struct CPUState* cpu, addr_t ptr) { + return (ptr > (addr_t)cpu && ptr < (addr_t)cpu + sizeof(struct CPUStat= e)) || + (ptr > (addr_t)cpu->hvf_x86 && ptr < (addr_t)(cpu->hvf_x86 + si= zeof(struct hvf_x86_state))); +} + +void write_val_ext(struct CPUState* cpu, addr_t ptr, addr_t val, int size) +{ + if (is_host_reg(cpu, ptr)) { + write_val_to_reg(ptr, val, size); + return; + } + vmx_write_mem(cpu, ptr, &val, size); +} + +uint8_t *read_mmio(struct CPUState* cpu, addr_t ptr, int bytes) +{ + vmx_read_mem(cpu, cpu->hvf_x86->mmio_buf, ptr, bytes); + return cpu->hvf_x86->mmio_buf; +} + +addr_t read_val_ext(struct CPUState* cpu, addr_t ptr, int size) +{ + addr_t val; + uint8_t *mmio_ptr; + =20 + if (is_host_reg(cpu, ptr)) { + return read_val_from_reg(ptr, size); + } + =20 + mmio_ptr =3D read_mmio(cpu, ptr, size); + switch (size) { + case 1: + val =3D *(uint8_t*)mmio_ptr; + break; + case 2: + val =3D *(uint16_t*)mmio_ptr; + break; + case 4: + val =3D *(uint32_t*)mmio_ptr; + break; + case 8: + val =3D *(uint64_t*)mmio_ptr; + break; + default: + VM_PANIC("bad size\n"); + break; + } + return val; +} + +static void fetch_operands(struct CPUState *cpu, struct x86_decode *decode= , int n, bool val_op0, bool val_op1, bool val_op2) +{ + int i; + bool calc_val[3] =3D {val_op0, val_op1, val_op2}; + + for (i =3D 0; i < n; i++) { + switch (decode->op[i].type) { + case X86_VAR_IMMEDIATE: + break; + case X86_VAR_REG: + VM_PANIC_ON(!decode->op[i].ptr); + if (calc_val[i]) + decode->op[i].val =3D read_val_from_reg(decode->op[i].= ptr, decode->operand_size); + break; + case X86_VAR_RM: + calc_modrm_operand(cpu, decode, &decode->op[i]); + if (calc_val[i]) + decode->op[i].val =3D read_val_ext(cpu, decode->op[i].= ptr, decode->operand_size); + break; + case X86_VAR_OFFSET: + decode->op[i].ptr =3D decode_linear_addr(cpu, decode, deco= de->op[i].ptr, REG_SEG_DS); + if (calc_val[i]) + decode->op[i].val =3D read_val_ext(cpu, decode->op[i].= ptr, decode->operand_size); + break; + default: + break; + } + } +} + +static void exec_mov(struct CPUState *cpu, struct x86_decode *decode) +{ + fetch_operands(cpu, decode, 2, false, true, false); + write_val_ext(cpu, decode->op[0].ptr, decode->op[1].val, decode->opera= nd_size); + + RIP(cpu) +=3D decode->len; +} + +static void 
exec_add(struct CPUState *cpu, struct x86_decode *decode) +{ + EXEC_2OP_ARITH_CMD(cpu, decode, +, SET_FLAGS_OSZAPC_ADD, true); + RIP(cpu) +=3D decode->len; +} + +static void exec_or(struct CPUState *cpu, struct x86_decode *decode) +{ + EXEC_2OP_LOGIC_CMD(cpu, decode, |, SET_FLAGS_OSZAPC_LOGIC, true); + RIP(cpu) +=3D decode->len; +} + +static void exec_adc(struct CPUState *cpu, struct x86_decode *decode) +{ + EXEC_2OP_ARITH_CMD(cpu, decode, +get_CF(cpu)+, SET_FLAGS_OSZAPC_ADD, t= rue); + RIP(cpu) +=3D decode->len; +} + +static void exec_sbb(struct CPUState *cpu, struct x86_decode *decode) +{ + EXEC_2OP_ARITH_CMD(cpu, decode, -get_CF(cpu)-, SET_FLAGS_OSZAPC_SUB, t= rue); + RIP(cpu) +=3D decode->len; +} + +static void exec_and(struct CPUState *cpu, struct x86_decode *decode) +{ + EXEC_2OP_LOGIC_CMD(cpu, decode, &, SET_FLAGS_OSZAPC_LOGIC, true); + RIP(cpu) +=3D decode->len; +} + +static void exec_sub(struct CPUState *cpu, struct x86_decode *decode) +{ + EXEC_2OP_ARITH_CMD(cpu, decode, -, SET_FLAGS_OSZAPC_SUB, true); + RIP(cpu) +=3D decode->len; +} + +static void exec_xor(struct CPUState *cpu, struct x86_decode *decode) +{ + EXEC_2OP_LOGIC_CMD(cpu, decode, ^, SET_FLAGS_OSZAPC_LOGIC, true); + RIP(cpu) +=3D decode->len; +} + +static void exec_neg(struct CPUState *cpu, struct x86_decode *decode) +{ + //EXEC_2OP_ARITH_CMD(cpu, decode, -, SET_FLAGS_OSZAPC_SUB, false); + int32_t val; + fetch_operands(cpu, decode, 2, true, true, false); + + val =3D 0 - sign(decode->op[1].val, decode->operand_size); + write_val_ext(cpu, decode->op[1].ptr, val, decode->operand_size); + + if (4 =3D=3D decode->operand_size) { + SET_FLAGS_OSZAPC_SUB_32(0, 0 - val, val); + } + else if (2 =3D=3D decode->operand_size) { + SET_FLAGS_OSZAPC_SUB_16(0, 0 - val, val); + } + else if (1 =3D=3D decode->operand_size) { + SET_FLAGS_OSZAPC_SUB_8(0, 0 - val, val); + } else { + VM_PANIC("bad op size\n"); + } + + //lflags_to_rflags(cpu); + RIP(cpu) +=3D decode->len; +} + +static void exec_cmp(struct CPUState *cpu, struct x86_decode *decode) +{ + EXEC_2OP_ARITH_CMD(cpu, decode, -, SET_FLAGS_OSZAPC_SUB, false); + RIP(cpu) +=3D decode->len; +} + +static void exec_inc(struct CPUState *cpu, struct x86_decode *decode) +{ + decode->op[1].type =3D X86_VAR_IMMEDIATE; + decode->op[1].val =3D 0; + + EXEC_2OP_ARITH_CMD(cpu, decode, +1+, SET_FLAGS_OSZAP_ADD, true); + + RIP(cpu) +=3D decode->len; +} + +static void exec_dec(struct CPUState *cpu, struct x86_decode *decode) +{ + decode->op[1].type =3D X86_VAR_IMMEDIATE; + decode->op[1].val =3D 0; + + EXEC_2OP_ARITH_CMD(cpu, decode, -1-, SET_FLAGS_OSZAP_SUB, true); + RIP(cpu) +=3D decode->len; +} + +static void exec_tst(struct CPUState *cpu, struct x86_decode *decode) +{ + EXEC_2OP_LOGIC_CMD(cpu, decode, &, SET_FLAGS_OSZAPC_LOGIC, false); + RIP(cpu) +=3D decode->len; +} + +static void exec_not(struct CPUState *cpu, struct x86_decode *decode) +{ + fetch_operands(cpu, decode, 1, true, false, false); + + write_val_ext(cpu, decode->op[0].ptr, ~decode->op[0].val, decode->oper= and_size); + RIP(cpu) +=3D decode->len; +} + +void exec_movzx(struct CPUState *cpu, struct x86_decode *decode) +{ + int src_op_size; + int op_size =3D decode->operand_size; + + fetch_operands(cpu, decode, 1, false, false, false); + + if (0xb6 =3D=3D decode->opcode[1]) + src_op_size =3D 1; + else + src_op_size =3D 2; + decode->operand_size =3D src_op_size; + calc_modrm_operand(cpu, decode, &decode->op[1]); + decode->op[1].val =3D read_val_ext(cpu, decode->op[1].ptr, src_op_size= ); + write_val_ext(cpu, decode->op[0].ptr, decode->op[1].val, 
op_size); + + RIP(cpu) +=3D decode->len; +} + +static void exec_out(struct CPUState *cpu, struct x86_decode *decode) +{ + switch (decode->opcode[0]) { + case 0xe6: + hvf_handle_io(cpu, decode->op[0].val, &AL(cpu), 1, 1, 1); + break; + case 0xe7: + hvf_handle_io(cpu, decode->op[0].val, &RAX(cpu), 1, decode->op= erand_size, 1); + break; + case 0xee: + hvf_handle_io(cpu, DX(cpu), &AL(cpu), 1, 1, 1); + break; + case 0xef: + hvf_handle_io(cpu, DX(cpu), &RAX(cpu), 1, decode->operand_size= , 1); + break; + default: + VM_PANIC("Bad out opcode\n"); + break; + } + RIP(cpu) +=3D decode->len; +} + +static void exec_in(struct CPUState *cpu, struct x86_decode *decode) +{ + addr_t val =3D 0; + switch (decode->opcode[0]) { + case 0xe4: + hvf_handle_io(cpu, decode->op[0].val, &AL(cpu), 0, 1, 1); + break; + case 0xe5: + hvf_handle_io(cpu, decode->op[0].val, &val, 0, decode->operand= _size, 1); + if (decode->operand_size =3D=3D 2) + AX(cpu) =3D val; + else + RAX(cpu) =3D (uint32_t)val; + break; + case 0xec: + hvf_handle_io(cpu, DX(cpu), &AL(cpu), 0, 1, 1); + break; + case 0xed: + hvf_handle_io(cpu, DX(cpu), &val, 0, decode->operand_size, 1); + if (decode->operand_size =3D=3D 2) + AX(cpu) =3D val; + else + RAX(cpu) =3D (uint32_t)val; + + break; + default: + VM_PANIC("Bad in opcode\n"); + break; + } + + RIP(cpu) +=3D decode->len; +} + +static inline void string_increment_reg(struct CPUState * cpu, int reg, st= ruct x86_decode *decode) +{ + addr_t val =3D read_reg(cpu, reg, decode->addressing_size); + if (cpu->hvf_x86->rflags.df) + val -=3D decode->operand_size; + else + val +=3D decode->operand_size; + write_reg(cpu, reg, val, decode->addressing_size); +} + +static inline void string_rep(struct CPUState * cpu, struct x86_decode *de= code, void (*func)(struct CPUState *cpu, struct x86_decode *ins), int rep) +{ + addr_t rcx =3D read_reg(cpu, REG_RCX, decode->addressing_size); + while (rcx--) { + func(cpu, decode); + write_reg(cpu, REG_RCX, rcx, decode->addressing_size); + if ((PREFIX_REP =3D=3D rep) && !get_ZF(cpu)) + break; + if ((PREFIX_REPN =3D=3D rep) && get_ZF(cpu)) + break; + } +} + +static void exec_ins_single(struct CPUState *cpu, struct x86_decode *decod= e) +{ + addr_t addr =3D linear_addr_size(cpu, RDI(cpu), decode->addressing_siz= e, REG_SEG_ES); + + hvf_handle_io(cpu, DX(cpu), cpu->hvf_x86->mmio_buf, 0, decode->operand= _size, 1); + vmx_write_mem(cpu, addr, cpu->hvf_x86->mmio_buf, decode->operand_size); + + string_increment_reg(cpu, REG_RDI, decode); +} + +static void exec_ins(struct CPUState *cpu, struct x86_decode *decode) +{ + if (decode->rep) + string_rep(cpu, decode, exec_ins_single, 0); + else + exec_ins_single(cpu, decode); + + RIP(cpu) +=3D decode->len; +} + +static void exec_outs_single(struct CPUState *cpu, struct x86_decode *deco= de) +{ + addr_t addr =3D decode_linear_addr(cpu, decode, RSI(cpu), REG_SEG_DS); + + vmx_read_mem(cpu, cpu->hvf_x86->mmio_buf, addr, decode->operand_size); + hvf_handle_io(cpu, DX(cpu), cpu->hvf_x86->mmio_buf, 1, decode->operand= _size, 1); + + string_increment_reg(cpu, REG_RSI, decode); +} + +static void exec_outs(struct CPUState *cpu, struct x86_decode *decode) +{ + if (decode->rep) + string_rep(cpu, decode, exec_outs_single, 0); + else + exec_outs_single(cpu, decode); + =20 + RIP(cpu) +=3D decode->len; +} + +static void exec_movs_single(struct CPUState *cpu, struct x86_decode *deco= de) +{ + addr_t src_addr; + addr_t dst_addr; + addr_t val; + =20 + src_addr =3D decode_linear_addr(cpu, decode, RSI(cpu), REG_SEG_DS); + dst_addr =3D linear_addr_size(cpu, 
RDI(cpu), decode->addressing_size, = REG_SEG_ES); + =20 + val =3D read_val_ext(cpu, src_addr, decode->operand_size); + write_val_ext(cpu, dst_addr, val, decode->operand_size); + + string_increment_reg(cpu, REG_RSI, decode); + string_increment_reg(cpu, REG_RDI, decode); +} + +static void exec_movs(struct CPUState *cpu, struct x86_decode *decode) +{ + if (decode->rep) { + string_rep(cpu, decode, exec_movs_single, 0); + } + else + exec_movs_single(cpu, decode); + + RIP(cpu) +=3D decode->len; +} + +static void exec_cmps_single(struct CPUState *cpu, struct x86_decode *deco= de) +{ + addr_t src_addr; + addr_t dst_addr; + + src_addr =3D decode_linear_addr(cpu, decode, RSI(cpu), REG_SEG_DS); + dst_addr =3D linear_addr_size(cpu, RDI(cpu), decode->addressing_size, = REG_SEG_ES); + + decode->op[0].type =3D X86_VAR_IMMEDIATE; + decode->op[0].val =3D read_val_ext(cpu, src_addr, decode->operand_size= ); + decode->op[1].type =3D X86_VAR_IMMEDIATE; + decode->op[1].val =3D read_val_ext(cpu, dst_addr, decode->operand_size= ); + + EXEC_2OP_ARITH_CMD(cpu, decode, -, SET_FLAGS_OSZAPC_SUB, false); + + string_increment_reg(cpu, REG_RSI, decode); + string_increment_reg(cpu, REG_RDI, decode); +} + +static void exec_cmps(struct CPUState *cpu, struct x86_decode *decode) +{ + if (decode->rep) { + string_rep(cpu, decode, exec_cmps_single, decode->rep); + } + else + exec_cmps_single(cpu, decode); + RIP(cpu) +=3D decode->len; +} + + +static void exec_stos_single(struct CPUState *cpu, struct x86_decode *deco= de) +{ + addr_t addr; + addr_t val; + + addr =3D linear_addr_size(cpu, RDI(cpu), decode->addressing_size, REG_= SEG_ES); + val =3D read_reg(cpu, REG_RAX, decode->operand_size); + vmx_write_mem(cpu, addr, &val, decode->operand_size); + + string_increment_reg(cpu, REG_RDI, decode); +} + + +static void exec_stos(struct CPUState *cpu, struct x86_decode *decode) +{ + if (decode->rep) { + string_rep(cpu, decode, exec_stos_single, 0); + } + else + exec_stos_single(cpu, decode); + + RIP(cpu) +=3D decode->len; +} + +static void exec_scas_single(struct CPUState *cpu, struct x86_decode *deco= de) +{ + addr_t addr; + =20 + addr =3D linear_addr_size(cpu, RDI(cpu), decode->addressing_size, REG_= SEG_ES); + decode->op[1].type =3D X86_VAR_IMMEDIATE; + vmx_read_mem(cpu, &decode->op[1].val, addr, decode->operand_size); + + EXEC_2OP_ARITH_CMD(cpu, decode, -, SET_FLAGS_OSZAPC_SUB, false); + string_increment_reg(cpu, REG_RDI, decode); +} + +static void exec_scas(struct CPUState *cpu, struct x86_decode *decode) +{ + decode->op[0].type =3D X86_VAR_REG; + decode->op[0].reg =3D REG_RAX; + if (decode->rep) { + string_rep(cpu, decode, exec_scas_single, decode->rep); + } + else + exec_scas_single(cpu, decode); + + RIP(cpu) +=3D decode->len; +} + +static void exec_lods_single(struct CPUState *cpu, struct x86_decode *deco= de) +{ + addr_t addr; + addr_t val =3D 0; + =20 + addr =3D decode_linear_addr(cpu, decode, RSI(cpu), REG_SEG_DS); + vmx_read_mem(cpu, &val, addr, decode->operand_size); + write_reg(cpu, REG_RAX, val, decode->operand_size); + + string_increment_reg(cpu, REG_RSI, decode); +} + +static void exec_lods(struct CPUState *cpu, struct x86_decode *decode) +{ + if (decode->rep) { + string_rep(cpu, decode, exec_lods_single, 0); + } + else + exec_lods_single(cpu, decode); + + RIP(cpu) +=3D decode->len; +} + +#define MSR_IA32_UCODE_REV 0x00000017 + +void simulate_rdmsr(struct CPUState *cpu) +{ + X86CPU *x86_cpu =3D X86_CPU(cpu); + CPUX86State *env =3D &x86_cpu->env; + uint32_t msr =3D ECX(cpu); + uint64_t val =3D 0; + + switch (msr) { + 
case MSR_IA32_TSC: + val =3D rdtscp() + rvmcs(cpu->hvf_fd, VMCS_TSC_OFFSET); + break; + case MSR_IA32_APICBASE: + val =3D cpu_get_apic_base(X86_CPU(cpu)->apic_state); + break; + case MSR_IA32_UCODE_REV: + val =3D (0x100000000ULL << 32) | 0x100000000ULL; + break; + case MSR_EFER: + val =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER); + break; + case MSR_FSBASE: + val =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE); + break; + case MSR_GSBASE: + val =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE); + break; + case MSR_KERNELGSBASE: + val =3D rvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE); + break; + case MSR_STAR: + abort(); + break; + case MSR_LSTAR: + abort(); + break; + case MSR_CSTAR: + abort(); + break; + case MSR_IA32_MISC_ENABLE: + val =3D env->msr_ia32_misc_enable; + break; + case MSR_MTRRphysBase(0): + case MSR_MTRRphysBase(1): + case MSR_MTRRphysBase(2): + case MSR_MTRRphysBase(3): + case MSR_MTRRphysBase(4): + case MSR_MTRRphysBase(5): + case MSR_MTRRphysBase(6): + case MSR_MTRRphysBase(7): + val =3D env->mtrr_var[(ECX(cpu) - MSR_MTRRphysBase(0)) / 2].ba= se; + break; + case MSR_MTRRphysMask(0): + case MSR_MTRRphysMask(1): + case MSR_MTRRphysMask(2): + case MSR_MTRRphysMask(3): + case MSR_MTRRphysMask(4): + case MSR_MTRRphysMask(5): + case MSR_MTRRphysMask(6): + case MSR_MTRRphysMask(7): + val =3D env->mtrr_var[(ECX(cpu) - MSR_MTRRphysMask(0)) / 2].ma= sk; + break; + case MSR_MTRRfix64K_00000: + val =3D env->mtrr_fixed[0]; + break; + case MSR_MTRRfix16K_80000: + case MSR_MTRRfix16K_A0000: + val =3D env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix16K_80000 + 1]; + break; + case MSR_MTRRfix4K_C0000: + case MSR_MTRRfix4K_C8000: + case MSR_MTRRfix4K_D0000: + case MSR_MTRRfix4K_D8000: + case MSR_MTRRfix4K_E0000: + case MSR_MTRRfix4K_E8000: + case MSR_MTRRfix4K_F0000: + case MSR_MTRRfix4K_F8000: + val =3D env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix4K_C0000 + 3]; + break; + case MSR_MTRRdefType: + val =3D env->mtrr_deftype; + break; + default: + // fprintf(stderr, "%s: unknown msr 0x%x\n", __func__, msr); + val =3D 0; + break; + } + + RAX(cpu) =3D (uint32_t)val; + RDX(cpu) =3D (uint32_t)(val >> 32); +} + +static void exec_rdmsr(struct CPUState *cpu, struct x86_decode *decode) +{ + simulate_rdmsr(cpu); + RIP(cpu) +=3D decode->len; +} + +void simulate_wrmsr(struct CPUState *cpu) +{ + X86CPU *x86_cpu =3D X86_CPU(cpu); + CPUX86State *env =3D &x86_cpu->env; + uint32_t msr =3D ECX(cpu); + uint64_t data =3D ((uint64_t)EDX(cpu) << 32) | EAX(cpu); + + switch (msr) { + case MSR_IA32_TSC: + // if (!osx_is_sierra()) + // wvmcs(cpu->hvf_fd, VMCS_TSC_OFFSET, data - rdtscp()); + //hv_vm_sync_tsc(data); + break; + case MSR_IA32_APICBASE: + cpu_set_apic_base(X86_CPU(cpu)->apic_state, data); + break; + case MSR_FSBASE: + wvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE, data); + break; + case MSR_GSBASE: + wvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE, data); + break; + case MSR_KERNELGSBASE: + wvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE, data); + break; + case MSR_STAR: + abort(); + break; + case MSR_LSTAR: + abort(); + break; + case MSR_CSTAR: + abort(); + break; + case MSR_EFER: + cpu->hvf_x86->efer.efer =3D data; + //printf("new efer %llx\n", EFER(cpu)); + wvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER, data); + if (data & EFER_NXE) + hv_vcpu_invalidate_tlb(cpu->hvf_fd); + break; + case MSR_MTRRphysBase(0): + case MSR_MTRRphysBase(1): + case MSR_MTRRphysBase(2): + case MSR_MTRRphysBase(3): + case MSR_MTRRphysBase(4): + case MSR_MTRRphysBase(5): + case MSR_MTRRphysBase(6): + case MSR_MTRRphysBase(7): + env->mtrr_var[(ECX(cpu) - MSR_MTRRphysBase(0)) / 2].base =3D d= ata; + 
break; + case MSR_MTRRphysMask(0): + case MSR_MTRRphysMask(1): + case MSR_MTRRphysMask(2): + case MSR_MTRRphysMask(3): + case MSR_MTRRphysMask(4): + case MSR_MTRRphysMask(5): + case MSR_MTRRphysMask(6): + case MSR_MTRRphysMask(7): + env->mtrr_var[(ECX(cpu) - MSR_MTRRphysMask(0)) / 2].mask =3D d= ata; + break; + case MSR_MTRRfix64K_00000: + env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix64K_00000] =3D data; + break; + case MSR_MTRRfix16K_80000: + case MSR_MTRRfix16K_A0000: + env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix16K_80000 + 1] =3D data; + break; + case MSR_MTRRfix4K_C0000: + case MSR_MTRRfix4K_C8000: + case MSR_MTRRfix4K_D0000: + case MSR_MTRRfix4K_D8000: + case MSR_MTRRfix4K_E0000: + case MSR_MTRRfix4K_E8000: + case MSR_MTRRfix4K_F0000: + case MSR_MTRRfix4K_F8000: + env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix4K_C0000 + 3] =3D data; + break; + case MSR_MTRRdefType: + env->mtrr_deftype =3D data; + break; + default: + break; + } + + /* Related to support known hypervisor interface */ + // if (g_hypervisor_iface) + // g_hypervisor_iface->wrmsr_handler(cpu, msr, data); + + //printf("write msr %llx\n", RCX(cpu)); +} + +static void exec_wrmsr(struct CPUState *cpu, struct x86_decode *decode) +{ + simulate_wrmsr(cpu); + RIP(cpu) +=3D decode->len; +} + +/* + * flag: + * 0 - bt, 1 - btc, 2 - bts, 3 - btr + */ +static void do_bt(struct CPUState *cpu, struct x86_decode *decode, int fla= g) +{ + int32_t displacement; + uint8_t index; + bool cf; + int mask =3D (4 =3D=3D decode->operand_size) ? 0x1f : 0xf; + + VM_PANIC_ON(decode->rex.rex); + + fetch_operands(cpu, decode, 2, false, true, false); + index =3D decode->op[1].val & mask; + + if (decode->op[0].type !=3D X86_VAR_REG) { + if (4 =3D=3D decode->operand_size) { + displacement =3D ((int32_t) (decode->op[1].val & 0xffffffe0)) = / 32; + decode->op[0].ptr +=3D 4 * displacement; + } else if (2 =3D=3D decode->operand_size) { + displacement =3D ((int16_t) (decode->op[1].val & 0xfff0)) / 16; + decode->op[0].ptr +=3D 2 * displacement; + } else { + VM_PANIC("bt 64bit\n"); + } + } + decode->op[0].val =3D read_val_ext(cpu, decode->op[0].ptr, decode->ope= rand_size); + cf =3D (decode->op[0].val >> index) & 0x01; + + switch (flag) { + case 0: + set_CF(cpu, cf); + return; + case 1: + decode->op[0].val ^=3D (1u << index); + break; + case 2: + decode->op[0].val |=3D (1u << index); + break; + case 3: + decode->op[0].val &=3D ~(1u << index); + break; + } + write_val_ext(cpu, decode->op[0].ptr, decode->op[0].val, decode->opera= nd_size); + set_CF(cpu, cf); +} + +static void exec_bt(struct CPUState *cpu, struct x86_decode *decode) +{ + do_bt(cpu, decode, 0); + RIP(cpu) +=3D decode->len; +} + +static void exec_btc(struct CPUState *cpu, struct x86_decode *decode) +{ + do_bt(cpu, decode, 1); + RIP(cpu) +=3D decode->len; +} + +static void exec_btr(struct CPUState *cpu, struct x86_decode *decode) +{ + do_bt(cpu, decode, 3); + RIP(cpu) +=3D decode->len; +} + +static void exec_bts(struct CPUState *cpu, struct x86_decode *decode) +{ + do_bt(cpu, decode, 2); + RIP(cpu) +=3D decode->len; +} + +void exec_shl(struct CPUState *cpu, struct x86_decode *decode) +{ + uint8_t count; + int of =3D 0, cf =3D 0; + + fetch_operands(cpu, decode, 2, true, true, false); + + count =3D decode->op[1].val; + count &=3D 0x1f; // count is masked to 5 bits + if (!count) + goto exit; + + switch (decode->operand_size) { + case 1: + { + uint8_t res =3D 0; + if (count <=3D 8) { + res =3D (decode->op[0].val << count); + cf =3D (decode->op[0].val >> (8 - count)) & 0x1; + of =3D cf ^ (res >> 7); + } + + write_val_ext(cpu, 
decode->op[0].ptr, res, 1); + SET_FLAGS_OSZAPC_LOGIC_8(res); + SET_FLAGS_OxxxxC(cpu, of, cf); + break; + } + case 2: + { + uint16_t res =3D 0; + + /* from bochs */ + if (count <=3D 16) { + res =3D (decode->op[0].val << count); + cf =3D (decode->op[0].val >> (16 - count)) & 0x1; + of =3D cf ^ (res >> 15); // of =3D cf ^ result15 + } + + write_val_ext(cpu, decode->op[0].ptr, res, 2); + SET_FLAGS_OSZAPC_LOGIC_16(res); + SET_FLAGS_OxxxxC(cpu, of, cf); + break; + } + case 4: + { + uint32_t res =3D decode->op[0].val << count; + =20 + write_val_ext(cpu, decode->op[0].ptr, res, 4); + SET_FLAGS_OSZAPC_LOGIC_32(res); + cf =3D (decode->op[0].val >> (32 - count)) & 0x1; + of =3D cf ^ (res >> 31); // of =3D cf ^ result31 + SET_FLAGS_OxxxxC(cpu, of, cf); + break; + } + default: + abort(); + } + +exit: + //lflags_to_rflags(cpu); + RIP(cpu) +=3D decode->len; +} + +void exec_movsx(struct CPUState *cpu, struct x86_decode *decode) +{ + int src_op_size; + int op_size =3D decode->operand_size; + + fetch_operands(cpu, decode, 2, false, false, false); + + if (0xbe =3D=3D decode->opcode[1]) + src_op_size =3D 1; + else + src_op_size =3D 2; + + decode->operand_size =3D src_op_size; + calc_modrm_operand(cpu, decode, &decode->op[1]); + decode->op[1].val =3D sign(read_val_ext(cpu, decode->op[1].ptr, src_op= _size), src_op_size); + + write_val_ext(cpu, decode->op[0].ptr, decode->op[1].val, op_size); + + RIP(cpu) +=3D decode->len; +} + +void exec_ror(struct CPUState *cpu, struct x86_decode *decode) +{ + uint8_t count; + + fetch_operands(cpu, decode, 2, true, true, false); + count =3D decode->op[1].val; + + switch (decode->operand_size) { + case 1: + { + uint32_t bit6, bit7; + uint8_t res; + + if ((count & 0x07) =3D=3D 0) { + if (count & 0x18) { + bit6 =3D ((uint8_t)decode->op[0].val >> 6) & 1; + bit7 =3D ((uint8_t)decode->op[0].val >> 7) & 1; + SET_FLAGS_OxxxxC(cpu, bit6 ^ bit7, bit7); + } + } else { + count &=3D 0x7; /* use only bottom 3 bits */ + res =3D ((uint8_t)decode->op[0].val >> count) | ((uint8_t)= decode->op[0].val << (8 - count)); + write_val_ext(cpu, decode->op[0].ptr, res, 1); + bit6 =3D (res >> 6) & 1; + bit7 =3D (res >> 7) & 1; + /* set eflags: ROR count affects the following flags: C, O= */ + SET_FLAGS_OxxxxC(cpu, bit6 ^ bit7, bit7); + } + break; + } + case 2: + { + uint32_t bit14, bit15; + uint16_t res; + + if ((count & 0x0f) =3D=3D 0) { + if (count & 0x10) { + bit14 =3D ((uint16_t)decode->op[0].val >> 14) & 1; + bit15 =3D ((uint16_t)decode->op[0].val >> 15) & 1; + // of =3D result14 ^ result15 + SET_FLAGS_OxxxxC(cpu, bit14 ^ bit15, bit15); + } + } else { + count &=3D 0x0f; // use only 4 LSB's + res =3D ((uint16_t)decode->op[0].val >> count) | ((uint16_= t)decode->op[0].val << (16 - count)); + write_val_ext(cpu, decode->op[0].ptr, res, 2); + + bit14 =3D (res >> 14) & 1; + bit15 =3D (res >> 15) & 1; + // of =3D result14 ^ result15 + SET_FLAGS_OxxxxC(cpu, bit14 ^ bit15, bit15); + } + break; + } + case 4: + { + uint32_t bit31, bit30; + uint32_t res; + + count &=3D 0x1f; + if (count) { + res =3D ((uint32_t)decode->op[0].val >> count) | ((uint32_= t)decode->op[0].val << (32 - count)); + write_val_ext(cpu, decode->op[0].ptr, res, 4); + + bit31 =3D (res >> 31) & 1; + bit30 =3D (res >> 30) & 1; + // of =3D result30 ^ result31 + SET_FLAGS_OxxxxC(cpu, bit30 ^ bit31, bit31); + } + break; + } + } + RIP(cpu) +=3D decode->len; +} + +void exec_rol(struct CPUState *cpu, struct x86_decode *decode) +{ + uint8_t count; + + fetch_operands(cpu, decode, 2, true, true, false); + count =3D decode->op[1].val; + + switch 
(decode->operand_size) { + case 1: + { + uint32_t bit0, bit7; + uint8_t res; + + if ((count & 0x07) =3D=3D 0) { + if (count & 0x18) { + bit0 =3D ((uint8_t)decode->op[0].val & 1); + bit7 =3D ((uint8_t)decode->op[0].val >> 7); + SET_FLAGS_OxxxxC(cpu, bit0 ^ bit7, bit0); + } + } else { + count &=3D 0x7; // use only lowest 3 bits + res =3D ((uint8_t)decode->op[0].val << count) | ((uint8_t)= decode->op[0].val >> (8 - count)); + + write_val_ext(cpu, decode->op[0].ptr, res, 1); + /* set eflags: + * ROL count affects the following flags: C, O + */ + bit0 =3D (res & 1); + bit7 =3D (res >> 7); + SET_FLAGS_OxxxxC(cpu, bit0 ^ bit7, bit0); + } + break; + } + case 2: + { + uint32_t bit0, bit15; + uint16_t res; + + if ((count & 0x0f) =3D=3D 0) { + if (count & 0x10) { + bit0 =3D ((uint16_t)decode->op[0].val & 0x1); + bit15 =3D ((uint16_t)decode->op[0].val >> 15); + // of =3D cf ^ result15 + SET_FLAGS_OxxxxC(cpu, bit0 ^ bit15, bit0); + } + } else { + count &=3D 0x0f; // only use bottom 4 bits + res =3D ((uint16_t)decode->op[0].val << count) | ((uint16_= t)decode->op[0].val >> (16 - count)); + + write_val_ext(cpu, decode->op[0].ptr, res, 2); + bit0 =3D (res & 0x1); + bit15 =3D (res >> 15); + // of =3D cf ^ result15 + SET_FLAGS_OxxxxC(cpu, bit0 ^ bit15, bit0); + } + break; + } + case 4: + { + uint32_t bit0, bit31; + uint32_t res; + + count &=3D 0x1f; + if (count) { + res =3D ((uint32_t)decode->op[0].val << count) | ((uint32_= t)decode->op[0].val >> (32 - count)); + + write_val_ext(cpu, decode->op[0].ptr, res, 4); + bit0 =3D (res & 0x1); + bit31 =3D (res >> 31); + // of =3D cf ^ result31 + SET_FLAGS_OxxxxC(cpu, bit0 ^ bit31, bit0); + } + break; + } + } + RIP(cpu) +=3D decode->len; +} + + +void exec_rcl(struct CPUState *cpu, struct x86_decode *decode) +{ + uint8_t count; + int of =3D 0, cf =3D 0; + + fetch_operands(cpu, decode, 2, true, true, false); + count =3D decode->op[1].val & 0x1f; + + switch(decode->operand_size) { + case 1: + { + uint8_t op1_8 =3D decode->op[0].val; + uint8_t res; + count %=3D 9; + if (!count) + break; + + if (1 =3D=3D count) + res =3D (op1_8 << 1) | get_CF(cpu); + else + res =3D (op1_8 << count) | (get_CF(cpu) << (count - 1)) | = (op1_8 >> (9 - count)); + + write_val_ext(cpu, decode->op[0].ptr, res, 1); + + cf =3D (op1_8 >> (8 - count)) & 0x01; + of =3D cf ^ (res >> 7); // of =3D cf ^ result7 + SET_FLAGS_OxxxxC(cpu, of, cf); + break; + } + case 2: + { + uint16_t res; + uint16_t op1_16 =3D decode->op[0].val; + + count %=3D 17; + if (!count) + break; + + if (1 =3D=3D count) + res =3D (op1_16 << 1) | get_CF(cpu); + else if (count =3D=3D 16) + res =3D (get_CF(cpu) << 15) | (op1_16 >> 1); + else // 2..15 + res =3D (op1_16 << count) | (get_CF(cpu) << (count - 1)) |= (op1_16 >> (17 - count)); + =20 + write_val_ext(cpu, decode->op[0].ptr, res, 2); + =20 + cf =3D (op1_16 >> (16 - count)) & 0x1; + of =3D cf ^ (res >> 15); // of =3D cf ^ result15 + SET_FLAGS_OxxxxC(cpu, of, cf); + break; + } + case 4: + { + uint32_t res; + uint32_t op1_32 =3D decode->op[0].val; + + if (!count) + break; + + if (1 =3D=3D count) + res =3D (op1_32 << 1) | get_CF(cpu); + else + res =3D (op1_32 << count) | (get_CF(cpu) << (count - 1)) |= (op1_32 >> (33 - count)); + + write_val_ext(cpu, decode->op[0].ptr, res, 4); + + cf =3D (op1_32 >> (32 - count)) & 0x1; + of =3D cf ^ (res >> 31); // of =3D cf ^ result31 + SET_FLAGS_OxxxxC(cpu, of, cf); + break; + } + } + RIP(cpu) +=3D decode->len; +} + +void exec_rcr(struct CPUState *cpu, struct x86_decode *decode) +{ + uint8_t count; + int of =3D 0, cf =3D 0; + + 
fetch_operands(cpu, decode, 2, true, true, false); + count =3D decode->op[1].val & 0x1f; + + switch(decode->operand_size) { + case 1: + { + uint8_t op1_8 =3D decode->op[0].val; + uint8_t res; + + count %=3D 9; + if (!count) + break; + res =3D (op1_8 >> count) | (get_CF(cpu) << (8 - count)) | (op1= _8 << (9 - count)); + + write_val_ext(cpu, decode->op[0].ptr, res, 1); + + cf =3D (op1_8 >> (count - 1)) & 0x1; + of =3D (((res << 1) ^ res) >> 7) & 0x1; // of =3D result6 ^ re= sult7 + SET_FLAGS_OxxxxC(cpu, of, cf); + break; + } + case 2: + { + uint16_t op1_16 =3D decode->op[0].val; + uint16_t res; + + count %=3D 17; + if (!count) + break; + res =3D (op1_16 >> count) | (get_CF(cpu) << (16 - count)) | (o= p1_16 << (17 - count)); + + write_val_ext(cpu, decode->op[0].ptr, res, 2); + + cf =3D (op1_16 >> (count - 1)) & 0x1; + of =3D ((uint16_t)((res << 1) ^ res) >> 15) & 0x1; // of =3D r= esult15 ^ result14 + SET_FLAGS_OxxxxC(cpu, of, cf); + break; + } + case 4: + { + uint32_t res; + uint32_t op1_32 =3D decode->op[0].val; + + if (!count) + break; +=20 + if (1 =3D=3D count) + res =3D (op1_32 >> 1) | (get_CF(cpu) << 31); + else + res =3D (op1_32 >> count) | (get_CF(cpu) << (32 - count)) = | (op1_32 << (33 - count)); + + write_val_ext(cpu, decode->op[0].ptr, res, 4); + + cf =3D (op1_32 >> (count - 1)) & 0x1; + of =3D ((res << 1) ^ res) >> 31; // of =3D result30 ^ result31 + SET_FLAGS_OxxxxC(cpu, of, cf); + break; + } + } + RIP(cpu) +=3D decode->len; +} + +static void exec_xchg(struct CPUState *cpu, struct x86_decode *decode) +{ + fetch_operands(cpu, decode, 2, true, true, false); + + write_val_ext(cpu, decode->op[0].ptr, decode->op[1].val, decode->opera= nd_size); + write_val_ext(cpu, decode->op[1].ptr, decode->op[0].val, decode->opera= nd_size); + + RIP(cpu) +=3D decode->len; +} + +static void exec_xadd(struct CPUState *cpu, struct x86_decode *decode) +{ + EXEC_2OP_ARITH_CMD(cpu, decode, +, SET_FLAGS_OSZAPC_ADD, true); + write_val_ext(cpu, decode->op[1].ptr, decode->op[0].val, decode->opera= nd_size); + + RIP(cpu) +=3D decode->len; +} + +static struct cmd_handler { + enum x86_decode_cmd cmd; + void (*handler)(struct CPUState *cpu, struct x86_decode *ins); +} handlers[] =3D { + {X86_DECODE_CMD_INVL, NULL,}, + {X86_DECODE_CMD_MOV, exec_mov}, + {X86_DECODE_CMD_ADD, exec_add}, + {X86_DECODE_CMD_OR, exec_or}, + {X86_DECODE_CMD_ADC, exec_adc}, + {X86_DECODE_CMD_SBB, exec_sbb}, + {X86_DECODE_CMD_AND, exec_and}, + {X86_DECODE_CMD_SUB, exec_sub}, + {X86_DECODE_CMD_NEG, exec_neg}, + {X86_DECODE_CMD_XOR, exec_xor}, + {X86_DECODE_CMD_CMP, exec_cmp}, + {X86_DECODE_CMD_INC, exec_inc}, + {X86_DECODE_CMD_DEC, exec_dec}, + {X86_DECODE_CMD_TST, exec_tst}, + {X86_DECODE_CMD_NOT, exec_not}, + {X86_DECODE_CMD_MOVZX, exec_movzx}, + {X86_DECODE_CMD_OUT, exec_out}, + {X86_DECODE_CMD_IN, exec_in}, + {X86_DECODE_CMD_INS, exec_ins}, + {X86_DECODE_CMD_OUTS, exec_outs}, + {X86_DECODE_CMD_RDMSR, exec_rdmsr}, + {X86_DECODE_CMD_WRMSR, exec_wrmsr}, + {X86_DECODE_CMD_BT, exec_bt}, + {X86_DECODE_CMD_BTR, exec_btr}, + {X86_DECODE_CMD_BTC, exec_btc}, + {X86_DECODE_CMD_BTS, exec_bts}, + {X86_DECODE_CMD_SHL, exec_shl}, + {X86_DECODE_CMD_ROL, exec_rol}, + {X86_DECODE_CMD_ROR, exec_ror}, + {X86_DECODE_CMD_RCR, exec_rcr}, + {X86_DECODE_CMD_RCL, exec_rcl}, + /*{X86_DECODE_CMD_CPUID, exec_cpuid},*/ + {X86_DECODE_CMD_MOVS, exec_movs}, + {X86_DECODE_CMD_CMPS, exec_cmps}, + {X86_DECODE_CMD_STOS, exec_stos}, + {X86_DECODE_CMD_SCAS, exec_scas}, + {X86_DECODE_CMD_LODS, exec_lods}, + {X86_DECODE_CMD_MOVSX, exec_movsx}, + {X86_DECODE_CMD_XCHG, 
exec_xchg}, + {X86_DECODE_CMD_XADD, exec_xadd}, +}; + +static struct cmd_handler _cmd_handler[X86_DECODE_CMD_LAST]; + +static void init_cmd_handler(CPUState *cpu) +{ + int i; + for (i =3D 0; i < ARRAY_SIZE(handlers); i++) + _cmd_handler[handlers[i].cmd] =3D handlers[i]; +} + +static void print_debug(struct CPUState *cpu) +{ + printf("%llx: eax %llx ebx %llx ecx %llx edx %llx esi %llx edi %llx eb= p %llx esp %llx flags %llx\n", RIP(cpu), RAX(cpu), RBX(cpu), RCX(cpu), RDX(= cpu), RSI(cpu), RDI(cpu), RBP(cpu), RSP(cpu), EFLAGS(cpu)); +} + +void load_regs(struct CPUState *cpu) +{ + int i =3D 0; + RRX(cpu, REG_RAX) =3D rreg(cpu->hvf_fd, HV_X86_RAX); + RRX(cpu, REG_RBX) =3D rreg(cpu->hvf_fd, HV_X86_RBX); + RRX(cpu, REG_RCX) =3D rreg(cpu->hvf_fd, HV_X86_RCX); + RRX(cpu, REG_RDX) =3D rreg(cpu->hvf_fd, HV_X86_RDX); + RRX(cpu, REG_RSI) =3D rreg(cpu->hvf_fd, HV_X86_RSI); + RRX(cpu, REG_RDI) =3D rreg(cpu->hvf_fd, HV_X86_RDI); + RRX(cpu, REG_RSP) =3D rreg(cpu->hvf_fd, HV_X86_RSP); + RRX(cpu, REG_RBP) =3D rreg(cpu->hvf_fd, HV_X86_RBP); + for (i =3D 8; i < 16; i++) + RRX(cpu, i) =3D rreg(cpu->hvf_fd, HV_X86_RAX + i); + =20 + RFLAGS(cpu) =3D rreg(cpu->hvf_fd, HV_X86_RFLAGS); + rflags_to_lflags(cpu); + RIP(cpu) =3D rreg(cpu->hvf_fd, HV_X86_RIP); + + //print_debug(cpu); +} + +void store_regs(struct CPUState *cpu) +{ + int i =3D 0; + wreg(cpu->hvf_fd, HV_X86_RAX, RAX(cpu)); + wreg(cpu->hvf_fd, HV_X86_RBX, RBX(cpu)); + wreg(cpu->hvf_fd, HV_X86_RCX, RCX(cpu)); + wreg(cpu->hvf_fd, HV_X86_RDX, RDX(cpu)); + wreg(cpu->hvf_fd, HV_X86_RSI, RSI(cpu)); + wreg(cpu->hvf_fd, HV_X86_RDI, RDI(cpu)); + wreg(cpu->hvf_fd, HV_X86_RBP, RBP(cpu)); + wreg(cpu->hvf_fd, HV_X86_RSP, RSP(cpu)); + for (i =3D 8; i < 16; i++) + wreg(cpu->hvf_fd, HV_X86_RAX + i, RRX(cpu, i)); + =20 + lflags_to_rflags(cpu); + wreg(cpu->hvf_fd, HV_X86_RFLAGS, RFLAGS(cpu)); + macvm_set_rip(cpu, RIP(cpu)); + + //print_debug(cpu); +} + +bool exec_instruction(struct CPUState *cpu, struct x86_decode *ins) +{ + //if (hvf_vcpu_id(cpu)) + //printf("%d, %llx: exec_instruction %s\n", hvf_vcpu_id(cpu), RIP(cpu= ), decode_cmd_to_string(ins->cmd)); + =20 + if (0 && ins->is_fpu) { + VM_PANIC("emulate fpu\n"); + } else { + if (!_cmd_handler[ins->cmd].handler) { + printf("Unimplemented handler (%llx) for %d (%x %x) \n", RIP(c= pu), ins->cmd, ins->opcode[0], + ins->opcode_len > 1 ? ins->opcode[1] : 0); + RIP(cpu) +=3D ins->len; + return true; + } + =20 + VM_PANIC_ON_EX(!_cmd_handler[ins->cmd].handler, "Unimplemented han= dler (%llx) for %d (%x %x) \n", RIP(cpu), ins->cmd, ins->opcode[0], ins->op= code_len > 1 ? ins->opcode[1] : 0); + _cmd_handler[ins->cmd].handler(cpu, ins); + } + return true; +} + +void init_emu(struct CPUState *cpu) +{ + init_cmd_handler(cpu); +} diff --git a/target/i386/hvf-utils/x86_emu.h b/target/i386/hvf-utils/x86_em= u.h new file mode 100644 index 0000000000..42cc5e4296 --- /dev/null +++ b/target/i386/hvf-utils/x86_emu.h @@ -0,0 +1,33 @@ +/* + * Copyright (C) 2016 Veertu Inc, + * Copyright (C) 2017 Google Inc, + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 or + * (at your option) version 3 of the License. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see . + */ +#ifndef __X86_EMU_H__ +#define __X86_EMU_H__ + +#include "x86.h" +#include "x86_decode.h" + +void init_emu(struct CPUState *cpu); +bool exec_instruction(struct CPUState *cpu, struct x86_decode *ins); + +void load_regs(struct CPUState *cpu); +void store_regs(struct CPUState *cpu); + +void simulate_rdmsr(struct CPUState *cpu); +void simulate_wrmsr(struct CPUState *cpu); + +#endif diff --git a/target/i386/hvf-utils/x86_flags.c b/target/i386/hvf-utils/x86_= flags.c new file mode 100644 index 0000000000..ca876d03dd --- /dev/null +++ b/target/i386/hvf-utils/x86_flags.c @@ -0,0 +1,317 @@ +///////////////////////////////////////////////////////////////////////// +// +// Copyright (C) 2001-2012 The Bochs Project +// Copyright (C) 2017 Google Inc. +// +// This library is free software; you can redistribute it and/or +// modify it under the terms of the GNU Lesser General Public +// License as published by the Free Software Foundation; either +// version 2 of the License, or (at your option) any later version. +// +// This library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +// Lesser General Public License for more details. +// +// You should have received a copy of the GNU Lesser General Public +// License along with this library; if not, write to the Free Software +// Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA B 02110-1301= USA +///////////////////////////////////////////////////////////////////////// +/* + * flags functions + */ + +#include "qemu/osdep.h" +#include "qemu-common.h" + +#include "cpu.h" +#include "x86_flags.h" +#include "x86.h" + +void SET_FLAGS_OxxxxC(struct CPUState *cpu, uint32_t new_of, uint32_t new_= cf) +{ + uint32_t temp_po =3D new_of ^ new_cf; + cpu->hvf_x86->lflags.auxbits &=3D ~(LF_MASK_PO | LF_MASK_CF); + cpu->hvf_x86->lflags.auxbits |=3D (temp_po << LF_BIT_PO) | (new_cf << = LF_BIT_CF); +} + +void SET_FLAGS_OSZAPC_SUB32(struct CPUState *cpu, uint32_t v1, uint32_t v2= , uint32_t diff) +{ + SET_FLAGS_OSZAPC_SUB_32(v1, v2, diff); +} + +void SET_FLAGS_OSZAPC_SUB16(struct CPUState *cpu, uint16_t v1, uint16_t v2= , uint16_t diff) +{ + SET_FLAGS_OSZAPC_SUB_16(v1, v2, diff); +} + +void SET_FLAGS_OSZAPC_SUB8(struct CPUState *cpu, uint8_t v1, uint8_t v2, u= int8_t diff) +{ + SET_FLAGS_OSZAPC_SUB_8(v1, v2, diff); +} + +void SET_FLAGS_OSZAPC_ADD32(struct CPUState *cpu, uint32_t v1, uint32_t v2= , uint32_t diff) +{ + SET_FLAGS_OSZAPC_ADD_32(v1, v2, diff); +} + +void SET_FLAGS_OSZAPC_ADD16(struct CPUState *cpu, uint16_t v1, uint16_t v2= , uint16_t diff) +{ + SET_FLAGS_OSZAPC_ADD_16(v1, v2, diff); +} + +void SET_FLAGS_OSZAPC_ADD8(struct CPUState *cpu, uint8_t v1, uint8_t v2, u= int8_t diff) +{ + SET_FLAGS_OSZAPC_ADD_8(v1, v2, diff); +} + +void SET_FLAGS_OSZAP_SUB32(struct CPUState *cpu, uint32_t v1, uint32_t v2,= uint32_t diff) +{ + SET_FLAGS_OSZAP_SUB_32(v1, v2, diff); +} + +void SET_FLAGS_OSZAP_SUB16(struct CPUState *cpu, uint16_t v1, uint16_t v2,= uint16_t diff) +{ + SET_FLAGS_OSZAP_SUB_16(v1, v2, diff); +} + +void SET_FLAGS_OSZAP_SUB8(struct CPUState *cpu, uint8_t v1, uint8_t v2, ui= nt8_t diff) +{ + SET_FLAGS_OSZAP_SUB_8(v1, v2, diff); +} + +void SET_FLAGS_OSZAP_ADD32(struct CPUState *cpu, uint32_t v1, uint32_t v2,= uint32_t diff) +{ + SET_FLAGS_OSZAP_ADD_32(v1, v2, diff); +} + +void 
SET_FLAGS_OSZAP_ADD16(struct CPUState *cpu, uint16_t v1, uint16_t v2,= uint16_t diff) +{ + SET_FLAGS_OSZAP_ADD_16(v1, v2, diff); +} + +void SET_FLAGS_OSZAP_ADD8(struct CPUState *cpu, uint8_t v1, uint8_t v2, ui= nt8_t diff) +{ + SET_FLAGS_OSZAP_ADD_8(v1, v2, diff); +} + + +void SET_FLAGS_OSZAPC_LOGIC32(struct CPUState *cpu, uint32_t diff) +{ + SET_FLAGS_OSZAPC_LOGIC_32(diff); +} + +void SET_FLAGS_OSZAPC_LOGIC16(struct CPUState *cpu, uint16_t diff) +{ + SET_FLAGS_OSZAPC_LOGIC_16(diff); +} + +void SET_FLAGS_OSZAPC_LOGIC8(struct CPUState *cpu, uint8_t diff) +{ + SET_FLAGS_OSZAPC_LOGIC_8(diff); +} + +void SET_FLAGS_SHR32(struct CPUState *cpu, uint32_t v, int count, uint32_t= res) +{ + int cf =3D (v >> (count - 1)) & 0x1; + int of =3D (((res << 1) ^ res) >> 31); + + SET_FLAGS_OSZAPC_LOGIC_32(res); + SET_FLAGS_OxxxxC(cpu, of, cf); +} + +void SET_FLAGS_SHR16(struct CPUState *cpu, uint16_t v, int count, uint16_t= res) +{ + int cf =3D (v >> (count - 1)) & 0x1; + int of =3D (((res << 1) ^ res) >> 15); + + SET_FLAGS_OSZAPC_LOGIC_16(res); + SET_FLAGS_OxxxxC(cpu, of, cf); +} + +void SET_FLAGS_SHR8(struct CPUState *cpu, uint8_t v, int count, uint8_t re= s) +{ + int cf =3D (v >> (count - 1)) & 0x1; + int of =3D (((res << 1) ^ res) >> 7); + + SET_FLAGS_OSZAPC_LOGIC_8(res); + SET_FLAGS_OxxxxC(cpu, of, cf); +} + +void SET_FLAGS_SAR32(struct CPUState *cpu, int32_t v, int count, uint32_t = res) +{ + int cf =3D (v >> (count - 1)) & 0x1; + + SET_FLAGS_OSZAPC_LOGIC_32(res); + SET_FLAGS_OxxxxC(cpu, 0, cf); +} + +void SET_FLAGS_SAR16(struct CPUState *cpu, int16_t v, int count, uint16_t = res) +{ + int cf =3D (v >> (count - 1)) & 0x1; + + SET_FLAGS_OSZAPC_LOGIC_16(res); + SET_FLAGS_OxxxxC(cpu, 0, cf); +} + +void SET_FLAGS_SAR8(struct CPUState *cpu, int8_t v, int count, uint8_t res) +{ + int cf =3D (v >> (count - 1)) & 0x1; + + SET_FLAGS_OSZAPC_LOGIC_8(res); + SET_FLAGS_OxxxxC(cpu, 0, cf); +} + + +void SET_FLAGS_SHL32(struct CPUState *cpu, uint32_t v, int count, uint32_t= res) +{ + int of, cf; + + cf =3D (v >> (32 - count)) & 0x1; + of =3D cf ^ (res >> 31); + + SET_FLAGS_OSZAPC_LOGIC_32(res); + SET_FLAGS_OxxxxC(cpu, of, cf); +} + +void SET_FLAGS_SHL16(struct CPUState *cpu, uint16_t v, int count, uint16_t= res) +{ + int of =3D 0, cf =3D 0; + + if (count <=3D 16) { + cf =3D (v >> (16 - count)) & 0x1; + of =3D cf ^ (res >> 15); + } + + SET_FLAGS_OSZAPC_LOGIC_16(res); + SET_FLAGS_OxxxxC(cpu, of, cf); +} + +void SET_FLAGS_SHL8(struct CPUState *cpu, uint8_t v, int count, uint8_t re= s) +{ + int of =3D 0, cf =3D 0; + + if (count <=3D 8) { + cf =3D (v >> (8 - count)) & 0x1; + of =3D cf ^ (res >> 7); + } + + SET_FLAGS_OSZAPC_LOGIC_8(res); + SET_FLAGS_OxxxxC(cpu, of, cf); +} + +bool get_PF(struct CPUState *cpu) +{ + uint32_t temp =3D (255 & cpu->hvf_x86->lflags.result); + temp =3D temp ^ (255 & (cpu->hvf_x86->lflags.auxbits >> LF_BIT_PDB)); + temp =3D (temp ^ (temp >> 4)) & 0x0F; + return (0x9669U >> temp) & 1; +} + +void set_PF(struct CPUState *cpu, bool val) +{ + uint32_t temp =3D (255 & cpu->hvf_x86->lflags.result) ^ (!val); + cpu->hvf_x86->lflags.auxbits &=3D ~(LF_MASK_PDB); + cpu->hvf_x86->lflags.auxbits |=3D (temp << LF_BIT_PDB); +} + +bool _get_OF(struct CPUState *cpu) +{ + return ((cpu->hvf_x86->lflags.auxbits + (1U << LF_BIT_PO)) >> LF_BIT_C= F) & 1; +} + +bool get_OF(struct CPUState *cpu) +{ + return _get_OF(cpu); +} + +bool _get_CF(struct CPUState *cpu) +{ + return (cpu->hvf_x86->lflags.auxbits >> LF_BIT_CF) & 1; +} + +bool get_CF(struct CPUState *cpu) +{ + return _get_CF(cpu); +} + +void set_OF(struct CPUState 
*cpu, bool val) +{ + SET_FLAGS_OxxxxC(cpu, val, _get_CF(cpu)); +} + +void set_CF(struct CPUState *cpu, bool val) +{ + SET_FLAGS_OxxxxC(cpu, _get_OF(cpu), (val)); +} + +bool get_AF(struct CPUState *cpu) +{ + return (cpu->hvf_x86->lflags.auxbits >> LF_BIT_AF) & 1; +} + +void set_AF(struct CPUState *cpu, bool val) +{ + cpu->hvf_x86->lflags.auxbits &=3D ~(LF_MASK_AF); + cpu->hvf_x86->lflags.auxbits |=3D (val) << LF_BIT_AF; +} + +bool get_ZF(struct CPUState *cpu) +{ + return !cpu->hvf_x86->lflags.result; +} + +void set_ZF(struct CPUState *cpu, bool val) +{ + if (val) { + cpu->hvf_x86->lflags.auxbits ^=3D (((cpu->hvf_x86->lflags.result >= > LF_SIGN_BIT) & 1) << LF_BIT_SD); + // merge the parity bits into the Parity Delta Byte + uint32_t temp_pdb =3D (255 & cpu->hvf_x86->lflags.result); + cpu->hvf_x86->lflags.auxbits ^=3D (temp_pdb << LF_BIT_PDB); + // now zero the .result value + cpu->hvf_x86->lflags.result =3D 0; + } else + cpu->hvf_x86->lflags.result |=3D (1 << 8); +} + +bool get_SF(struct CPUState *cpu) +{ + return ((cpu->hvf_x86->lflags.result >> LF_SIGN_BIT) ^ (cpu->hvf_x86->= lflags.auxbits >> LF_BIT_SD)) & 1; +} + +void set_SF(struct CPUState *cpu, bool val) +{ + bool temp_sf =3D get_SF(cpu); + cpu->hvf_x86->lflags.auxbits ^=3D (temp_sf ^ val) << LF_BIT_SD; +} + +void set_OSZAPC(struct CPUState *cpu, uint32_t flags32) +{ + set_OF(cpu, cpu->hvf_x86->rflags.of); + set_SF(cpu, cpu->hvf_x86->rflags.sf); + set_ZF(cpu, cpu->hvf_x86->rflags.zf); + set_AF(cpu, cpu->hvf_x86->rflags.af); + set_PF(cpu, cpu->hvf_x86->rflags.pf); + set_CF(cpu, cpu->hvf_x86->rflags.cf); +} + +void lflags_to_rflags(struct CPUState *cpu) +{ + cpu->hvf_x86->rflags.cf =3D get_CF(cpu); + cpu->hvf_x86->rflags.pf =3D get_PF(cpu); + cpu->hvf_x86->rflags.af =3D get_AF(cpu); + cpu->hvf_x86->rflags.zf =3D get_ZF(cpu); + cpu->hvf_x86->rflags.sf =3D get_SF(cpu); + cpu->hvf_x86->rflags.of =3D get_OF(cpu); +} + +void rflags_to_lflags(struct CPUState *cpu) +{ + cpu->hvf_x86->lflags.auxbits =3D cpu->hvf_x86->lflags.result =3D 0; + set_OF(cpu, cpu->hvf_x86->rflags.of); + set_SF(cpu, cpu->hvf_x86->rflags.sf); + set_ZF(cpu, cpu->hvf_x86->rflags.zf); + set_AF(cpu, cpu->hvf_x86->rflags.af); + set_PF(cpu, cpu->hvf_x86->rflags.pf); + set_CF(cpu, cpu->hvf_x86->rflags.cf); +} diff --git a/target/i386/hvf-utils/x86_flags.h b/target/i386/hvf-utils/x86_= flags.h new file mode 100644 index 0000000000..f963f8ad1b --- /dev/null +++ b/target/i386/hvf-utils/x86_flags.h @@ -0,0 +1,218 @@ +///////////////////////////////////////////////////////////////////////// +// +// Copyright (C) 2001-2012 The Bochs Project +// Copyright (C) 2017 Google Inc. +// +// This library is free software; you can redistribute it and/or +// modify it under the terms of the GNU Lesser General Public +// License as published by the Free Software Foundation; either +// version 2 of the License, or (at your option) any later version. +// +// This library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +// Lesser General Public License for more details. 
+// +// You should have received a copy of the GNU Lesser General Public +// License along with this library; if not, write to the Free Software +// Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA B 02110-1301= USA +///////////////////////////////////////////////////////////////////////// +/* + * x86 eflags functions + */ +#ifndef __X86_FLAGS_H__ +#define __X86_FLAGS_H__ + +#include "x86_gen.h" + +/* this is basically bocsh code */ + +typedef struct lazy_flags { + addr_t result; + addr_t auxbits; +} lazy_flags; + +#define LF_SIGN_BIT 31 + +#define LF_BIT_SD (0) /* lazy Sign Flag Delta */ +#define LF_BIT_AF (3) /* lazy Adjust flag */ +#define LF_BIT_PDB (8) /* lazy Parity Delta Byte (8 bits) */ +#define LF_BIT_CF (31) /* lazy Carry Flag */ +#define LF_BIT_PO (30) /* lazy Partial Overflow =3D CF ^ OF */ + +#define LF_MASK_SD (0x01 << LF_BIT_SD) +#define LF_MASK_AF (0x01 << LF_BIT_AF) +#define LF_MASK_PDB (0xFF << LF_BIT_PDB) +#define LF_MASK_CF (0x01 << LF_BIT_CF) +#define LF_MASK_PO (0x01 << LF_BIT_PO) + +#define ADD_COUT_VEC(op1, op2, result) \ + (((op1) & (op2)) | (((op1) | (op2)) & (~(result)))) + +#define SUB_COUT_VEC(op1, op2, result) \ + (((~(op1)) & (op2)) | (((~(op1)) ^ (op2)) & (result))) + +#define GET_ADD_OVERFLOW(op1, op2, result, mask) \ + ((((op1) ^ (result)) & ((op2) ^ (result))) & (mask)) + +// ******************* +// OSZAPC +// ******************* + +/* size, carries, result */ +#define SET_FLAGS_OSZAPC_SIZE(size, lf_carries, lf_result) { \ + addr_t temp =3D ((lf_carries) & (LF_MASK_AF)) | \ + (((lf_carries) >> (size - 2)) << LF_BIT_PO); \ + cpu->hvf_x86->lflags.result =3D (addr_t)(int##size##_t)(lf_result); \ + if ((size) =3D=3D 32) temp =3D ((lf_carries) & ~(LF_MASK_PDB | LF_MASK= _SD)); \ + else if ((size) =3D=3D 16) temp =3D ((lf_carries) & (LF_MASK_AF)) | ((= lf_carries) << 16); \ + else if ((size) =3D=3D 8) temp =3D ((lf_carries) & (LF_MASK_AF)) | ((= lf_carries) << 24); \ + else VM_PANIC("unimplemented"); = \ + cpu->hvf_x86->lflags.auxbits =3D (addr_t)(uint32_t)temp; \ +} + +/* carries, result */ +#define SET_FLAGS_OSZAPC_8(carries, result) \ + SET_FLAGS_OSZAPC_SIZE(8, carries, result) +#define SET_FLAGS_OSZAPC_16(carries, result) \ + SET_FLAGS_OSZAPC_SIZE(16, carries, result) +#define SET_FLAGS_OSZAPC_32(carries, result) \ + SET_FLAGS_OSZAPC_SIZE(32, carries, result) + +/* result */ +#define SET_FLAGS_OSZAPC_LOGIC_8(result_8) \ + SET_FLAGS_OSZAPC_8(0, (result_8)) +#define SET_FLAGS_OSZAPC_LOGIC_16(result_16) \ + SET_FLAGS_OSZAPC_16(0, (result_16)) +#define SET_FLAGS_OSZAPC_LOGIC_32(result_32) \ + SET_FLAGS_OSZAPC_32(0, (result_32)) +#define SET_FLAGS_OSZAPC_LOGIC_SIZE(size, result) { \ + if (32 =3D=3D size) {SET_FLAGS_OSZAPC_LOGIC_32(result);} \ + else if (16 =3D=3D size) {SET_FLAGS_OSZAPC_LOGIC_16(result);} \ + else if (8 =3D=3D size) {SET_FLAGS_OSZAPC_LOGIC_8(result);} \ + else VM_PANIC("unimplemented"); \ +} + +/* op1, op2, result */ +#define SET_FLAGS_OSZAPC_ADD_8(op1_8, op2_8, sum_8) \ + SET_FLAGS_OSZAPC_8(ADD_COUT_VEC((op1_8), (op2_8), (sum_8)), (sum_8)) +#define SET_FLAGS_OSZAPC_ADD_16(op1_16, op2_16, sum_16) \ + SET_FLAGS_OSZAPC_16(ADD_COUT_VEC((op1_16), (op2_16), (sum_16)), (sum_1= 6)) +#define SET_FLAGS_OSZAPC_ADD_32(op1_32, op2_32, sum_32) \ + SET_FLAGS_OSZAPC_32(ADD_COUT_VEC((op1_32), (op2_32), (sum_32)), (sum_3= 2)) + +/* op1, op2, result */ +#define SET_FLAGS_OSZAPC_SUB_8(op1_8, op2_8, diff_8) \ + SET_FLAGS_OSZAPC_8(SUB_COUT_VEC((op1_8), (op2_8), (diff_8)), (diff_8)) +#define SET_FLAGS_OSZAPC_SUB_16(op1_16, op2_16, diff_16) \ + 
SET_FLAGS_OSZAPC_16(SUB_COUT_VEC((op1_16), (op2_16), (diff_16)), (diff= _16)) +#define SET_FLAGS_OSZAPC_SUB_32(op1_32, op2_32, diff_32) \ + SET_FLAGS_OSZAPC_32(SUB_COUT_VEC((op1_32), (op2_32), (diff_32)), (diff= _32)) + +// ******************* +// OSZAP +// ******************* +/* size, carries, result */ +#define SET_FLAGS_OSZAP_SIZE(size, lf_carries, lf_result) { \ + addr_t temp =3D ((lf_carries) & (LF_MASK_AF)) | \ + (((lf_carries) >> (size - 2)) << LF_BIT_PO); \ + if ((size) =3D=3D 32) temp =3D ((lf_carries) & ~(LF_MASK_PDB | LF_MASK= _SD)); \ + else if ((size) =3D=3D 16) temp =3D ((lf_carries) & (LF_MASK_AF)) | ((= lf_carries) << 16); \ + else if ((size) =3D=3D 8) temp =3D ((lf_carries) & (LF_MASK_AF)) | ((= lf_carries) << 24); \ + else VM_PANIC("unimplemented"); = \ + cpu->hvf_x86->lflags.result =3D (addr_t)(int##size##_t)(lf_result); \ + addr_t delta_c =3D (cpu->hvf_x86->lflags.auxbits ^ temp) & LF_MASK_CF;= \ + delta_c ^=3D (delta_c >> 1); \ + cpu->hvf_x86->lflags.auxbits =3D (addr_t)(uint32_t)(temp ^ delta_c); \ +} + +/* carries, result */ +#define SET_FLAGS_OSZAP_8(carries, result) \ + SET_FLAGS_OSZAP_SIZE(8, carries, result) +#define SET_FLAGS_OSZAP_16(carries, result) \ + SET_FLAGS_OSZAP_SIZE(16, carries, result) +#define SET_FLAGS_OSZAP_32(carries, result) \ + SET_FLAGS_OSZAP_SIZE(32, carries, result) + +/* op1, op2, result */ +#define SET_FLAGS_OSZAP_ADD_8(op1_8, op2_8, sum_8) \ + SET_FLAGS_OSZAP_8(ADD_COUT_VEC((op1_8), (op2_8), (sum_8)), (sum_8)) +#define SET_FLAGS_OSZAP_ADD_16(op1_16, op2_16, sum_16) \ + SET_FLAGS_OSZAP_16(ADD_COUT_VEC((op1_16), (op2_16), (sum_16)), (sum_16= )) +#define SET_FLAGS_OSZAP_ADD_32(op1_32, op2_32, sum_32) \ + SET_FLAGS_OSZAP_32(ADD_COUT_VEC((op1_32), (op2_32), (sum_32)), (sum_32= )) + +/* op1, op2, result */ +#define SET_FLAGS_OSZAP_SUB_8(op1_8, op2_8, diff_8) \ + SET_FLAGS_OSZAP_8(SUB_COUT_VEC((op1_8), (op2_8), (diff_8)), (diff_8)) +#define SET_FLAGS_OSZAP_SUB_16(op1_16, op2_16, diff_16) \ + SET_FLAGS_OSZAP_16(SUB_COUT_VEC((op1_16), (op2_16), (diff_16)), (diff_= 16)) +#define SET_FLAGS_OSZAP_SUB_32(op1_32, op2_32, diff_32) \ + SET_FLAGS_OSZAP_32(SUB_COUT_VEC((op1_32), (op2_32), (diff_32)), (diff_= 32)) + +// ******************* +// OSZAxC +// ******************* +/* size, carries, result */ +#define SET_FLAGS_OSZAxC_LOGIC_SIZE(size, lf_result) { \ + bool saved_PF =3D getB_PF(); \ + SET_FLAGS_OSZAPC_SIZE(size, (int##size##_t)(0), lf_result); \ + set_PF(saved_PF); \ +} + +/* result */ +#define SET_FLAGS_OSZAxC_LOGIC_32(result_32) \ + SET_FLAGS_OSZAxC_LOGIC_SIZE(32, (result_32)) + +void lflags_to_rflags(struct CPUState *cpu); +void rflags_to_lflags(struct CPUState *cpu); + +bool get_PF(struct CPUState *cpu); +void set_PF(struct CPUState *cpu, bool val); +bool get_CF(struct CPUState *cpu); +void set_CF(struct CPUState *cpu, bool val); +bool get_AF(struct CPUState *cpu); +void set_AF(struct CPUState *cpu, bool val); +bool get_ZF(struct CPUState *cpu); +void set_ZF(struct CPUState *cpu, bool val); +bool get_SF(struct CPUState *cpu); +void set_SF(struct CPUState *cpu, bool val); +bool get_OF(struct CPUState *cpu); +void set_OF(struct CPUState *cpu, bool val); +void set_OSZAPC(struct CPUState *cpu, uint32_t flags32); + +void SET_FLAGS_OxxxxC(struct CPUState *cpu, uint32_t new_of, uint32_t new_= cf); + +void SET_FLAGS_OSZAPC_SUB32(struct CPUState *cpu, uint32_t v1, uint32_t v2= , uint32_t diff); +void SET_FLAGS_OSZAPC_SUB16(struct CPUState *cpu, uint16_t v1, uint16_t v2= , uint16_t diff); +void SET_FLAGS_OSZAPC_SUB8(struct CPUState *cpu, uint8_t v1, 
uint8_t v2, u= int8_t diff); + +void SET_FLAGS_OSZAPC_ADD32(struct CPUState *cpu, uint32_t v1, uint32_t v2= , uint32_t diff); +void SET_FLAGS_OSZAPC_ADD16(struct CPUState *cpu, uint16_t v1, uint16_t v2= , uint16_t diff); +void SET_FLAGS_OSZAPC_ADD8(struct CPUState *cpu, uint8_t v1, uint8_t v2, u= int8_t diff); + +void SET_FLAGS_OSZAP_SUB32(struct CPUState *cpu, uint32_t v1, uint32_t v2,= uint32_t diff); +void SET_FLAGS_OSZAP_SUB16(struct CPUState *cpu, uint16_t v1, uint16_t v2,= uint16_t diff); +void SET_FLAGS_OSZAP_SUB8(struct CPUState *cpu, uint8_t v1, uint8_t v2, ui= nt8_t diff); + +void SET_FLAGS_OSZAP_ADD32(struct CPUState *cpu, uint32_t v1, uint32_t v2,= uint32_t diff); +void SET_FLAGS_OSZAP_ADD16(struct CPUState *cpu, uint16_t v1, uint16_t v2,= uint16_t diff); +void SET_FLAGS_OSZAP_ADD8(struct CPUState *cpu, uint8_t v1, uint8_t v2, ui= nt8_t diff); + +void SET_FLAGS_OSZAPC_LOGIC32(struct CPUState *cpu, uint32_t diff); +void SET_FLAGS_OSZAPC_LOGIC16(struct CPUState *cpu, uint16_t diff); +void SET_FLAGS_OSZAPC_LOGIC8(struct CPUState *cpu, uint8_t diff); + +void SET_FLAGS_SHR32(struct CPUState *cpu, uint32_t v, int count, uint32_t= res); +void SET_FLAGS_SHR16(struct CPUState *cpu, uint16_t v, int count, uint16_t= res); +void SET_FLAGS_SHR8(struct CPUState *cpu, uint8_t v, int count, uint8_t re= s); + +void SET_FLAGS_SAR32(struct CPUState *cpu, int32_t v, int count, uint32_t = res); +void SET_FLAGS_SAR16(struct CPUState *cpu, int16_t v, int count, uint16_t = res); +void SET_FLAGS_SAR8(struct CPUState *cpu, int8_t v, int count, uint8_t res= ); + +void SET_FLAGS_SHL32(struct CPUState *cpu, uint32_t v, int count, uint32_t= res); +void SET_FLAGS_SHL16(struct CPUState *cpu, uint16_t v, int count, uint16_t= res); +void SET_FLAGS_SHL8(struct CPUState *cpu, uint8_t v, int count, uint8_t re= s); + +#endif /* __X86_FLAGS_H__ */ diff --git a/target/i386/hvf-utils/x86_gen.h b/target/i386/hvf-utils/x86_ge= n.h new file mode 100644 index 0000000000..e4340fa244 --- /dev/null +++ b/target/i386/hvf-utils/x86_gen.h @@ -0,0 +1,53 @@ +/* + * Copyright (C) 2016 Veertu Inc, + * Copyright (C) 2017 Google Inc, + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 or + * (at your option) version 3 of the License. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see . + */ +#ifndef __X86_GEN_H__ +#define __X86_GEN_H__ + +#include +#include +#include "qemu-common.h" + +typedef uint64_t addr_t; + +#define VM_PANIC(x) {\ + printf("%s\n", x); \ + abort(); \ +} + +#define VM_PANIC_ON(x) {\ + if (x) { \ + printf("%s\n", #x); \ + abort(); \ + } \ +} + +#define VM_PANIC_EX(...) {\ + printf(__VA_ARGS__); \ + abort(); \ +} + +#define VM_PANIC_ON_EX(x, ...) 
{\ + if (x) { \ + printf(__VA_ARGS__); \ + abort(); \ + } \ +} + +#define ZERO_INIT(obj) memset((void *) &obj, 0, sizeof(obj)) + +#endif diff --git a/target/i386/hvf-utils/x86_mmu.c b/target/i386/hvf-utils/x86_mm= u.c new file mode 100644 index 0000000000..00fae735be --- /dev/null +++ b/target/i386/hvf-utils/x86_mmu.c @@ -0,0 +1,254 @@ +/* + * Copyright (C) 2016 Veertu Inc, + * Copyright (C) 2017 Google Inc, + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 or + * (at your option) version 3 of the License. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see . + */ +#include "qemu/osdep.h" + +#include "qemu-common.h" +#include "x86.h" +#include "x86_mmu.h" +#include "string.h" +#include "vmcs.h" +#include "vmx.h" + +#include "memory.h" +#include "exec/address-spaces.h" + +#define pte_present(pte) (pte & PT_PRESENT) +#define pte_write_access(pte) (pte & PT_WRITE) +#define pte_user_access(pte) (pte & PT_USER) +#define pte_exec_access(pte) (!(pte & PT_NX)) + +#define pte_large_page(pte) (pte & PT_PS) +#define pte_global_access(pte) (pte & PT_GLOBAL) + +#define PAE_CR3_MASK (~0x1fllu) +#define LEGACY_CR3_MASK (0xffffffff) + +#define LEGACY_PTE_PAGE_MASK (0xffffffffllu << 12) +#define PAE_PTE_PAGE_MASK ((-1llu << 12) & ((1llu << 52) - 1)) +#define PAE_PTE_LARGE_PAGE_MASK ((-1llu << (21)) & ((1llu << 52) - 1)) + +struct gpt_translation { + addr_t gva; + addr_t gpa; + int err_code; + uint64_t pte[5]; + bool write_access; + bool user_access; + bool exec_access; +}; + +static int gpt_top_level(struct CPUState *cpu, bool pae) +{ + if (!pae) + return 2; + if (x86_is_long_mode(cpu)) + return 4; + + return 3; +} + +static inline int gpt_entry(addr_t addr, int level, bool pae) +{ + int level_shift =3D pae ? 9 : 10; + return (addr >> (level_shift * (level - 1) + 12)) & ((1 << level_shift= ) - 1); +} + +static inline int pte_size(bool pae) +{ + return pae ? 8 : 4; +} + + +static bool get_pt_entry(struct CPUState *cpu, struct gpt_translation *pt,= int level, bool pae) +{ + int index; + uint64_t pte =3D 0; + addr_t page_mask =3D pae ? PAE_PTE_PAGE_MASK : LEGACY_PTE_PAGE_MASK; + addr_t gpa =3D pt->pte[level] & page_mask; + + if (level =3D=3D 3 && !x86_is_long_mode(cpu)) + gpa =3D pt->pte[level]; + + index =3D gpt_entry(pt->gva, level, pae); + address_space_rw(&address_space_memory, gpa + index * pte_size(pae), M= EMTXATTRS_UNSPECIFIED, (uint8_t *)&pte, pte_size(pae), 0); + + pt->pte[level - 1] =3D pte; + + return true; +} + +/* test page table entry */ +static bool test_pt_entry(struct CPUState *cpu, struct gpt_translation *pt= , int level, bool *is_large, bool pae) +{ + uint64_t pte =3D pt->pte[level]; + =20 + if (pt->write_access) + pt->err_code |=3D MMU_PAGE_WT; + if (pt->user_access) + pt->err_code |=3D MMU_PAGE_US; + if (pt->exec_access) + pt->err_code |=3D MMU_PAGE_NX; + + if (!pte_present(pte)) { + addr_t page_mask =3D pae ? 
PAE_PTE_PAGE_MASK : LEGACY_PTE_PAGE_MAS= K; + return false; + } + =20 + if (pae && !x86_is_long_mode(cpu) && 2 =3D=3D level) + goto exit; + =20 + if (1 =3D=3D level && pte_large_page(pte)) { + pt->err_code |=3D MMU_PAGE_PT; + *is_large =3D true; + } + if (!level) + pt->err_code |=3D MMU_PAGE_PT; + =20 + addr_t cr0 =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0); + /* check protection */ + if (cr0 & CR0_WP) { + if (pt->write_access && !pte_write_access(pte)) { + return false; + } + } + + if (pt->user_access && !pte_user_access(pte)) { + return false; + } + + if (pae && pt->exec_access && !pte_exec_access(pte)) { + return false; + } + =20 +exit: + /* TODO: check reserved bits */ + return true; +} + +static inline uint64_t pse_pte_to_page(uint64_t pte) +{ + return ((pte & 0x1fe000) << 19) | (pte & 0xffc00000); +} + +static inline uint64_t large_page_gpa(struct gpt_translation *pt, bool pae) +{ + VM_PANIC_ON(!pte_large_page(pt->pte[1])) + /* 2Mb large page */ + if (pae) + return (pt->pte[1] & PAE_PTE_LARGE_PAGE_MASK) | (pt->gva & 0x1ffff= f); + =20 + /* 4Mb large page */ + return pse_pte_to_page(pt->pte[1]) | (pt->gva & 0x3fffff); +} + + + +static bool walk_gpt(struct CPUState *cpu, addr_t addr, int err_code, stru= ct gpt_translation* pt, bool pae) +{ + int top_level, level; + bool is_large =3D false; + addr_t cr3 =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_CR3); + addr_t page_mask =3D pae ? PAE_PTE_PAGE_MASK : LEGACY_PTE_PAGE_MASK; + =20 + memset(pt, 0, sizeof(*pt)); + top_level =3D gpt_top_level(cpu, pae); + + pt->pte[top_level] =3D pae ? (cr3 & PAE_CR3_MASK) : (cr3 & LEGACY_CR3_= MASK); + pt->gva =3D addr; + pt->user_access =3D (err_code & MMU_PAGE_US); + pt->write_access =3D (err_code & MMU_PAGE_WT); + pt->exec_access =3D (err_code & MMU_PAGE_NX); + =20 + for (level =3D top_level; level > 0; level--) { + get_pt_entry(cpu, pt, level, pae); + + if (!test_pt_entry(cpu, pt, level - 1, &is_large, pae)) { + return false; + } + + if (is_large) + break; + } + + if (!is_large) + pt->gpa =3D (pt->pte[0] & page_mask) | (pt->gva & 0xfff); + else + pt->gpa =3D large_page_gpa(pt, pae); + + return true; +} + + +bool mmu_gva_to_gpa(struct CPUState *cpu, addr_t gva, addr_t *gpa) +{ + bool res; + struct gpt_translation pt; + int err_code =3D 0; + + if (!x86_is_paging_mode(cpu)) { + *gpa =3D gva; + return true; + } + + res =3D walk_gpt(cpu, gva, err_code, &pt, x86_is_pae_enabled(cpu)); + if (res) { + *gpa =3D pt.gpa; + return true; + } + + return false; +} + +void vmx_write_mem(struct CPUState* cpu, addr_t gva, void *data, int bytes) +{ + addr_t gpa; + + while (bytes > 0) { + // copy page + int copy =3D MIN(bytes, 0x1000 - (gva & 0xfff)); + + if (!mmu_gva_to_gpa(cpu, gva, &gpa)) { + VM_PANIC_ON_EX(1, "%s: mmu_gva_to_gpa %llx failed\n", __FUNCTI= ON__, gva); + } else { + address_space_rw(&address_space_memory, gpa, MEMTXATTRS_UNSPEC= IFIED, data, copy, 1); + } + + bytes -=3D copy; + gva +=3D copy; + data +=3D copy; + } +} + +void vmx_read_mem(struct CPUState* cpu, void *data, addr_t gva, int bytes) +{ + addr_t gpa; + + while (bytes > 0) { + // copy page + int copy =3D MIN(bytes, 0x1000 - (gva & 0xfff)); + + if (!mmu_gva_to_gpa(cpu, gva, &gpa)) { + VM_PANIC_ON_EX(1, "%s: mmu_gva_to_gpa %llx failed\n", __FUNCTI= ON__, gva); + } + address_space_rw(&address_space_memory, gpa, MEMTXATTRS_UNSPECIFIE= D, data, copy, 0); + + bytes -=3D copy; + gva +=3D copy; + data +=3D copy; + } +} diff --git a/target/i386/hvf-utils/x86_mmu.h b/target/i386/hvf-utils/x86_mm= u.h new file mode 100644 index 0000000000..f794d04b3d --- /dev/null +++ 
b/target/i386/hvf-utils/x86_mmu.h @@ -0,0 +1,45 @@ +/* + * Copyright (C) 2016 Veertu Inc, + * Copyright (C) 2017 Google Inc, + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 or + * (at your option) version 3 of the License. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see . + */ +#ifndef __X86_MMU_H__ +#define __X86_MMU_H__ + +#include "x86_gen.h" + +#define PT_PRESENT (1 << 0) +#define PT_WRITE (1 << 1) +#define PT_USER (1 << 2) +#define PT_WT (1 << 3) +#define PT_CD (1 << 4) +#define PT_ACCESSED (1 << 5) +#define PT_DIRTY (1 << 6) +#define PT_PS (1 << 7) +#define PT_GLOBAL (1 << 8) +#define PT_NX (1llu << 63) + +// error codes +#define MMU_PAGE_PT (1 << 0) +#define MMU_PAGE_WT (1 << 1) +#define MMU_PAGE_US (1 << 2) +#define MMU_PAGE_NX (1 << 3) + +bool mmu_gva_to_gpa(struct CPUState *cpu, addr_t gva, addr_t *gpa); + +void vmx_write_mem(struct CPUState* cpu, addr_t gva, void *data, int bytes= ); +void vmx_read_mem(struct CPUState* cpu, void *data, addr_t gva, int bytes); + +#endif /* __X86_MMU_H__ */ diff --git a/target/i386/hvf-utils/x86hvf.c b/target/i386/hvf-utils/x86hvf.c new file mode 100644 index 0000000000..d5668df37f --- /dev/null +++ b/target/i386/hvf-utils/x86hvf.c @@ -0,0 +1,501 @@ +/* + * Copyright (c) 2003-2008 Fabrice Bellard + * Copyright (C) 2016 Veertu Inc, + * Copyright (C) 2017 Google Inc, + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 or + * (at your option) version 3 of the License. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see . 
+ */ + +#include "qemu/osdep.h" +#include "qemu-common.h" + +#include "x86hvf.h" +#include "vmx.h" +#include "vmcs.h" +#include "cpu.h" +#include "x86_descr.h" +#include "x86_decode.h" + +#include "hw/i386/apic_internal.h" + +#include +#include +#include +#include +#include + +void hvf_cpu_synchronize_state(struct CPUState* cpu_state); + +void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg, Se= gmentCache *qseg, bool is_tr) +{ + vmx_seg->sel =3D qseg->selector; + vmx_seg->base =3D qseg->base; + vmx_seg->limit =3D qseg->limit; + + if (!qseg->selector && !x86_is_real(cpu) && !is_tr) { + // the TR register is usable after processor reset despite having = a null selector + vmx_seg->ar =3D 1 << 16; + return; + } + vmx_seg->ar =3D (qseg->flags >> DESC_TYPE_SHIFT) & 0xf; + vmx_seg->ar |=3D ((qseg->flags >> DESC_G_SHIFT) & 1) << 15; + vmx_seg->ar |=3D ((qseg->flags >> DESC_B_SHIFT) & 1) << 14; + vmx_seg->ar |=3D ((qseg->flags >> DESC_L_SHIFT) & 1) << 13; + vmx_seg->ar |=3D ((qseg->flags >> DESC_AVL_SHIFT) & 1) << 12; + vmx_seg->ar |=3D ((qseg->flags >> DESC_P_SHIFT) & 1) << 7; + vmx_seg->ar |=3D ((qseg->flags >> DESC_DPL_SHIFT) & 3) << 5; + vmx_seg->ar |=3D ((qseg->flags >> DESC_S_SHIFT) & 1) << 4; +} + +void hvf_get_segment(SegmentCache *qseg, struct vmx_segment *vmx_seg) +{ + qseg->limit =3D vmx_seg->limit; + qseg->base =3D vmx_seg->base; + qseg->selector =3D vmx_seg->sel; + qseg->flags =3D ((vmx_seg->ar & 0xf) << DESC_TYPE_SHIFT) | + (((vmx_seg->ar >> 4) & 1) << DESC_S_SHIFT) | + (((vmx_seg->ar >> 5) & 3) << DESC_DPL_SHIFT) | + (((vmx_seg->ar >> 7) & 1) << DESC_P_SHIFT) | + (((vmx_seg->ar >> 12) & 1) << DESC_AVL_SHIFT) | + (((vmx_seg->ar >> 13) & 1) << DESC_L_SHIFT) | + (((vmx_seg->ar >> 14) & 1) << DESC_B_SHIFT) | + (((vmx_seg->ar >> 15) & 1) << DESC_G_SHIFT); +} + +void hvf_put_xsave(CPUState *cpu_state) +{ + + int x; + struct hvf_xsave_buf *xsave; + =20 + xsave =3D X86_CPU(cpu_state)->env.kvm_xsave_buf; + memset(xsave, 0, sizeof(*xsave));=20 + =20 + memcpy(&xsave->data[4], &X86_CPU(cpu_state)->env.fpdp, sizeof(X86_CPU(= cpu_state)->env.fpdp)); + memcpy(&xsave->data[2], &X86_CPU(cpu_state)->env.fpip, sizeof(X86_CPU(= cpu_state)->env.fpip)); + memcpy(&xsave->data[8], &X86_CPU(cpu_state)->env.fpregs, sizeof(X86_CP= U(cpu_state)->env.fpregs)); + memcpy(&xsave->data[144], &X86_CPU(cpu_state)->env.ymmh_regs, sizeof(X= 86_CPU(cpu_state)->env.ymmh_regs)); + memcpy(&xsave->data[288], &X86_CPU(cpu_state)->env.zmmh_regs, sizeof(X= 86_CPU(cpu_state)->env.zmmh_regs)); + memcpy(&xsave->data[272], &X86_CPU(cpu_state)->env.opmask_regs, sizeof= (X86_CPU(cpu_state)->env.opmask_regs)); + memcpy(&xsave->data[240], &X86_CPU(cpu_state)->env.bnd_regs, sizeof(X8= 6_CPU(cpu_state)->env.bnd_regs)); + memcpy(&xsave->data[256], &X86_CPU(cpu_state)->env.bndcs_regs, sizeof(= X86_CPU(cpu_state)->env.bndcs_regs)); + memcpy(&xsave->data[416], &X86_CPU(cpu_state)->env.hi16_zmm_regs, size= of(X86_CPU(cpu_state)->env.hi16_zmm_regs)); + =20 + xsave->data[0] =3D (uint16_t)X86_CPU(cpu_state)->env.fpuc; + xsave->data[0] |=3D (X86_CPU(cpu_state)->env.fpus << 16); + xsave->data[0] |=3D (X86_CPU(cpu_state)->env.fpstt & 7) << 11; + =20 + for (x =3D 0; x < 8; ++x) + xsave->data[1] |=3D ((!X86_CPU(cpu_state)->env.fptags[x]) << x); + xsave->data[1] |=3D (uint32_t)(X86_CPU(cpu_state)->env.fpop << 16); + =20 + memcpy(&xsave->data[40], &X86_CPU(cpu_state)->env.xmm_regs, sizeof(X86= _CPU(cpu_state)->env.xmm_regs)); + =20 + xsave->data[6] =3D X86_CPU(cpu_state)->env.mxcsr; + *(uint64_t *)&xsave->data[128] =3D 
X86_CPU(cpu_state)->env.xstate_bv; + =20 + if (hv_vcpu_write_fpstate(cpu_state->hvf_fd, xsave->data, 4096)){ + abort(); + } +} + +void vmx_update_tpr(CPUState *cpu); +void hvf_put_segments(CPUState *cpu_state) +{ + CPUX86State *env =3D &X86_CPU(cpu_state)->env; + struct vmx_segment seg; + =20 + wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit); + wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_BASE, env->idt.base); + + wvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_LIMIT, env->gdt.limit); + wvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_BASE, env->gdt.base); + + //wvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR2, env->cr[2]); + wvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3, env->cr[3]); + vmx_update_tpr(cpu_state); + wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER, env->efer); + + macvm_set_cr4(cpu_state->hvf_fd, env->cr[4]); + macvm_set_cr0(cpu_state->hvf_fd, env->cr[0]); + + hvf_set_segment(cpu_state, &seg, &env->segs[R_CS], false); + vmx_write_segment_descriptor(cpu_state, &seg, REG_SEG_CS); + =20 + hvf_set_segment(cpu_state, &seg, &env->segs[R_DS], false); + vmx_write_segment_descriptor(cpu_state, &seg, REG_SEG_DS); + + hvf_set_segment(cpu_state, &seg, &env->segs[R_ES], false); + vmx_write_segment_descriptor(cpu_state, &seg, REG_SEG_ES); + + hvf_set_segment(cpu_state, &seg, &env->segs[R_SS], false); + vmx_write_segment_descriptor(cpu_state, &seg, REG_SEG_SS); + + hvf_set_segment(cpu_state, &seg, &env->segs[R_FS], false); + vmx_write_segment_descriptor(cpu_state, &seg, REG_SEG_FS); + + hvf_set_segment(cpu_state, &seg, &env->segs[R_GS], false); + vmx_write_segment_descriptor(cpu_state, &seg, REG_SEG_GS); + + hvf_set_segment(cpu_state, &seg, &env->tr, true); + vmx_write_segment_descriptor(cpu_state, &seg, REG_SEG_TR); + + hvf_set_segment(cpu_state, &seg, &env->ldt, false); + vmx_write_segment_descriptor(cpu_state, &seg, REG_SEG_LDTR); + =20 + hv_vcpu_flush(cpu_state->hvf_fd); +} + =20 +void hvf_put_msrs(CPUState *cpu_state) +{ + CPUX86State *env =3D &X86_CPU(cpu_state)->env; + + hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS, env->sysent= er_cs); + hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP, env->sysen= ter_esp); + hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_EIP, env->sysen= ter_eip); + + hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_STAR, env->star); + +#ifdef TARGET_X86_64 + hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_CSTAR, env->cstar); + hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_KERNELGSBASE, env->kernelgsba= se); + hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_FMASK, env->fmask); + hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_LSTAR, env->lstar); +#endif + + hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_GSBASE, env->segs[R_GS].base); + hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_FSBASE, env->segs[R_FS].base); + + // if (!osx_is_sierra()) + // wvmcs(cpu_state->hvf_fd, VMCS_TSC_OFFSET, env->tsc - rdtscp()); + hv_vm_sync_tsc(env->tsc); +} + + +void hvf_get_xsave(CPUState *cpu_state) +{ + int x; + struct hvf_xsave_buf *xsave; + =20 + xsave =3D X86_CPU(cpu_state)->env.kvm_xsave_buf; + =20 + if (hv_vcpu_read_fpstate(cpu_state->hvf_fd, xsave->data, 4096)) { + abort(); + } + + memcpy(&X86_CPU(cpu_state)->env.fpdp, &xsave->data[4], sizeof(X86_CPU(= cpu_state)->env.fpdp)); + memcpy(&X86_CPU(cpu_state)->env.fpip, &xsave->data[2], sizeof(X86_CPU(= cpu_state)->env.fpip)); + memcpy(&X86_CPU(cpu_state)->env.fpregs, &xsave->data[8], sizeof(X86_CP= U(cpu_state)->env.fpregs)); + memcpy(&X86_CPU(cpu_state)->env.ymmh_regs, &xsave->data[144], sizeof(X= 86_CPU(cpu_state)->env.ymmh_regs)); + 
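    /*
     * A note on the indexing used in hvf_put_xsave()/hvf_get_xsave(): the
     * xsave->data[] indices appear to be 32-bit word offsets into the
     * standard (non-compacted) 4 KiB XSAVE image, e.g. data[0] is byte 0
     * (FCW/FSW), data[6] is byte 24 (MXCSR), data[8] is byte 32 (ST0-7),
     * data[40] is byte 160 (XMM0-15), data[128] is byte 512 (the XSTATE_BV
     * header field), data[144] is byte 576 (YMM_Hi128), data[240]/data[256]
     * are bytes 960/1024 (MPX BNDREGS/BNDCSR), data[272] is byte 1088
     * (AVX-512 opmask), data[288] is byte 1152 (ZMM_Hi256) and data[416] is
     * byte 1664 (Hi16_ZMM). A hypothetical helper making the word/byte
     * relationship explicit could look like:
     *
     *     static inline uint32_t *xsave_at(struct hvf_xsave_buf *buf,
     *                                      size_t byte_offset)
     *     {
     *         return &buf->data[byte_offset / sizeof(uint32_t)];
     *     }
     */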
memcpy(&X86_CPU(cpu_state)->env.zmmh_regs, &xsave->data[288], sizeof(X= 86_CPU(cpu_state)->env.zmmh_regs)); + memcpy(&X86_CPU(cpu_state)->env.opmask_regs, &xsave->data[272], sizeof= (X86_CPU(cpu_state)->env.opmask_regs)); + memcpy(&X86_CPU(cpu_state)->env.bnd_regs, &xsave->data[240], sizeof(X8= 6_CPU(cpu_state)->env.bnd_regs)); + memcpy(&X86_CPU(cpu_state)->env.bndcs_regs, &xsave->data[256], sizeof(= X86_CPU(cpu_state)->env.bndcs_regs)); + memcpy(&X86_CPU(cpu_state)->env.hi16_zmm_regs, &xsave->data[416], size= of(X86_CPU(cpu_state)->env.hi16_zmm_regs)); + =20 + =20 + X86_CPU(cpu_state)->env.fpuc =3D (uint16_t)xsave->data[0]; + X86_CPU(cpu_state)->env.fpus =3D (uint16_t)(xsave->data[0] >> 16); + X86_CPU(cpu_state)->env.fpstt =3D (X86_CPU(cpu_state)->env.fpus >> 11)= & 7; + X86_CPU(cpu_state)->env.fpop =3D (uint16_t)(xsave->data[1] >> 16); + =20 + for (x =3D 0; x < 8; ++x) + X86_CPU(cpu_state)->env.fptags[x] =3D + ((((uint16_t)xsave->data[1] >> x) & 1) =3D=3D 0); + =20 + memcpy(&X86_CPU(cpu_state)->env.xmm_regs, &xsave->data[40], sizeof(X86= _CPU(cpu_state)->env.xmm_regs)); + + X86_CPU(cpu_state)->env.mxcsr =3D xsave->data[6]; + X86_CPU(cpu_state)->env.xstate_bv =3D *(uint64_t *)&xsave->data[128]; +} + +void hvf_get_segments(CPUState *cpu_state) +{ + CPUX86State *env =3D &X86_CPU(cpu_state)->env; + + struct vmx_segment seg; + + env->interrupt_injected =3D -1; + + vmx_read_segment_descriptor(cpu_state, &seg, REG_SEG_CS); + hvf_get_segment(&env->segs[R_CS], &seg); + =20 + vmx_read_segment_descriptor(cpu_state, &seg, REG_SEG_DS); + hvf_get_segment(&env->segs[R_DS], &seg); + + vmx_read_segment_descriptor(cpu_state, &seg, REG_SEG_ES); + hvf_get_segment(&env->segs[R_ES], &seg); + + vmx_read_segment_descriptor(cpu_state, &seg, REG_SEG_FS); + hvf_get_segment(&env->segs[R_FS], &seg); + + vmx_read_segment_descriptor(cpu_state, &seg, REG_SEG_GS); + hvf_get_segment(&env->segs[R_GS], &seg); + + vmx_read_segment_descriptor(cpu_state, &seg, REG_SEG_SS); + hvf_get_segment(&env->segs[R_SS], &seg); + + vmx_read_segment_descriptor(cpu_state, &seg, REG_SEG_TR); + hvf_get_segment(&env->tr, &seg); + + vmx_read_segment_descriptor(cpu_state, &seg, REG_SEG_LDTR); + hvf_get_segment(&env->ldt, &seg); + + env->idt.limit =3D rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_LIMIT); + env->idt.base =3D rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_BASE); + env->gdt.limit =3D rvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_LIMIT); + env->gdt.base =3D rvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_BASE); + + env->cr[0] =3D rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR0); + env->cr[2] =3D 0; + env->cr[3] =3D rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3); + env->cr[4] =3D rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR4); + =20 + env->efer =3D rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER); +} + +void hvf_get_msrs(CPUState *cpu_state) +{ + CPUX86State *env =3D &X86_CPU(cpu_state)->env; + uint64_t tmp; + =20 + hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS, &tmp); + env->sysenter_cs =3D tmp; + =20 + hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP, &tmp); + env->sysenter_esp =3D tmp; + + hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_EIP, &tmp); + env->sysenter_eip =3D tmp; + + hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_STAR, &env->star); + +#ifdef TARGET_X86_64 + hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_CSTAR, &env->cstar); + hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_KERNELGSBASE, &env->kernelgsba= se); + hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_FMASK, &env->fmask); + hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_LSTAR, &env->lstar); +#endif + + 
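    /*
     * How the TSC appears to be handled in this file: hvf_put_msrs()
     * calls hv_vm_sync_tsc(env->tsc), which presumably has
     * Hypervisor.framework program a VMCS TSC offset so that
     *
     *     guest_tsc =3D host_tsc + VMCS_TSC_OFFSET
     *
     * The read-back just below therefore reconstructs the guest view as
     * rdtscp() + rvmcs(cpu_state->hvf_fd, VMCS_TSC_OFFSET).
     */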
hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_APICBASE, &tmp); + =20 + env->tsc =3D rdtscp() + rvmcs(cpu_state->hvf_fd, VMCS_TSC_OFFSET); +} + +int hvf_put_registers(CPUState *cpu_state) +{ + X86CPU *x86cpu =3D X86_CPU(cpu_state); + CPUX86State *env =3D &x86cpu->env; + + wreg(cpu_state->hvf_fd, HV_X86_RAX, env->regs[R_EAX]); + wreg(cpu_state->hvf_fd, HV_X86_RBX, env->regs[R_EBX]); + wreg(cpu_state->hvf_fd, HV_X86_RCX, env->regs[R_ECX]); + wreg(cpu_state->hvf_fd, HV_X86_RDX, env->regs[R_EDX]); + wreg(cpu_state->hvf_fd, HV_X86_RBP, env->regs[R_EBP]); + wreg(cpu_state->hvf_fd, HV_X86_RSP, env->regs[R_ESP]); + wreg(cpu_state->hvf_fd, HV_X86_RSI, env->regs[R_ESI]); + wreg(cpu_state->hvf_fd, HV_X86_RDI, env->regs[R_EDI]); + wreg(cpu_state->hvf_fd, HV_X86_R8, env->regs[8]); + wreg(cpu_state->hvf_fd, HV_X86_R9, env->regs[9]); + wreg(cpu_state->hvf_fd, HV_X86_R10, env->regs[10]); + wreg(cpu_state->hvf_fd, HV_X86_R11, env->regs[11]); + wreg(cpu_state->hvf_fd, HV_X86_R12, env->regs[12]); + wreg(cpu_state->hvf_fd, HV_X86_R13, env->regs[13]); + wreg(cpu_state->hvf_fd, HV_X86_R14, env->regs[14]); + wreg(cpu_state->hvf_fd, HV_X86_R15, env->regs[15]); + wreg(cpu_state->hvf_fd, HV_X86_RFLAGS, env->eflags); + wreg(cpu_state->hvf_fd, HV_X86_RIP, env->eip); + =20 + wreg(cpu_state->hvf_fd, HV_X86_XCR0, env->xcr0); + =20 + hvf_put_xsave(cpu_state); + =20 + hvf_put_segments(cpu_state); + =20 + hvf_put_msrs(cpu_state); + =20 + wreg(cpu_state->hvf_fd, HV_X86_DR0, env->dr[0]); + wreg(cpu_state->hvf_fd, HV_X86_DR1, env->dr[1]); + wreg(cpu_state->hvf_fd, HV_X86_DR2, env->dr[2]); + wreg(cpu_state->hvf_fd, HV_X86_DR3, env->dr[3]); + wreg(cpu_state->hvf_fd, HV_X86_DR4, env->dr[4]); + wreg(cpu_state->hvf_fd, HV_X86_DR5, env->dr[5]); + wreg(cpu_state->hvf_fd, HV_X86_DR6, env->dr[6]); + wreg(cpu_state->hvf_fd, HV_X86_DR7, env->dr[7]); + =20 + return 0; +} + +int hvf_get_registers(CPUState *cpu_state) +{ + X86CPU *x86cpu =3D X86_CPU(cpu_state); + CPUX86State *env =3D &x86cpu->env; + + + env->regs[R_EAX] =3D rreg(cpu_state->hvf_fd, HV_X86_RAX); + env->regs[R_EBX] =3D rreg(cpu_state->hvf_fd, HV_X86_RBX); + env->regs[R_ECX] =3D rreg(cpu_state->hvf_fd, HV_X86_RCX); + env->regs[R_EDX] =3D rreg(cpu_state->hvf_fd, HV_X86_RDX); + env->regs[R_EBP] =3D rreg(cpu_state->hvf_fd, HV_X86_RBP); + env->regs[R_ESP] =3D rreg(cpu_state->hvf_fd, HV_X86_RSP); + env->regs[R_ESI] =3D rreg(cpu_state->hvf_fd, HV_X86_RSI); + env->regs[R_EDI] =3D rreg(cpu_state->hvf_fd, HV_X86_RDI); + env->regs[8] =3D rreg(cpu_state->hvf_fd, HV_X86_R8); + env->regs[9] =3D rreg(cpu_state->hvf_fd, HV_X86_R9); + env->regs[10] =3D rreg(cpu_state->hvf_fd, HV_X86_R10); + env->regs[11] =3D rreg(cpu_state->hvf_fd, HV_X86_R11); + env->regs[12] =3D rreg(cpu_state->hvf_fd, HV_X86_R12); + env->regs[13] =3D rreg(cpu_state->hvf_fd, HV_X86_R13); + env->regs[14] =3D rreg(cpu_state->hvf_fd, HV_X86_R14); + env->regs[15] =3D rreg(cpu_state->hvf_fd, HV_X86_R15); + =20 + env->eflags =3D rreg(cpu_state->hvf_fd, HV_X86_RFLAGS); + env->eip =3D rreg(cpu_state->hvf_fd, HV_X86_RIP); + =20 + hvf_get_xsave(cpu_state); + env->xcr0 =3D rreg(cpu_state->hvf_fd, HV_X86_XCR0); + =20 + hvf_get_segments(cpu_state); + hvf_get_msrs(cpu_state); + =20 + env->dr[0] =3D rreg(cpu_state->hvf_fd, HV_X86_DR0); + env->dr[1] =3D rreg(cpu_state->hvf_fd, HV_X86_DR1); + env->dr[2] =3D rreg(cpu_state->hvf_fd, HV_X86_DR2); + env->dr[3] =3D rreg(cpu_state->hvf_fd, HV_X86_DR3); + env->dr[4] =3D rreg(cpu_state->hvf_fd, HV_X86_DR4); + env->dr[5] =3D rreg(cpu_state->hvf_fd, HV_X86_DR5); + env->dr[6] =3D rreg(cpu_state->hvf_fd, 
HV_X86_DR6); + env->dr[7] =3D rreg(cpu_state->hvf_fd, HV_X86_DR7); + =20 + return 0; +} + +static void vmx_set_int_window_exiting(CPUState *cpu) +{ + uint64_t val; + val =3D rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS); + wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val | VMCS_PRI_PROC_BASE= D_CTLS_INT_WINDOW_EXITING); +} + +void vmx_clear_int_window_exiting(CPUState *cpu) +{ + uint64_t val; + val =3D rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS); + wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val & ~VMCS_PRI_PROC_BAS= ED_CTLS_INT_WINDOW_EXITING); +} + +#define NMI_VEC 2 + +void hvf_inject_interrupts(CPUState *cpu_state) +{ + X86CPU *x86cpu =3D X86_CPU(cpu_state); + int allow_nmi =3D !(rvmcs(cpu_state->hvf_fd, VMCS_GUEST_INTERRUPTIBILI= TY) & VMCS_INTERRUPTIBILITY_NMI_BLOCKING); + + uint64_t idt_info =3D rvmcs(cpu_state->hvf_fd, VMCS_IDT_VECTORING_INFO= ); + uint64_t info =3D 0; + =20 + if (idt_info & VMCS_IDT_VEC_VALID) { + uint8_t vector =3D idt_info & 0xff; + uint64_t intr_type =3D idt_info & VMCS_INTR_T_MASK; + info =3D idt_info; + =20 + uint64_t reason =3D rvmcs(cpu_state->hvf_fd, VMCS_EXIT_REASON); + if (intr_type =3D=3D VMCS_INTR_T_NMI && reason !=3D EXIT_REASON_TA= SK_SWITCH) { + allow_nmi =3D 1; + vmx_clear_nmi_blocking(cpu_state); + } + =20 + if ((allow_nmi || intr_type !=3D VMCS_INTR_T_NMI)) { + info &=3D ~(1 << 12); /* clear undefined bit */ + if (intr_type =3D=3D VMCS_INTR_T_SWINTR || + intr_type =3D=3D VMCS_INTR_T_PRIV_SWEXCEPTION || + intr_type =3D=3D VMCS_INTR_T_SWEXCEPTION) { + uint64_t ins_len =3D rvmcs(cpu_state->hvf_fd, VMCS_EXIT_IN= STRUCTION_LENGTH); + wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INST_LENGTH, ins_len); + } + if (vector =3D=3D EXCEPTION_BP || vector =3D=3D EXCEPTION_OF) { + /* + * VT-x requires #BP and #OF to be injected as software + * exceptions. 
+ */ + info &=3D ~VMCS_INTR_T_MASK; + info |=3D VMCS_INTR_T_SWEXCEPTION; + uint64_t ins_len =3D rvmcs(cpu_state->hvf_fd, VMCS_EXIT_IN= STRUCTION_LENGTH); + wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INST_LENGTH, ins_len); + } + =20 + uint64_t err =3D 0; + if (idt_info & VMCS_INTR_DEL_ERRCODE) { + err =3D rvmcs(cpu_state->hvf_fd, VMCS_IDT_VECTORING_ERROR); + wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_EXCEPTION_ERROR, err); + } + //printf("reinject %lx err %d\n", info, err); + wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info); + }; + } + + if (cpu_state->interrupt_request & CPU_INTERRUPT_NMI) { + if (allow_nmi && !(info & VMCS_INTR_VALID)) { + cpu_state->interrupt_request &=3D ~CPU_INTERRUPT_NMI; + info =3D VMCS_INTR_VALID | VMCS_INTR_T_NMI | NMI_VEC; + wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info); + } else { + vmx_set_nmi_window_exiting(cpu_state); + } + } + + if (cpu_state->hvf_x86->interruptable && (cpu_state->interrupt_request= & CPU_INTERRUPT_HARD) && + (EFLAGS(cpu_state) & IF_MASK) && !(info & VMCS_INTR_VALID)) { + int line =3D cpu_get_pic_interrupt(&x86cpu->env); + cpu_state->interrupt_request &=3D ~CPU_INTERRUPT_HARD; + if (line >=3D 0) + wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, line | VMCS_INT= R_VALID | VMCS_INTR_T_HWINTR); + } + if (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) + vmx_set_int_window_exiting(cpu_state); +} + +int hvf_process_events(CPUState *cpu_state) +{ + X86CPU *cpu =3D X86_CPU(cpu_state); + CPUX86State *env =3D &cpu->env; + =20 + EFLAGS(cpu_state) =3D rreg(cpu_state->hvf_fd, HV_X86_RFLAGS); + + if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) { + hvf_cpu_synchronize_state(cpu_state); + do_cpu_init(cpu); + } + + if (cpu_state->interrupt_request & CPU_INTERRUPT_POLL) { + cpu_state->interrupt_request &=3D ~CPU_INTERRUPT_POLL; + apic_poll_irq(cpu->apic_state); + } + if (((cpu_state->interrupt_request & CPU_INTERRUPT_HARD) && (EFLAGS(cp= u_state) & IF_MASK)) || + (cpu_state->interrupt_request & CPU_INTERRUPT_NMI)) { + cpu_state->halted =3D 0; + } + if (cpu_state->interrupt_request & CPU_INTERRUPT_SIPI) { + hvf_cpu_synchronize_state(cpu_state); + do_cpu_sipi(cpu); + } + if (cpu_state->interrupt_request & CPU_INTERRUPT_TPR) { + cpu_state->interrupt_request &=3D ~CPU_INTERRUPT_TPR; + hvf_cpu_synchronize_state(cpu_state); + apic_handle_tpr_access_report(cpu->apic_state, env->eip, + env->tpr_access_type); + } + return cpu_state->halted; +} + diff --git a/target/i386/hvf-utils/x86hvf.h b/target/i386/hvf-utils/x86hvf.h new file mode 100644 index 0000000000..fbc2158a2b --- /dev/null +++ b/target/i386/hvf-utils/x86hvf.h @@ -0,0 +1,36 @@ +/* + * Copyright (C) 2016 Veertu Inc, + * Copyright (C) 2017 Google Inc, + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 or + * (at your option) version 3 of the License. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see . 
+ */ +#ifndef X86HVF_H +#define X86HVF_H +#include "cpu.h" +#include "x86_descr.h" + +int hvf_process_events(CPUState *); +int hvf_put_registers(CPUState *); +int hvf_get_registers(CPUState *); +void hvf_inject_interrupts(CPUState *); +void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg, Se= gmentCache *qseg, bool is_tr); +void hvf_get_segment(SegmentCache *qseg, struct vmx_segment *vmx_seg); +void hvf_put_xsave(CPUState *cpu_state); +void hvf_put_segments(CPUState *cpu_state); +void hvf_put_msrs(CPUState *cpu_state); +void hvf_get_xsave(CPUState *cpu_state); +void hvf_get_msrs(CPUState *cpu_state); +void vmx_clear_int_window_exiting(CPUState *cpu); +void hvf_get_segments(CPUState *cpu_state); +#endif --=20 2.14.1 From nobody Sat Apr 27 16:08:04 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1504584001405788.0278801864595; Mon, 4 Sep 2017 21:00:01 -0700 (PDT) Received: from localhost ([::1]:56722 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp51s-00052h-71 for importer@patchew.org; Tue, 05 Sep 2017 00:00:00 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:41444) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp4xf-0001dN-A8 for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:55:48 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1dp4xW-0007lf-SV for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:55:39 -0400 Received: from mail-ua0-x241.google.com ([2607:f8b0:400c:c08::241]:38621) by eggs.gnu.org with esmtps (TLS1.0:RSA_AES_128_CBC_SHA1:16) (Exim 4.71) (envelope-from ) id 1dp4xW-0007lT-LI for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:55:30 -0400 Received: by mail-ua0-x241.google.com with SMTP id l24so842519uaa.5 for ; Mon, 04 Sep 2017 20:55:30 -0700 (PDT) Received: from localhost.localdomain ([161.10.80.59]) by smtp.gmail.com with ESMTPSA id d206sm1877252vka.29.2017.09.04.20.55.27 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Mon, 04 Sep 2017 20:55:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=BRzCjml4jI9eZ3UCVIuuO1qhD4Q5yfs0oY9OgCgPaSI=; b=O6vYPAW8WPc4f6w2uOpQ8wtB+R66IOcgSFZdMA/65s/LBvWPH9p65RdFbHj/emr9QQ njukJVL+uwLeSW/NX95sAeOV0mQQhXwXat4QOz9MzL+lTaFlrRqjj/8c1CT93RB2v/AJ YjYYKbzAM1N+rRZln6tZKVmv6GzqRi5tFNBrBHTlejfZW9QSnTZBjsTuW7bzJUvrEkGp OUPCbSBNGH+KGCnnqsrXh+UX/24UEpC+Xy/ueVIgQb5JtNWkS67teTzgBn1XVEE/feKV RVEXlxYcq17CcjDus7nlhf2uj6+E2EWCdja7Ggv/zvDH2DyFtbxsbjtbb+Vkq/27sTcc mGPQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=BRzCjml4jI9eZ3UCVIuuO1qhD4Q5yfs0oY9OgCgPaSI=; b=a6/Di/p9AodujhV/H9Rn8a//hLbeT0jhggsE2wgqjDznSdswYFXv/ajAGozuCyJiEf jlsVlPflO0UrfwwAek15epeYnVjaP82LmQSk2CdYGXqDgrLWI9oEBG9tSxmiKSiMVQMm 1uY1wbvVwJB6wQNwtsIOr+WhGlS6qpv8gq1AQqe0RqupRM9nm/n0iT67WjCrp/BGfTvU 
9ECzsPrbdCIrsm8yfqFN+5Uobnp9KXp7XQEIqXS2zG+zyRTsika+7Y852IQJa67k3ynU u7snbyCYMbhMsKyE/3thYRFi/VsjiCd2MTRrecZcjS75z1B2sAeeJyrjOqKe1e4nag7D Vk1Q== X-Gm-Message-State: AHPjjUj7KHcl3JVdNzs8q1YjPPvFdCfMzRdt/uEHi3E/0ZLe5awBAVb9 1TDKj2FiEBe0bbKv X-Google-Smtp-Source: ADKCNb7ToJFDnoAqn8pyzc2d8ZdfOcT9i6PyiArsMvN6of7pr6/ePuK+fo2t9BwQZnms4+bAHyKVmA== X-Received: by 10.159.49.2 with SMTP id m2mr1773897uab.96.1504583728940; Mon, 04 Sep 2017 20:55:28 -0700 (PDT) From: Sergio Andres Gomez Del Real X-Google-Original-From: Sergio Andres Gomez Del Real To: qemu-devel@nongnu.org Date: Mon, 4 Sep 2017 22:54:46 -0500 Message-Id: <20170905035457.3753-4-Sergio.G.DelReal@gmail.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com> References: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com> X-detected-operating-system: by eggs.gnu.org: Genre and OS details not recognized. X-Received-From: 2607:f8b0:400c:c08::241 Subject: [Qemu-devel] [PATCH v3 03/14] hvf: fix licensing issues; isolate task handling code (GPL v2-only) X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Sergio Andres Gomez Del Real , pbonzini@redhat.com, stefanha@gmail.com Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) X-ZohoMail: RDKM_2 RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" This patch replaces the license header for those files that were either GPL v2-or-v3, or GPL v2-only; the replacing license is GPL v2-or-later. The code for task switching/handling, which is derived from KVM and hence is GPL v2-only, is isolated in the new files (with this license) x86_task.c/.h Signed-off-by: Sergio Andres Gomez Del Real --- target/i386/hvf-all.c | 173 +++---------------------------- target/i386/hvf-utils/Makefile.objs | 2 +- target/i386/hvf-utils/vmx.h | 14 +-- target/i386/hvf-utils/x86.c | 14 +-- target/i386/hvf-utils/x86.h | 14 +-- target/i386/hvf-utils/x86_cpuid.c | 6 +- target/i386/hvf-utils/x86_cpuid.h | 14 +-- target/i386/hvf-utils/x86_decode.c | 14 +-- target/i386/hvf-utils/x86_decode.h | 14 +-- target/i386/hvf-utils/x86_descr.c | 14 +-- target/i386/hvf-utils/x86_descr.h | 14 +-- target/i386/hvf-utils/x86_emu.c | 14 +-- target/i386/hvf-utils/x86_emu.h | 14 +-- target/i386/hvf-utils/x86_gen.h | 14 +-- target/i386/hvf-utils/x86_mmu.c | 14 +-- target/i386/hvf-utils/x86_mmu.h | 14 +-- target/i386/hvf-utils/x86_task.c | 201 ++++++++++++++++++++++++++++++++= ++++ target/i386/hvf-utils/x86_task.h | 18 ++++ target/i386/hvf-utils/x86hvf.c | 14 +-- target/i386/hvf-utils/x86hvf.h | 14 +-- 20 files changed, 340 insertions(+), 270 deletions(-) create mode 100644 target/i386/hvf-utils/x86_task.c create mode 100644 target/i386/hvf-utils/x86_task.h diff --git a/target/i386/hvf-all.c b/target/i386/hvf-all.c index d5e18faa68..270ec56b8d 100644 --- a/target/i386/hvf-all.c +++ b/target/i386/hvf-all.c @@ -5,15 +5,19 @@ // Copyright 2017 The Android Open Source Project //=20 // QEMU Hypervisor.framework support -//=20 -// This software is licensed under the terms of the GNU General Public -// License version 2, as published by the Free Software Foundation, and -// may be copied, distributed, and modified under those terms. 
-//=20 +// +// This program is free software; you can redistribute it and/or +// modify it under the terms of the GNU Lesser General Public +// License as published by the Free Software Foundation; either +// version 2 of the License, or (at your option) any later version. +// // This program is distributed in the hope that it will be useful, // but WITHOUT ANY WARRANTY; without even the implied warranty of -// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -// GNU General Public License for more details. +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +// Lesser General Public License for more details. +// +// You should have received a copy of the GNU Lesser General Public +// License along with this program; if not, see . #include "qemu/osdep.h" #include "qemu-common.h" #include "qemu/error-report.h" @@ -28,6 +32,7 @@ #include "hvf-utils/x86_decode.h" #include "hvf-utils/x86_emu.h" #include "hvf-utils/x86_cpuid.h" +#include "hvf-utils/x86_task.h" #include "hvf-utils/x86hvf.h" =20 #include @@ -224,160 +229,6 @@ void update_apic_tpr(CPUState *cpu) =20 #define VECTORING_INFO_VECTOR_MASK 0xff =20 -// TODO: taskswitch handling -static void save_state_to_tss32(CPUState *cpu, struct x86_tss_segment32 *t= ss) -{ - /* CR3 and ldt selector are not saved intentionally */ - tss->eip =3D EIP(cpu); - tss->eflags =3D EFLAGS(cpu); - tss->eax =3D EAX(cpu); - tss->ecx =3D ECX(cpu); - tss->edx =3D EDX(cpu); - tss->ebx =3D EBX(cpu); - tss->esp =3D ESP(cpu); - tss->ebp =3D EBP(cpu); - tss->esi =3D ESI(cpu); - tss->edi =3D EDI(cpu); - - tss->es =3D vmx_read_segment_selector(cpu, REG_SEG_ES).sel; - tss->cs =3D vmx_read_segment_selector(cpu, REG_SEG_CS).sel; - tss->ss =3D vmx_read_segment_selector(cpu, REG_SEG_SS).sel; - tss->ds =3D vmx_read_segment_selector(cpu, REG_SEG_DS).sel; - tss->fs =3D vmx_read_segment_selector(cpu, REG_SEG_FS).sel; - tss->gs =3D vmx_read_segment_selector(cpu, REG_SEG_GS).sel; -} - -static void load_state_from_tss32(CPUState *cpu, struct x86_tss_segment32 = *tss) -{ - wvmcs(cpu->hvf_fd, VMCS_GUEST_CR3, tss->cr3); - - RIP(cpu) =3D tss->eip; - EFLAGS(cpu) =3D tss->eflags | 2; - - /* General purpose registers */ - RAX(cpu) =3D tss->eax; - RCX(cpu) =3D tss->ecx; - RDX(cpu) =3D tss->edx; - RBX(cpu) =3D tss->ebx; - RSP(cpu) =3D tss->esp; - RBP(cpu) =3D tss->ebp; - RSI(cpu) =3D tss->esi; - RDI(cpu) =3D tss->edi; - - vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->ldt}}, RE= G_SEG_LDTR); - vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->es}}, REG= _SEG_ES); - vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->cs}}, REG= _SEG_CS); - vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->ss}}, REG= _SEG_SS); - vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->ds}}, REG= _SEG_DS); - vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->fs}}, REG= _SEG_FS); - vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->gs}}, REG= _SEG_GS); - -#if 0 - load_segment(cpu, REG_SEG_LDTR, tss->ldt); - load_segment(cpu, REG_SEG_ES, tss->es); - load_segment(cpu, REG_SEG_CS, tss->cs); - load_segment(cpu, REG_SEG_SS, tss->ss); - load_segment(cpu, REG_SEG_DS, tss->ds); - load_segment(cpu, REG_SEG_FS, tss->fs); - load_segment(cpu, REG_SEG_GS, tss->gs); -#endif -} - -static int task_switch_32(CPUState *cpu, x68_segment_selector tss_sel, x68= _segment_selector old_tss_sel, - uint64_t old_tss_base, struct x86_segment_descri= ptor *new_desc) -{ - struct x86_tss_segment32 tss_seg; - uint32_t new_tss_base =3D 
x86_segment_base(new_desc); - uint32_t eip_offset =3D offsetof(struct x86_tss_segment32, eip); - uint32_t ldt_sel_offset =3D offsetof(struct x86_tss_segment32, ldt); - - vmx_read_mem(cpu, &tss_seg, old_tss_base, sizeof(tss_seg)); - save_state_to_tss32(cpu, &tss_seg); - - vmx_write_mem(cpu, old_tss_base + eip_offset, &tss_seg.eip, ldt_sel_of= fset - eip_offset); - vmx_read_mem(cpu, &tss_seg, new_tss_base, sizeof(tss_seg)); - - if (old_tss_sel.sel !=3D 0xffff) { - tss_seg.prev_tss =3D old_tss_sel.sel; - - vmx_write_mem(cpu, new_tss_base, &tss_seg.prev_tss, sizeof(tss_seg= .prev_tss)); - } - load_state_from_tss32(cpu, &tss_seg); - return 0; -} - -static void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss= _sel, int reason, bool gate_valid, uint8_t gate, uint64_t gate_type) -{ - uint64_t rip =3D rreg(cpu->hvf_fd, HV_X86_RIP); - if (!gate_valid || (gate_type !=3D VMCS_INTR_T_HWEXCEPTION && - gate_type !=3D VMCS_INTR_T_HWINTR && - gate_type !=3D VMCS_INTR_T_NMI)) { - int ins_len =3D rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRUCTION_LENGTH); - macvm_set_rip(cpu, rip + ins_len); - return; - } - - load_regs(cpu); - - struct x86_segment_descriptor curr_tss_desc, next_tss_desc; - int ret; - x68_segment_selector old_tss_sel =3D vmx_read_segment_selector(cpu, RE= G_SEG_TR); - uint64_t old_tss_base =3D vmx_read_segment_base(cpu, REG_SEG_TR); - uint32_t desc_limit; - struct x86_call_gate task_gate_desc; - struct vmx_segment vmx_seg; - - x86_read_segment_descriptor(cpu, &next_tss_desc, tss_sel); - x86_read_segment_descriptor(cpu, &curr_tss_desc, old_tss_sel); - - if (reason =3D=3D TSR_IDT_GATE && gate_valid) { - int dpl; - - ret =3D x86_read_call_gate(cpu, &task_gate_desc, gate); - - dpl =3D task_gate_desc.dpl; - x68_segment_selector cs =3D vmx_read_segment_selector(cpu, REG_SEG= _CS); - if (tss_sel.rpl > dpl || cs.rpl > dpl) - ;//DPRINTF("emulate_gp"); - } - - desc_limit =3D x86_segment_limit(&next_tss_desc); - if (!next_tss_desc.p || ((desc_limit < 0x67 && (next_tss_desc.type & 8= )) || desc_limit < 0x2b)) { - VM_PANIC("emulate_ts"); - } - - if (reason =3D=3D TSR_IRET || reason =3D=3D TSR_JMP) { - curr_tss_desc.type &=3D ~(1 << 1); /* clear busy flag */ - x86_write_segment_descriptor(cpu, &curr_tss_desc, old_tss_sel); - } - - if (reason =3D=3D TSR_IRET) - EFLAGS(cpu) &=3D ~RFLAGS_NT; - - if (reason !=3D TSR_CALL && reason !=3D TSR_IDT_GATE) - old_tss_sel.sel =3D 0xffff; - - if (reason !=3D TSR_IRET) { - next_tss_desc.type |=3D (1 << 1); /* set busy flag */ - x86_write_segment_descriptor(cpu, &next_tss_desc, tss_sel); - } - - if (next_tss_desc.type & 8) - ret =3D task_switch_32(cpu, tss_sel, old_tss_sel, old_tss_base, &n= ext_tss_desc); - else - //ret =3D task_switch_16(cpu, tss_sel, old_tss_sel, old_tss_base, = &next_tss_desc); - VM_PANIC("task_switch_16"); - - macvm_set_cr0(cpu->hvf_fd, rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0) | CR0_TS= ); - x86_segment_descriptor_to_vmx(cpu, tss_sel, &next_tss_desc, &vmx_seg); - vmx_write_segment_descriptor(cpu, &vmx_seg, REG_SEG_TR); - - store_regs(cpu); - - hv_vcpu_invalidate_tlb(cpu->hvf_fd); - hv_vcpu_flush(cpu->hvf_fd); -} - static void hvf_handle_interrupt(CPUState * cpu, int mask) { cpu->interrupt_request |=3D mask; diff --git a/target/i386/hvf-utils/Makefile.objs b/target/i386/hvf-utils/Ma= kefile.objs index 7df219ad9c..79d8969ca8 100644 --- a/target/i386/hvf-utils/Makefile.objs +++ b/target/i386/hvf-utils/Makefile.objs @@ -1 +1 @@ -obj-y +=3D x86.o x86_cpuid.o x86_decode.o x86_descr.o x86_emu.o x86_flags.= o x86_mmu.o x86hvf.o +obj-y +=3D x86.o x86_cpuid.o 
x86_decode.o x86_descr.o x86_emu.o x86_flags.= o x86_mmu.o x86hvf.o x86_task.o diff --git a/target/i386/hvf-utils/vmx.h b/target/i386/hvf-utils/vmx.h index 8a080e6777..e5359df87f 100644 --- a/target/i386/hvf-utils/vmx.h +++ b/target/i386/hvf-utils/vmx.h @@ -6,17 +6,17 @@ * Interfaces to Hypervisor.framework to read/write X86 registers and VMCS. * * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 or - * (at your option) version 3 of the License. + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . */ =20 #ifndef VMX_H diff --git a/target/i386/hvf-utils/x86.c b/target/i386/hvf-utils/x86.c index e3db2c9c8b..4debbff31c 100644 --- a/target/i386/hvf-utils/x86.c +++ b/target/i386/hvf-utils/x86.c @@ -3,17 +3,17 @@ * Copyright (C) 2017 Google Inc, * * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 or - * (at your option) version 3 of the License. + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . */ =20 #include "qemu/osdep.h" diff --git a/target/i386/hvf-utils/x86.h b/target/i386/hvf-utils/x86.h index 5dffdd6568..d433d15ea4 100644 --- a/target/i386/hvf-utils/x86.h +++ b/target/i386/hvf-utils/x86.h @@ -3,17 +3,17 @@ * Copyright (C) 2017 Veertu Inc, * * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 or - * (at your option) version 3 of the License. + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. 
* * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . */ =20 #pragma once diff --git a/target/i386/hvf-utils/x86_cpuid.c b/target/i386/hvf-utils/x86_= cpuid.c index e496cf001c..4abeb5c2da 100644 --- a/target/i386/hvf-utils/x86_cpuid.c +++ b/target/i386/hvf-utils/x86_cpuid.c @@ -4,18 +4,18 @@ * Copyright (c) 2003 Fabrice Bellard * Copyright (c) 2017 Google Inc. * - * This library is free software; you can redistribute it and/or + * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2 of the License, or (at your option) any later version. * - * This library is distributed in the hope that it will be useful, + * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public - * License along with this library; if not, see . + * License along with this program; if not, see . * * cpuid */ diff --git a/target/i386/hvf-utils/x86_cpuid.h b/target/i386/hvf-utils/x86_= cpuid.h index 02f2f115b0..b84a2f08df 100644 --- a/target/i386/hvf-utils/x86_cpuid.h +++ b/target/i386/hvf-utils/x86_cpuid.h @@ -2,17 +2,17 @@ * Copyright (C) 2016 Veertu Inc, * * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 or - * (at your option) version 3 of the License. + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . 
*/ #ifndef __CPUID_H__ #define __CPUID_H__ diff --git a/target/i386/hvf-utils/x86_decode.c b/target/i386/hvf-utils/x86= _decode.c index b4d8e22449..8deaab11d2 100644 --- a/target/i386/hvf-utils/x86_decode.c +++ b/target/i386/hvf-utils/x86_decode.c @@ -3,17 +3,17 @@ * Copyright (C) 2017 Google Inc, * * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 or - * (at your option) version 3 of the License. + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . */ =20 #include "qemu/osdep.h" diff --git a/target/i386/hvf-utils/x86_decode.h b/target/i386/hvf-utils/x86= _decode.h index 3a22d7d1a5..fde524f819 100644 --- a/target/i386/hvf-utils/x86_decode.h +++ b/target/i386/hvf-utils/x86_decode.h @@ -2,17 +2,17 @@ * Copyright (C) 2016 Veertu Inc, * * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 or - * (at your option) version 3 of the License. + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . */ =20 #pragma once diff --git a/target/i386/hvf-utils/x86_descr.c b/target/i386/hvf-utils/x86_= descr.c index c3b089aaa8..0b9562818f 100644 --- a/target/i386/hvf-utils/x86_descr.c +++ b/target/i386/hvf-utils/x86_descr.c @@ -3,17 +3,17 @@ * Copyright (C) 2017 Google Inc, * * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 or - * (at your option) version 3 of the License. + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. 
* * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . */ =20 #include "qemu/osdep.h" diff --git a/target/i386/hvf-utils/x86_descr.h b/target/i386/hvf-utils/x86_= descr.h index 78fb1bc420..9917585aeb 100644 --- a/target/i386/hvf-utils/x86_descr.h +++ b/target/i386/hvf-utils/x86_descr.h @@ -3,17 +3,17 @@ * Copyright (C) 2017 Google Inc, * * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 or - * (at your option) version 3 of the License. + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . */ =20 #pragma once diff --git a/target/i386/hvf-utils/x86_emu.c b/target/i386/hvf-utils/x86_em= u.c index 8b5efc76f0..76680921d1 100644 --- a/target/i386/hvf-utils/x86_emu.c +++ b/target/i386/hvf-utils/x86_emu.c @@ -3,17 +3,17 @@ * Copyright (C) 2017 Google Inc, * * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 or - * (at your option) version 3 of the License. + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . 
*/ =20 ///////////////////////////////////////////////////////////////////////// diff --git a/target/i386/hvf-utils/x86_emu.h b/target/i386/hvf-utils/x86_em= u.h index 42cc5e4296..f6feff5553 100644 --- a/target/i386/hvf-utils/x86_emu.h +++ b/target/i386/hvf-utils/x86_emu.h @@ -3,17 +3,17 @@ * Copyright (C) 2017 Google Inc, * * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 or - * (at your option) version 3 of the License. + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . */ #ifndef __X86_EMU_H__ #define __X86_EMU_H__ diff --git a/target/i386/hvf-utils/x86_gen.h b/target/i386/hvf-utils/x86_ge= n.h index e4340fa244..2045b0e69d 100644 --- a/target/i386/hvf-utils/x86_gen.h +++ b/target/i386/hvf-utils/x86_gen.h @@ -3,17 +3,17 @@ * Copyright (C) 2017 Google Inc, * * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 or - * (at your option) version 3 of the License. + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . */ #ifndef __X86_GEN_H__ #define __X86_GEN_H__ diff --git a/target/i386/hvf-utils/x86_mmu.c b/target/i386/hvf-utils/x86_mm= u.c index 00fae735be..95b3d15b94 100644 --- a/target/i386/hvf-utils/x86_mmu.c +++ b/target/i386/hvf-utils/x86_mmu.c @@ -3,17 +3,17 @@ * Copyright (C) 2017 Google Inc, * * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 or - * (at your option) version 3 of the License. + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. 
* * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . */ #include "qemu/osdep.h" =20 diff --git a/target/i386/hvf-utils/x86_mmu.h b/target/i386/hvf-utils/x86_mm= u.h index f794d04b3d..aa0fcfafd2 100644 --- a/target/i386/hvf-utils/x86_mmu.h +++ b/target/i386/hvf-utils/x86_mmu.h @@ -3,17 +3,17 @@ * Copyright (C) 2017 Google Inc, * * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 or - * (at your option) version 3 of the License. + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . */ #ifndef __X86_MMU_H__ #define __X86_MMU_H__ diff --git a/target/i386/hvf-utils/x86_task.c b/target/i386/hvf-utils/x86_t= ask.c new file mode 100644 index 0000000000..1a2646437a --- /dev/null +++ b/target/i386/hvf-utils/x86_task.c @@ -0,0 +1,201 @@ +// This software is licensed under the terms of the GNU General Public +// License version 2, as published by the Free Software Foundation, and +// may be copied, distributed, and modified under those terms. +//=20 +// This program is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU General Public License for more details. 
+#include "qemu/osdep.h" +#include "qemu-common.h" +#include "qemu/error-report.h" + +#include "sysemu/hvf.h" +#include "hvf-i386.h" +#include "hvf-utils/vmcs.h" +#include "hvf-utils/vmx.h" +#include "hvf-utils/x86.h" +#include "hvf-utils/x86_descr.h" +#include "hvf-utils/x86_mmu.h" +#include "hvf-utils/x86_decode.h" +#include "hvf-utils/x86_emu.h" +#include "hvf-utils/x86_cpuid.h" +#include "hvf-utils/x86_task.h" +#include "hvf-utils/x86hvf.h" + +#include +#include + +#include "exec/address-spaces.h" +#include "exec/exec-all.h" +#include "exec/ioport.h" +#include "hw/i386/apic_internal.h" +#include "hw/boards.h" +#include "qemu/main-loop.h" +#include "strings.h" +#include "sysemu/accel.h" +#include "sysemu/sysemu.h" +#include "target/i386/cpu.h" + +// TODO: taskswitch handling +static void save_state_to_tss32(CPUState *cpu, struct x86_tss_segment32 *t= ss) +{ + X86CPU *x86_cpu =3D X86_CPU(cpu); + CPUX86State *env =3D &x86_cpu->env; + + /* CR3 and ldt selector are not saved intentionally */ + tss->eip =3D EIP(env); + tss->eflags =3D EFLAGS(env); + tss->eax =3D EAX(env); + tss->ecx =3D ECX(env); + tss->edx =3D EDX(env); + tss->ebx =3D EBX(env); + tss->esp =3D ESP(env); + tss->ebp =3D EBP(env); + tss->esi =3D ESI(env); + tss->edi =3D EDI(env); + + tss->es =3D vmx_read_segment_selector(cpu, REG_SEG_ES).sel; + tss->cs =3D vmx_read_segment_selector(cpu, REG_SEG_CS).sel; + tss->ss =3D vmx_read_segment_selector(cpu, REG_SEG_SS).sel; + tss->ds =3D vmx_read_segment_selector(cpu, REG_SEG_DS).sel; + tss->fs =3D vmx_read_segment_selector(cpu, REG_SEG_FS).sel; + tss->gs =3D vmx_read_segment_selector(cpu, REG_SEG_GS).sel; +} + +static void load_state_from_tss32(CPUState *cpu, struct x86_tss_segment32 = *tss) +{ + X86CPU *x86_cpu =3D X86_CPU(cpu); + CPUX86State *env =3D &x86_cpu->env; + + wvmcs(cpu->hvf_fd, VMCS_GUEST_CR3, tss->cr3); + + RIP(env) =3D tss->eip; + EFLAGS(env) =3D tss->eflags | 2; + + /* General purpose registers */ + RAX(env) =3D tss->eax; + RCX(env) =3D tss->ecx; + RDX(env) =3D tss->edx; + RBX(env) =3D tss->ebx; + RSP(env) =3D tss->esp; + RBP(env) =3D tss->ebp; + RSI(env) =3D tss->esi; + RDI(env) =3D tss->edi; + + vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->ldt}}, RE= G_SEG_LDTR); + vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->es}}, REG= _SEG_ES); + vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->cs}}, REG= _SEG_CS); + vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->ss}}, REG= _SEG_SS); + vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->ds}}, REG= _SEG_DS); + vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->fs}}, REG= _SEG_FS); + vmx_write_segment_selector(cpu, (x68_segment_selector){{tss->gs}}, REG= _SEG_GS); + +#if 0 + load_segment(cpu, REG_SEG_LDTR, tss->ldt); + load_segment(cpu, REG_SEG_ES, tss->es); + load_segment(cpu, REG_SEG_CS, tss->cs); + load_segment(cpu, REG_SEG_SS, tss->ss); + load_segment(cpu, REG_SEG_DS, tss->ds); + load_segment(cpu, REG_SEG_FS, tss->fs); + load_segment(cpu, REG_SEG_GS, tss->gs); +#endif +} + +static int task_switch_32(CPUState *cpu, x68_segment_selector tss_sel, x68= _segment_selector old_tss_sel, + uint64_t old_tss_base, struct x86_segment_descri= ptor *new_desc) +{ + struct x86_tss_segment32 tss_seg; + uint32_t new_tss_base =3D x86_segment_base(new_desc); + uint32_t eip_offset =3D offsetof(struct x86_tss_segment32, eip); + uint32_t ldt_sel_offset =3D offsetof(struct x86_tss_segment32, ldt); + + vmx_read_mem(cpu, &tss_seg, old_tss_base, sizeof(tss_seg)); + 
save_state_to_tss32(cpu, &tss_seg); + + vmx_write_mem(cpu, old_tss_base + eip_offset, &tss_seg.eip, ldt_sel_of= fset - eip_offset); + vmx_read_mem(cpu, &tss_seg, new_tss_base, sizeof(tss_seg)); + + if (old_tss_sel.sel !=3D 0xffff) { + tss_seg.prev_tss =3D old_tss_sel.sel; + + vmx_write_mem(cpu, new_tss_base, &tss_seg.prev_tss, sizeof(tss_seg= .prev_tss)); + } + load_state_from_tss32(cpu, &tss_seg); + return 0; +} + +void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel, i= nt reason, bool gate_valid, uint8_t gate, uint64_t gate_type) +{ + uint64_t rip =3D rreg(cpu->hvf_fd, HV_X86_RIP); + if (!gate_valid || (gate_type !=3D VMCS_INTR_T_HWEXCEPTION && + gate_type !=3D VMCS_INTR_T_HWINTR && + gate_type !=3D VMCS_INTR_T_NMI)) { + int ins_len =3D rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRUCTION_LENGTH); + macvm_set_rip(cpu, rip + ins_len); + return; + } + + load_regs(cpu); + + struct x86_segment_descriptor curr_tss_desc, next_tss_desc; + int ret; + x68_segment_selector old_tss_sel =3D vmx_read_segment_selector(cpu, RE= G_SEG_TR); + uint64_t old_tss_base =3D vmx_read_segment_base(cpu, REG_SEG_TR); + uint32_t desc_limit; + struct x86_call_gate task_gate_desc; + struct vmx_segment vmx_seg; + + X86CPU *x86_cpu =3D X86_CPU(cpu); + CPUX86State *env =3D &x86_cpu->env; + + x86_read_segment_descriptor(cpu, &next_tss_desc, tss_sel); + x86_read_segment_descriptor(cpu, &curr_tss_desc, old_tss_sel); + + if (reason =3D=3D TSR_IDT_GATE && gate_valid) { + int dpl; + + ret =3D x86_read_call_gate(cpu, &task_gate_desc, gate); + + dpl =3D task_gate_desc.dpl; + x68_segment_selector cs =3D vmx_read_segment_selector(cpu, REG_SEG= _CS); + if (tss_sel.rpl > dpl || cs.rpl > dpl) + ;//DPRINTF("emulate_gp"); + } + + desc_limit =3D x86_segment_limit(&next_tss_desc); + if (!next_tss_desc.p || ((desc_limit < 0x67 && (next_tss_desc.type & 8= )) || desc_limit < 0x2b)) { + VM_PANIC("emulate_ts"); + } + + if (reason =3D=3D TSR_IRET || reason =3D=3D TSR_JMP) { + curr_tss_desc.type &=3D ~(1 << 1); /* clear busy flag */ + x86_write_segment_descriptor(cpu, &curr_tss_desc, old_tss_sel); + } + + if (reason =3D=3D TSR_IRET) + EFLAGS(env) &=3D ~RFLAGS_NT; + + if (reason !=3D TSR_CALL && reason !=3D TSR_IDT_GATE) + old_tss_sel.sel =3D 0xffff; + + if (reason !=3D TSR_IRET) { + next_tss_desc.type |=3D (1 << 1); /* set busy flag */ + x86_write_segment_descriptor(cpu, &next_tss_desc, tss_sel); + } + + if (next_tss_desc.type & 8) + ret =3D task_switch_32(cpu, tss_sel, old_tss_sel, old_tss_base, &n= ext_tss_desc); + else + //ret =3D task_switch_16(cpu, tss_sel, old_tss_sel, old_tss_base, = &next_tss_desc); + VM_PANIC("task_switch_16"); + + macvm_set_cr0(cpu->hvf_fd, rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0) | CR0_TS= ); + x86_segment_descriptor_to_vmx(cpu, tss_sel, &next_tss_desc, &vmx_seg); + vmx_write_segment_descriptor(cpu, &vmx_seg, REG_SEG_TR); + + store_regs(cpu); + + hv_vcpu_invalidate_tlb(cpu->hvf_fd); + hv_vcpu_flush(cpu->hvf_fd); +} diff --git a/target/i386/hvf-utils/x86_task.h b/target/i386/hvf-utils/x86_t= ask.h new file mode 100644 index 0000000000..4f1b188d2e --- /dev/null +++ b/target/i386/hvf-utils/x86_task.h @@ -0,0 +1,18 @@ +/* This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 or + * (at your option) version 3 of the License. 
+ * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, see . + */ +#ifndef HVF_TASK +#define HVF_TASK +void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel, + int reason, bool gate_valid, uint8_t gate, uint64_t gate_type); +#endif diff --git a/target/i386/hvf-utils/x86hvf.c b/target/i386/hvf-utils/x86hvf.c index d5668df37f..aba8983dc7 100644 --- a/target/i386/hvf-utils/x86hvf.c +++ b/target/i386/hvf-utils/x86hvf.c @@ -4,17 +4,17 @@ * Copyright (C) 2017 Google Inc, * * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 or - * (at your option) version 3 of the License. + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . */ =20 #include "qemu/osdep.h" diff --git a/target/i386/hvf-utils/x86hvf.h b/target/i386/hvf-utils/x86hvf.h index fbc2158a2b..0c5bc3dcf8 100644 --- a/target/i386/hvf-utils/x86hvf.h +++ b/target/i386/hvf-utils/x86hvf.h @@ -3,17 +3,17 @@ * Copyright (C) 2017 Google Inc, * * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 or - * (at your option) version 3 of the License. + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. * - * You should have received a copy of the GNU General Public License along - * with this program; if not, see . + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . 
*/ #ifndef X86HVF_H #define X86HVF_H --=20 2.14.1 From nobody Sat Apr 27 16:08:04 2024 From: Sergio Andres Gomez Del Real To: qemu-devel@nongnu.org Date: Mon, 4 Sep 2017 22:54:47 -0500 Message-Id: <20170905035457.3753-5-Sergio.G.DelReal@gmail.com> In-Reply-To:
<20170905035457.3753-1-Sergio.G.DelReal@gmail.com> References: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com> Subject: [Qemu-devel] [PATCH v3 04/14] hvf: run hvf code through checkpatch.pl and fix style issues Cc: Sergio Andres Gomez Del Real , pbonzini@redhat.com, stefanha@gmail.com Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Signed-off-by: Sergio Andres Gomez Del Real --- include/sysemu/hvf.h | 19 +- target/i386/hvf-all.c | 647 +++++----- target/i386/hvf-i386.h | 2 +- target/i386/hvf-utils/vmcs.h | 484 ++++---- target/i386/hvf-utils/vmx.h | 92 +- target/i386/hvf-utils/x86.c | 80 +- target/i386/hvf-utils/x86.h | 110 +- target/i386/hvf-utils/x86_cpuid.c | 337 ++--- target/i386/hvf-utils/x86_cpuid.h | 5 +- target/i386/hvf-utils/x86_decode.c | 2378 ++++++++++++++++++++++----------= ---- target/i386/hvf-utils/x86_decode.h | 26 +- target/i386/hvf-utils/x86_descr.h | 23 +- target/i386/hvf-utils/x86_emu.c | 1324 ++++++++++---------- target/i386/hvf-utils/x86_flags.c | 52 +- target/i386/hvf-utils/x86_flags.h | 99 +- target/i386/hvf-utils/x86_mmu.c | 81 +- target/i386/hvf-utils/x86_mmu.h | 6 +- target/i386/hvf-utils/x86hvf.c | 53 +- target/i386/hvf-utils/x86hvf.h | 3 +- 19 files changed, 3285 insertions(+), 2536 deletions(-) diff --git a/include/sysemu/hvf.h b/include/sysemu/hvf.h index 752a78eaa4..d068a95e93 100644 --- a/include/sysemu/hvf.h +++ b/include/sysemu/hvf.h @@ -58,17 +58,16 @@ hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t); =20 /* Returns 1 if HVF is available and enabled, 0 otherwise. */ int hvf_enabled(void); - -/* Disable HVF if |disable| is 1, otherwise, enable it iff it is supported= by the host CPU. - * Use hvf_enabled() after this to get the result. */ +/* Disable HVF if |disable| is 1, otherwise, enable it iff it is supported= by + * the host CPU. Use hvf_enabled() after this to get the result. */ void hvf_disable(int disable); =20 -/* Returns non-0 if the host CPU supports the VMX "unrestricted guest" fea= ture which - * allows the virtual CPU to directly run in "real mode". If true, this al= lows QEMU to run - * several vCPU threads in parallel (see cpus.c). Otherwise, only a a sing= le TCG thread - * can run, and it will call HVF to run the current instructions, except i= n case of - * "real mode" (paging disabled, typically at boot time), or MMIO operatio= ns. */ -// int hvf_ug_platform(void); does not apply to HVF; assume we must be in = UG mode +/* Returns non-0 if the host CPU supports the VMX "unrestricted guest" fea= ture + * which allows the virtual CPU to directly run in "real mode". If true, t= his + * allows QEMU to run several vCPU threads in parallel (see cpus.c). Other= wise, + * only a a single TCG thread can run, and it will call HVF to run the cur= rent + * instructions, except in case of "real mode" (paging disabled, typically= at + * boot time), or MMIO operations.
*/ =20 int hvf_sync_vcpus(void); =20 @@ -82,7 +81,7 @@ void _hvf_cpu_synchronize_post_init(CPUState *, run_on_cp= u_data); =20 void hvf_vcpu_destroy(CPUState *); void hvf_raise_event(CPUState *); -// void hvf_reset_vcpu_state(void *opaque); +/* void hvf_reset_vcpu_state(void *opaque); */ void vmx_reset_vcpu(CPUState *); void __hvf_cpu_synchronize_state(CPUState *, run_on_cpu_data); void __hvf_cpu_synchronize_post_reset(CPUState *, run_on_cpu_data); diff --git a/target/i386/hvf-all.c b/target/i386/hvf-all.c index 270ec56b8d..9b871c2aa0 100644 --- a/target/i386/hvf-all.c +++ b/target/i386/hvf-all.c @@ -1,23 +1,24 @@ -// Copyright 2008 IBM Corporation -// 2008 Red Hat, Inc. -// Copyright 2011 Intel Corporation -// Copyright 2016 Veertu, Inc. -// Copyright 2017 The Android Open Source Project -//=20 -// QEMU Hypervisor.framework support -// -// This program is free software; you can redistribute it and/or -// modify it under the terms of the GNU Lesser General Public -// License as published by the Free Software Foundation; either -// version 2 of the License, or (at your option) any later version. -// -// This program is distributed in the hope that it will be useful, -// but WITHOUT ANY WARRANTY; without even the implied warranty of -// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -// Lesser General Public License for more details. -// -// You should have received a copy of the GNU Lesser General Public -// License along with this program; if not, see . +/* Copyright 2008 IBM Corporation + * 2008 Red Hat, Inc. + * Copyright 2011 Intel Corporation + * Copyright 2016 Veertu, Inc. + * Copyright 2017 The Android Open Source Project + *=20 + * QEMU Hypervisor.framework support + *=20 + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2 of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . 
+ */ #include "qemu/osdep.h" #include "qemu-common.h" #include "qemu/error-report.h" @@ -56,8 +57,9 @@ static int hvf_disabled =3D 1; =20 static void assert_hvf_ok(hv_return_t ret) { - if (ret =3D=3D HV_SUCCESS) + if (ret =3D=3D HV_SUCCESS) { return; + } =20 switch (ret) { case HV_ERROR: @@ -85,15 +87,17 @@ static void assert_hvf_ok(hv_return_t ret) abort(); } =20 -// Memory slots///////////////////////////////////////////////////////////= ////// - -hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t end) { +/* Memory slots */ +hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t end) +{ hvf_slot *slot; int x; for (x =3D 0; x < hvf_state->num_slots; ++x) { slot =3D &hvf_state->slots[x]; - if (slot->size && start < (slot->start + slot->size) && end > slot= ->start) + if (slot->size && start < (slot->start + slot->size) && + end > slot->start) { return slot; + } } return NULL; } @@ -106,7 +110,7 @@ struct mac_slot { }; =20 struct mac_slot mac_slots[32]; -#define ALIGN(x, y) (((x)+(y)-1) & ~((y)-1)) +#define ALIGN(x, y) (((x) + (y) - 1) & ~((y) - 1)) =20 int __hvf_set_memory(hvf_slot *slot) { @@ -141,12 +145,14 @@ int __hvf_set_memory(hvf_slot *slot) return 0; } =20 -void hvf_set_phys_mem(MemoryRegionSection* section, bool add) +void hvf_set_phys_mem(MemoryRegionSection *section, bool add) { hvf_slot *mem; MemoryRegion *area =3D section->mr; =20 - if (!memory_region_is_ram(area)) return; + if (!memory_region_is_ram(area)) { + return; + } =20 mem =3D hvf_find_overlap_slot( section->offset_within_address_space, @@ -154,12 +160,14 @@ void hvf_set_phys_mem(MemoryRegionSection* section, b= ool add) =20 if (mem && add) { if (mem->size =3D=3D int128_get64(section->size) && - mem->start =3D=3D section->offset_within_address_space && - mem->mem =3D=3D (memory_region_get_ram_ptr(area) + section= ->offset_within_region)) - return; // Same region was attempted to register, go away. + mem->start =3D=3D section->offset_within_address_space && + mem->mem =3D=3D (memory_region_get_ram_ptr(area) + + section->offset_within_region)) { + return; /* Same region was attempted to register, go away. */ + } } =20 - // Region needs to be reset. set the size to 0 and remap it. + /* Region needs to be reset. set the size to 0 and remap it. */ if (mem) { mem->size =3D 0; if (__hvf_set_memory(mem)) { @@ -168,15 +176,18 @@ void hvf_set_phys_mem(MemoryRegionSection* section, b= ool add) } } =20 - if (!add) return; + if (!add) { + return; + } =20 - // Now make a new slot. + /* Now make a new slot. */ int x; =20 for (x =3D 0; x < hvf_state->num_slots; ++x) { mem =3D &hvf_state->slots[x]; - if (!mem->size) + if (!mem->size) { break; + } } =20 if (x =3D=3D hvf_state->num_slots) { @@ -208,16 +219,18 @@ static int get_highest_priority_int(uint32_t *tab) =20 void vmx_update_tpr(CPUState *cpu) { - // TODO: need integrate APIC handling + /* TODO: need integrate APIC handling */ X86CPU *x86_cpu =3D X86_CPU(cpu); int tpr =3D cpu_get_apic_tpr(x86_cpu->apic_state) << 4; int irr =3D apic_get_highest_priority_irr(x86_cpu->apic_state); =20 wreg(cpu->hvf_fd, HV_X86_TPR, tpr); - if (irr =3D=3D -1) + if (irr =3D=3D -1) { wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, 0); - else - wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, (irr > tpr) ? tpr >> 4 : ir= r >> 4); + } else { + wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, (irr > tpr) ? 
tpr >> 4 : + irr >> 4); + } } =20 void update_apic_tpr(CPUState *cpu) @@ -237,7 +250,7 @@ static void hvf_handle_interrupt(CPUState * cpu, int ma= sk) } } =20 -void hvf_handle_io(CPUArchState * env, uint16_t port, void* buffer, +void hvf_handle_io(CPUArchState *env, uint16_t port, void *buffer, int direction, int size, int count) { int i; @@ -250,21 +263,23 @@ void hvf_handle_io(CPUArchState * env, uint16_t port,= void* buffer, ptr +=3D size; } } -// -// TODO: synchronize vcpu state void __hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg) + +/* TODO: synchronize vcpu state */ { CPUState *cpu_state =3D cpu;//(CPUState *)data; - if (cpu_state->hvf_vcpu_dirty =3D=3D 0) + if (cpu_state->hvf_vcpu_dirty =3D=3D 0) { hvf_get_registers(cpu_state); + } =20 cpu_state->hvf_vcpu_dirty =3D 1; } =20 void hvf_cpu_synchronize_state(CPUState *cpu_state) { - if (cpu_state->hvf_vcpu_dirty =3D=3D 0) run_on_cpu(cpu_state, __hvf_cpu_synchronize_state, RUN_ON_CPU_NULL= ); + if (cpu_state->hvf_vcpu_dirty =3D=3D 0) { + } } =20 void __hvf_cpu_synchronize_post_reset(CPUState *cpu, run_on_cpu_data arg) @@ -290,44 +305,45 @@ void hvf_cpu_synchronize_post_init(CPUState *cpu_stat= e) { run_on_cpu(cpu_state, _hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL); } -=20 -// TODO: ept fault handlig -void vmx_clear_int_window_exiting(CPUState *cpu); + +/* TODO: ept fault handlig */ static bool ept_emulation_fault(uint64_t ept_qual) { - int read, write; - - /* EPT fault on an instruction fetch doesn't make sense here */ - if (ept_qual & EPT_VIOLATION_INST_FETCH) - return false; - - /* EPT fault must be a read fault or a write fault */ - read =3D ept_qual & EPT_VIOLATION_DATA_READ ? 1 : 0; - write =3D ept_qual & EPT_VIOLATION_DATA_WRITE ? 1 : 0; - if ((read | write) =3D=3D 0) - return false; - - /* - * The EPT violation must have been caused by accessing a - * guest-physical address that is a translation of a guest-linear - * address. - */ - if ((ept_qual & EPT_VIOLATION_GLA_VALID) =3D=3D 0 || - (ept_qual & EPT_VIOLATION_XLAT_VALID) =3D=3D 0) { - return false; - } - - return true; + int read, write; + + /* EPT fault on an instruction fetch doesn't make sense here */ + if (ept_qual & EPT_VIOLATION_INST_FETCH) { + return false; + } + + /* EPT fault must be a read fault or a write fault */ + read =3D ept_qual & EPT_VIOLATION_DATA_READ ? 1 : 0; + write =3D ept_qual & EPT_VIOLATION_DATA_WRITE ? 1 : 0; + if ((read | write) =3D=3D 0) { + return false; + } + + /* + * The EPT violation must have been caused by accessing a + * guest-physical address that is a translation of a guest-linear + * address. 
+ */ + if ((ept_qual & EPT_VIOLATION_GLA_VALID) =3D=3D 0 || + (ept_qual & EPT_VIOLATION_XLAT_VALID) =3D=3D 0) { + return false; + } + + return true; } =20 -static void hvf_region_add(MemoryListener * listener, - MemoryRegionSection * section) +static void hvf_region_add(MemoryListener *listener, + MemoryRegionSection *section) { hvf_set_phys_mem(section, true); } =20 -static void hvf_region_del(MemoryListener * listener, - MemoryRegionSection * section) +static void hvf_region_del(MemoryListener *listener, + MemoryRegionSection *section) { hvf_set_phys_mem(section, false); } @@ -352,7 +368,7 @@ void vmx_reset_vcpu(CPUState *cpu) { wvmcs(cpu->hvf_fd, VMCS_CR4_SHADOW, 0x0); wvmcs(cpu->hvf_fd, VMCS_GUEST_CR4, CR4_VMXE_MASK); =20 - // set VMCS guest state fields + /* set VMCS guest state fields */ wvmcs(cpu->hvf_fd, VMCS_GUEST_CS_SELECTOR, 0xf000); wvmcs(cpu->hvf_fd, VMCS_GUEST_CS_LIMIT, 0xffff); wvmcs(cpu->hvf_fd, VMCS_GUEST_CS_ACCESS_RIGHTS, 0x9b); @@ -399,7 +415,7 @@ void vmx_reset_vcpu(CPUState *cpu) { wvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_LIMIT, 0); wvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_BASE, 0); =20 - //wvmcs(cpu->hvf_fd, VMCS_GUEST_CR2, 0x0); + /*wvmcs(cpu->hvf_fd, VMCS_GUEST_CR2, 0x0);*/ wvmcs(cpu->hvf_fd, VMCS_GUEST_CR3, 0x0); =20 wreg(cpu->hvf_fd, HV_X86_RIP, 0xfff0); @@ -413,8 +429,9 @@ void vmx_reset_vcpu(CPUState *cpu) { wreg(cpu->hvf_fd, HV_X86_RDI, 0x0); wreg(cpu->hvf_fd, HV_X86_RBP, 0x0); =20 - for (int i =3D 0; i < 8; i++) - wreg(cpu->hvf_fd, HV_X86_R8+i, 0x0); + for (int i =3D 0; i < 8; i++) { + wreg(cpu->hvf_fd, HV_X86_R8 + i, 0x0); + } =20 hv_vm_sync_tsc(0); cpu->halted =3D 0; @@ -422,7 +439,7 @@ void vmx_reset_vcpu(CPUState *cpu) { hv_vcpu_flush(cpu->hvf_fd); } =20 -void hvf_vcpu_destroy(CPUState* cpu)=20 +void hvf_vcpu_destroy(CPUState *cpu) { hv_return_t ret =3D hv_vcpu_destroy((hv_vcpuid_t)cpu->hvf_fd); assert_hvf_ok(ret); @@ -432,11 +449,12 @@ static void dummy_signal(int sig) { } =20 -int hvf_init_vcpu(CPUState * cpu) { +int hvf_init_vcpu(CPUState *cpu) +{ =20 X86CPU *x86cpu; - =20 - // init cpu signals + + /* init cpu signals */ sigset_t set; struct sigaction sigact; =20 @@ -459,35 +477,48 @@ int hvf_init_vcpu(CPUState * cpu) { cpu->hvf_vcpu_dirty =3D 1; assert_hvf_ok(r); =20 - if (hv_vmx_read_capability(HV_VMX_CAP_PINBASED, &cpu->hvf_caps->vmx_cap_p= inbased)) - abort(); - if (hv_vmx_read_capability(HV_VMX_CAP_PROCBASED, &cpu->hvf_caps->vmx_cap_= procbased)) - abort(); - if (hv_vmx_read_capability(HV_VMX_CAP_PROCBASED2, &cpu->hvf_caps->vmx_cap= _procbased2)) - abort(); - if (hv_vmx_read_capability(HV_VMX_CAP_ENTRY, &cpu->hvf_caps->vmx_cap_entr= y)) - abort(); - - /* set VMCS control fields */ - wvmcs(cpu->hvf_fd, VMCS_PIN_BASED_CTLS, cap2ctrl(cpu->hvf_caps->vmx_ca= p_pinbased, 0)); - wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, cap2ctrl(cpu->hvf_caps->v= mx_cap_procbased, - VMCS_PRI_PROC_BASED_CTL= S_HLT | - VMCS_PRI_PROC_BASED_CTL= S_MWAIT | - VMCS_PRI_PROC_BASED_CTL= S_TSC_OFFSET | - VMCS_PRI_PROC_BASED_CTL= S_TPR_SHADOW) | - VMCS_PRI_PROC_BASED_CTL= S_SEC_CONTROL); - wvmcs(cpu->hvf_fd, VMCS_SEC_PROC_BASED_CTLS, - cap2ctrl(cpu->hvf_caps->vmx_cap_procbased2,VMCS_PRI_PROC_BASED2_= CTLS_APIC_ACCESSES)); - - wvmcs(cpu->hvf_fd, VMCS_ENTRY_CTLS, cap2ctrl(cpu->hvf_caps->vmx_cap_entry= , 0)); - wvmcs(cpu->hvf_fd, VMCS_EXCEPTION_BITMAP, 0); /* Double fault */ + if (hv_vmx_read_capability(HV_VMX_CAP_PINBASED, + &hvf_state->hvf_caps->vmx_cap_pinbased)) { + abort(); + } + if (hv_vmx_read_capability(HV_VMX_CAP_PROCBASED, + &hvf_state->hvf_caps->vmx_cap_procbased)) { + abort(); + } + 
if (hv_vmx_read_capability(HV_VMX_CAP_PROCBASED2, + &hvf_state->hvf_caps->vmx_cap_procbased2)) { + abort(); + } + if (hv_vmx_read_capability(HV_VMX_CAP_ENTRY, + &hvf_state->hvf_caps->vmx_cap_entry)) { + abort(); + } + + /* set VMCS control fields */ + wvmcs(cpu->hvf_fd, VMCS_PIN_BASED_CTLS, + cap2ctrl(hvf_state->hvf_caps->vmx_cap_pinbased, 0)); + wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, + cap2ctrl(hvf_state->hvf_caps->vmx_cap_procbased, + VMCS_PRI_PROC_BASED_CTLS_HLT | + VMCS_PRI_PROC_BASED_CTLS_MWAIT | + VMCS_PRI_PROC_BASED_CTLS_TSC_OFFSET | + VMCS_PRI_PROC_BASED_CTLS_TPR_SHADOW) | + VMCS_PRI_PROC_BASED_CTLS_SEC_CONTROL); + wvmcs(cpu->hvf_fd, VMCS_SEC_PROC_BASED_CTLS, + cap2ctrl(hvf_state->hvf_caps->vmx_cap_procbased2, + VMCS_PRI_PROC_BASED2_CTLS_APIC_ACCESSES)); + + wvmcs(cpu->hvf_fd, VMCS_ENTRY_CTLS, cap2ctrl(hvf_state->hvf_caps->vmx_= cap_entry, + 0)); + wvmcs(cpu->hvf_fd, VMCS_EXCEPTION_BITMAP, 0); /* Double fault */ =20 wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, 0); =20 vmx_reset_vcpu(cpu); =20 x86cpu =3D X86_CPU(cpu); - x86cpu->env.kvm_xsave_buf =3D qemu_memalign(4096, sizeof(struct hvf_xs= ave_buf)); + x86cpu->env.kvm_xsave_buf =3D qemu_memalign(4096, + sizeof(struct hvf_xsave_buf)); =20 hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_STAR, 1); hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_LSTAR, 1); @@ -497,7 +528,7 @@ int hvf_init_vcpu(CPUState * cpu) { hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_GSBASE, 1); hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_KERNELGSBASE, 1); hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_TSC_AUX, 1); - //hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_TSC, 1); + /*hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_TSC, 1);*/ hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_CS, 1); hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_EIP, 1); hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_ESP, 1); @@ -505,12 +536,13 @@ int hvf_init_vcpu(CPUState * cpu) { return 0; } =20 -int hvf_enabled() { return !hvf_disabled; } -void hvf_disable(int shouldDisable) { +void hvf_disable(int shouldDisable) +{ hvf_disabled =3D shouldDisable; } =20 -int hvf_vcpu_exec(CPUState* cpu) { +int hvf_vcpu_exec(CPUState *cpu) +{ X86CPU *x86_cpu =3D X86_CPU(cpu); CPUX86State *env =3D &x86_cpu->env; int ret =3D 0; @@ -530,7 +562,8 @@ int hvf_vcpu_exec(CPUState* cpu) { =20 cpu->hvf_x86->interruptable =3D !(rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) & - (VMCS_INTERRUPTIBILITY_STI_BLOCKING | VMCS_INTERRUPTIBILITY_MO= VSS_BLOCKING)); + (VMCS_INTERRUPTIBILITY_STI_BLOCKING | + VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)); =20 hvf_inject_interrupts(cpu); vmx_update_tpr(cpu); @@ -548,7 +581,8 @@ int hvf_vcpu_exec(CPUState* cpu) { /* handle VMEXIT */ uint64_t exit_reason =3D rvmcs(cpu->hvf_fd, VMCS_EXIT_REASON); uint64_t exit_qual =3D rvmcs(cpu->hvf_fd, VMCS_EXIT_QUALIFICATION); - uint32_t ins_len =3D (uint32_t)rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRU= CTION_LENGTH); + uint32_t ins_len =3D (uint32_t)rvmcs(cpu->hvf_fd, + VMCS_EXIT_INSTRUCTION_LENGTH); uint64_t idtvec_info =3D rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_INF= O); rip =3D rreg(cpu->hvf_fd, HV_X86_RIP); RFLAGS(cpu) =3D rreg(cpu->hvf_fd, HV_X86_RFLAGS); @@ -563,239 +597,226 @@ int hvf_vcpu_exec(CPUState* cpu) { =20 ret =3D 0; switch (exit_reason) { - case EXIT_REASON_HLT: { - macvm_set_rip(cpu, rip + ins_len); - if (!((cpu->interrupt_request & CPU_INTERRUPT_HARD) && (EF= LAGS(cpu) & IF_MASK)) - && !(cpu->interrupt_request & CPU_INTERRUPT_NMI) && - !(idtvec_info & VMCS_IDT_VEC_VALID)) { - cpu->halted =3D 1; - ret =3D EXCP_HLT; - } - ret 
=3D EXCP_INTERRUPT; - break; + case EXIT_REASON_HLT: { + macvm_set_rip(cpu, rip + ins_len); + if (!((cpu->interrupt_request & CPU_INTERRUPT_HARD) && + (EFLAGS(cpu) & IF_MASK)) + && !(cpu->interrupt_request & CPU_INTERRUPT_NMI) && + !(idtvec_info & VMCS_IDT_VEC_VALID)) { + cpu->halted =3D 1; + ret =3D EXCP_HLT; } - case EXIT_REASON_MWAIT: { - ret =3D EXCP_INTERRUPT; - break; - } - /* Need to check if MMIO or unmmaped fault */ - case EXIT_REASON_EPT_FAULT: - { - hvf_slot *slot; - addr_t gpa =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_PHYSICAL_ADDR= ESS); - trace_hvf_vm_exit_gpa(gpa); - - if ((idtvec_info & VMCS_IDT_VEC_VALID) =3D=3D 0 && (exit_q= ual & EXIT_QUAL_NMIUDTI) !=3D 0) - vmx_set_nmi_blocking(cpu); - - slot =3D hvf_find_overlap_slot(gpa, gpa); - // mmio - if (ept_emulation_fault(exit_qual) && !slot) { - struct x86_decode decode; - - load_regs(cpu); - cpu->hvf_x86->fetch_rip =3D rip; - - decode_instruction(cpu, &decode); - exec_instruction(cpu, &decode); - store_regs(cpu); - break; - } -#ifdef DIRTY_VGA_TRACKING - if (slot) { - bool read =3D exit_qual & EPT_VIOLATION_DATA_READ ? 1 = : 0; - bool write =3D exit_qual & EPT_VIOLATION_DATA_WRITE ? = 1 : 0; - if (!read && !write) - break; - int flags =3D HV_MEMORY_READ | HV_MEMORY_EXEC; - if (write) flags |=3D HV_MEMORY_WRITE; - - pthread_rwlock_wrlock(&mem_lock); - if (write) - mark_slot_page_dirty(slot, gpa); - hv_vm_protect(gpa & ~0xfff, 4096, flags); - pthread_rwlock_unlock(&mem_lock); - } -#endif - break; + ret =3D EXCP_INTERRUPT; + break; + } + case EXIT_REASON_MWAIT: { + ret =3D EXCP_INTERRUPT; + break; + } + /* Need to check if MMIO or unmmaped fault */ + case EXIT_REASON_EPT_FAULT: + { + hvf_slot *slot; + addr_t gpa =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_PHYSICAL_ADDRESS); + + if (((idtvec_info & VMCS_IDT_VEC_VALID) =3D=3D 0) && + ((exit_qual & EXIT_QUAL_NMIUDTI) !=3D 0)) { + vmx_set_nmi_blocking(cpu); } - case EXIT_REASON_INOUT: - { - uint32_t in =3D (exit_qual & 8) !=3D 0; - uint32_t size =3D (exit_qual & 7) + 1; - uint32_t string =3D (exit_qual & 16) !=3D 0; - uint32_t port =3D exit_qual >> 16; - //uint32_t rep =3D (exit_qual & 0x20) !=3D 0; =20 -#if 1 - if (!string && in) { - uint64_t val =3D 0; - load_regs(cpu); - hvf_handle_io(env, port, &val, 0, size, 1); - if (size =3D=3D 1) AL(cpu) =3D val; - else if (size =3D=3D 2) AX(cpu) =3D val; - else if (size =3D=3D 4) RAX(cpu) =3D (uint32_t)val; - else VM_PANIC("size"); - RIP(cpu) +=3D ins_len; - store_regs(cpu); - break; - } else if (!string && !in) { - RAX(cpu) =3D rreg(cpu->hvf_fd, HV_X86_RAX); - hvf_handle_io(env, port, &RAX(cpu), 1, size, 1); - macvm_set_rip(cpu, rip + ins_len); - break; - } -#endif + slot =3D hvf_find_overlap_slot(gpa, gpa); + /* mmio */ + if (ept_emulation_fault(exit_qual) && !slot) { struct x86_decode decode; =20 load_regs(cpu); cpu->hvf_x86->fetch_rip =3D rip; =20 decode_instruction(cpu, &decode); - VM_PANIC_ON(ins_len !=3D decode.len); exec_instruction(cpu, &decode); store_regs(cpu); - - break; - } - case EXIT_REASON_CPUID: { - uint32_t rax =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX); - uint32_t rbx =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RBX); - uint32_t rcx =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX); - uint32_t rdx =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX); - - get_cpuid_func(cpu, rax, rcx, &rax, &rbx, &rcx, &rdx); - - wreg(cpu->hvf_fd, HV_X86_RAX, rax); - wreg(cpu->hvf_fd, HV_X86_RBX, rbx); - wreg(cpu->hvf_fd, HV_X86_RCX, rcx); - wreg(cpu->hvf_fd, HV_X86_RDX, rdx); - - macvm_set_rip(cpu, rip + ins_len); break; } - case EXIT_REASON_XSETBV: { - X86CPU *x86_cpu =3D 
X86_CPU(cpu); - CPUX86State *env =3D &x86_cpu->env; - uint32_t eax =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX); - uint32_t ecx =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX); - uint32_t edx =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX); - - if (ecx) { - macvm_set_rip(cpu, rip + ins_len); - break; - } - env->xcr0 =3D ((uint64_t)edx << 32) | eax; - wreg(cpu->hvf_fd, HV_X86_XCR0, env->xcr0 | 1); - macvm_set_rip(cpu, rip + ins_len); - break; - } - case EXIT_REASON_INTR_WINDOW: - vmx_clear_int_window_exiting(cpu); - ret =3D EXCP_INTERRUPT; - break; - case EXIT_REASON_NMI_WINDOW: - vmx_clear_nmi_window_exiting(cpu); - ret =3D EXCP_INTERRUPT; - break; - case EXIT_REASON_EXT_INTR: - /* force exit and allow io handling */ - ret =3D EXCP_INTERRUPT; - break; - case EXIT_REASON_RDMSR: - case EXIT_REASON_WRMSR: - { - load_regs(cpu); - if (exit_reason =3D=3D EXIT_REASON_RDMSR) - simulate_rdmsr(cpu); - else - simulate_wrmsr(cpu); - RIP(cpu) +=3D rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRUCTION_LEN= GTH); - store_regs(cpu); - break; - } - case EXIT_REASON_CR_ACCESS: { - int cr; - int reg; +#ifdef DIRTY_VGA_TRACKING + /* TODO: handle dirty page tracking */ +#endif + break; + } + case EXIT_REASON_INOUT: + { + uint32_t in =3D (exit_qual & 8) !=3D 0; + uint32_t size =3D (exit_qual & 7) + 1; + uint32_t string =3D (exit_qual & 16) !=3D 0; + uint32_t port =3D exit_qual >> 16; + /*uint32_t rep =3D (exit_qual & 0x20) !=3D 0;*/ =20 +#if 1 + if (!string && in) { + uint64_t val =3D 0; load_regs(cpu); - cr =3D exit_qual & 15; - reg =3D (exit_qual >> 8) & 15; - - switch (cr) { - case 0x0: { - macvm_set_cr0(cpu->hvf_fd, RRX(cpu, reg)); - break; - } - case 4: { - macvm_set_cr4(cpu->hvf_fd, RRX(cpu, reg)); - break; - } - case 8: { - X86CPU *x86_cpu =3D X86_CPU(cpu); - if (exit_qual & 0x10) { - RRX(cpu, reg) =3D cpu_get_apic_tpr(x86_cpu->ap= ic_state); - } - else { - int tpr =3D RRX(cpu, reg); - cpu_set_apic_tpr(x86_cpu->apic_state, tpr); - ret =3D EXCP_INTERRUPT; - } - break; - } - default: - fprintf(stderr, "Unrecognized CR %d\n", cr); - abort(); + hvf_handle_io(env, port, &val, 0, size, 1); + if (size =3D=3D 1) { + AL(cpu) =3D val; + } else if (size =3D=3D 2) { + AX(cpu) =3D val; + } else if (size =3D=3D 4) { + RAX(cpu) =3D (uint32_t)val; + } else { + VM_PANIC("size"); } RIP(cpu) +=3D ins_len; store_regs(cpu); break; + } else if (!string && !in) { + RAX(cpu) =3D rreg(cpu->hvf_fd, HV_X86_RAX); + hvf_handle_io(env, port, &RAX(cpu), 1, size, 1); + macvm_set_rip(cpu, rip + ins_len); + break; } - case EXIT_REASON_APIC_ACCESS: { // TODO - struct x86_decode decode; +#endif + struct x86_decode decode; =20 - load_regs(cpu); - cpu->hvf_x86->fetch_rip =3D rip; + load_regs(cpu); + cpu->hvf_x86->fetch_rip =3D rip; =20 - decode_instruction(cpu, &decode); - exec_instruction(cpu, &decode); - store_regs(cpu); + decode_instruction(cpu, &decode); + VM_PANIC_ON(ins_len !=3D decode.len); + exec_instruction(cpu, &decode); + store_regs(cpu); + + break; + } + case EXIT_REASON_CPUID: { + uint32_t rax =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX); + uint32_t rbx =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RBX); + uint32_t rcx =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX); + uint32_t rdx =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX); + + cpu_x86_cpuid(env, rax, rcx, &rax, &rbx, &rcx, &rdx); + + wreg(cpu->hvf_fd, HV_X86_RAX, rax); + wreg(cpu->hvf_fd, HV_X86_RBX, rbx); + wreg(cpu->hvf_fd, HV_X86_RCX, rcx); + wreg(cpu->hvf_fd, HV_X86_RDX, rdx); + + macvm_set_rip(cpu, rip + ins_len); + break; + } + case EXIT_REASON_XSETBV: { + X86CPU *x86_cpu =3D X86_CPU(cpu); + CPUX86State 
*env =3D &x86_cpu->env; + uint32_t eax =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX); + uint32_t ecx =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX); + uint32_t edx =3D (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX); + + if (ecx) { + macvm_set_rip(cpu, rip + ins_len); break; } - case EXIT_REASON_TPR: { - ret =3D 1; - break; + env->xcr0 =3D ((uint64_t)edx << 32) | eax; + wreg(cpu->hvf_fd, HV_X86_XCR0, env->xcr0 | 1); + macvm_set_rip(cpu, rip + ins_len); + break; + } + case EXIT_REASON_INTR_WINDOW: + vmx_clear_int_window_exiting(cpu); + ret =3D EXCP_INTERRUPT; + break; + case EXIT_REASON_NMI_WINDOW: + vmx_clear_nmi_window_exiting(cpu); + ret =3D EXCP_INTERRUPT; + break; + case EXIT_REASON_EXT_INTR: + /* force exit and allow io handling */ + ret =3D EXCP_INTERRUPT; + break; + case EXIT_REASON_RDMSR: + case EXIT_REASON_WRMSR: + { + load_regs(cpu); + if (exit_reason =3D=3D EXIT_REASON_RDMSR) { + simulate_rdmsr(cpu); + } else { + simulate_wrmsr(cpu); } - case EXIT_REASON_TASK_SWITCH: { - uint64_t vinfo =3D rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_I= NFO); - x68_segment_selector sel =3D {.sel =3D exit_qual & 0xffff}; - vmx_handle_task_switch(cpu, sel, (exit_qual >> 30) & 0x3, - vinfo & VMCS_INTR_VALID, vinfo & VECTORING_INFO_VECTOR_MA= SK, vinfo & VMCS_INTR_T_MASK); + RIP(cpu) +=3D rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRUCTION_LENGTH); + store_regs(cpu); + break; + } + case EXIT_REASON_CR_ACCESS: { + int cr; + int reg; + + load_regs(cpu); + cr =3D exit_qual & 15; + reg =3D (exit_qual >> 8) & 15; + + switch (cr) { + case 0x0: { + macvm_set_cr0(cpu->hvf_fd, RRX(cpu, reg)); break; } - case EXIT_REASON_TRIPLE_FAULT: { - //addr_t gpa =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_PHYSICAL_AD= DRESS); - qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET); - usleep(1000 * 100); - ret =3D EXCP_INTERRUPT; + case 4: { + macvm_set_cr4(cpu->hvf_fd, RRX(cpu, reg)); break; } - case EXIT_REASON_RDPMC: - wreg(cpu->hvf_fd, HV_X86_RAX, 0); - wreg(cpu->hvf_fd, HV_X86_RDX, 0); - macvm_set_rip(cpu, rip + ins_len); - break; - case VMX_REASON_VMCALL: - // TODO: maybe just take this out? 
- // if (g_hypervisor_iface) { - // load_regs(cpu); - // g_hypervisor_iface->hypercall_handler(cpu); - // RIP(cpu) +=3D rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRUCT= ION_LENGTH); - // store_regs(cpu); - // } + case 8: { + X86CPU *x86_cpu =3D X86_CPU(cpu); + if (exit_qual & 0x10) { + RRX(cpu, reg) =3D cpu_get_apic_tpr(x86_cpu->apic_state= ); + } else { + int tpr =3D RRX(cpu, reg); + cpu_set_apic_tpr(x86_cpu->apic_state, tpr); + ret =3D EXCP_INTERRUPT; + } break; + } default: - fprintf(stderr, "%llx: unhandled exit %llx\n", rip, exit_r= eason); + error_report("Unrecognized CR %d\n", cr); + abort(); + } + RIP(cpu) +=3D ins_len; + store_regs(cpu); + break; + } + case EXIT_REASON_APIC_ACCESS: { /* TODO */ + struct x86_decode decode; + + load_regs(cpu); + cpu->hvf_x86->fetch_rip =3D rip; + + decode_instruction(cpu, &decode); + exec_instruction(cpu, &decode); + store_regs(cpu); + break; + } + case EXIT_REASON_TPR: { + ret =3D 1; + break; + } + case EXIT_REASON_TASK_SWITCH: { + uint64_t vinfo =3D rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_INFO); + x68_segment_selector sel =3D {.sel =3D exit_qual & 0xffff}; + vmx_handle_task_switch(cpu, sel, (exit_qual >> 30) & 0x3, + vinfo & VMCS_INTR_VALID, vinfo & VECTORING_INFO_VECTOR_MASK, = vinfo + & VMCS_INTR_T_MASK); + break; + } + case EXIT_REASON_TRIPLE_FAULT: { + qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET); + ret =3D EXCP_INTERRUPT; + break; + } + case EXIT_REASON_RDPMC: + wreg(cpu->hvf_fd, HV_X86_RAX, 0); + wreg(cpu->hvf_fd, HV_X86_RDX, 0); + macvm_set_rip(cpu, rip + ins_len); + break; + case VMX_REASON_VMCALL: + /* TODO: inject #GP fault */ + break; + default: + error_report("%llx: unhandled exit %llx\n", rip, exit_reason); } } while (ret =3D=3D 0); =20 diff --git a/target/i386/hvf-i386.h b/target/i386/hvf-i386.h index f3f958058a..797718ce34 100644 --- a/target/i386/hvf-i386.h +++ b/target/i386/hvf-i386.h @@ -41,7 +41,7 @@ struct hvf_state { /* Functions exported to host specific mode */ =20 /* Host specific functions */ -int hvf_inject_interrupt(CPUArchState * env, int vector); +int hvf_inject_interrupt(CPUArchState *env, int vector); int hvf_vcpu_run(struct hvf_vcpu_state *vcpu); #endif =20 diff --git a/target/i386/hvf-utils/vmcs.h b/target/i386/hvf-utils/vmcs.h index 6f7ccb361a..c410dcfaaa 100644 --- a/target/i386/hvf-utils/vmcs.h +++ b/target/i386/hvf-utils/vmcs.h @@ -27,326 +27,326 @@ */ =20 #ifndef _VMCS_H_ -#define _VMCS_H_ +#define _VMCS_H_ =20 #include #include =20 -#define VMCS_INITIAL 0xffffffffffffffff +#define VMCS_INITIAL 0xffffffffffffffff =20 -#define VMCS_IDENT(encoding) ((encoding) | 0x80000000) +#define VMCS_IDENT(encoding) ((encoding) | 0x80000000) /* * VMCS field encodings from Appendix H, Intel Architecture Manual Vol3B. 
*/ -#define VMCS_INVALID_ENCODING 0xffffffff +#define VMCS_INVALID_ENCODING 0xffffffff =20 /* 16-bit control fields */ -#define VMCS_VPID 0x00000000 -#define VMCS_PIR_VECTOR 0x00000002 +#define VMCS_VPID 0x00000000 +#define VMCS_PIR_VECTOR 0x00000002 =20 /* 16-bit guest-state fields */ -#define VMCS_GUEST_ES_SELECTOR 0x00000800 -#define VMCS_GUEST_CS_SELECTOR 0x00000802 -#define VMCS_GUEST_SS_SELECTOR 0x00000804 -#define VMCS_GUEST_DS_SELECTOR 0x00000806 -#define VMCS_GUEST_FS_SELECTOR 0x00000808 -#define VMCS_GUEST_GS_SELECTOR 0x0000080A -#define VMCS_GUEST_LDTR_SELECTOR 0x0000080C -#define VMCS_GUEST_TR_SELECTOR 0x0000080E -#define VMCS_GUEST_INTR_STATUS 0x00000810 +#define VMCS_GUEST_ES_SELECTOR 0x00000800 +#define VMCS_GUEST_CS_SELECTOR 0x00000802 +#define VMCS_GUEST_SS_SELECTOR 0x00000804 +#define VMCS_GUEST_DS_SELECTOR 0x00000806 +#define VMCS_GUEST_FS_SELECTOR 0x00000808 +#define VMCS_GUEST_GS_SELECTOR 0x0000080A +#define VMCS_GUEST_LDTR_SELECTOR 0x0000080C +#define VMCS_GUEST_TR_SELECTOR 0x0000080E +#define VMCS_GUEST_INTR_STATUS 0x00000810 =20 /* 16-bit host-state fields */ -#define VMCS_HOST_ES_SELECTOR 0x00000C00 -#define VMCS_HOST_CS_SELECTOR 0x00000C02 -#define VMCS_HOST_SS_SELECTOR 0x00000C04 -#define VMCS_HOST_DS_SELECTOR 0x00000C06 -#define VMCS_HOST_FS_SELECTOR 0x00000C08 -#define VMCS_HOST_GS_SELECTOR 0x00000C0A -#define VMCS_HOST_TR_SELECTOR 0x00000C0C +#define VMCS_HOST_ES_SELECTOR 0x00000C00 +#define VMCS_HOST_CS_SELECTOR 0x00000C02 +#define VMCS_HOST_SS_SELECTOR 0x00000C04 +#define VMCS_HOST_DS_SELECTOR 0x00000C06 +#define VMCS_HOST_FS_SELECTOR 0x00000C08 +#define VMCS_HOST_GS_SELECTOR 0x00000C0A +#define VMCS_HOST_TR_SELECTOR 0x00000C0C =20 /* 64-bit control fields */ -#define VMCS_IO_BITMAP_A 0x00002000 -#define VMCS_IO_BITMAP_B 0x00002002 -#define VMCS_MSR_BITMAP 0x00002004 -#define VMCS_EXIT_MSR_STORE 0x00002006 -#define VMCS_EXIT_MSR_LOAD 0x00002008 -#define VMCS_ENTRY_MSR_LOAD 0x0000200A -#define VMCS_EXECUTIVE_VMCS 0x0000200C -#define VMCS_TSC_OFFSET 0x00002010 -#define VMCS_VIRTUAL_APIC 0x00002012 -#define VMCS_APIC_ACCESS 0x00002014 -#define VMCS_PIR_DESC 0x00002016 -#define VMCS_EPTP 0x0000201A -#define VMCS_EOI_EXIT0 0x0000201C -#define VMCS_EOI_EXIT1 0x0000201E -#define VMCS_EOI_EXIT2 0x00002020 -#define VMCS_EOI_EXIT3 0x00002022 -#define VMCS_EOI_EXIT(vector) (VMCS_EOI_EXIT0 + ((vector) / 64) * 2) +#define VMCS_IO_BITMAP_A 0x00002000 +#define VMCS_IO_BITMAP_B 0x00002002 +#define VMCS_MSR_BITMAP 0x00002004 +#define VMCS_EXIT_MSR_STORE 0x00002006 +#define VMCS_EXIT_MSR_LOAD 0x00002008 +#define VMCS_ENTRY_MSR_LOAD 0x0000200A +#define VMCS_EXECUTIVE_VMCS 0x0000200C +#define VMCS_TSC_OFFSET 0x00002010 +#define VMCS_VIRTUAL_APIC 0x00002012 +#define VMCS_APIC_ACCESS 0x00002014 +#define VMCS_PIR_DESC 0x00002016 +#define VMCS_EPTP 0x0000201A +#define VMCS_EOI_EXIT0 0x0000201C +#define VMCS_EOI_EXIT1 0x0000201E +#define VMCS_EOI_EXIT2 0x00002020 +#define VMCS_EOI_EXIT3 0x00002022 +#define VMCS_EOI_EXIT(vector) (VMCS_EOI_EXIT0 + ((vector) / 64) * 2) =20 /* 64-bit read-only fields */ -#define VMCS_GUEST_PHYSICAL_ADDRESS 0x00002400 +#define VMCS_GUEST_PHYSICAL_ADDRESS 0x00002400 =20 /* 64-bit guest-state fields */ -#define VMCS_LINK_POINTER 0x00002800 -#define VMCS_GUEST_IA32_DEBUGCTL 0x00002802 -#define VMCS_GUEST_IA32_PAT 0x00002804 -#define VMCS_GUEST_IA32_EFER 0x00002806 -#define VMCS_GUEST_IA32_PERF_GLOBAL_CTRL 0x00002808 -#define VMCS_GUEST_PDPTE0 0x0000280A -#define VMCS_GUEST_PDPTE1 0x0000280C -#define VMCS_GUEST_PDPTE2 0x0000280E -#define VMCS_GUEST_PDPTE3 
0x00002810 +#define VMCS_LINK_POINTER 0x00002800 +#define VMCS_GUEST_IA32_DEBUGCTL 0x00002802 +#define VMCS_GUEST_IA32_PAT 0x00002804 +#define VMCS_GUEST_IA32_EFER 0x00002806 +#define VMCS_GUEST_IA32_PERF_GLOBAL_CTRL 0x00002808 +#define VMCS_GUEST_PDPTE0 0x0000280A +#define VMCS_GUEST_PDPTE1 0x0000280C +#define VMCS_GUEST_PDPTE2 0x0000280E +#define VMCS_GUEST_PDPTE3 0x00002810 =20 /* 64-bit host-state fields */ -#define VMCS_HOST_IA32_PAT 0x00002C00 -#define VMCS_HOST_IA32_EFER 0x00002C02 -#define VMCS_HOST_IA32_PERF_GLOBAL_CTRL 0x00002C04 +#define VMCS_HOST_IA32_PAT 0x00002C00 +#define VMCS_HOST_IA32_EFER 0x00002C02 +#define VMCS_HOST_IA32_PERF_GLOBAL_CTRL 0x00002C04 =20 /* 32-bit control fields */ -#define VMCS_PIN_BASED_CTLS 0x00004000 -#define VMCS_PRI_PROC_BASED_CTLS 0x00004002 -#define VMCS_EXCEPTION_BITMAP 0x00004004 -#define VMCS_PF_ERROR_MASK 0x00004006 -#define VMCS_PF_ERROR_MATCH 0x00004008 -#define VMCS_CR3_TARGET_COUNT 0x0000400A -#define VMCS_EXIT_CTLS 0x0000400C -#define VMCS_EXIT_MSR_STORE_COUNT 0x0000400E -#define VMCS_EXIT_MSR_LOAD_COUNT 0x00004010 -#define VMCS_ENTRY_CTLS 0x00004012 -#define VMCS_ENTRY_MSR_LOAD_COUNT 0x00004014 -#define VMCS_ENTRY_INTR_INFO 0x00004016 -#define VMCS_ENTRY_EXCEPTION_ERROR 0x00004018 -#define VMCS_ENTRY_INST_LENGTH 0x0000401A -#define VMCS_TPR_THRESHOLD 0x0000401C -#define VMCS_SEC_PROC_BASED_CTLS 0x0000401E -#define VMCS_PLE_GAP 0x00004020 -#define VMCS_PLE_WINDOW 0x00004022 +#define VMCS_PIN_BASED_CTLS 0x00004000 +#define VMCS_PRI_PROC_BASED_CTLS 0x00004002 +#define VMCS_EXCEPTION_BITMAP 0x00004004 +#define VMCS_PF_ERROR_MASK 0x00004006 +#define VMCS_PF_ERROR_MATCH 0x00004008 +#define VMCS_CR3_TARGET_COUNT 0x0000400A +#define VMCS_EXIT_CTLS 0x0000400C +#define VMCS_EXIT_MSR_STORE_COUNT 0x0000400E +#define VMCS_EXIT_MSR_LOAD_COUNT 0x00004010 +#define VMCS_ENTRY_CTLS 0x00004012 +#define VMCS_ENTRY_MSR_LOAD_COUNT 0x00004014 +#define VMCS_ENTRY_INTR_INFO 0x00004016 +#define VMCS_ENTRY_EXCEPTION_ERROR 0x00004018 +#define VMCS_ENTRY_INST_LENGTH 0x0000401A +#define VMCS_TPR_THRESHOLD 0x0000401C +#define VMCS_SEC_PROC_BASED_CTLS 0x0000401E +#define VMCS_PLE_GAP 0x00004020 +#define VMCS_PLE_WINDOW 0x00004022 =20 /* 32-bit read-only data fields */ -#define VMCS_INSTRUCTION_ERROR 0x00004400 -#define VMCS_EXIT_REASON 0x00004402 -#define VMCS_EXIT_INTR_INFO 0x00004404 -#define VMCS_EXIT_INTR_ERRCODE 0x00004406 -#define VMCS_IDT_VECTORING_INFO 0x00004408 -#define VMCS_IDT_VECTORING_ERROR 0x0000440A -#define VMCS_EXIT_INSTRUCTION_LENGTH 0x0000440C -#define VMCS_EXIT_INSTRUCTION_INFO 0x0000440E +#define VMCS_INSTRUCTION_ERROR 0x00004400 +#define VMCS_EXIT_REASON 0x00004402 +#define VMCS_EXIT_INTR_INFO 0x00004404 +#define VMCS_EXIT_INTR_ERRCODE 0x00004406 +#define VMCS_IDT_VECTORING_INFO 0x00004408 +#define VMCS_IDT_VECTORING_ERROR 0x0000440A +#define VMCS_EXIT_INSTRUCTION_LENGTH 0x0000440C +#define VMCS_EXIT_INSTRUCTION_INFO 0x0000440E =20 /* 32-bit guest-state fields */ -#define VMCS_GUEST_ES_LIMIT 0x00004800 -#define VMCS_GUEST_CS_LIMIT 0x00004802 -#define VMCS_GUEST_SS_LIMIT 0x00004804 -#define VMCS_GUEST_DS_LIMIT 0x00004806 -#define VMCS_GUEST_FS_LIMIT 0x00004808 -#define VMCS_GUEST_GS_LIMIT 0x0000480A -#define VMCS_GUEST_LDTR_LIMIT 0x0000480C -#define VMCS_GUEST_TR_LIMIT 0x0000480E -#define VMCS_GUEST_GDTR_LIMIT 0x00004810 -#define VMCS_GUEST_IDTR_LIMIT 0x00004812 -#define VMCS_GUEST_ES_ACCESS_RIGHTS 0x00004814 -#define VMCS_GUEST_CS_ACCESS_RIGHTS 0x00004816 -#define VMCS_GUEST_SS_ACCESS_RIGHTS 0x00004818 -#define VMCS_GUEST_DS_ACCESS_RIGHTS 
0x0000481A -#define VMCS_GUEST_FS_ACCESS_RIGHTS 0x0000481C -#define VMCS_GUEST_GS_ACCESS_RIGHTS 0x0000481E -#define VMCS_GUEST_LDTR_ACCESS_RIGHTS 0x00004820 -#define VMCS_GUEST_TR_ACCESS_RIGHTS 0x00004822 -#define VMCS_GUEST_INTERRUPTIBILITY 0x00004824 -#define VMCS_GUEST_ACTIVITY 0x00004826 -#define VMCS_GUEST_SMBASE 0x00004828 -#define VMCS_GUEST_IA32_SYSENTER_CS 0x0000482A -#define VMCS_PREEMPTION_TIMER_VALUE 0x0000482E +#define VMCS_GUEST_ES_LIMIT 0x00004800 +#define VMCS_GUEST_CS_LIMIT 0x00004802 +#define VMCS_GUEST_SS_LIMIT 0x00004804 +#define VMCS_GUEST_DS_LIMIT 0x00004806 +#define VMCS_GUEST_FS_LIMIT 0x00004808 +#define VMCS_GUEST_GS_LIMIT 0x0000480A +#define VMCS_GUEST_LDTR_LIMIT 0x0000480C +#define VMCS_GUEST_TR_LIMIT 0x0000480E +#define VMCS_GUEST_GDTR_LIMIT 0x00004810 +#define VMCS_GUEST_IDTR_LIMIT 0x00004812 +#define VMCS_GUEST_ES_ACCESS_RIGHTS 0x00004814 +#define VMCS_GUEST_CS_ACCESS_RIGHTS 0x00004816 +#define VMCS_GUEST_SS_ACCESS_RIGHTS 0x00004818 +#define VMCS_GUEST_DS_ACCESS_RIGHTS 0x0000481A +#define VMCS_GUEST_FS_ACCESS_RIGHTS 0x0000481C +#define VMCS_GUEST_GS_ACCESS_RIGHTS 0x0000481E +#define VMCS_GUEST_LDTR_ACCESS_RIGHTS 0x00004820 +#define VMCS_GUEST_TR_ACCESS_RIGHTS 0x00004822 +#define VMCS_GUEST_INTERRUPTIBILITY 0x00004824 +#define VMCS_GUEST_ACTIVITY 0x00004826 +#define VMCS_GUEST_SMBASE 0x00004828 +#define VMCS_GUEST_IA32_SYSENTER_CS 0x0000482A +#define VMCS_PREEMPTION_TIMER_VALUE 0x0000482E =20 /* 32-bit host state fields */ -#define VMCS_HOST_IA32_SYSENTER_CS 0x00004C00 +#define VMCS_HOST_IA32_SYSENTER_CS 0x00004C00 =20 /* Natural Width control fields */ -#define VMCS_CR0_MASK 0x00006000 -#define VMCS_CR4_MASK 0x00006002 -#define VMCS_CR0_SHADOW 0x00006004 -#define VMCS_CR4_SHADOW 0x00006006 -#define VMCS_CR3_TARGET0 0x00006008 -#define VMCS_CR3_TARGET1 0x0000600A -#define VMCS_CR3_TARGET2 0x0000600C -#define VMCS_CR3_TARGET3 0x0000600E +#define VMCS_CR0_MASK 0x00006000 +#define VMCS_CR4_MASK 0x00006002 +#define VMCS_CR0_SHADOW 0x00006004 +#define VMCS_CR4_SHADOW 0x00006006 +#define VMCS_CR3_TARGET0 0x00006008 +#define VMCS_CR3_TARGET1 0x0000600A +#define VMCS_CR3_TARGET2 0x0000600C +#define VMCS_CR3_TARGET3 0x0000600E =20 /* Natural Width read-only fields */ -#define VMCS_EXIT_QUALIFICATION 0x00006400 -#define VMCS_IO_RCX 0x00006402 -#define VMCS_IO_RSI 0x00006404 -#define VMCS_IO_RDI 0x00006406 -#define VMCS_IO_RIP 0x00006408 -#define VMCS_GUEST_LINEAR_ADDRESS 0x0000640A +#define VMCS_EXIT_QUALIFICATION 0x00006400 +#define VMCS_IO_RCX 0x00006402 +#define VMCS_IO_RSI 0x00006404 +#define VMCS_IO_RDI 0x00006406 +#define VMCS_IO_RIP 0x00006408 +#define VMCS_GUEST_LINEAR_ADDRESS 0x0000640A =20 /* Natural Width guest-state fields */ -#define VMCS_GUEST_CR0 0x00006800 -#define VMCS_GUEST_CR3 0x00006802 -#define VMCS_GUEST_CR4 0x00006804 -#define VMCS_GUEST_ES_BASE 0x00006806 -#define VMCS_GUEST_CS_BASE 0x00006808 -#define VMCS_GUEST_SS_BASE 0x0000680A -#define VMCS_GUEST_DS_BASE 0x0000680C -#define VMCS_GUEST_FS_BASE 0x0000680E -#define VMCS_GUEST_GS_BASE 0x00006810 -#define VMCS_GUEST_LDTR_BASE 0x00006812 -#define VMCS_GUEST_TR_BASE 0x00006814 -#define VMCS_GUEST_GDTR_BASE 0x00006816 -#define VMCS_GUEST_IDTR_BASE 0x00006818 -#define VMCS_GUEST_DR7 0x0000681A -#define VMCS_GUEST_RSP 0x0000681C -#define VMCS_GUEST_RIP 0x0000681E -#define VMCS_GUEST_RFLAGS 0x00006820 -#define VMCS_GUEST_PENDING_DBG_EXCEPTIONS 0x00006822 -#define VMCS_GUEST_IA32_SYSENTER_ESP 0x00006824 -#define VMCS_GUEST_IA32_SYSENTER_EIP 0x00006826 +#define VMCS_GUEST_CR0 0x00006800 +#define 
VMCS_GUEST_CR3 0x00006802 +#define VMCS_GUEST_CR4 0x00006804 +#define VMCS_GUEST_ES_BASE 0x00006806 +#define VMCS_GUEST_CS_BASE 0x00006808 +#define VMCS_GUEST_SS_BASE 0x0000680A +#define VMCS_GUEST_DS_BASE 0x0000680C +#define VMCS_GUEST_FS_BASE 0x0000680E +#define VMCS_GUEST_GS_BASE 0x00006810 +#define VMCS_GUEST_LDTR_BASE 0x00006812 +#define VMCS_GUEST_TR_BASE 0x00006814 +#define VMCS_GUEST_GDTR_BASE 0x00006816 +#define VMCS_GUEST_IDTR_BASE 0x00006818 +#define VMCS_GUEST_DR7 0x0000681A +#define VMCS_GUEST_RSP 0x0000681C +#define VMCS_GUEST_RIP 0x0000681E +#define VMCS_GUEST_RFLAGS 0x00006820 +#define VMCS_GUEST_PENDING_DBG_EXCEPTIONS 0x00006822 +#define VMCS_GUEST_IA32_SYSENTER_ESP 0x00006824 +#define VMCS_GUEST_IA32_SYSENTER_EIP 0x00006826 =20 /* Natural Width host-state fields */ -#define VMCS_HOST_CR0 0x00006C00 -#define VMCS_HOST_CR3 0x00006C02 -#define VMCS_HOST_CR4 0x00006C04 -#define VMCS_HOST_FS_BASE 0x00006C06 -#define VMCS_HOST_GS_BASE 0x00006C08 -#define VMCS_HOST_TR_BASE 0x00006C0A -#define VMCS_HOST_GDTR_BASE 0x00006C0C -#define VMCS_HOST_IDTR_BASE 0x00006C0E -#define VMCS_HOST_IA32_SYSENTER_ESP 0x00006C10 -#define VMCS_HOST_IA32_SYSENTER_EIP 0x00006C12 -#define VMCS_HOST_RSP 0x00006C14 -#define VMCS_HOST_RIP 0x00006c16 +#define VMCS_HOST_CR0 0x00006C00 +#define VMCS_HOST_CR3 0x00006C02 +#define VMCS_HOST_CR4 0x00006C04 +#define VMCS_HOST_FS_BASE 0x00006C06 +#define VMCS_HOST_GS_BASE 0x00006C08 +#define VMCS_HOST_TR_BASE 0x00006C0A +#define VMCS_HOST_GDTR_BASE 0x00006C0C +#define VMCS_HOST_IDTR_BASE 0x00006C0E +#define VMCS_HOST_IA32_SYSENTER_ESP 0x00006C10 +#define VMCS_HOST_IA32_SYSENTER_EIP 0x00006C12 +#define VMCS_HOST_RSP 0x00006C14 +#define VMCS_HOST_RIP 0x00006c16 =20 /* * VM instruction error numbers */ -#define VMRESUME_WITH_NON_LAUNCHED_VMCS 5 +#define VMRESUME_WITH_NON_LAUNCHED_VMCS 5 =20 /* * VMCS exit reasons */ -#define EXIT_REASON_EXCEPTION 0 -#define EXIT_REASON_EXT_INTR 1 -#define EXIT_REASON_TRIPLE_FAULT 2 -#define EXIT_REASON_INIT 3 -#define EXIT_REASON_SIPI 4 -#define EXIT_REASON_IO_SMI 5 -#define EXIT_REASON_SMI 6 -#define EXIT_REASON_INTR_WINDOW 7 -#define EXIT_REASON_NMI_WINDOW 8 -#define EXIT_REASON_TASK_SWITCH 9 -#define EXIT_REASON_CPUID 10 -#define EXIT_REASON_GETSEC 11 -#define EXIT_REASON_HLT 12 -#define EXIT_REASON_INVD 13 -#define EXIT_REASON_INVLPG 14 -#define EXIT_REASON_RDPMC 15 -#define EXIT_REASON_RDTSC 16 -#define EXIT_REASON_RSM 17 -#define EXIT_REASON_VMCALL 18 -#define EXIT_REASON_VMCLEAR 19 -#define EXIT_REASON_VMLAUNCH 20 -#define EXIT_REASON_VMPTRLD 21 -#define EXIT_REASON_VMPTRST 22 -#define EXIT_REASON_VMREAD 23 -#define EXIT_REASON_VMRESUME 24 -#define EXIT_REASON_VMWRITE 25 -#define EXIT_REASON_VMXOFF 26 -#define EXIT_REASON_VMXON 27 -#define EXIT_REASON_CR_ACCESS 28 -#define EXIT_REASON_DR_ACCESS 29 -#define EXIT_REASON_INOUT 30 -#define EXIT_REASON_RDMSR 31 -#define EXIT_REASON_WRMSR 32 -#define EXIT_REASON_INVAL_VMCS 33 -#define EXIT_REASON_INVAL_MSR 34 -#define EXIT_REASON_MWAIT 36 -#define EXIT_REASON_MTF 37 -#define EXIT_REASON_MONITOR 39 -#define EXIT_REASON_PAUSE 40 -#define EXIT_REASON_MCE_DURING_ENTRY 41 -#define EXIT_REASON_TPR 43 -#define EXIT_REASON_APIC_ACCESS 44 -#define EXIT_REASON_VIRTUALIZED_EOI 45 -#define EXIT_REASON_GDTR_IDTR 46 -#define EXIT_REASON_LDTR_TR 47 -#define EXIT_REASON_EPT_FAULT 48 -#define EXIT_REASON_EPT_MISCONFIG 49 -#define EXIT_REASON_INVEPT 50 -#define EXIT_REASON_RDTSCP 51 -#define EXIT_REASON_VMX_PREEMPT 52 -#define EXIT_REASON_INVVPID 53 -#define EXIT_REASON_WBINVD 54 -#define 
EXIT_REASON_XSETBV 55
-#define EXIT_REASON_APIC_WRITE 56
+#define EXIT_REASON_EXCEPTION 0
+#define EXIT_REASON_EXT_INTR 1
+#define EXIT_REASON_TRIPLE_FAULT 2
+#define EXIT_REASON_INIT 3
+#define EXIT_REASON_SIPI 4
+#define EXIT_REASON_IO_SMI 5
+#define EXIT_REASON_SMI 6
+#define EXIT_REASON_INTR_WINDOW 7
+#define EXIT_REASON_NMI_WINDOW 8
+#define EXIT_REASON_TASK_SWITCH 9
+#define EXIT_REASON_CPUID 10
+#define EXIT_REASON_GETSEC 11
+#define EXIT_REASON_HLT 12
+#define EXIT_REASON_INVD 13
+#define EXIT_REASON_INVLPG 14
+#define EXIT_REASON_RDPMC 15
+#define EXIT_REASON_RDTSC 16
+#define EXIT_REASON_RSM 17
+#define EXIT_REASON_VMCALL 18
+#define EXIT_REASON_VMCLEAR 19
+#define EXIT_REASON_VMLAUNCH 20
+#define EXIT_REASON_VMPTRLD 21
+#define EXIT_REASON_VMPTRST 22
+#define EXIT_REASON_VMREAD 23
+#define EXIT_REASON_VMRESUME 24
+#define EXIT_REASON_VMWRITE 25
+#define EXIT_REASON_VMXOFF 26
+#define EXIT_REASON_VMXON 27
+#define EXIT_REASON_CR_ACCESS 28
+#define EXIT_REASON_DR_ACCESS 29
+#define EXIT_REASON_INOUT 30
+#define EXIT_REASON_RDMSR 31
+#define EXIT_REASON_WRMSR 32
+#define EXIT_REASON_INVAL_VMCS 33
+#define EXIT_REASON_INVAL_MSR 34
+#define EXIT_REASON_MWAIT 36
+#define EXIT_REASON_MTF 37
+#define EXIT_REASON_MONITOR 39
+#define EXIT_REASON_PAUSE 40
+#define EXIT_REASON_MCE_DURING_ENTRY 41
+#define EXIT_REASON_TPR 43
+#define EXIT_REASON_APIC_ACCESS 44
+#define EXIT_REASON_VIRTUALIZED_EOI 45
+#define EXIT_REASON_GDTR_IDTR 46
+#define EXIT_REASON_LDTR_TR 47
+#define EXIT_REASON_EPT_FAULT 48
+#define EXIT_REASON_EPT_MISCONFIG 49
+#define EXIT_REASON_INVEPT 50
+#define EXIT_REASON_RDTSCP 51
+#define EXIT_REASON_VMX_PREEMPT 52
+#define EXIT_REASON_INVVPID 53
+#define EXIT_REASON_WBINVD 54
+#define EXIT_REASON_XSETBV 55
+#define EXIT_REASON_APIC_WRITE 56
=20
 /*
  * NMI unblocking due to IRET.
  *
  * Applies to VM-exits due to hardware exception or EPT fault.
*/ -#define EXIT_QUAL_NMIUDTI (1 << 12) +#define EXIT_QUAL_NMIUDTI (1 << 12) /* * VMCS interrupt information fields */ -#define VMCS_INTR_VALID (1U << 31) -#define VMCS_INTR_T_MASK 0x700 /* Interruption-info type */ -#define VMCS_INTR_T_HWINTR (0 << 8) -#define VMCS_INTR_T_NMI (2 << 8) -#define VMCS_INTR_T_HWEXCEPTION (3 << 8) -#define VMCS_INTR_T_SWINTR (4 << 8) -#define VMCS_INTR_T_PRIV_SWEXCEPTION (5 << 8) -#define VMCS_INTR_T_SWEXCEPTION (6 << 8) -#define VMCS_INTR_DEL_ERRCODE (1 << 11) +#define VMCS_INTR_VALID (1U << 31) +#define VMCS_INTR_T_MASK 0x700 /* Interruption-info type */ +#define VMCS_INTR_T_HWINTR (0 << 8) +#define VMCS_INTR_T_NMI (2 << 8) +#define VMCS_INTR_T_HWEXCEPTION (3 << 8) +#define VMCS_INTR_T_SWINTR (4 << 8) +#define VMCS_INTR_T_PRIV_SWEXCEPTION (5 << 8) +#define VMCS_INTR_T_SWEXCEPTION (6 << 8) +#define VMCS_INTR_DEL_ERRCODE (1 << 11) =20 /* * VMCS IDT-Vectoring information fields */ -#define VMCS_IDT_VEC_VALID (1U << 31) -#define VMCS_IDT_VEC_TYPE 0x700 -#define VMCS_IDT_VEC_ERRCODE_VALID (1U << 11) -#define VMCS_IDT_VEC_HWINTR (0 << 8) -#define VMCS_IDT_VEC_NMI (2 << 8) -#define VMCS_IDT_VEC_HWEXCEPTION (3 << 8) -#define VMCS_IDT_VEC_SWINTR (4 << 8) +#define VMCS_IDT_VEC_VALID (1U << 31) +#define VMCS_IDT_VEC_TYPE 0x700 +#define VMCS_IDT_VEC_ERRCODE_VALID (1U << 11) +#define VMCS_IDT_VEC_HWINTR (0 << 8) +#define VMCS_IDT_VEC_NMI (2 << 8) +#define VMCS_IDT_VEC_HWEXCEPTION (3 << 8) +#define VMCS_IDT_VEC_SWINTR (4 << 8) =20 /* * VMCS Guest interruptibility field */ -#define VMCS_INTERRUPTIBILITY_STI_BLOCKING (1 << 0) -#define VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING (1 << 1) -#define VMCS_INTERRUPTIBILITY_SMI_BLOCKING (1 << 2) -#define VMCS_INTERRUPTIBILITY_NMI_BLOCKING (1 << 3) +#define VMCS_INTERRUPTIBILITY_STI_BLOCKING (1 << 0) +#define VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING (1 << 1) +#define VMCS_INTERRUPTIBILITY_SMI_BLOCKING (1 << 2) +#define VMCS_INTERRUPTIBILITY_NMI_BLOCKING (1 << 3) =20 /* * Exit qualification for EXIT_REASON_INVAL_VMCS */ -#define EXIT_QUAL_NMI_WHILE_STI_BLOCKING 3 +#define EXIT_QUAL_NMI_WHILE_STI_BLOCKING 3 =20 /* * Exit qualification for EPT violation */ -#define EPT_VIOLATION_DATA_READ (1UL << 0) -#define EPT_VIOLATION_DATA_WRITE (1UL << 1) -#define EPT_VIOLATION_INST_FETCH (1UL << 2) -#define EPT_VIOLATION_GPA_READABLE (1UL << 3) -#define EPT_VIOLATION_GPA_WRITEABLE (1UL << 4) -#define EPT_VIOLATION_GPA_EXECUTABLE (1UL << 5) -#define EPT_VIOLATION_GLA_VALID (1UL << 7) -#define EPT_VIOLATION_XLAT_VALID (1UL << 8) +#define EPT_VIOLATION_DATA_READ (1UL << 0) +#define EPT_VIOLATION_DATA_WRITE (1UL << 1) +#define EPT_VIOLATION_INST_FETCH (1UL << 2) +#define EPT_VIOLATION_GPA_READABLE (1UL << 3) +#define EPT_VIOLATION_GPA_WRITEABLE (1UL << 4) +#define EPT_VIOLATION_GPA_EXECUTABLE (1UL << 5) +#define EPT_VIOLATION_GLA_VALID (1UL << 7) +#define EPT_VIOLATION_XLAT_VALID (1UL << 8) =20 /* * Exit qualification for APIC-access VM exit */ -#define APIC_ACCESS_OFFSET(qual) ((qual) & 0xFFF) -#define APIC_ACCESS_TYPE(qual) (((qual) >> 12) & 0xF) +#define APIC_ACCESS_OFFSET(qual) ((qual) & 0xFFF) +#define APIC_ACCESS_TYPE(qual) (((qual) >> 12) & 0xF) =20 /* * Exit qualification for APIC-write VM exit */ -#define APIC_WRITE_OFFSET(qual) ((qual) & 0xFFF) +#define APIC_WRITE_OFFSET(qual) ((qual) & 0xFFF) =20 =20 -#define VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING (1 << 2) -#define VMCS_PRI_PROC_BASED_CTLS_TSC_OFFSET (1 << 3) -#define VMCS_PRI_PROC_BASED_CTLS_HLT (1 << 7) +#define VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING (1 << 2) +#define 
VMCS_PRI_PROC_BASED_CTLS_TSC_OFFSET (1 << 3) +#define VMCS_PRI_PROC_BASED_CTLS_HLT (1 << 7) #define VMCS_PRI_PROC_BASED_CTLS_MWAIT (1 << 10) #define VMCS_PRI_PROC_BASED_CTLS_TSC (1 << 12) #define VMCS_PRI_PROC_BASED_CTLS_CR8_LOAD (1 << 19) @@ -359,10 +359,10 @@ #define VMCS_PRI_PROC_BASED2_CTLS_X2APIC (1 << 4) =20 enum task_switch_reason { - TSR_CALL, - TSR_IRET, + TSR_CALL, + TSR_IRET, TSR_JMP, - TSR_IDT_GATE, /* task gate in IDT */ + TSR_IDT_GATE, /* task gate in IDT */ }; =20 #endif diff --git a/target/i386/hvf-utils/vmx.h b/target/i386/hvf-utils/vmx.h index e5359df87f..44a5c6d554 100644 --- a/target/i386/hvf-utils/vmx.h +++ b/target/i386/hvf-utils/vmx.h @@ -31,45 +31,45 @@ =20 #include "exec/address-spaces.h" =20 -static uint64_t inline rreg(hv_vcpuid_t vcpu, hv_x86_reg_t reg) +static inline uint64_t rreg(hv_vcpuid_t vcpu, hv_x86_reg_t reg) { - uint64_t v; + uint64_t v; =20 - if (hv_vcpu_read_register(vcpu, reg, &v)) { - abort(); - } + if (hv_vcpu_read_register(vcpu, reg, &v)) { + abort(); + } =20 - return v; + return v; } =20 /* write GPR */ -static void inline wreg(hv_vcpuid_t vcpu, hv_x86_reg_t reg, uint64_t v) +static inline void wreg(hv_vcpuid_t vcpu, hv_x86_reg_t reg, uint64_t v) { - if (hv_vcpu_write_register(vcpu, reg, v)) { - abort(); - } + if (hv_vcpu_write_register(vcpu, reg, v)) { + abort(); + } } =20 /* read VMCS field */ -static uint64_t inline rvmcs(hv_vcpuid_t vcpu, uint32_t field) +static inline uint64_t rvmcs(hv_vcpuid_t vcpu, uint32_t field) { - uint64_t v; + uint64_t v; =20 - hv_vmx_vcpu_read_vmcs(vcpu, field, &v); + hv_vmx_vcpu_read_vmcs(vcpu, field, &v); =20 - return v; + return v; } =20 /* write VMCS field */ -static void inline wvmcs(hv_vcpuid_t vcpu, uint32_t field, uint64_t v) +static inline void wvmcs(hv_vcpuid_t vcpu, uint32_t field, uint64_t v) { - hv_vmx_vcpu_write_vmcs(vcpu, field, v); + hv_vmx_vcpu_write_vmcs(vcpu, field, v); } =20 /* desired control word constrained by hardware/hypervisor capabilities */ -static uint64_t inline cap2ctrl(uint64_t cap, uint64_t ctrl) +static inline uint64_t cap2ctrl(uint64_t cap, uint64_t ctrl) { - return (ctrl | (cap & 0xffffffff)) & (cap >> 32); + return (ctrl | (cap & 0xffffffff)) & (cap >> 32); } =20 #define VM_ENTRY_GUEST_LMA (1LL << 9) @@ -91,11 +91,14 @@ static void enter_long_mode(hv_vcpuid_t vcpu, uint64_t = cr0, uint64_t efer) efer |=3D EFER_LMA; wvmcs(vcpu, VMCS_GUEST_IA32_EFER, efer); entry_ctls =3D rvmcs(vcpu, VMCS_ENTRY_CTLS); - wvmcs(vcpu, VMCS_ENTRY_CTLS, rvmcs(vcpu, VMCS_ENTRY_CTLS) | VM_ENTRY_G= UEST_LMA); + wvmcs(vcpu, VMCS_ENTRY_CTLS, rvmcs(vcpu, VMCS_ENTRY_CTLS) | + VM_ENTRY_GUEST_LMA); =20 uint64_t guest_tr_ar =3D rvmcs(vcpu, VMCS_GUEST_TR_ACCESS_RIGHTS); - if ((efer & EFER_LME) && (guest_tr_ar & AR_TYPE_MASK) !=3D AR_TYPE_BUS= Y_64_TSS) { - wvmcs(vcpu, VMCS_GUEST_TR_ACCESS_RIGHTS, (guest_tr_ar & ~AR_TYPE_M= ASK) | AR_TYPE_BUSY_64_TSS); + if ((efer & EFER_LME) && + (guest_tr_ar & AR_TYPE_MASK) !=3D AR_TYPE_BUSY_64_TSS) { + wvmcs(vcpu, VMCS_GUEST_TR_ACCESS_RIGHTS, + (guest_tr_ar & ~AR_TYPE_MASK) | AR_TYPE_BUSY_64_TSS); } } =20 @@ -110,39 +113,45 @@ static void exit_long_mode(hv_vcpuid_t vcpu, uint64_t= cr0, uint64_t efer) wvmcs(vcpu, VMCS_GUEST_IA32_EFER, efer); } =20 -static void inline macvm_set_cr0(hv_vcpuid_t vcpu, uint64_t cr0) +static inline void macvm_set_cr0(hv_vcpuid_t vcpu, uint64_t cr0) { int i; uint64_t pdpte[4] =3D {0, 0, 0, 0}; uint64_t efer =3D rvmcs(vcpu, VMCS_GUEST_IA32_EFER); uint64_t old_cr0 =3D rvmcs(vcpu, VMCS_GUEST_CR0); =20 - if ((cr0 & CR0_PG) && (rvmcs(vcpu, VMCS_GUEST_CR4) & 
CR4_PAE) && !(efe= r & EFER_LME)) - address_space_rw(&address_space_memory, rvmcs(vcpu, VMCS_GUEST_CR3= ) & ~0x1f, + if ((cr0 & CR0_PG) && (rvmcs(vcpu, VMCS_GUEST_CR4) & CR4_PAE) && + !(efer & EFER_LME)) { + address_space_rw(&address_space_memory, + rvmcs(vcpu, VMCS_GUEST_CR3) & ~0x1f, MEMTXATTRS_UNSPECIFIED, (uint8_t *)pdpte, 32, 0); + } =20 - for (i =3D 0; i < 4; i++) + for (i =3D 0; i < 4; i++) { wvmcs(vcpu, VMCS_GUEST_PDPTE0 + i * 2, pdpte[i]); + } =20 wvmcs(vcpu, VMCS_CR0_MASK, CR0_CD | CR0_NE | CR0_PG); wvmcs(vcpu, VMCS_CR0_SHADOW, cr0); =20 cr0 &=3D ~CR0_CD; - wvmcs(vcpu, VMCS_GUEST_CR0, cr0 | CR0_NE| CR0_ET); + wvmcs(vcpu, VMCS_GUEST_CR0, cr0 | CR0_NE | CR0_ET); =20 if (efer & EFER_LME) { - if (!(old_cr0 & CR0_PG) && (cr0 & CR0_PG)) - enter_long_mode(vcpu, cr0, efer); - if (/*(old_cr0 & CR0_PG) &&*/ !(cr0 & CR0_PG)) + if (!(old_cr0 & CR0_PG) && (cr0 & CR0_PG)) { + enter_long_mode(vcpu, cr0, efer); + } + if (/*(old_cr0 & CR0_PG) &&*/ !(cr0 & CR0_PG)) { exit_long_mode(vcpu, cr0, efer); + } } =20 hv_vcpu_invalidate_tlb(vcpu); hv_vcpu_flush(vcpu); } =20 -static void inline macvm_set_cr4(hv_vcpuid_t vcpu, uint64_t cr4) +static inline void macvm_set_cr4(hv_vcpuid_t vcpu, uint64_t cr4) { uint64_t guest_cr4 =3D cr4 | CR4_VMXE; =20 @@ -153,7 +162,7 @@ static void inline macvm_set_cr4(hv_vcpuid_t vcpu, uint= 64_t cr4) hv_vcpu_flush(vcpu); } =20 -static void inline macvm_set_rip(CPUState *cpu, uint64_t rip) +static inline void macvm_set_rip(CPUState *cpu, uint64_t rip) { uint64_t val; =20 @@ -162,39 +171,44 @@ static void inline macvm_set_rip(CPUState *cpu, uint6= 4_t rip) =20 /* after moving forward in rip, we need to clean INTERRUPTABILITY */ val =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY); - if (val & (VMCS_INTERRUPTIBILITY_STI_BLOCKING | VMCS_INTERRUPTIBILITY_M= OVSS_BLOCKING)) + if (val & (VMCS_INTERRUPTIBILITY_STI_BLOCKING | + VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)) { wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, - val & ~(VMCS_INTERRUPTIBILITY_STI_BLOCKING | VMCS_INTERRUPTI= BILITY_MOVSS_BLOCKING)); + val & ~(VMCS_INTERRUPTIBILITY_STI_BLOCKING | + VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)); + } } =20 -static void inline vmx_clear_nmi_blocking(CPUState *cpu) +static inline void vmx_clear_nmi_blocking(CPUState *cpu) { uint32_t gi =3D (uint32_t) rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBIL= ITY); gi &=3D ~VMCS_INTERRUPTIBILITY_NMI_BLOCKING; wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi); } =20 -static void inline vmx_set_nmi_blocking(CPUState *cpu) +static inline void vmx_set_nmi_blocking(CPUState *cpu) { uint32_t gi =3D (uint32_t)rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILI= TY); gi |=3D VMCS_INTERRUPTIBILITY_NMI_BLOCKING; wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi); } =20 -static void inline vmx_set_nmi_window_exiting(CPUState *cpu) +static inline void vmx_set_nmi_window_exiting(CPUState *cpu) { uint64_t val; val =3D rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS); - wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val | VMCS_PRI_PROC_BASED= _CTLS_NMI_WINDOW_EXITING); + wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val | + VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING); =20 } =20 -static void inline vmx_clear_nmi_window_exiting(CPUState *cpu) +static inline void vmx_clear_nmi_window_exiting(CPUState *cpu) { =20 uint64_t val; val =3D rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS); - wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val & ~VMCS_PRI_PROC_BASE= D_CTLS_NMI_WINDOW_EXITING); + wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val & + ~VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING); } 
=20
 #endif
diff --git a/target/i386/hvf-utils/x86.c b/target/i386/hvf-utils/x86.c
index 4debbff31c..045b520425 100644
--- a/target/i386/hvf-utils/x86.c
+++ b/target/i386/hvf-utils/x86.c
@@ -28,33 +28,36 @@
=20
 static uint32_t x86_segment_access_rights(struct x86_segment_descriptor *v=
ar)
 {
-    uint32_t ar;
-
-    if (!var->p) {
-        ar =3D 1 << 16;
-        return ar;
-    }
-
-    ar =3D var->type & 15;
-    ar |=3D (var->s & 1) << 4;
-    ar |=3D (var->dpl & 3) << 5;
-    ar |=3D (var->p & 1) << 7;
-    ar |=3D (var->avl & 1) << 12;
-    ar |=3D (var->l & 1) << 13;
-    ar |=3D (var->db & 1) << 14;
-    ar |=3D (var->g & 1) << 15;
-    return ar;
-}
-
-bool x86_read_segment_descriptor(struct CPUState *cpu, struct x86_segment_=
descriptor *desc, x68_segment_selector sel)
+    uint32_t ar;
+
+    if (!var->p) {
+        ar =3D 1 << 16;
+        return ar;
+    }
+
+    ar =3D var->type & 15;
+    ar |=3D (var->s & 1) << 4;
+    ar |=3D (var->dpl & 3) << 5;
+    ar |=3D (var->p & 1) << 7;
+    ar |=3D (var->avl & 1) << 12;
+    ar |=3D (var->l & 1) << 13;
+    ar |=3D (var->db & 1) << 14;
+    ar |=3D (var->g & 1) << 15;
+    return ar;
+}
+
+bool x86_read_segment_descriptor(struct CPUState *cpu,
+                                 struct x86_segment_descriptor *desc,
+                                 x68_segment_selector sel)
 {
     addr_t base;
     uint32_t limit;
=20
     ZERO_INIT(*desc);
-    // valid gdt descriptors start from index 1
-    if (!sel.index && GDT_SEL =3D=3D sel.ti)
+    /* valid gdt descriptors start from index 1 */
+    if (!sel.index && GDT_SEL =3D=3D sel.ti) {
         return false;
+    }
=20
     if (GDT_SEL =3D=3D sel.ti) {
         base =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE);
@@ -64,14 +67,17 @@ bool x86_read_segment_descriptor(struct CPUState *cpu, =
struct x86_segment_descri
         limit =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT);
     }
=20
-    if (sel.index * 8 >=3D limit)
+    if (sel.index * 8 >=3D limit) {
         return false;
+    }
=20
     vmx_read_mem(cpu, desc, base + sel.index * 8, sizeof(*desc));
     return true;
 }
=20
-bool x86_write_segment_descriptor(struct CPUState *cpu, struct x86_segment=
_descriptor *desc, x68_segment_selector sel)
+bool x86_write_segment_descriptor(struct CPUState *cpu,
+                                  struct x86_segment_descriptor *desc,
+                                  x68_segment_selector sel)
 {
     addr_t base;
     uint32_t limit;
@@ -85,21 +91,22 @@ bool x86_write_segment_descriptor(struct CPUState *cpu,=
 struct x86_segment_descr
     }
=20
     if (sel.index * 8 >=3D limit) {
-        printf("%s: gdt limit\n", __FUNCTION__);
+        printf("%s: gdt limit\n", __func__);
         return false;
     }
     vmx_write_mem(cpu, base + sel.index * 8, desc, sizeof(*desc));
     return true;
 }
=20
-bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_de=
sc, int gate)
+bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_de=
sc,
+                        int gate)
 {
     addr_t base =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_BASE);
     uint32_t limit =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_LIMIT);
=20
     ZERO_INIT(*idt_desc);
     if (gate * 8 >=3D limit) {
-        printf("%s: idt limit\n", __FUNCTION__);
+        printf("%s: idt limit\n", __func__);
         return false;
     }
=20
@@ -120,7 +127,7 @@ bool x86_is_real(struct CPUState *cpu)
=20
 bool x86_is_v8086(struct CPUState *cpu)
 {
-    return (x86_is_protected(cpu) && (RFLAGS(cpu) & RFLAGS_VM));
+    return x86_is_protected(cpu) && (RFLAGS(cpu) & RFLAGS_VM);
 }
=20
 bool x86_is_long_mode(struct CPUState *cpu)
@@ -153,17 +160,18 @@ addr_t linear_addr(struct CPUState *cpu, addr_t addr,=
 x86_reg_segment seg)
     return vmx_read_segment_base(cpu, seg) + addr;
 }
=20
-addr_t linear_addr_size(struct CPUState *cpu, addr_t addr, int size, x86_r=
eg_segment seg)
+addr_t linear_addr_size(struct CPUState *cpu, addr_t addr, int size,
+                        x86_reg_segment seg)
 {
     switch (size) {
-
case 2: - addr =3D (uint16_t)addr; - break; - case 4: - addr =3D (uint32_t)addr; - break; - default: - break; + case 2: + addr =3D (uint16_t)addr; + break; + case 4: + addr =3D (uint32_t)addr; + break; + default: + break; } return linear_addr(cpu, addr, seg); } diff --git a/target/i386/hvf-utils/x86.h b/target/i386/hvf-utils/x86.h index d433d15ea4..a12044d848 100644 --- a/target/i386/hvf-utils/x86.h +++ b/target/i386/hvf-utils/x86.h @@ -25,26 +25,26 @@ #include "qemu-common.h" #include "x86_flags.h" =20 -// exceptions +/* exceptions */ typedef enum x86_exception { - EXCEPTION_DE, // divide error - EXCEPTION_DB, // debug fault - EXCEPTION_NMI, // non-maskable interrupt - EXCEPTION_BP, // breakpoint trap - EXCEPTION_OF, // overflow trap - EXCEPTION_BR, // boundary range exceeded fault - EXCEPTION_UD, // undefined opcode - EXCEPTION_NM, // device not available - EXCEPTION_DF, // double fault - EXCEPTION_RSVD, // not defined - EXCEPTION_TS, // invalid TSS fault - EXCEPTION_NP, // not present fault - EXCEPTION_GP, // general protection fault - EXCEPTION_PF, // page fault - EXCEPTION_RSVD2, // not defined + EXCEPTION_DE, /* divide error */ + EXCEPTION_DB, /* debug fault */ + EXCEPTION_NMI, /* non-maskable interrupt */ + EXCEPTION_BP, /* breakpoint trap */ + EXCEPTION_OF, /* overflow trap */ + EXCEPTION_BR, /* boundary range exceeded fault */ + EXCEPTION_UD, /* undefined opcode */ + EXCEPTION_NM, /* device not available */ + EXCEPTION_DF, /* double fault */ + EXCEPTION_RSVD, /* not defined */ + EXCEPTION_TS, /* invalid TSS fault */ + EXCEPTION_NP, /* not present fault */ + EXCEPTION_GP, /* general protection fault */ + EXCEPTION_PF, /* page fault */ + EXCEPTION_RSVD2, /* not defined */ } x86_exception; =20 -// general purpose regs +/* general purpose regs */ typedef enum x86_reg_name { REG_RAX =3D 0, REG_RCX =3D 1, @@ -64,7 +64,7 @@ typedef enum x86_reg_name { REG_R15 =3D 15, } x86_reg_name; =20 -// segment regs +/* segment regs */ typedef enum x86_reg_segment { REG_SEG_ES =3D 0, REG_SEG_CS =3D 1, @@ -76,24 +76,23 @@ typedef enum x86_reg_segment { REG_SEG_TR =3D 7, } x86_reg_segment; =20 -typedef struct x86_register -{ +typedef struct x86_register { union { struct { - uint64_t rrx; // full 64 bit + uint64_t rrx; /* full 64 bit */ }; struct { - uint32_t erx; // low 32 bit part + uint32_t erx; /* low 32 bit part */ uint32_t hi32_unused1; }; struct { - uint16_t rx; // low 16 bit part + uint16_t rx; /* low 16 bit part */ uint16_t hi16_unused1; uint32_t hi32_unused2; }; struct { - uint8_t lx; // low 8 bit part - uint8_t hx; // high 8 bit + uint8_t lx; /* low 8 bit part */ + uint8_t hx; /* high 8 bit */ uint16_t hi16_unused2; uint32_t hi32_unused3; }; @@ -120,7 +119,7 @@ typedef enum x86_rflags { RFLAGS_ID =3D (1L << 21), } x86_rflags; =20 -// rflags register +/* rflags register */ typedef struct x86_reg_flags { union { struct { @@ -205,7 +204,7 @@ typedef enum x86_reg_cr4 { CR4_SMEP =3D (1L << 20), } x86_reg_cr4; =20 -// 16 bit Task State Segment +/* 16 bit Task State Segment */ typedef struct x86_tss_segment16 { uint16_t link; uint16_t sp0; @@ -231,9 +230,8 @@ typedef struct x86_tss_segment16 { uint16_t ldtr; } __attribute__((packed)) x86_tss_segment16; =20 -// 32 bit Task State Segment -typedef struct x86_tss_segment32 -{ +/* 32 bit Task State Segment */ +typedef struct x86_tss_segment32 { uint32_t prev_tss; uint32_t esp0; uint32_t ss0; @@ -263,9 +261,8 @@ typedef struct x86_tss_segment32 uint16_t iomap_base; } __attribute__ ((__packed__)) x86_tss_segment32; =20 -// 64 bit Task State Segment 
-typedef struct x86_tss_segment64 -{ +/* 64 bit Task State Segment */ +typedef struct x86_tss_segment64 { uint32_t unused; uint64_t rsp0; uint64_t rsp1; @@ -283,7 +280,7 @@ typedef struct x86_tss_segment64 uint16_t iomap_base; } __attribute__ ((__packed__)) x86_tss_segment64; =20 -// segment descriptors +/* segment descriptors */ typedef struct x86_segment_descriptor { uint64_t limit0:16; uint64_t base0:16; @@ -305,7 +302,8 @@ static inline uint32_t x86_segment_base(x86_segment_des= criptor *desc) return (uint32_t)((desc->base2 << 24) | (desc->base1 << 16) | desc->ba= se0); } =20 -static inline void x86_set_segment_base(x86_segment_descriptor *desc, uint= 32_t base) +static inline void x86_set_segment_base(x86_segment_descriptor *desc, + uint32_t base) { desc->base2 =3D base >> 24; desc->base1 =3D (base >> 16) & 0xff; @@ -315,12 +313,14 @@ static inline void x86_set_segment_base(x86_segment_d= escriptor *desc, uint32_t b static inline uint32_t x86_segment_limit(x86_segment_descriptor *desc) { uint32_t limit =3D (uint32_t)((desc->limit1 << 16) | desc->limit0); - if (desc->g) + if (desc->g) { return (limit << 12) | 0xfff; + } return limit; } =20 -static inline void x86_set_segment_limit(x86_segment_descriptor *desc, uin= t32_t limit) +static inline void x86_set_segment_limit(x86_segment_descriptor *desc, + uint32_t limit) { desc->limit0 =3D limit & 0xffff; desc->limit1 =3D limit >> 16; @@ -356,7 +356,7 @@ typedef struct x68_segment_selector { }; } __attribute__ ((__packed__)) x68_segment_selector; =20 -// Definition of hvf_x86_state is here +/* Definition of hvf_x86_state is here */ struct hvf_x86_state { int hlt; uint64_t init_tsc; @@ -370,7 +370,7 @@ struct hvf_x86_state { struct lazy_flags lflags; struct x86_efer efer; uint8_t mmio_buf[4096]; - uint8_t* apic_page; + uint8_t *apic_page; }; =20 /* @@ -380,7 +380,7 @@ struct hvf_xsave_buf { uint32_t data[1024]; }; =20 -// useful register access macros +/* useful register access macros */ #define RIP(cpu) (cpu->hvf_x86->rip) #define EIP(cpu) ((uint32_t)cpu->hvf_x86->rip) #define RFLAGS(cpu) (cpu->hvf_x86->rflags.rflags) @@ -436,13 +436,18 @@ struct hvf_xsave_buf { #define DH(cpu) RH(cpu, REG_RDX) #define BH(cpu) RH(cpu, REG_RBX) =20 -// deal with GDT/LDT descriptors in memory -bool x86_read_segment_descriptor(struct CPUState *cpu, struct x86_segment_= descriptor *desc, x68_segment_selector sel); -bool x86_write_segment_descriptor(struct CPUState *cpu, struct x86_segment= _descriptor *desc, x68_segment_selector sel); +/* deal with GDT/LDT descriptors in memory */ +bool x86_read_segment_descriptor(struct CPUState *cpu, + struct x86_segment_descriptor *desc, + x68_segment_selector sel); +bool x86_write_segment_descriptor(struct CPUState *cpu, + struct x86_segment_descriptor *desc, + x68_segment_selector sel); =20 -bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_de= sc, int gate); +bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_de= sc, + int gate); =20 -// helpers +/* helpers */ bool x86_is_protected(struct CPUState *cpu); bool x86_is_real(struct CPUState *cpu); bool x86_is_v8086(struct CPUState *cpu); @@ -452,19 +457,20 @@ bool x86_is_paging_mode(struct CPUState *cpu); bool x86_is_pae_enabled(struct CPUState *cpu); =20 addr_t linear_addr(struct CPUState *cpu, addr_t addr, x86_reg_segment seg); -addr_t linear_addr_size(struct CPUState *cpu, addr_t addr, int size, x86_r= eg_segment seg); +addr_t linear_addr_size(struct CPUState *cpu, addr_t addr, int size, + x86_reg_segment seg); addr_t 
linear_rip(struct CPUState *cpu, addr_t rip); =20 static inline uint64_t rdtscp(void) { uint64_t tsc; - __asm__ __volatile__("rdtscp; " // serializing read of tsc - "shl $32,%%rdx; " // shift higher 32 bits stored= in rdx up - "or %%rdx,%%rax" // and or onto rax - : "=3Da"(tsc) // output to tsc variable + __asm__ __volatile__("rdtscp; " /* serializing read of tsc */ + "shl $32,%%rdx; " /* shift higher 32 bits stored= in rdx up */ + "or %%rdx,%%rax" /* and or onto rax */ + : "=3Da"(tsc) /* output to tsc variable */ : - : "%rcx", "%rdx"); // rcx and rdx are clobbered - =20 + : "%rcx", "%rdx"); /* rcx and rdx are clobbered */ + return tsc; } =20 diff --git a/target/i386/hvf-utils/x86_cpuid.c b/target/i386/hvf-utils/x86_= cpuid.c index 4abeb5c2da..fe968cb638 100644 --- a/target/i386/hvf-utils/x86_cpuid.c +++ b/target/i386/hvf-utils/x86_cpuid.c @@ -41,10 +41,10 @@ struct x86_cpuid builtin_cpus[] =3D { .model =3D 3, .stepping =3D 3, .features =3D PPRO_FEATURES, - .ext_features =3D /*CPUID_EXT_SSE3 |*/ CPUID_EXT_POPCNT, CPUID_MTR= R | CPUID_CLFLUSH, - CPUID_PSE36, + .ext_features =3D /*CPUID_EXT_SSE3 |*/ CPUID_EXT_POPCNT, CPUID_MTR= R | + CPUID_CLFLUSH, CPUID_PSE36, .ext2_features =3D CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2= _NX, - .ext3_features =3D 0,//CPUID_EXT3_LAHF_LM, + .ext3_features =3D 0, /* CPUID_EXT3_LAHF_LM, */ .xlevel =3D 0x80000004, .model_id =3D "vmx32", }, @@ -92,14 +92,15 @@ struct x86_cpuid builtin_cpus[] =3D { }, }; =20 -static struct x86_cpuid *_cpuid =3D NULL; +static struct x86_cpuid *_cpuid; =20 -void init_cpuid(struct CPUState* cpu) +void init_cpuid(struct CPUState *cpu) { - _cpuid =3D &builtin_cpus[2]; // core2duo + _cpuid =3D &builtin_cpus[2]; /* core2duo */ } =20 -void get_cpuid_func(struct CPUState* cpu, int func, int cnt, uint32_t *eax= , uint32_t *ebx, uint32_t *ecx, uint32_t *edx) +void get_cpuid_func(struct CPUState *cpu, int func, int cnt, uint32_t *eax, + uint32_t *ebx, uint32_t *ecx, uint32_t *edx) { uint32_t h_rax, h_rbx, h_rcx, h_rdx; host_cpuid(func, cnt, &h_rax, &h_rbx, &h_rcx, &h_rdx); @@ -107,164 +108,172 @@ void get_cpuid_func(struct CPUState* cpu, int func,= int cnt, uint32_t *eax, uint =20 =20 *eax =3D *ebx =3D *ecx =3D *edx =3D 0; - switch(func) { - case 0: - *eax =3D _cpuid->level; - *ebx =3D _cpuid->vendor1; - *edx =3D _cpuid->vendor2; - *ecx =3D _cpuid->vendor3; - break; - case 1: - *eax =3D h_rax;//_cpuid->stepping | (_cpuid->model << 3) | (_c= puid->family << 6); - *ebx =3D (apic_id << 24) | (h_rbx & 0x00ffffff); - *ecx =3D h_rcx; - *edx =3D h_rdx; + switch (func) { + case 0: + *eax =3D _cpuid->level; + *ebx =3D _cpuid->vendor1; + *edx =3D _cpuid->vendor2; + *ecx =3D _cpuid->vendor3; + break; + case 1: + *eax =3D h_rax;/*_cpuid->stepping | (_cpuid->model << 3) | + (_cpuid->family << 6); */ + *ebx =3D (apic_id << 24) | (h_rbx & 0x00ffffff); + *ecx =3D h_rcx; + *edx =3D h_rdx; =20 - if (cpu->nr_cores * cpu->nr_threads > 1) { - *ebx |=3D (cpu->nr_cores * cpu->nr_threads) << 16; - *edx |=3D 1 << 28; /* Enable Hyper-Threading */ - } + if (cpu->nr_cores * cpu->nr_threads > 1) { + *ebx |=3D (cpu->nr_cores * cpu->nr_threads) << 16; + *edx |=3D 1 << 28; /* Enable Hyper-Threading */ + } =20 - *ecx =3D *ecx & ~(CPUID_EXT_OSXSAVE | CPUID_EXT_MONITOR | CPUI= D_EXT_X2APIC | - CPUID_EXT_VMX | CPUID_EXT_TSC_DEADLINE_TIMER | CPU= ID_EXT_TM2 | CPUID_EXT_PCID | - CPUID_EXT_EST | CPUID_EXT_SSE42 | CPUID_EXT_SSE41); - *ecx |=3D CPUID_EXT_HYPERVISOR; - break; - case 2: - /* cache info: needed for Pentium Pro compatibility */ - *eax =3D h_rax; - *ebx =3D h_rbx; 
- *ecx =3D h_rcx; - *edx =3D h_rdx; - break; - case 4: - /* cache info: needed for Core compatibility */ - *eax =3D h_rax; - *ebx =3D h_rbx; - *ecx =3D h_rcx; - *edx =3D h_rdx; - break; - case 5: - /* mwait info: needed for Core compatibility */ - *eax =3D h_rax; - *ebx =3D h_rbx; - *ecx =3D h_rcx; - *edx =3D h_rdx; - break; - case 6: - /* Thermal and Power Leaf */ - *eax =3D 0; - *ebx =3D 0; - *ecx =3D 0; - *edx =3D 0; - break; - case 7: - *eax =3D h_rax; - *ebx =3D h_rbx & ~(CPUID_7_0_EBX_AVX512F | CPUID_7_0_EBX_AVX51= 2PF | CPUID_7_0_EBX_AVX512ER | CPUID_7_0_EBX_AVX512CD | - CPUID_7_0_EBX_AVX512BW | CPUID_7_0_EBX_AVX512= VL | CPUID_7_0_EBX_MPX | CPUID_7_0_EBX_INVPCID); - *ecx =3D h_rcx & ~(CPUID_7_0_ECX_AVX512BMI); - *edx =3D h_rdx; - break; - case 9: - /* Direct Cache Access Information Leaf */ - *eax =3D h_rax; - *ebx =3D h_rbx; - *ecx =3D h_rcx; - *edx =3D h_rdx; - break; - case 0xA: - /* Architectural Performance Monitoring Leaf */ - *eax =3D 0; - *ebx =3D 0; - *ecx =3D 0; - *edx =3D 0; - break; - case 0xB: - /* CPU Topology Leaf */ - *eax =3D 0; - *ebx =3D 0; /* Means that we don't support this leaf */ - *ecx =3D 0; - *edx =3D 0; - break; - case 0xD: - *eax =3D h_rax; - if (!cnt) - *eax &=3D (XSTATE_FP_MASK | XSTATE_SSE_MASK | XSTATE_YMM_M= ASK); - if (1 =3D=3D cnt) - *eax &=3D (CPUID_XSAVE_XSAVEOPT | CPUID_XSAVE_XSAVEC); - *ebx =3D h_rbx; - *ecx =3D h_rcx; - *edx =3D h_rdx; - break; - case 0x80000000: - *eax =3D _cpuid->xlevel; - *ebx =3D _cpuid->vendor1; - *edx =3D _cpuid->vendor2; - *ecx =3D _cpuid->vendor3; - break; - case 0x80000001: - *eax =3D h_rax;//_cpuid->stepping | (_cpuid->model << 3) | (_c= puid->family << 6); - *ebx =3D 0; - *ecx =3D _cpuid->ext3_features & h_rcx; - *edx =3D _cpuid->ext2_features & h_rdx; - break; - case 0x80000002: - case 0x80000003: - case 0x80000004: - *eax =3D h_rax; - *ebx =3D h_rbx; - *ecx =3D h_rcx; - *edx =3D h_rdx; - break; - case 0x80000005: - /* cache info (L1 cache) */ - *eax =3D h_rax; - *ebx =3D h_rbx; - *ecx =3D h_rcx; - *edx =3D h_rdx; - break; - case 0x80000006: - /* cache info (L2 cache) */ - *eax =3D h_rax; - *ebx =3D h_rbx; - *ecx =3D h_rcx; - *edx =3D h_rdx; - break; - case 0x80000007: - *eax =3D 0; - *ebx =3D 0; - *ecx =3D 0; - *edx =3D 0; /* Note - We disable invariant TSC (bit 8) in pu= rpose */ - break; - case 0x80000008: - /* virtual & phys address size in low 2 bytes. 
*/ - *eax =3D h_rax; - *ebx =3D 0; - *ecx =3D 0; - *edx =3D 0; - break; - case 0x8000000A: - *eax =3D 0; - *ebx =3D 0; - *ecx =3D 0; - *edx =3D 0; - break; - case 0x80000019: - *eax =3D h_rax; - *ebx =3D h_rbx; - *ecx =3D 0; - *edx =3D 0; - case 0xC0000000: - *eax =3D _cpuid->xlevel2; - *ebx =3D 0; - *ecx =3D 0; - *edx =3D 0; - break; - default: - *eax =3D 0; - *ebx =3D 0; - *ecx =3D 0; - *edx =3D 0; - break; + *ecx =3D *ecx & ~(CPUID_EXT_OSXSAVE | CPUID_EXT_MONITOR | + CPUID_EXT_X2APIC | CPUID_EXT_VMX | + CPUID_EXT_TSC_DEADLINE_TIMER | CPUID_EXT_TM2 | + CPUID_EXT_PCID | CPUID_EXT_EST | CPUID_EXT_SSE42 | + CPUID_EXT_SSE41); + *ecx |=3D CPUID_EXT_HYPERVISOR; + break; + case 2: + /* cache info: needed for Pentium Pro compatibility */ + *eax =3D h_rax; + *ebx =3D h_rbx; + *ecx =3D h_rcx; + *edx =3D h_rdx; + break; + case 4: + /* cache info: needed for Core compatibility */ + *eax =3D h_rax; + *ebx =3D h_rbx; + *ecx =3D h_rcx; + *edx =3D h_rdx; + break; + case 5: + /* mwait info: needed for Core compatibility */ + *eax =3D h_rax; + *ebx =3D h_rbx; + *ecx =3D h_rcx; + *edx =3D h_rdx; + break; + case 6: + /* Thermal and Power Leaf */ + *eax =3D 0; + *ebx =3D 0; + *ecx =3D 0; + *edx =3D 0; + break; + case 7: + *eax =3D h_rax; + *ebx =3D h_rbx & ~(CPUID_7_0_EBX_AVX512F | CPUID_7_0_EBX_AVX512PF | + CPUID_7_0_EBX_AVX512ER | CPUID_7_0_EBX_AVX512CD | + CPUID_7_0_EBX_AVX512BW | CPUID_7_0_EBX_AVX512VL | + CPUID_7_0_EBX_MPX | CPUID_7_0_EBX_INVPCID); + *ecx =3D h_rcx & ~(CPUID_7_0_ECX_AVX512BMI); + *edx =3D h_rdx; + break; + case 9: + /* Direct Cache Access Information Leaf */ + *eax =3D h_rax; + *ebx =3D h_rbx; + *ecx =3D h_rcx; + *edx =3D h_rdx; + break; + case 0xA: + /* Architectural Performance Monitoring Leaf */ + *eax =3D 0; + *ebx =3D 0; + *ecx =3D 0; + *edx =3D 0; + break; + case 0xB: + /* CPU Topology Leaf */ + *eax =3D 0; + *ebx =3D 0; /* Means that we don't support this leaf */ + *ecx =3D 0; + *edx =3D 0; + break; + case 0xD: + *eax =3D h_rax; + if (!cnt) { + *eax &=3D (XSTATE_FP_MASK | XSTATE_SSE_MASK | XSTATE_YMM_MASK); + } + if (1 =3D=3D cnt) { + *eax &=3D (CPUID_XSAVE_XSAVEOPT | CPUID_XSAVE_XSAVEC); + } + *ebx =3D h_rbx; + *ecx =3D h_rcx; + *edx =3D h_rdx; + break; + case 0x80000000: + *eax =3D _cpuid->xlevel; + *ebx =3D _cpuid->vendor1; + *edx =3D _cpuid->vendor2; + *ecx =3D _cpuid->vendor3; + break; + case 0x80000001: + *eax =3D h_rax;/*_cpuid->stepping | (_cpuid->model << 3) | + (_cpuid->family << 6);*/ + *ebx =3D 0; + *ecx =3D _cpuid->ext3_features & h_rcx; + *edx =3D _cpuid->ext2_features & h_rdx; + break; + case 0x80000002: + case 0x80000003: + case 0x80000004: + *eax =3D h_rax; + *ebx =3D h_rbx; + *ecx =3D h_rcx; + *edx =3D h_rdx; + break; + case 0x80000005: + /* cache info (L1 cache) */ + *eax =3D h_rax; + *ebx =3D h_rbx; + *ecx =3D h_rcx; + *edx =3D h_rdx; + break; + case 0x80000006: + /* cache info (L2 cache) */ + *eax =3D h_rax; + *ebx =3D h_rbx; + *ecx =3D h_rcx; + *edx =3D h_rdx; + break; + case 0x80000007: + *eax =3D 0; + *ebx =3D 0; + *ecx =3D 0; + *edx =3D 0; /* Note - We disable invariant TSC (bit 8) in purpos= e */ + break; + case 0x80000008: + /* virtual & phys address size in low 2 bytes. 
*/ + *eax =3D h_rax; + *ebx =3D 0; + *ecx =3D 0; + *edx =3D 0; + break; + case 0x8000000A: + *eax =3D 0; + *ebx =3D 0; + *ecx =3D 0; + *edx =3D 0; + break; + case 0x80000019: + *eax =3D h_rax; + *ebx =3D h_rbx; + *ecx =3D 0; + *edx =3D 0; + case 0xC0000000: + *eax =3D _cpuid->xlevel2; + *ebx =3D 0; + *ecx =3D 0; + *edx =3D 0; + break; + default: + *eax =3D 0; + *ebx =3D 0; + *ecx =3D 0; + *edx =3D 0; + break; } } diff --git a/target/i386/hvf-utils/x86_cpuid.h b/target/i386/hvf-utils/x86_= cpuid.h index b84a2f08df..45d27c457a 100644 --- a/target/i386/hvf-utils/x86_cpuid.h +++ b/target/i386/hvf-utils/x86_cpuid.h @@ -44,8 +44,9 @@ struct x86_cpuid { =20 struct CPUState; =20 -void init_cpuid(struct CPUState* cpu); -void get_cpuid_func(struct CPUState *cpu, int func, int cnt, uint32_t *eax= , uint32_t *ebx, uint32_t *ecx, uint32_t *edx); +void init_cpuid(struct CPUState *cpu); +void get_cpuid_func(struct CPUState *cpu, int func, int cnt, uint32_t *eax, + uint32_t *ebx, uint32_t *ecx, uint32_t *edx); =20 #endif /* __CPUID_H__ */ =20 diff --git a/target/i386/hvf-utils/x86_decode.c b/target/i386/hvf-utils/x86= _decode.c index 8deaab11d2..e21d96bc01 100644 --- a/target/i386/hvf-utils/x86_decode.c +++ b/target/i386/hvf-utils/x86_decode.c @@ -29,9 +29,11 @@ =20 static void decode_invalid(CPUState *cpu, struct x86_decode *decode) { - printf("%llx: failed to decode instruction ", cpu->hvf_x86->fetch_rip = - decode->len); - for (int i =3D 0; i < decode->opcode_len; i++) + printf("%llx: failed to decode instruction ", cpu->hvf_x86->fetch_rip - + decode->len); + for (int i =3D 0; i < decode->opcode_len; i++) { printf("%x ", decode->opcode[i]); + } printf("\n"); VM_PANIC("decoder failed\n"); } @@ -39,38 +41,39 @@ static void decode_invalid(CPUState *cpu, struct x86_de= code *decode) uint64_t sign(uint64_t val, int size) { switch (size) { - case 1: - val =3D (int8_t)val; - break; - case 2: - val =3D (int16_t)val; - break; - case 4: - val =3D (int32_t)val; - break; - case 8: - val =3D (int64_t)val; - break; - default: - VM_PANIC_EX("%s invalid size %d\n", __FUNCTION__, size); - break; + case 1: + val =3D (int8_t)val; + break; + case 2: + val =3D (int16_t)val; + break; + case 4: + val =3D (int32_t)val; + break; + case 8: + val =3D (int64_t)val; + break; + default: + VM_PANIC_EX("%s invalid size %d\n", __func__, size); + break; } return val; } =20 -static inline uint64_t decode_bytes(CPUState *cpu, struct x86_decode *deco= de, int size) +static inline uint64_t decode_bytes(CPUState *cpu, struct x86_decode *deco= de, + int size) { addr_t val =3D 0; =20 switch (size) { - case 1: - case 2: - case 4: - case 8: - break; - default: - VM_PANIC_EX("%s invalid size %d\n", __FUNCTION__, size); - break; + case 1: + case 2: + case 4: + case 8: + break; + default: + VM_PANIC_EX("%s invalid size %d\n", __func__, size); + break; } addr_t va =3D linear_rip(cpu, RIP(cpu)) + decode->len; vmx_read_mem(cpu, &val, va, size); @@ -99,68 +102,76 @@ static inline uint64_t decode_qword(CPUState *cpu, str= uct x86_decode *decode) return decode_bytes(cpu, decode, 8); } =20 -static void decode_modrm_rm(CPUState *cpu, struct x86_decode *decode, stru= ct x86_decode_op *op) +static void decode_modrm_rm(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { op->type =3D X86_VAR_RM; } =20 -static void decode_modrm_reg(CPUState *cpu, struct x86_decode *decode, str= uct x86_decode_op *op) +static void decode_modrm_reg(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { op->type =3D X86_VAR_REG; op->reg =3D 
decode->modrm.reg; op->ptr =3D get_reg_ref(cpu, op->reg, decode->rex.r, decode->operand_s= ize); } =20 -static void decode_rax(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op) +static void decode_rax(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { op->type =3D X86_VAR_REG; op->reg =3D REG_RAX; op->ptr =3D get_reg_ref(cpu, op->reg, 0, decode->operand_size); } =20 -static inline void decode_immediate(CPUState *cpu, struct x86_decode *deco= de, struct x86_decode_op *var, int size) +static inline void decode_immediate(CPUState *cpu, struct x86_decode *deco= de, + struct x86_decode_op *var, int size) { var->type =3D X86_VAR_IMMEDIATE; var->size =3D size; switch (size) { - case 1: - var->val =3D decode_byte(cpu, decode); - break; - case 2: - var->val =3D decode_word(cpu, decode); - break; - case 4: - var->val =3D decode_dword(cpu, decode); - break; - case 8: - var->val =3D decode_qword(cpu, decode); - break; - default: - VM_PANIC_EX("bad size %d\n", size); + case 1: + var->val =3D decode_byte(cpu, decode); + break; + case 2: + var->val =3D decode_word(cpu, decode); + break; + case 4: + var->val =3D decode_dword(cpu, decode); + break; + case 8: + var->val =3D decode_qword(cpu, decode); + break; + default: + VM_PANIC_EX("bad size %d\n", size); } } =20 -static void decode_imm8(CPUState *cpu, struct x86_decode *decode, struct x= 86_decode_op *op) +static void decode_imm8(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { decode_immediate(cpu, decode, op, 1); op->type =3D X86_VAR_IMMEDIATE; } =20 -static void decode_imm8_signed(CPUState *cpu, struct x86_decode *decode, s= truct x86_decode_op *op) +static void decode_imm8_signed(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { decode_immediate(cpu, decode, op, 1); op->val =3D sign(op->val, 1); op->type =3D X86_VAR_IMMEDIATE; } =20 -static void decode_imm16(CPUState *cpu, struct x86_decode *decode, struct = x86_decode_op *op) +static void decode_imm16(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { decode_immediate(cpu, decode, op, 2); op->type =3D X86_VAR_IMMEDIATE; } =20 =20 -static void decode_imm(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op) +static void decode_imm(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { if (8 =3D=3D decode->operand_size) { decode_immediate(cpu, decode, op, 4); @@ -171,20 +182,23 @@ static void decode_imm(CPUState *cpu, struct x86_deco= de *decode, struct x86_deco op->type =3D X86_VAR_IMMEDIATE; } =20 -static void decode_imm_signed(CPUState *cpu, struct x86_decode *decode, st= ruct x86_decode_op *op) +static void decode_imm_signed(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { decode_immediate(cpu, decode, op, decode->operand_size); op->val =3D sign(op->val, decode->operand_size); op->type =3D X86_VAR_IMMEDIATE; } =20 -static void decode_imm_1(CPUState *cpu, struct x86_decode *decode, struct = x86_decode_op *op) +static void decode_imm_1(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { op->type =3D X86_VAR_IMMEDIATE; op->val =3D 1; } =20 -static void decode_imm_0(CPUState *cpu, struct x86_decode *decode, struct = x86_decode_op *op) +static void decode_imm_0(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { op->type =3D X86_VAR_IMMEDIATE; op->val =3D 0; @@ -197,24 +211,24 @@ static void decode_pushseg(CPUState *cpu, struct x86_= decode *decode) =20 decode->op[0].type =3D X86_VAR_REG; switch (op) { 
- case 0xe: - decode->op[0].reg =3D REG_SEG_CS; - break; - case 0x16: - decode->op[0].reg =3D REG_SEG_SS; - break; - case 0x1e: - decode->op[0].reg =3D REG_SEG_DS; - break; - case 0x06: - decode->op[0].reg =3D REG_SEG_ES; - break; - case 0xa0: - decode->op[0].reg =3D REG_SEG_FS; - break; - case 0xa8: - decode->op[0].reg =3D REG_SEG_GS; - break; + case 0xe: + decode->op[0].reg =3D REG_SEG_CS; + break; + case 0x16: + decode->op[0].reg =3D REG_SEG_SS; + break; + case 0x1e: + decode->op[0].reg =3D REG_SEG_DS; + break; + case 0x06: + decode->op[0].reg =3D REG_SEG_ES; + break; + case 0xa0: + decode->op[0].reg =3D REG_SEG_FS; + break; + case 0xa8: + decode->op[0].reg =3D REG_SEG_GS; + break; } } =20 @@ -224,24 +238,24 @@ static void decode_popseg(CPUState *cpu, struct x86_d= ecode *decode) =20 decode->op[0].type =3D X86_VAR_REG; switch (op) { - case 0xf: - decode->op[0].reg =3D REG_SEG_CS; - break; - case 0x17: - decode->op[0].reg =3D REG_SEG_SS; - break; - case 0x1f: - decode->op[0].reg =3D REG_SEG_DS; - break; - case 0x07: - decode->op[0].reg =3D REG_SEG_ES; - break; - case 0xa1: - decode->op[0].reg =3D REG_SEG_FS; - break; - case 0xa9: - decode->op[0].reg =3D REG_SEG_GS; - break; + case 0xf: + decode->op[0].reg =3D REG_SEG_CS; + break; + case 0x17: + decode->op[0].reg =3D REG_SEG_SS; + break; + case 0x1f: + decode->op[0].reg =3D REG_SEG_DS; + break; + case 0x07: + decode->op[0].reg =3D REG_SEG_ES; + break; + case 0xa1: + decode->op[0].reg =3D REG_SEG_FS; + break; + case 0xa9: + decode->op[0].reg =3D REG_SEG_GS; + break; } } =20 @@ -249,36 +263,41 @@ static void decode_incgroup(CPUState *cpu, struct x86= _decode *decode) { decode->op[0].type =3D X86_VAR_REG; decode->op[0].reg =3D decode->opcode[0] - 0x40; - decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, decode->operand_size); + decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, + decode->operand_size); } =20 static void decode_decgroup(CPUState *cpu, struct x86_decode *decode) { decode->op[0].type =3D X86_VAR_REG; decode->op[0].reg =3D decode->opcode[0] - 0x48; - decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, decode->operand_size); + decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, + decode->operand_size); } =20 static void decode_incgroup2(CPUState *cpu, struct x86_decode *decode) { - if (!decode->modrm.reg) + if (!decode->modrm.reg) { decode->cmd =3D X86_DECODE_CMD_INC; - else if (1 =3D=3D decode->modrm.reg) + } else if (1 =3D=3D decode->modrm.reg) { decode->cmd =3D X86_DECODE_CMD_DEC; + } } =20 static void decode_pushgroup(CPUState *cpu, struct x86_decode *decode) { decode->op[0].type =3D X86_VAR_REG; decode->op[0].reg =3D decode->opcode[0] - 0x50; - decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, decode->operand_size); + decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, + decode->operand_size); } =20 static void decode_popgroup(CPUState *cpu, struct x86_decode *decode) { decode->op[0].type =3D X86_VAR_REG; decode->op[0].reg =3D decode->opcode[0] - 0x58; - decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, decode->operand_size); + decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, + decode->operand_size); } =20 static void decode_jxx(CPUState *cpu, struct x86_decode *decode) @@ -340,18 +359,18 @@ static void decode_f7group(CPUState *cpu, struct x86_= decode *decode) decode_modrm_rm(cpu, decode, &decode->op[0]); =20 switch 
(decode->modrm.reg) { - case 0: - case 1: - decode_imm(cpu, decode, &decode->op[1]); - break; - case 2: - break; - case 3: - decode->op[1].type =3D X86_VAR_IMMEDIATE; - decode->op[1].val =3D 0; - break; - default: - break; + case 0: + case 1: + decode_imm(cpu, decode, &decode->op[1]); + break; + case 2: + break; + case 3: + decode->op[1].type =3D X86_VAR_IMMEDIATE; + decode->op[1].val =3D 0; + break; + default: + break; } } =20 @@ -359,18 +378,21 @@ static void decode_xchgroup(CPUState *cpu, struct x86= _decode *decode) { decode->op[0].type =3D X86_VAR_REG; decode->op[0].reg =3D decode->opcode[0] - 0x90; - decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, decode->operand_size); + decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, + decode->operand_size); } =20 static void decode_movgroup(CPUState *cpu, struct x86_decode *decode) { decode->op[0].type =3D X86_VAR_REG; decode->op[0].reg =3D decode->opcode[0] - 0xb8; - decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, decode->operand_size); + decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, + decode->operand_size); decode_immediate(cpu, decode, &decode->op[1], decode->operand_size); } =20 -static void fetch_moffs(CPUState *cpu, struct x86_decode *decode, struct x= 86_decode_op *op) +static void fetch_moffs(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { op->type =3D X86_VAR_OFFSET; op->ptr =3D decode_bytes(cpu, decode, decode->addressing_size); @@ -380,11 +402,13 @@ static void decode_movgroup8(CPUState *cpu, struct x8= 6_decode *decode) { decode->op[0].type =3D X86_VAR_REG; decode->op[0].reg =3D decode->opcode[0] - 0xb0; - decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, decode->operand_size); + decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, + decode->operand_size); decode_immediate(cpu, decode, &decode->op[1], decode->operand_size); } =20 -static void decode_rcx(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op) +static void decode_rcx(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { op->type =3D X86_VAR_REG; op->reg =3D REG_RCX; @@ -396,10 +420,14 @@ struct decode_tbl { enum x86_decode_cmd cmd; uint8_t operand_size; bool is_modrm; - void (*decode_op1)(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op1); - void (*decode_op2)(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op2); - void (*decode_op3)(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op3); - void (*decode_op4)(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op4); + void (*decode_op1)(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op1); + void (*decode_op2)(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op2); + void (*decode_op3)(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op3); + void (*decode_op4)(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op4); void (*decode_postfix)(CPUState *cpu, struct x86_decode *decode); addr_t flags_mask; }; @@ -412,13 +440,16 @@ struct decode_x87_tbl { uint8_t operand_size; bool rev; bool pop; - void (*decode_op1)(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op1); - void (*decode_op2)(CPUState *cpu, struct x86_decode *decode, struct x8= 6_decode_op *op2); + void (*decode_op1)(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op1); + void 
(*decode_op2)(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op2); void (*decode_postfix)(CPUState *cpu, struct x86_decode *decode); addr_t flags_mask; }; =20 -struct decode_tbl invl_inst =3D {0x0, 0, 0, false, NULL, NULL, NULL, NULL,= decode_invalid}; +struct decode_tbl invl_inst =3D {0x0, 0, 0, false, NULL, NULL, NULL, NULL, + decode_invalid}; =20 struct decode_tbl _decode_tbl1[255]; struct decode_tbl _decode_tbl2[255]; @@ -430,25 +461,32 @@ static void decode_x87_ins(CPUState *cpu, struct x86_= decode *decode) =20 decode->is_fpu =3D true; int mode =3D decode->modrm.mod =3D=3D 3 ? 1 : 0; - int index =3D ((decode->opcode[0] & 0xf) << 4) | (mode << 3) | decode-= >modrm.reg; - =20 + int index =3D ((decode->opcode[0] & 0xf) << 4) | (mode << 3) | + decode->modrm.reg; + decoder =3D &_decode_tbl3[index]; =20 decode->cmd =3D decoder->cmd; - if (decoder->operand_size) + if (decoder->operand_size) { decode->operand_size =3D decoder->operand_size; + } decode->flags_mask =3D decoder->flags_mask; decode->fpop_stack =3D decoder->pop; decode->frev =3D decoder->rev; =20 - if (decoder->decode_op1) + if (decoder->decode_op1) { decoder->decode_op1(cpu, decode, &decode->op[0]); - if (decoder->decode_op2) + } + if (decoder->decode_op2) { decoder->decode_op2(cpu, decode, &decode->op[1]); - if (decoder->decode_postfix) + } + if (decoder->decode_postfix) { decoder->decode_postfix(cpu, decode); - =20 - VM_PANIC_ON_EX(!decode->cmd, "x87 opcode %x %x (%x %x) not decoded\n",= decode->opcode[0], decode->modrm.modrm, decoder->modrm_reg, decoder->modrm= _mod); + } + + VM_PANIC_ON_EX(!decode->cmd, "x87 opcode %x %x (%x %x) not decoded\n", + decode->opcode[0], decode->modrm.modrm, decoder->modrm_= reg, + decoder->modrm_mod); } =20 static void decode_ffgroup(CPUState *cpu, struct x86_decode *decode) @@ -465,8 +503,9 @@ static void decode_ffgroup(CPUState *cpu, struct x86_de= code *decode) X86_DECODE_CMD_INVL }; decode->cmd =3D group[decode->modrm.reg]; - if (decode->modrm.reg > 2) + if (decode->modrm.reg > 2) { decode->flags_mask =3D 0; + } } =20 static void decode_sldtgroup(CPUState *cpu, struct x86_decode *decode) @@ -482,7 +521,8 @@ static void decode_sldtgroup(CPUState *cpu, struct x86_= decode *decode) X86_DECODE_CMD_INVL }; decode->cmd =3D group[decode->modrm.reg]; - printf("%llx: decode_sldtgroup: %d\n", cpu->hvf_x86->fetch_rip, decode= ->modrm.reg); + printf("%llx: decode_sldtgroup: %d\n", cpu->hvf_x86->fetch_rip, + decode->modrm.reg); } =20 static void decode_lidtgroup(CPUState *cpu, struct x86_decode *decode) @@ -524,28 +564,34 @@ static void decode_x87_general(CPUState *cpu, struct = x86_decode *decode) decode->is_fpu =3D true; } =20 -static void decode_x87_modrm_floatp(CPUState *cpu, struct x86_decode *deco= de, struct x86_decode_op *op) +static void decode_x87_modrm_floatp(CPUState *cpu, struct x86_decode *deco= de, + struct x86_decode_op *op) { op->type =3D X87_VAR_FLOATP; } =20 -static void decode_x87_modrm_intp(CPUState *cpu, struct x86_decode *decode= , struct x86_decode_op *op) +static void decode_x87_modrm_intp(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { op->type =3D X87_VAR_INTP; } =20 -static void decode_x87_modrm_bytep(CPUState *cpu, struct x86_decode *decod= e, struct x86_decode_op *op) +static void decode_x87_modrm_bytep(CPUState *cpu, struct x86_decode *decod= e, + struct x86_decode_op *op) { op->type =3D X87_VAR_BYTEP; } =20 -static void decode_x87_modrm_st0(CPUState *cpu, struct x86_decode *decode,= struct x86_decode_op *op) +static void 
decode_x87_modrm_st0(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { op->type =3D X87_VAR_REG; op->reg =3D 0; } =20 -static void decode_decode_x87_modrm_st0(CPUState *cpu, struct x86_decode *= decode, struct x86_decode_op *op) +static void decode_decode_x87_modrm_st0(CPUState *cpu, + struct x86_decode *decode, + struct x86_decode_op *op) { op->type =3D X87_VAR_REG; op->reg =3D decode->modrm.modrm & 7; @@ -556,35 +602,35 @@ static void decode_aegroup(CPUState *cpu, struct x86_= decode *decode) { decode->is_fpu =3D true; switch (decode->modrm.reg) { - case 0: - decode->cmd =3D X86_DECODE_CMD_FXSAVE; - decode_x87_modrm_bytep(cpu, decode, &decode->op[0]); - break; - case 1: - decode_x87_modrm_bytep(cpu, decode, &decode->op[0]); - decode->cmd =3D X86_DECODE_CMD_FXRSTOR; - break; - case 5: - if (decode->modrm.modrm =3D=3D 0xe8) { - decode->cmd =3D X86_DECODE_CMD_LFENCE; - } else { - VM_PANIC("xrstor"); - } - break; - case 6: - VM_PANIC_ON(decode->modrm.modrm !=3D 0xf0); - decode->cmd =3D X86_DECODE_CMD_MFENCE; - break; - case 7: - if (decode->modrm.modrm =3D=3D 0xf8) { - decode->cmd =3D X86_DECODE_CMD_SFENCE; - } else { - decode->cmd =3D X86_DECODE_CMD_CLFLUSH; - } - break; - default: - VM_PANIC_ON_EX(1, "0xae: reg %d\n", decode->modrm.reg); - break; + case 0: + decode->cmd =3D X86_DECODE_CMD_FXSAVE; + decode_x87_modrm_bytep(cpu, decode, &decode->op[0]); + break; + case 1: + decode_x87_modrm_bytep(cpu, decode, &decode->op[0]); + decode->cmd =3D X86_DECODE_CMD_FXRSTOR; + break; + case 5: + if (decode->modrm.modrm =3D=3D 0xe8) { + decode->cmd =3D X86_DECODE_CMD_LFENCE; + } else { + VM_PANIC("xrstor"); + } + break; + case 6: + VM_PANIC_ON(decode->modrm.modrm !=3D 0xf0); + decode->cmd =3D X86_DECODE_CMD_MFENCE; + break; + case 7: + if (decode->modrm.modrm =3D=3D 0xf8) { + decode->cmd =3D X86_DECODE_CMD_SFENCE; + } else { + decode->cmd =3D X86_DECODE_CMD_CLFLUSH; + } + break; + default: + VM_PANIC_ON_EX(1, "0xae: reg %d\n", decode->modrm.reg); + break; } } =20 @@ -592,568 +638,1003 @@ static void decode_bswap(CPUState *cpu, struct x86= _decode *decode) { decode->op[0].type =3D X86_VAR_REG; decode->op[0].reg =3D decode->opcode[1] - 0xc8; - decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, decode->operand_size); + decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, + decode->operand_size); } =20 static void decode_d9_4(CPUState *cpu, struct x86_decode *decode) { - switch(decode->modrm.modrm) { - case 0xe0: - // FCHS - decode->cmd =3D X86_DECODE_CMD_FCHS; - break; - case 0xe1: - decode->cmd =3D X86_DECODE_CMD_FABS; - break; - case 0xe4: - VM_PANIC_ON_EX(1, "FTST"); - break; - case 0xe5: - // FXAM - decode->cmd =3D X86_DECODE_CMD_FXAM; - break; - default: - VM_PANIC_ON_EX(1, "FLDENV"); - break; + switch (decode->modrm.modrm) { + case 0xe0: + /* FCHS */ + decode->cmd =3D X86_DECODE_CMD_FCHS; + break; + case 0xe1: + decode->cmd =3D X86_DECODE_CMD_FABS; + break; + case 0xe4: + VM_PANIC_ON_EX(1, "FTST"); + break; + case 0xe5: + /* FXAM */ + decode->cmd =3D X86_DECODE_CMD_FXAM; + break; + default: + VM_PANIC_ON_EX(1, "FLDENV"); + break; } } =20 static void decode_db_4(CPUState *cpu, struct x86_decode *decode) { switch (decode->modrm.modrm) { - case 0xe0: - VM_PANIC_ON_EX(1, "unhandled FNENI: %x %x\n", decode->opcode[0= ], decode->modrm.modrm); - break; - case 0xe1: - VM_PANIC_ON_EX(1, "unhandled FNDISI: %x %x\n", decode->opcode[= 0], decode->modrm.modrm); - break; - case 0xe2: - VM_PANIC_ON_EX(1, "unhandled FCLEX: %x %x\n", 
decode->opcode[0= ], decode->modrm.modrm); - break; - case 0xe3: - decode->cmd =3D X86_DECODE_CMD_FNINIT; - break; - case 0xe4: - decode->cmd =3D X86_DECODE_CMD_FNSETPM; - break; - default: - VM_PANIC_ON_EX(1, "unhandled fpu opcode: %x %x\n", decode->opc= ode[0], decode->modrm.modrm); - break; + case 0xe0: + VM_PANIC_ON_EX(1, "unhandled FNENI: %x %x\n", decode->opcode[0], + decode->modrm.modrm); + break; + case 0xe1: + VM_PANIC_ON_EX(1, "unhandled FNDISI: %x %x\n", decode->opcode[0], + decode->modrm.modrm); + break; + case 0xe2: + VM_PANIC_ON_EX(1, "unhandled FCLEX: %x %x\n", decode->opcode[0], + decode->modrm.modrm); + break; + case 0xe3: + decode->cmd =3D X86_DECODE_CMD_FNINIT; + break; + case 0xe4: + decode->cmd =3D X86_DECODE_CMD_FNSETPM; + break; + default: + VM_PANIC_ON_EX(1, "unhandled fpu opcode: %x %x\n", decode->opcode[= 0], + decode->modrm.modrm); + break; } } =20 =20 #define RFLAGS_MASK_NONE 0 -#define RFLAGS_MASK_OSZAPC (RFLAGS_OF | RFLAGS_SF | RFLAGS_ZF | RFLAGS_AF= | RFLAGS_PF | RFLAGS_CF) -#define RFLAGS_MASK_LAHF (RFLAGS_SF | RFLAGS_ZF | RFLAGS_AF | RFLAGS_PF= | RFLAGS_CF) +#define RFLAGS_MASK_OSZAPC (RFLAGS_OF | RFLAGS_SF | RFLAGS_ZF | RFLAGS_AF= | \ + RFLAGS_PF | RFLAGS_CF) +#define RFLAGS_MASK_LAHF (RFLAGS_SF | RFLAGS_ZF | RFLAGS_AF | RFLAGS_PF= | \ + RFLAGS_CF) #define RFLAGS_MASK_CF (RFLAGS_CF) #define RFLAGS_MASK_IF (RFLAGS_IF) #define RFLAGS_MASK_TF (RFLAGS_TF) #define RFLAGS_MASK_DF (RFLAGS_DF) #define RFLAGS_MASK_ZF (RFLAGS_ZF) =20 -struct decode_tbl _1op_inst[] =3D -{ - {0x0, X86_DECODE_CMD_ADD, 1, true, decode_modrm_rm, decode_modrm_reg, = NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x1, X86_DECODE_CMD_ADD, 0, true, decode_modrm_rm, decode_modrm_reg, = NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x2, X86_DECODE_CMD_ADD, 1, true, decode_modrm_reg, decode_modrm_rm, = NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x3, X86_DECODE_CMD_ADD, 0, true, decode_modrm_reg, decode_modrm_rm, = NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x4, X86_DECODE_CMD_ADD, 1, false, decode_rax, decode_imm8, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, - {0x5, X86_DECODE_CMD_ADD, 0, false, decode_rax, decode_imm, NULL, NULL= , NULL, RFLAGS_MASK_OSZAPC}, - {0x6, X86_DECODE_CMD_PUSH_SEG, 0, false, false, NULL, NULL, NULL, deco= de_pushseg, RFLAGS_MASK_NONE}, - {0x7, X86_DECODE_CMD_POP_SEG, 0, false, false, NULL, NULL, NULL, decod= e_popseg, RFLAGS_MASK_NONE}, - {0x8, X86_DECODE_CMD_OR, 1, true, decode_modrm_rm, decode_modrm_reg, N= ULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x9, X86_DECODE_CMD_OR, 0, true, decode_modrm_rm, decode_modrm_reg, N= ULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0xa, X86_DECODE_CMD_OR, 1, true, decode_modrm_reg, decode_modrm_rm, N= ULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0xb, X86_DECODE_CMD_OR, 0, true, decode_modrm_reg, decode_modrm_rm, N= ULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0xc, X86_DECODE_CMD_OR, 1, false, decode_rax, decode_imm8, NULL, NULL= , NULL, RFLAGS_MASK_OSZAPC}, - {0xd, X86_DECODE_CMD_OR, 0, false, decode_rax, decode_imm, NULL, NULL,= NULL, RFLAGS_MASK_OSZAPC}, - =20 - {0xe, X86_DECODE_CMD_PUSH_SEG, 0, false, false, NULL, NULL, NULL, deco= de_pushseg, RFLAGS_MASK_NONE}, - {0xf, X86_DECODE_CMD_POP_SEG, 0, false, false, NULL, NULL, NULL, decod= e_popseg, RFLAGS_MASK_NONE}, - =20 - {0x10, X86_DECODE_CMD_ADC, 1, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x11, X86_DECODE_CMD_ADC, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x12, X86_DECODE_CMD_ADC, 1, true, decode_modrm_reg, 
decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x13, X86_DECODE_CMD_ADC, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x14, X86_DECODE_CMD_ADC, 1, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, - {0x15, X86_DECODE_CMD_ADC, 0, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, - =20 - {0x16, X86_DECODE_CMD_PUSH_SEG, 0, false, false, NULL, NULL, NULL, dec= ode_pushseg, RFLAGS_MASK_NONE}, - {0x17, X86_DECODE_CMD_POP_SEG, 0, false, false, NULL, NULL, NULL, deco= de_popseg, RFLAGS_MASK_NONE}, - =20 - {0x18, X86_DECODE_CMD_SBB, 1, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x19, X86_DECODE_CMD_SBB, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x1a, X86_DECODE_CMD_SBB, 1, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x1b, X86_DECODE_CMD_SBB, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x1c, X86_DECODE_CMD_SBB, 1, false, decode_rax, decode_imm8, NULL, NU= LL, NULL, RFLAGS_MASK_OSZAPC}, - {0x1d, X86_DECODE_CMD_SBB, 0, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, - =20 - {0x1e, X86_DECODE_CMD_PUSH_SEG, 0, false, false, NULL, NULL, NULL, dec= ode_pushseg, RFLAGS_MASK_NONE}, - {0x1f, X86_DECODE_CMD_POP_SEG, 0, false, false, NULL, NULL, NULL, deco= de_popseg, RFLAGS_MASK_NONE}, - =20 - {0x20, X86_DECODE_CMD_AND, 1, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x21, X86_DECODE_CMD_AND, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x22, X86_DECODE_CMD_AND, 1, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x23, X86_DECODE_CMD_AND, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x24, X86_DECODE_CMD_AND, 1, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, - {0x25, X86_DECODE_CMD_AND, 0, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, - {0x28, X86_DECODE_CMD_SUB, 1, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x29, X86_DECODE_CMD_SUB, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x2a, X86_DECODE_CMD_SUB, 1, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x2b, X86_DECODE_CMD_SUB, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x2c, X86_DECODE_CMD_SUB, 1, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, - {0x2d, X86_DECODE_CMD_SUB, 0, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, - {0x2f, X86_DECODE_CMD_DAS, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_OSZAPC}, - {0x30, X86_DECODE_CMD_XOR, 1, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x31, X86_DECODE_CMD_XOR, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x32, X86_DECODE_CMD_XOR, 1, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x33, X86_DECODE_CMD_XOR, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x34, X86_DECODE_CMD_XOR, 1, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, - {0x35, X86_DECODE_CMD_XOR, 0, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, 
- =20 - {0x38, X86_DECODE_CMD_CMP, 1, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x39, X86_DECODE_CMD_CMP, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x3a, X86_DECODE_CMD_CMP, 1, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x3b, X86_DECODE_CMD_CMP, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x3c, X86_DECODE_CMD_CMP, 1, false, decode_rax, decode_imm8, NULL, NU= LL, NULL, RFLAGS_MASK_OSZAPC}, - {0x3d, X86_DECODE_CMD_CMP, 0, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, - =20 - {0x3f, X86_DECODE_CMD_AAS, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_OSZAPC}, - =20 - {0x40, X86_DECODE_CMD_INC, 0, false, NULL, NULL, NULL, NULL, decode_in= cgroup, RFLAGS_MASK_OSZAPC}, - {0x41, X86_DECODE_CMD_INC, 0, false, NULL, NULL, NULL, NULL, decode_in= cgroup, RFLAGS_MASK_OSZAPC}, - {0x42, X86_DECODE_CMD_INC, 0, false, NULL, NULL, NULL, NULL, decode_in= cgroup, RFLAGS_MASK_OSZAPC}, - {0x43, X86_DECODE_CMD_INC, 0, false, NULL, NULL, NULL, NULL, decode_in= cgroup, RFLAGS_MASK_OSZAPC}, - {0x44, X86_DECODE_CMD_INC, 0, false, NULL, NULL, NULL, NULL, decode_in= cgroup, RFLAGS_MASK_OSZAPC}, - {0x45, X86_DECODE_CMD_INC, 0, false, NULL, NULL, NULL, NULL, decode_in= cgroup, RFLAGS_MASK_OSZAPC}, - {0x46, X86_DECODE_CMD_INC, 0, false, NULL, NULL, NULL, NULL, decode_in= cgroup, RFLAGS_MASK_OSZAPC}, - {0x47, X86_DECODE_CMD_INC, 0, false, NULL, NULL, NULL, NULL, decode_in= cgroup, RFLAGS_MASK_OSZAPC}, - =20 - {0x48, X86_DECODE_CMD_DEC, 0, false, NULL, NULL, NULL, NULL, decode_de= cgroup, RFLAGS_MASK_OSZAPC}, - {0x49, X86_DECODE_CMD_DEC, 0, false, NULL, NULL, NULL, NULL, decode_de= cgroup, RFLAGS_MASK_OSZAPC}, - {0x4a, X86_DECODE_CMD_DEC, 0, false, NULL, NULL, NULL, NULL, decode_de= cgroup, RFLAGS_MASK_OSZAPC}, - {0x4b, X86_DECODE_CMD_DEC, 0, false, NULL, NULL, NULL, NULL, decode_de= cgroup, RFLAGS_MASK_OSZAPC}, - {0x4c, X86_DECODE_CMD_DEC, 0, false, NULL, NULL, NULL, NULL, decode_de= cgroup, RFLAGS_MASK_OSZAPC}, - {0x4d, X86_DECODE_CMD_DEC, 0, false, NULL, NULL, NULL, NULL, decode_de= cgroup, RFLAGS_MASK_OSZAPC}, - {0x4e, X86_DECODE_CMD_DEC, 0, false, NULL, NULL, NULL, NULL, decode_de= cgroup, RFLAGS_MASK_OSZAPC}, - {0x4f, X86_DECODE_CMD_DEC, 0, false, NULL, NULL, NULL, NULL, decode_de= cgroup, RFLAGS_MASK_OSZAPC}, - =20 - {0x50, X86_DECODE_CMD_PUSH, 0, false, NULL, NULL, NULL, NULL, decode_p= ushgroup, RFLAGS_MASK_NONE}, - {0x51, X86_DECODE_CMD_PUSH, 0, false, NULL, NULL, NULL, NULL, decode_p= ushgroup, RFLAGS_MASK_NONE}, - {0x52, X86_DECODE_CMD_PUSH, 0, false, NULL, NULL, NULL, NULL, decode_p= ushgroup, RFLAGS_MASK_NONE}, - {0x53, X86_DECODE_CMD_PUSH, 0, false, NULL, NULL, NULL, NULL, decode_p= ushgroup, RFLAGS_MASK_NONE}, - {0x54, X86_DECODE_CMD_PUSH, 0, false, NULL, NULL, NULL, NULL, decode_p= ushgroup, RFLAGS_MASK_NONE}, - {0x55, X86_DECODE_CMD_PUSH, 0, false, NULL, NULL, NULL, NULL, decode_p= ushgroup, RFLAGS_MASK_NONE}, - {0x56, X86_DECODE_CMD_PUSH, 0, false, NULL, NULL, NULL, NULL, decode_p= ushgroup, RFLAGS_MASK_NONE}, - {0x57, X86_DECODE_CMD_PUSH, 0, false, NULL, NULL, NULL, NULL, decode_p= ushgroup, RFLAGS_MASK_NONE}, - =20 - {0x58, X86_DECODE_CMD_POP, 0, false, NULL, NULL, NULL, NULL, decode_po= pgroup, RFLAGS_MASK_NONE}, - {0x59, X86_DECODE_CMD_POP, 0, false, NULL, NULL, NULL, NULL, decode_po= pgroup, RFLAGS_MASK_NONE}, - {0x5a, X86_DECODE_CMD_POP, 0, false, NULL, NULL, NULL, NULL, decode_po= pgroup, 
RFLAGS_MASK_NONE}, - {0x5b, X86_DECODE_CMD_POP, 0, false, NULL, NULL, NULL, NULL, decode_po= pgroup, RFLAGS_MASK_NONE}, - {0x5c, X86_DECODE_CMD_POP, 0, false, NULL, NULL, NULL, NULL, decode_po= pgroup, RFLAGS_MASK_NONE}, - {0x5d, X86_DECODE_CMD_POP, 0, false, NULL, NULL, NULL, NULL, decode_po= pgroup, RFLAGS_MASK_NONE}, - {0x5e, X86_DECODE_CMD_POP, 0, false, NULL, NULL, NULL, NULL, decode_po= pgroup, RFLAGS_MASK_NONE}, - {0x5f, X86_DECODE_CMD_POP, 0, false, NULL, NULL, NULL, NULL, decode_po= pgroup, RFLAGS_MASK_NONE}, - =20 - {0x60, X86_DECODE_CMD_PUSHA, 0, false, NULL, NULL, NULL, NULL, NULL, R= FLAGS_MASK_NONE}, - {0x61, X86_DECODE_CMD_POPA, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, - =20 - {0x68, X86_DECODE_CMD_PUSH, 0, false, decode_imm, NULL, NULL, NULL, NU= LL, RFLAGS_MASK_NONE}, - {0x6a, X86_DECODE_CMD_PUSH, 0, false, decode_imm8_signed, NULL, NULL, = NULL, NULL, RFLAGS_MASK_NONE}, - {0x69, X86_DECODE_CMD_IMUL_3, 0, true, decode_modrm_reg, decode_modrm_= rm, decode_imm, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x6b, X86_DECODE_CMD_IMUL_3, 0, true, decode_modrm_reg, decode_modrm_= rm, decode_imm8_signed, NULL, NULL, RFLAGS_MASK_OSZAPC}, - =20 - {0x6c, X86_DECODE_CMD_INS, 1, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_NONE}, - {0x6d, X86_DECODE_CMD_INS, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_NONE}, - {0x6e, X86_DECODE_CMD_OUTS, 1, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, - {0x6f, X86_DECODE_CMD_OUTS, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, - =20 - {0x70, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x71, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x72, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x73, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x74, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x75, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x76, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x77, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x78, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x79, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x7a, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x7b, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x7c, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x7d, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x7e, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x7f, X86_DECODE_CMD_JXX, 1, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - =20 - {0x80, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, decode_imm8, NUL= L, NULL, decode_addgroup, RFLAGS_MASK_OSZAPC}, - {0x81, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_imm, NULL= , NULL, decode_addgroup, RFLAGS_MASK_OSZAPC}, - {0x82, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, decode_imm8, NUL= L, NULL, decode_addgroup, RFLAGS_MASK_OSZAPC}, - {0x83, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_imm8_sign= ed, NULL, NULL, 
decode_addgroup, RFLAGS_MASK_OSZAPC}, - {0x84, X86_DECODE_CMD_TST, 1, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x85, X86_DECODE_CMD_TST, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0x86, X86_DECODE_CMD_XCHG, 1, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x87, X86_DECODE_CMD_XCHG, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x88, X86_DECODE_CMD_MOV, 1, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x89, X86_DECODE_CMD_MOV, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x8a, X86_DECODE_CMD_MOV, 1, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x8b, X86_DECODE_CMD_MOV, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x8c, X86_DECODE_CMD_MOV_FROM_SEG, 0, true, decode_modrm_rm, decode_m= odrm_reg, NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x8d, X86_DECODE_CMD_LEA, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x8e, X86_DECODE_CMD_MOV_TO_SEG, 0, true, decode_modrm_reg, decode_mo= drm_rm, NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x8f, X86_DECODE_CMD_POP, 0, true, decode_modrm_rm, NULL, NULL, NULL,= NULL, RFLAGS_MASK_NONE}, - =20 - {0x90, X86_DECODE_CMD_NOP, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_NONE}, - {0x91, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, NULL, NULL, de= code_xchgroup, RFLAGS_MASK_NONE}, - {0x92, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, NULL, NULL, de= code_xchgroup, RFLAGS_MASK_NONE}, - {0x93, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, NULL, NULL, de= code_xchgroup, RFLAGS_MASK_NONE}, - {0x94, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, NULL, NULL, de= code_xchgroup, RFLAGS_MASK_NONE}, - {0x95, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, NULL, NULL, de= code_xchgroup, RFLAGS_MASK_NONE}, - {0x96, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, NULL, NULL, de= code_xchgroup, RFLAGS_MASK_NONE}, - {0x97, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, NULL, NULL, de= code_xchgroup, RFLAGS_MASK_NONE}, - =20 - {0x98, X86_DECODE_CMD_CBW, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_NONE}, - {0x99, X86_DECODE_CMD_CWD, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_NONE}, - =20 - {0x9a, X86_DECODE_CMD_CALL_FAR, 0, false, NULL, NULL, NULL, NULL, deco= de_farjmp, RFLAGS_MASK_NONE}, - =20 - {0x9c, X86_DECODE_CMD_PUSHF, 0, false, NULL, NULL, NULL, NULL, NULL, R= FLAGS_MASK_NONE}, - //{0x9d, X86_DECODE_CMD_POPF, 0, false, NULL, NULL, NULL, NULL, NULL, = RFLAGS_MASK_POPF}, - {0x9e, X86_DECODE_CMD_SAHF, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, - {0x9f, X86_DECODE_CMD_LAHF, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_LAHF}, - =20 - {0xa0, X86_DECODE_CMD_MOV, 1, false, decode_rax, fetch_moffs, NULL, NU= LL, NULL, RFLAGS_MASK_NONE}, - {0xa1, X86_DECODE_CMD_MOV, 0, false, decode_rax, fetch_moffs, NULL, NU= LL, NULL, RFLAGS_MASK_NONE}, - {0xa2, X86_DECODE_CMD_MOV, 1, false, fetch_moffs, decode_rax, NULL, NU= LL, NULL, RFLAGS_MASK_NONE}, - {0xa3, X86_DECODE_CMD_MOV, 0, false, fetch_moffs, decode_rax, NULL, NU= LL, NULL, RFLAGS_MASK_NONE}, - =20 - {0xa4, X86_DECODE_CMD_MOVS, 1, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, - {0xa5, X86_DECODE_CMD_MOVS, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, - {0xa6, X86_DECODE_CMD_CMPS, 1, false, 
NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_OSZAPC}, - {0xa7, X86_DECODE_CMD_CMPS, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_OSZAPC}, - {0xaa, X86_DECODE_CMD_STOS, 1, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, - {0xab, X86_DECODE_CMD_STOS, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, - {0xac, X86_DECODE_CMD_LODS, 1, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, - {0xad, X86_DECODE_CMD_LODS, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, - {0xae, X86_DECODE_CMD_SCAS, 1, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_OSZAPC}, - {0xaf, X86_DECODE_CMD_SCAS, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_OSZAPC}, - =20 - {0xa8, X86_DECODE_CMD_TST, 1, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, - {0xa9, X86_DECODE_CMD_TST, 0, false, decode_rax, decode_imm, NULL, NUL= L, NULL, RFLAGS_MASK_OSZAPC}, - =20 - {0xb0, X86_DECODE_CMD_MOV, 1, false, NULL, NULL, NULL, NULL, decode_mo= vgroup8, RFLAGS_MASK_NONE}, - {0xb1, X86_DECODE_CMD_MOV, 1, false, NULL, NULL, NULL, NULL, decode_mo= vgroup8, RFLAGS_MASK_NONE}, - {0xb2, X86_DECODE_CMD_MOV, 1, false, NULL, NULL, NULL, NULL, decode_mo= vgroup8, RFLAGS_MASK_NONE}, - {0xb3, X86_DECODE_CMD_MOV, 1, false, NULL, NULL, NULL, NULL, decode_mo= vgroup8, RFLAGS_MASK_NONE}, - {0xb4, X86_DECODE_CMD_MOV, 1, false, NULL, NULL, NULL, NULL, decode_mo= vgroup8, RFLAGS_MASK_NONE}, - {0xb5, X86_DECODE_CMD_MOV, 1, false, NULL, NULL, NULL, NULL, decode_mo= vgroup8, RFLAGS_MASK_NONE}, - {0xb6, X86_DECODE_CMD_MOV, 1, false, NULL, NULL, NULL, NULL, decode_mo= vgroup8, RFLAGS_MASK_NONE}, - {0xb7, X86_DECODE_CMD_MOV, 1, false, NULL, NULL, NULL, NULL, decode_mo= vgroup8, RFLAGS_MASK_NONE}, - =20 - {0xb8, X86_DECODE_CMD_MOV, 0, false, NULL, NULL, NULL, NULL, decode_mo= vgroup, RFLAGS_MASK_NONE}, - {0xb9, X86_DECODE_CMD_MOV, 0, false, NULL, NULL, NULL, NULL, decode_mo= vgroup, RFLAGS_MASK_NONE}, - {0xba, X86_DECODE_CMD_MOV, 0, false, NULL, NULL, NULL, NULL, decode_mo= vgroup, RFLAGS_MASK_NONE}, - {0xbb, X86_DECODE_CMD_MOV, 0, false, NULL, NULL, NULL, NULL, decode_mo= vgroup, RFLAGS_MASK_NONE}, - {0xbc, X86_DECODE_CMD_MOV, 0, false, NULL, NULL, NULL, NULL, decode_mo= vgroup, RFLAGS_MASK_NONE}, - {0xbd, X86_DECODE_CMD_MOV, 0, false, NULL, NULL, NULL, NULL, decode_mo= vgroup, RFLAGS_MASK_NONE}, - {0xbe, X86_DECODE_CMD_MOV, 0, false, NULL, NULL, NULL, NULL, decode_mo= vgroup, RFLAGS_MASK_NONE}, - {0xbf, X86_DECODE_CMD_MOV, 0, false, NULL, NULL, NULL, NULL, decode_mo= vgroup, RFLAGS_MASK_NONE}, - =20 - {0xc0, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, decode_imm8, NUL= L, NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, - {0xc1, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_imm8, NUL= L, NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, - =20 - {0xc2, X86_DECODE_RET_NEAR, 0, false, decode_imm16, NULL, NULL, NULL, = NULL, RFLAGS_MASK_NONE}, - {0xc3, X86_DECODE_RET_NEAR, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, - =20 - {0xc4, X86_DECODE_CMD_LES, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0xc5, X86_DECODE_CMD_LDS, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_NONE}, - =20 - {0xc6, X86_DECODE_CMD_MOV, 1, true, decode_modrm_rm, decode_imm8, NULL= , NULL, NULL, RFLAGS_MASK_NONE}, - {0xc7, X86_DECODE_CMD_MOV, 0, true, decode_modrm_rm, decode_imm, NULL,= NULL, NULL, RFLAGS_MASK_NONE}, - =20 - {0xc8, X86_DECODE_CMD_ENTER, 0, false, decode_imm16, decode_imm8, NULL= , NULL, NULL, RFLAGS_MASK_NONE}, - 
{0xc9, X86_DECODE_CMD_LEAVE, 0, false, NULL, NULL, NULL, NULL, NULL, R= FLAGS_MASK_NONE}, - {0xca, X86_DECODE_RET_FAR, 0, false, decode_imm16, NULL, NULL, NULL, N= ULL, RFLAGS_MASK_NONE}, - {0xcb, X86_DECODE_RET_FAR, 0, false, decode_imm_0, NULL, NULL, NULL, N= ULL, RFLAGS_MASK_NONE}, - {0xcd, X86_DECODE_CMD_INT, 0, false, decode_imm8, NULL, NULL, NULL, NU= LL, RFLAGS_MASK_NONE}, - //{0xcf, X86_DECODE_CMD_IRET, 0, false, NULL, NULL, NULL, NULL, NULL, = RFLAGS_MASK_IRET}, - =20 - {0xd0, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, decode_imm_1, NU= LL, NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, - {0xd1, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_imm_1, NU= LL, NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, - {0xd2, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, decode_rcx, NULL= , NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, - {0xd3, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_rcx, NULL= , NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, - =20 - {0xd4, X86_DECODE_CMD_AAM, 0, false, decode_imm8, NULL, NULL, NULL, NU= LL, RFLAGS_MASK_OSZAPC}, - {0xd5, X86_DECODE_CMD_AAD, 0, false, decode_imm8, NULL, NULL, NULL, NU= LL, RFLAGS_MASK_OSZAPC}, - =20 - {0xd7, X86_DECODE_CMD_XLAT, 0, false, NULL, NULL, NULL, NULL, NULL, RF= LAGS_MASK_NONE}, - =20 - {0xd8, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_x8= 7_ins, RFLAGS_MASK_NONE}, - {0xd9, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_x8= 7_ins, RFLAGS_MASK_NONE}, - {0xda, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_x8= 7_ins, RFLAGS_MASK_NONE}, - {0xdb, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_x8= 7_ins, RFLAGS_MASK_NONE}, - {0xdc, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_x8= 7_ins, RFLAGS_MASK_NONE}, - {0xdd, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_x8= 7_ins, RFLAGS_MASK_NONE}, - {0xde, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_x8= 7_ins, RFLAGS_MASK_NONE}, - {0xdf, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_x8= 7_ins, RFLAGS_MASK_NONE}, - =20 - {0xe0, X86_DECODE_CMD_LOOP, 0, false, decode_imm8_signed, NULL, NULL, = NULL, NULL, RFLAGS_MASK_NONE}, - {0xe1, X86_DECODE_CMD_LOOP, 0, false, decode_imm8_signed, NULL, NULL, = NULL, NULL, RFLAGS_MASK_NONE}, - {0xe2, X86_DECODE_CMD_LOOP, 0, false, decode_imm8_signed, NULL, NULL, = NULL, NULL, RFLAGS_MASK_NONE}, - =20 - {0xe3, X86_DECODE_CMD_JCXZ, 1, false, NULL, NULL, NULL, NULL, decode_j= xx, RFLAGS_MASK_NONE}, - =20 - {0xe4, X86_DECODE_CMD_IN, 1, false, decode_imm8, NULL, NULL, NULL, NUL= L, RFLAGS_MASK_NONE}, - {0xe5, X86_DECODE_CMD_IN, 0, false, decode_imm8, NULL, NULL, NULL, NUL= L, RFLAGS_MASK_NONE}, - {0xe6, X86_DECODE_CMD_OUT, 1, false, decode_imm8, NULL, NULL, NULL, NU= LL, RFLAGS_MASK_NONE}, - {0xe7, X86_DECODE_CMD_OUT, 0, false, decode_imm8, NULL, NULL, NULL, NU= LL, RFLAGS_MASK_NONE}, - {0xe8, X86_DECODE_CMD_CALL_NEAR, 0, false, decode_imm_signed, NULL, NU= LL, NULL, NULL, RFLAGS_MASK_NONE}, - {0xe9, X86_DECODE_CMD_JMP_NEAR, 0, false, decode_imm_signed, NULL, NUL= L, NULL, NULL, RFLAGS_MASK_NONE}, - {0xea, X86_DECODE_CMD_JMP_FAR, 0, false, NULL, NULL, NULL, NULL, decod= e_farjmp, RFLAGS_MASK_NONE}, - {0xeb, X86_DECODE_CMD_JMP_NEAR, 1, false, decode_imm8_signed, NULL, NU= LL, NULL, NULL, RFLAGS_MASK_NONE}, - {0xec, X86_DECODE_CMD_IN, 1, false, NULL, NULL, NULL, NULL, NULL, RFLA= GS_MASK_NONE}, - {0xed, X86_DECODE_CMD_IN, 0, false, NULL, NULL, NULL, NULL, NULL, RFLA= GS_MASK_NONE}, - {0xee, X86_DECODE_CMD_OUT, 1, false, NULL, NULL, NULL, 
NULL, NULL, RFL= AGS_MASK_NONE}, - {0xef, X86_DECODE_CMD_OUT, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_NONE}, - =20 - {0xf4, X86_DECODE_CMD_HLT, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_NONE}, - =20 - {0xf5, X86_DECODE_CMD_CMC, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_CF}, - =20 - {0xf6, X86_DECODE_CMD_INVL, 1, true, NULL, NULL, NULL, NULL, decode_f7= group, RFLAGS_MASK_OSZAPC}, - {0xf7, X86_DECODE_CMD_INVL, 0, true, NULL, NULL, NULL, NULL, decode_f7= group, RFLAGS_MASK_OSZAPC}, - =20 - {0xf8, X86_DECODE_CMD_CLC, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_CF}, - {0xf9, X86_DECODE_CMD_STC, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_CF}, - =20 - {0xfa, X86_DECODE_CMD_CLI, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_IF}, - {0xfb, X86_DECODE_CMD_STI, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_IF}, - {0xfc, X86_DECODE_CMD_CLD, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_DF}, - {0xfd, X86_DECODE_CMD_STD, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_DF}, - {0xfe, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, NULL, NULL, NULL= , decode_incgroup2, RFLAGS_MASK_OSZAPC}, - {0xff, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, NULL, NULL, NULL= , decode_ffgroup, RFLAGS_MASK_OSZAPC}, +struct decode_tbl _1op_inst[] =3D { + {0x0, X86_DECODE_CMD_ADD, 1, true, decode_modrm_rm, decode_modrm_reg, = NULL, + NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x1, X86_DECODE_CMD_ADD, 0, true, decode_modrm_rm, decode_modrm_reg, = NULL, + NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x2, X86_DECODE_CMD_ADD, 1, true, decode_modrm_reg, decode_modrm_rm, = NULL, + NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x3, X86_DECODE_CMD_ADD, 0, true, decode_modrm_reg, decode_modrm_rm, = NULL, + NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x4, X86_DECODE_CMD_ADD, 1, false, decode_rax, decode_imm8, NULL, NUL= L, + NULL, RFLAGS_MASK_OSZAPC}, + {0x5, X86_DECODE_CMD_ADD, 0, false, decode_rax, decode_imm, NULL, NULL, + NULL, RFLAGS_MASK_OSZAPC}, + {0x6, X86_DECODE_CMD_PUSH_SEG, 0, false, false, NULL, NULL, NULL, + decode_pushseg, RFLAGS_MASK_NONE}, + {0x7, X86_DECODE_CMD_POP_SEG, 0, false, false, NULL, NULL, NULL, + decode_popseg, RFLAGS_MASK_NONE}, + {0x8, X86_DECODE_CMD_OR, 1, true, decode_modrm_rm, decode_modrm_reg, N= ULL, + NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x9, X86_DECODE_CMD_OR, 0, true, decode_modrm_rm, decode_modrm_reg, N= ULL, + NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xa, X86_DECODE_CMD_OR, 1, true, decode_modrm_reg, decode_modrm_rm, N= ULL, + NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xb, X86_DECODE_CMD_OR, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xc, X86_DECODE_CMD_OR, 1, false, decode_rax, decode_imm8, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xd, X86_DECODE_CMD_OR, 0, false, decode_rax, decode_imm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + + {0xe, X86_DECODE_CMD_PUSH_SEG, 0, false, false, + NULL, NULL, NULL, decode_pushseg, RFLAGS_MASK_NONE}, + {0xf, X86_DECODE_CMD_POP_SEG, 0, false, false, + NULL, NULL, NULL, decode_popseg, RFLAGS_MASK_NONE}, + + {0x10, X86_DECODE_CMD_ADC, 1, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x11, X86_DECODE_CMD_ADC, 0, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x12, X86_DECODE_CMD_ADC, 1, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x13, X86_DECODE_CMD_ADC, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x14, 
X86_DECODE_CMD_ADC, 1, false, decode_rax, decode_imm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x15, X86_DECODE_CMD_ADC, 0, false, decode_rax, decode_imm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + + {0x16, X86_DECODE_CMD_PUSH_SEG, 0, false, false, + NULL, NULL, NULL, decode_pushseg, RFLAGS_MASK_NONE}, + {0x17, X86_DECODE_CMD_POP_SEG, 0, false, false, + NULL, NULL, NULL, decode_popseg, RFLAGS_MASK_NONE}, + + {0x18, X86_DECODE_CMD_SBB, 1, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x19, X86_DECODE_CMD_SBB, 0, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x1a, X86_DECODE_CMD_SBB, 1, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x1b, X86_DECODE_CMD_SBB, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x1c, X86_DECODE_CMD_SBB, 1, false, decode_rax, decode_imm8, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x1d, X86_DECODE_CMD_SBB, 0, false, decode_rax, decode_imm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + + {0x1e, X86_DECODE_CMD_PUSH_SEG, 0, false, false, + NULL, NULL, NULL, decode_pushseg, RFLAGS_MASK_NONE}, + {0x1f, X86_DECODE_CMD_POP_SEG, 0, false, false, + NULL, NULL, NULL, decode_popseg, RFLAGS_MASK_NONE}, + + {0x20, X86_DECODE_CMD_AND, 1, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x21, X86_DECODE_CMD_AND, 0, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x22, X86_DECODE_CMD_AND, 1, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x23, X86_DECODE_CMD_AND, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x24, X86_DECODE_CMD_AND, 1, false, decode_rax, decode_imm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x25, X86_DECODE_CMD_AND, 0, false, decode_rax, decode_imm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x28, X86_DECODE_CMD_SUB, 1, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x29, X86_DECODE_CMD_SUB, 0, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x2a, X86_DECODE_CMD_SUB, 1, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x2b, X86_DECODE_CMD_SUB, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x2c, X86_DECODE_CMD_SUB, 1, false, decode_rax, decode_imm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x2d, X86_DECODE_CMD_SUB, 0, false, decode_rax, decode_imm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x2f, X86_DECODE_CMD_DAS, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x30, X86_DECODE_CMD_XOR, 1, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x31, X86_DECODE_CMD_XOR, 0, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x32, X86_DECODE_CMD_XOR, 1, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x33, X86_DECODE_CMD_XOR, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x34, X86_DECODE_CMD_XOR, 1, false, decode_rax, decode_imm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x35, X86_DECODE_CMD_XOR, 0, false, decode_rax, decode_imm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + + {0x38, X86_DECODE_CMD_CMP, 1, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x39, X86_DECODE_CMD_CMP, 0, true, decode_modrm_rm, 
decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x3a, X86_DECODE_CMD_CMP, 1, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x3b, X86_DECODE_CMD_CMP, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x3c, X86_DECODE_CMD_CMP, 1, false, decode_rax, decode_imm8, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x3d, X86_DECODE_CMD_CMP, 0, false, decode_rax, decode_imm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + + {0x3f, X86_DECODE_CMD_AAS, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + + {0x40, X86_DECODE_CMD_INC, 0, false, + NULL, NULL, NULL, NULL, decode_incgroup, RFLAGS_MASK_OSZAPC}, + {0x41, X86_DECODE_CMD_INC, 0, false, + NULL, NULL, NULL, NULL, decode_incgroup, RFLAGS_MASK_OSZAPC}, + {0x42, X86_DECODE_CMD_INC, 0, false, + NULL, NULL, NULL, NULL, decode_incgroup, RFLAGS_MASK_OSZAPC}, + {0x43, X86_DECODE_CMD_INC, 0, false, + NULL, NULL, NULL, NULL, decode_incgroup, RFLAGS_MASK_OSZAPC}, + {0x44, X86_DECODE_CMD_INC, 0, false, + NULL, NULL, NULL, NULL, decode_incgroup, RFLAGS_MASK_OSZAPC}, + {0x45, X86_DECODE_CMD_INC, 0, false, + NULL, NULL, NULL, NULL, decode_incgroup, RFLAGS_MASK_OSZAPC}, + {0x46, X86_DECODE_CMD_INC, 0, false, + NULL, NULL, NULL, NULL, decode_incgroup, RFLAGS_MASK_OSZAPC}, + {0x47, X86_DECODE_CMD_INC, 0, false, + NULL, NULL, NULL, NULL, decode_incgroup, RFLAGS_MASK_OSZAPC}, + + {0x48, X86_DECODE_CMD_DEC, 0, false, + NULL, NULL, NULL, NULL, decode_decgroup, RFLAGS_MASK_OSZAPC}, + {0x49, X86_DECODE_CMD_DEC, 0, false, + NULL, NULL, NULL, NULL, decode_decgroup, RFLAGS_MASK_OSZAPC}, + {0x4a, X86_DECODE_CMD_DEC, 0, false, + NULL, NULL, NULL, NULL, decode_decgroup, RFLAGS_MASK_OSZAPC}, + {0x4b, X86_DECODE_CMD_DEC, 0, false, + NULL, NULL, NULL, NULL, decode_decgroup, RFLAGS_MASK_OSZAPC}, + {0x4c, X86_DECODE_CMD_DEC, 0, false, + NULL, NULL, NULL, NULL, decode_decgroup, RFLAGS_MASK_OSZAPC}, + {0x4d, X86_DECODE_CMD_DEC, 0, false, + NULL, NULL, NULL, NULL, decode_decgroup, RFLAGS_MASK_OSZAPC}, + {0x4e, X86_DECODE_CMD_DEC, 0, false, + NULL, NULL, NULL, NULL, decode_decgroup, RFLAGS_MASK_OSZAPC}, + {0x4f, X86_DECODE_CMD_DEC, 0, false, + NULL, NULL, NULL, NULL, decode_decgroup, RFLAGS_MASK_OSZAPC}, + + {0x50, X86_DECODE_CMD_PUSH, 0, false, + NULL, NULL, NULL, NULL, decode_pushgroup, RFLAGS_MASK_NONE}, + {0x51, X86_DECODE_CMD_PUSH, 0, false, + NULL, NULL, NULL, NULL, decode_pushgroup, RFLAGS_MASK_NONE}, + {0x52, X86_DECODE_CMD_PUSH, 0, false, + NULL, NULL, NULL, NULL, decode_pushgroup, RFLAGS_MASK_NONE}, + {0x53, X86_DECODE_CMD_PUSH, 0, false, + NULL, NULL, NULL, NULL, decode_pushgroup, RFLAGS_MASK_NONE}, + {0x54, X86_DECODE_CMD_PUSH, 0, false, + NULL, NULL, NULL, NULL, decode_pushgroup, RFLAGS_MASK_NONE}, + {0x55, X86_DECODE_CMD_PUSH, 0, false, + NULL, NULL, NULL, NULL, decode_pushgroup, RFLAGS_MASK_NONE}, + {0x56, X86_DECODE_CMD_PUSH, 0, false, + NULL, NULL, NULL, NULL, decode_pushgroup, RFLAGS_MASK_NONE}, + {0x57, X86_DECODE_CMD_PUSH, 0, false, + NULL, NULL, NULL, NULL, decode_pushgroup, RFLAGS_MASK_NONE}, + + {0x58, X86_DECODE_CMD_POP, 0, false, + NULL, NULL, NULL, NULL, decode_popgroup, RFLAGS_MASK_NONE}, + {0x59, X86_DECODE_CMD_POP, 0, false, + NULL, NULL, NULL, NULL, decode_popgroup, RFLAGS_MASK_NONE}, + {0x5a, X86_DECODE_CMD_POP, 0, false, + NULL, NULL, NULL, NULL, decode_popgroup, RFLAGS_MASK_NONE}, + {0x5b, X86_DECODE_CMD_POP, 0, false, + NULL, NULL, NULL, NULL, decode_popgroup, RFLAGS_MASK_NONE}, + {0x5c, X86_DECODE_CMD_POP, 0, false, + NULL, NULL, NULL, NULL, 
decode_popgroup, RFLAGS_MASK_NONE}, + {0x5d, X86_DECODE_CMD_POP, 0, false, + NULL, NULL, NULL, NULL, decode_popgroup, RFLAGS_MASK_NONE}, + {0x5e, X86_DECODE_CMD_POP, 0, false, + NULL, NULL, NULL, NULL, decode_popgroup, RFLAGS_MASK_NONE}, + {0x5f, X86_DECODE_CMD_POP, 0, false, + NULL, NULL, NULL, NULL, decode_popgroup, RFLAGS_MASK_NONE}, + + {0x60, X86_DECODE_CMD_PUSHA, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x61, X86_DECODE_CMD_POPA, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + + {0x68, X86_DECODE_CMD_PUSH, 0, false, decode_imm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x6a, X86_DECODE_CMD_PUSH, 0, false, decode_imm8_signed, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x69, X86_DECODE_CMD_IMUL_3, 0, true, decode_modrm_reg, + decode_modrm_rm, decode_imm, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x6b, X86_DECODE_CMD_IMUL_3, 0, true, decode_modrm_reg, decode_modrm_= rm, + decode_imm8_signed, NULL, NULL, RFLAGS_MASK_OSZAPC}, + + {0x6c, X86_DECODE_CMD_INS, 1, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x6d, X86_DECODE_CMD_INS, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x6e, X86_DECODE_CMD_OUTS, 1, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x6f, X86_DECODE_CMD_OUTS, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + + {0x70, X86_DECODE_CMD_JXX, 1, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x71, X86_DECODE_CMD_JXX, 1, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x72, X86_DECODE_CMD_JXX, 1, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x73, X86_DECODE_CMD_JXX, 1, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x74, X86_DECODE_CMD_JXX, 1, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x75, X86_DECODE_CMD_JXX, 1, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x76, X86_DECODE_CMD_JXX, 1, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x77, X86_DECODE_CMD_JXX, 1, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x78, X86_DECODE_CMD_JXX, 1, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x79, X86_DECODE_CMD_JXX, 1, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x7a, X86_DECODE_CMD_JXX, 1, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x7b, X86_DECODE_CMD_JXX, 1, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x7c, X86_DECODE_CMD_JXX, 1, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x7d, X86_DECODE_CMD_JXX, 1, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x7e, X86_DECODE_CMD_JXX, 1, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x7f, X86_DECODE_CMD_JXX, 1, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + + {0x80, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, decode_imm8, + NULL, NULL, decode_addgroup, RFLAGS_MASK_OSZAPC}, + {0x81, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_imm, + NULL, NULL, decode_addgroup, RFLAGS_MASK_OSZAPC}, + {0x82, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, decode_imm8, + NULL, NULL, decode_addgroup, RFLAGS_MASK_OSZAPC}, + {0x83, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_imm8_sign= ed, + NULL, NULL, decode_addgroup, RFLAGS_MASK_OSZAPC}, + {0x84, X86_DECODE_CMD_TST, 1, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x85, X86_DECODE_CMD_TST, 0, true, decode_modrm_rm, 
decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0x86, X86_DECODE_CMD_XCHG, 1, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x87, X86_DECODE_CMD_XCHG, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x88, X86_DECODE_CMD_MOV, 1, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x89, X86_DECODE_CMD_MOV, 0, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x8a, X86_DECODE_CMD_MOV, 1, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x8b, X86_DECODE_CMD_MOV, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x8c, X86_DECODE_CMD_MOV_FROM_SEG, 0, true, decode_modrm_rm, + decode_modrm_reg, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x8d, X86_DECODE_CMD_LEA, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x8e, X86_DECODE_CMD_MOV_TO_SEG, 0, true, decode_modrm_reg, + decode_modrm_rm, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x8f, X86_DECODE_CMD_POP, 0, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + + {0x90, X86_DECODE_CMD_NOP, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x91, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, + NULL, NULL, decode_xchgroup, RFLAGS_MASK_NONE}, + {0x92, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, + NULL, NULL, decode_xchgroup, RFLAGS_MASK_NONE}, + {0x93, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, + NULL, NULL, decode_xchgroup, RFLAGS_MASK_NONE}, + {0x94, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, + NULL, NULL, decode_xchgroup, RFLAGS_MASK_NONE}, + {0x95, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, + NULL, NULL, decode_xchgroup, RFLAGS_MASK_NONE}, + {0x96, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, + NULL, NULL, decode_xchgroup, RFLAGS_MASK_NONE}, + {0x97, X86_DECODE_CMD_XCHG, 0, false, NULL, decode_rax, + NULL, NULL, decode_xchgroup, RFLAGS_MASK_NONE}, + + {0x98, X86_DECODE_CMD_CBW, 0, false, NULL, NULL, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x99, X86_DECODE_CMD_CWD, 0, false, NULL, NULL, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + + {0x9a, X86_DECODE_CMD_CALL_FAR, 0, false, NULL, + NULL, NULL, NULL, decode_farjmp, RFLAGS_MASK_NONE}, + + {0x9c, X86_DECODE_CMD_PUSHF, 0, false, NULL, NULL, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + /*{0x9d, X86_DECODE_CMD_POPF, 0, false, NULL, NULL, + NULL, NULL, NULL, RFLAGS_MASK_POPF},*/ + {0x9e, X86_DECODE_CMD_SAHF, 0, false, NULL, NULL, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x9f, X86_DECODE_CMD_LAHF, 0, false, NULL, NULL, + NULL, NULL, NULL, RFLAGS_MASK_LAHF}, + + {0xa0, X86_DECODE_CMD_MOV, 1, false, decode_rax, fetch_moffs, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xa1, X86_DECODE_CMD_MOV, 0, false, decode_rax, fetch_moffs, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xa2, X86_DECODE_CMD_MOV, 1, false, fetch_moffs, decode_rax, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xa3, X86_DECODE_CMD_MOV, 0, false, fetch_moffs, decode_rax, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + + {0xa4, X86_DECODE_CMD_MOVS, 1, false, NULL, NULL, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xa5, X86_DECODE_CMD_MOVS, 0, false, NULL, NULL, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xa6, X86_DECODE_CMD_CMPS, 1, false, NULL, NULL, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xa7, X86_DECODE_CMD_CMPS, 0, false, NULL, NULL, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xaa, X86_DECODE_CMD_STOS, 1, false, NULL, NULL, + NULL, NULL, NULL, 
RFLAGS_MASK_NONE}, + {0xab, X86_DECODE_CMD_STOS, 0, false, NULL, NULL, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xac, X86_DECODE_CMD_LODS, 1, false, NULL, NULL, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xad, X86_DECODE_CMD_LODS, 0, false, NULL, NULL, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xae, X86_DECODE_CMD_SCAS, 1, false, NULL, NULL, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xaf, X86_DECODE_CMD_SCAS, 0, false, NULL, NULL, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + + {0xa8, X86_DECODE_CMD_TST, 1, false, decode_rax, decode_imm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xa9, X86_DECODE_CMD_TST, 0, false, decode_rax, decode_imm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + + {0xb0, X86_DECODE_CMD_MOV, 1, false, NULL, + NULL, NULL, NULL, decode_movgroup8, RFLAGS_MASK_NONE}, + {0xb1, X86_DECODE_CMD_MOV, 1, false, NULL, + NULL, NULL, NULL, decode_movgroup8, RFLAGS_MASK_NONE}, + {0xb2, X86_DECODE_CMD_MOV, 1, false, NULL, + NULL, NULL, NULL, decode_movgroup8, RFLAGS_MASK_NONE}, + {0xb3, X86_DECODE_CMD_MOV, 1, false, NULL, + NULL, NULL, NULL, decode_movgroup8, RFLAGS_MASK_NONE}, + {0xb4, X86_DECODE_CMD_MOV, 1, false, NULL, + NULL, NULL, NULL, decode_movgroup8, RFLAGS_MASK_NONE}, + {0xb5, X86_DECODE_CMD_MOV, 1, false, NULL, + NULL, NULL, NULL, decode_movgroup8, RFLAGS_MASK_NONE}, + {0xb6, X86_DECODE_CMD_MOV, 1, false, NULL, + NULL, NULL, NULL, decode_movgroup8, RFLAGS_MASK_NONE}, + {0xb7, X86_DECODE_CMD_MOV, 1, false, NULL, + NULL, NULL, NULL, decode_movgroup8, RFLAGS_MASK_NONE}, + + {0xb8, X86_DECODE_CMD_MOV, 0, false, NULL, + NULL, NULL, NULL, decode_movgroup, RFLAGS_MASK_NONE}, + {0xb9, X86_DECODE_CMD_MOV, 0, false, NULL, + NULL, NULL, NULL, decode_movgroup, RFLAGS_MASK_NONE}, + {0xba, X86_DECODE_CMD_MOV, 0, false, NULL, + NULL, NULL, NULL, decode_movgroup, RFLAGS_MASK_NONE}, + {0xbb, X86_DECODE_CMD_MOV, 0, false, NULL, + NULL, NULL, NULL, decode_movgroup, RFLAGS_MASK_NONE}, + {0xbc, X86_DECODE_CMD_MOV, 0, false, NULL, + NULL, NULL, NULL, decode_movgroup, RFLAGS_MASK_NONE}, + {0xbd, X86_DECODE_CMD_MOV, 0, false, NULL, + NULL, NULL, NULL, decode_movgroup, RFLAGS_MASK_NONE}, + {0xbe, X86_DECODE_CMD_MOV, 0, false, NULL, + NULL, NULL, NULL, decode_movgroup, RFLAGS_MASK_NONE}, + {0xbf, X86_DECODE_CMD_MOV, 0, false, NULL, + NULL, NULL, NULL, decode_movgroup, RFLAGS_MASK_NONE}, + + {0xc0, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, decode_imm8, + NULL, NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, + {0xc1, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_imm8, + NULL, NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, + + {0xc2, X86_DECODE_RET_NEAR, 0, false, decode_imm16, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xc3, X86_DECODE_RET_NEAR, 0, false, NULL, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + + {0xc4, X86_DECODE_CMD_LES, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xc5, X86_DECODE_CMD_LDS, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + + {0xc6, X86_DECODE_CMD_MOV, 1, true, decode_modrm_rm, decode_imm8, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xc7, X86_DECODE_CMD_MOV, 0, true, decode_modrm_rm, decode_imm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + + {0xc8, X86_DECODE_CMD_ENTER, 0, false, decode_imm16, decode_imm8, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xc9, X86_DECODE_CMD_LEAVE, 0, false, NULL, NULL, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xca, X86_DECODE_RET_FAR, 0, false, decode_imm16, NULL, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xcb, X86_DECODE_RET_FAR, 0, false, decode_imm_0, NULL, + 
NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xcd, X86_DECODE_CMD_INT, 0, false, decode_imm8, NULL, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + /*{0xcf, X86_DECODE_CMD_IRET, 0, false, NULL, NULL, + NULL, NULL, NULL, RFLAGS_MASK_IRET},*/ + + {0xd0, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, decode_imm_1, + NULL, NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, + {0xd1, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_imm_1, + NULL, NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, + {0xd2, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, decode_rcx, + NULL, NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, + {0xd3, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_rcx, + NULL, NULL, decode_rotgroup, RFLAGS_MASK_OSZAPC}, + + {0xd4, X86_DECODE_CMD_AAM, 0, false, decode_imm8, + NULL, NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xd5, X86_DECODE_CMD_AAD, 0, false, decode_imm8, + NULL, NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + + {0xd7, X86_DECODE_CMD_XLAT, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + + {0xd8, X86_DECODE_CMD_INVL, 0, true, NULL, + NULL, NULL, NULL, decode_x87_ins, RFLAGS_MASK_NONE}, + {0xd9, X86_DECODE_CMD_INVL, 0, true, NULL, + NULL, NULL, NULL, decode_x87_ins, RFLAGS_MASK_NONE}, + {0xda, X86_DECODE_CMD_INVL, 0, true, NULL, + NULL, NULL, NULL, decode_x87_ins, RFLAGS_MASK_NONE}, + {0xdb, X86_DECODE_CMD_INVL, 0, true, NULL, + NULL, NULL, NULL, decode_x87_ins, RFLAGS_MASK_NONE}, + {0xdc, X86_DECODE_CMD_INVL, 0, true, NULL, + NULL, NULL, NULL, decode_x87_ins, RFLAGS_MASK_NONE}, + {0xdd, X86_DECODE_CMD_INVL, 0, true, NULL, + NULL, NULL, NULL, decode_x87_ins, RFLAGS_MASK_NONE}, + {0xde, X86_DECODE_CMD_INVL, 0, true, NULL, + NULL, NULL, NULL, decode_x87_ins, RFLAGS_MASK_NONE}, + {0xdf, X86_DECODE_CMD_INVL, 0, true, NULL, + NULL, NULL, NULL, decode_x87_ins, RFLAGS_MASK_NONE}, + + {0xe0, X86_DECODE_CMD_LOOP, 0, false, decode_imm8_signed, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xe1, X86_DECODE_CMD_LOOP, 0, false, decode_imm8_signed, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xe2, X86_DECODE_CMD_LOOP, 0, false, decode_imm8_signed, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + + {0xe3, X86_DECODE_CMD_JCXZ, 1, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + + {0xe4, X86_DECODE_CMD_IN, 1, false, decode_imm8, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xe5, X86_DECODE_CMD_IN, 0, false, decode_imm8, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xe6, X86_DECODE_CMD_OUT, 1, false, decode_imm8, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xe7, X86_DECODE_CMD_OUT, 0, false, decode_imm8, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xe8, X86_DECODE_CMD_CALL_NEAR, 0, false, decode_imm_signed, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xe9, X86_DECODE_CMD_JMP_NEAR, 0, false, decode_imm_signed, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xea, X86_DECODE_CMD_JMP_FAR, 0, false, + NULL, NULL, NULL, NULL, decode_farjmp, RFLAGS_MASK_NONE}, + {0xeb, X86_DECODE_CMD_JMP_NEAR, 1, false, decode_imm8_signed, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xec, X86_DECODE_CMD_IN, 1, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xed, X86_DECODE_CMD_IN, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xee, X86_DECODE_CMD_OUT, 1, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xef, X86_DECODE_CMD_OUT, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + + {0xf4, X86_DECODE_CMD_HLT, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + + {0xf5, X86_DECODE_CMD_CMC, 0, false, + NULL, NULL, NULL, 
NULL, NULL, RFLAGS_MASK_CF}, + + {0xf6, X86_DECODE_CMD_INVL, 1, true, + NULL, NULL, NULL, NULL, decode_f7group, RFLAGS_MASK_OSZAPC}, + {0xf7, X86_DECODE_CMD_INVL, 0, true, + NULL, NULL, NULL, NULL, decode_f7group, RFLAGS_MASK_OSZAPC}, + + {0xf8, X86_DECODE_CMD_CLC, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_CF}, + {0xf9, X86_DECODE_CMD_STC, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_CF}, + + {0xfa, X86_DECODE_CMD_CLI, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_IF}, + {0xfb, X86_DECODE_CMD_STI, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_IF}, + {0xfc, X86_DECODE_CMD_CLD, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_DF}, + {0xfd, X86_DECODE_CMD_STD, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_DF}, + {0xfe, X86_DECODE_CMD_INVL, 1, true, decode_modrm_rm, + NULL, NULL, NULL, decode_incgroup2, RFLAGS_MASK_OSZAPC}, + {0xff, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, + NULL, NULL, NULL, decode_ffgroup, RFLAGS_MASK_OSZAPC}, }; =20 -struct decode_tbl _2op_inst[] =3D -{ - {0x0, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, NULL, NULL, NULL,= decode_sldtgroup, RFLAGS_MASK_NONE}, - {0x1, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, NULL, NULL, NULL,= decode_lidtgroup, RFLAGS_MASK_NONE}, - {0x6, X86_DECODE_CMD_CLTS, 0, false, NULL, NULL, NULL, NULL, NULL, RFL= AGS_MASK_TF}, - {0x9, X86_DECODE_CMD_WBINVD, 0, false, NULL, NULL, NULL, NULL, NULL, R= FLAGS_MASK_NONE}, - {0x18, X86_DECODE_CMD_PREFETCH, 0, true, NULL, NULL, NULL, NULL, decod= e_x87_general, RFLAGS_MASK_NONE}, - {0x1f, X86_DECODE_CMD_NOP, 0, true, decode_modrm_rm, NULL, NULL, NULL,= NULL, RFLAGS_MASK_NONE}, - {0x20, X86_DECODE_CMD_MOV_FROM_CR, 0, true, decode_modrm_rm, decode_mo= drm_reg, NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x21, X86_DECODE_CMD_MOV_FROM_DR, 0, true, decode_modrm_rm, decode_mo= drm_reg, NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x22, X86_DECODE_CMD_MOV_TO_CR, 0, true, decode_modrm_reg, decode_mod= rm_rm, NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x23, X86_DECODE_CMD_MOV_TO_DR, 0, true, decode_modrm_reg, decode_mod= rm_rm, NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x30, X86_DECODE_CMD_WRMSR, 0, false, NULL, NULL, NULL, NULL, NULL, R= FLAGS_MASK_NONE}, - {0x31, X86_DECODE_CMD_RDTSC, 0, false, NULL, NULL, NULL, NULL, NULL, R= FLAGS_MASK_NONE}, - {0x32, X86_DECODE_CMD_RDMSR, 0, false, NULL, NULL, NULL, NULL, NULL, R= FLAGS_MASK_NONE}, - {0x40, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x41, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x42, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x43, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x44, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x45, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x46, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x47, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x48, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x49, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x4a, 
X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x4b, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x4c, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x4d, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x4e, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x4f, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm= , NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0x77, X86_DECODE_CMD_EMMS, 0, false, NULL, NULL, NULL, NULL, decode_x= 87_general, RFLAGS_MASK_NONE}, - {0x82, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x83, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x84, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x85, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x86, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x87, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x88, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x89, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x8a, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x8b, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x8c, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x8d, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x8e, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x8f, X86_DECODE_CMD_JXX, 0, false, NULL, NULL, NULL, NULL, decode_jx= x, RFLAGS_MASK_NONE}, - {0x90, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, - {0x91, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, - {0x92, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, - {0x93, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, - {0x94, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, - {0x95, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, - {0x96, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, - {0x97, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, - {0x98, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, - {0x99, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, - {0x9a, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, - {0x9b, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, - {0x9c, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, - {0x9d, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, - {0x9e, 
X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, - {0x9f, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, NULL, NULL, NUL= L, NULL, RFLAGS_MASK_NONE}, - =20 - {0xb0, X86_DECODE_CMD_CMPXCHG, 1, true, decode_modrm_rm, decode_modrm_= reg, NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0xb1, X86_DECODE_CMD_CMPXCHG, 0, true, decode_modrm_rm, decode_modrm_= reg, NULL, NULL, NULL, RFLAGS_MASK_NONE}, - =20 - {0xb6, X86_DECODE_CMD_MOVZX, 0, true, decode_modrm_reg, decode_modrm_r= m, NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0xb7, X86_DECODE_CMD_MOVZX, 0, true, decode_modrm_reg, decode_modrm_r= m, NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0xb8, X86_DECODE_CMD_POPCNT, 0, true, decode_modrm_reg, decode_modrm_= rm, NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0xbe, X86_DECODE_CMD_MOVSX, 0, true, decode_modrm_reg, decode_modrm_r= m, NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0xbf, X86_DECODE_CMD_MOVSX, 0, true, decode_modrm_reg, decode_modrm_r= m, NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0xa0, X86_DECODE_CMD_PUSH_SEG, 0, false, false, NULL, NULL, NULL, dec= ode_pushseg, RFLAGS_MASK_NONE}, - {0xa1, X86_DECODE_CMD_POP_SEG, 0, false, false, NULL, NULL, NULL, deco= de_popseg, RFLAGS_MASK_NONE}, - {0xa2, X86_DECODE_CMD_CPUID, 0, false, NULL, NULL, NULL, NULL, NULL, R= FLAGS_MASK_NONE}, - {0xa3, X86_DECODE_CMD_BT, 0, true, decode_modrm_rm, decode_modrm_reg, = NULL, NULL, NULL, RFLAGS_MASK_CF}, - {0xa4, X86_DECODE_CMD_SHLD, 0, true, decode_modrm_rm, decode_modrm_reg= , decode_imm8, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0xa5, X86_DECODE_CMD_SHLD, 0, true, decode_modrm_rm, decode_modrm_reg= , decode_rcx, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0xa8, X86_DECODE_CMD_PUSH_SEG, 0, false, false, NULL, NULL, NULL, dec= ode_pushseg, RFLAGS_MASK_NONE}, - {0xa9, X86_DECODE_CMD_POP_SEG, 0, false, false, NULL, NULL, NULL, deco= de_popseg, RFLAGS_MASK_NONE}, - {0xab, X86_DECODE_CMD_BTS, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_CF}, - {0xac, X86_DECODE_CMD_SHRD, 0, true, decode_modrm_rm, decode_modrm_reg= , decode_imm8, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0xad, X86_DECODE_CMD_SHRD, 0, true, decode_modrm_rm, decode_modrm_reg= , decode_rcx, NULL, NULL, RFLAGS_MASK_OSZAPC}, - =20 - {0xae, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, NULL, NULL, NULL= , decode_aegroup, RFLAGS_MASK_NONE}, - =20 - {0xaf, X86_DECODE_CMD_IMUL_2, 0, true, decode_modrm_reg, decode_modrm_= rm, NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0xb2, X86_DECODE_CMD_LSS, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_NONE}, - {0xb3, X86_DECODE_CMD_BTR, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0xba, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_imm8, NUL= L, NULL, decode_btgroup, RFLAGS_MASK_OSZAPC}, - {0xbb, X86_DECODE_CMD_BTC, 0, true, decode_modrm_rm, decode_modrm_reg,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0xbc, X86_DECODE_CMD_BSF, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - {0xbd, X86_DECODE_CMD_BSR, 0, true, decode_modrm_reg, decode_modrm_rm,= NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - =20 - {0xc1, X86_DECODE_CMD_XADD, 0, true, decode_modrm_rm, decode_modrm_reg= , NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, - =20 - {0xc7, X86_DECODE_CMD_CMPXCHG8B, 0, true, decode_modrm_rm, NULL, NULL,= NULL, NULL, RFLAGS_MASK_ZF}, - =20 - {0xc8, X86_DECODE_CMD_BSWAP, 0, false, NULL, NULL, NULL, NULL, decode_= bswap, RFLAGS_MASK_NONE}, - {0xc9, X86_DECODE_CMD_BSWAP, 0, false, NULL, NULL, NULL, NULL, 
decode_= bswap, RFLAGS_MASK_NONE}, - {0xca, X86_DECODE_CMD_BSWAP, 0, false, NULL, NULL, NULL, NULL, decode_= bswap, RFLAGS_MASK_NONE}, - {0xcb, X86_DECODE_CMD_BSWAP, 0, false, NULL, NULL, NULL, NULL, decode_= bswap, RFLAGS_MASK_NONE}, - {0xcc, X86_DECODE_CMD_BSWAP, 0, false, NULL, NULL, NULL, NULL, decode_= bswap, RFLAGS_MASK_NONE}, - {0xcd, X86_DECODE_CMD_BSWAP, 0, false, NULL, NULL, NULL, NULL, decode_= bswap, RFLAGS_MASK_NONE}, - {0xce, X86_DECODE_CMD_BSWAP, 0, false, NULL, NULL, NULL, NULL, decode_= bswap, RFLAGS_MASK_NONE}, - {0xcf, X86_DECODE_CMD_BSWAP, 0, false, NULL, NULL, NULL, NULL, decode_= bswap, RFLAGS_MASK_NONE}, +struct decode_tbl _2op_inst[] =3D { + {0x0, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, + NULL, NULL, NULL, decode_sldtgroup, RFLAGS_MASK_NONE}, + {0x1, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, + NULL, NULL, NULL, decode_lidtgroup, RFLAGS_MASK_NONE}, + {0x6, X86_DECODE_CMD_CLTS, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_TF}, + {0x9, X86_DECODE_CMD_WBINVD, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x18, X86_DECODE_CMD_PREFETCH, 0, true, + NULL, NULL, NULL, NULL, decode_x87_general, RFLAGS_MASK_NONE}, + {0x1f, X86_DECODE_CMD_NOP, 0, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x20, X86_DECODE_CMD_MOV_FROM_CR, 0, true, decode_modrm_rm, + decode_modrm_reg, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x21, X86_DECODE_CMD_MOV_FROM_DR, 0, true, decode_modrm_rm, + decode_modrm_reg, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x22, X86_DECODE_CMD_MOV_TO_CR, 0, true, decode_modrm_reg, + decode_modrm_rm, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x23, X86_DECODE_CMD_MOV_TO_DR, 0, true, decode_modrm_reg, + decode_modrm_rm, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x30, X86_DECODE_CMD_WRMSR, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x31, X86_DECODE_CMD_RDTSC, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x32, X86_DECODE_CMD_RDMSR, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x40, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x41, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x42, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x43, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x44, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x45, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x46, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x47, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x48, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x49, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x4a, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x4b, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x4c, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x4d, X86_DECODE_CMD_CMOV, 0, true, 
decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x4e, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x4f, X86_DECODE_CMD_CMOV, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x77, X86_DECODE_CMD_EMMS, 0, false, + NULL, NULL, NULL, NULL, decode_x87_general, RFLAGS_MASK_NONE}, + {0x82, X86_DECODE_CMD_JXX, 0, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x83, X86_DECODE_CMD_JXX, 0, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x84, X86_DECODE_CMD_JXX, 0, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x85, X86_DECODE_CMD_JXX, 0, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x86, X86_DECODE_CMD_JXX, 0, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x87, X86_DECODE_CMD_JXX, 0, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x88, X86_DECODE_CMD_JXX, 0, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x89, X86_DECODE_CMD_JXX, 0, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x8a, X86_DECODE_CMD_JXX, 0, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x8b, X86_DECODE_CMD_JXX, 0, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x8c, X86_DECODE_CMD_JXX, 0, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x8d, X86_DECODE_CMD_JXX, 0, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x8e, X86_DECODE_CMD_JXX, 0, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x8f, X86_DECODE_CMD_JXX, 0, false, + NULL, NULL, NULL, NULL, decode_jxx, RFLAGS_MASK_NONE}, + {0x90, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x91, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x92, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x93, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x94, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x95, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x96, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x97, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x98, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x99, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x9a, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x9b, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x9c, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x9d, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x9e, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0x9f, X86_DECODE_CMD_SETXX, 1, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + + {0xb0, X86_DECODE_CMD_CMPXCHG, 1, true, decode_modrm_rm, decode_modrm_= reg, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xb1, X86_DECODE_CMD_CMPXCHG, 0, true, 
decode_modrm_rm, decode_modrm_= reg, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + + {0xb6, X86_DECODE_CMD_MOVZX, 0, true, decode_modrm_reg, decode_modrm_r= m, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xb7, X86_DECODE_CMD_MOVZX, 0, true, decode_modrm_reg, decode_modrm_r= m, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xb8, X86_DECODE_CMD_POPCNT, 0, true, decode_modrm_reg, decode_modrm_= rm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xbe, X86_DECODE_CMD_MOVSX, 0, true, decode_modrm_reg, decode_modrm_r= m, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xbf, X86_DECODE_CMD_MOVSX, 0, true, decode_modrm_reg, decode_modrm_r= m, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xa0, X86_DECODE_CMD_PUSH_SEG, 0, false, false, + NULL, NULL, NULL, decode_pushseg, RFLAGS_MASK_NONE}, + {0xa1, X86_DECODE_CMD_POP_SEG, 0, false, false, + NULL, NULL, NULL, decode_popseg, RFLAGS_MASK_NONE}, + {0xa2, X86_DECODE_CMD_CPUID, 0, false, + NULL, NULL, NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xa3, X86_DECODE_CMD_BT, 0, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_CF}, + {0xa4, X86_DECODE_CMD_SHLD, 0, true, decode_modrm_rm, decode_modrm_reg, + decode_imm8, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xa5, X86_DECODE_CMD_SHLD, 0, true, decode_modrm_rm, decode_modrm_reg, + decode_rcx, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xa8, X86_DECODE_CMD_PUSH_SEG, 0, false, false, + NULL, NULL, NULL, decode_pushseg, RFLAGS_MASK_NONE}, + {0xa9, X86_DECODE_CMD_POP_SEG, 0, false, false, + NULL, NULL, NULL, decode_popseg, RFLAGS_MASK_NONE}, + {0xab, X86_DECODE_CMD_BTS, 0, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_CF}, + {0xac, X86_DECODE_CMD_SHRD, 0, true, decode_modrm_rm, decode_modrm_reg, + decode_imm8, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xad, X86_DECODE_CMD_SHRD, 0, true, decode_modrm_rm, decode_modrm_reg, + decode_rcx, NULL, NULL, RFLAGS_MASK_OSZAPC}, + + {0xae, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, + NULL, NULL, NULL, decode_aegroup, RFLAGS_MASK_NONE}, + + {0xaf, X86_DECODE_CMD_IMUL_2, 0, true, decode_modrm_reg, decode_modrm_= rm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xb2, X86_DECODE_CMD_LSS, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_NONE}, + {0xb3, X86_DECODE_CMD_BTR, 0, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xba, X86_DECODE_CMD_INVL, 0, true, decode_modrm_rm, decode_imm8, + NULL, NULL, decode_btgroup, RFLAGS_MASK_OSZAPC}, + {0xbb, X86_DECODE_CMD_BTC, 0, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xbc, X86_DECODE_CMD_BSF, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + {0xbd, X86_DECODE_CMD_BSR, 0, true, decode_modrm_reg, decode_modrm_rm, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + + {0xc1, X86_DECODE_CMD_XADD, 0, true, decode_modrm_rm, decode_modrm_reg, + NULL, NULL, NULL, RFLAGS_MASK_OSZAPC}, + + {0xc7, X86_DECODE_CMD_CMPXCHG8B, 0, true, decode_modrm_rm, + NULL, NULL, NULL, NULL, RFLAGS_MASK_ZF}, + + {0xc8, X86_DECODE_CMD_BSWAP, 0, false, + NULL, NULL, NULL, NULL, decode_bswap, RFLAGS_MASK_NONE}, + {0xc9, X86_DECODE_CMD_BSWAP, 0, false, + NULL, NULL, NULL, NULL, decode_bswap, RFLAGS_MASK_NONE}, + {0xca, X86_DECODE_CMD_BSWAP, 0, false, + NULL, NULL, NULL, NULL, decode_bswap, RFLAGS_MASK_NONE}, + {0xcb, X86_DECODE_CMD_BSWAP, 0, false, + NULL, NULL, NULL, NULL, decode_bswap, RFLAGS_MASK_NONE}, + {0xcc, X86_DECODE_CMD_BSWAP, 0, false, + NULL, NULL, NULL, NULL, decode_bswap, RFLAGS_MASK_NONE}, + {0xcd, 
X86_DECODE_CMD_BSWAP, 0, false, + NULL, NULL, NULL, NULL, decode_bswap, RFLAGS_MASK_NONE}, + {0xce, X86_DECODE_CMD_BSWAP, 0, false, + NULL, NULL, NULL, NULL, decode_bswap, RFLAGS_MASK_NONE}, + {0xcf, X86_DECODE_CMD_BSWAP, 0, false, + NULL, NULL, NULL, NULL, decode_bswap, RFLAGS_MASK_NONE}, }; =20 -struct decode_x87_tbl invl_inst_x87 =3D {0x0, 0, 0, 0, 0, false, false, NU= LL, NULL, decode_invalid, 0}; - -struct decode_x87_tbl _x87_inst[] =3D -{ - {0xd8, 0, 3, X86_DECODE_CMD_FADD, 10, false, false, decode_x87_modrm_s= t0, decode_decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xd8, 0, 0, X86_DECODE_CMD_FADD, 4, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, - {0xd8, 1, 3, X86_DECODE_CMD_FMUL, 10, false, false, decode_x87_modrm_s= t0, decode_decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xd8, 1, 0, X86_DECODE_CMD_FMUL, 4, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, - {0xd8, 4, 3, X86_DECODE_CMD_FSUB, 10, false, false, decode_x87_modrm_s= t0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xd8, 4, 0, X86_DECODE_CMD_FSUB, 4, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, - {0xd8, 5, 3, X86_DECODE_CMD_FSUB, 10, true, false, decode_x87_modrm_st= 0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xd8, 5, 0, X86_DECODE_CMD_FSUB, 4, true, false, decode_x87_modrm_st0= , decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, - {0xd8, 6, 3, X86_DECODE_CMD_FDIV, 10, false, false, decode_x87_modrm_s= t0,decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xd8, 6, 0, X86_DECODE_CMD_FDIV, 4, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, - {0xd8, 7, 3, X86_DECODE_CMD_FDIV, 10, true, false, decode_x87_modrm_st= 0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xd8, 7, 0, X86_DECODE_CMD_FDIV, 4, true, false, decode_x87_modrm_st0= , decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, - =20 - {0xd9, 0, 3, X86_DECODE_CMD_FLD, 10, false, false, decode_x87_modrm_st= 0, NULL, NULL, RFLAGS_MASK_NONE}, - {0xd9, 0, 0, X86_DECODE_CMD_FLD, 4, false, false, decode_x87_modrm_flo= atp, NULL, NULL, RFLAGS_MASK_NONE}, - {0xd9, 1, 3, X86_DECODE_CMD_FXCH, 10, false, false, decode_x87_modrm_s= t0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xd9, 1, 0, X86_DECODE_CMD_INVL, 10, false, false, decode_x87_modrm_s= t0, NULL, NULL, RFLAGS_MASK_NONE}, - {0xd9, 2, 3, X86_DECODE_CMD_INVL, 10, false, false, decode_x87_modrm_s= t0, NULL, NULL, RFLAGS_MASK_NONE}, - {0xd9, 2, 0, X86_DECODE_CMD_FST, 4, false, false, decode_x87_modrm_flo= atp, NULL, NULL, RFLAGS_MASK_NONE}, - {0xd9, 3, 3, X86_DECODE_CMD_INVL, 10, false, false, decode_x87_modrm_s= t0, NULL, NULL, RFLAGS_MASK_NONE}, - {0xd9, 3, 0, X86_DECODE_CMD_FST, 4, false, true, decode_x87_modrm_floa= tp, NULL, NULL, RFLAGS_MASK_NONE}, - {0xd9, 4, 3, X86_DECODE_CMD_INVL, 10, false, false, decode_x87_modrm_s= t0, NULL, decode_d9_4, RFLAGS_MASK_NONE}, - {0xd9, 4, 0, X86_DECODE_CMD_INVL, 4, false, false, decode_x87_modrm_by= tep, NULL, NULL, RFLAGS_MASK_NONE}, - {0xd9, 5, 3, X86_DECODE_CMD_FLDxx, 10, false, false, NULL, NULL, NULL,= RFLAGS_MASK_NONE}, - {0xd9, 5, 0, X86_DECODE_CMD_FLDCW, 2, false, false, decode_x87_modrm_b= ytep, NULL, NULL, RFLAGS_MASK_NONE}, - // - {0xd9, 7, 3, X86_DECODE_CMD_FNSTCW, 2, false, false, decode_x87_modrm_= bytep, NULL, NULL, RFLAGS_MASK_NONE}, - {0xd9, 7, 0, X86_DECODE_CMD_FNSTCW, 2, false, false, decode_x87_modrm_= bytep, NULL, NULL, RFLAGS_MASK_NONE}, - =20 - {0xda, 0, 3, 
X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xda, 0, 0, X86_DECODE_CMD_FADD, 4, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, - {0xda, 1, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, decode_decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xda, 1, 0, X86_DECODE_CMD_FMUL, 4, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, - {0xda, 2, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xda, 3, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xda, 4, 3, X86_DECODE_CMD_INVL, 10, false, false, NULL, NULL, NULL, = RFLAGS_MASK_NONE}, - {0xda, 4, 0, X86_DECODE_CMD_FSUB, 4, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, - {0xda, 5, 3, X86_DECODE_CMD_FUCOM, 10, false, true, decode_x87_modrm_s= t0, decode_decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xda, 5, 0, X86_DECODE_CMD_FSUB, 4, true, false, decode_x87_modrm_st0= , decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, - {0xda, 6, 3, X86_DECODE_CMD_INVL, 10, false, false, NULL, NULL, NULL, = RFLAGS_MASK_NONE}, - {0xda, 6, 0, X86_DECODE_CMD_FDIV, 4, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, - {0xda, 7, 3, X86_DECODE_CMD_INVL, 10, false, false, NULL, NULL, NULL, = RFLAGS_MASK_NONE}, - {0xda, 7, 0, X86_DECODE_CMD_FDIV, 4, true, false, decode_x87_modrm_st0= , decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, - =20 - {0xdb, 0, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdb, 0, 0, X86_DECODE_CMD_FLD, 4, false, false, decode_x87_modrm_int= p, NULL, NULL, RFLAGS_MASK_NONE}, - {0xdb, 1, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdb, 2, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdb, 2, 0, X86_DECODE_CMD_FST, 4, false, false, decode_x87_modrm_int= p, NULL, NULL, RFLAGS_MASK_NONE}, - {0xdb, 3, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdb, 3, 0, X86_DECODE_CMD_FST, 4, false, true, decode_x87_modrm_intp= , NULL, NULL, RFLAGS_MASK_NONE}, - {0xdb, 4, 3, X86_DECODE_CMD_INVL, 10, false, false, NULL, NULL, decode= _db_4, RFLAGS_MASK_NONE}, - {0xdb, 4, 0, X86_DECODE_CMD_INVL, 10, false, false, NULL, NULL, NULL, = RFLAGS_MASK_NONE}, - {0xdb, 5, 3, X86_DECODE_CMD_FUCOMI, 10, false, false, decode_x87_modrm= _st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdb, 5, 0, X86_DECODE_CMD_FLD, 10, false, false, decode_x87_modrm_fl= oatp, NULL, NULL, RFLAGS_MASK_NONE}, - {0xdb, 7, 0, X86_DECODE_CMD_FST, 10, false, true, decode_x87_modrm_flo= atp, NULL, NULL, RFLAGS_MASK_NONE}, - =20 - {0xdc, 0, 3, X86_DECODE_CMD_FADD, 10, false, false, decode_x87_modrm_s= t0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdc, 0, 0, X86_DECODE_CMD_FADD, 8, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, - {0xdc, 1, 3, X86_DECODE_CMD_FMUL, 10, false, false, decode_x87_modrm_s= t0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdc, 1, 0, X86_DECODE_CMD_FMUL, 8, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, - {0xdc, 4, 3, 
X86_DECODE_CMD_FSUB, 10, true, false, decode_x87_modrm_st= 0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdc, 4, 0, X86_DECODE_CMD_FSUB, 8, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, - {0xdc, 5, 3, X86_DECODE_CMD_FSUB, 10, false, false, decode_x87_modrm_s= t0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdc, 5, 0, X86_DECODE_CMD_FSUB, 8, true, false, decode_x87_modrm_st0= , decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, - {0xdc, 6, 3, X86_DECODE_CMD_FDIV, 10, true, false, decode_x87_modrm_st= 0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdc, 6, 0, X86_DECODE_CMD_FDIV, 8, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, - {0xdc, 7, 3, X86_DECODE_CMD_FDIV, 10, false, false, decode_x87_modrm_s= t0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdc, 7, 0, X86_DECODE_CMD_FDIV, 8, true, false, decode_x87_modrm_st0= , decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, - =20 - {0xdd, 0, 0, X86_DECODE_CMD_FLD, 8, false, false, decode_x87_modrm_flo= atp, NULL, NULL, RFLAGS_MASK_NONE}, - {0xdd, 1, 3, X86_DECODE_CMD_FXCH, 10, false, false, decode_x87_modrm_s= t0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdd, 2, 3, X86_DECODE_CMD_FST, 10, false, false, decode_x87_modrm_st= 0, NULL, NULL, RFLAGS_MASK_NONE}, - {0xdd, 2, 0, X86_DECODE_CMD_FST, 8, false, false, decode_x87_modrm_flo= atp, NULL, NULL, RFLAGS_MASK_NONE}, - {0xdd, 3, 3, X86_DECODE_CMD_FST, 10, false, true, decode_x87_modrm_st0= , NULL, NULL, RFLAGS_MASK_NONE}, - {0xdd, 3, 0, X86_DECODE_CMD_FST, 8, false, true, decode_x87_modrm_floa= tp, NULL, NULL, RFLAGS_MASK_NONE}, - {0xdd, 4, 3, X86_DECODE_CMD_FUCOM, 10, false, false, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdd, 4, 0, X86_DECODE_CMD_FRSTOR, 8, false, false, decode_x87_modrm_= bytep, NULL, NULL, RFLAGS_MASK_NONE}, - {0xdd, 5, 3, X86_DECODE_CMD_FUCOM, 10, false, true, decode_x87_modrm_s= t0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdd, 7, 0, X86_DECODE_CMD_FNSTSW, 0, false, false, decode_x87_modrm_= bytep, NULL, NULL, RFLAGS_MASK_NONE}, - {0xdd, 7, 3, X86_DECODE_CMD_FNSTSW, 0, false, false, decode_x87_modrm_= bytep, NULL, NULL, RFLAGS_MASK_NONE}, - =20 - {0xde, 0, 3, X86_DECODE_CMD_FADD, 10, false, true, decode_x87_modrm_st= 0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xde, 0, 0, X86_DECODE_CMD_FADD, 2, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, - {0xde, 1, 3, X86_DECODE_CMD_FMUL, 10, false, true, decode_x87_modrm_st= 0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xde, 1, 0, X86_DECODE_CMD_FMUL, 2, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, - {0xde, 4, 3, X86_DECODE_CMD_FSUB, 10, true, true, decode_x87_modrm_st0= , decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xde, 4, 0, X86_DECODE_CMD_FSUB, 2, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, - {0xde, 5, 3, X86_DECODE_CMD_FSUB, 10, false, true, decode_x87_modrm_st= 0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xde, 5, 0, X86_DECODE_CMD_FSUB, 2, true, false, decode_x87_modrm_st0= , decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, - {0xde, 6, 3, X86_DECODE_CMD_FDIV, 10, true, true, decode_x87_modrm_st0= , decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xde, 6, 0, X86_DECODE_CMD_FDIV, 2, false, false, decode_x87_modrm_st= 0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, - {0xde, 7, 3, X86_DECODE_CMD_FDIV, 10, false, true, 
decode_x87_modrm_st= 0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xde, 7, 0, X86_DECODE_CMD_FDIV, 2, true, false, decode_x87_modrm_st0= , decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, - =20 - {0xdf, 0, 0, X86_DECODE_CMD_FLD, 2, false, false, decode_x87_modrm_int= p, NULL, NULL, RFLAGS_MASK_NONE}, - {0xdf, 1, 3, X86_DECODE_CMD_FXCH, 10, false, false, decode_x87_modrm_s= t0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdf, 2, 3, X86_DECODE_CMD_FST, 10, false, true, decode_x87_modrm_st0= , decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdf, 2, 0, X86_DECODE_CMD_FST, 2, false, false, decode_x87_modrm_int= p, NULL, NULL, RFLAGS_MASK_NONE}, - {0xdf, 3, 3, X86_DECODE_CMD_FST, 10, false, true, decode_x87_modrm_st0= , decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdf, 3, 0, X86_DECODE_CMD_FST, 2, false, true, decode_x87_modrm_intp= , NULL, NULL, RFLAGS_MASK_NONE}, - {0xdf, 4, 3, X86_DECODE_CMD_FNSTSW, 2, false, true, decode_x87_modrm_b= ytep, NULL, NULL, RFLAGS_MASK_NONE}, - {0xdf, 5, 3, X86_DECODE_CMD_FUCOMI, 10, false, true, decode_x87_modrm_= st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, - {0xdf, 5, 0, X86_DECODE_CMD_FLD, 8, false, false, decode_x87_modrm_int= p, NULL, NULL, RFLAGS_MASK_NONE}, - {0xdf, 7, 0, X86_DECODE_CMD_FST, 8, false, true, decode_x87_modrm_intp= , NULL, NULL, RFLAGS_MASK_NONE}, +struct decode_x87_tbl invl_inst_x87 =3D {0x0, 0, 0, 0, 0, false, false, NU= LL, + NULL, decode_invalid, 0}; + +struct decode_x87_tbl _x87_inst[] =3D { + {0xd8, 0, 3, X86_DECODE_CMD_FADD, 10, false, false, + decode_x87_modrm_st0, decode_decode_x87_modrm_st0, NULL, RFLAGS_MASK_= NONE}, + {0xd8, 0, 0, X86_DECODE_CMD_FADD, 4, false, false, decode_x87_modrm_st= 0, + decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, + {0xd8, 1, 3, X86_DECODE_CMD_FMUL, 10, false, false, decode_x87_modrm_s= t0, + decode_decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xd8, 1, 0, X86_DECODE_CMD_FMUL, 4, false, false, decode_x87_modrm_st= 0, + decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, + {0xd8, 4, 3, X86_DECODE_CMD_FSUB, 10, false, false, decode_x87_modrm_s= t0, + decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xd8, 4, 0, X86_DECODE_CMD_FSUB, 4, false, false, decode_x87_modrm_st= 0, + decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, + {0xd8, 5, 3, X86_DECODE_CMD_FSUB, 10, true, false, decode_x87_modrm_st= 0, + decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xd8, 5, 0, X86_DECODE_CMD_FSUB, 4, true, false, decode_x87_modrm_st0, + decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, + {0xd8, 6, 3, X86_DECODE_CMD_FDIV, 10, false, false, decode_x87_modrm_s= t0, + decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xd8, 6, 0, X86_DECODE_CMD_FDIV, 4, false, false, decode_x87_modrm_st= 0, + decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, + {0xd8, 7, 3, X86_DECODE_CMD_FDIV, 10, true, false, decode_x87_modrm_st= 0, + decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xd8, 7, 0, X86_DECODE_CMD_FDIV, 4, true, false, decode_x87_modrm_st0, + decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE}, + + {0xd9, 0, 3, X86_DECODE_CMD_FLD, 10, false, false, + decode_x87_modrm_st0, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 0, 0, X86_DECODE_CMD_FLD, 4, false, false, + decode_x87_modrm_floatp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 1, 3, X86_DECODE_CMD_FXCH, 10, false, false, decode_x87_modrm_s= t0, + decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xd9, 1, 0, X86_DECODE_CMD_INVL, 10, false, false, + decode_x87_modrm_st0, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 2, 3, X86_DECODE_CMD_INVL, 10, false, false, + 
decode_x87_modrm_st0, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 2, 0, X86_DECODE_CMD_FST, 4, false, false, + decode_x87_modrm_floatp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 3, 3, X86_DECODE_CMD_INVL, 10, false, false, + decode_x87_modrm_st0, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 3, 0, X86_DECODE_CMD_FST, 4, false, true, + decode_x87_modrm_floatp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 4, 3, X86_DECODE_CMD_INVL, 10, false, false, + decode_x87_modrm_st0, NULL, decode_d9_4, RFLAGS_MASK_NONE}, + {0xd9, 4, 0, X86_DECODE_CMD_INVL, 4, false, false, + decode_x87_modrm_bytep, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 5, 3, X86_DECODE_CMD_FLDxx, 10, false, false, NULL, NULL, NULL, + RFLAGS_MASK_NONE}, + {0xd9, 5, 0, X86_DECODE_CMD_FLDCW, 2, false, false, + decode_x87_modrm_bytep, NULL, NULL, RFLAGS_MASK_NONE}, + + {0xd9, 7, 3, X86_DECODE_CMD_FNSTCW, 2, false, false, + decode_x87_modrm_bytep, NULL, NULL, RFLAGS_MASK_NONE}, + {0xd9, 7, 0, X86_DECODE_CMD_FNSTCW, 2, false, false, + decode_x87_modrm_bytep, NULL, NULL, RFLAGS_MASK_NONE}, + + {0xda, 0, 3, X86_DECODE_CMD_FCMOV, 10, false, false, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xda, 0, 0, X86_DECODE_CMD_FADD, 4, false, false, decode_x87_modrm_st= 0, + decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xda, 1, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, + decode_decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xda, 1, 0, X86_DECODE_CMD_FMUL, 4, false, false, decode_x87_modrm_st= 0, + decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xda, 2, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, + decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xda, 3, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, + decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xda, 4, 3, X86_DECODE_CMD_INVL, 10, false, false, NULL, NULL, NULL, + RFLAGS_MASK_NONE}, + {0xda, 4, 0, X86_DECODE_CMD_FSUB, 4, false, false, decode_x87_modrm_st= 0, + decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xda, 5, 3, X86_DECODE_CMD_FUCOM, 10, false, true, decode_x87_modrm_s= t0, + decode_decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xda, 5, 0, X86_DECODE_CMD_FSUB, 4, true, false, decode_x87_modrm_st0, + decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xda, 6, 3, X86_DECODE_CMD_INVL, 10, false, false, NULL, NULL, NULL, + RFLAGS_MASK_NONE}, + {0xda, 6, 0, X86_DECODE_CMD_FDIV, 4, false, false, decode_x87_modrm_st= 0, + decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xda, 7, 3, X86_DECODE_CMD_INVL, 10, false, false, NULL, NULL, NULL, + RFLAGS_MASK_NONE}, + {0xda, 7, 0, X86_DECODE_CMD_FDIV, 4, true, false, decode_x87_modrm_st0, + decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + + {0xdb, 0, 3, X86_DECODE_CMD_FCMOV, 10, false, false, decode_x87_modrm_= st0, + decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdb, 0, 0, X86_DECODE_CMD_FLD, 4, false, false, + decode_x87_modrm_intp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdb, 1, 3, X86_DECODE_CMD_FCMOV, 10, false, false, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdb, 2, 3, X86_DECODE_CMD_FCMOV, 10, false, false, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdb, 2, 0, X86_DECODE_CMD_FST, 4, false, false, + decode_x87_modrm_intp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdb, 3, 3, X86_DECODE_CMD_FCMOV, 10, false, false, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdb, 3, 0, X86_DECODE_CMD_FST, 4, false, true, + decode_x87_modrm_intp, NULL, NULL, 
RFLAGS_MASK_NONE}, + {0xdb, 4, 3, X86_DECODE_CMD_INVL, 10, false, false, NULL, NULL, + decode_db_4, RFLAGS_MASK_NONE}, + {0xdb, 4, 0, X86_DECODE_CMD_INVL, 10, false, false, NULL, NULL, NULL, + RFLAGS_MASK_NONE}, + {0xdb, 5, 3, X86_DECODE_CMD_FUCOMI, 10, false, false, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdb, 5, 0, X86_DECODE_CMD_FLD, 10, false, false, + decode_x87_modrm_floatp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdb, 7, 0, X86_DECODE_CMD_FST, 10, false, true, + decode_x87_modrm_floatp, NULL, NULL, RFLAGS_MASK_NONE}, + + {0xdc, 0, 3, X86_DECODE_CMD_FADD, 10, false, false, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdc, 0, 0, X86_DECODE_CMD_FADD, 8, false, false, + decode_x87_modrm_st0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE= }, + {0xdc, 1, 3, X86_DECODE_CMD_FMUL, 10, false, false, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdc, 1, 0, X86_DECODE_CMD_FMUL, 8, false, false, + decode_x87_modrm_st0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE= }, + {0xdc, 4, 3, X86_DECODE_CMD_FSUB, 10, true, false, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdc, 4, 0, X86_DECODE_CMD_FSUB, 8, false, false, + decode_x87_modrm_st0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE= }, + {0xdc, 5, 3, X86_DECODE_CMD_FSUB, 10, false, false, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdc, 5, 0, X86_DECODE_CMD_FSUB, 8, true, false, + decode_x87_modrm_st0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE= }, + {0xdc, 6, 3, X86_DECODE_CMD_FDIV, 10, true, false, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdc, 6, 0, X86_DECODE_CMD_FDIV, 8, false, false, + decode_x87_modrm_st0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE= }, + {0xdc, 7, 3, X86_DECODE_CMD_FDIV, 10, false, false, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdc, 7, 0, X86_DECODE_CMD_FDIV, 8, true, false, + decode_x87_modrm_st0, decode_x87_modrm_floatp, NULL, RFLAGS_MASK_NONE= }, + + {0xdd, 0, 0, X86_DECODE_CMD_FLD, 8, false, false, + decode_x87_modrm_floatp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdd, 1, 3, X86_DECODE_CMD_FXCH, 10, false, false, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdd, 2, 3, X86_DECODE_CMD_FST, 10, false, false, + decode_x87_modrm_st0, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdd, 2, 0, X86_DECODE_CMD_FST, 8, false, false, + decode_x87_modrm_floatp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdd, 3, 3, X86_DECODE_CMD_FST, 10, false, true, + decode_x87_modrm_st0, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdd, 3, 0, X86_DECODE_CMD_FST, 8, false, true, + decode_x87_modrm_floatp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdd, 4, 3, X86_DECODE_CMD_FUCOM, 10, false, false, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdd, 4, 0, X86_DECODE_CMD_FRSTOR, 8, false, false, + decode_x87_modrm_bytep, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdd, 5, 3, X86_DECODE_CMD_FUCOM, 10, false, true, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdd, 7, 0, X86_DECODE_CMD_FNSTSW, 0, false, false, + decode_x87_modrm_bytep, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdd, 7, 3, X86_DECODE_CMD_FNSTSW, 0, false, false, + decode_x87_modrm_bytep, NULL, NULL, RFLAGS_MASK_NONE}, + + {0xde, 0, 3, X86_DECODE_CMD_FADD, 10, false, true, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xde, 0, 0, X86_DECODE_CMD_FADD, 2, false, false, + decode_x87_modrm_st0, 
decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xde, 1, 3, X86_DECODE_CMD_FMUL, 10, false, true, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xde, 1, 0, X86_DECODE_CMD_FMUL, 2, false, false, + decode_x87_modrm_st0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xde, 4, 3, X86_DECODE_CMD_FSUB, 10, true, true, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xde, 4, 0, X86_DECODE_CMD_FSUB, 2, false, false, + decode_x87_modrm_st0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xde, 5, 3, X86_DECODE_CMD_FSUB, 10, false, true, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xde, 5, 0, X86_DECODE_CMD_FSUB, 2, true, false, + decode_x87_modrm_st0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xde, 6, 3, X86_DECODE_CMD_FDIV, 10, true, true, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xde, 6, 0, X86_DECODE_CMD_FDIV, 2, false, false, + decode_x87_modrm_st0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + {0xde, 7, 3, X86_DECODE_CMD_FDIV, 10, false, true, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xde, 7, 0, X86_DECODE_CMD_FDIV, 2, true, false, + decode_x87_modrm_st0, decode_x87_modrm_intp, NULL, RFLAGS_MASK_NONE}, + + {0xdf, 0, 0, X86_DECODE_CMD_FLD, 2, false, false, + decode_x87_modrm_intp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdf, 1, 3, X86_DECODE_CMD_FXCH, 10, false, false, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdf, 2, 3, X86_DECODE_CMD_FST, 10, false, true, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdf, 2, 0, X86_DECODE_CMD_FST, 2, false, false, + decode_x87_modrm_intp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdf, 3, 3, X86_DECODE_CMD_FST, 10, false, true, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdf, 3, 0, X86_DECODE_CMD_FST, 2, false, true, + decode_x87_modrm_intp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdf, 4, 3, X86_DECODE_CMD_FNSTSW, 2, false, true, + decode_x87_modrm_bytep, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdf, 5, 3, X86_DECODE_CMD_FUCOMI, 10, false, true, + decode_x87_modrm_st0, decode_x87_modrm_st0, NULL, RFLAGS_MASK_NONE}, + {0xdf, 5, 0, X86_DECODE_CMD_FLD, 8, false, false, + decode_x87_modrm_intp, NULL, NULL, RFLAGS_MASK_NONE}, + {0xdf, 7, 0, X86_DECODE_CMD_FST, 8, false, true, + decode_x87_modrm_intp, NULL, NULL, RFLAGS_MASK_NONE}, }; =20 -void calc_modrm_operand16(CPUState *cpu, struct x86_decode *decode, struct= x86_decode_op *op) +void calc_modrm_operand16(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { addr_t ptr =3D 0; x86_reg_segment seg =3D REG_SEG_DS; @@ -1163,43 +1644,45 @@ void calc_modrm_operand16(CPUState *cpu, struct x86= _decode *decode, struct x86_d goto calc_addr; } =20 - if (decode->displacement_size) + if (decode->displacement_size) { ptr =3D sign(decode->displacement, decode->displacement_size); + } =20 switch (decode->modrm.rm) { - case 0: - ptr +=3D BX(cpu) + SI(cpu); - break; - case 1: - ptr +=3D BX(cpu) + DI(cpu); - break; - case 2: - ptr +=3D BP(cpu) + SI(cpu); - seg =3D REG_SEG_SS; - break; - case 3: - ptr +=3D BP(cpu) + DI(cpu); - seg =3D REG_SEG_SS; - break; - case 4: - ptr +=3D SI(cpu); - break; - case 5: - ptr +=3D DI(cpu); - break; - case 6: - ptr +=3D BP(cpu); - seg =3D REG_SEG_SS; - break; - case 7: - ptr +=3D BX(cpu); - break; + case 0: + ptr +=3D BX(cpu) + SI(cpu); + break; + case 1: + ptr +=3D BX(cpu) + DI(cpu); + break; + case 2: + ptr +=3D BP(cpu) + SI(cpu); + seg =3D 
REG_SEG_SS; + break; + case 3: + ptr +=3D BP(cpu) + DI(cpu); + seg =3D REG_SEG_SS; + break; + case 4: + ptr +=3D SI(cpu); + break; + case 5: + ptr +=3D DI(cpu); + break; + case 6: + ptr +=3D BP(cpu); + seg =3D REG_SEG_SS; + break; + case 7: + ptr +=3D BX(cpu); + break; } calc_addr: - if (X86_DECODE_CMD_LEA =3D=3D decode->cmd) + if (X86_DECODE_CMD_LEA =3D=3D decode->cmd) { op->ptr =3D (uint16_t)ptr; - else + } else { op->ptr =3D decode_linear_addr(cpu, decode, (uint16_t)ptr, seg); + } } =20 addr_t get_reg_ref(CPUState *cpu, int reg, int is_extended, int size) @@ -1207,24 +1690,25 @@ addr_t get_reg_ref(CPUState *cpu, int reg, int is_e= xtended, int size) addr_t ptr =3D 0; int which =3D 0; =20 - if (is_extended) + if (is_extended) { reg |=3D REG_R8; + } =20 =20 switch (size) { - case 1: - if (is_extended || reg < 4) { - which =3D 1; - ptr =3D (addr_t)&RL(cpu, reg); - } else { - which =3D 2; - ptr =3D (addr_t)&RH(cpu, reg - 4); - } - break; - default: - which =3D 3; - ptr =3D (addr_t)&RRX(cpu, reg); - break; + case 1: + if (is_extended || reg < 4) { + which =3D 1; + ptr =3D (addr_t)&RL(cpu, reg); + } else { + which =3D 2; + ptr =3D (addr_t)&RH(cpu, reg - 4); + } + break; + default: + which =3D 3; + ptr =3D (addr_t)&RRX(cpu, reg); + break; } return ptr; } @@ -1232,11 +1716,12 @@ addr_t get_reg_ref(CPUState *cpu, int reg, int is_e= xtended, int size) addr_t get_reg_val(CPUState *cpu, int reg, int is_extended, int size) { addr_t val =3D 0; - memcpy(&val, (void*)get_reg_ref(cpu, reg, is_extended, size), size); + memcpy(&val, (void *)get_reg_ref(cpu, reg, is_extended, size), size); return val; } =20 -static addr_t get_sib_val(CPUState *cpu, struct x86_decode *decode, x86_re= g_segment *sel) +static addr_t get_sib_val(CPUState *cpu, struct x86_decode *decode, + x86_reg_segment *sel) { addr_t base =3D 0; addr_t scaled_index =3D 0; @@ -1247,52 +1732,61 @@ static addr_t get_sib_val(CPUState *cpu, struct x86= _decode *decode, x86_reg_segm *sel =3D REG_SEG_DS; =20 if (decode->modrm.mod || base_reg !=3D REG_RBP) { - if (decode->rex.b) + if (decode->rex.b) { base_reg |=3D REG_R8; - if (REG_RSP =3D=3D base_reg || REG_RBP =3D=3D base_reg) + } + if (REG_RSP =3D=3D base_reg || REG_RBP =3D=3D base_reg) { *sel =3D REG_SEG_SS; + } base =3D get_reg_val(cpu, decode->sib.base, decode->rex.b, addr_si= ze); } =20 - if (decode->rex.x) + if (decode->rex.x) { index_reg |=3D REG_R8; + } =20 - if (index_reg !=3D REG_RSP) - scaled_index =3D get_reg_val(cpu, index_reg, decode->rex.x, addr_s= ize) << decode->sib.scale; + if (index_reg !=3D REG_RSP) { + scaled_index =3D get_reg_val(cpu, index_reg, decode->rex.x, addr_s= ize) << + decode->sib.scale; + } return base + scaled_index; } =20 -void calc_modrm_operand32(CPUState *cpu, struct x86_decode *decode, struct= x86_decode_op *op) +void calc_modrm_operand32(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { x86_reg_segment seg =3D REG_SEG_DS; addr_t ptr =3D 0; int addr_size =3D decode->addressing_size; =20 - if (decode->displacement_size) + if (decode->displacement_size) { ptr =3D sign(decode->displacement, decode->displacement_size); + } =20 if (4 =3D=3D decode->modrm.rm) { ptr +=3D get_sib_val(cpu, decode, &seg); - } - else if (!decode->modrm.mod && 5 =3D=3D decode->modrm.rm) { - if (x86_is_long_mode(cpu)) + } else if (!decode->modrm.mod && 5 =3D=3D decode->modrm.rm) { + if (x86_is_long_mode(cpu)) { ptr +=3D RIP(cpu) + decode->len; - else + } else { ptr =3D decode->displacement; - } - else { - if (REG_RBP =3D=3D decode->modrm.rm || REG_RSP =3D=3D 
decode->modr= m.rm) + } + } else { + if (REG_RBP =3D=3D decode->modrm.rm || REG_RSP =3D=3D decode->modr= m.rm) { seg =3D REG_SEG_SS; + } ptr +=3D get_reg_val(cpu, decode->modrm.rm, decode->rex.b, addr_si= ze); } =20 - if (X86_DECODE_CMD_LEA =3D=3D decode->cmd) + if (X86_DECODE_CMD_LEA =3D=3D decode->cmd) { op->ptr =3D (uint32_t)ptr; - else + } else { op->ptr =3D decode_linear_addr(cpu, decode, (uint32_t)ptr, seg); + } } =20 -void calc_modrm_operand64(CPUState *cpu, struct x86_decode *decode, struct= x86_decode_op *op) +void calc_modrm_operand64(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { x86_reg_segment seg =3D REG_SEG_DS; int32_t offset =3D 0; @@ -1300,46 +1794,51 @@ void calc_modrm_operand64(CPUState *cpu, struct x86= _decode *decode, struct x86_d int rm =3D decode->modrm.rm; addr_t ptr; int src =3D decode->modrm.rm; - =20 - if (decode->displacement_size) + + if (decode->displacement_size) { offset =3D sign(decode->displacement, decode->displacement_size); + } =20 - if (4 =3D=3D rm) + if (4 =3D=3D rm) { ptr =3D get_sib_val(cpu, decode, &seg) + offset; - else if (0 =3D=3D mod && 5 =3D=3D rm) + } else if (0 =3D=3D mod && 5 =3D=3D rm) { ptr =3D RIP(cpu) + decode->len + (int32_t) offset; - else + } else { ptr =3D get_reg_val(cpu, src, decode->rex.b, 8) + (int64_t) offset; - =20 - if (X86_DECODE_CMD_LEA =3D=3D decode->cmd) + } + + if (X86_DECODE_CMD_LEA =3D=3D decode->cmd) { op->ptr =3D ptr; - else + } else { op->ptr =3D decode_linear_addr(cpu, decode, ptr, seg); + } } =20 =20 -void calc_modrm_operand(CPUState *cpu, struct x86_decode *decode, struct x= 86_decode_op *op) +void calc_modrm_operand(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op) { if (3 =3D=3D decode->modrm.mod) { op->reg =3D decode->modrm.reg; op->type =3D X86_VAR_REG; - op->ptr =3D get_reg_ref(cpu, decode->modrm.rm, decode->rex.b, deco= de->operand_size); + op->ptr =3D get_reg_ref(cpu, decode->modrm.rm, decode->rex.b, + decode->operand_size); return; } =20 switch (decode->addressing_size) { - case 2: - calc_modrm_operand16(cpu, decode, op); - break; - case 4: - calc_modrm_operand32(cpu, decode, op); - break; - case 8: - calc_modrm_operand64(cpu, decode, op); - break; - default: - VM_PANIC_EX("unsupported address size %d\n", decode->addressin= g_size); - break; + case 2: + calc_modrm_operand16(cpu, decode, op); + break; + case 4: + calc_modrm_operand32(cpu, decode, op); + break; + case 8: + calc_modrm_operand64(cpu, decode, op); + break; + default: + VM_PANIC_EX("unsupported address size %d\n", decode->addressing_si= ze); + break; } } =20 @@ -1348,36 +1847,36 @@ static void decode_prefix(CPUState *cpu, struct x86= _decode *decode) while (1) { uint8_t byte =3D decode_byte(cpu, decode); switch (byte) { - case PREFIX_LOCK: - decode->lock =3D byte; - break; - case PREFIX_REPN: - case PREFIX_REP: - decode->rep =3D byte; - break; - case PREFIX_CS_SEG_OVEERIDE: - case PREFIX_SS_SEG_OVEERIDE: - case PREFIX_DS_SEG_OVEERIDE: - case PREFIX_ES_SEG_OVEERIDE: - case PREFIX_FS_SEG_OVEERIDE: - case PREFIX_GS_SEG_OVEERIDE: - decode->segment_override =3D byte; - break; - case PREFIX_OP_SIZE_OVERRIDE: - decode->op_size_override =3D byte; - break; - case PREFIX_ADDR_SIZE_OVERRIDE: - decode->addr_size_override =3D byte; + case PREFIX_LOCK: + decode->lock =3D byte; + break; + case PREFIX_REPN: + case PREFIX_REP: + decode->rep =3D byte; + break; + case PREFIX_CS_SEG_OVEERIDE: + case PREFIX_SS_SEG_OVEERIDE: + case PREFIX_DS_SEG_OVEERIDE: + case PREFIX_ES_SEG_OVEERIDE: + case PREFIX_FS_SEG_OVEERIDE: + 
case PREFIX_GS_SEG_OVEERIDE: + decode->segment_override =3D byte; + break; + case PREFIX_OP_SIZE_OVERRIDE: + decode->op_size_override =3D byte; + break; + case PREFIX_ADDR_SIZE_OVERRIDE: + decode->addr_size_override =3D byte; + break; + case PREFIX_REX ... (PREFIX_REX + 0xf): + if (x86_is_long_mode(cpu)) { + decode->rex.rex =3D byte; break; - case PREFIX_REX ... (PREFIX_REX + 0xf): - if (x86_is_long_mode(cpu)) { - decode->rex.rex =3D byte; - break; - } - // fall through when not in long mode - default: - decode->len--; - return; + } + /* fall through when not in long mode */ + default: + decode->len--; + return; } } } @@ -1386,33 +1885,36 @@ void set_addressing_size(CPUState *cpu, struct x86_= decode *decode) { decode->addressing_size =3D -1; if (x86_is_real(cpu) || x86_is_v8086(cpu)) { - if (decode->addr_size_override) + if (decode->addr_size_override) { decode->addressing_size =3D 4; - else + } else { decode->addressing_size =3D 2; - } - else if (!x86_is_long_mode(cpu)) { - // protected + } + } else if (!x86_is_long_mode(cpu)) { + /* protected */ struct vmx_segment cs; vmx_read_segment_descriptor(cpu, &cs, REG_SEG_CS); - // check db + /* check db */ if ((cs.ar >> 14) & 1) { - if (decode->addr_size_override) + if (decode->addr_size_override) { decode->addressing_size =3D 2; - else + } else { decode->addressing_size =3D 4; + } } else { - if (decode->addr_size_override) + if (decode->addr_size_override) { decode->addressing_size =3D 4; - else + } else { decode->addressing_size =3D 2; + } } } else { - // long - if (decode->addr_size_override) + /* long */ + if (decode->addr_size_override) { decode->addressing_size =3D 4; - else + } else { decode->addressing_size =3D 8; + } } } =20 @@ -1420,99 +1922,98 @@ void set_operand_size(CPUState *cpu, struct x86_dec= ode *decode) { decode->operand_size =3D -1; if (x86_is_real(cpu) || x86_is_v8086(cpu)) { - if (decode->op_size_override) + if (decode->op_size_override) { decode->operand_size =3D 4; - else + } else { decode->operand_size =3D 2; - } - else if (!x86_is_long_mode(cpu)) { - // protected + } + } else if (!x86_is_long_mode(cpu)) { + /* protected */ struct vmx_segment cs; vmx_read_segment_descriptor(cpu, &cs, REG_SEG_CS); - // check db + /* check db */ if ((cs.ar >> 14) & 1) { - if (decode->op_size_override) + if (decode->op_size_override) { decode->operand_size =3D 2; - else + } else{ decode->operand_size =3D 4; + } } else { - if (decode->op_size_override) + if (decode->op_size_override) { decode->operand_size =3D 4; - else + } else { decode->operand_size =3D 2; + } } } else { - // long - if (decode->op_size_override) + /* long */ + if (decode->op_size_override) { decode->operand_size =3D 2; - else + } else { decode->operand_size =3D 4; + } =20 - if (decode->rex.w) + if (decode->rex.w) { decode->operand_size =3D 8; + } } } =20 static void decode_sib(CPUState *cpu, struct x86_decode *decode) { - if ((decode->modrm.mod !=3D 3) && (4 =3D=3D decode->modrm.rm) && (deco= de->addressing_size !=3D 2)) { + if ((decode->modrm.mod !=3D 3) && (4 =3D=3D decode->modrm.rm) && + (decode->addressing_size !=3D 2)) { decode->sib.sib =3D decode_byte(cpu, decode); decode->sib_present =3D true; } } =20 -/* 16 bit modrm - * mod R/M - * 00 [BX+SI] [BX+DI] [BP+SI] [BP+DI] [SI]= [DI] [disp16] [BX] - * 01 [BX+SI+disp8] [BX+DI+disp8] [BP+SI+disp8] [BP+DI+disp8] [SI+disp8] [= DI+disp8] [BP+disp8] [BX+disp8] - * 10 [BX+SI+disp16] [BX+DI+disp16] [BP+SI+disp16] [BP+DI+disp16] [SI+disp= 16] [DI+disp16] [BP+disp16] [BX+disp16] - * 11 - - - - = - - - - - */ -int disp16_tbl[4][8] 
=3D - {{0, 0, 0, 0, 0, 0, 2, 0}, +/* 16 bit modrm */ +int disp16_tbl[4][8] =3D { + {0, 0, 0, 0, 0, 0, 2, 0}, {1, 1, 1, 1, 1, 1, 1, 1}, {2, 2, 2, 2, 2, 2, 2, 2}, - {0, 0, 0, 0, 0, 0, 0, 0}}; + {0, 0, 0, 0, 0, 0, 0, 0} +}; =20 -/* - 32/64-bit modrm - Mod - 00 [r/m] [r/m] [r/m] [r/m] [SIB] [= RIP/EIP1,2+disp32] [r/m] [r/m] - 01 [r/m+disp8] [r/m+disp8] [r/m+disp8] [r/m+disp8] [SIB+disp8] [= r/m+disp8] [SIB+disp8] [r/m+disp8] - 10 [r/m+disp32] [r/m+disp32] [r/m+disp32] [r/m+disp32] [SIB+disp32] [= r/m+disp32] [SIB+disp32] [r/m+disp32] - 11 - - - - - -= - - - */ -int disp32_tbl[4][8] =3D - {{0, 0, 0, 0, -1, 4, 0, 0}, +/* 32/64-bit modrm */ +int disp32_tbl[4][8] =3D { + {0, 0, 0, 0, -1, 4, 0, 0}, {1, 1, 1, 1, 1, 1, 1, 1}, {4, 4, 4, 4, 4, 4, 4, 4}, - {0, 0, 0, 0, 0, 0, 0, 0}}; + {0, 0, 0, 0, 0, 0, 0, 0} +}; =20 static inline void decode_displacement(CPUState *cpu, struct x86_decode *d= ecode) { int addressing_size =3D decode->addressing_size; int mod =3D decode->modrm.mod; int rm =3D decode->modrm.rm; - =20 + decode->displacement_size =3D 0; switch (addressing_size) { - case 2: - decode->displacement_size =3D disp16_tbl[mod][rm]; - if (decode->displacement_size) - decode->displacement =3D (uint16_t)decode_bytes(cpu, decod= e, decode->displacement_size); - break; - case 4: - case 8: - if (-1 =3D=3D disp32_tbl[mod][rm]) { - if (5 =3D=3D decode->sib.base) - decode->displacement_size =3D 4; + case 2: + decode->displacement_size =3D disp16_tbl[mod][rm]; + if (decode->displacement_size) { + decode->displacement =3D (uint16_t)decode_bytes(cpu, decode, + decode->displacement_size); + } + break; + case 4: + case 8: + if (-1 =3D=3D disp32_tbl[mod][rm]) { + if (5 =3D=3D decode->sib.base) { + decode->displacement_size =3D 4; } - else - decode->displacement_size =3D disp32_tbl[mod][rm]; - =20 - if (decode->displacement_size) - decode->displacement =3D (uint32_t)decode_bytes(cpu, decod= e, decode->displacement_size); - break; + } else { + decode->displacement_size =3D disp32_tbl[mod][rm]; + } + + if (decode->displacement_size) { + decode->displacement =3D (uint32_t)decode_bytes(cpu, decode, + decode->displacement_size); + } + break; } } =20 @@ -1525,35 +2026,47 @@ static inline void decode_modrm(CPUState *cpu, stru= ct x86_decode *decode) decode_displacement(cpu, decode); } =20 -static inline void decode_opcode_general(CPUState *cpu, struct x86_decode = *decode, uint8_t opcode, struct decode_tbl *inst_decoder) +static inline void decode_opcode_general(CPUState *cpu, + struct x86_decode *decode, + uint8_t opcode, + struct decode_tbl *inst_decoder) { decode->cmd =3D inst_decoder->cmd; - if (inst_decoder->operand_size) + if (inst_decoder->operand_size) { decode->operand_size =3D inst_decoder->operand_size; + } decode->flags_mask =3D inst_decoder->flags_mask; - =20 - if (inst_decoder->is_modrm) + + if (inst_decoder->is_modrm) { decode_modrm(cpu, decode); - if (inst_decoder->decode_op1) + } + if (inst_decoder->decode_op1) { inst_decoder->decode_op1(cpu, decode, &decode->op[0]); - if (inst_decoder->decode_op2) + } + if (inst_decoder->decode_op2) { inst_decoder->decode_op2(cpu, decode, &decode->op[1]); - if (inst_decoder->decode_op3) + } + if (inst_decoder->decode_op3) { inst_decoder->decode_op3(cpu, decode, &decode->op[2]); - if (inst_decoder->decode_op4) + } + if (inst_decoder->decode_op4) { inst_decoder->decode_op4(cpu, decode, &decode->op[3]); - if (inst_decoder->decode_postfix) + } + if (inst_decoder->decode_postfix) { inst_decoder->decode_postfix(cpu, decode); + } } =20 -static inline void 
decode_opcode_1(CPUState *cpu, struct x86_decode *decod= e, uint8_t opcode) +static inline void decode_opcode_1(CPUState *cpu, struct x86_decode *decod= e, + uint8_t opcode) { struct decode_tbl *inst_decoder =3D &_decode_tbl1[opcode]; decode_opcode_general(cpu, decode, opcode, inst_decoder); } =20 =20 -static inline void decode_opcode_2(CPUState *cpu, struct x86_decode *decod= e, uint8_t opcode) +static inline void decode_opcode_2(CPUState *cpu, struct x86_decode *decod= e, + uint8_t opcode) { struct decode_tbl *inst_decoder =3D &_decode_tbl2[opcode]; decode_opcode_general(cpu, decode, opcode, inst_decoder); @@ -1591,13 +2104,16 @@ void init_decoder(CPUState *cpu) { int i; =20 - for (i =3D 0; i < ARRAY_SIZE(_decode_tbl2); i++) + for (i =3D 0; i < ARRAY_SIZE(_decode_tbl2); i++) { memcpy(_decode_tbl1, &invl_inst, sizeof(invl_inst)); - for (i =3D 0; i < ARRAY_SIZE(_decode_tbl2); i++) + } + for (i =3D 0; i < ARRAY_SIZE(_decode_tbl2); i++) { memcpy(_decode_tbl2, &invl_inst, sizeof(invl_inst)); - for (i =3D 0; i < ARRAY_SIZE(_decode_tbl3); i++) + } + for (i =3D 0; i < ARRAY_SIZE(_decode_tbl3); i++) { memcpy(_decode_tbl3, &invl_inst, sizeof(invl_inst_x87)); =20 + } for (i =3D 0; i < ARRAY_SIZE(_1op_inst); i++) { _decode_tbl1[_1op_inst[i].opcode] =3D _1op_inst[i]; } @@ -1605,7 +2121,9 @@ void init_decoder(CPUState *cpu) _decode_tbl2[_2op_inst[i].opcode] =3D _2op_inst[i]; } for (i =3D 0; i < ARRAY_SIZE(_x87_inst); i++) { - int index =3D ((_x87_inst[i].opcode & 0xf) << 4) | ((_x87_inst[i].= modrm_mod & 1) << 3) | _x87_inst[i].modrm_reg; + int index =3D ((_x87_inst[i].opcode & 0xf) << 4) | + ((_x87_inst[i].modrm_mod & 1) << 3) | + _x87_inst[i].modrm_reg; _decode_tbl3[index] =3D _x87_inst[i]; } } @@ -1613,47 +2131,55 @@ void init_decoder(CPUState *cpu) =20 const char *decode_cmd_to_string(enum x86_decode_cmd cmd) { - static const char *cmds[] =3D {"INVL", "PUSH", "PUSH_SEG", "POP", "POP= _SEG", "MOV", "MOVSX", "MOVZX", "CALL_NEAR", - "CALL_NEAR_ABS_INDIRECT", "CALL_FAR_ABS_INDIRECT", "CMD_CALL_FAR",= "RET_NEAR", "RET_FAR", "ADD", "OR", - "ADC", "SBB", "AND", "SUB", "XOR", "CMP", "INC", "DEC", "TST", "NO= T", "NEG", "JMP_NEAR", "JMP_NEAR_ABS_INDIRECT", - "JMP_FAR", "JMP_FAR_ABS_INDIRECT", "LEA", "JXX", - "JCXZ", "SETXX", "MOV_TO_SEG", "MOV_FROM_SEG", "CLI", "STI", "CLD"= , "STD", "STC", - "CLC", "OUT", "IN", "INS", "OUTS", "LIDT", "SIDT", "LGDT", "SGDT",= "SMSW", "LMSW", "RDTSCP", "INVLPG", "MOV_TO_CR", - "MOV_FROM_CR", "MOV_TO_DR", "MOV_FROM_DR", "PUSHF", "POPF", "CPUID= ", "ROL", "ROR", "RCL", "RCR", "SHL", "SAL", - "SHR","SHRD", "SHLD", "SAR", "DIV", "IDIV", "MUL", "IMUL_3", "IMUL= _2", "IMUL_1", "MOVS", "CMPS", "SCAS", - "LODS", "STOS", "BSWAP", "XCHG", "RDTSC", "RDMSR", "WRMSR", "ENTER= ", "LEAVE", "BT", "BTS", "BTC", "BTR", "BSF", - "BSR", "IRET", "INT", "POPA", "PUSHA", "CWD", "CBW", "DAS", "AAD",= "AAM", "AAS", "LOOP", "SLDT", "STR", "LLDT", - "LTR", "VERR", "VERW", "SAHF", "LAHF", "WBINVD", "LDS", "LSS", "LE= S", "LGS", "LFS", "CMC", "XLAT", "NOP", "CMOV", - "CLTS", "XADD", "HLT", "CMPXCHG8B", "CMPXCHG", "POPCNT", - "FNINIT", "FLD", "FLDxx", "FNSTCW", "FNSTSW", "FNSETPM", "FSAVE", = "FRSTOR", "FXSAVE", "FXRSTOR", "FDIV", "FMUL", - "FSUB", "FADD", "EMMS", "MFENCE", "SFENCE", "LFENCE", "PREFETCH", = "FST", "FABS", "FUCOM", "FUCOMI", "FLDCW", + static const char *cmds[] =3D {"INVL", "PUSH", "PUSH_SEG", "POP", "POP= _SEG", + "MOV", "MOVSX", "MOVZX", "CALL_NEAR", "CALL_NEAR_ABS_INDIRECT", + "CALL_FAR_ABS_INDIRECT", "CMD_CALL_FAR", "RET_NEAR", "RET_FAR", "A= DD", + "OR", "ADC", "SBB", "AND", "SUB", 
"XOR", "CMP", "INC", "DEC", "TST= ", + "NOT", "NEG", "JMP_NEAR", "JMP_NEAR_ABS_INDIRECT", "JMP_FAR", + "JMP_FAR_ABS_INDIRECT", "LEA", "JXX", "JCXZ", "SETXX", "MOV_TO_SEG= ", + "MOV_FROM_SEG", "CLI", "STI", "CLD", "STD", "STC", "CLC", "OUT", "= IN", + "INS", "OUTS", "LIDT", "SIDT", "LGDT", "SGDT", "SMSW", "LMSW", + "RDTSCP", "INVLPG", "MOV_TO_CR", "MOV_FROM_CR", "MOV_TO_DR", + "MOV_FROM_DR", "PUSHF", "POPF", "CPUID", "ROL", "ROR", "RCL", "RCR= ", + "SHL", "SAL", "SHR", "SHRD", "SHLD", "SAR", "DIV", "IDIV", "MUL", + "IMUL_3", "IMUL_2", "IMUL_1", "MOVS", "CMPS", "SCAS", "LODS", "STO= S", + "BSWAP", "XCHG", "RDTSC", "RDMSR", "WRMSR", "ENTER", "LEAVE", "BT", + "BTS", "BTC", "BTR", "BSF", "BSR", "IRET", "INT", "POPA", "PUSHA", + "CWD", "CBW", "DAS", "AAD", "AAM", "AAS", "LOOP", "SLDT", "STR", "= LLDT", + "LTR", "VERR", "VERW", "SAHF", "LAHF", "WBINVD", "LDS", "LSS", "LE= S", + "LGS", "LFS", "CMC", "XLAT", "NOP", "CMOV", "CLTS", "XADD", "HLT", + "CMPXCHG8B", "CMPXCHG", "POPCNT", "FNINIT", "FLD", "FLDxx", "FNSTC= W", + "FNSTSW", "FNSETPM", "FSAVE", "FRSTOR", "FXSAVE", "FXRSTOR", "FDIV= ", + "FMUL", "FSUB", "FADD", "EMMS", "MFENCE", "SFENCE", "LFENCE", + "PREFETCH", "FST", "FABS", "FUCOM", "FUCOMI", "FLDCW", "FXCH", "FCHS", "FCMOV", "FRNDINT", "FXAM", "LAST"}; return cmds[cmd]; } =20 -addr_t decode_linear_addr(struct CPUState *cpu, struct x86_decode *decode,= addr_t addr, x86_reg_segment seg) +addr_t decode_linear_addr(struct CPUState *cpu, struct x86_decode *decode, + addr_t addr, x86_reg_segment seg) { switch (decode->segment_override) { - case PREFIX_CS_SEG_OVEERIDE: - seg =3D REG_SEG_CS; - break; - case PREFIX_SS_SEG_OVEERIDE: - seg =3D REG_SEG_SS; - break; - case PREFIX_DS_SEG_OVEERIDE: - seg =3D REG_SEG_DS; - break; - case PREFIX_ES_SEG_OVEERIDE: - seg =3D REG_SEG_ES; - break; - case PREFIX_FS_SEG_OVEERIDE: - seg =3D REG_SEG_FS; - break; - case PREFIX_GS_SEG_OVEERIDE: - seg =3D REG_SEG_GS; - break; - default: - break; + case PREFIX_CS_SEG_OVEERIDE: + seg =3D REG_SEG_CS; + break; + case PREFIX_SS_SEG_OVEERIDE: + seg =3D REG_SEG_SS; + break; + case PREFIX_DS_SEG_OVEERIDE: + seg =3D REG_SEG_DS; + break; + case PREFIX_ES_SEG_OVEERIDE: + seg =3D REG_SEG_ES; + break; + case PREFIX_FS_SEG_OVEERIDE: + seg =3D REG_SEG_FS; + break; + case PREFIX_GS_SEG_OVEERIDE: + seg =3D REG_SEG_GS; + break; + default: + break; } return linear_addr_size(cpu, addr, decode->addressing_size, seg); } diff --git a/target/i386/hvf-utils/x86_decode.h b/target/i386/hvf-utils/x86= _decode.h index fde524f819..571931dc73 100644 --- a/target/i386/hvf-utils/x86_decode.h +++ b/target/i386/hvf-utils/x86_decode.h @@ -25,20 +25,20 @@ #include "x86.h" =20 typedef enum x86_prefix { - // group 1 + /* group 1 */ PREFIX_LOCK =3D 0xf0, PREFIX_REPN =3D 0xf2, PREFIX_REP =3D 0xf3, - // group 2 + /* group 2 */ PREFIX_CS_SEG_OVEERIDE =3D 0x2e, PREFIX_SS_SEG_OVEERIDE =3D 0x36, PREFIX_DS_SEG_OVEERIDE =3D 0x3e, PREFIX_ES_SEG_OVEERIDE =3D 0x26, PREFIX_FS_SEG_OVEERIDE =3D 0x64, PREFIX_GS_SEG_OVEERIDE =3D 0x65, - // group 3 + /* group 3 */ PREFIX_OP_SIZE_OVERRIDE =3D 0x66, - // group 4 + /* group 4 */ PREFIX_ADDR_SIZE_OVERRIDE =3D 0x67, =20 PREFIX_REX =3D 0x40, @@ -255,7 +255,7 @@ typedef enum x86_var_type { X86_VAR_REG, X86_VAR_RM, =20 - // for floating point computations + /* for floating point computations */ X87_VAR_REG, X87_VAR_FLOATP, X87_VAR_INTP, @@ -308,7 +308,17 @@ uint32_t decode_instruction(CPUState *cpu, struct x86_= decode *decode); =20 addr_t get_reg_ref(CPUState *cpu, int reg, int is_extended, int size); addr_t get_reg_val(CPUState 
*cpu, int reg, int is_extended, int size); -void calc_modrm_operand(CPUState *cpu, struct x86_decode *decode, struct x= 86_decode_op *op); -addr_t decode_linear_addr(struct CPUState *cpu, struct x86_decode *decode,= addr_t addr, x86_reg_segment seg); +void calc_modrm_operand(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op); +addr_t decode_linear_addr(struct CPUState *cpu, struct x86_decode *decode, + addr_t addr, x86_reg_segment seg); =20 -void init_decoder(CPUState* cpu); +void init_decoder(CPUState *cpu); +void calc_modrm_operand16(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op); +void calc_modrm_operand32(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op); +void calc_modrm_operand64(CPUState *cpu, struct x86_decode *decode, + struct x86_decode_op *op); +void set_addressing_size(CPUState *cpu, struct x86_decode *decode); +void set_operand_size(CPUState *cpu, struct x86_decode *decode); diff --git a/target/i386/hvf-utils/x86_descr.h b/target/i386/hvf-utils/x86_= descr.h index 9917585aeb..e2d3f75054 100644 --- a/target/i386/hvf-utils/x86_descr.h +++ b/target/i386/hvf-utils/x86_descr.h @@ -27,14 +27,23 @@ typedef struct vmx_segment { uint64_t ar; } vmx_segment; =20 -// deal with vmstate descriptors -void vmx_read_segment_descriptor(struct CPUState *cpu, struct vmx_segment = *desc, x86_reg_segment seg); -void vmx_write_segment_descriptor(CPUState *cpu, struct vmx_segment *desc,= x86_reg_segment seg); +/* deal with vmstate descriptors */ +void vmx_read_segment_descriptor(struct CPUState *cpu, + struct vmx_segment *desc, x86_reg_segment= seg); +void vmx_write_segment_descriptor(CPUState *cpu, struct vmx_segment *desc, + x86_reg_segment seg); =20 -x68_segment_selector vmx_read_segment_selector(struct CPUState *cpu, x86_r= eg_segment seg); -void vmx_write_segment_selector(struct CPUState *cpu, x68_segment_selector= selector, x86_reg_segment seg); +x68_segment_selector vmx_read_segment_selector(struct CPUState *cpu, + x86_reg_segment seg); +void vmx_write_segment_selector(struct CPUState *cpu, + x68_segment_selector selector, + x86_reg_segment seg); =20 uint64_t vmx_read_segment_base(struct CPUState *cpu, x86_reg_segment seg); -void vmx_write_segment_base(struct CPUState *cpu, x86_reg_segment seg, uin= t64_t base); +void vmx_write_segment_base(struct CPUState *cpu, x86_reg_segment seg, + uint64_t base); =20 -void x86_segment_descriptor_to_vmx(struct CPUState *cpu, x68_segment_selec= tor selector, struct x86_segment_descriptor *desc, struct vmx_segment *vmx_= desc); +void x86_segment_descriptor_to_vmx(struct CPUState *cpu, + x68_segment_selector selector, + struct x86_segment_descriptor *desc, + struct vmx_segment *vmx_desc); diff --git a/target/i386/hvf-utils/x86_emu.c b/target/i386/hvf-utils/x86_em= u.c index 76680921d1..f53d1b1995 100644 --- a/target/i386/hvf-utils/x86_emu.c +++ b/target/i386/hvf-utils/x86_emu.c @@ -46,7 +46,8 @@ #include "vmx.h" =20 static void print_debug(struct CPUState *cpu); -void hvf_handle_io(struct CPUState *cpu, uint16_t port, void *data, int di= rection, int size, uint32_t count); +void hvf_handle_io(struct CPUState *cpu, uint16_t port, void *data, + int direction, int size, uint32_t count); =20 #define EXEC_2OP_LOGIC_CMD(cpu, decode, cmd, FLAGS_FUNC, save_res) \ { \ @@ -57,8 +58,9 @@ void hvf_handle_io(struct CPUState *cpu, uint16_t port, v= oid *data, int directio uint8_t v1 =3D (uint8_t)decode->op[0].val; \ uint8_t v2 =3D (uint8_t)decode->op[1].val; \ uint8_t diff =3D v1 cmd v2; \ - if (save_res) \ + if 
(save_res) { \ write_val_ext(cpu, decode->op[0].ptr, diff, 1); \ + } \ FLAGS_FUNC##_8(diff); \ break; \ } \ @@ -67,8 +69,9 @@ void hvf_handle_io(struct CPUState *cpu, uint16_t port, v= oid *data, int directio uint16_t v1 =3D (uint16_t)decode->op[0].val; \ uint16_t v2 =3D (uint16_t)decode->op[1].val; \ uint16_t diff =3D v1 cmd v2; \ - if (save_res) \ + if (save_res) { \ write_val_ext(cpu, decode->op[0].ptr, diff, 2); \ + } \ FLAGS_FUNC##_16(diff); \ break; \ } \ @@ -77,8 +80,9 @@ void hvf_handle_io(struct CPUState *cpu, uint16_t port, v= oid *data, int directio uint32_t v1 =3D (uint32_t)decode->op[0].val; \ uint32_t v2 =3D (uint32_t)decode->op[1].val; \ uint32_t diff =3D v1 cmd v2; \ - if (save_res) \ + if (save_res) { \ write_val_ext(cpu, decode->op[0].ptr, diff, 4); \ + } \ FLAGS_FUNC##_32(diff); \ break; \ } \ @@ -97,8 +101,9 @@ void hvf_handle_io(struct CPUState *cpu, uint16_t port, = void *data, int directio uint8_t v1 =3D (uint8_t)decode->op[0].val; \ uint8_t v2 =3D (uint8_t)decode->op[1].val; \ uint8_t diff =3D v1 cmd v2; \ - if (save_res) \ + if (save_res) { \ write_val_ext(cpu, decode->op[0].ptr, diff, 1); \ + } \ FLAGS_FUNC##_8(v1, v2, diff); \ break; \ } \ @@ -107,8 +112,9 @@ void hvf_handle_io(struct CPUState *cpu, uint16_t port,= void *data, int directio uint16_t v1 =3D (uint16_t)decode->op[0].val; \ uint16_t v2 =3D (uint16_t)decode->op[1].val; \ uint16_t diff =3D v1 cmd v2; \ - if (save_res) \ + if (save_res) { \ write_val_ext(cpu, decode->op[0].ptr, diff, 2); \ + } \ FLAGS_FUNC##_16(v1, v2, diff); \ break; \ } \ @@ -117,8 +123,9 @@ void hvf_handle_io(struct CPUState *cpu, uint16_t port,= void *data, int directio uint32_t v1 =3D (uint32_t)decode->op[0].val; \ uint32_t v2 =3D (uint32_t)decode->op[1].val; \ uint32_t diff =3D v1 cmd v2; \ - if (save_res) \ + if (save_res) { \ write_val_ext(cpu, decode->op[0].ptr, diff, 4); \ + } \ FLAGS_FUNC##_32(v1, v2, diff); \ break; \ } \ @@ -127,63 +134,63 @@ void hvf_handle_io(struct CPUState *cpu, uint16_t por= t, void *data, int directio } \ } =20 -addr_t read_reg(struct CPUState* cpu, int reg, int size) +addr_t read_reg(struct CPUState *cpu, int reg, int size) { switch (size) { - case 1: - return cpu->hvf_x86->regs[reg].lx; - case 2: - return cpu->hvf_x86->regs[reg].rx; - case 4: - return cpu->hvf_x86->regs[reg].erx; - case 8: - return cpu->hvf_x86->regs[reg].rrx; - default: - VM_PANIC_ON("read_reg size"); + case 1: + return cpu->hvf_x86->regs[reg].lx; + case 2: + return cpu->hvf_x86->regs[reg].rx; + case 4: + return cpu->hvf_x86->regs[reg].erx; + case 8: + return cpu->hvf_x86->regs[reg].rrx; + default: + VM_PANIC_ON("read_reg size"); } return 0; } =20 -void write_reg(struct CPUState* cpu, int reg, addr_t val, int size) +void write_reg(struct CPUState *cpu, int reg, addr_t val, int size) { switch (size) { - case 1: - cpu->hvf_x86->regs[reg].lx =3D val; - break; - case 2: - cpu->hvf_x86->regs[reg].rx =3D val; - break; - case 4: - cpu->hvf_x86->regs[reg].rrx =3D (uint32_t)val; - break; - case 8: - cpu->hvf_x86->regs[reg].rrx =3D val; - break; - default: - VM_PANIC_ON("write_reg size"); + case 1: + cpu->hvf_x86->regs[reg].lx =3D val; + break; + case 2: + cpu->hvf_x86->regs[reg].rx =3D val; + break; + case 4: + cpu->hvf_x86->regs[reg].rrx =3D (uint32_t)val; + break; + case 8: + cpu->hvf_x86->regs[reg].rrx =3D val; + break; + default: + VM_PANIC_ON("write_reg size"); } } =20 addr_t read_val_from_reg(addr_t reg_ptr, int size) { addr_t val; - =20 + switch (size) { - case 1: - val =3D *(uint8_t*)reg_ptr; - break; - case 2: - val =3D 
*(uint16_t*)reg_ptr; - break; - case 4: - val =3D *(uint32_t*)reg_ptr; - break; - case 8: - val =3D *(uint64_t*)reg_ptr; - break; - default: - VM_PANIC_ON_EX(1, "read_val: Unknown size %d\n", size); - break; + case 1: + val =3D *(uint8_t *)reg_ptr; + break; + case 2: + val =3D *(uint16_t *)reg_ptr; + break; + case 4: + val =3D *(uint32_t *)reg_ptr; + break; + case 8: + val =3D *(uint64_t *)reg_ptr; + break; + default: + VM_PANIC_ON_EX(1, "read_val: Unknown size %d\n", size); + break; } return val; } @@ -191,30 +198,32 @@ addr_t read_val_from_reg(addr_t reg_ptr, int size) void write_val_to_reg(addr_t reg_ptr, addr_t val, int size) { switch (size) { - case 1: - *(uint8_t*)reg_ptr =3D val; - break; - case 2: - *(uint16_t*)reg_ptr =3D val; - break; - case 4: - *(uint64_t*)reg_ptr =3D (uint32_t)val; - break; - case 8: - *(uint64_t*)reg_ptr =3D val; - break; - default: - VM_PANIC("write_val: Unknown size\n"); - break; + case 1: + *(uint8_t *)reg_ptr =3D val; + break; + case 2: + *(uint16_t *)reg_ptr =3D val; + break; + case 4: + *(uint64_t *)reg_ptr =3D (uint32_t)val; + break; + case 8: + *(uint64_t *)reg_ptr =3D val; + break; + default: + VM_PANIC("write_val: Unknown size\n"); + break; } } =20 -static bool is_host_reg(struct CPUState* cpu, addr_t ptr) { +static bool is_host_reg(struct CPUState *cpu, addr_t ptr) +{ return (ptr > (addr_t)cpu && ptr < (addr_t)cpu + sizeof(struct CPUStat= e)) || - (ptr > (addr_t)cpu->hvf_x86 && ptr < (addr_t)(cpu->hvf_x86 + si= zeof(struct hvf_x86_state))); + (ptr > (addr_t)cpu->hvf_x86 && ptr < + (addr_t)(cpu->hvf_x86 + sizeof(struct hvf_x86_state))); } =20 -void write_val_ext(struct CPUState* cpu, addr_t ptr, addr_t val, int size) +void write_val_ext(struct CPUState *cpu, addr_t ptr, addr_t val, int size) { if (is_host_reg(cpu, ptr)) { write_val_to_reg(ptr, val, size); @@ -223,68 +232,77 @@ void write_val_ext(struct CPUState* cpu, addr_t ptr, = addr_t val, int size) vmx_write_mem(cpu, ptr, &val, size); } =20 -uint8_t *read_mmio(struct CPUState* cpu, addr_t ptr, int bytes) +uint8_t *read_mmio(struct CPUState *cpu, addr_t ptr, int bytes) { vmx_read_mem(cpu, cpu->hvf_x86->mmio_buf, ptr, bytes); return cpu->hvf_x86->mmio_buf; } =20 -addr_t read_val_ext(struct CPUState* cpu, addr_t ptr, int size) +addr_t read_val_ext(struct CPUState *cpu, addr_t ptr, int size) { addr_t val; uint8_t *mmio_ptr; - =20 + if (is_host_reg(cpu, ptr)) { return read_val_from_reg(ptr, size); } - =20 + mmio_ptr =3D read_mmio(cpu, ptr, size); switch (size) { - case 1: - val =3D *(uint8_t*)mmio_ptr; - break; - case 2: - val =3D *(uint16_t*)mmio_ptr; - break; - case 4: - val =3D *(uint32_t*)mmio_ptr; - break; - case 8: - val =3D *(uint64_t*)mmio_ptr; - break; - default: - VM_PANIC("bad size\n"); - break; + case 1: + val =3D *(uint8_t *)mmio_ptr; + break; + case 2: + val =3D *(uint16_t *)mmio_ptr; + break; + case 4: + val =3D *(uint32_t *)mmio_ptr; + break; + case 8: + val =3D *(uint64_t *)mmio_ptr; + break; + default: + VM_PANIC("bad size\n"); + break; } return val; } =20 -static void fetch_operands(struct CPUState *cpu, struct x86_decode *decode= , int n, bool val_op0, bool val_op1, bool val_op2) +static void fetch_operands(struct CPUState *cpu, struct x86_decode *decode, + int n, bool val_op0, bool val_op1, bool val_op2) { int i; bool calc_val[3] =3D {val_op0, val_op1, val_op2}; =20 for (i =3D 0; i < n; i++) { switch (decode->op[i].type) { - case X86_VAR_IMMEDIATE: - break; - case X86_VAR_REG: - VM_PANIC_ON(!decode->op[i].ptr); - if (calc_val[i]) - decode->op[i].val =3D 
read_val_from_reg(decode->op[i].= ptr, decode->operand_size); - break; - case X86_VAR_RM: - calc_modrm_operand(cpu, decode, &decode->op[i]); - if (calc_val[i]) - decode->op[i].val =3D read_val_ext(cpu, decode->op[i].= ptr, decode->operand_size); - break; - case X86_VAR_OFFSET: - decode->op[i].ptr =3D decode_linear_addr(cpu, decode, deco= de->op[i].ptr, REG_SEG_DS); - if (calc_val[i]) - decode->op[i].val =3D read_val_ext(cpu, decode->op[i].= ptr, decode->operand_size); - break; - default: - break; + case X86_VAR_IMMEDIATE: + break; + case X86_VAR_REG: + VM_PANIC_ON(!decode->op[i].ptr); + if (calc_val[i]) { + decode->op[i].val =3D read_val_from_reg(decode->op[i].ptr, + decode->operand_size= ); + } + break; + case X86_VAR_RM: + calc_modrm_operand(cpu, decode, &decode->op[i]); + if (calc_val[i]) { + decode->op[i].val =3D read_val_ext(cpu, decode->op[i].ptr, + decode->operand_size); + } + break; + case X86_VAR_OFFSET: + decode->op[i].ptr =3D decode_linear_addr(cpu, decode, + decode->op[i].ptr, + REG_SEG_DS); + if (calc_val[i]) { + decode->op[i].val =3D read_val_ext(cpu, decode->op[i].ptr, + decode->operand_size); + } + break; + default: + break; } } } @@ -292,7 +310,8 @@ static void fetch_operands(struct CPUState *cpu, struct= x86_decode *decode, int static void exec_mov(struct CPUState *cpu, struct x86_decode *decode) { fetch_operands(cpu, decode, 2, false, true, false); - write_val_ext(cpu, decode->op[0].ptr, decode->op[1].val, decode->opera= nd_size); + write_val_ext(cpu, decode->op[0].ptr, decode->op[1].val, + decode->operand_size); =20 RIP(cpu) +=3D decode->len; } @@ -341,7 +360,7 @@ static void exec_xor(struct CPUState *cpu, struct x86_d= ecode *decode) =20 static void exec_neg(struct CPUState *cpu, struct x86_decode *decode) { - //EXEC_2OP_ARITH_CMD(cpu, decode, -, SET_FLAGS_OSZAPC_SUB, false); + /*EXEC_2OP_ARITH_CMD(cpu, decode, -, SET_FLAGS_OSZAPC_SUB, false);*/ int32_t val; fetch_operands(cpu, decode, 2, true, true, false); =20 @@ -350,17 +369,15 @@ static void exec_neg(struct CPUState *cpu, struct x86= _decode *decode) =20 if (4 =3D=3D decode->operand_size) { SET_FLAGS_OSZAPC_SUB_32(0, 0 - val, val); - } - else if (2 =3D=3D decode->operand_size) { + } else if (2 =3D=3D decode->operand_size) { SET_FLAGS_OSZAPC_SUB_16(0, 0 - val, val); - } - else if (1 =3D=3D decode->operand_size) { + } else if (1 =3D=3D decode->operand_size) { SET_FLAGS_OSZAPC_SUB_8(0, 0 - val, val); } else { VM_PANIC("bad op size\n"); } =20 - //lflags_to_rflags(cpu); + /*lflags_to_rflags(cpu);*/ RIP(cpu) +=3D decode->len; } =20 @@ -399,7 +416,8 @@ static void exec_not(struct CPUState *cpu, struct x86_d= ecode *decode) { fetch_operands(cpu, decode, 1, true, false, false); =20 - write_val_ext(cpu, decode->op[0].ptr, ~decode->op[0].val, decode->oper= and_size); + write_val_ext(cpu, decode->op[0].ptr, ~decode->op[0].val, + decode->operand_size); RIP(cpu) +=3D decode->len; } =20 @@ -410,10 +428,11 @@ void exec_movzx(struct CPUState *cpu, struct x86_deco= de *decode) =20 fetch_operands(cpu, decode, 1, false, false, false); =20 - if (0xb6 =3D=3D decode->opcode[1]) + if (0xb6 =3D=3D decode->opcode[1]) { src_op_size =3D 1; - else + } else { src_op_size =3D 2; + } decode->operand_size =3D src_op_size; calc_modrm_operand(cpu, decode, &decode->op[1]); decode->op[1].val =3D read_val_ext(cpu, decode->op[1].ptr, src_op_size= ); @@ -425,21 +444,22 @@ void exec_movzx(struct CPUState *cpu, struct x86_deco= de *decode) static void exec_out(struct CPUState *cpu, struct x86_decode *decode) { switch (decode->opcode[0]) { - case 0xe6: - 
hvf_handle_io(cpu, decode->op[0].val, &AL(cpu), 1, 1, 1); - break; - case 0xe7: - hvf_handle_io(cpu, decode->op[0].val, &RAX(cpu), 1, decode->op= erand_size, 1); - break; - case 0xee: - hvf_handle_io(cpu, DX(cpu), &AL(cpu), 1, 1, 1); - break; - case 0xef: - hvf_handle_io(cpu, DX(cpu), &RAX(cpu), 1, decode->operand_size= , 1); - break; - default: - VM_PANIC("Bad out opcode\n"); - break; + case 0xe6: + hvf_handle_io(cpu, decode->op[0].val, &AL(cpu), 1, 1, 1); + break; + case 0xe7: + hvf_handle_io(cpu, decode->op[0].val, &RAX(cpu), 1, + decode->operand_size, 1); + break; + case 0xee: + hvf_handle_io(cpu, DX(cpu), &AL(cpu), 1, 1, 1); + break; + case 0xef: + hvf_handle_io(cpu, DX(cpu), &RAX(cpu), 1, decode->operand_size, 1); + break; + default: + VM_PANIC("Bad out opcode\n"); + break; } RIP(cpu) +=3D decode->len; } @@ -448,63 +468,73 @@ static void exec_in(struct CPUState *cpu, struct x86_= decode *decode) { addr_t val =3D 0; switch (decode->opcode[0]) { - case 0xe4: - hvf_handle_io(cpu, decode->op[0].val, &AL(cpu), 0, 1, 1); - break; - case 0xe5: - hvf_handle_io(cpu, decode->op[0].val, &val, 0, decode->operand= _size, 1); - if (decode->operand_size =3D=3D 2) - AX(cpu) =3D val; - else - RAX(cpu) =3D (uint32_t)val; - break; - case 0xec: - hvf_handle_io(cpu, DX(cpu), &AL(cpu), 0, 1, 1); - break; - case 0xed: - hvf_handle_io(cpu, DX(cpu), &val, 0, decode->operand_size, 1); - if (decode->operand_size =3D=3D 2) - AX(cpu) =3D val; - else - RAX(cpu) =3D (uint32_t)val; + case 0xe4: + hvf_handle_io(cpu, decode->op[0].val, &AL(cpu), 0, 1, 1); + break; + case 0xe5: + hvf_handle_io(cpu, decode->op[0].val, &val, 0, decode->operand_siz= e, 1); + if (decode->operand_size =3D=3D 2) { + AX(cpu) =3D val; + } else { + RAX(cpu) =3D (uint32_t)val; + } + break; + case 0xec: + hvf_handle_io(cpu, DX(cpu), &AL(cpu), 0, 1, 1); + break; + case 0xed: + hvf_handle_io(cpu, DX(cpu), &val, 0, decode->operand_size, 1); + if (decode->operand_size =3D=3D 2) { + AX(cpu) =3D val; + } else { + RAX(cpu) =3D (uint32_t)val; + } =20 - break; - default: - VM_PANIC("Bad in opcode\n"); - break; + break; + default: + VM_PANIC("Bad in opcode\n"); + break; } =20 RIP(cpu) +=3D decode->len; } =20 -static inline void string_increment_reg(struct CPUState * cpu, int reg, st= ruct x86_decode *decode) +static inline void string_increment_reg(struct CPUState *cpu, int reg, + struct x86_decode *decode) { addr_t val =3D read_reg(cpu, reg, decode->addressing_size); - if (cpu->hvf_x86->rflags.df) + if (cpu->hvf_x86->rflags.df) { val -=3D decode->operand_size; - else + } else { val +=3D decode->operand_size; + } write_reg(cpu, reg, val, decode->addressing_size); } =20 -static inline void string_rep(struct CPUState * cpu, struct x86_decode *de= code, void (*func)(struct CPUState *cpu, struct x86_decode *ins), int rep) +static inline void string_rep(struct CPUState *cpu, struct x86_decode *dec= ode, + void (*func)(struct CPUState *cpu, + struct x86_decode *ins), int re= p) { addr_t rcx =3D read_reg(cpu, REG_RCX, decode->addressing_size); while (rcx--) { func(cpu, decode); write_reg(cpu, REG_RCX, rcx, decode->addressing_size); - if ((PREFIX_REP =3D=3D rep) && !get_ZF(cpu)) + if ((PREFIX_REP =3D=3D rep) && !get_ZF(cpu)) { break; - if ((PREFIX_REPN =3D=3D rep) && get_ZF(cpu)) + } + if ((PREFIX_REPN =3D=3D rep) && get_ZF(cpu)) { break; + } } } =20 static void exec_ins_single(struct CPUState *cpu, struct x86_decode *decod= e) { - addr_t addr =3D linear_addr_size(cpu, RDI(cpu), decode->addressing_siz= e, REG_SEG_ES); + addr_t addr =3D linear_addr_size(cpu, 
RDI(cpu), decode->addressing_siz= e, + REG_SEG_ES); =20 - hvf_handle_io(cpu, DX(cpu), cpu->hvf_x86->mmio_buf, 0, decode->operand= _size, 1); + hvf_handle_io(cpu, DX(cpu), cpu->hvf_x86->mmio_buf, 0, + decode->operand_size, 1); vmx_write_mem(cpu, addr, cpu->hvf_x86->mmio_buf, decode->operand_size); =20 string_increment_reg(cpu, REG_RDI, decode); @@ -512,10 +542,11 @@ static void exec_ins_single(struct CPUState *cpu, str= uct x86_decode *decode) =20 static void exec_ins(struct CPUState *cpu, struct x86_decode *decode) { - if (decode->rep) + if (decode->rep) { string_rep(cpu, decode, exec_ins_single, 0); - else + } else { exec_ins_single(cpu, decode); + } =20 RIP(cpu) +=3D decode->len; } @@ -525,18 +556,20 @@ static void exec_outs_single(struct CPUState *cpu, st= ruct x86_decode *decode) addr_t addr =3D decode_linear_addr(cpu, decode, RSI(cpu), REG_SEG_DS); =20 vmx_read_mem(cpu, cpu->hvf_x86->mmio_buf, addr, decode->operand_size); - hvf_handle_io(cpu, DX(cpu), cpu->hvf_x86->mmio_buf, 1, decode->operand= _size, 1); + hvf_handle_io(cpu, DX(cpu), cpu->hvf_x86->mmio_buf, 1, + decode->operand_size, 1); =20 string_increment_reg(cpu, REG_RSI, decode); } =20 static void exec_outs(struct CPUState *cpu, struct x86_decode *decode) { - if (decode->rep) + if (decode->rep) { string_rep(cpu, decode, exec_outs_single, 0); - else + } else { exec_outs_single(cpu, decode); - =20 + } + RIP(cpu) +=3D decode->len; } =20 @@ -547,8 +580,9 @@ static void exec_movs_single(struct CPUState *cpu, stru= ct x86_decode *decode) addr_t val; =20 src_addr =3D decode_linear_addr(cpu, decode, RSI(cpu), REG_SEG_DS); - dst_addr =3D linear_addr_size(cpu, RDI(cpu), decode->addressing_size, = REG_SEG_ES); =20 + dst_addr =3D linear_addr_size(cpu, RDI(cpu), decode->addressing_size, + REG_SEG_ES); val =3D read_val_ext(cpu, src_addr, decode->operand_size); write_val_ext(cpu, dst_addr, val, decode->operand_size); =20 @@ -560,9 +594,9 @@ static void exec_movs(struct CPUState *cpu, struct x86_= decode *decode) { if (decode->rep) { string_rep(cpu, decode, exec_movs_single, 0); - } - else + } else { exec_movs_single(cpu, decode); + } =20 RIP(cpu) +=3D decode->len; } @@ -573,7 +607,8 @@ static void exec_cmps_single(struct CPUState *cpu, stru= ct x86_decode *decode) addr_t dst_addr; =20 src_addr =3D decode_linear_addr(cpu, decode, RSI(cpu), REG_SEG_DS); - dst_addr =3D linear_addr_size(cpu, RDI(cpu), decode->addressing_size, = REG_SEG_ES); + dst_addr =3D linear_addr_size(cpu, RDI(cpu), decode->addressing_size, + REG_SEG_ES); =20 decode->op[0].type =3D X86_VAR_IMMEDIATE; decode->op[0].val =3D read_val_ext(cpu, src_addr, decode->operand_size= ); @@ -590,9 +625,9 @@ static void exec_cmps(struct CPUState *cpu, struct x86_= decode *decode) { if (decode->rep) { string_rep(cpu, decode, exec_cmps_single, decode->rep); - } - else + } else { exec_cmps_single(cpu, decode); + } RIP(cpu) +=3D decode->len; } =20 @@ -614,9 +649,9 @@ static void exec_stos(struct CPUState *cpu, struct x86_= decode *decode) { if (decode->rep) { string_rep(cpu, decode, exec_stos_single, 0); - } - else + } else { exec_stos_single(cpu, decode); + } =20 RIP(cpu) +=3D decode->len; } @@ -639,9 +674,9 @@ static void exec_scas(struct CPUState *cpu, struct x86_= decode *decode) decode->op[0].reg =3D REG_RAX; if (decode->rep) { string_rep(cpu, decode, exec_scas_single, decode->rep); - } - else + } else { exec_scas_single(cpu, decode); + } =20 RIP(cpu) +=3D decode->len; } @@ -662,14 +697,14 @@ static void exec_lods(struct CPUState *cpu, struct x8= 6_decode *decode) { if (decode->rep) { 
string_rep(cpu, decode, exec_lods_single, 0); - } - else + } else { exec_lods_single(cpu, decode); + } =20 RIP(cpu) +=3D decode->len; } =20 -#define MSR_IA32_UCODE_REV 0x00000017 +#define MSR_IA32_UCODE_REV 0x00000017 =20 void simulate_rdmsr(struct CPUState *cpu) { @@ -679,83 +714,83 @@ void simulate_rdmsr(struct CPUState *cpu) uint64_t val =3D 0; =20 switch (msr) { - case MSR_IA32_TSC: - val =3D rdtscp() + rvmcs(cpu->hvf_fd, VMCS_TSC_OFFSET); - break; - case MSR_IA32_APICBASE: - val =3D cpu_get_apic_base(X86_CPU(cpu)->apic_state); - break; - case MSR_IA32_UCODE_REV: - val =3D (0x100000000ULL << 32) | 0x100000000ULL; - break; - case MSR_EFER: - val =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER); - break; - case MSR_FSBASE: - val =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE); - break; - case MSR_GSBASE: - val =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE); - break; - case MSR_KERNELGSBASE: - val =3D rvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE); - break; - case MSR_STAR: - abort(); - break; - case MSR_LSTAR: - abort(); - break; - case MSR_CSTAR: - abort(); - break; - case MSR_IA32_MISC_ENABLE: - val =3D env->msr_ia32_misc_enable; - break; - case MSR_MTRRphysBase(0): - case MSR_MTRRphysBase(1): - case MSR_MTRRphysBase(2): - case MSR_MTRRphysBase(3): - case MSR_MTRRphysBase(4): - case MSR_MTRRphysBase(5): - case MSR_MTRRphysBase(6): - case MSR_MTRRphysBase(7): - val =3D env->mtrr_var[(ECX(cpu) - MSR_MTRRphysBase(0)) / 2].ba= se; - break; - case MSR_MTRRphysMask(0): - case MSR_MTRRphysMask(1): - case MSR_MTRRphysMask(2): - case MSR_MTRRphysMask(3): - case MSR_MTRRphysMask(4): - case MSR_MTRRphysMask(5): - case MSR_MTRRphysMask(6): - case MSR_MTRRphysMask(7): - val =3D env->mtrr_var[(ECX(cpu) - MSR_MTRRphysMask(0)) / 2].ma= sk; - break; - case MSR_MTRRfix64K_00000: - val =3D env->mtrr_fixed[0]; - break; - case MSR_MTRRfix16K_80000: - case MSR_MTRRfix16K_A0000: - val =3D env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix16K_80000 + 1]; - break; - case MSR_MTRRfix4K_C0000: - case MSR_MTRRfix4K_C8000: - case MSR_MTRRfix4K_D0000: - case MSR_MTRRfix4K_D8000: - case MSR_MTRRfix4K_E0000: - case MSR_MTRRfix4K_E8000: - case MSR_MTRRfix4K_F0000: - case MSR_MTRRfix4K_F8000: - val =3D env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix4K_C0000 + 3]; - break; - case MSR_MTRRdefType: - val =3D env->mtrr_deftype; - break; - default: - // fprintf(stderr, "%s: unknown msr 0x%x\n", __func__, msr); - val =3D 0; - break; + case MSR_IA32_TSC: + val =3D rdtscp() + rvmcs(cpu->hvf_fd, VMCS_TSC_OFFSET); + break; + case MSR_IA32_APICBASE: + val =3D cpu_get_apic_base(X86_CPU(cpu)->apic_state); + break; + case MSR_IA32_UCODE_REV: + val =3D (0x100000000ULL << 32) | 0x100000000ULL; + break; + case MSR_EFER: + val =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER); + break; + case MSR_FSBASE: + val =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE); + break; + case MSR_GSBASE: + val =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE); + break; + case MSR_KERNELGSBASE: + val =3D rvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE); + break; + case MSR_STAR: + abort(); + break; + case MSR_LSTAR: + abort(); + break; + case MSR_CSTAR: + abort(); + break; + case MSR_IA32_MISC_ENABLE: + val =3D env->msr_ia32_misc_enable; + break; + case MSR_MTRRphysBase(0): + case MSR_MTRRphysBase(1): + case MSR_MTRRphysBase(2): + case MSR_MTRRphysBase(3): + case MSR_MTRRphysBase(4): + case MSR_MTRRphysBase(5): + case MSR_MTRRphysBase(6): + case MSR_MTRRphysBase(7): + val =3D env->mtrr_var[(ECX(cpu) - MSR_MTRRphysBase(0)) / 2].base; + break; + case MSR_MTRRphysMask(0): + case MSR_MTRRphysMask(1): + case MSR_MTRRphysMask(2): + 
case MSR_MTRRphysMask(3): + case MSR_MTRRphysMask(4): + case MSR_MTRRphysMask(5): + case MSR_MTRRphysMask(6): + case MSR_MTRRphysMask(7): + val =3D env->mtrr_var[(ECX(cpu) - MSR_MTRRphysMask(0)) / 2].mask; + break; + case MSR_MTRRfix64K_00000: + val =3D env->mtrr_fixed[0]; + break; + case MSR_MTRRfix16K_80000: + case MSR_MTRRfix16K_A0000: + val =3D env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix16K_80000 + 1]; + break; + case MSR_MTRRfix4K_C0000: + case MSR_MTRRfix4K_C8000: + case MSR_MTRRfix4K_D0000: + case MSR_MTRRfix4K_D8000: + case MSR_MTRRfix4K_E0000: + case MSR_MTRRfix4K_E8000: + case MSR_MTRRfix4K_F0000: + case MSR_MTRRfix4K_F8000: + val =3D env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix4K_C0000 + 3]; + break; + case MSR_MTRRdefType: + val =3D env->mtrr_deftype; + break; + default: + /* fprintf(stderr, "%s: unknown msr 0x%x\n", __func__, msr); */ + val =3D 0; + break; } =20 RAX(cpu) =3D (uint32_t)val; @@ -776,88 +811,89 @@ void simulate_wrmsr(struct CPUState *cpu) uint64_t data =3D ((uint64_t)EDX(cpu) << 32) | EAX(cpu); =20 switch (msr) { - case MSR_IA32_TSC: - // if (!osx_is_sierra()) - // wvmcs(cpu->hvf_fd, VMCS_TSC_OFFSET, data - rdtscp()); - //hv_vm_sync_tsc(data); - break; - case MSR_IA32_APICBASE: - cpu_set_apic_base(X86_CPU(cpu)->apic_state, data); - break; - case MSR_FSBASE: - wvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE, data); - break; - case MSR_GSBASE: - wvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE, data); - break; - case MSR_KERNELGSBASE: - wvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE, data); - break; - case MSR_STAR: - abort(); - break; - case MSR_LSTAR: - abort(); - break; - case MSR_CSTAR: - abort(); - break; - case MSR_EFER: - cpu->hvf_x86->efer.efer =3D data; - //printf("new efer %llx\n", EFER(cpu)); - wvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER, data); - if (data & EFER_NXE) - hv_vcpu_invalidate_tlb(cpu->hvf_fd); - break; - case MSR_MTRRphysBase(0): - case MSR_MTRRphysBase(1): - case MSR_MTRRphysBase(2): - case MSR_MTRRphysBase(3): - case MSR_MTRRphysBase(4): - case MSR_MTRRphysBase(5): - case MSR_MTRRphysBase(6): - case MSR_MTRRphysBase(7): - env->mtrr_var[(ECX(cpu) - MSR_MTRRphysBase(0)) / 2].base =3D d= ata; - break; - case MSR_MTRRphysMask(0): - case MSR_MTRRphysMask(1): - case MSR_MTRRphysMask(2): - case MSR_MTRRphysMask(3): - case MSR_MTRRphysMask(4): - case MSR_MTRRphysMask(5): - case MSR_MTRRphysMask(6): - case MSR_MTRRphysMask(7): - env->mtrr_var[(ECX(cpu) - MSR_MTRRphysMask(0)) / 2].mask =3D d= ata; - break; - case MSR_MTRRfix64K_00000: - env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix64K_00000] =3D data; - break; - case MSR_MTRRfix16K_80000: - case MSR_MTRRfix16K_A0000: - env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix16K_80000 + 1] =3D data; - break; - case MSR_MTRRfix4K_C0000: - case MSR_MTRRfix4K_C8000: - case MSR_MTRRfix4K_D0000: - case MSR_MTRRfix4K_D8000: - case MSR_MTRRfix4K_E0000: - case MSR_MTRRfix4K_E8000: - case MSR_MTRRfix4K_F0000: - case MSR_MTRRfix4K_F8000: - env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix4K_C0000 + 3] =3D data; - break; - case MSR_MTRRdefType: - env->mtrr_deftype =3D data; - break; - default: - break; + case MSR_IA32_TSC: + /* if (!osx_is_sierra()) + wvmcs(cpu->hvf_fd, VMCS_TSC_OFFSET, data - rdtscp()); + hv_vm_sync_tsc(data);*/ + break; + case MSR_IA32_APICBASE: + cpu_set_apic_base(X86_CPU(cpu)->apic_state, data); + break; + case MSR_FSBASE: + wvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE, data); + break; + case MSR_GSBASE: + wvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE, data); + break; + case MSR_KERNELGSBASE: + wvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE, data); + break; + case MSR_STAR: + abort(); + break; + 
case MSR_LSTAR: + abort(); + break; + case MSR_CSTAR: + abort(); + break; + case MSR_EFER: + cpu->hvf_x86->efer.efer =3D data; + /*printf("new efer %llx\n", EFER(cpu));*/ + wvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER, data); + if (data & EFER_NXE) { + hv_vcpu_invalidate_tlb(cpu->hvf_fd); + } + break; + case MSR_MTRRphysBase(0): + case MSR_MTRRphysBase(1): + case MSR_MTRRphysBase(2): + case MSR_MTRRphysBase(3): + case MSR_MTRRphysBase(4): + case MSR_MTRRphysBase(5): + case MSR_MTRRphysBase(6): + case MSR_MTRRphysBase(7): + env->mtrr_var[(ECX(cpu) - MSR_MTRRphysBase(0)) / 2].base =3D data; + break; + case MSR_MTRRphysMask(0): + case MSR_MTRRphysMask(1): + case MSR_MTRRphysMask(2): + case MSR_MTRRphysMask(3): + case MSR_MTRRphysMask(4): + case MSR_MTRRphysMask(5): + case MSR_MTRRphysMask(6): + case MSR_MTRRphysMask(7): + env->mtrr_var[(ECX(cpu) - MSR_MTRRphysMask(0)) / 2].mask =3D data; + break; + case MSR_MTRRfix64K_00000: + env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix64K_00000] =3D data; + break; + case MSR_MTRRfix16K_80000: + case MSR_MTRRfix16K_A0000: + env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix16K_80000 + 1] =3D data; + break; + case MSR_MTRRfix4K_C0000: + case MSR_MTRRfix4K_C8000: + case MSR_MTRRfix4K_D0000: + case MSR_MTRRfix4K_D8000: + case MSR_MTRRfix4K_E0000: + case MSR_MTRRfix4K_E8000: + case MSR_MTRRfix4K_F0000: + case MSR_MTRRfix4K_F8000: + env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix4K_C0000 + 3] =3D data; + break; + case MSR_MTRRdefType: + env->mtrr_deftype =3D data; + break; + default: + break; } =20 /* Related to support known hypervisor interface */ - // if (g_hypervisor_iface) - // g_hypervisor_iface->wrmsr_handler(cpu, msr, data); + /* if (g_hypervisor_iface) + g_hypervisor_iface->wrmsr_handler(cpu, msr, data); =20 - //printf("write msr %llx\n", RCX(cpu)); + printf("write msr %llx\n", RCX(cpu));*/ } =20 static void exec_wrmsr(struct CPUState *cpu, struct x86_decode *decode) @@ -893,24 +929,26 @@ static void do_bt(struct CPUState *cpu, struct x86_de= code *decode, int flag) VM_PANIC("bt 64bit\n"); } } - decode->op[0].val =3D read_val_ext(cpu, decode->op[0].ptr, decode->ope= rand_size); + decode->op[0].val =3D read_val_ext(cpu, decode->op[0].ptr, + decode->operand_size); cf =3D (decode->op[0].val >> index) & 0x01; =20 switch (flag) { - case 0: - set_CF(cpu, cf); - return; - case 1: - decode->op[0].val ^=3D (1u << index); - break; - case 2: - decode->op[0].val |=3D (1u << index); - break; - case 3: - decode->op[0].val &=3D ~(1u << index); - break; + case 0: + set_CF(cpu, cf); + return; + case 1: + decode->op[0].val ^=3D (1u << index); + break; + case 2: + decode->op[0].val |=3D (1u << index); + break; + case 3: + decode->op[0].val &=3D ~(1u << index); + break; } - write_val_ext(cpu, decode->op[0].ptr, decode->op[0].val, decode->opera= nd_size); + write_val_ext(cpu, decode->op[0].ptr, decode->op[0].val, + decode->operand_size); set_CF(cpu, cf); } =20 @@ -946,58 +984,59 @@ void exec_shl(struct CPUState *cpu, struct x86_decode= *decode) fetch_operands(cpu, decode, 2, true, true, false); =20 count =3D decode->op[1].val; - count &=3D 0x1f; // count is masked to 5 bits - if (!count) + count &=3D 0x1f; /* count is masked to 5 bits*/ + if (!count) { goto exit; + } =20 switch (decode->operand_size) { - case 1: - { - uint8_t res =3D 0; - if (count <=3D 8) { - res =3D (decode->op[0].val << count); - cf =3D (decode->op[0].val >> (8 - count)) & 0x1; - of =3D cf ^ (res >> 7); - } - - write_val_ext(cpu, decode->op[0].ptr, res, 1); - SET_FLAGS_OSZAPC_LOGIC_8(res); - SET_FLAGS_OxxxxC(cpu, of, cf); - break; + 
case 1: + { + uint8_t res =3D 0; + if (count <=3D 8) { + res =3D (decode->op[0].val << count); + cf =3D (decode->op[0].val >> (8 - count)) & 0x1; + of =3D cf ^ (res >> 7); } - case 2: - { - uint16_t res =3D 0; - - /* from bochs */ - if (count <=3D 16) { - res =3D (decode->op[0].val << count); - cf =3D (decode->op[0].val >> (16 - count)) & 0x1; - of =3D cf ^ (res >> 15); // of =3D cf ^ result15 - } =20 - write_val_ext(cpu, decode->op[0].ptr, res, 2); - SET_FLAGS_OSZAPC_LOGIC_16(res); - SET_FLAGS_OxxxxC(cpu, of, cf); - break; - } - case 4: - { - uint32_t res =3D decode->op[0].val << count; - =20 - write_val_ext(cpu, decode->op[0].ptr, res, 4); - SET_FLAGS_OSZAPC_LOGIC_32(res); - cf =3D (decode->op[0].val >> (32 - count)) & 0x1; - of =3D cf ^ (res >> 31); // of =3D cf ^ result31 - SET_FLAGS_OxxxxC(cpu, of, cf); - break; + write_val_ext(cpu, decode->op[0].ptr, res, 1); + SET_FLAGS_OSZAPC_LOGIC_8(res); + SET_FLAGS_OxxxxC(cpu, of, cf); + break; + } + case 2: + { + uint16_t res =3D 0; + + /* from bochs */ + if (count <=3D 16) { + res =3D (decode->op[0].val << count); + cf =3D (decode->op[0].val >> (16 - count)) & 0x1; + of =3D cf ^ (res >> 15); /* of =3D cf ^ result15 */ } - default: - abort(); + + write_val_ext(cpu, decode->op[0].ptr, res, 2); + SET_FLAGS_OSZAPC_LOGIC_16(res); + SET_FLAGS_OxxxxC(cpu, of, cf); + break; + } + case 4: + { + uint32_t res =3D decode->op[0].val << count; + + write_val_ext(cpu, decode->op[0].ptr, res, 4); + SET_FLAGS_OSZAPC_LOGIC_32(res); + cf =3D (decode->op[0].val >> (32 - count)) & 0x1; + of =3D cf ^ (res >> 31); /* of =3D cf ^ result31 */ + SET_FLAGS_OxxxxC(cpu, of, cf); + break; + } + default: + abort(); } =20 exit: - //lflags_to_rflags(cpu); + /* lflags_to_rflags(cpu); */ RIP(cpu) +=3D decode->len; } =20 @@ -1008,14 +1047,16 @@ void exec_movsx(struct CPUState *cpu, struct x86_de= code *decode) =20 fetch_operands(cpu, decode, 2, false, false, false); =20 - if (0xbe =3D=3D decode->opcode[1]) + if (0xbe =3D=3D decode->opcode[1]) { src_op_size =3D 1; - else + } else { src_op_size =3D 2; + } =20 decode->operand_size =3D src_op_size; calc_modrm_operand(cpu, decode, &decode->op[1]); - decode->op[1].val =3D sign(read_val_ext(cpu, decode->op[1].ptr, src_op= _size), src_op_size); + decode->op[1].val =3D sign(read_val_ext(cpu, decode->op[1].ptr, src_op= _size), + src_op_size); =20 write_val_ext(cpu, decode->op[0].ptr, decode->op[1].val, op_size); =20 @@ -1030,68 +1071,71 @@ void exec_ror(struct CPUState *cpu, struct x86_deco= de *decode) count =3D decode->op[1].val; =20 switch (decode->operand_size) { - case 1: - { - uint32_t bit6, bit7; - uint8_t res; - - if ((count & 0x07) =3D=3D 0) { - if (count & 0x18) { - bit6 =3D ((uint8_t)decode->op[0].val >> 6) & 1; - bit7 =3D ((uint8_t)decode->op[0].val >> 7) & 1; - SET_FLAGS_OxxxxC(cpu, bit6 ^ bit7, bit7); - } - } else { - count &=3D 0x7; /* use only bottom 3 bits */ - res =3D ((uint8_t)decode->op[0].val >> count) | ((uint8_t)= decode->op[0].val << (8 - count)); - write_val_ext(cpu, decode->op[0].ptr, res, 1); - bit6 =3D (res >> 6) & 1; - bit7 =3D (res >> 7) & 1; - /* set eflags: ROR count affects the following flags: C, O= */ + case 1: + { + uint32_t bit6, bit7; + uint8_t res; + + if ((count & 0x07) =3D=3D 0) { + if (count & 0x18) { + bit6 =3D ((uint8_t)decode->op[0].val >> 6) & 1; + bit7 =3D ((uint8_t)decode->op[0].val >> 7) & 1; SET_FLAGS_OxxxxC(cpu, bit6 ^ bit7, bit7); - } - break; + } + } else { + count &=3D 0x7; /* use only bottom 3 bits */ + res =3D ((uint8_t)decode->op[0].val >> count) | + ((uint8_t)decode->op[0].val << (8 
- count)); + write_val_ext(cpu, decode->op[0].ptr, res, 1); + bit6 =3D (res >> 6) & 1; + bit7 =3D (res >> 7) & 1; + /* set eflags: ROR count affects the following flags: C, O */ + SET_FLAGS_OxxxxC(cpu, bit6 ^ bit7, bit7); } - case 2: - { - uint32_t bit14, bit15; - uint16_t res; - - if ((count & 0x0f) =3D=3D 0) { - if (count & 0x10) { - bit14 =3D ((uint16_t)decode->op[0].val >> 14) & 1; - bit15 =3D ((uint16_t)decode->op[0].val >> 15) & 1; - // of =3D result14 ^ result15 - SET_FLAGS_OxxxxC(cpu, bit14 ^ bit15, bit15); - } - } else { - count &=3D 0x0f; // use only 4 LSB's - res =3D ((uint16_t)decode->op[0].val >> count) | ((uint16_= t)decode->op[0].val << (16 - count)); - write_val_ext(cpu, decode->op[0].ptr, res, 2); - - bit14 =3D (res >> 14) & 1; - bit15 =3D (res >> 15) & 1; - // of =3D result14 ^ result15 + break; + } + case 2: + { + uint32_t bit14, bit15; + uint16_t res; + + if ((count & 0x0f) =3D=3D 0) { + if (count & 0x10) { + bit14 =3D ((uint16_t)decode->op[0].val >> 14) & 1; + bit15 =3D ((uint16_t)decode->op[0].val >> 15) & 1; + /* of =3D result14 ^ result15 */ SET_FLAGS_OxxxxC(cpu, bit14 ^ bit15, bit15); } - break; + } else { + count &=3D 0x0f; /* use only 4 LSB's */ + res =3D ((uint16_t)decode->op[0].val >> count) | + ((uint16_t)decode->op[0].val << (16 - count)); + write_val_ext(cpu, decode->op[0].ptr, res, 2); + + bit14 =3D (res >> 14) & 1; + bit15 =3D (res >> 15) & 1; + /* of =3D result14 ^ result15 */ + SET_FLAGS_OxxxxC(cpu, bit14 ^ bit15, bit15); } - case 4: - { - uint32_t bit31, bit30; - uint32_t res; - - count &=3D 0x1f; - if (count) { - res =3D ((uint32_t)decode->op[0].val >> count) | ((uint32_= t)decode->op[0].val << (32 - count)); - write_val_ext(cpu, decode->op[0].ptr, res, 4); - - bit31 =3D (res >> 31) & 1; - bit30 =3D (res >> 30) & 1; - // of =3D result30 ^ result31 - SET_FLAGS_OxxxxC(cpu, bit30 ^ bit31, bit31); - } - break; + break; + } + case 4: + { + uint32_t bit31, bit30; + uint32_t res; + + count &=3D 0x1f; + if (count) { + res =3D ((uint32_t)decode->op[0].val >> count) | + ((uint32_t)decode->op[0].val << (32 - count)); + write_val_ext(cpu, decode->op[0].ptr, res, 4); + + bit31 =3D (res >> 31) & 1; + bit30 =3D (res >> 30) & 1; + /* of =3D result30 ^ result31 */ + SET_FLAGS_OxxxxC(cpu, bit30 ^ bit31, bit31); + } + break; } } RIP(cpu) +=3D decode->len; @@ -1105,71 +1149,74 @@ void exec_rol(struct CPUState *cpu, struct x86_deco= de *decode) count =3D decode->op[1].val; =20 switch (decode->operand_size) { - case 1: - { - uint32_t bit0, bit7; - uint8_t res; - - if ((count & 0x07) =3D=3D 0) { - if (count & 0x18) { - bit0 =3D ((uint8_t)decode->op[0].val & 1); - bit7 =3D ((uint8_t)decode->op[0].val >> 7); - SET_FLAGS_OxxxxC(cpu, bit0 ^ bit7, bit0); - } - } else { - count &=3D 0x7; // use only lowest 3 bits - res =3D ((uint8_t)decode->op[0].val << count) | ((uint8_t)= decode->op[0].val >> (8 - count)); - - write_val_ext(cpu, decode->op[0].ptr, res, 1); - /* set eflags: - * ROL count affects the following flags: C, O - */ - bit0 =3D (res & 1); - bit7 =3D (res >> 7); + case 1: + { + uint32_t bit0, bit7; + uint8_t res; + + if ((count & 0x07) =3D=3D 0) { + if (count & 0x18) { + bit0 =3D ((uint8_t)decode->op[0].val & 1); + bit7 =3D ((uint8_t)decode->op[0].val >> 7); SET_FLAGS_OxxxxC(cpu, bit0 ^ bit7, bit0); } - break; + } else { + count &=3D 0x7; /* use only lowest 3 bits */ + res =3D ((uint8_t)decode->op[0].val << count) | + ((uint8_t)decode->op[0].val >> (8 - count)); + + write_val_ext(cpu, decode->op[0].ptr, res, 1); + /* set eflags: + * ROL count affects the following flags: 
C, O + */ + bit0 =3D (res & 1); + bit7 =3D (res >> 7); + SET_FLAGS_OxxxxC(cpu, bit0 ^ bit7, bit0); } - case 2: - { - uint32_t bit0, bit15; - uint16_t res; - - if ((count & 0x0f) =3D=3D 0) { - if (count & 0x10) { - bit0 =3D ((uint16_t)decode->op[0].val & 0x1); - bit15 =3D ((uint16_t)decode->op[0].val >> 15); - // of =3D cf ^ result15 - SET_FLAGS_OxxxxC(cpu, bit0 ^ bit15, bit0); - } - } else { - count &=3D 0x0f; // only use bottom 4 bits - res =3D ((uint16_t)decode->op[0].val << count) | ((uint16_= t)decode->op[0].val >> (16 - count)); - - write_val_ext(cpu, decode->op[0].ptr, res, 2); - bit0 =3D (res & 0x1); - bit15 =3D (res >> 15); - // of =3D cf ^ result15 + break; + } + case 2: + { + uint32_t bit0, bit15; + uint16_t res; + + if ((count & 0x0f) =3D=3D 0) { + if (count & 0x10) { + bit0 =3D ((uint16_t)decode->op[0].val & 0x1); + bit15 =3D ((uint16_t)decode->op[0].val >> 15); + /* of =3D cf ^ result15 */ SET_FLAGS_OxxxxC(cpu, bit0 ^ bit15, bit0); } - break; + } else { + count &=3D 0x0f; /* only use bottom 4 bits */ + res =3D ((uint16_t)decode->op[0].val << count) | + ((uint16_t)decode->op[0].val >> (16 - count)); + + write_val_ext(cpu, decode->op[0].ptr, res, 2); + bit0 =3D (res & 0x1); + bit15 =3D (res >> 15); + /* of =3D cf ^ result15 */ + SET_FLAGS_OxxxxC(cpu, bit0 ^ bit15, bit0); } - case 4: - { - uint32_t bit0, bit31; - uint32_t res; - - count &=3D 0x1f; - if (count) { - res =3D ((uint32_t)decode->op[0].val << count) | ((uint32_= t)decode->op[0].val >> (32 - count)); - - write_val_ext(cpu, decode->op[0].ptr, res, 4); - bit0 =3D (res & 0x1); - bit31 =3D (res >> 31); - // of =3D cf ^ result31 - SET_FLAGS_OxxxxC(cpu, bit0 ^ bit31, bit0); - } - break; + break; + } + case 4: + { + uint32_t bit0, bit31; + uint32_t res; + + count &=3D 0x1f; + if (count) { + res =3D ((uint32_t)decode->op[0].val << count) | + ((uint32_t)decode->op[0].val >> (32 - count)); + + write_val_ext(cpu, decode->op[0].ptr, res, 4); + bit0 =3D (res & 0x1); + bit31 =3D (res >> 31); + /* of =3D cf ^ result31 */ + SET_FLAGS_OxxxxC(cpu, bit0 ^ bit31, bit0); + } + break; } } RIP(cpu) +=3D decode->len; @@ -1184,70 +1231,79 @@ void exec_rcl(struct CPUState *cpu, struct x86_deco= de *decode) fetch_operands(cpu, decode, 2, true, true, false); count =3D decode->op[1].val & 0x1f; =20 - switch(decode->operand_size) { - case 1: - { - uint8_t op1_8 =3D decode->op[0].val; - uint8_t res; - count %=3D 9; - if (!count) - break; + switch (decode->operand_size) { + case 1: + { + uint8_t op1_8 =3D decode->op[0].val; + uint8_t res; + count %=3D 9; + if (!count) { + break; + } + + if (1 =3D=3D count) { + res =3D (op1_8 << 1) | get_CF(cpu); + } else { + res =3D (op1_8 << count) | (get_CF(cpu) << (count - 1)) | + (op1_8 >> (9 - count)); + } =20 - if (1 =3D=3D count) - res =3D (op1_8 << 1) | get_CF(cpu); - else - res =3D (op1_8 << count) | (get_CF(cpu) << (count - 1)) | = (op1_8 >> (9 - count)); + write_val_ext(cpu, decode->op[0].ptr, res, 1); =20 - write_val_ext(cpu, decode->op[0].ptr, res, 1); + cf =3D (op1_8 >> (8 - count)) & 0x01; + of =3D cf ^ (res >> 7); /* of =3D cf ^ result7 */ + SET_FLAGS_OxxxxC(cpu, of, cf); + break; + } + case 2: + { + uint16_t res; + uint16_t op1_16 =3D decode->op[0].val; =20 - cf =3D (op1_8 >> (8 - count)) & 0x01; - of =3D cf ^ (res >> 7); // of =3D cf ^ result7 - SET_FLAGS_OxxxxC(cpu, of, cf); + count %=3D 17; + if (!count) { break; } - case 2: - { - uint16_t res; - uint16_t op1_16 =3D decode->op[0].val; - - count %=3D 17; - if (!count) - break; - - if (1 =3D=3D count) - res =3D (op1_16 << 1) | get_CF(cpu); - else if 
(count =3D=3D 16) - res =3D (get_CF(cpu) << 15) | (op1_16 >> 1); - else // 2..15 - res =3D (op1_16 << count) | (get_CF(cpu) << (count - 1)) |= (op1_16 >> (17 - count)); - =20 - write_val_ext(cpu, decode->op[0].ptr, res, 2); - =20 - cf =3D (op1_16 >> (16 - count)) & 0x1; - of =3D cf ^ (res >> 15); // of =3D cf ^ result15 - SET_FLAGS_OxxxxC(cpu, of, cf); - break; - } - case 4: - { - uint32_t res; - uint32_t op1_32 =3D decode->op[0].val; =20 - if (!count) - break; + if (1 =3D=3D count) { + res =3D (op1_16 << 1) | get_CF(cpu); + } else if (count =3D=3D 16) { + res =3D (get_CF(cpu) << 15) | (op1_16 >> 1); + } else { /* 2..15 */ + res =3D (op1_16 << count) | (get_CF(cpu) << (count - 1)) | + (op1_16 >> (17 - count)); + } =20 - if (1 =3D=3D count) - res =3D (op1_32 << 1) | get_CF(cpu); - else - res =3D (op1_32 << count) | (get_CF(cpu) << (count - 1)) |= (op1_32 >> (33 - count)); + write_val_ext(cpu, decode->op[0].ptr, res, 2); =20 - write_val_ext(cpu, decode->op[0].ptr, res, 4); + cf =3D (op1_16 >> (16 - count)) & 0x1; + of =3D cf ^ (res >> 15); /* of =3D cf ^ result15 */ + SET_FLAGS_OxxxxC(cpu, of, cf); + break; + } + case 4: + { + uint32_t res; + uint32_t op1_32 =3D decode->op[0].val; =20 - cf =3D (op1_32 >> (32 - count)) & 0x1; - of =3D cf ^ (res >> 31); // of =3D cf ^ result31 - SET_FLAGS_OxxxxC(cpu, of, cf); + if (!count) { break; } + + if (1 =3D=3D count) { + res =3D (op1_32 << 1) | get_CF(cpu); + } else { + res =3D (op1_32 << count) | (get_CF(cpu) << (count - 1)) | + (op1_32 >> (33 - count)); + } + + write_val_ext(cpu, decode->op[0].ptr, res, 4); + + cf =3D (op1_32 >> (32 - count)) & 0x1; + of =3D cf ^ (res >> 31); /* of =3D cf ^ result31 */ + SET_FLAGS_OxxxxC(cpu, of, cf); + break; + } } RIP(cpu) +=3D decode->len; } @@ -1260,60 +1316,68 @@ void exec_rcr(struct CPUState *cpu, struct x86_deco= de *decode) fetch_operands(cpu, decode, 2, true, true, false); count =3D decode->op[1].val & 0x1f; =20 - switch(decode->operand_size) { - case 1: - { - uint8_t op1_8 =3D decode->op[0].val; - uint8_t res; + switch (decode->operand_size) { + case 1: + { + uint8_t op1_8 =3D decode->op[0].val; + uint8_t res; =20 - count %=3D 9; - if (!count) - break; - res =3D (op1_8 >> count) | (get_CF(cpu) << (8 - count)) | (op1= _8 << (9 - count)); + count %=3D 9; + if (!count) { + break; + } + res =3D (op1_8 >> count) | (get_CF(cpu) << (8 - count)) | + (op1_8 << (9 - count)); =20 - write_val_ext(cpu, decode->op[0].ptr, res, 1); + write_val_ext(cpu, decode->op[0].ptr, res, 1); + + cf =3D (op1_8 >> (count - 1)) & 0x1; + of =3D (((res << 1) ^ res) >> 7) & 0x1; /* of =3D result6 ^ result= 7 */ + SET_FLAGS_OxxxxC(cpu, of, cf); + break; + } + case 2: + { + uint16_t op1_16 =3D decode->op[0].val; + uint16_t res; =20 - cf =3D (op1_8 >> (count - 1)) & 0x1; - of =3D (((res << 1) ^ res) >> 7) & 0x1; // of =3D result6 ^ re= sult7 - SET_FLAGS_OxxxxC(cpu, of, cf); + count %=3D 17; + if (!count) { break; } - case 2: - { - uint16_t op1_16 =3D decode->op[0].val; - uint16_t res; + res =3D (op1_16 >> count) | (get_CF(cpu) << (16 - count)) | + (op1_16 << (17 - count)); =20 - count %=3D 17; - if (!count) - break; - res =3D (op1_16 >> count) | (get_CF(cpu) << (16 - count)) | (o= p1_16 << (17 - count)); + write_val_ext(cpu, decode->op[0].ptr, res, 2); =20 - write_val_ext(cpu, decode->op[0].ptr, res, 2); + cf =3D (op1_16 >> (count - 1)) & 0x1; + of =3D ((uint16_t)((res << 1) ^ res) >> 15) & 0x1; /* of =3D resul= t15 ^ + result14 */ + SET_FLAGS_OxxxxC(cpu, of, cf); + break; + } + case 4: + { + uint32_t res; + uint32_t op1_32 =3D 
decode->op[0].val; =20 - cf =3D (op1_16 >> (count - 1)) & 0x1; - of =3D ((uint16_t)((res << 1) ^ res) >> 15) & 0x1; // of =3D r= esult15 ^ result14 - SET_FLAGS_OxxxxC(cpu, of, cf); + if (!count) { break; } - case 4: - { - uint32_t res; - uint32_t op1_32 =3D decode->op[0].val; - - if (!count) - break; -=20 - if (1 =3D=3D count) - res =3D (op1_32 >> 1) | (get_CF(cpu) << 31); - else - res =3D (op1_32 >> count) | (get_CF(cpu) << (32 - count)) = | (op1_32 << (33 - count)); =20 - write_val_ext(cpu, decode->op[0].ptr, res, 4); + if (1 =3D=3D count) { + res =3D (op1_32 >> 1) | (get_CF(cpu) << 31); + } else { + res =3D (op1_32 >> count) | (get_CF(cpu) << (32 - count)) | + (op1_32 << (33 - count)); + } =20 - cf =3D (op1_32 >> (count - 1)) & 0x1; - of =3D ((res << 1) ^ res) >> 31; // of =3D result30 ^ result31 - SET_FLAGS_OxxxxC(cpu, of, cf); - break; + write_val_ext(cpu, decode->op[0].ptr, res, 4); + + cf =3D (op1_32 >> (count - 1)) & 0x1; + of =3D ((res << 1) ^ res) >> 31; /* of =3D result30 ^ result31 */ + SET_FLAGS_OxxxxC(cpu, of, cf); + break; } } RIP(cpu) +=3D decode->len; @@ -1323,8 +1387,10 @@ static void exec_xchg(struct CPUState *cpu, struct x= 86_decode *decode) { fetch_operands(cpu, decode, 2, true, true, false); =20 - write_val_ext(cpu, decode->op[0].ptr, decode->op[1].val, decode->opera= nd_size); - write_val_ext(cpu, decode->op[1].ptr, decode->op[0].val, decode->opera= nd_size); + write_val_ext(cpu, decode->op[0].ptr, decode->op[1].val, + decode->operand_size); + write_val_ext(cpu, decode->op[1].ptr, decode->op[0].val, + decode->operand_size); =20 RIP(cpu) +=3D decode->len; } @@ -1332,7 +1398,8 @@ static void exec_xchg(struct CPUState *cpu, struct x8= 6_decode *decode) static void exec_xadd(struct CPUState *cpu, struct x86_decode *decode) { EXEC_2OP_ARITH_CMD(cpu, decode, +, SET_FLAGS_OSZAPC_ADD, true); - write_val_ext(cpu, decode->op[1].ptr, decode->op[0].val, decode->opera= nd_size); + write_val_ext(cpu, decode->op[1].ptr, decode->op[0].val, + decode->operand_size); =20 RIP(cpu) +=3D decode->len; } @@ -1388,13 +1455,9 @@ static struct cmd_handler _cmd_handler[X86_DECODE_CM= D_LAST]; static void init_cmd_handler(CPUState *cpu) { int i; - for (i =3D 0; i < ARRAY_SIZE(handlers); i++) + for (i =3D 0; i < ARRAY_SIZE(handlers); i++) { _cmd_handler[handlers[i].cmd] =3D handlers[i]; -} - -static void print_debug(struct CPUState *cpu) -{ - printf("%llx: eax %llx ebx %llx ecx %llx edx %llx esi %llx edi %llx eb= p %llx esp %llx flags %llx\n", RIP(cpu), RAX(cpu), RBX(cpu), RCX(cpu), RDX(= cpu), RSI(cpu), RDI(cpu), RBP(cpu), RSP(cpu), EFLAGS(cpu)); + } } =20 void load_regs(struct CPUState *cpu) @@ -1408,9 +1471,10 @@ void load_regs(struct CPUState *cpu) RRX(cpu, REG_RDI) =3D rreg(cpu->hvf_fd, HV_X86_RDI); RRX(cpu, REG_RSP) =3D rreg(cpu->hvf_fd, HV_X86_RSP); RRX(cpu, REG_RBP) =3D rreg(cpu->hvf_fd, HV_X86_RBP); - for (i =3D 8; i < 16; i++) + for (i =3D 8; i < 16; i++) { RRX(cpu, i) =3D rreg(cpu->hvf_fd, HV_X86_RAX + i); =20 + } RFLAGS(cpu) =3D rreg(cpu->hvf_fd, HV_X86_RFLAGS); rflags_to_lflags(cpu); RIP(cpu) =3D rreg(cpu->hvf_fd, HV_X86_RIP); @@ -1429,32 +1493,36 @@ void store_regs(struct CPUState *cpu) wreg(cpu->hvf_fd, HV_X86_RDI, RDI(cpu)); wreg(cpu->hvf_fd, HV_X86_RBP, RBP(cpu)); wreg(cpu->hvf_fd, HV_X86_RSP, RSP(cpu)); - for (i =3D 8; i < 16; i++) + for (i =3D 8; i < 16; i++) { wreg(cpu->hvf_fd, HV_X86_RAX + i, RRX(cpu, i)); - =20 + } + lflags_to_rflags(cpu); wreg(cpu->hvf_fd, HV_X86_RFLAGS, RFLAGS(cpu)); macvm_set_rip(cpu, RIP(cpu)); - - //print_debug(cpu); } =20 bool exec_instruction(struct 
CPUState *cpu, struct x86_decode *ins) { - //if (hvf_vcpu_id(cpu)) - //printf("%d, %llx: exec_instruction %s\n", hvf_vcpu_id(cpu), RIP(cpu= ), decode_cmd_to_string(ins->cmd)); - =20 + /*if (hvf_vcpu_id(cpu)) + printf("%d, %llx: exec_instruction %s\n", hvf_vcpu_id(cpu), RIP(cpu), + decode_cmd_to_string(ins->cmd));*/ + if (0 && ins->is_fpu) { VM_PANIC("emulate fpu\n"); } else { if (!_cmd_handler[ins->cmd].handler) { - printf("Unimplemented handler (%llx) for %d (%x %x) \n", RIP(c= pu), ins->cmd, ins->opcode[0], - ins->opcode_len > 1 ? ins->opcode[1] : 0); + printf("Unimplemented handler (%llx) for %d (%x %x) \n", RIP(c= pu), + ins->cmd, ins->opcode[0], + ins->opcode_len > 1 ? ins->opcode[1] : 0); RIP(cpu) +=3D ins->len; return true; } - =20 - VM_PANIC_ON_EX(!_cmd_handler[ins->cmd].handler, "Unimplemented han= dler (%llx) for %d (%x %x) \n", RIP(cpu), ins->cmd, ins->opcode[0], ins->op= code_len > 1 ? ins->opcode[1] : 0); + + VM_PANIC_ON_EX(!_cmd_handler[ins->cmd].handler, + "Unimplemented handler (%llx) for %d (%x %x) \n", RIP(cpu), + ins->cmd, ins->opcode[0], + ins->opcode_len > 1 ? ins->opcode[1] : 0); _cmd_handler[ins->cmd].handler(cpu, ins); } return true; diff --git a/target/i386/hvf-utils/x86_flags.c b/target/i386/hvf-utils/x86_= flags.c index ca876d03dd..187ab9b56b 100644 --- a/target/i386/hvf-utils/x86_flags.c +++ b/target/i386/hvf-utils/x86_flags.c @@ -32,65 +32,78 @@ void SET_FLAGS_OxxxxC(struct CPUState *cpu, uint32_t ne= w_of, uint32_t new_cf) { uint32_t temp_po =3D new_of ^ new_cf; cpu->hvf_x86->lflags.auxbits &=3D ~(LF_MASK_PO | LF_MASK_CF); - cpu->hvf_x86->lflags.auxbits |=3D (temp_po << LF_BIT_PO) | (new_cf << = LF_BIT_CF); + cpu->hvf_x86->lflags.auxbits |=3D (temp_po << LF_BIT_PO) | + (new_cf << LF_BIT_CF); } =20 -void SET_FLAGS_OSZAPC_SUB32(struct CPUState *cpu, uint32_t v1, uint32_t v2= , uint32_t diff) +void SET_FLAGS_OSZAPC_SUB32(struct CPUState *cpu, uint32_t v1, uint32_t v2, + uint32_t diff) { SET_FLAGS_OSZAPC_SUB_32(v1, v2, diff); } =20 -void SET_FLAGS_OSZAPC_SUB16(struct CPUState *cpu, uint16_t v1, uint16_t v2= , uint16_t diff) +void SET_FLAGS_OSZAPC_SUB16(struct CPUState *cpu, uint16_t v1, uint16_t v2, + uint16_t diff) { SET_FLAGS_OSZAPC_SUB_16(v1, v2, diff); } =20 -void SET_FLAGS_OSZAPC_SUB8(struct CPUState *cpu, uint8_t v1, uint8_t v2, u= int8_t diff) +void SET_FLAGS_OSZAPC_SUB8(struct CPUState *cpu, uint8_t v1, uint8_t v2, + uint8_t diff) { SET_FLAGS_OSZAPC_SUB_8(v1, v2, diff); } =20 -void SET_FLAGS_OSZAPC_ADD32(struct CPUState *cpu, uint32_t v1, uint32_t v2= , uint32_t diff) +void SET_FLAGS_OSZAPC_ADD32(struct CPUState *cpu, uint32_t v1, uint32_t v2, + uint32_t diff) { SET_FLAGS_OSZAPC_ADD_32(v1, v2, diff); } =20 -void SET_FLAGS_OSZAPC_ADD16(struct CPUState *cpu, uint16_t v1, uint16_t v2= , uint16_t diff) +void SET_FLAGS_OSZAPC_ADD16(struct CPUState *cpu, uint16_t v1, uint16_t v2, + uint16_t diff) { SET_FLAGS_OSZAPC_ADD_16(v1, v2, diff); } =20 -void SET_FLAGS_OSZAPC_ADD8(struct CPUState *cpu, uint8_t v1, uint8_t v2, u= int8_t diff) +void SET_FLAGS_OSZAPC_ADD8(struct CPUState *cpu, uint8_t v1, uint8_t v2, + uint8_t diff) { SET_FLAGS_OSZAPC_ADD_8(v1, v2, diff); } =20 -void SET_FLAGS_OSZAP_SUB32(struct CPUState *cpu, uint32_t v1, uint32_t v2,= uint32_t diff) +void SET_FLAGS_OSZAP_SUB32(struct CPUState *cpu, uint32_t v1, uint32_t v2, + uint32_t diff) { SET_FLAGS_OSZAP_SUB_32(v1, v2, diff); } =20 -void SET_FLAGS_OSZAP_SUB16(struct CPUState *cpu, uint16_t v1, uint16_t v2,= uint16_t diff) +void SET_FLAGS_OSZAP_SUB16(struct CPUState *cpu, uint16_t v1, uint16_t v2, + 
uint16_t diff) { SET_FLAGS_OSZAP_SUB_16(v1, v2, diff); } =20 -void SET_FLAGS_OSZAP_SUB8(struct CPUState *cpu, uint8_t v1, uint8_t v2, ui= nt8_t diff) +void SET_FLAGS_OSZAP_SUB8(struct CPUState *cpu, uint8_t v1, uint8_t v2, + uint8_t diff) { SET_FLAGS_OSZAP_SUB_8(v1, v2, diff); } =20 -void SET_FLAGS_OSZAP_ADD32(struct CPUState *cpu, uint32_t v1, uint32_t v2,= uint32_t diff) +void SET_FLAGS_OSZAP_ADD32(struct CPUState *cpu, uint32_t v1, uint32_t v2, + uint32_t diff) { SET_FLAGS_OSZAP_ADD_32(v1, v2, diff); } =20 -void SET_FLAGS_OSZAP_ADD16(struct CPUState *cpu, uint16_t v1, uint16_t v2,= uint16_t diff) +void SET_FLAGS_OSZAP_ADD16(struct CPUState *cpu, uint16_t v1, uint16_t v2, + uint16_t diff) { SET_FLAGS_OSZAP_ADD_16(v1, v2, diff); } =20 -void SET_FLAGS_OSZAP_ADD8(struct CPUState *cpu, uint8_t v1, uint8_t v2, ui= nt8_t diff) +void SET_FLAGS_OSZAP_ADD8(struct CPUState *cpu, uint8_t v1, uint8_t v2, + uint8_t diff) { SET_FLAGS_OSZAP_ADD_8(v1, v2, diff); } @@ -264,19 +277,22 @@ bool get_ZF(struct CPUState *cpu) void set_ZF(struct CPUState *cpu, bool val) { if (val) { - cpu->hvf_x86->lflags.auxbits ^=3D (((cpu->hvf_x86->lflags.result >= > LF_SIGN_BIT) & 1) << LF_BIT_SD); - // merge the parity bits into the Parity Delta Byte + cpu->hvf_x86->lflags.auxbits ^=3D + (((cpu->hvf_x86->lflags.result >> LF_SIGN_BIT) & 1) << LF_BIT_SD); + /* merge the parity bits into the Parity Delta Byte */ uint32_t temp_pdb =3D (255 & cpu->hvf_x86->lflags.result); cpu->hvf_x86->lflags.auxbits ^=3D (temp_pdb << LF_BIT_PDB); - // now zero the .result value + /* now zero the .result value */ cpu->hvf_x86->lflags.result =3D 0; - } else + } else { cpu->hvf_x86->lflags.result |=3D (1 << 8); + } } =20 bool get_SF(struct CPUState *cpu) { - return ((cpu->hvf_x86->lflags.result >> LF_SIGN_BIT) ^ (cpu->hvf_x86->= lflags.auxbits >> LF_BIT_SD)) & 1; + return ((cpu->hvf_x86->lflags.result >> LF_SIGN_BIT) ^ + (cpu->hvf_x86->lflags.auxbits >> LF_BIT_SD)) & 1; } =20 void set_SF(struct CPUState *cpu, bool val) diff --git a/target/i386/hvf-utils/x86_flags.h b/target/i386/hvf-utils/x86_= flags.h index f963f8ad1b..87a694b408 100644 --- a/target/i386/hvf-utils/x86_flags.h +++ b/target/i386/hvf-utils/x86_flags.h @@ -55,19 +55,24 @@ typedef struct lazy_flags { #define GET_ADD_OVERFLOW(op1, op2, result, mask) \ ((((op1) ^ (result)) & ((op2) ^ (result))) & (mask)) =20 -// ******************* -// OSZAPC -// ******************* +/* ******************* */ +/* OSZAPC */ +/* ******************* */ =20 /* size, carries, result */ #define SET_FLAGS_OSZAPC_SIZE(size, lf_carries, lf_result) { \ addr_t temp =3D ((lf_carries) & (LF_MASK_AF)) | \ (((lf_carries) >> (size - 2)) << LF_BIT_PO); \ cpu->hvf_x86->lflags.result =3D (addr_t)(int##size##_t)(lf_result); \ - if ((size) =3D=3D 32) temp =3D ((lf_carries) & ~(LF_MASK_PDB | LF_MASK= _SD)); \ - else if ((size) =3D=3D 16) temp =3D ((lf_carries) & (LF_MASK_AF)) | ((= lf_carries) << 16); \ - else if ((size) =3D=3D 8) temp =3D ((lf_carries) & (LF_MASK_AF)) | ((= lf_carries) << 24); \ - else VM_PANIC("unimplemented"); = \ + if ((size) =3D=3D 32) { \ + temp =3D ((lf_carries) & ~(LF_MASK_PDB | LF_MASK_SD)); \ + } else if ((size) =3D=3D 16) { \ + temp =3D ((lf_carries) & (LF_MASK_AF)) | ((lf_carries) << 16); \ + } else if ((size) =3D=3D 8) { \ + temp =3D ((lf_carries) & (LF_MASK_AF)) | ((lf_carries) << 24); \ + } else { \ + VM_PANIC("unimplemented"); \ + } \ cpu->hvf_x86->lflags.auxbits =3D (addr_t)(uint32_t)temp; \ } =20 @@ -87,10 +92,15 @@ typedef struct lazy_flags { #define SET_FLAGS_OSZAPC_LOGIC_32(result_32) \ 
SET_FLAGS_OSZAPC_32(0, (result_32)) #define SET_FLAGS_OSZAPC_LOGIC_SIZE(size, result) { \ - if (32 =3D=3D size) {SET_FLAGS_OSZAPC_LOGIC_32(result);} \ - else if (16 =3D=3D size) {SET_FLAGS_OSZAPC_LOGIC_16(result);} \ - else if (8 =3D=3D size) {SET_FLAGS_OSZAPC_LOGIC_8(result);} \ - else VM_PANIC("unimplemented"); \ + if (32 =3D=3D size) { \ + SET_FLAGS_OSZAPC_LOGIC_32(result); \ + } else if (16 =3D=3D size) { \ + SET_FLAGS_OSZAPC_LOGIC_16(result); \ + } else if (8 =3D=3D size) { \ + SET_FLAGS_OSZAPC_LOGIC_8(result); \ + } else { \ + VM_PANIC("unimplemented"); \ + } \ } =20 /* op1, op2, result */ @@ -109,17 +119,22 @@ typedef struct lazy_flags { #define SET_FLAGS_OSZAPC_SUB_32(op1_32, op2_32, diff_32) \ SET_FLAGS_OSZAPC_32(SUB_COUT_VEC((op1_32), (op2_32), (diff_32)), (diff= _32)) =20 -// ******************* -// OSZAP -// ******************* +/* ******************* */ +/* OSZAP */ +/* ******************* */ /* size, carries, result */ #define SET_FLAGS_OSZAP_SIZE(size, lf_carries, lf_result) { \ addr_t temp =3D ((lf_carries) & (LF_MASK_AF)) | \ (((lf_carries) >> (size - 2)) << LF_BIT_PO); \ - if ((size) =3D=3D 32) temp =3D ((lf_carries) & ~(LF_MASK_PDB | LF_MASK= _SD)); \ - else if ((size) =3D=3D 16) temp =3D ((lf_carries) & (LF_MASK_AF)) | ((= lf_carries) << 16); \ - else if ((size) =3D=3D 8) temp =3D ((lf_carries) & (LF_MASK_AF)) | ((= lf_carries) << 24); \ - else VM_PANIC("unimplemented"); = \ + if ((size) =3D=3D 32) { \ + temp =3D ((lf_carries) & ~(LF_MASK_PDB | LF_MASK_SD)); \ + } else if ((size) =3D=3D 16) { \ + temp =3D ((lf_carries) & (LF_MASK_AF)) | ((lf_carries) << 16); \ + } else if ((size) =3D=3D 8) { \ + temp =3D ((lf_carries) & (LF_MASK_AF)) | ((lf_carries) << 24); \ + } else { \ + VM_PANIC("unimplemented"); \ + } \ cpu->hvf_x86->lflags.result =3D (addr_t)(int##size##_t)(lf_result); \ addr_t delta_c =3D (cpu->hvf_x86->lflags.auxbits ^ temp) & LF_MASK_CF;= \ delta_c ^=3D (delta_c >> 1); \ @@ -150,9 +165,9 @@ typedef struct lazy_flags { #define SET_FLAGS_OSZAP_SUB_32(op1_32, op2_32, diff_32) \ SET_FLAGS_OSZAP_32(SUB_COUT_VEC((op1_32), (op2_32), (diff_32)), (diff_= 32)) =20 -// ******************* -// OSZAxC -// ******************* +/* ******************* */ +/* OSZAxC */ +/* ******************* */ /* size, carries, result */ #define SET_FLAGS_OSZAxC_LOGIC_SIZE(size, lf_result) { \ bool saved_PF =3D getB_PF(); \ @@ -183,21 +198,33 @@ void set_OSZAPC(struct CPUState *cpu, uint32_t flags3= 2); =20 void SET_FLAGS_OxxxxC(struct CPUState *cpu, uint32_t new_of, uint32_t new_= cf); =20 -void SET_FLAGS_OSZAPC_SUB32(struct CPUState *cpu, uint32_t v1, uint32_t v2= , uint32_t diff); -void SET_FLAGS_OSZAPC_SUB16(struct CPUState *cpu, uint16_t v1, uint16_t v2= , uint16_t diff); -void SET_FLAGS_OSZAPC_SUB8(struct CPUState *cpu, uint8_t v1, uint8_t v2, u= int8_t diff); - -void SET_FLAGS_OSZAPC_ADD32(struct CPUState *cpu, uint32_t v1, uint32_t v2= , uint32_t diff); -void SET_FLAGS_OSZAPC_ADD16(struct CPUState *cpu, uint16_t v1, uint16_t v2= , uint16_t diff); -void SET_FLAGS_OSZAPC_ADD8(struct CPUState *cpu, uint8_t v1, uint8_t v2, u= int8_t diff); - -void SET_FLAGS_OSZAP_SUB32(struct CPUState *cpu, uint32_t v1, uint32_t v2,= uint32_t diff); -void SET_FLAGS_OSZAP_SUB16(struct CPUState *cpu, uint16_t v1, uint16_t v2,= uint16_t diff); -void SET_FLAGS_OSZAP_SUB8(struct CPUState *cpu, uint8_t v1, uint8_t v2, ui= nt8_t diff); - -void SET_FLAGS_OSZAP_ADD32(struct CPUState *cpu, uint32_t v1, uint32_t v2,= uint32_t diff); -void SET_FLAGS_OSZAP_ADD16(struct CPUState *cpu, uint16_t v1, uint16_t v2,= 
uint16_t diff); -void SET_FLAGS_OSZAP_ADD8(struct CPUState *cpu, uint8_t v1, uint8_t v2, ui= nt8_t diff); +void SET_FLAGS_OSZAPC_SUB32(struct CPUState *cpu, uint32_t v1, uint32_t v2, + uint32_t diff); +void SET_FLAGS_OSZAPC_SUB16(struct CPUState *cpu, uint16_t v1, uint16_t v2, + uint16_t diff); +void SET_FLAGS_OSZAPC_SUB8(struct CPUState *cpu, uint8_t v1, uint8_t v2, + uint8_t diff); + +void SET_FLAGS_OSZAPC_ADD32(struct CPUState *cpu, uint32_t v1, uint32_t v2, + uint32_t diff); +void SET_FLAGS_OSZAPC_ADD16(struct CPUState *cpu, uint16_t v1, uint16_t v2, + uint16_t diff); +void SET_FLAGS_OSZAPC_ADD8(struct CPUState *cpu, uint8_t v1, uint8_t v2, + uint8_t diff); + +void SET_FLAGS_OSZAP_SUB32(struct CPUState *cpu, uint32_t v1, uint32_t v2, + uint32_t diff); +void SET_FLAGS_OSZAP_SUB16(struct CPUState *cpu, uint16_t v1, uint16_t v2, + uint16_t diff); +void SET_FLAGS_OSZAP_SUB8(struct CPUState *cpu, uint8_t v1, uint8_t v2, + uint8_t diff); + +void SET_FLAGS_OSZAP_ADD32(struct CPUState *cpu, uint32_t v1, uint32_t v2, + uint32_t diff); +void SET_FLAGS_OSZAP_ADD16(struct CPUState *cpu, uint16_t v1, uint16_t v2, + uint16_t diff); +void SET_FLAGS_OSZAP_ADD8(struct CPUState *cpu, uint8_t v1, uint8_t v2, + uint8_t diff); =20 void SET_FLAGS_OSZAPC_LOGIC32(struct CPUState *cpu, uint32_t diff); void SET_FLAGS_OSZAPC_LOGIC16(struct CPUState *cpu, uint16_t diff); diff --git a/target/i386/hvf-utils/x86_mmu.c b/target/i386/hvf-utils/x86_mm= u.c index 95b3d15b94..36ca7f7bdf 100644 --- a/target/i386/hvf-utils/x86_mmu.c +++ b/target/i386/hvf-utils/x86_mmu.c @@ -54,10 +54,12 @@ struct gpt_translation { =20 static int gpt_top_level(struct CPUState *cpu, bool pae) { - if (!pae) + if (!pae) { return 2; - if (x86_is_long_mode(cpu)) + } + if (x86_is_long_mode(cpu)) { return 4; + } =20 return 3; } @@ -74,18 +76,21 @@ static inline int pte_size(bool pae) } =20 =20 -static bool get_pt_entry(struct CPUState *cpu, struct gpt_translation *pt,= int level, bool pae) +static bool get_pt_entry(struct CPUState *cpu, struct gpt_translation *pt, + int level, bool pae) { int index; uint64_t pte =3D 0; addr_t page_mask =3D pae ? PAE_PTE_PAGE_MASK : LEGACY_PTE_PAGE_MASK; addr_t gpa =3D pt->pte[level] & page_mask; =20 - if (level =3D=3D 3 && !x86_is_long_mode(cpu)) + if (level =3D=3D 3 && !x86_is_long_mode(cpu)) { gpa =3D pt->pte[level]; + } =20 index =3D gpt_entry(pt->gva, level, pae); - address_space_rw(&address_space_memory, gpa + index * pte_size(pae), M= EMTXATTRS_UNSPECIFIED, (uint8_t *)&pte, pte_size(pae), 0); + address_space_rw(&address_space_memory, gpa + index * pte_size(pae), + MEMTXATTRS_UNSPECIFIED, (uint8_t *)&pte, pte_size(pae= ), 0); =20 pt->pte[level - 1] =3D pte; =20 @@ -93,32 +98,38 @@ static bool get_pt_entry(struct CPUState *cpu, struct g= pt_translation *pt, int l } =20 /* test page table entry */ -static bool test_pt_entry(struct CPUState *cpu, struct gpt_translation *pt= , int level, bool *is_large, bool pae) +static bool test_pt_entry(struct CPUState *cpu, struct gpt_translation *pt, + int level, bool *is_large, bool pae) { uint64_t pte =3D pt->pte[level]; - =20 - if (pt->write_access) + + if (pt->write_access) { pt->err_code |=3D MMU_PAGE_WT; - if (pt->user_access) + } + if (pt->user_access) { pt->err_code |=3D MMU_PAGE_US; - if (pt->exec_access) + } + if (pt->exec_access) { pt->err_code |=3D MMU_PAGE_NX; + } =20 if (!pte_present(pte)) { - addr_t page_mask =3D pae ? PAE_PTE_PAGE_MASK : LEGACY_PTE_PAGE_MAS= K; + /* addr_t page_mask =3D pae ? 
PAE_PTE_PAGE_MASK : LEGACY_PTE_PAGE_= MASK; */ return false; } - =20 - if (pae && !x86_is_long_mode(cpu) && 2 =3D=3D level) + + if (pae && !x86_is_long_mode(cpu) && 2 =3D=3D level) { goto exit; - =20 + } + if (1 =3D=3D level && pte_large_page(pte)) { pt->err_code |=3D MMU_PAGE_PT; *is_large =3D true; } - if (!level) + if (!level) { pt->err_code |=3D MMU_PAGE_PT; - =20 + } + addr_t cr0 =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0); /* check protection */ if (cr0 & CR0_WP) { @@ -149,22 +160,24 @@ static inline uint64_t large_page_gpa(struct gpt_tran= slation *pt, bool pae) { VM_PANIC_ON(!pte_large_page(pt->pte[1])) /* 2Mb large page */ - if (pae) + if (pae) { return (pt->pte[1] & PAE_PTE_LARGE_PAGE_MASK) | (pt->gva & 0x1ffff= f); - =20 + } + /* 4Mb large page */ return pse_pte_to_page(pt->pte[1]) | (pt->gva & 0x3fffff); } =20 =20 =20 -static bool walk_gpt(struct CPUState *cpu, addr_t addr, int err_code, stru= ct gpt_translation* pt, bool pae) +static bool walk_gpt(struct CPUState *cpu, addr_t addr, int err_code, + struct gpt_translation *pt, bool pae) { int top_level, level; bool is_large =3D false; addr_t cr3 =3D rvmcs(cpu->hvf_fd, VMCS_GUEST_CR3); addr_t page_mask =3D pae ? PAE_PTE_PAGE_MASK : LEGACY_PTE_PAGE_MASK; - =20 + memset(pt, 0, sizeof(*pt)); top_level =3D gpt_top_level(cpu, pae); =20 @@ -181,14 +194,16 @@ static bool walk_gpt(struct CPUState *cpu, addr_t add= r, int err_code, struct gpt return false; } =20 - if (is_large) + if (is_large) { break; + } } =20 - if (!is_large) + if (!is_large) { pt->gpa =3D (pt->pte[0] & page_mask) | (pt->gva & 0xfff); - else + } else { pt->gpa =3D large_page_gpa(pt, pae); + } =20 return true; } @@ -214,18 +229,20 @@ bool mmu_gva_to_gpa(struct CPUState *cpu, addr_t gva,= addr_t *gpa) return false; } =20 -void vmx_write_mem(struct CPUState* cpu, addr_t gva, void *data, int bytes) +void vmx_write_mem(struct CPUState *cpu, addr_t gva, void *data, int bytes) { addr_t gpa; =20 while (bytes > 0) { - // copy page + /* copy page */ int copy =3D MIN(bytes, 0x1000 - (gva & 0xfff)); =20 if (!mmu_gva_to_gpa(cpu, gva, &gpa)) { - VM_PANIC_ON_EX(1, "%s: mmu_gva_to_gpa %llx failed\n", __FUNCTI= ON__, gva); + VM_PANIC_ON_EX(1, "%s: mmu_gva_to_gpa %llx failed\n", __func__, + gva); } else { - address_space_rw(&address_space_memory, gpa, MEMTXATTRS_UNSPEC= IFIED, data, copy, 1); + address_space_rw(&address_space_memory, gpa, MEMTXATTRS_UNSPEC= IFIED, + data, copy, 1); } =20 bytes -=3D copy; @@ -234,18 +251,20 @@ void vmx_write_mem(struct CPUState* cpu, addr_t gva, = void *data, int bytes) } } =20 -void vmx_read_mem(struct CPUState* cpu, void *data, addr_t gva, int bytes) +void vmx_read_mem(struct CPUState *cpu, void *data, addr_t gva, int bytes) { addr_t gpa; =20 while (bytes > 0) { - // copy page + /* copy page */ int copy =3D MIN(bytes, 0x1000 - (gva & 0xfff)); =20 if (!mmu_gva_to_gpa(cpu, gva, &gpa)) { - VM_PANIC_ON_EX(1, "%s: mmu_gva_to_gpa %llx failed\n", __FUNCTI= ON__, gva); + VM_PANIC_ON_EX(1, "%s: mmu_gva_to_gpa %llx failed\n", __func__, + gva); } - address_space_rw(&address_space_memory, gpa, MEMTXATTRS_UNSPECIFIE= D, data, copy, 0); + address_space_rw(&address_space_memory, gpa, MEMTXATTRS_UNSPECIFIE= D, + data, copy, 0); =20 bytes -=3D copy; gva +=3D copy; diff --git a/target/i386/hvf-utils/x86_mmu.h b/target/i386/hvf-utils/x86_mm= u.h index aa0fcfafd2..b786af280b 100644 --- a/target/i386/hvf-utils/x86_mmu.h +++ b/target/i386/hvf-utils/x86_mmu.h @@ -31,7 +31,7 @@ #define PT_GLOBAL (1 << 8) #define PT_NX (1llu << 63) =20 -// error codes +/* error codes */ #define 
MMU_PAGE_PT (1 << 0) #define MMU_PAGE_WT (1 << 1) #define MMU_PAGE_US (1 << 2) @@ -39,7 +39,7 @@ =20 bool mmu_gva_to_gpa(struct CPUState *cpu, addr_t gva, addr_t *gpa); =20 -void vmx_write_mem(struct CPUState* cpu, addr_t gva, void *data, int bytes= ); -void vmx_read_mem(struct CPUState* cpu, void *data, addr_t gva, int bytes); +void vmx_write_mem(struct CPUState *cpu, addr_t gva, void *data, int bytes= ); +void vmx_read_mem(struct CPUState *cpu, void *data, addr_t gva, int bytes); =20 #endif /* __X86_MMU_H__ */ diff --git a/target/i386/hvf-utils/x86hvf.c b/target/i386/hvf-utils/x86hvf.c index aba8983dc7..1d2c49d50a 100644 --- a/target/i386/hvf-utils/x86hvf.c +++ b/target/i386/hvf-utils/x86hvf.c @@ -37,14 +37,16 @@ =20 void hvf_cpu_synchronize_state(struct CPUState* cpu_state); =20 -void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg, Se= gmentCache *qseg, bool is_tr) +void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg, + SegmentCache *qseg, bool is_tr) { vmx_seg->sel =3D qseg->selector; vmx_seg->base =3D qseg->base; vmx_seg->limit =3D qseg->limit; =20 if (!qseg->selector && !x86_is_real(cpu) && !is_tr) { - // the TR register is usable after processor reset despite having = a null selector + /* the TR register is usable after processor reset despite + * having a null selector */ vmx_seg->ar =3D 1 << 16; return; } @@ -122,7 +124,7 @@ void hvf_put_segments(CPUState *cpu_state) wvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_LIMIT, env->gdt.limit); wvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_BASE, env->gdt.base); =20 - //wvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR2, env->cr[2]); + /* wvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR2, env->cr[2]); */ wvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3, env->cr[3]); vmx_update_tpr(cpu_state); wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER, env->efer); @@ -161,9 +163,12 @@ void hvf_put_msrs(CPUState *cpu_state) { CPUX86State *env =3D &X86_CPU(cpu_state)->env; =20 - hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS, env->sysent= er_cs); - hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP, env->sysen= ter_esp); - hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_EIP, env->sysen= ter_eip); + hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS, + env->sysenter_cs); + hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP, + env->sysenter_esp); + hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_EIP, + env->sysenter_eip); =20 hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_STAR, env->star); =20 @@ -177,8 +182,8 @@ void hvf_put_msrs(CPUState *cpu_state) hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_GSBASE, env->segs[R_GS].base); hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_FSBASE, env->segs[R_FS].base); =20 - // if (!osx_is_sierra()) - // wvmcs(cpu_state->hvf_fd, VMCS_TSC_OFFSET, env->tsc - rdtscp()); + /* if (!osx_is_sierra()) + wvmcs(cpu_state->hvf_fd, VMCS_TSC_OFFSET, env->tsc - rdtscp());*/ hv_vm_sync_tsc(env->tsc); } =20 @@ -385,14 +390,16 @@ static void vmx_set_int_window_exiting(CPUState *cpu) { uint64_t val; val =3D rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS); - wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val | VMCS_PRI_PROC_BASE= D_CTLS_INT_WINDOW_EXITING); + wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val | + VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING); } =20 void vmx_clear_int_window_exiting(CPUState *cpu) { uint64_t val; val =3D rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS); - wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val & ~VMCS_PRI_PROC_BAS= ED_CTLS_INT_WINDOW_EXITING); + wvmcs(cpu->hvf_fd, 
VMCS_PRI_PROC_BASED_CTLS, val & + ~VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING); } =20 #define NMI_VEC 2 @@ -400,7 +407,8 @@ void vmx_clear_int_window_exiting(CPUState *cpu) void hvf_inject_interrupts(CPUState *cpu_state) { X86CPU *x86cpu =3D X86_CPU(cpu_state); - int allow_nmi =3D !(rvmcs(cpu_state->hvf_fd, VMCS_GUEST_INTERRUPTIBILI= TY) & VMCS_INTERRUPTIBILITY_NMI_BLOCKING); + int allow_nmi =3D !(rvmcs(cpu_state->hvf_fd, VMCS_GUEST_INTERRUPTIBILI= TY) & + VMCS_INTERRUPTIBILITY_NMI_BLOCKING); =20 uint64_t idt_info =3D rvmcs(cpu_state->hvf_fd, VMCS_IDT_VECTORING_INFO= ); uint64_t info =3D 0; @@ -421,7 +429,8 @@ void hvf_inject_interrupts(CPUState *cpu_state) if (intr_type =3D=3D VMCS_INTR_T_SWINTR || intr_type =3D=3D VMCS_INTR_T_PRIV_SWEXCEPTION || intr_type =3D=3D VMCS_INTR_T_SWEXCEPTION) { - uint64_t ins_len =3D rvmcs(cpu_state->hvf_fd, VMCS_EXIT_IN= STRUCTION_LENGTH); + uint64_t ins_len =3D rvmcs(cpu_state->hvf_fd, + VMCS_EXIT_INSTRUCTION_LENGTH); wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INST_LENGTH, ins_len); } if (vector =3D=3D EXCEPTION_BP || vector =3D=3D EXCEPTION_OF) { @@ -431,7 +440,8 @@ void hvf_inject_interrupts(CPUState *cpu_state) */ info &=3D ~VMCS_INTR_T_MASK; info |=3D VMCS_INTR_T_SWEXCEPTION; - uint64_t ins_len =3D rvmcs(cpu_state->hvf_fd, VMCS_EXIT_IN= STRUCTION_LENGTH); + uint64_t ins_len =3D rvmcs(cpu_state->hvf_fd, + VMCS_EXIT_INSTRUCTION_LENGTH); wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INST_LENGTH, ins_len); } =20 @@ -440,7 +450,7 @@ void hvf_inject_interrupts(CPUState *cpu_state) err =3D rvmcs(cpu_state->hvf_fd, VMCS_IDT_VECTORING_ERROR); wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_EXCEPTION_ERROR, err); } - //printf("reinject %lx err %d\n", info, err); + /*printf("reinject %lx err %d\n", info, err);*/ wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info); }; } @@ -455,15 +465,19 @@ void hvf_inject_interrupts(CPUState *cpu_state) } } =20 - if (cpu_state->hvf_x86->interruptable && (cpu_state->interrupt_request= & CPU_INTERRUPT_HARD) && + if (cpu_state->hvf_x86->interruptable && + (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) && (EFLAGS(cpu_state) & IF_MASK) && !(info & VMCS_INTR_VALID)) { int line =3D cpu_get_pic_interrupt(&x86cpu->env); cpu_state->interrupt_request &=3D ~CPU_INTERRUPT_HARD; - if (line >=3D 0) - wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, line | VMCS_INT= R_VALID | VMCS_INTR_T_HWINTR); + if (line >=3D 0) { + wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, line | + VMCS_INTR_VALID | VMCS_INTR_T_HWINTR); + } } - if (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) + if (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) { vmx_set_int_window_exiting(cpu_state); + } } =20 int hvf_process_events(CPUState *cpu_state) @@ -482,7 +496,8 @@ int hvf_process_events(CPUState *cpu_state) cpu_state->interrupt_request &=3D ~CPU_INTERRUPT_POLL; apic_poll_irq(cpu->apic_state); } - if (((cpu_state->interrupt_request & CPU_INTERRUPT_HARD) && (EFLAGS(cp= u_state) & IF_MASK)) || + if (((cpu_state->interrupt_request & CPU_INTERRUPT_HARD) && + (EFLAGS(cpu_state) & IF_MASK)) || (cpu_state->interrupt_request & CPU_INTERRUPT_NMI)) { cpu_state->halted =3D 0; } diff --git a/target/i386/hvf-utils/x86hvf.h b/target/i386/hvf-utils/x86hvf.h index 0c5bc3dcf8..ddc73ed114 100644 --- a/target/i386/hvf-utils/x86hvf.h +++ b/target/i386/hvf-utils/x86hvf.h @@ -24,7 +24,8 @@ int hvf_process_events(CPUState *); int hvf_put_registers(CPUState *); int hvf_get_registers(CPUState *); void hvf_inject_interrupts(CPUState *); -void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg, Se= 
gmentCache *qseg, bool is_tr); +void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg, + SegmentCache *qseg, bool is_tr); void hvf_get_segment(SegmentCache *qseg, struct vmx_segment *vmx_seg); void hvf_put_xsave(CPUState *cpu_state); void hvf_put_segments(CPUState *cpu_state); --=20 2.14.1 From nobody Sat Apr 27 16:08:04 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1504584127334511.787120593984; Mon, 4 Sep 2017 21:02:07 -0700 (PDT) Received: from localhost ([::1]:56741 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp53u-00072K-9U for importer@patchew.org; Tue, 05 Sep 2017 00:02:06 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:41483) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp4xm-0001kn-6W for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:55:54 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1dp4xg-0007uZ-1a for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:55:46 -0400 Received: from mail-vk0-x242.google.com ([2607:f8b0:400c:c05::242]:33296) by eggs.gnu.org with esmtps (TLS1.0:RSA_AES_128_CBC_SHA1:16) (Exim 4.71) (envelope-from ) id 1dp4xf-0007tv-Li for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:55:39 -0400 Received: by mail-vk0-x242.google.com with SMTP id j189so720398vka.0 for ; Mon, 04 Sep 2017 20:55:39 -0700 (PDT) Received: from localhost.localdomain ([161.10.80.59]) by smtp.gmail.com with ESMTPSA id d206sm1877252vka.29.2017.09.04.20.55.37 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Mon, 04 Sep 2017 20:55:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=9PspCa6SFaTETBfwhiAtX5G4dfeBTuchm6tW/d02E9k=; b=ck/c7LEXrgRJWFes4AuxypA4dzAJhMve+PItXkM+oUddRYQEaIgbrTKTe2ntr3SO+w wimXIVwiWBgWzlkPhpM8Lo7nFD5jJqJIpyfOVfu1R+3QHYrVOpGGOK4Htx1L7Ecd3kOx Lt5ng0fWoLoZFzN8zLBXt8I0O+MnGfeL0ysYEq9pVfoJpsm/dmzJzIfrVv9Xms5AeY3e LB3+DlqRsMaxkRAMeFEAGYhjOkHRlgOtf44TvLMiDqyjQyfZCAyWQJtL15/Hnd0Yjcq8 H6izysObLRCAogZJt9puT9cj7ySRGZc1zHWIzmAGTV6mPHs5uGFuR1iWwdh4GuOmqCFS lYoQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=9PspCa6SFaTETBfwhiAtX5G4dfeBTuchm6tW/d02E9k=; b=UwmEbvaJqGodeUzb4iMZ2luxlTqAy0ZQE5Edtvadgkj1iU+w7iPQruOgCEIIM11Ndp cSjSNJx2LQgZTe7kLdUznueH7GhWhD2kQBoNTbfi9cQRiCmVzgXsCuY7zcsDJnUH+ytw 0lOrCRuf0/0IKGCqtbHVfFm0YFiR8Rt2uaf5W/Ul6dMXXaYSiZgBrpPQRlt1ObfC3piK EBHjuvUuEwtDvn8fvwlmuGxhIhJfZYU/fI5UY3EUIjAh5Yb0W7ubtcjaZkj21EbfCI0x S6uBDlV24RuELlwnEcVas9mOHVu3UDgfrFp9/mt2C7X5zi2hMskhOVFVDhcdAb+gAuBd yOzg== X-Gm-Message-State: AHPjjUgsLC1e/X0CU07x/rRNFbSz8Nfp38Nbnw5OSp5WY8QexTIUXKzq ZRExdSA/xUN4H6A8 X-Google-Smtp-Source: ADKCNb5ms8PtobseEAiWwxgIhHJNE0kD+IzysWWyad48IW3dk7whgv5QgDJ2P3zvA7/r4qUbQyj13A== X-Received: by 10.31.4.201 with SMTP id 192mr1060762vke.88.1504583738657; Mon, 04 Sep 2017 20:55:38 -0700 (PDT) From: Sergio Andres Gomez 
Del Real X-Google-Original-From: Sergio Andres Gomez Del Real To: qemu-devel@nongnu.org Date: Mon, 4 Sep 2017 22:54:48 -0500 Message-Id: <20170905035457.3753-6-Sergio.G.DelReal@gmail.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com> References: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com> X-detected-operating-system: by eggs.gnu.org: Genre and OS details not recognized. X-Received-From: 2607:f8b0:400c:c05::242 Subject: [Qemu-devel] [PATCH v3 05/14] hvf: add code to cpus.c and do refactoring in preparation for compiling X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Sergio Andres Gomez Del Real , pbonzini@redhat.com, stefanha@gmail.com Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) X-ZohoMail: RDKM_2 RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" The files originally added from Google's repository in previous patches won't compile cleanly unless some glue code is added to cpus.c and some other reorganization is done. This patch adds that code and does some general refactoring in preparation for compiling in subsequent patches. Signed-off-by: Sergio Andres Gomez Del Real --- cpus.c | 44 +++++++++++++++++++ include/sysemu/hvf.h | 22 ++++++---- target/i386/hvf-all.c | 89 +++++++++++++++--------------------= ---- target/i386/hvf-utils/x86.c | 2 +- target/i386/hvf-utils/x86_cpuid.h | 2 +- target/i386/hvf-utils/x86_descr.h | 6 +++ target/i386/hvf-utils/x86_emu.c | 5 +-- target/i386/hvf-utils/x86_emu.h | 15 +++++++ target/i386/hvf-utils/x86_flags.h | 2 + target/i386/hvf-utils/x86hvf.c | 3 -- target/i386/hvf-utils/x86hvf.h | 2 + 11 files changed, 120 insertions(+), 72 deletions(-) diff --git a/cpus.c b/cpus.c index a2cd9dfa5d..263d36aa64 100644 --- a/cpus.c +++ b/cpus.c @@ -37,6 +37,7 @@ #include "sysemu/hw_accel.h" #include "sysemu/kvm.h" #include "sysemu/hax.h" +#include "sysemu/hvf.h" #include "qmp-commands.h" #include "exec/exec-all.h" =20 @@ -900,6 +901,9 @@ void cpu_synchronize_all_states(void) =20 CPU_FOREACH(cpu) { cpu_synchronize_state(cpu); + if (hvf_enabled()) { + hvf_cpu_synchronize_state(cpu); + } } } =20 @@ -909,6 +913,9 @@ void cpu_synchronize_all_post_reset(void) =20 CPU_FOREACH(cpu) { cpu_synchronize_post_reset(cpu); + if (hvf_enabled()) { + hvf_cpu_synchronize_post_reset(cpu); + } } } =20 @@ -918,6 +925,9 @@ void cpu_synchronize_all_post_init(void) =20 CPU_FOREACH(cpu) { cpu_synchronize_post_init(cpu); + if (hvf_enabled()) { + hvf_cpu_synchronize_post_init(cpu); + } } } =20 @@ -1098,6 +1108,14 @@ static void qemu_kvm_wait_io_event(CPUState *cpu) qemu_wait_io_event_common(cpu); } =20 +static void qemu_hvf_wait_io_event(CPUState *cpu) +{ + while (cpu_thread_is_idle(cpu)) { + qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex); + } + qemu_wait_io_event_common(cpu); +} + static void *qemu_kvm_cpu_thread_fn(void *arg) { CPUState *cpu =3D arg; @@ -1564,6 +1582,9 @@ static void qemu_cpu_kick_thread(CPUState *cpu) fprintf(stderr, "qemu:%s: %s", __func__, strerror(err)); exit(1); } + if (hvf_enabled()) { + cpu_exit(cpu); + } #else /* _WIN32 */ if (!qemu_cpu_is_self(cpu)) { if (!QueueUserAPC(dummy_apc_func, cpu->hThread, 0)) { @@ -1780,6 +1801,27 @@ static void qemu_kvm_start_vcpu(CPUState *cpu) } } =20 +static void qemu_hvf_start_vcpu(CPUState 
*cpu) +{ + char thread_name[VCPU_THREAD_NAME_SIZE]; + + /* HVF currently does not support TCG, and only runs in + * unrestricted-guest mode. */ + assert(hvf_enabled()); + + cpu->thread =3D g_malloc0(sizeof(QemuThread)); + cpu->halt_cond =3D g_malloc0(sizeof(QemuCond)); + qemu_cond_init(cpu->halt_cond); + + snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/HVF", + cpu->cpu_index); + qemu_thread_create(cpu->thread, thread_name, qemu_hvf_cpu_thread_fn, + cpu, QEMU_THREAD_JOINABLE); + while (!cpu->created) { + qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex); + } +} + static void qemu_dummy_start_vcpu(CPUState *cpu) { char thread_name[VCPU_THREAD_NAME_SIZE]; @@ -1816,6 +1858,8 @@ void qemu_init_vcpu(CPUState *cpu) qemu_kvm_start_vcpu(cpu); } else if (hax_enabled()) { qemu_hax_start_vcpu(cpu); + } else if (hvf_enabled()) { + qemu_hvf_start_vcpu(cpu); } else if (tcg_enabled()) { qemu_tcg_init_vcpu(cpu); } else { diff --git a/include/sysemu/hvf.h b/include/sysemu/hvf.h index d068a95e93..944b014596 100644 --- a/include/sysemu/hvf.h +++ b/include/sysemu/hvf.h @@ -15,15 +15,24 @@ #include "config-host.h" #include "qemu/osdep.h" #include "qemu-common.h" -#include "hw/hw.h" -#include "target/i386/cpu.h" #include "qemu/bitops.h" #include "exec/memory.h" #include "sysemu/accel.h" + +extern int hvf_disabled; +#ifdef CONFIG_HVF #include #include #include - +#include "target/i386/cpu.h" +#include "hw/hw.h" +uint32_t hvf_get_supported_cpuid(uint32_t func, uint32_t idx, + int reg); +#define hvf_enabled() !hvf_disabled +#else +#define hvf_enabled() 0 +#define hvf_get_supported_cpuid(func, idx, reg) 0 +#endif =20 typedef struct hvf_slot { uint64_t start; @@ -41,7 +50,6 @@ struct hvf_vcpu_caps { uint64_t vmx_cap_preemption_timer; }; =20 -int __hvf_set_memory(hvf_slot *); typedef struct HVFState { AccelState parent; hvf_slot slots[32]; @@ -56,8 +64,6 @@ void hvf_handle_io(CPUArchState *, uint16_t, void *, int, int, int); hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t); =20 -/* Returns 1 if HVF is available and enabled, 0 otherwise. */ -int hvf_enabled(void); /* Disable HVF if |disable| is 1, otherwise, enable it iff it is supported= by * the host CPU. Use hvf_enabled() after this to get the result. 
*/ void hvf_disable(int disable); @@ -83,12 +89,10 @@ void hvf_vcpu_destroy(CPUState *); void hvf_raise_event(CPUState *); /* void hvf_reset_vcpu_state(void *opaque); */ void vmx_reset_vcpu(CPUState *); -void __hvf_cpu_synchronize_state(CPUState *, run_on_cpu_data); -void __hvf_cpu_synchronize_post_reset(CPUState *, run_on_cpu_data); void vmx_update_tpr(CPUState *); void update_apic_tpr(CPUState *); -int apic_get_highest_priority_irr(DeviceState *); int hvf_put_registers(CPUState *); +void vmx_clear_int_window_exiting(CPUState *cpu); =20 #define TYPE_HVF_ACCEL ACCEL_CLASS_NAME("hvf") =20 diff --git a/target/i386/hvf-all.c b/target/i386/hvf-all.c index 9b871c2aa0..ade5e9ab46 100644 --- a/target/i386/hvf-all.c +++ b/target/i386/hvf-all.c @@ -53,7 +53,7 @@ =20 pthread_rwlock_t mem_lock =3D PTHREAD_RWLOCK_INITIALIZER; HVFState *hvf_state; -static int hvf_disabled =3D 1; +int hvf_disabled =3D 1; =20 static void assert_hvf_ok(hv_return_t ret) { @@ -62,26 +62,26 @@ static void assert_hvf_ok(hv_return_t ret) } =20 switch (ret) { - case HV_ERROR: - fprintf(stderr, "Error: HV_ERROR\n"); - break; - case HV_BUSY: - fprintf(stderr, "Error: HV_BUSY\n"); - break; - case HV_BAD_ARGUMENT: - fprintf(stderr, "Error: HV_BAD_ARGUMENT\n"); - break; - case HV_NO_RESOURCES: - fprintf(stderr, "Error: HV_NO_RESOURCES\n"); - break; - case HV_NO_DEVICE: - fprintf(stderr, "Error: HV_NO_DEVICE\n"); - break; - case HV_UNSUPPORTED: - fprintf(stderr, "Error: HV_UNSUPPORTED\n"); - break; - default: - fprintf(stderr, "Unknown Error\n"); + case HV_ERROR: + error_report("Error: HV_ERROR\n"); + break; + case HV_BUSY: + error_report("Error: HV_BUSY\n"); + break; + case HV_BAD_ARGUMENT: + error_report("Error: HV_BAD_ARGUMENT\n"); + break; + case HV_NO_RESOURCES: + error_report("Error: HV_NO_RESOURCES\n"); + break; + case HV_NO_DEVICE: + error_report("Error: HV_NO_DEVICE\n"); + break; + case HV_UNSUPPORTED: + error_report("Error: HV_UNSUPPORTED\n"); + break; + default: + error_report("Unknown Error\n"); } =20 abort(); @@ -112,11 +112,10 @@ struct mac_slot { struct mac_slot mac_slots[32]; #define ALIGN(x, y) (((x) + (y) - 1) & ~((y) - 1)) =20 -int __hvf_set_memory(hvf_slot *slot) +static int do_hvf_set_memory(hvf_slot *slot) { struct mac_slot *macslot; hv_memory_flags_t flags; - pthread_rwlock_wrlock(&mem_lock); hv_return_t ret; =20 macslot =3D &mac_slots[slot->slot_id]; @@ -130,7 +129,6 @@ int __hvf_set_memory(hvf_slot *slot) } =20 if (!slot->size) { - pthread_rwlock_unlock(&mem_lock); return 0; } =20 @@ -141,7 +139,6 @@ int __hvf_set_memory(hvf_slot *slot) macslot->size =3D slot->size; ret =3D hv_vm_map((hv_uvaddr_t)slot->mem, slot->start, slot->size, fla= gs); assert_hvf_ok(ret); - pthread_rwlock_unlock(&mem_lock); return 0; } =20 @@ -170,8 +167,8 @@ void hvf_set_phys_mem(MemoryRegionSection *section, boo= l add) /* Region needs to be reset. set the size to 0 and remap it. 
*/ if (mem) { mem->size =3D 0; - if (__hvf_set_memory(mem)) { - fprintf(stderr, "Failed to reset overlapping slot\n"); + if (do_hvf_set_memory(mem)) { + error_report("Failed to reset overlapping slot\n"); abort(); } } @@ -191,7 +188,7 @@ void hvf_set_phys_mem(MemoryRegionSection *section, boo= l add) } =20 if (x =3D=3D hvf_state->num_slots) { - fprintf(stderr, "No free slots\n"); + error_report("No free slots\n"); abort(); } =20 @@ -199,24 +196,12 @@ void hvf_set_phys_mem(MemoryRegionSection *section, b= ool add) mem->mem =3D memory_region_get_ram_ptr(area) + section->offset_within_= region; mem->start =3D section->offset_within_address_space; =20 - if (__hvf_set_memory(mem)) { - fprintf(stderr, "Error registering new memory slot\n"); + if (do_hvf_set_memory(mem)) { + error_report("Error registering new memory slot\n"); abort(); } } =20 -/* return -1 if no bit is set */ -static int get_highest_priority_int(uint32_t *tab) -{ - int i; - for (i =3D 7; i >=3D 0; i--) { - if (tab[i] !=3D 0) { - return i * 32 + apic_fls_bit(tab[i]); - } - } - return -1; -} - void vmx_update_tpr(CPUState *cpu) { /* TODO: need integrate APIC handling */ @@ -263,11 +248,11 @@ void hvf_handle_io(CPUArchState *env, uint16_t port, = void *buffer, ptr +=3D size; } } -void __hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg) =20 /* TODO: synchronize vcpu state */ +static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data ar= g) { - CPUState *cpu_state =3D cpu;//(CPUState *)data; + CPUState *cpu_state =3D cpu; if (cpu_state->hvf_vcpu_dirty =3D=3D 0) { hvf_get_registers(cpu_state); } @@ -277,12 +262,12 @@ void __hvf_cpu_synchronize_state(CPUState *cpu, run_o= n_cpu_data arg) =20 void hvf_cpu_synchronize_state(CPUState *cpu_state) { - run_on_cpu(cpu_state, __hvf_cpu_synchronize_state, RUN_ON_CPU_NULL= ); if (cpu_state->hvf_vcpu_dirty =3D=3D 0) { + run_on_cpu(cpu_state, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NUL= L); } } =20 -void __hvf_cpu_synchronize_post_reset(CPUState *cpu, run_on_cpu_data arg) +static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu, run_on_cpu_da= ta arg) { CPUState *cpu_state =3D cpu; hvf_put_registers(cpu_state); @@ -291,7 +276,7 @@ void __hvf_cpu_synchronize_post_reset(CPUState *cpu, ru= n_on_cpu_data arg) =20 void hvf_cpu_synchronize_post_reset(CPUState *cpu_state) { - run_on_cpu(cpu_state, __hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NUL= L); + run_on_cpu(cpu_state, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NU= LL); } =20 void _hvf_cpu_synchronize_post_init(CPUState *cpu, run_on_cpu_data arg) @@ -354,12 +339,11 @@ static MemoryListener hvf_memory_listener =3D { .region_del =3D hvf_region_del, }; =20 -static MemoryListener hvf_io_listener =3D { - .priority =3D 10, -}; - void vmx_reset_vcpu(CPUState *cpu) { =20 + /* TODO: this shouldn't be needed; there is already a call to + * cpu_synchronize_all_post_reset in vl.c + */ wvmcs(cpu->hvf_fd, VMCS_ENTRY_CTLS, 0); wvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER, 0); macvm_set_cr0(cpu->hvf_fd, 0x60000010); @@ -588,8 +572,6 @@ int hvf_vcpu_exec(CPUState *cpu) RFLAGS(cpu) =3D rreg(cpu->hvf_fd, HV_X86_RFLAGS); env->eflags =3D RFLAGS(cpu); =20 - trace_hvf_vm_exit(exit_reason, exit_qual); - qemu_mutex_lock_iothread(); =20 update_apic_tpr(cpu); @@ -846,7 +828,6 @@ static int hvf_accel_init(MachineState *ms) hvf_state =3D s; cpu_interrupt_handler =3D hvf_handle_interrupt; memory_listener_register(&hvf_memory_listener, &address_space_memory); - memory_listener_register(&hvf_io_listener, &address_space_io); return 0; } =20 diff --git 
a/target/i386/hvf-utils/x86.c b/target/i386/hvf-utils/x86.c index 045b520425..1f68287830 100644 --- a/target/i386/hvf-utils/x86.c +++ b/target/i386/hvf-utils/x86.c @@ -26,7 +26,7 @@ #include "x86_mmu.h" #include "x86_descr.h" =20 -static uint32_t x86_segment_access_rights(struct x86_segment_descriptor *v= ar) +/* static uint32_t x86_segment_access_rights(struct x86_segment_descriptor= *var) { uint32_t ar; =20 diff --git a/target/i386/hvf-utils/x86_cpuid.h b/target/i386/hvf-utils/x86_= cpuid.h index 45d27c457a..da13ea8aca 100644 --- a/target/i386/hvf-utils/x86_cpuid.h +++ b/target/i386/hvf-utils/x86_cpuid.h @@ -35,7 +35,7 @@ struct x86_cpuid { uint32_t features, ext_features, ext2_features, ext3_features; uint32_t kvm_features, svm_features; uint32_t xlevel; - char model_id[48]; + char model_id[50]; int vendor_override; uint32_t flags; uint32_t xlevel2; diff --git a/target/i386/hvf-utils/x86_descr.h b/target/i386/hvf-utils/x86_= descr.h index e2d3f75054..1285dd3897 100644 --- a/target/i386/hvf-utils/x86_descr.h +++ b/target/i386/hvf-utils/x86_descr.h @@ -47,3 +47,9 @@ void x86_segment_descriptor_to_vmx(struct CPUState *cpu, x68_segment_selector selector, struct x86_segment_descriptor *desc, struct vmx_segment *vmx_desc); + +uint32_t vmx_read_segment_limit(CPUState *cpu, x86_reg_segment seg); +uint32_t vmx_read_segment_ar(CPUState *cpu, x86_reg_segment seg); +void vmx_segment_to_x86_descriptor(struct CPUState *cpu, + struct vmx_segment *vmx_desc, + struct x86_segment_descriptor *desc); diff --git a/target/i386/hvf-utils/x86_emu.c b/target/i386/hvf-utils/x86_em= u.c index f53d1b1995..861319d17e 100644 --- a/target/i386/hvf-utils/x86_emu.c +++ b/target/i386/hvf-utils/x86_emu.c @@ -45,7 +45,6 @@ #include "vmcs.h" #include "vmx.h" =20 -static void print_debug(struct CPUState *cpu); void hvf_handle_io(struct CPUState *cpu, uint16_t port, void *data, int direction, int size, uint32_t count); =20 @@ -1473,13 +1472,11 @@ void load_regs(struct CPUState *cpu) RRX(cpu, REG_RBP) =3D rreg(cpu->hvf_fd, HV_X86_RBP); for (i =3D 8; i < 16; i++) { RRX(cpu, i) =3D rreg(cpu->hvf_fd, HV_X86_RAX + i); - =20 } + RFLAGS(cpu) =3D rreg(cpu->hvf_fd, HV_X86_RFLAGS); rflags_to_lflags(cpu); RIP(cpu) =3D rreg(cpu->hvf_fd, HV_X86_RIP); - - //print_debug(cpu); } =20 void store_regs(struct CPUState *cpu) diff --git a/target/i386/hvf-utils/x86_emu.h b/target/i386/hvf-utils/x86_em= u.h index f6feff5553..be80350ed8 100644 --- a/target/i386/hvf-utils/x86_emu.h +++ b/target/i386/hvf-utils/x86_emu.h @@ -30,4 +30,19 @@ void store_regs(struct CPUState *cpu); void simulate_rdmsr(struct CPUState *cpu); void simulate_wrmsr(struct CPUState *cpu); =20 +addr_t read_reg(struct CPUState *cpu, int reg, int size); +void write_reg(struct CPUState *cpu, int reg, addr_t val, int size); +addr_t read_val_from_reg(addr_t reg_ptr, int size); +void write_val_to_reg(addr_t reg_ptr, addr_t val, int size); +void write_val_ext(struct CPUState *cpu, addr_t ptr, addr_t val, int size); +uint8_t *read_mmio(struct CPUState *cpu, addr_t ptr, int bytes); +addr_t read_val_ext(struct CPUState *cpu, addr_t ptr, int size); + +void exec_movzx(struct CPUState *cpu, struct x86_decode *decode); +void exec_shl(struct CPUState *cpu, struct x86_decode *decode); +void exec_movsx(struct CPUState *cpu, struct x86_decode *decode); +void exec_ror(struct CPUState *cpu, struct x86_decode *decode); +void exec_rol(struct CPUState *cpu, struct x86_decode *decode); +void exec_rcl(struct CPUState *cpu, struct x86_decode *decode); +void exec_rcr(struct CPUState *cpu, struct x86_decode 
*decode); #endif diff --git a/target/i386/hvf-utils/x86_flags.h b/target/i386/hvf-utils/x86_= flags.h index 87a694b408..68a0c10b90 100644 --- a/target/i386/hvf-utils/x86_flags.h +++ b/target/i386/hvf-utils/x86_flags.h @@ -242,4 +242,6 @@ void SET_FLAGS_SHL32(struct CPUState *cpu, uint32_t v, = int count, uint32_t res); void SET_FLAGS_SHL16(struct CPUState *cpu, uint16_t v, int count, uint16_t= res); void SET_FLAGS_SHL8(struct CPUState *cpu, uint8_t v, int count, uint8_t re= s); =20 +bool _get_OF(struct CPUState *cpu); +bool _get_CF(struct CPUState *cpu); #endif /* __X86_FLAGS_H__ */ diff --git a/target/i386/hvf-utils/x86hvf.c b/target/i386/hvf-utils/x86hvf.c index 1d2c49d50a..c920064659 100644 --- a/target/i386/hvf-utils/x86hvf.c +++ b/target/i386/hvf-utils/x86hvf.c @@ -35,8 +35,6 @@ #include #include =20 -void hvf_cpu_synchronize_state(struct CPUState* cpu_state); - void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg, SegmentCache *qseg, bool is_tr) { @@ -112,7 +110,6 @@ void hvf_put_xsave(CPUState *cpu_state) } } =20 -void vmx_update_tpr(CPUState *cpu); void hvf_put_segments(CPUState *cpu_state) { CPUX86State *env =3D &X86_CPU(cpu_state)->env; diff --git a/target/i386/hvf-utils/x86hvf.h b/target/i386/hvf-utils/x86hvf.h index ddc73ed114..26da3dab57 100644 --- a/target/i386/hvf-utils/x86hvf.h +++ b/target/i386/hvf-utils/x86hvf.h @@ -34,4 +34,6 @@ void hvf_get_xsave(CPUState *cpu_state); void hvf_get_msrs(CPUState *cpu_state); void vmx_clear_int_window_exiting(CPUState *cpu); void hvf_get_segments(CPUState *cpu_state); +void vmx_update_tpr(CPUState *cpu); +void hvf_cpu_synchronize_state(CPUState *cpu_state); #endif --=20 2.14.1 From nobody Sat Apr 27 16:08:04 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1504584498426173.00352920593627; Mon, 4 Sep 2017 21:08:18 -0700 (PDT) Received: from localhost ([::1]:56771 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp59s-00037Q-Sq for importer@patchew.org; Tue, 05 Sep 2017 00:08:16 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:41768) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp4y3-0001yX-GO for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:56:23 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1dp4xl-0007zC-8V for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:56:03 -0400 Received: from mail-vk0-x243.google.com ([2607:f8b0:400c:c05::243]:35378) by eggs.gnu.org with esmtps (TLS1.0:RSA_AES_128_CBC_SHA1:16) (Exim 4.71) (envelope-from ) id 1dp4xk-0007yM-Lb for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:55:45 -0400 Received: by mail-vk0-x243.google.com with SMTP id t10so716000vke.2 for ; Mon, 04 Sep 2017 20:55:44 -0700 (PDT) Received: from localhost.localdomain ([161.10.80.59]) by smtp.gmail.com with ESMTPSA id d206sm1877252vka.29.2017.09.04.20.55.38 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Mon, 04 Sep 2017 20:55:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; 
h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=i9m2MvNE4ivIG5Vn+ynIC59MYEv7gp7LmHZxsub8ZIA=; b=CvfO1bVofrwLxcbzQq0JM9ufYeK1czIasdVq8H9RmgUD7C48mFv/+7Dyv3sMSkejgE mke4WakjbF1/Nsfejy88TBM+8SCbIbX4zA5YuuLC0lSgtO06FL6vyug57ubb5+eZLh2L vnaEKx/jF6B5DMxVaJlfyKV9+WBfdg++YnI5tEXCeGErbdD5TMU/k2f6YUWVRodMsTJM 2tQM4NYhfE69C028wWZSs76xrTfNAjVWdbfqL5CkWNAg/5ouIG48VUzaRku66R4xoq7z WevUHDj7FTWR0n2AY2skr6Vv73xaBXpU5zDNd6O0VqpN4U2c2UFQ1SXlVb7fNIh9oXDa 3Srg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=i9m2MvNE4ivIG5Vn+ynIC59MYEv7gp7LmHZxsub8ZIA=; b=XspzCMk2ro1OC+oAAwkuYIrs5mgWIQZJuDmSNwPe3PAPhfBu5+NdR3IG3jlqyGTTxP ff6M7jfjerjKIGIx4srq0PKNJuLmmEGYi4SsX7fekAm89Lukm0ttNnmqiEenw60HddCL LiIS07VPtjnwMw+0kgfmwu1ot486ROt4d/MOd9+gIBdeMsd2McCnS63Oy98ejuMmOd8M 43+sgJ4jhqpEmzkMNJR1iasuNMzneV1pKJESjNvwliCqf45aGP8EDS8dJvCwEOzna0W9 ORKdxNzUzqCUrAeNASd6CjDiscnBcigMFJMkMjt7jT2FJYwb1yL6UamyixokGemZ72wY +DjA== X-Gm-Message-State: AHPjjUjZ3P1bpDSHCfkoRnIplqia4seYGW66nGlE6Z0cgUX9eYjtP4YI FHKjzitMkxOISJKg X-Google-Smtp-Source: ADKCNb66V0ziF6T+vQ2VpHpQHUWqYycwNUReIPcACbfZyA3pMz3PbQS0L/q7J52ToKz2LatouKPN7w== X-Received: by 10.31.200.195 with SMTP id y186mr1379548vkf.93.1504583742533; Mon, 04 Sep 2017 20:55:42 -0700 (PDT) From: Sergio Andres Gomez Del Real X-Google-Original-From: Sergio Andres Gomez Del Real To: qemu-devel@nongnu.org Date: Mon, 4 Sep 2017 22:54:49 -0500 Message-Id: <20170905035457.3753-7-Sergio.G.DelReal@gmail.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com> References: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com> X-detected-operating-system: by eggs.gnu.org: Genre and OS details not recognized. X-Received-From: 2607:f8b0:400c:c05::243 Subject: [Qemu-devel] [PATCH v3 06/14] hvf: handle fields from CPUState and CPUX86State X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Sergio Andres Gomez Del Real , pbonzini@redhat.com, stefanha@gmail.com Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) X-ZohoMail: RDKM_2 RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" This commit is a small refactoring of hvf's emulation code: it moves the HVFX86EmulatorState field to CPUX86State, and, in general, changes the emulation functions to take a 'CPUX86State *' parameter instead of a 'CPUState *', so we don't have to get the 'env' (which is what we really need) through the 'cpu' every time. This commit also adds some hvf-specific fields to CPUState and CPUX86State, as well as some handy #defines.
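The shape of the change is easiest to see in a reduced form. The sketch below is illustrative only and is not part of the diff that follows; the types are hypothetical, simplified stand-ins rather than the real QEMU structures:

/* Illustrative sketch only: hypothetical stand-ins for the structures
 * touched by this patch. */
#include <stdio.h>

typedef struct HVFEmulatorState {
    unsigned long fetch_rip;
} HVFEmulatorState;

typedef struct ArchState {               /* stands in for CPUX86State */
    unsigned long eflags;
    HVFEmulatorState *hvf_emul;          /* emulator state now lives here */
} ArchState;

typedef struct GenericCPU {              /* stands in for CPUState */
    ArchState env;
} GenericCPU;

/* Before: every emulation helper took the generic CPU and had to dig
 * the architecture state out of it. */
static void helper_before(GenericCPU *cpu)
{
    ArchState *env = &cpu->env;          /* repeated in every helper */
    printf("before: eflags=%lx\n", env->eflags);
}

/* After: helpers take the architecture state they actually operate on,
 * and reach the emulator state through env->hvf_emul. */
static void helper_after(ArchState *env)
{
    printf("after: eflags=%lx fetch_rip=%lx\n",
           env->eflags, env->hvf_emul->fetch_rip);
}

int main(void)
{
    HVFEmulatorState emul = { .fetch_rip = 0x1000 };
    GenericCPU cpu = { .env = { .eflags = 0x2, .hvf_emul = &emul } };

    helper_before(&cpu);
    helper_after(&cpu.env);              /* callers now pass env directly */
    return 0;
}

The diff below applies this same move throughout hvf-all.c and the hvf-utils emulation files.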
Signed-off-by: Sergio Andres Gomez Del Real --- include/qom/cpu.h | 2 + target/i386/cpu.h | 38 ++- target/i386/hvf-all.c | 73 ++-- target/i386/hvf-utils/x86.c | 4 +- target/i386/hvf-utils/x86.h | 34 +- target/i386/hvf-utils/x86_decode.c | 363 ++++++++++---------- target/i386/hvf-utils/x86_decode.h | 23 +- target/i386/hvf-utils/x86_emu.c | 681 +++++++++++++++++++--------------= ---- target/i386/hvf-utils/x86_emu.h | 29 +- target/i386/hvf-utils/x86_flags.c | 194 +++++------ target/i386/hvf-utils/x86_flags.h | 106 +++--- target/i386/hvf-utils/x86hvf.c | 16 +- 12 files changed, 801 insertions(+), 762 deletions(-) diff --git a/include/qom/cpu.h b/include/qom/cpu.h index 25eefea7ab..a79f37e20a 100644 --- a/include/qom/cpu.h +++ b/include/qom/cpu.h @@ -407,6 +407,8 @@ struct CPUState { * unnecessary flushes. */ uint16_t pending_tlb_flush; + + int hvf_fd; }; =20 QTAILQ_HEAD(CPUTailQ, CPUState); diff --git a/target/i386/cpu.h b/target/i386/cpu.h index 051867399b..a904072009 100644 --- a/target/i386/cpu.h +++ b/target/i386/cpu.h @@ -23,6 +23,9 @@ #include "qemu-common.h" #include "cpu-qom.h" #include "standard-headers/asm-x86/hyperv.h" +#if defined(CONFIG_HVF) +#include "target/i386/hvf-utils/x86.h" +#endif =20 #ifdef TARGET_X86_64 #define TARGET_LONG_BITS 64 @@ -82,16 +85,20 @@ #define R_GS 5 =20 /* segment descriptor fields */ -#define DESC_G_MASK (1 << 23) +#define DESC_G_SHIFT 23 +#define DESC_G_MASK (1 << DESC_G_SHIFT) #define DESC_B_SHIFT 22 #define DESC_B_MASK (1 << DESC_B_SHIFT) #define DESC_L_SHIFT 21 /* x86_64 only : 64 bit code segment */ #define DESC_L_MASK (1 << DESC_L_SHIFT) -#define DESC_AVL_MASK (1 << 20) -#define DESC_P_MASK (1 << 15) +#define DESC_AVL_SHIFT 20 +#define DESC_AVL_MASK (1 << DESC_AVL_SHIFT) +#define DESC_P_SHIFT 15 +#define DESC_P_MASK (1 << DESC_P_SHIFT) #define DESC_DPL_SHIFT 13 #define DESC_DPL_MASK (3 << DESC_DPL_SHIFT) -#define DESC_S_MASK (1 << 12) +#define DESC_S_SHIFT 12 +#define DESC_S_MASK (1 << DESC_S_SHIFT) #define DESC_TYPE_SHIFT 8 #define DESC_TYPE_MASK (15 << DESC_TYPE_SHIFT) #define DESC_A_MASK (1 << 8) @@ -631,6 +638,7 @@ typedef uint32_t FeatureWordArray[FEATURE_WORDS]; #define CPUID_7_0_EBX_AVX512BW (1U << 30) /* AVX-512 Byte and Word Instruc= tions */ #define CPUID_7_0_EBX_AVX512VL (1U << 31) /* AVX-512 Vector Length Extensi= ons */ =20 +#define CPUID_7_0_ECX_AVX512BMI (1U << 1) #define CPUID_7_0_ECX_VBMI (1U << 1) /* AVX-512 Vector Byte Manipulat= ion Instrs */ #define CPUID_7_0_ECX_UMIP (1U << 2) #define CPUID_7_0_ECX_PKU (1U << 3) @@ -806,6 +814,20 @@ typedef struct SegmentCache { float64 _d_##n[(bits)/64]; \ } =20 +typedef union { + uint8_t _b[16]; + uint16_t _w[8]; + uint32_t _l[4]; + uint64_t _q[2]; +} XMMReg; + +typedef union { + uint8_t _b[32]; + uint16_t _w[16]; + uint32_t _l[8]; + uint64_t _q[4]; +} YMMReg; + typedef MMREG_UNION(ZMMReg, 512) ZMMReg; typedef MMREG_UNION(MMXReg, 64) MMXReg; =20 @@ -1041,7 +1063,11 @@ typedef struct CPUX86State { ZMMReg xmm_t0; MMXReg mmx_t0; =20 + XMMReg ymmh_regs[CPU_NB_REGS]; + uint64_t opmask_regs[NB_OPMASK_REGS]; + YMMReg zmmh_regs[CPU_NB_REGS]; + ZMMReg hi16_zmm_regs[CPU_NB_REGS]; =20 /* sysenter registers */ uint32_t sysenter_cs; @@ -1164,11 +1190,15 @@ typedef struct CPUX86State { int32_t interrupt_injected; uint8_t soft_interrupt; uint8_t has_error_code; + uint32_t ins_len; uint32_t sipi_vector; bool tsc_valid; int64_t tsc_khz; int64_t user_tsc_khz; /* for sanity check only */ void *kvm_xsave_buf; +#if defined(CONFIG_HVF) + HVFX86EmulatorState *hvf_emul; +#endif =20 uint64_t mcg_cap; uint64_t 
mcg_ctl; diff --git a/target/i386/hvf-all.c b/target/i386/hvf-all.c index ade5e9ab46..be68c71ea0 100644 --- a/target/i386/hvf-all.c +++ b/target/i386/hvf-all.c @@ -253,16 +253,16 @@ void hvf_handle_io(CPUArchState *env, uint16_t port, = void *buffer, static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data ar= g) { CPUState *cpu_state =3D cpu; - if (cpu_state->hvf_vcpu_dirty =3D=3D 0) { + if (cpu_state->vcpu_dirty =3D=3D 0) { hvf_get_registers(cpu_state); } =20 - cpu_state->hvf_vcpu_dirty =3D 1; + cpu_state->vcpu_dirty =3D 1; } =20 void hvf_cpu_synchronize_state(CPUState *cpu_state) { - if (cpu_state->hvf_vcpu_dirty =3D=3D 0) { + if (cpu_state->vcpu_dirty =3D=3D 0) { run_on_cpu(cpu_state, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NUL= L); } } @@ -271,7 +271,7 @@ static void do_hvf_cpu_synchronize_post_reset(CPUState = *cpu, run_on_cpu_data arg { CPUState *cpu_state =3D cpu; hvf_put_registers(cpu_state); - cpu_state->hvf_vcpu_dirty =3D false; + cpu_state->vcpu_dirty =3D false; } =20 void hvf_cpu_synchronize_post_reset(CPUState *cpu_state) @@ -283,7 +283,7 @@ void _hvf_cpu_synchronize_post_init(CPUState *cpu, run_= on_cpu_data arg) { CPUState *cpu_state =3D cpu; hvf_put_registers(cpu_state); - cpu_state->hvf_vcpu_dirty =3D false; + cpu_state->vcpu_dirty =3D false; } =20 void hvf_cpu_synchronize_post_init(CPUState *cpu_state) @@ -436,7 +436,8 @@ static void dummy_signal(int sig) int hvf_init_vcpu(CPUState *cpu) { =20 - X86CPU *x86cpu; + X86CPU *x86cpu =3D X86_CPU(cpu); + CPUX86State *env =3D &x86cpu->env; =20 /* init cpu signals */ sigset_t set; @@ -450,15 +451,15 @@ int hvf_init_vcpu(CPUState *cpu) sigdelset(&set, SIG_IPI); =20 int r; - init_emu(cpu); - init_decoder(cpu); + init_emu(); + init_decoder(); init_cpuid(cpu); =20 hvf_state->hvf_caps =3D g_new0(struct hvf_vcpu_caps, 1); env->hvf_emul =3D g_new0(HVFX86EmulatorState, 1); =20 r =3D hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT); - cpu->hvf_vcpu_dirty =3D 1; + cpu->vcpu_dirty =3D 1; assert_hvf_ok(r); =20 if (hv_vmx_read_capability(HV_VMX_CAP_PINBASED, @@ -539,12 +540,12 @@ int hvf_vcpu_exec(CPUState *cpu) } =20 do { - if (cpu->hvf_vcpu_dirty) { + if (cpu->vcpu_dirty) { hvf_put_registers(cpu); - cpu->hvf_vcpu_dirty =3D false; + cpu->vcpu_dirty =3D false; } =20 - cpu->hvf_x86->interruptable =3D + env->hvf_emul->interruptable =3D !(rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) & (VMCS_INTERRUPTIBILITY_STI_BLOCKING | VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)); @@ -569,8 +570,8 @@ int hvf_vcpu_exec(CPUState *cpu) VMCS_EXIT_INSTRUCTION_LENGTH); uint64_t idtvec_info =3D rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_INF= O); rip =3D rreg(cpu->hvf_fd, HV_X86_RIP); - RFLAGS(cpu) =3D rreg(cpu->hvf_fd, HV_X86_RFLAGS); - env->eflags =3D RFLAGS(cpu); + RFLAGS(env) =3D rreg(cpu->hvf_fd, HV_X86_RFLAGS); + env->eflags =3D RFLAGS(env); =20 qemu_mutex_lock_iothread(); =20 @@ -582,7 +583,7 @@ int hvf_vcpu_exec(CPUState *cpu) case EXIT_REASON_HLT: { macvm_set_rip(cpu, rip + ins_len); if (!((cpu->interrupt_request & CPU_INTERRUPT_HARD) && - (EFLAGS(cpu) & IF_MASK)) + (EFLAGS(env) & IF_MASK)) && !(cpu->interrupt_request & CPU_INTERRUPT_NMI) && !(idtvec_info & VMCS_IDT_VEC_VALID)) { cpu->halted =3D 1; @@ -612,10 +613,10 @@ int hvf_vcpu_exec(CPUState *cpu) struct x86_decode decode; =20 load_regs(cpu); - cpu->hvf_x86->fetch_rip =3D rip; + env->hvf_emul->fetch_rip =3D rip; =20 - decode_instruction(cpu, &decode); - exec_instruction(cpu, &decode); + decode_instruction(env, &decode); + exec_instruction(env, &decode); store_regs(cpu); break; } @@ 
-638,20 +639,20 @@ int hvf_vcpu_exec(CPUState *cpu) load_regs(cpu); hvf_handle_io(env, port, &val, 0, size, 1); if (size =3D=3D 1) { - AL(cpu) =3D val; + AL(env) =3D val; } else if (size =3D=3D 2) { - AX(cpu) =3D val; + AX(env) =3D val; } else if (size =3D=3D 4) { - RAX(cpu) =3D (uint32_t)val; + RAX(env) =3D (uint32_t)val; } else { VM_PANIC("size"); } - RIP(cpu) +=3D ins_len; + RIP(env) +=3D ins_len; store_regs(cpu); break; } else if (!string && !in) { - RAX(cpu) =3D rreg(cpu->hvf_fd, HV_X86_RAX); - hvf_handle_io(env, port, &RAX(cpu), 1, size, 1); + RAX(env) =3D rreg(cpu->hvf_fd, HV_X86_RAX); + hvf_handle_io(env, port, &RAX(env), 1, size, 1); macvm_set_rip(cpu, rip + ins_len); break; } @@ -659,11 +660,11 @@ int hvf_vcpu_exec(CPUState *cpu) struct x86_decode decode; =20 load_regs(cpu); - cpu->hvf_x86->fetch_rip =3D rip; + env->hvf_emul->fetch_rip =3D rip; =20 - decode_instruction(cpu, &decode); + decode_instruction(env, &decode); VM_PANIC_ON(ins_len !=3D decode.len); - exec_instruction(cpu, &decode); + exec_instruction(env, &decode); store_regs(cpu); =20 break; @@ -721,7 +722,7 @@ int hvf_vcpu_exec(CPUState *cpu) } else { simulate_wrmsr(cpu); } - RIP(cpu) +=3D rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRUCTION_LENGTH); + RIP(env) +=3D rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRUCTION_LENGTH); store_regs(cpu); break; } @@ -735,19 +736,19 @@ int hvf_vcpu_exec(CPUState *cpu) =20 switch (cr) { case 0x0: { - macvm_set_cr0(cpu->hvf_fd, RRX(cpu, reg)); + macvm_set_cr0(cpu->hvf_fd, RRX(env, reg)); break; } case 4: { - macvm_set_cr4(cpu->hvf_fd, RRX(cpu, reg)); + macvm_set_cr4(cpu->hvf_fd, RRX(env, reg)); break; } case 8: { X86CPU *x86_cpu =3D X86_CPU(cpu); if (exit_qual & 0x10) { - RRX(cpu, reg) =3D cpu_get_apic_tpr(x86_cpu->apic_state= ); + RRX(env, reg) =3D cpu_get_apic_tpr(x86_cpu->apic_state= ); } else { - int tpr =3D RRX(cpu, reg); + int tpr =3D RRX(env, reg); cpu_set_apic_tpr(x86_cpu->apic_state, tpr); ret =3D EXCP_INTERRUPT; } @@ -757,7 +758,7 @@ int hvf_vcpu_exec(CPUState *cpu) error_report("Unrecognized CR %d\n", cr); abort(); } - RIP(cpu) +=3D ins_len; + RIP(env) +=3D ins_len; store_regs(cpu); break; } @@ -765,10 +766,10 @@ int hvf_vcpu_exec(CPUState *cpu) struct x86_decode decode; =20 load_regs(cpu); - cpu->hvf_x86->fetch_rip =3D rip; + env->hvf_emul->fetch_rip =3D rip; =20 - decode_instruction(cpu, &decode); - exec_instruction(cpu, &decode); + decode_instruction(env, &decode); + exec_instruction(env, &decode); store_regs(cpu); break; } diff --git a/target/i386/hvf-utils/x86.c b/target/i386/hvf-utils/x86.c index 1f68287830..625ea6cac0 100644 --- a/target/i386/hvf-utils/x86.c +++ b/target/i386/hvf-utils/x86.c @@ -127,7 +127,9 @@ bool x86_is_real(struct CPUState *cpu) =20 bool x86_is_v8086(struct CPUState *cpu) { - return x86_is_protected(cpu) && (RFLAGS(cpu) & RFLAGS_VM); + X86CPU *x86_cpu =3D X86_CPU(cpu); + CPUX86State *env =3D &x86_cpu->env; + return x86_is_protected(cpu) && (RFLAGS(env) & RFLAGS_VM); } =20 bool x86_is_long_mode(struct CPUState *cpu) diff --git a/target/i386/hvf-utils/x86.h b/target/i386/hvf-utils/x86.h index a12044d848..7fd8fede80 100644 --- a/target/i386/hvf-utils/x86.h +++ b/target/i386/hvf-utils/x86.h @@ -23,7 +23,7 @@ #include #include #include "qemu-common.h" -#include "x86_flags.h" +#include "x86_gen.h" =20 /* exceptions */ typedef enum x86_exception { @@ -356,13 +356,14 @@ typedef struct x68_segment_selector { }; } __attribute__ ((__packed__)) x68_segment_selector; =20 +typedef struct lazy_flags { + addr_t result; + addr_t auxbits; +} lazy_flags; + /* Definition of hvf_x86_state 
is here */ -struct hvf_x86_state { - int hlt; - uint64_t init_tsc; - =20 +typedef struct HVFX86EmulatorState { int interruptable; - uint64_t exp_rip; uint64_t fetch_rip; uint64_t rip; struct x86_register regs[16]; @@ -370,8 +371,7 @@ struct hvf_x86_state { struct lazy_flags lflags; struct x86_efer efer; uint8_t mmio_buf[4096]; - uint8_t *apic_page; -}; +} HVFX86EmulatorState; =20 /* * hvf xsave area @@ -381,12 +381,12 @@ struct hvf_xsave_buf { }; =20 /* useful register access macros */ -#define RIP(cpu) (cpu->hvf_x86->rip) -#define EIP(cpu) ((uint32_t)cpu->hvf_x86->rip) -#define RFLAGS(cpu) (cpu->hvf_x86->rflags.rflags) -#define EFLAGS(cpu) (cpu->hvf_x86->rflags.eflags) +#define RIP(cpu) (cpu->hvf_emul->rip) +#define EIP(cpu) ((uint32_t)cpu->hvf_emul->rip) +#define RFLAGS(cpu) (cpu->hvf_emul->rflags.rflags) +#define EFLAGS(cpu) (cpu->hvf_emul->rflags.eflags) =20 -#define RRX(cpu, reg) (cpu->hvf_x86->regs[reg].rrx) +#define RRX(cpu, reg) (cpu->hvf_emul->regs[reg].rrx) #define RAX(cpu) RRX(cpu, REG_RAX) #define RCX(cpu) RRX(cpu, REG_RCX) #define RDX(cpu) RRX(cpu, REG_RDX) @@ -404,7 +404,7 @@ struct hvf_xsave_buf { #define R14(cpu) RRX(cpu, REG_R14) #define R15(cpu) RRX(cpu, REG_R15) =20 -#define ERX(cpu, reg) (cpu->hvf_x86->regs[reg].erx) +#define ERX(cpu, reg) (cpu->hvf_emul->regs[reg].erx) #define EAX(cpu) ERX(cpu, REG_RAX) #define ECX(cpu) ERX(cpu, REG_RCX) #define EDX(cpu) ERX(cpu, REG_RDX) @@ -414,7 +414,7 @@ struct hvf_xsave_buf { #define ESI(cpu) ERX(cpu, REG_RSI) #define EDI(cpu) ERX(cpu, REG_RDI) =20 -#define RX(cpu, reg) (cpu->hvf_x86->regs[reg].rx) +#define RX(cpu, reg) (cpu->hvf_emul->regs[reg].rx) #define AX(cpu) RX(cpu, REG_RAX) #define CX(cpu) RX(cpu, REG_RCX) #define DX(cpu) RX(cpu, REG_RDX) @@ -424,13 +424,13 @@ struct hvf_xsave_buf { #define SI(cpu) RX(cpu, REG_RSI) #define DI(cpu) RX(cpu, REG_RDI) =20 -#define RL(cpu, reg) (cpu->hvf_x86->regs[reg].lx) +#define RL(cpu, reg) (cpu->hvf_emul->regs[reg].lx) #define AL(cpu) RL(cpu, REG_RAX) #define CL(cpu) RL(cpu, REG_RCX) #define DL(cpu) RL(cpu, REG_RDX) #define BL(cpu) RL(cpu, REG_RBX) =20 -#define RH(cpu, reg) (cpu->hvf_x86->regs[reg].hx) +#define RH(cpu, reg) (cpu->hvf_emul->regs[reg].hx) #define AH(cpu) RH(cpu, REG_RAX) #define CH(cpu) RH(cpu, REG_RCX) #define DH(cpu) RH(cpu, REG_RDX) diff --git a/target/i386/hvf-utils/x86_decode.c b/target/i386/hvf-utils/x86= _decode.c index e21d96bc01..69cbd4d252 100644 --- a/target/i386/hvf-utils/x86_decode.c +++ b/target/i386/hvf-utils/x86_decode.c @@ -27,9 +27,9 @@ =20 #define OPCODE_ESCAPE 0xf =20 -static void decode_invalid(CPUState *cpu, struct x86_decode *decode) +static void decode_invalid(CPUX86State *env, struct x86_decode *decode) { - printf("%llx: failed to decode instruction ", cpu->hvf_x86->fetch_rip - + printf("%llx: failed to decode instruction ", env->hvf_emul->fetch_rip= - decode->len); for (int i =3D 0; i < decode->opcode_len; i++) { printf("%x ", decode->opcode[i]); @@ -60,7 +60,7 @@ uint64_t sign(uint64_t val, int size) return val; } =20 -static inline uint64_t decode_bytes(CPUState *cpu, struct x86_decode *deco= de, +static inline uint64_t decode_bytes(CPUX86State *env, struct x86_decode *d= ecode, int size) { addr_t val =3D 0; @@ -75,129 +75,129 @@ static inline uint64_t decode_bytes(CPUState *cpu, st= ruct x86_decode *decode, VM_PANIC_EX("%s invalid size %d\n", __func__, size); break; } - addr_t va =3D linear_rip(cpu, RIP(cpu)) + decode->len; - vmx_read_mem(cpu, &val, va, size); + addr_t va =3D linear_rip(ENV_GET_CPU(env), RIP(env)) + decode->len; + 
vmx_read_mem(ENV_GET_CPU(env), &val, va, size); decode->len +=3D size; =20 return val; } =20 -static inline uint8_t decode_byte(CPUState *cpu, struct x86_decode *decode) +static inline uint8_t decode_byte(CPUX86State *env, struct x86_decode *dec= ode) { - return (uint8_t)decode_bytes(cpu, decode, 1); + return (uint8_t)decode_bytes(env, decode, 1); } =20 -static inline uint16_t decode_word(CPUState *cpu, struct x86_decode *decod= e) +static inline uint16_t decode_word(CPUX86State *env, struct x86_decode *de= code) { - return (uint16_t)decode_bytes(cpu, decode, 2); + return (uint16_t)decode_bytes(env, decode, 2); } =20 -static inline uint32_t decode_dword(CPUState *cpu, struct x86_decode *deco= de) +static inline uint32_t decode_dword(CPUX86State *env, struct x86_decode *d= ecode) { - return (uint32_t)decode_bytes(cpu, decode, 4); + return (uint32_t)decode_bytes(env, decode, 4); } =20 -static inline uint64_t decode_qword(CPUState *cpu, struct x86_decode *deco= de) +static inline uint64_t decode_qword(CPUX86State *env, struct x86_decode *d= ecode) { - return decode_bytes(cpu, decode, 8); + return decode_bytes(env, decode, 8); } =20 -static void decode_modrm_rm(CPUState *cpu, struct x86_decode *decode, +static void decode_modrm_rm(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op) { op->type =3D X86_VAR_RM; } =20 -static void decode_modrm_reg(CPUState *cpu, struct x86_decode *decode, +static void decode_modrm_reg(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op) { op->type =3D X86_VAR_REG; op->reg =3D decode->modrm.reg; - op->ptr =3D get_reg_ref(cpu, op->reg, decode->rex.r, decode->operand_s= ize); + op->ptr =3D get_reg_ref(env, op->reg, decode->rex.r, decode->operand_s= ize); } =20 -static void decode_rax(CPUState *cpu, struct x86_decode *decode, +static void decode_rax(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op) { op->type =3D X86_VAR_REG; op->reg =3D REG_RAX; - op->ptr =3D get_reg_ref(cpu, op->reg, 0, decode->operand_size); + op->ptr =3D get_reg_ref(env, op->reg, 0, decode->operand_size); } =20 -static inline void decode_immediate(CPUState *cpu, struct x86_decode *deco= de, +static inline void decode_immediate(CPUX86State *env, struct x86_decode *d= ecode, struct x86_decode_op *var, int size) { var->type =3D X86_VAR_IMMEDIATE; var->size =3D size; switch (size) { case 1: - var->val =3D decode_byte(cpu, decode); + var->val =3D decode_byte(env, decode); break; case 2: - var->val =3D decode_word(cpu, decode); + var->val =3D decode_word(env, decode); break; case 4: - var->val =3D decode_dword(cpu, decode); + var->val =3D decode_dword(env, decode); break; case 8: - var->val =3D decode_qword(cpu, decode); + var->val =3D decode_qword(env, decode); break; default: VM_PANIC_EX("bad size %d\n", size); } } =20 -static void decode_imm8(CPUState *cpu, struct x86_decode *decode, +static void decode_imm8(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op) { - decode_immediate(cpu, decode, op, 1); + decode_immediate(env, decode, op, 1); op->type =3D X86_VAR_IMMEDIATE; } =20 -static void decode_imm8_signed(CPUState *cpu, struct x86_decode *decode, +static void decode_imm8_signed(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op) { - decode_immediate(cpu, decode, op, 1); + decode_immediate(env, decode, op, 1); op->val =3D sign(op->val, 1); op->type =3D X86_VAR_IMMEDIATE; } =20 -static void decode_imm16(CPUState *cpu, struct x86_decode *decode, +static void decode_imm16(CPUX86State *env, struct x86_decode 
*decode, struct x86_decode_op *op) { - decode_immediate(cpu, decode, op, 2); + decode_immediate(env, decode, op, 2); op->type =3D X86_VAR_IMMEDIATE; } =20 =20 -static void decode_imm(CPUState *cpu, struct x86_decode *decode, +static void decode_imm(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op) { if (8 =3D=3D decode->operand_size) { - decode_immediate(cpu, decode, op, 4); + decode_immediate(env, decode, op, 4); op->val =3D sign(op->val, decode->operand_size); } else { - decode_immediate(cpu, decode, op, decode->operand_size); + decode_immediate(env, decode, op, decode->operand_size); } op->type =3D X86_VAR_IMMEDIATE; } =20 -static void decode_imm_signed(CPUState *cpu, struct x86_decode *decode, +static void decode_imm_signed(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op) { - decode_immediate(cpu, decode, op, decode->operand_size); + decode_immediate(env, decode, op, decode->operand_size); op->val =3D sign(op->val, decode->operand_size); op->type =3D X86_VAR_IMMEDIATE; } =20 -static void decode_imm_1(CPUState *cpu, struct x86_decode *decode, +static void decode_imm_1(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op) { op->type =3D X86_VAR_IMMEDIATE; op->val =3D 1; } =20 -static void decode_imm_0(CPUState *cpu, struct x86_decode *decode, +static void decode_imm_0(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op) { op->type =3D X86_VAR_IMMEDIATE; @@ -205,7 +205,7 @@ static void decode_imm_0(CPUState *cpu, struct x86_deco= de *decode, } =20 =20 -static void decode_pushseg(CPUState *cpu, struct x86_decode *decode) +static void decode_pushseg(CPUX86State *env, struct x86_decode *decode) { uint8_t op =3D (decode->opcode_len > 1) ? decode->opcode[1] : decode->= opcode[0]; =20 @@ -232,7 +232,7 @@ static void decode_pushseg(CPUState *cpu, struct x86_de= code *decode) } } =20 -static void decode_popseg(CPUState *cpu, struct x86_decode *decode) +static void decode_popseg(CPUX86State *env, struct x86_decode *decode) { uint8_t op =3D (decode->opcode_len > 1) ? 
decode->opcode[1] : decode->= opcode[0]; =20 @@ -259,23 +259,23 @@ static void decode_popseg(CPUState *cpu, struct x86_d= ecode *decode) } } =20 -static void decode_incgroup(CPUState *cpu, struct x86_decode *decode) +static void decode_incgroup(CPUX86State *env, struct x86_decode *decode) { decode->op[0].type =3D X86_VAR_REG; decode->op[0].reg =3D decode->opcode[0] - 0x40; - decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, + decode->op[0].ptr =3D get_reg_ref(env, decode->op[0].reg, decode->rex.= b, decode->operand_size); } =20 -static void decode_decgroup(CPUState *cpu, struct x86_decode *decode) +static void decode_decgroup(CPUX86State *env, struct x86_decode *decode) { decode->op[0].type =3D X86_VAR_REG; decode->op[0].reg =3D decode->opcode[0] - 0x48; - decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, + decode->op[0].ptr =3D get_reg_ref(env, decode->op[0].reg, decode->rex.= b, decode->operand_size); } =20 -static void decode_incgroup2(CPUState *cpu, struct x86_decode *decode) +static void decode_incgroup2(CPUX86State *env, struct x86_decode *decode) { if (!decode->modrm.reg) { decode->cmd =3D X86_DECODE_CMD_INC; @@ -284,36 +284,36 @@ static void decode_incgroup2(CPUState *cpu, struct x8= 6_decode *decode) } } =20 -static void decode_pushgroup(CPUState *cpu, struct x86_decode *decode) +static void decode_pushgroup(CPUX86State *env, struct x86_decode *decode) { decode->op[0].type =3D X86_VAR_REG; decode->op[0].reg =3D decode->opcode[0] - 0x50; - decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, + decode->op[0].ptr =3D get_reg_ref(env, decode->op[0].reg, decode->rex.= b, decode->operand_size); } =20 -static void decode_popgroup(CPUState *cpu, struct x86_decode *decode) +static void decode_popgroup(CPUX86State *env, struct x86_decode *decode) { decode->op[0].type =3D X86_VAR_REG; decode->op[0].reg =3D decode->opcode[0] - 0x58; - decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, + decode->op[0].ptr =3D get_reg_ref(env, decode->op[0].reg, decode->rex.= b, decode->operand_size); } =20 -static void decode_jxx(CPUState *cpu, struct x86_decode *decode) +static void decode_jxx(CPUX86State *env, struct x86_decode *decode) { - decode->displacement =3D decode_bytes(cpu, decode, decode->operand_siz= e); + decode->displacement =3D decode_bytes(env, decode, decode->operand_siz= e); decode->displacement_size =3D decode->operand_size; } =20 -static void decode_farjmp(CPUState *cpu, struct x86_decode *decode) +static void decode_farjmp(CPUX86State *env, struct x86_decode *decode) { decode->op[0].type =3D X86_VAR_IMMEDIATE; - decode->op[0].val =3D decode_bytes(cpu, decode, decode->operand_size); - decode->displacement =3D decode_word(cpu, decode); + decode->op[0].val =3D decode_bytes(env, decode, decode->operand_size); + decode->displacement =3D decode_word(env, decode); } =20 -static void decode_addgroup(CPUState *cpu, struct x86_decode *decode) +static void decode_addgroup(CPUX86State *env, struct x86_decode *decode) { enum x86_decode_cmd group[] =3D { X86_DECODE_CMD_ADD, @@ -328,7 +328,7 @@ static void decode_addgroup(CPUState *cpu, struct x86_d= ecode *decode) decode->cmd =3D group[decode->modrm.reg]; } =20 -static void decode_rotgroup(CPUState *cpu, struct x86_decode *decode) +static void decode_rotgroup(CPUX86State *env, struct x86_decode *decode) { enum x86_decode_cmd group[] =3D { X86_DECODE_CMD_ROL, @@ -343,7 +343,7 @@ static void decode_rotgroup(CPUState *cpu, struct x86_d= ecode *decode) 
decode->cmd =3D group[decode->modrm.reg]; } =20 -static void decode_f7group(CPUState *cpu, struct x86_decode *decode) +static void decode_f7group(CPUX86State *env, struct x86_decode *decode) { enum x86_decode_cmd group[] =3D { X86_DECODE_CMD_TST, @@ -356,12 +356,12 @@ static void decode_f7group(CPUState *cpu, struct x86_= decode *decode) X86_DECODE_CMD_IDIV }; decode->cmd =3D group[decode->modrm.reg]; - decode_modrm_rm(cpu, decode, &decode->op[0]); + decode_modrm_rm(env, decode, &decode->op[0]); =20 switch (decode->modrm.reg) { case 0: case 1: - decode_imm(cpu, decode, &decode->op[1]); + decode_imm(env, decode, &decode->op[1]); break; case 2: break; @@ -374,45 +374,45 @@ static void decode_f7group(CPUState *cpu, struct x86_= decode *decode) } } =20 -static void decode_xchgroup(CPUState *cpu, struct x86_decode *decode) +static void decode_xchgroup(CPUX86State *env, struct x86_decode *decode) { decode->op[0].type =3D X86_VAR_REG; decode->op[0].reg =3D decode->opcode[0] - 0x90; - decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, + decode->op[0].ptr =3D get_reg_ref(env, decode->op[0].reg, decode->rex.= b, decode->operand_size); } =20 -static void decode_movgroup(CPUState *cpu, struct x86_decode *decode) +static void decode_movgroup(CPUX86State *env, struct x86_decode *decode) { decode->op[0].type =3D X86_VAR_REG; decode->op[0].reg =3D decode->opcode[0] - 0xb8; - decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, + decode->op[0].ptr =3D get_reg_ref(env, decode->op[0].reg, decode->rex.= b, decode->operand_size); - decode_immediate(cpu, decode, &decode->op[1], decode->operand_size); + decode_immediate(env, decode, &decode->op[1], decode->operand_size); } =20 -static void fetch_moffs(CPUState *cpu, struct x86_decode *decode, +static void fetch_moffs(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op) { op->type =3D X86_VAR_OFFSET; - op->ptr =3D decode_bytes(cpu, decode, decode->addressing_size); + op->ptr =3D decode_bytes(env, decode, decode->addressing_size); } =20 -static void decode_movgroup8(CPUState *cpu, struct x86_decode *decode) +static void decode_movgroup8(CPUX86State *env, struct x86_decode *decode) { decode->op[0].type =3D X86_VAR_REG; decode->op[0].reg =3D decode->opcode[0] - 0xb0; - decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, + decode->op[0].ptr =3D get_reg_ref(env, decode->op[0].reg, decode->rex.= b, decode->operand_size); - decode_immediate(cpu, decode, &decode->op[1], decode->operand_size); + decode_immediate(env, decode, &decode->op[1], decode->operand_size); } =20 -static void decode_rcx(CPUState *cpu, struct x86_decode *decode, +static void decode_rcx(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op) { op->type =3D X86_VAR_REG; op->reg =3D REG_RCX; - op->ptr =3D get_reg_ref(cpu, op->reg, decode->rex.b, decode->operand_s= ize); + op->ptr =3D get_reg_ref(env, op->reg, decode->rex.b, decode->operand_s= ize); } =20 struct decode_tbl { @@ -420,15 +420,15 @@ struct decode_tbl { enum x86_decode_cmd cmd; uint8_t operand_size; bool is_modrm; - void (*decode_op1)(CPUState *cpu, struct x86_decode *decode, + void (*decode_op1)(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op1); - void (*decode_op2)(CPUState *cpu, struct x86_decode *decode, + void (*decode_op2)(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op2); - void (*decode_op3)(CPUState *cpu, struct x86_decode *decode, + void (*decode_op3)(CPUX86State *env, struct x86_decode 
*decode, struct x86_decode_op *op3); - void (*decode_op4)(CPUState *cpu, struct x86_decode *decode, + void (*decode_op4)(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op4); - void (*decode_postfix)(CPUState *cpu, struct x86_decode *decode); + void (*decode_postfix)(CPUX86State *env, struct x86_decode *decode); addr_t flags_mask; }; =20 @@ -440,11 +440,11 @@ struct decode_x87_tbl { uint8_t operand_size; bool rev; bool pop; - void (*decode_op1)(CPUState *cpu, struct x86_decode *decode, + void (*decode_op1)(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op1); - void (*decode_op2)(CPUState *cpu, struct x86_decode *decode, + void (*decode_op2)(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op2); - void (*decode_postfix)(CPUState *cpu, struct x86_decode *decode); + void (*decode_postfix)(CPUX86State *env, struct x86_decode *decode); addr_t flags_mask; }; =20 @@ -455,7 +455,7 @@ struct decode_tbl _decode_tbl1[255]; struct decode_tbl _decode_tbl2[255]; struct decode_x87_tbl _decode_tbl3[255]; =20 -static void decode_x87_ins(CPUState *cpu, struct x86_decode *decode) +static void decode_x87_ins(CPUX86State *env, struct x86_decode *decode) { struct decode_x87_tbl *decoder; =20 @@ -475,13 +475,13 @@ static void decode_x87_ins(CPUState *cpu, struct x86_= decode *decode) decode->frev =3D decoder->rev; =20 if (decoder->decode_op1) { - decoder->decode_op1(cpu, decode, &decode->op[0]); + decoder->decode_op1(env, decode, &decode->op[0]); } if (decoder->decode_op2) { - decoder->decode_op2(cpu, decode, &decode->op[1]); + decoder->decode_op2(env, decode, &decode->op[1]); } if (decoder->decode_postfix) { - decoder->decode_postfix(cpu, decode); + decoder->decode_postfix(env, decode); } =20 VM_PANIC_ON_EX(!decode->cmd, "x87 opcode %x %x (%x %x) not decoded\n", @@ -489,7 +489,7 @@ static void decode_x87_ins(CPUState *cpu, struct x86_de= code *decode) decoder->modrm_mod); } =20 -static void decode_ffgroup(CPUState *cpu, struct x86_decode *decode) +static void decode_ffgroup(CPUX86State *env, struct x86_decode *decode) { enum x86_decode_cmd group[] =3D { X86_DECODE_CMD_INC, @@ -508,8 +508,9 @@ static void decode_ffgroup(CPUState *cpu, struct x86_de= code *decode) } } =20 -static void decode_sldtgroup(CPUState *cpu, struct x86_decode *decode) +static void decode_sldtgroup(CPUX86State *env, struct x86_decode *decode) { + enum x86_decode_cmd group[] =3D { X86_DECODE_CMD_SLDT, X86_DECODE_CMD_STR, @@ -521,11 +522,11 @@ static void decode_sldtgroup(CPUState *cpu, struct x8= 6_decode *decode) X86_DECODE_CMD_INVL }; decode->cmd =3D group[decode->modrm.reg]; - printf("%llx: decode_sldtgroup: %d\n", cpu->hvf_x86->fetch_rip, + printf("%llx: decode_sldtgroup: %d\n", env->hvf_emul->fetch_rip, decode->modrm.reg); } =20 -static void decode_lidtgroup(CPUState *cpu, struct x86_decode *decode) +static void decode_lidtgroup(CPUX86State *env, struct x86_decode *decode) { enum x86_decode_cmd group[] =3D { X86_DECODE_CMD_SGDT, @@ -544,7 +545,7 @@ static void decode_lidtgroup(CPUState *cpu, struct x86_= decode *decode) } } =20 -static void decode_btgroup(CPUState *cpu, struct x86_decode *decode) +static void decode_btgroup(CPUX86State *env, struct x86_decode *decode) { enum x86_decode_cmd group[] =3D { X86_DECODE_CMD_INVL, @@ -559,37 +560,37 @@ static void decode_btgroup(CPUState *cpu, struct x86_= decode *decode) decode->cmd =3D group[decode->modrm.reg]; } =20 -static void decode_x87_general(CPUState *cpu, struct x86_decode *decode) +static void decode_x87_general(CPUX86State 
*env, struct x86_decode *decode) { decode->is_fpu =3D true; } =20 -static void decode_x87_modrm_floatp(CPUState *cpu, struct x86_decode *deco= de, +static void decode_x87_modrm_floatp(CPUX86State *env, struct x86_decode *d= ecode, struct x86_decode_op *op) { op->type =3D X87_VAR_FLOATP; } =20 -static void decode_x87_modrm_intp(CPUState *cpu, struct x86_decode *decode, +static void decode_x87_modrm_intp(CPUX86State *env, struct x86_decode *dec= ode, struct x86_decode_op *op) { op->type =3D X87_VAR_INTP; } =20 -static void decode_x87_modrm_bytep(CPUState *cpu, struct x86_decode *decod= e, +static void decode_x87_modrm_bytep(CPUX86State *env, struct x86_decode *de= code, struct x86_decode_op *op) { op->type =3D X87_VAR_BYTEP; } =20 -static void decode_x87_modrm_st0(CPUState *cpu, struct x86_decode *decode, +static void decode_x87_modrm_st0(CPUX86State *env, struct x86_decode *deco= de, struct x86_decode_op *op) { op->type =3D X87_VAR_REG; op->reg =3D 0; } =20 -static void decode_decode_x87_modrm_st0(CPUState *cpu, +static void decode_decode_x87_modrm_st0(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op) { @@ -598,16 +599,16 @@ static void decode_decode_x87_modrm_st0(CPUState *cpu, } =20 =20 -static void decode_aegroup(CPUState *cpu, struct x86_decode *decode) +static void decode_aegroup(CPUX86State *env, struct x86_decode *decode) { decode->is_fpu =3D true; switch (decode->modrm.reg) { case 0: decode->cmd =3D X86_DECODE_CMD_FXSAVE; - decode_x87_modrm_bytep(cpu, decode, &decode->op[0]); + decode_x87_modrm_bytep(env, decode, &decode->op[0]); break; case 1: - decode_x87_modrm_bytep(cpu, decode, &decode->op[0]); + decode_x87_modrm_bytep(env, decode, &decode->op[0]); decode->cmd =3D X86_DECODE_CMD_FXRSTOR; break; case 5: @@ -634,15 +635,15 @@ static void decode_aegroup(CPUState *cpu, struct x86_= decode *decode) } } =20 -static void decode_bswap(CPUState *cpu, struct x86_decode *decode) +static void decode_bswap(CPUX86State *env, struct x86_decode *decode) { decode->op[0].type =3D X86_VAR_REG; decode->op[0].reg =3D decode->opcode[1] - 0xc8; - decode->op[0].ptr =3D get_reg_ref(cpu, decode->op[0].reg, decode->rex.= b, + decode->op[0].ptr =3D get_reg_ref(env, decode->op[0].reg, decode->rex.= b, decode->operand_size); } =20 -static void decode_d9_4(CPUState *cpu, struct x86_decode *decode) +static void decode_d9_4(CPUX86State *env, struct x86_decode *decode) { switch (decode->modrm.modrm) { case 0xe0: @@ -665,7 +666,7 @@ static void decode_d9_4(CPUState *cpu, struct x86_decod= e *decode) } } =20 -static void decode_db_4(CPUState *cpu, struct x86_decode *decode) +static void decode_db_4(CPUX86State *env, struct x86_decode *decode) { switch (decode->modrm.modrm) { case 0xe0: @@ -1633,7 +1634,7 @@ struct decode_x87_tbl _x87_inst[] =3D { decode_x87_modrm_intp, NULL, NULL, RFLAGS_MASK_NONE}, }; =20 -void calc_modrm_operand16(CPUState *cpu, struct x86_decode *decode, +void calc_modrm_operand16(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op) { addr_t ptr =3D 0; @@ -1650,42 +1651,42 @@ void calc_modrm_operand16(CPUState *cpu, struct x86= _decode *decode, =20 switch (decode->modrm.rm) { case 0: - ptr +=3D BX(cpu) + SI(cpu); + ptr +=3D BX(env) + SI(env); break; case 1: - ptr +=3D BX(cpu) + DI(cpu); + ptr +=3D BX(env) + DI(env); break; case 2: - ptr +=3D BP(cpu) + SI(cpu); + ptr +=3D BP(env) + SI(env); seg =3D REG_SEG_SS; break; case 3: - ptr +=3D BP(cpu) + DI(cpu); + ptr +=3D BP(env) + DI(env); seg =3D REG_SEG_SS; break; case 4: - ptr +=3D SI(cpu); + ptr +=3D SI(env); 
break; case 5: - ptr +=3D DI(cpu); + ptr +=3D DI(env); break; case 6: - ptr +=3D BP(cpu); + ptr +=3D BP(env); seg =3D REG_SEG_SS; break; case 7: - ptr +=3D BX(cpu); + ptr +=3D BX(env); break; } calc_addr: if (X86_DECODE_CMD_LEA =3D=3D decode->cmd) { op->ptr =3D (uint16_t)ptr; } else { - op->ptr =3D decode_linear_addr(cpu, decode, (uint16_t)ptr, seg); + op->ptr =3D decode_linear_addr(env, decode, (uint16_t)ptr, seg); } } =20 -addr_t get_reg_ref(CPUState *cpu, int reg, int is_extended, int size) +addr_t get_reg_ref(CPUX86State *env, int reg, int is_extended, int size) { addr_t ptr =3D 0; int which =3D 0; @@ -1699,28 +1700,28 @@ addr_t get_reg_ref(CPUState *cpu, int reg, int is_e= xtended, int size) case 1: if (is_extended || reg < 4) { which =3D 1; - ptr =3D (addr_t)&RL(cpu, reg); + ptr =3D (addr_t)&RL(env, reg); } else { which =3D 2; - ptr =3D (addr_t)&RH(cpu, reg - 4); + ptr =3D (addr_t)&RH(env, reg - 4); } break; default: which =3D 3; - ptr =3D (addr_t)&RRX(cpu, reg); + ptr =3D (addr_t)&RRX(env, reg); break; } return ptr; } =20 -addr_t get_reg_val(CPUState *cpu, int reg, int is_extended, int size) +addr_t get_reg_val(CPUX86State *env, int reg, int is_extended, int size) { addr_t val =3D 0; - memcpy(&val, (void *)get_reg_ref(cpu, reg, is_extended, size), size); + memcpy(&val, (void *)get_reg_ref(env, reg, is_extended, size), size); return val; } =20 -static addr_t get_sib_val(CPUState *cpu, struct x86_decode *decode, +static addr_t get_sib_val(CPUX86State *env, struct x86_decode *decode, x86_reg_segment *sel) { addr_t base =3D 0; @@ -1738,7 +1739,7 @@ static addr_t get_sib_val(CPUState *cpu, struct x86_d= ecode *decode, if (REG_RSP =3D=3D base_reg || REG_RBP =3D=3D base_reg) { *sel =3D REG_SEG_SS; } - base =3D get_reg_val(cpu, decode->sib.base, decode->rex.b, addr_si= ze); + base =3D get_reg_val(env, decode->sib.base, decode->rex.b, addr_si= ze); } =20 if (decode->rex.x) { @@ -1746,13 +1747,13 @@ static addr_t get_sib_val(CPUState *cpu, struct x86= _decode *decode, } =20 if (index_reg !=3D REG_RSP) { - scaled_index =3D get_reg_val(cpu, index_reg, decode->rex.x, addr_s= ize) << + scaled_index =3D get_reg_val(env, index_reg, decode->rex.x, addr_s= ize) << decode->sib.scale; } return base + scaled_index; } =20 -void calc_modrm_operand32(CPUState *cpu, struct x86_decode *decode, +void calc_modrm_operand32(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op) { x86_reg_segment seg =3D REG_SEG_DS; @@ -1764,10 +1765,10 @@ void calc_modrm_operand32(CPUState *cpu, struct x86= _decode *decode, } =20 if (4 =3D=3D decode->modrm.rm) { - ptr +=3D get_sib_val(cpu, decode, &seg); + ptr +=3D get_sib_val(env, decode, &seg); } else if (!decode->modrm.mod && 5 =3D=3D decode->modrm.rm) { - if (x86_is_long_mode(cpu)) { - ptr +=3D RIP(cpu) + decode->len; + if (x86_is_long_mode(ENV_GET_CPU(env))) { + ptr +=3D RIP(env) + decode->len; } else { ptr =3D decode->displacement; } @@ -1775,17 +1776,17 @@ void calc_modrm_operand32(CPUState *cpu, struct x86= _decode *decode, if (REG_RBP =3D=3D decode->modrm.rm || REG_RSP =3D=3D decode->modr= m.rm) { seg =3D REG_SEG_SS; } - ptr +=3D get_reg_val(cpu, decode->modrm.rm, decode->rex.b, addr_si= ze); + ptr +=3D get_reg_val(env, decode->modrm.rm, decode->rex.b, addr_si= ze); } =20 if (X86_DECODE_CMD_LEA =3D=3D decode->cmd) { op->ptr =3D (uint32_t)ptr; } else { - op->ptr =3D decode_linear_addr(cpu, decode, (uint32_t)ptr, seg); + op->ptr =3D decode_linear_addr(env, decode, (uint32_t)ptr, seg); } } =20 -void calc_modrm_operand64(CPUState *cpu, struct x86_decode 
*decode, +void calc_modrm_operand64(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op) { x86_reg_segment seg =3D REG_SEG_DS; @@ -1800,41 +1801,41 @@ void calc_modrm_operand64(CPUState *cpu, struct x86= _decode *decode, } =20 if (4 =3D=3D rm) { - ptr =3D get_sib_val(cpu, decode, &seg) + offset; + ptr =3D get_sib_val(env, decode, &seg) + offset; } else if (0 =3D=3D mod && 5 =3D=3D rm) { - ptr =3D RIP(cpu) + decode->len + (int32_t) offset; + ptr =3D RIP(env) + decode->len + (int32_t) offset; } else { - ptr =3D get_reg_val(cpu, src, decode->rex.b, 8) + (int64_t) offset; + ptr =3D get_reg_val(env, src, decode->rex.b, 8) + (int64_t) offset; } =20 if (X86_DECODE_CMD_LEA =3D=3D decode->cmd) { op->ptr =3D ptr; } else { - op->ptr =3D decode_linear_addr(cpu, decode, ptr, seg); + op->ptr =3D decode_linear_addr(env, decode, ptr, seg); } } =20 =20 -void calc_modrm_operand(CPUState *cpu, struct x86_decode *decode, +void calc_modrm_operand(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op) { if (3 =3D=3D decode->modrm.mod) { op->reg =3D decode->modrm.reg; op->type =3D X86_VAR_REG; - op->ptr =3D get_reg_ref(cpu, decode->modrm.rm, decode->rex.b, + op->ptr =3D get_reg_ref(env, decode->modrm.rm, decode->rex.b, decode->operand_size); return; } =20 switch (decode->addressing_size) { case 2: - calc_modrm_operand16(cpu, decode, op); + calc_modrm_operand16(env, decode, op); break; case 4: - calc_modrm_operand32(cpu, decode, op); + calc_modrm_operand32(env, decode, op); break; case 8: - calc_modrm_operand64(cpu, decode, op); + calc_modrm_operand64(env, decode, op); break; default: VM_PANIC_EX("unsupported address size %d\n", decode->addressing_si= ze); @@ -1842,10 +1843,10 @@ void calc_modrm_operand(CPUState *cpu, struct x86_d= ecode *decode, } } =20 -static void decode_prefix(CPUState *cpu, struct x86_decode *decode) +static void decode_prefix(CPUX86State *env, struct x86_decode *decode) { while (1) { - uint8_t byte =3D decode_byte(cpu, decode); + uint8_t byte =3D decode_byte(env, decode); switch (byte) { case PREFIX_LOCK: decode->lock =3D byte; @@ -1869,7 +1870,7 @@ static void decode_prefix(CPUState *cpu, struct x86_d= ecode *decode) decode->addr_size_override =3D byte; break; case PREFIX_REX ... 
(PREFIX_REX + 0xf): - if (x86_is_long_mode(cpu)) { + if (x86_is_long_mode(ENV_GET_CPU(env))) { decode->rex.rex =3D byte; break; } @@ -1881,19 +1882,19 @@ static void decode_prefix(CPUState *cpu, struct x86= _decode *decode) } } =20 -void set_addressing_size(CPUState *cpu, struct x86_decode *decode) +void set_addressing_size(CPUX86State *env, struct x86_decode *decode) { decode->addressing_size =3D -1; - if (x86_is_real(cpu) || x86_is_v8086(cpu)) { + if (x86_is_real(ENV_GET_CPU(env)) || x86_is_v8086(ENV_GET_CPU(env))) { if (decode->addr_size_override) { decode->addressing_size =3D 4; } else { decode->addressing_size =3D 2; } - } else if (!x86_is_long_mode(cpu)) { + } else if (!x86_is_long_mode(ENV_GET_CPU(env))) { /* protected */ struct vmx_segment cs; - vmx_read_segment_descriptor(cpu, &cs, REG_SEG_CS); + vmx_read_segment_descriptor(ENV_GET_CPU(env), &cs, REG_SEG_CS); /* check db */ if ((cs.ar >> 14) & 1) { if (decode->addr_size_override) { @@ -1918,19 +1919,19 @@ void set_addressing_size(CPUState *cpu, struct x86_= decode *decode) } } =20 -void set_operand_size(CPUState *cpu, struct x86_decode *decode) +void set_operand_size(CPUX86State *env, struct x86_decode *decode) { decode->operand_size =3D -1; - if (x86_is_real(cpu) || x86_is_v8086(cpu)) { + if (x86_is_real(ENV_GET_CPU(env)) || x86_is_v8086(ENV_GET_CPU(env))) { if (decode->op_size_override) { decode->operand_size =3D 4; } else { decode->operand_size =3D 2; } - } else if (!x86_is_long_mode(cpu)) { + } else if (!x86_is_long_mode(ENV_GET_CPU(env))) { /* protected */ struct vmx_segment cs; - vmx_read_segment_descriptor(cpu, &cs, REG_SEG_CS); + vmx_read_segment_descriptor(ENV_GET_CPU(env), &cs, REG_SEG_CS); /* check db */ if ((cs.ar >> 14) & 1) { if (decode->op_size_override) { @@ -1959,11 +1960,11 @@ void set_operand_size(CPUState *cpu, struct x86_dec= ode *decode) } } =20 -static void decode_sib(CPUState *cpu, struct x86_decode *decode) +static void decode_sib(CPUX86State *env, struct x86_decode *decode) { if ((decode->modrm.mod !=3D 3) && (4 =3D=3D decode->modrm.rm) && (decode->addressing_size !=3D 2)) { - decode->sib.sib =3D decode_byte(cpu, decode); + decode->sib.sib =3D decode_byte(env, decode); decode->sib_present =3D true; } } @@ -1984,7 +1985,7 @@ int disp32_tbl[4][8] =3D { {0, 0, 0, 0, 0, 0, 0, 0} }; =20 -static inline void decode_displacement(CPUState *cpu, struct x86_decode *d= ecode) +static inline void decode_displacement(CPUX86State *env, struct x86_decode= *decode) { int addressing_size =3D decode->addressing_size; int mod =3D decode->modrm.mod; @@ -1995,7 +1996,7 @@ static inline void decode_displacement(CPUState *cpu,= struct x86_decode *decode) case 2: decode->displacement_size =3D disp16_tbl[mod][rm]; if (decode->displacement_size) { - decode->displacement =3D (uint16_t)decode_bytes(cpu, decode, + decode->displacement =3D (uint16_t)decode_bytes(env, decode, decode->displacement_size); } break; @@ -2010,23 +2011,23 @@ static inline void decode_displacement(CPUState *cp= u, struct x86_decode *decode) } =20 if (decode->displacement_size) { - decode->displacement =3D (uint32_t)decode_bytes(cpu, decode, + decode->displacement =3D (uint32_t)decode_bytes(env, decode, decode->displacement_size); } break; } } =20 -static inline void decode_modrm(CPUState *cpu, struct x86_decode *decode) +static inline void decode_modrm(CPUX86State *env, struct x86_decode *decod= e) { - decode->modrm.modrm =3D decode_byte(cpu, decode); + decode->modrm.modrm =3D decode_byte(env, decode); decode->is_modrm =3D true; - =20 - decode_sib(cpu, 
decode); - decode_displacement(cpu, decode); + + decode_sib(env, decode); + decode_displacement(env, decode); } =20 -static inline void decode_opcode_general(CPUState *cpu, +static inline void decode_opcode_general(CPUX86State *env, struct x86_decode *decode, uint8_t opcode, struct decode_tbl *inst_decoder) @@ -2038,69 +2039,69 @@ static inline void decode_opcode_general(CPUState *= cpu, decode->flags_mask =3D inst_decoder->flags_mask; =20 if (inst_decoder->is_modrm) { - decode_modrm(cpu, decode); + decode_modrm(env, decode); } if (inst_decoder->decode_op1) { - inst_decoder->decode_op1(cpu, decode, &decode->op[0]); + inst_decoder->decode_op1(env, decode, &decode->op[0]); } if (inst_decoder->decode_op2) { - inst_decoder->decode_op2(cpu, decode, &decode->op[1]); + inst_decoder->decode_op2(env, decode, &decode->op[1]); } if (inst_decoder->decode_op3) { - inst_decoder->decode_op3(cpu, decode, &decode->op[2]); + inst_decoder->decode_op3(env, decode, &decode->op[2]); } if (inst_decoder->decode_op4) { - inst_decoder->decode_op4(cpu, decode, &decode->op[3]); + inst_decoder->decode_op4(env, decode, &decode->op[3]); } if (inst_decoder->decode_postfix) { - inst_decoder->decode_postfix(cpu, decode); + inst_decoder->decode_postfix(env, decode); } } =20 -static inline void decode_opcode_1(CPUState *cpu, struct x86_decode *decod= e, +static inline void decode_opcode_1(CPUX86State *env, struct x86_decode *de= code, uint8_t opcode) { struct decode_tbl *inst_decoder =3D &_decode_tbl1[opcode]; - decode_opcode_general(cpu, decode, opcode, inst_decoder); + decode_opcode_general(env, decode, opcode, inst_decoder); } =20 =20 -static inline void decode_opcode_2(CPUState *cpu, struct x86_decode *decod= e, +static inline void decode_opcode_2(CPUX86State *env, struct x86_decode *de= code, uint8_t opcode) { struct decode_tbl *inst_decoder =3D &_decode_tbl2[opcode]; - decode_opcode_general(cpu, decode, opcode, inst_decoder); + decode_opcode_general(env, decode, opcode, inst_decoder); } =20 -static void decode_opcodes(CPUState *cpu, struct x86_decode *decode) +static void decode_opcodes(CPUX86State *env, struct x86_decode *decode) { uint8_t opcode; - =20 - opcode =3D decode_byte(cpu, decode); + + opcode =3D decode_byte(env, decode); decode->opcode[decode->opcode_len++] =3D opcode; if (opcode !=3D OPCODE_ESCAPE) { - decode_opcode_1(cpu, decode, opcode); + decode_opcode_1(env, decode, opcode); } else { - opcode =3D decode_byte(cpu, decode); + opcode =3D decode_byte(env, decode); decode->opcode[decode->opcode_len++] =3D opcode; - decode_opcode_2(cpu, decode, opcode); + decode_opcode_2(env, decode, opcode); } } =20 -uint32_t decode_instruction(CPUState *cpu, struct x86_decode *decode) +uint32_t decode_instruction(CPUX86State *env, struct x86_decode *decode) { ZERO_INIT(*decode); =20 - decode_prefix(cpu, decode); - set_addressing_size(cpu, decode); - set_operand_size(cpu, decode); + decode_prefix(env, decode); + set_addressing_size(env, decode); + set_operand_size(env, decode); + + decode_opcodes(env, decode); =20 - decode_opcodes(cpu, decode); - =20 return decode->len; } =20 -void init_decoder(CPUState *cpu) +void init_decoder() { int i; =20 @@ -2156,7 +2157,7 @@ const char *decode_cmd_to_string(enum x86_decode_cmd = cmd) return cmds[cmd]; } =20 -addr_t decode_linear_addr(struct CPUState *cpu, struct x86_decode *decode, +addr_t decode_linear_addr(CPUX86State *env, struct x86_decode *decode, addr_t addr, x86_reg_segment seg) { switch (decode->segment_override) { @@ -2181,5 +2182,5 @@ addr_t decode_linear_addr(struct 
CPUState *cpu, struc= t x86_decode *decode, default: break; } - return linear_addr_size(cpu, addr, decode->addressing_size, seg); + return linear_addr_size(ENV_GET_CPU(env), addr, decode->addressing_siz= e, seg); } diff --git a/target/i386/hvf-utils/x86_decode.h b/target/i386/hvf-utils/x86= _decode.h index 571931dc73..329131360f 100644 --- a/target/i386/hvf-utils/x86_decode.h +++ b/target/i386/hvf-utils/x86_decode.h @@ -23,6 +23,7 @@ #include #include "qemu-common.h" #include "x86.h" +#include "cpu.h" =20 typedef enum x86_prefix { /* group 1 */ @@ -304,21 +305,21 @@ typedef struct x86_decode { =20 uint64_t sign(uint64_t val, int size); =20 -uint32_t decode_instruction(CPUState *cpu, struct x86_decode *decode); +uint32_t decode_instruction(CPUX86State *env, struct x86_decode *decode); =20 -addr_t get_reg_ref(CPUState *cpu, int reg, int is_extended, int size); -addr_t get_reg_val(CPUState *cpu, int reg, int is_extended, int size); -void calc_modrm_operand(CPUState *cpu, struct x86_decode *decode, +addr_t get_reg_ref(CPUX86State *env, int reg, int is_extended, int size); +addr_t get_reg_val(CPUX86State *env, int reg, int is_extended, int size); +void calc_modrm_operand(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op); -addr_t decode_linear_addr(struct CPUState *cpu, struct x86_decode *decode, +addr_t decode_linear_addr(CPUX86State *env, struct x86_decode *decode, addr_t addr, x86_reg_segment seg); =20 -void init_decoder(CPUState *cpu); -void calc_modrm_operand16(CPUState *cpu, struct x86_decode *decode, +void init_decoder(void); +void calc_modrm_operand16(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op); -void calc_modrm_operand32(CPUState *cpu, struct x86_decode *decode, +void calc_modrm_operand32(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op); -void calc_modrm_operand64(CPUState *cpu, struct x86_decode *decode, +void calc_modrm_operand64(CPUX86State *env, struct x86_decode *decode, struct x86_decode_op *op); -void set_addressing_size(CPUState *cpu, struct x86_decode *decode); -void set_operand_size(CPUState *cpu, struct x86_decode *decode); +void set_addressing_size(CPUX86State *env, struct x86_decode *decode); +void set_operand_size(CPUX86State *env, struct x86_decode *decode); diff --git a/target/i386/hvf-utils/x86_emu.c b/target/i386/hvf-utils/x86_em= u.c index 861319d17e..73c028aafc 100644 --- a/target/i386/hvf-utils/x86_emu.c +++ b/target/i386/hvf-utils/x86_emu.c @@ -42,15 +42,16 @@ #include "x86.h" #include "x86_emu.h" #include "x86_mmu.h" +#include "x86_flags.h" #include "vmcs.h" #include "vmx.h" =20 void hvf_handle_io(struct CPUState *cpu, uint16_t port, void *data, int direction, int size, uint32_t count); =20 -#define EXEC_2OP_LOGIC_CMD(cpu, decode, cmd, FLAGS_FUNC, save_res) \ +#define EXEC_2OP_LOGIC_CMD(env, decode, cmd, FLAGS_FUNC, save_res) \ { \ - fetch_operands(cpu, decode, 2, true, true, false); \ + fetch_operands(env, decode, 2, true, true, false); \ switch (decode->operand_size) { \ case 1: \ { \ @@ -58,7 +59,7 @@ void hvf_handle_io(struct CPUState *cpu, uint16_t port, v= oid *data, uint8_t v2 =3D (uint8_t)decode->op[1].val; \ uint8_t diff =3D v1 cmd v2; \ if (save_res) { \ - write_val_ext(cpu, decode->op[0].ptr, diff, 1); \ + write_val_ext(env, decode->op[0].ptr, diff, 1); \ } \ FLAGS_FUNC##_8(diff); \ break; \ @@ -69,7 +70,7 @@ void hvf_handle_io(struct CPUState *cpu, uint16_t port, v= oid *data, uint16_t v2 =3D (uint16_t)decode->op[1].val; \ uint16_t diff =3D v1 cmd v2; \ if (save_res) { \ - 
write_val_ext(cpu, decode->op[0].ptr, diff, 2); \ + write_val_ext(env, decode->op[0].ptr, diff, 2); \ } \ FLAGS_FUNC##_16(diff); \ break; \ @@ -80,7 +81,7 @@ void hvf_handle_io(struct CPUState *cpu, uint16_t port, v= oid *data, uint32_t v2 =3D (uint32_t)decode->op[1].val; \ uint32_t diff =3D v1 cmd v2; \ if (save_res) { \ - write_val_ext(cpu, decode->op[0].ptr, diff, 4); \ + write_val_ext(env, decode->op[0].ptr, diff, 4); \ } \ FLAGS_FUNC##_32(diff); \ break; \ @@ -91,9 +92,9 @@ void hvf_handle_io(struct CPUState *cpu, uint16_t port, v= oid *data, } \ =20 =20 -#define EXEC_2OP_ARITH_CMD(cpu, decode, cmd, FLAGS_FUNC, save_res) \ +#define EXEC_2OP_ARITH_CMD(env, decode, cmd, FLAGS_FUNC, save_res) \ { \ - fetch_operands(cpu, decode, 2, true, true, false); \ + fetch_operands(env, decode, 2, true, true, false); \ switch (decode->operand_size) { \ case 1: \ { \ @@ -101,7 +102,7 @@ void hvf_handle_io(struct CPUState *cpu, uint16_t port,= void *data, uint8_t v2 =3D (uint8_t)decode->op[1].val; \ uint8_t diff =3D v1 cmd v2; \ if (save_res) { \ - write_val_ext(cpu, decode->op[0].ptr, diff, 1); \ + write_val_ext(env, decode->op[0].ptr, diff, 1); \ } \ FLAGS_FUNC##_8(v1, v2, diff); \ break; \ @@ -112,7 +113,7 @@ void hvf_handle_io(struct CPUState *cpu, uint16_t port,= void *data, uint16_t v2 =3D (uint16_t)decode->op[1].val; \ uint16_t diff =3D v1 cmd v2; \ if (save_res) { \ - write_val_ext(cpu, decode->op[0].ptr, diff, 2); \ + write_val_ext(env, decode->op[0].ptr, diff, 2); \ } \ FLAGS_FUNC##_16(v1, v2, diff); \ break; \ @@ -123,7 +124,7 @@ void hvf_handle_io(struct CPUState *cpu, uint16_t port,= void *data, uint32_t v2 =3D (uint32_t)decode->op[1].val; \ uint32_t diff =3D v1 cmd v2; \ if (save_res) { \ - write_val_ext(cpu, decode->op[0].ptr, diff, 4); \ + write_val_ext(env, decode->op[0].ptr, diff, 4); \ } \ FLAGS_FUNC##_32(v1, v2, diff); \ break; \ @@ -133,37 +134,37 @@ void hvf_handle_io(struct CPUState *cpu, uint16_t por= t, void *data, } \ } =20 -addr_t read_reg(struct CPUState *cpu, int reg, int size) +addr_t read_reg(CPUX86State *env, int reg, int size) { switch (size) { case 1: - return cpu->hvf_x86->regs[reg].lx; + return env->hvf_emul->regs[reg].lx; case 2: - return cpu->hvf_x86->regs[reg].rx; + return env->hvf_emul->regs[reg].rx; case 4: - return cpu->hvf_x86->regs[reg].erx; + return env->hvf_emul->regs[reg].erx; case 8: - return cpu->hvf_x86->regs[reg].rrx; + return env->hvf_emul->regs[reg].rrx; default: VM_PANIC_ON("read_reg size"); } return 0; } =20 -void write_reg(struct CPUState *cpu, int reg, addr_t val, int size) +void write_reg(CPUX86State *env, int reg, addr_t val, int size) { switch (size) { case 1: - cpu->hvf_x86->regs[reg].lx =3D val; + env->hvf_emul->regs[reg].lx =3D val; break; case 2: - cpu->hvf_x86->regs[reg].rx =3D val; + env->hvf_emul->regs[reg].rx =3D val; break; case 4: - cpu->hvf_x86->regs[reg].rrx =3D (uint32_t)val; + env->hvf_emul->regs[reg].rrx =3D (uint32_t)val; break; case 8: - cpu->hvf_x86->regs[reg].rrx =3D val; + env->hvf_emul->regs[reg].rrx =3D val; break; default: VM_PANIC_ON("write_reg size"); @@ -215,38 +216,36 @@ void write_val_to_reg(addr_t reg_ptr, addr_t val, int= size) } } =20 -static bool is_host_reg(struct CPUState *cpu, addr_t ptr) +static bool is_host_reg(struct CPUX86State *env, addr_t ptr) { - return (ptr > (addr_t)cpu && ptr < (addr_t)cpu + sizeof(struct CPUStat= e)) || - (ptr > (addr_t)cpu->hvf_x86 && ptr < - (addr_t)(cpu->hvf_x86 + sizeof(struct hvf_x86_state))); + return (ptr - (addr_t)&env->hvf_emul->regs[0]) < sizeof(env->hvf_emul-= >regs); } 
=20 -void write_val_ext(struct CPUState *cpu, addr_t ptr, addr_t val, int size) +void write_val_ext(struct CPUX86State *env, addr_t ptr, addr_t val, int si= ze) { - if (is_host_reg(cpu, ptr)) { + if (is_host_reg(env, ptr)) { write_val_to_reg(ptr, val, size); return; } - vmx_write_mem(cpu, ptr, &val, size); + vmx_write_mem(ENV_GET_CPU(env), ptr, &val, size); } =20 -uint8_t *read_mmio(struct CPUState *cpu, addr_t ptr, int bytes) +uint8_t *read_mmio(struct CPUX86State *env, addr_t ptr, int bytes) { - vmx_read_mem(cpu, cpu->hvf_x86->mmio_buf, ptr, bytes); - return cpu->hvf_x86->mmio_buf; + vmx_read_mem(ENV_GET_CPU(env), env->hvf_emul->mmio_buf, ptr, bytes); + return env->hvf_emul->mmio_buf; } =20 -addr_t read_val_ext(struct CPUState *cpu, addr_t ptr, int size) +addr_t read_val_ext(struct CPUX86State *env, addr_t ptr, int size) { addr_t val; uint8_t *mmio_ptr; =20 - if (is_host_reg(cpu, ptr)) { + if (is_host_reg(env, ptr)) { return read_val_from_reg(ptr, size); } =20 - mmio_ptr =3D read_mmio(cpu, ptr, size); + mmio_ptr =3D read_mmio(env, ptr, size); switch (size) { case 1: val =3D *(uint8_t *)mmio_ptr; @@ -267,7 +266,7 @@ addr_t read_val_ext(struct CPUState *cpu, addr_t ptr, i= nt size) return val; } =20 -static void fetch_operands(struct CPUState *cpu, struct x86_decode *decode, +static void fetch_operands(struct CPUX86State *env, struct x86_decode *dec= ode, int n, bool val_op0, bool val_op1, bool val_op2) { int i; @@ -285,18 +284,18 @@ static void fetch_operands(struct CPUState *cpu, stru= ct x86_decode *decode, } break; case X86_VAR_RM: - calc_modrm_operand(cpu, decode, &decode->op[i]); + calc_modrm_operand(env, decode, &decode->op[i]); if (calc_val[i]) { - decode->op[i].val =3D read_val_ext(cpu, decode->op[i].ptr, + decode->op[i].val =3D read_val_ext(env, decode->op[i].ptr, decode->operand_size); } break; case X86_VAR_OFFSET: - decode->op[i].ptr =3D decode_linear_addr(cpu, decode, + decode->op[i].ptr =3D decode_linear_addr(env, decode, decode->op[i].ptr, REG_SEG_DS); if (calc_val[i]) { - decode->op[i].val =3D read_val_ext(cpu, decode->op[i].ptr, + decode->op[i].val =3D read_val_ext(env, decode->op[i].ptr, decode->operand_size); } break; @@ -306,65 +305,65 @@ static void fetch_operands(struct CPUState *cpu, stru= ct x86_decode *decode, } } =20 -static void exec_mov(struct CPUState *cpu, struct x86_decode *decode) +static void exec_mov(struct CPUX86State *env, struct x86_decode *decode) { - fetch_operands(cpu, decode, 2, false, true, false); - write_val_ext(cpu, decode->op[0].ptr, decode->op[1].val, + fetch_operands(env, decode, 2, false, true, false); + write_val_ext(env, decode->op[0].ptr, decode->op[1].val, decode->operand_size); =20 - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 -static void exec_add(struct CPUState *cpu, struct x86_decode *decode) +static void exec_add(struct CPUX86State *env, struct x86_decode *decode) { - EXEC_2OP_ARITH_CMD(cpu, decode, +, SET_FLAGS_OSZAPC_ADD, true); - RIP(cpu) +=3D decode->len; + EXEC_2OP_ARITH_CMD(env, decode, +, SET_FLAGS_OSZAPC_ADD, true); + RIP(env) +=3D decode->len; } =20 -static void exec_or(struct CPUState *cpu, struct x86_decode *decode) +static void exec_or(struct CPUX86State *env, struct x86_decode *decode) { - EXEC_2OP_LOGIC_CMD(cpu, decode, |, SET_FLAGS_OSZAPC_LOGIC, true); - RIP(cpu) +=3D decode->len; + EXEC_2OP_LOGIC_CMD(env, decode, |, SET_FLAGS_OSZAPC_LOGIC, true); + RIP(env) +=3D decode->len; } =20 -static void exec_adc(struct CPUState *cpu, struct x86_decode *decode) +static void exec_adc(struct CPUX86State *env, 
struct x86_decode *decode) { - EXEC_2OP_ARITH_CMD(cpu, decode, +get_CF(cpu)+, SET_FLAGS_OSZAPC_ADD, t= rue); - RIP(cpu) +=3D decode->len; + EXEC_2OP_ARITH_CMD(env, decode, +get_CF(env)+, SET_FLAGS_OSZAPC_ADD, t= rue); + RIP(env) +=3D decode->len; } =20 -static void exec_sbb(struct CPUState *cpu, struct x86_decode *decode) +static void exec_sbb(struct CPUX86State *env, struct x86_decode *decode) { - EXEC_2OP_ARITH_CMD(cpu, decode, -get_CF(cpu)-, SET_FLAGS_OSZAPC_SUB, t= rue); - RIP(cpu) +=3D decode->len; + EXEC_2OP_ARITH_CMD(env, decode, -get_CF(env)-, SET_FLAGS_OSZAPC_SUB, t= rue); + RIP(env) +=3D decode->len; } =20 -static void exec_and(struct CPUState *cpu, struct x86_decode *decode) +static void exec_and(struct CPUX86State *env, struct x86_decode *decode) { - EXEC_2OP_LOGIC_CMD(cpu, decode, &, SET_FLAGS_OSZAPC_LOGIC, true); - RIP(cpu) +=3D decode->len; + EXEC_2OP_LOGIC_CMD(env, decode, &, SET_FLAGS_OSZAPC_LOGIC, true); + RIP(env) +=3D decode->len; } =20 -static void exec_sub(struct CPUState *cpu, struct x86_decode *decode) +static void exec_sub(struct CPUX86State *env, struct x86_decode *decode) { - EXEC_2OP_ARITH_CMD(cpu, decode, -, SET_FLAGS_OSZAPC_SUB, true); - RIP(cpu) +=3D decode->len; + EXEC_2OP_ARITH_CMD(env, decode, -, SET_FLAGS_OSZAPC_SUB, true); + RIP(env) +=3D decode->len; } =20 -static void exec_xor(struct CPUState *cpu, struct x86_decode *decode) +static void exec_xor(struct CPUX86State *env, struct x86_decode *decode) { - EXEC_2OP_LOGIC_CMD(cpu, decode, ^, SET_FLAGS_OSZAPC_LOGIC, true); - RIP(cpu) +=3D decode->len; + EXEC_2OP_LOGIC_CMD(env, decode, ^, SET_FLAGS_OSZAPC_LOGIC, true); + RIP(env) +=3D decode->len; } =20 -static void exec_neg(struct CPUState *cpu, struct x86_decode *decode) +static void exec_neg(struct CPUX86State *env, struct x86_decode *decode) { - /*EXEC_2OP_ARITH_CMD(cpu, decode, -, SET_FLAGS_OSZAPC_SUB, false);*/ + /*EXEC_2OP_ARITH_CMD(env, decode, -, SET_FLAGS_OSZAPC_SUB, false);*/ int32_t val; - fetch_operands(cpu, decode, 2, true, true, false); + fetch_operands(env, decode, 2, true, true, false); =20 val =3D 0 - sign(decode->op[1].val, decode->operand_size); - write_val_ext(cpu, decode->op[1].ptr, val, decode->operand_size); + write_val_ext(env, decode->op[1].ptr, val, decode->operand_size); =20 if (4 =3D=3D decode->operand_size) { SET_FLAGS_OSZAPC_SUB_32(0, 0 - val, val); @@ -376,56 +375,56 @@ static void exec_neg(struct CPUState *cpu, struct x86= _decode *decode) VM_PANIC("bad op size\n"); } =20 - /*lflags_to_rflags(cpu);*/ - RIP(cpu) +=3D decode->len; + /*lflags_to_rflags(env);*/ + RIP(env) +=3D decode->len; } =20 -static void exec_cmp(struct CPUState *cpu, struct x86_decode *decode) +static void exec_cmp(struct CPUX86State *env, struct x86_decode *decode) { - EXEC_2OP_ARITH_CMD(cpu, decode, -, SET_FLAGS_OSZAPC_SUB, false); - RIP(cpu) +=3D decode->len; + EXEC_2OP_ARITH_CMD(env, decode, -, SET_FLAGS_OSZAPC_SUB, false); + RIP(env) +=3D decode->len; } =20 -static void exec_inc(struct CPUState *cpu, struct x86_decode *decode) +static void exec_inc(struct CPUX86State *env, struct x86_decode *decode) { decode->op[1].type =3D X86_VAR_IMMEDIATE; decode->op[1].val =3D 0; =20 - EXEC_2OP_ARITH_CMD(cpu, decode, +1+, SET_FLAGS_OSZAP_ADD, true); + EXEC_2OP_ARITH_CMD(env, decode, +1+, SET_FLAGS_OSZAP_ADD, true); =20 - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 -static void exec_dec(struct CPUState *cpu, struct x86_decode *decode) +static void exec_dec(struct CPUX86State *env, struct x86_decode *decode) { decode->op[1].type =3D X86_VAR_IMMEDIATE; 
decode->op[1].val =3D 0; =20 - EXEC_2OP_ARITH_CMD(cpu, decode, -1-, SET_FLAGS_OSZAP_SUB, true); - RIP(cpu) +=3D decode->len; + EXEC_2OP_ARITH_CMD(env, decode, -1-, SET_FLAGS_OSZAP_SUB, true); + RIP(env) +=3D decode->len; } =20 -static void exec_tst(struct CPUState *cpu, struct x86_decode *decode) +static void exec_tst(struct CPUX86State *env, struct x86_decode *decode) { - EXEC_2OP_LOGIC_CMD(cpu, decode, &, SET_FLAGS_OSZAPC_LOGIC, false); - RIP(cpu) +=3D decode->len; + EXEC_2OP_LOGIC_CMD(env, decode, &, SET_FLAGS_OSZAPC_LOGIC, false); + RIP(env) +=3D decode->len; } =20 -static void exec_not(struct CPUState *cpu, struct x86_decode *decode) +static void exec_not(struct CPUX86State *env, struct x86_decode *decode) { - fetch_operands(cpu, decode, 1, true, false, false); + fetch_operands(env, decode, 1, true, false, false); =20 - write_val_ext(cpu, decode->op[0].ptr, ~decode->op[0].val, + write_val_ext(env, decode->op[0].ptr, ~decode->op[0].val, decode->operand_size); - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 -void exec_movzx(struct CPUState *cpu, struct x86_decode *decode) +void exec_movzx(struct CPUX86State *env, struct x86_decode *decode) { int src_op_size; int op_size =3D decode->operand_size; =20 - fetch_operands(cpu, decode, 1, false, false, false); + fetch_operands(env, decode, 1, false, false, false); =20 if (0xb6 =3D=3D decode->opcode[1]) { src_op_size =3D 1; @@ -433,60 +432,60 @@ void exec_movzx(struct CPUState *cpu, struct x86_deco= de *decode) src_op_size =3D 2; } decode->operand_size =3D src_op_size; - calc_modrm_operand(cpu, decode, &decode->op[1]); - decode->op[1].val =3D read_val_ext(cpu, decode->op[1].ptr, src_op_size= ); - write_val_ext(cpu, decode->op[0].ptr, decode->op[1].val, op_size); + calc_modrm_operand(env, decode, &decode->op[1]); + decode->op[1].val =3D read_val_ext(env, decode->op[1].ptr, src_op_size= ); + write_val_ext(env, decode->op[0].ptr, decode->op[1].val, op_size); =20 - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 -static void exec_out(struct CPUState *cpu, struct x86_decode *decode) +static void exec_out(struct CPUX86State *env, struct x86_decode *decode) { switch (decode->opcode[0]) { case 0xe6: - hvf_handle_io(cpu, decode->op[0].val, &AL(cpu), 1, 1, 1); + hvf_handle_io(ENV_GET_CPU(env), decode->op[0].val, &AL(env), 1, 1,= 1); break; case 0xe7: - hvf_handle_io(cpu, decode->op[0].val, &RAX(cpu), 1, + hvf_handle_io(ENV_GET_CPU(env), decode->op[0].val, &RAX(env), 1, decode->operand_size, 1); break; case 0xee: - hvf_handle_io(cpu, DX(cpu), &AL(cpu), 1, 1, 1); + hvf_handle_io(ENV_GET_CPU(env), DX(env), &AL(env), 1, 1, 1); break; case 0xef: - hvf_handle_io(cpu, DX(cpu), &RAX(cpu), 1, decode->operand_size, 1); + hvf_handle_io(ENV_GET_CPU(env), DX(env), &RAX(env), 1, decode->ope= rand_size, 1); break; default: VM_PANIC("Bad out opcode\n"); break; } - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 -static void exec_in(struct CPUState *cpu, struct x86_decode *decode) +static void exec_in(struct CPUX86State *env, struct x86_decode *decode) { addr_t val =3D 0; switch (decode->opcode[0]) { case 0xe4: - hvf_handle_io(cpu, decode->op[0].val, &AL(cpu), 0, 1, 1); + hvf_handle_io(ENV_GET_CPU(env), decode->op[0].val, &AL(env), 0, 1,= 1); break; case 0xe5: - hvf_handle_io(cpu, decode->op[0].val, &val, 0, decode->operand_siz= e, 1); + hvf_handle_io(ENV_GET_CPU(env), decode->op[0].val, &val, 0, decode= ->operand_size, 1); if (decode->operand_size =3D=3D 2) { - AX(cpu) =3D val; + AX(env) =3D val; } else { - RAX(cpu) =3D 
(uint32_t)val; + RAX(env) =3D (uint32_t)val; } break; case 0xec: - hvf_handle_io(cpu, DX(cpu), &AL(cpu), 0, 1, 1); + hvf_handle_io(ENV_GET_CPU(env), DX(env), &AL(env), 0, 1, 1); break; case 0xed: - hvf_handle_io(cpu, DX(cpu), &val, 0, decode->operand_size, 1); + hvf_handle_io(ENV_GET_CPU(env), DX(env), &val, 0, decode->operand_= size, 1); if (decode->operand_size =3D=3D 2) { - AX(cpu) =3D val; + AX(env) =3D val; } else { - RAX(cpu) =3D (uint32_t)val; + RAX(env) =3D (uint32_t)val; } =20 break; @@ -495,212 +494,212 @@ static void exec_in(struct CPUState *cpu, struct x8= 6_decode *decode) break; } =20 - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 -static inline void string_increment_reg(struct CPUState *cpu, int reg, +static inline void string_increment_reg(struct CPUX86State *env, int reg, struct x86_decode *decode) { - addr_t val =3D read_reg(cpu, reg, decode->addressing_size); - if (cpu->hvf_x86->rflags.df) { + addr_t val =3D read_reg(env, reg, decode->addressing_size); + if (env->hvf_emul->rflags.df) { val -=3D decode->operand_size; } else { val +=3D decode->operand_size; } - write_reg(cpu, reg, val, decode->addressing_size); + write_reg(env, reg, val, decode->addressing_size); } =20 -static inline void string_rep(struct CPUState *cpu, struct x86_decode *dec= ode, - void (*func)(struct CPUState *cpu, +static inline void string_rep(struct CPUX86State *env, struct x86_decode *= decode, + void (*func)(struct CPUX86State *env, struct x86_decode *ins), int re= p) { - addr_t rcx =3D read_reg(cpu, REG_RCX, decode->addressing_size); + addr_t rcx =3D read_reg(env, REG_RCX, decode->addressing_size); while (rcx--) { - func(cpu, decode); - write_reg(cpu, REG_RCX, rcx, decode->addressing_size); - if ((PREFIX_REP =3D=3D rep) && !get_ZF(cpu)) { + func(env, decode); + write_reg(env, REG_RCX, rcx, decode->addressing_size); + if ((PREFIX_REP =3D=3D rep) && !get_ZF(env)) { break; } - if ((PREFIX_REPN =3D=3D rep) && get_ZF(cpu)) { + if ((PREFIX_REPN =3D=3D rep) && get_ZF(env)) { break; } } } =20 -static void exec_ins_single(struct CPUState *cpu, struct x86_decode *decod= e) +static void exec_ins_single(struct CPUX86State *env, struct x86_decode *de= code) { - addr_t addr =3D linear_addr_size(cpu, RDI(cpu), decode->addressing_siz= e, + addr_t addr =3D linear_addr_size(ENV_GET_CPU(env), RDI(env), decode->a= ddressing_size, REG_SEG_ES); =20 - hvf_handle_io(cpu, DX(cpu), cpu->hvf_x86->mmio_buf, 0, + hvf_handle_io(ENV_GET_CPU(env), DX(env), env->hvf_emul->mmio_buf, 0, decode->operand_size, 1); - vmx_write_mem(cpu, addr, cpu->hvf_x86->mmio_buf, decode->operand_size); + vmx_write_mem(ENV_GET_CPU(env), addr, env->hvf_emul->mmio_buf, decode-= >operand_size); =20 - string_increment_reg(cpu, REG_RDI, decode); + string_increment_reg(env, REG_RDI, decode); } =20 -static void exec_ins(struct CPUState *cpu, struct x86_decode *decode) +static void exec_ins(struct CPUX86State *env, struct x86_decode *decode) { if (decode->rep) { - string_rep(cpu, decode, exec_ins_single, 0); + string_rep(env, decode, exec_ins_single, 0); } else { - exec_ins_single(cpu, decode); + exec_ins_single(env, decode); } =20 - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 -static void exec_outs_single(struct CPUState *cpu, struct x86_decode *deco= de) +static void exec_outs_single(struct CPUX86State *env, struct x86_decode *d= ecode) { - addr_t addr =3D decode_linear_addr(cpu, decode, RSI(cpu), REG_SEG_DS); + addr_t addr =3D decode_linear_addr(env, decode, RSI(env), REG_SEG_DS); =20 - vmx_read_mem(cpu, 
cpu->hvf_x86->mmio_buf, addr, decode->operand_size); - hvf_handle_io(cpu, DX(cpu), cpu->hvf_x86->mmio_buf, 1, + vmx_read_mem(ENV_GET_CPU(env), env->hvf_emul->mmio_buf, addr, decode->= operand_size); + hvf_handle_io(ENV_GET_CPU(env), DX(env), env->hvf_emul->mmio_buf, 1, decode->operand_size, 1); =20 - string_increment_reg(cpu, REG_RSI, decode); + string_increment_reg(env, REG_RSI, decode); } =20 -static void exec_outs(struct CPUState *cpu, struct x86_decode *decode) +static void exec_outs(struct CPUX86State *env, struct x86_decode *decode) { if (decode->rep) { - string_rep(cpu, decode, exec_outs_single, 0); + string_rep(env, decode, exec_outs_single, 0); } else { - exec_outs_single(cpu, decode); + exec_outs_single(env, decode); } =20 - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 -static void exec_movs_single(struct CPUState *cpu, struct x86_decode *deco= de) +static void exec_movs_single(struct CPUX86State *env, struct x86_decode *d= ecode) { addr_t src_addr; addr_t dst_addr; addr_t val; - =20 - src_addr =3D decode_linear_addr(cpu, decode, RSI(cpu), REG_SEG_DS); - =20 - dst_addr =3D linear_addr_size(cpu, RDI(cpu), decode->addressing_size, + + src_addr =3D decode_linear_addr(env, decode, RSI(env), REG_SEG_DS); + dst_addr =3D linear_addr_size(ENV_GET_CPU(env), RDI(env), decode->addr= essing_size, REG_SEG_ES); - val =3D read_val_ext(cpu, src_addr, decode->operand_size); - write_val_ext(cpu, dst_addr, val, decode->operand_size); =20 - string_increment_reg(cpu, REG_RSI, decode); - string_increment_reg(cpu, REG_RDI, decode); + val =3D read_val_ext(env, src_addr, decode->operand_size); + write_val_ext(env, dst_addr, val, decode->operand_size); + + string_increment_reg(env, REG_RSI, decode); + string_increment_reg(env, REG_RDI, decode); } =20 -static void exec_movs(struct CPUState *cpu, struct x86_decode *decode) +static void exec_movs(struct CPUX86State *env, struct x86_decode *decode) { if (decode->rep) { - string_rep(cpu, decode, exec_movs_single, 0); + string_rep(env, decode, exec_movs_single, 0); } else { - exec_movs_single(cpu, decode); + exec_movs_single(env, decode); } =20 - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 -static void exec_cmps_single(struct CPUState *cpu, struct x86_decode *deco= de) +static void exec_cmps_single(struct CPUX86State *env, struct x86_decode *d= ecode) { addr_t src_addr; addr_t dst_addr; =20 - src_addr =3D decode_linear_addr(cpu, decode, RSI(cpu), REG_SEG_DS); - dst_addr =3D linear_addr_size(cpu, RDI(cpu), decode->addressing_size, + src_addr =3D decode_linear_addr(env, decode, RSI(env), REG_SEG_DS); + dst_addr =3D linear_addr_size(ENV_GET_CPU(env), RDI(env), decode->addr= essing_size, REG_SEG_ES); =20 decode->op[0].type =3D X86_VAR_IMMEDIATE; - decode->op[0].val =3D read_val_ext(cpu, src_addr, decode->operand_size= ); + decode->op[0].val =3D read_val_ext(env, src_addr, decode->operand_size= ); decode->op[1].type =3D X86_VAR_IMMEDIATE; - decode->op[1].val =3D read_val_ext(cpu, dst_addr, decode->operand_size= ); + decode->op[1].val =3D read_val_ext(env, dst_addr, decode->operand_size= ); =20 - EXEC_2OP_ARITH_CMD(cpu, decode, -, SET_FLAGS_OSZAPC_SUB, false); + EXEC_2OP_ARITH_CMD(env, decode, -, SET_FLAGS_OSZAPC_SUB, false); =20 - string_increment_reg(cpu, REG_RSI, decode); - string_increment_reg(cpu, REG_RDI, decode); + string_increment_reg(env, REG_RSI, decode); + string_increment_reg(env, REG_RDI, decode); } =20 -static void exec_cmps(struct CPUState *cpu, struct x86_decode *decode) +static void exec_cmps(struct CPUX86State 
*env, struct x86_decode *decode) { if (decode->rep) { - string_rep(cpu, decode, exec_cmps_single, decode->rep); + string_rep(env, decode, exec_cmps_single, decode->rep); } else { - exec_cmps_single(cpu, decode); + exec_cmps_single(env, decode); } - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 =20 -static void exec_stos_single(struct CPUState *cpu, struct x86_decode *deco= de) +static void exec_stos_single(struct CPUX86State *env, struct x86_decode *d= ecode) { addr_t addr; addr_t val; =20 - addr =3D linear_addr_size(cpu, RDI(cpu), decode->addressing_size, REG_= SEG_ES); - val =3D read_reg(cpu, REG_RAX, decode->operand_size); - vmx_write_mem(cpu, addr, &val, decode->operand_size); + addr =3D linear_addr_size(ENV_GET_CPU(env), RDI(env), decode->addressi= ng_size, REG_SEG_ES); + val =3D read_reg(env, REG_RAX, decode->operand_size); + vmx_write_mem(ENV_GET_CPU(env), addr, &val, decode->operand_size); =20 - string_increment_reg(cpu, REG_RDI, decode); + string_increment_reg(env, REG_RDI, decode); } =20 =20 -static void exec_stos(struct CPUState *cpu, struct x86_decode *decode) +static void exec_stos(struct CPUX86State *env, struct x86_decode *decode) { if (decode->rep) { - string_rep(cpu, decode, exec_stos_single, 0); + string_rep(env, decode, exec_stos_single, 0); } else { - exec_stos_single(cpu, decode); + exec_stos_single(env, decode); } =20 - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 -static void exec_scas_single(struct CPUState *cpu, struct x86_decode *deco= de) +static void exec_scas_single(struct CPUX86State *env, struct x86_decode *d= ecode) { addr_t addr; - =20 - addr =3D linear_addr_size(cpu, RDI(cpu), decode->addressing_size, REG_= SEG_ES); + + addr =3D linear_addr_size(ENV_GET_CPU(env), RDI(env), decode->addressi= ng_size, REG_SEG_ES); decode->op[1].type =3D X86_VAR_IMMEDIATE; - vmx_read_mem(cpu, &decode->op[1].val, addr, decode->operand_size); + vmx_read_mem(ENV_GET_CPU(env), &decode->op[1].val, addr, decode->opera= nd_size); =20 - EXEC_2OP_ARITH_CMD(cpu, decode, -, SET_FLAGS_OSZAPC_SUB, false); - string_increment_reg(cpu, REG_RDI, decode); + EXEC_2OP_ARITH_CMD(env, decode, -, SET_FLAGS_OSZAPC_SUB, false); + string_increment_reg(env, REG_RDI, decode); } =20 -static void exec_scas(struct CPUState *cpu, struct x86_decode *decode) +static void exec_scas(struct CPUX86State *env, struct x86_decode *decode) { decode->op[0].type =3D X86_VAR_REG; decode->op[0].reg =3D REG_RAX; if (decode->rep) { - string_rep(cpu, decode, exec_scas_single, decode->rep); + string_rep(env, decode, exec_scas_single, decode->rep); } else { - exec_scas_single(cpu, decode); + exec_scas_single(env, decode); } =20 - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 -static void exec_lods_single(struct CPUState *cpu, struct x86_decode *deco= de) +static void exec_lods_single(struct CPUX86State *env, struct x86_decode *d= ecode) { addr_t addr; addr_t val =3D 0; - =20 - addr =3D decode_linear_addr(cpu, decode, RSI(cpu), REG_SEG_DS); - vmx_read_mem(cpu, &val, addr, decode->operand_size); - write_reg(cpu, REG_RAX, val, decode->operand_size); =20 - string_increment_reg(cpu, REG_RSI, decode); + addr =3D decode_linear_addr(env, decode, RSI(env), REG_SEG_DS); + vmx_read_mem(ENV_GET_CPU(env), &val, addr, decode->operand_size); + write_reg(env, REG_RAX, val, decode->operand_size); + + string_increment_reg(env, REG_RSI, decode); } =20 -static void exec_lods(struct CPUState *cpu, struct x86_decode *decode) +static void exec_lods(struct CPUX86State *env, struct x86_decode *decode) 
{ if (decode->rep) { - string_rep(cpu, decode, exec_lods_single, 0); + string_rep(env, decode, exec_lods_single, 0); } else { - exec_lods_single(cpu, decode); + exec_lods_single(env, decode); } =20 - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 #define MSR_IA32_UCODE_REV 0x00000017 @@ -709,7 +708,7 @@ void simulate_rdmsr(struct CPUState *cpu) { X86CPU *x86_cpu =3D X86_CPU(cpu); CPUX86State *env =3D &x86_cpu->env; - uint32_t msr =3D ECX(cpu); + uint32_t msr =3D ECX(env); uint64_t val =3D 0; =20 switch (msr) { @@ -754,7 +753,7 @@ void simulate_rdmsr(struct CPUState *cpu) case MSR_MTRRphysBase(5): case MSR_MTRRphysBase(6): case MSR_MTRRphysBase(7): - val =3D env->mtrr_var[(ECX(cpu) - MSR_MTRRphysBase(0)) / 2].base; + val =3D env->mtrr_var[(ECX(env) - MSR_MTRRphysBase(0)) / 2].base; break; case MSR_MTRRphysMask(0): case MSR_MTRRphysMask(1): @@ -764,14 +763,14 @@ void simulate_rdmsr(struct CPUState *cpu) case MSR_MTRRphysMask(5): case MSR_MTRRphysMask(6): case MSR_MTRRphysMask(7): - val =3D env->mtrr_var[(ECX(cpu) - MSR_MTRRphysMask(0)) / 2].mask; + val =3D env->mtrr_var[(ECX(env) - MSR_MTRRphysMask(0)) / 2].mask; break; case MSR_MTRRfix64K_00000: val =3D env->mtrr_fixed[0]; break; case MSR_MTRRfix16K_80000: case MSR_MTRRfix16K_A0000: - val =3D env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix16K_80000 + 1]; + val =3D env->mtrr_fixed[ECX(env) - MSR_MTRRfix16K_80000 + 1]; break; case MSR_MTRRfix4K_C0000: case MSR_MTRRfix4K_C8000: @@ -781,7 +780,7 @@ void simulate_rdmsr(struct CPUState *cpu) case MSR_MTRRfix4K_E8000: case MSR_MTRRfix4K_F0000: case MSR_MTRRfix4K_F8000: - val =3D env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix4K_C0000 + 3]; + val =3D env->mtrr_fixed[ECX(env) - MSR_MTRRfix4K_C0000 + 3]; break; case MSR_MTRRdefType: val =3D env->mtrr_deftype; @@ -792,22 +791,22 @@ void simulate_rdmsr(struct CPUState *cpu) break; } =20 - RAX(cpu) =3D (uint32_t)val; - RDX(cpu) =3D (uint32_t)(val >> 32); + RAX(env) =3D (uint32_t)val; + RDX(env) =3D (uint32_t)(val >> 32); } =20 -static void exec_rdmsr(struct CPUState *cpu, struct x86_decode *decode) +static void exec_rdmsr(struct CPUX86State *env, struct x86_decode *decode) { - simulate_rdmsr(cpu); - RIP(cpu) +=3D decode->len; + simulate_rdmsr(ENV_GET_CPU(env)); + RIP(env) +=3D decode->len; } =20 void simulate_wrmsr(struct CPUState *cpu) { X86CPU *x86_cpu =3D X86_CPU(cpu); CPUX86State *env =3D &x86_cpu->env; - uint32_t msr =3D ECX(cpu); - uint64_t data =3D ((uint64_t)EDX(cpu) << 32) | EAX(cpu); + uint32_t msr =3D ECX(env); + uint64_t data =3D ((uint64_t)EDX(env) << 32) | EAX(env); =20 switch (msr) { case MSR_IA32_TSC: @@ -837,7 +836,7 @@ void simulate_wrmsr(struct CPUState *cpu) abort(); break; case MSR_EFER: - cpu->hvf_x86->efer.efer =3D data; + env->hvf_emul->efer.efer =3D data; /*printf("new efer %llx\n", EFER(cpu));*/ wvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER, data); if (data & EFER_NXE) { @@ -852,7 +851,7 @@ void simulate_wrmsr(struct CPUState *cpu) case MSR_MTRRphysBase(5): case MSR_MTRRphysBase(6): case MSR_MTRRphysBase(7): - env->mtrr_var[(ECX(cpu) - MSR_MTRRphysBase(0)) / 2].base =3D data; + env->mtrr_var[(ECX(env) - MSR_MTRRphysBase(0)) / 2].base =3D data; break; case MSR_MTRRphysMask(0): case MSR_MTRRphysMask(1): @@ -862,14 +861,14 @@ void simulate_wrmsr(struct CPUState *cpu) case MSR_MTRRphysMask(5): case MSR_MTRRphysMask(6): case MSR_MTRRphysMask(7): - env->mtrr_var[(ECX(cpu) - MSR_MTRRphysMask(0)) / 2].mask =3D data; + env->mtrr_var[(ECX(env) - MSR_MTRRphysMask(0)) / 2].mask =3D data; break; case MSR_MTRRfix64K_00000: - env->mtrr_fixed[ECX(cpu) 
- MSR_MTRRfix64K_00000] =3D data; + env->mtrr_fixed[ECX(env) - MSR_MTRRfix64K_00000] =3D data; break; case MSR_MTRRfix16K_80000: case MSR_MTRRfix16K_A0000: - env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix16K_80000 + 1] =3D data; + env->mtrr_fixed[ECX(env) - MSR_MTRRfix16K_80000 + 1] =3D data; break; case MSR_MTRRfix4K_C0000: case MSR_MTRRfix4K_C8000: @@ -879,7 +878,7 @@ void simulate_wrmsr(struct CPUState *cpu) case MSR_MTRRfix4K_E8000: case MSR_MTRRfix4K_F0000: case MSR_MTRRfix4K_F8000: - env->mtrr_fixed[ECX(cpu) - MSR_MTRRfix4K_C0000 + 3] =3D data; + env->mtrr_fixed[ECX(env) - MSR_MTRRfix4K_C0000 + 3] =3D data; break; case MSR_MTRRdefType: env->mtrr_deftype =3D data; @@ -895,17 +894,17 @@ void simulate_wrmsr(struct CPUState *cpu) printf("write msr %llx\n", RCX(cpu));*/ } =20 -static void exec_wrmsr(struct CPUState *cpu, struct x86_decode *decode) +static void exec_wrmsr(struct CPUX86State *env, struct x86_decode *decode) { - simulate_wrmsr(cpu); - RIP(cpu) +=3D decode->len; + simulate_wrmsr(ENV_GET_CPU(env)); + RIP(env) +=3D decode->len; } =20 /* * flag: * 0 - bt, 1 - btc, 2 - bts, 3 - btr */ -static void do_bt(struct CPUState *cpu, struct x86_decode *decode, int fla= g) +static void do_bt(struct CPUX86State *env, struct x86_decode *decode, int = flag) { int32_t displacement; uint8_t index; @@ -914,7 +913,7 @@ static void do_bt(struct CPUState *cpu, struct x86_deco= de *decode, int flag) =20 VM_PANIC_ON(decode->rex.rex); =20 - fetch_operands(cpu, decode, 2, false, true, false); + fetch_operands(env, decode, 2, false, true, false); index =3D decode->op[1].val & mask; =20 if (decode->op[0].type !=3D X86_VAR_REG) { @@ -928,13 +927,13 @@ static void do_bt(struct CPUState *cpu, struct x86_de= code *decode, int flag) VM_PANIC("bt 64bit\n"); } } - decode->op[0].val =3D read_val_ext(cpu, decode->op[0].ptr, + decode->op[0].val =3D read_val_ext(env, decode->op[0].ptr, decode->operand_size); cf =3D (decode->op[0].val >> index) & 0x01; =20 switch (flag) { case 0: - set_CF(cpu, cf); + set_CF(env, cf); return; case 1: decode->op[0].val ^=3D (1u << index); @@ -946,41 +945,41 @@ static void do_bt(struct CPUState *cpu, struct x86_de= code *decode, int flag) decode->op[0].val &=3D ~(1u << index); break; } - write_val_ext(cpu, decode->op[0].ptr, decode->op[0].val, + write_val_ext(env, decode->op[0].ptr, decode->op[0].val, decode->operand_size); - set_CF(cpu, cf); + set_CF(env, cf); } =20 -static void exec_bt(struct CPUState *cpu, struct x86_decode *decode) +static void exec_bt(struct CPUX86State *env, struct x86_decode *decode) { - do_bt(cpu, decode, 0); - RIP(cpu) +=3D decode->len; + do_bt(env, decode, 0); + RIP(env) +=3D decode->len; } =20 -static void exec_btc(struct CPUState *cpu, struct x86_decode *decode) +static void exec_btc(struct CPUX86State *env, struct x86_decode *decode) { - do_bt(cpu, decode, 1); - RIP(cpu) +=3D decode->len; + do_bt(env, decode, 1); + RIP(env) +=3D decode->len; } =20 -static void exec_btr(struct CPUState *cpu, struct x86_decode *decode) +static void exec_btr(struct CPUX86State *env, struct x86_decode *decode) { - do_bt(cpu, decode, 3); - RIP(cpu) +=3D decode->len; + do_bt(env, decode, 3); + RIP(env) +=3D decode->len; } =20 -static void exec_bts(struct CPUState *cpu, struct x86_decode *decode) +static void exec_bts(struct CPUX86State *env, struct x86_decode *decode) { - do_bt(cpu, decode, 2); - RIP(cpu) +=3D decode->len; + do_bt(env, decode, 2); + RIP(env) +=3D decode->len; } =20 -void exec_shl(struct CPUState *cpu, struct x86_decode *decode) +void exec_shl(struct CPUX86State *env, 
struct x86_decode *decode) { uint8_t count; int of =3D 0, cf =3D 0; =20 - fetch_operands(cpu, decode, 2, true, true, false); + fetch_operands(env, decode, 2, true, true, false); =20 count =3D decode->op[1].val; count &=3D 0x1f; /* count is masked to 5 bits*/ @@ -998,9 +997,9 @@ void exec_shl(struct CPUState *cpu, struct x86_decode *= decode) of =3D cf ^ (res >> 7); } =20 - write_val_ext(cpu, decode->op[0].ptr, res, 1); + write_val_ext(env, decode->op[0].ptr, res, 1); SET_FLAGS_OSZAPC_LOGIC_8(res); - SET_FLAGS_OxxxxC(cpu, of, cf); + SET_FLAGS_OxxxxC(env, of, cf); break; } case 2: @@ -1014,20 +1013,20 @@ void exec_shl(struct CPUState *cpu, struct x86_deco= de *decode) of =3D cf ^ (res >> 15); /* of =3D cf ^ result15 */ } =20 - write_val_ext(cpu, decode->op[0].ptr, res, 2); + write_val_ext(env, decode->op[0].ptr, res, 2); SET_FLAGS_OSZAPC_LOGIC_16(res); - SET_FLAGS_OxxxxC(cpu, of, cf); + SET_FLAGS_OxxxxC(env, of, cf); break; } case 4: { uint32_t res =3D decode->op[0].val << count; =20 - write_val_ext(cpu, decode->op[0].ptr, res, 4); + write_val_ext(env, decode->op[0].ptr, res, 4); SET_FLAGS_OSZAPC_LOGIC_32(res); cf =3D (decode->op[0].val >> (32 - count)) & 0x1; of =3D cf ^ (res >> 31); /* of =3D cf ^ result31 */ - SET_FLAGS_OxxxxC(cpu, of, cf); + SET_FLAGS_OxxxxC(env, of, cf); break; } default: @@ -1035,16 +1034,16 @@ void exec_shl(struct CPUState *cpu, struct x86_deco= de *decode) } =20 exit: - /* lflags_to_rflags(cpu); */ - RIP(cpu) +=3D decode->len; + /* lflags_to_rflags(env); */ + RIP(env) +=3D decode->len; } =20 -void exec_movsx(struct CPUState *cpu, struct x86_decode *decode) +void exec_movsx(CPUX86State *env, struct x86_decode *decode) { int src_op_size; int op_size =3D decode->operand_size; =20 - fetch_operands(cpu, decode, 2, false, false, false); + fetch_operands(env, decode, 2, false, false, false); =20 if (0xbe =3D=3D decode->opcode[1]) { src_op_size =3D 1; @@ -1053,20 +1052,20 @@ void exec_movsx(struct CPUState *cpu, struct x86_de= code *decode) } =20 decode->operand_size =3D src_op_size; - calc_modrm_operand(cpu, decode, &decode->op[1]); - decode->op[1].val =3D sign(read_val_ext(cpu, decode->op[1].ptr, src_op= _size), + calc_modrm_operand(env, decode, &decode->op[1]); + decode->op[1].val =3D sign(read_val_ext(env, decode->op[1].ptr, src_op= _size), src_op_size); =20 - write_val_ext(cpu, decode->op[0].ptr, decode->op[1].val, op_size); + write_val_ext(env, decode->op[0].ptr, decode->op[1].val, op_size); =20 - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 -void exec_ror(struct CPUState *cpu, struct x86_decode *decode) +void exec_ror(struct CPUX86State *env, struct x86_decode *decode) { uint8_t count; =20 - fetch_operands(cpu, decode, 2, true, true, false); + fetch_operands(env, decode, 2, true, true, false); count =3D decode->op[1].val; =20 switch (decode->operand_size) { @@ -1079,17 +1078,17 @@ void exec_ror(struct CPUState *cpu, struct x86_deco= de *decode) if (count & 0x18) { bit6 =3D ((uint8_t)decode->op[0].val >> 6) & 1; bit7 =3D ((uint8_t)decode->op[0].val >> 7) & 1; - SET_FLAGS_OxxxxC(cpu, bit6 ^ bit7, bit7); + SET_FLAGS_OxxxxC(env, bit6 ^ bit7, bit7); } } else { count &=3D 0x7; /* use only bottom 3 bits */ res =3D ((uint8_t)decode->op[0].val >> count) | ((uint8_t)decode->op[0].val << (8 - count)); - write_val_ext(cpu, decode->op[0].ptr, res, 1); + write_val_ext(env, decode->op[0].ptr, res, 1); bit6 =3D (res >> 6) & 1; bit7 =3D (res >> 7) & 1; /* set eflags: ROR count affects the following flags: C, O */ - SET_FLAGS_OxxxxC(cpu, bit6 ^ bit7, bit7); + 
SET_FLAGS_OxxxxC(env, bit6 ^ bit7, bit7); } break; } @@ -1103,18 +1102,18 @@ void exec_ror(struct CPUState *cpu, struct x86_deco= de *decode) bit14 =3D ((uint16_t)decode->op[0].val >> 14) & 1; bit15 =3D ((uint16_t)decode->op[0].val >> 15) & 1; /* of =3D result14 ^ result15 */ - SET_FLAGS_OxxxxC(cpu, bit14 ^ bit15, bit15); + SET_FLAGS_OxxxxC(env, bit14 ^ bit15, bit15); } } else { count &=3D 0x0f; /* use only 4 LSB's */ res =3D ((uint16_t)decode->op[0].val >> count) | ((uint16_t)decode->op[0].val << (16 - count)); - write_val_ext(cpu, decode->op[0].ptr, res, 2); + write_val_ext(env, decode->op[0].ptr, res, 2); =20 bit14 =3D (res >> 14) & 1; bit15 =3D (res >> 15) & 1; /* of =3D result14 ^ result15 */ - SET_FLAGS_OxxxxC(cpu, bit14 ^ bit15, bit15); + SET_FLAGS_OxxxxC(env, bit14 ^ bit15, bit15); } break; } @@ -1127,24 +1126,24 @@ void exec_ror(struct CPUState *cpu, struct x86_deco= de *decode) if (count) { res =3D ((uint32_t)decode->op[0].val >> count) | ((uint32_t)decode->op[0].val << (32 - count)); - write_val_ext(cpu, decode->op[0].ptr, res, 4); + write_val_ext(env, decode->op[0].ptr, res, 4); =20 bit31 =3D (res >> 31) & 1; bit30 =3D (res >> 30) & 1; /* of =3D result30 ^ result31 */ - SET_FLAGS_OxxxxC(cpu, bit30 ^ bit31, bit31); + SET_FLAGS_OxxxxC(env, bit30 ^ bit31, bit31); } break; } } - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 -void exec_rol(struct CPUState *cpu, struct x86_decode *decode) +void exec_rol(struct CPUX86State *env, struct x86_decode *decode) { uint8_t count; =20 - fetch_operands(cpu, decode, 2, true, true, false); + fetch_operands(env, decode, 2, true, true, false); count =3D decode->op[1].val; =20 switch (decode->operand_size) { @@ -1157,20 +1156,20 @@ void exec_rol(struct CPUState *cpu, struct x86_deco= de *decode) if (count & 0x18) { bit0 =3D ((uint8_t)decode->op[0].val & 1); bit7 =3D ((uint8_t)decode->op[0].val >> 7); - SET_FLAGS_OxxxxC(cpu, bit0 ^ bit7, bit0); + SET_FLAGS_OxxxxC(env, bit0 ^ bit7, bit0); } } else { count &=3D 0x7; /* use only lowest 3 bits */ res =3D ((uint8_t)decode->op[0].val << count) | ((uint8_t)decode->op[0].val >> (8 - count)); =20 - write_val_ext(cpu, decode->op[0].ptr, res, 1); + write_val_ext(env, decode->op[0].ptr, res, 1); /* set eflags: * ROL count affects the following flags: C, O */ bit0 =3D (res & 1); bit7 =3D (res >> 7); - SET_FLAGS_OxxxxC(cpu, bit0 ^ bit7, bit0); + SET_FLAGS_OxxxxC(env, bit0 ^ bit7, bit0); } break; } @@ -1184,18 +1183,18 @@ void exec_rol(struct CPUState *cpu, struct x86_deco= de *decode) bit0 =3D ((uint16_t)decode->op[0].val & 0x1); bit15 =3D ((uint16_t)decode->op[0].val >> 15); /* of =3D cf ^ result15 */ - SET_FLAGS_OxxxxC(cpu, bit0 ^ bit15, bit0); + SET_FLAGS_OxxxxC(env, bit0 ^ bit15, bit0); } } else { count &=3D 0x0f; /* only use bottom 4 bits */ res =3D ((uint16_t)decode->op[0].val << count) | ((uint16_t)decode->op[0].val >> (16 - count)); =20 - write_val_ext(cpu, decode->op[0].ptr, res, 2); + write_val_ext(env, decode->op[0].ptr, res, 2); bit0 =3D (res & 0x1); bit15 =3D (res >> 15); /* of =3D cf ^ result15 */ - SET_FLAGS_OxxxxC(cpu, bit0 ^ bit15, bit0); + SET_FLAGS_OxxxxC(env, bit0 ^ bit15, bit0); } break; } @@ -1209,25 +1208,25 @@ void exec_rol(struct CPUState *cpu, struct x86_deco= de *decode) res =3D ((uint32_t)decode->op[0].val << count) | ((uint32_t)decode->op[0].val >> (32 - count)); =20 - write_val_ext(cpu, decode->op[0].ptr, res, 4); + write_val_ext(env, decode->op[0].ptr, res, 4); bit0 =3D (res & 0x1); bit31 =3D (res >> 31); /* of =3D cf ^ result31 */ - SET_FLAGS_OxxxxC(cpu, bit0 ^ 
bit31, bit0); + SET_FLAGS_OxxxxC(env, bit0 ^ bit31, bit0); } break; } } - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 =20 -void exec_rcl(struct CPUState *cpu, struct x86_decode *decode) +void exec_rcl(struct CPUX86State *env, struct x86_decode *decode) { uint8_t count; int of =3D 0, cf =3D 0; =20 - fetch_operands(cpu, decode, 2, true, true, false); + fetch_operands(env, decode, 2, true, true, false); count =3D decode->op[1].val & 0x1f; =20 switch (decode->operand_size) { @@ -1241,17 +1240,17 @@ void exec_rcl(struct CPUState *cpu, struct x86_deco= de *decode) } =20 if (1 =3D=3D count) { - res =3D (op1_8 << 1) | get_CF(cpu); + res =3D (op1_8 << 1) | get_CF(env); } else { - res =3D (op1_8 << count) | (get_CF(cpu) << (count - 1)) | + res =3D (op1_8 << count) | (get_CF(env) << (count - 1)) | (op1_8 >> (9 - count)); } =20 - write_val_ext(cpu, decode->op[0].ptr, res, 1); + write_val_ext(env, decode->op[0].ptr, res, 1); =20 cf =3D (op1_8 >> (8 - count)) & 0x01; of =3D cf ^ (res >> 7); /* of =3D cf ^ result7 */ - SET_FLAGS_OxxxxC(cpu, of, cf); + SET_FLAGS_OxxxxC(env, of, cf); break; } case 2: @@ -1265,19 +1264,19 @@ void exec_rcl(struct CPUState *cpu, struct x86_deco= de *decode) } =20 if (1 =3D=3D count) { - res =3D (op1_16 << 1) | get_CF(cpu); + res =3D (op1_16 << 1) | get_CF(env); } else if (count =3D=3D 16) { - res =3D (get_CF(cpu) << 15) | (op1_16 >> 1); + res =3D (get_CF(env) << 15) | (op1_16 >> 1); } else { /* 2..15 */ - res =3D (op1_16 << count) | (get_CF(cpu) << (count - 1)) | + res =3D (op1_16 << count) | (get_CF(env) << (count - 1)) | (op1_16 >> (17 - count)); } =20 - write_val_ext(cpu, decode->op[0].ptr, res, 2); + write_val_ext(env, decode->op[0].ptr, res, 2); =20 cf =3D (op1_16 >> (16 - count)) & 0x1; of =3D cf ^ (res >> 15); /* of =3D cf ^ result15 */ - SET_FLAGS_OxxxxC(cpu, of, cf); + SET_FLAGS_OxxxxC(env, of, cf); break; } case 4: @@ -1290,29 +1289,29 @@ void exec_rcl(struct CPUState *cpu, struct x86_deco= de *decode) } =20 if (1 =3D=3D count) { - res =3D (op1_32 << 1) | get_CF(cpu); + res =3D (op1_32 << 1) | get_CF(env); } else { - res =3D (op1_32 << count) | (get_CF(cpu) << (count - 1)) | + res =3D (op1_32 << count) | (get_CF(env) << (count - 1)) | (op1_32 >> (33 - count)); } =20 - write_val_ext(cpu, decode->op[0].ptr, res, 4); + write_val_ext(env, decode->op[0].ptr, res, 4); =20 cf =3D (op1_32 >> (32 - count)) & 0x1; of =3D cf ^ (res >> 31); /* of =3D cf ^ result31 */ - SET_FLAGS_OxxxxC(cpu, of, cf); + SET_FLAGS_OxxxxC(env, of, cf); break; } } - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 -void exec_rcr(struct CPUState *cpu, struct x86_decode *decode) +void exec_rcr(struct CPUX86State *env, struct x86_decode *decode) { uint8_t count; int of =3D 0, cf =3D 0; =20 - fetch_operands(cpu, decode, 2, true, true, false); + fetch_operands(env, decode, 2, true, true, false); count =3D decode->op[1].val & 0x1f; =20 switch (decode->operand_size) { @@ -1325,14 +1324,14 @@ void exec_rcr(struct CPUState *cpu, struct x86_deco= de *decode) if (!count) { break; } - res =3D (op1_8 >> count) | (get_CF(cpu) << (8 - count)) | + res =3D (op1_8 >> count) | (get_CF(env) << (8 - count)) | (op1_8 << (9 - count)); =20 - write_val_ext(cpu, decode->op[0].ptr, res, 1); + write_val_ext(env, decode->op[0].ptr, res, 1); =20 cf =3D (op1_8 >> (count - 1)) & 0x1; of =3D (((res << 1) ^ res) >> 7) & 0x1; /* of =3D result6 ^ result= 7 */ - SET_FLAGS_OxxxxC(cpu, of, cf); + SET_FLAGS_OxxxxC(env, of, cf); break; } case 2: @@ -1344,15 +1343,15 @@ void exec_rcr(struct CPUState *cpu, 
struct x86_deco= de *decode) if (!count) { break; } - res =3D (op1_16 >> count) | (get_CF(cpu) << (16 - count)) | + res =3D (op1_16 >> count) | (get_CF(env) << (16 - count)) | (op1_16 << (17 - count)); =20 - write_val_ext(cpu, decode->op[0].ptr, res, 2); + write_val_ext(env, decode->op[0].ptr, res, 2); =20 cf =3D (op1_16 >> (count - 1)) & 0x1; of =3D ((uint16_t)((res << 1) ^ res) >> 15) & 0x1; /* of =3D resul= t15 ^ result14 */ - SET_FLAGS_OxxxxC(cpu, of, cf); + SET_FLAGS_OxxxxC(env, of, cf); break; } case 4: @@ -1365,47 +1364,47 @@ void exec_rcr(struct CPUState *cpu, struct x86_deco= de *decode) } =20 if (1 =3D=3D count) { - res =3D (op1_32 >> 1) | (get_CF(cpu) << 31); + res =3D (op1_32 >> 1) | (get_CF(env) << 31); } else { - res =3D (op1_32 >> count) | (get_CF(cpu) << (32 - count)) | + res =3D (op1_32 >> count) | (get_CF(env) << (32 - count)) | (op1_32 << (33 - count)); } =20 - write_val_ext(cpu, decode->op[0].ptr, res, 4); + write_val_ext(env, decode->op[0].ptr, res, 4); =20 cf =3D (op1_32 >> (count - 1)) & 0x1; of =3D ((res << 1) ^ res) >> 31; /* of =3D result30 ^ result31 */ - SET_FLAGS_OxxxxC(cpu, of, cf); + SET_FLAGS_OxxxxC(env, of, cf); break; } } - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 -static void exec_xchg(struct CPUState *cpu, struct x86_decode *decode) +static void exec_xchg(struct CPUX86State *env, struct x86_decode *decode) { - fetch_operands(cpu, decode, 2, true, true, false); + fetch_operands(env, decode, 2, true, true, false); =20 - write_val_ext(cpu, decode->op[0].ptr, decode->op[1].val, + write_val_ext(env, decode->op[0].ptr, decode->op[1].val, decode->operand_size); - write_val_ext(cpu, decode->op[1].ptr, decode->op[0].val, + write_val_ext(env, decode->op[1].ptr, decode->op[0].val, decode->operand_size); =20 - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 -static void exec_xadd(struct CPUState *cpu, struct x86_decode *decode) +static void exec_xadd(struct CPUX86State *env, struct x86_decode *decode) { - EXEC_2OP_ARITH_CMD(cpu, decode, +, SET_FLAGS_OSZAPC_ADD, true); - write_val_ext(cpu, decode->op[1].ptr, decode->op[0].val, + EXEC_2OP_ARITH_CMD(env, decode, +, SET_FLAGS_OSZAPC_ADD, true); + write_val_ext(env, decode->op[1].ptr, decode->op[0].val, decode->operand_size); =20 - RIP(cpu) +=3D decode->len; + RIP(env) +=3D decode->len; } =20 static struct cmd_handler { enum x86_decode_cmd cmd; - void (*handler)(struct CPUState *cpu, struct x86_decode *ins); + void (*handler)(struct CPUX86State *env, struct x86_decode *ins); } handlers[] =3D { {X86_DECODE_CMD_INVL, NULL,}, {X86_DECODE_CMD_MOV, exec_mov}, @@ -1451,7 +1450,7 @@ static struct cmd_handler { =20 static struct cmd_handler _cmd_handler[X86_DECODE_CMD_LAST]; =20 -static void init_cmd_handler(CPUState *cpu) +static void init_cmd_handler() { int i; for (i =3D 0; i < ARRAY_SIZE(handlers); i++) { @@ -1461,45 +1460,51 @@ static void init_cmd_handler(CPUState *cpu) =20 void load_regs(struct CPUState *cpu) { + X86CPU *x86_cpu =3D X86_CPU(cpu); + CPUX86State *env =3D &x86_cpu->env; + int i =3D 0; - RRX(cpu, REG_RAX) =3D rreg(cpu->hvf_fd, HV_X86_RAX); - RRX(cpu, REG_RBX) =3D rreg(cpu->hvf_fd, HV_X86_RBX); - RRX(cpu, REG_RCX) =3D rreg(cpu->hvf_fd, HV_X86_RCX); - RRX(cpu, REG_RDX) =3D rreg(cpu->hvf_fd, HV_X86_RDX); - RRX(cpu, REG_RSI) =3D rreg(cpu->hvf_fd, HV_X86_RSI); - RRX(cpu, REG_RDI) =3D rreg(cpu->hvf_fd, HV_X86_RDI); - RRX(cpu, REG_RSP) =3D rreg(cpu->hvf_fd, HV_X86_RSP); - RRX(cpu, REG_RBP) =3D rreg(cpu->hvf_fd, HV_X86_RBP); + RRX(env, REG_RAX) =3D rreg(cpu->hvf_fd, HV_X86_RAX); + 
RRX(env, REG_RBX) =3D rreg(cpu->hvf_fd, HV_X86_RBX); + RRX(env, REG_RCX) =3D rreg(cpu->hvf_fd, HV_X86_RCX); + RRX(env, REG_RDX) =3D rreg(cpu->hvf_fd, HV_X86_RDX); + RRX(env, REG_RSI) =3D rreg(cpu->hvf_fd, HV_X86_RSI); + RRX(env, REG_RDI) =3D rreg(cpu->hvf_fd, HV_X86_RDI); + RRX(env, REG_RSP) =3D rreg(cpu->hvf_fd, HV_X86_RSP); + RRX(env, REG_RBP) =3D rreg(cpu->hvf_fd, HV_X86_RBP); for (i =3D 8; i < 16; i++) { - RRX(cpu, i) =3D rreg(cpu->hvf_fd, HV_X86_RAX + i); + RRX(env, i) =3D rreg(cpu->hvf_fd, HV_X86_RAX + i); } =20 - RFLAGS(cpu) =3D rreg(cpu->hvf_fd, HV_X86_RFLAGS); - rflags_to_lflags(cpu); - RIP(cpu) =3D rreg(cpu->hvf_fd, HV_X86_RIP); + RFLAGS(env) =3D rreg(cpu->hvf_fd, HV_X86_RFLAGS); + rflags_to_lflags(env); + RIP(env) =3D rreg(cpu->hvf_fd, HV_X86_RIP); } =20 void store_regs(struct CPUState *cpu) { + X86CPU *x86_cpu =3D X86_CPU(cpu); + CPUX86State *env =3D &x86_cpu->env; + int i =3D 0; - wreg(cpu->hvf_fd, HV_X86_RAX, RAX(cpu)); - wreg(cpu->hvf_fd, HV_X86_RBX, RBX(cpu)); - wreg(cpu->hvf_fd, HV_X86_RCX, RCX(cpu)); - wreg(cpu->hvf_fd, HV_X86_RDX, RDX(cpu)); - wreg(cpu->hvf_fd, HV_X86_RSI, RSI(cpu)); - wreg(cpu->hvf_fd, HV_X86_RDI, RDI(cpu)); - wreg(cpu->hvf_fd, HV_X86_RBP, RBP(cpu)); - wreg(cpu->hvf_fd, HV_X86_RSP, RSP(cpu)); + wreg(cpu->hvf_fd, HV_X86_RAX, RAX(env)); + wreg(cpu->hvf_fd, HV_X86_RBX, RBX(env)); + wreg(cpu->hvf_fd, HV_X86_RCX, RCX(env)); + wreg(cpu->hvf_fd, HV_X86_RDX, RDX(env)); + wreg(cpu->hvf_fd, HV_X86_RSI, RSI(env)); + wreg(cpu->hvf_fd, HV_X86_RDI, RDI(env)); + wreg(cpu->hvf_fd, HV_X86_RBP, RBP(env)); + wreg(cpu->hvf_fd, HV_X86_RSP, RSP(env)); for (i =3D 8; i < 16; i++) { - wreg(cpu->hvf_fd, HV_X86_RAX + i, RRX(cpu, i)); + wreg(cpu->hvf_fd, HV_X86_RAX + i, RRX(env, i)); } =20 - lflags_to_rflags(cpu); - wreg(cpu->hvf_fd, HV_X86_RFLAGS, RFLAGS(cpu)); - macvm_set_rip(cpu, RIP(cpu)); + lflags_to_rflags(env); + wreg(cpu->hvf_fd, HV_X86_RFLAGS, RFLAGS(env)); + macvm_set_rip(cpu, RIP(env)); } =20 -bool exec_instruction(struct CPUState *cpu, struct x86_decode *ins) +bool exec_instruction(struct CPUX86State *env, struct x86_decode *ins) { /*if (hvf_vcpu_id(cpu)) printf("%d, %llx: exec_instruction %s\n", hvf_vcpu_id(cpu), RIP(cpu), @@ -1509,23 +1514,23 @@ bool exec_instruction(struct CPUState *cpu, struct = x86_decode *ins) VM_PANIC("emulate fpu\n"); } else { if (!_cmd_handler[ins->cmd].handler) { - printf("Unimplemented handler (%llx) for %d (%x %x) \n", RIP(c= pu), + printf("Unimplemented handler (%llx) for %d (%x %x) \n", RIP(e= nv), ins->cmd, ins->opcode[0], ins->opcode_len > 1 ? ins->opcode[1] : 0); - RIP(cpu) +=3D ins->len; + RIP(env) +=3D ins->len; return true; } =20 VM_PANIC_ON_EX(!_cmd_handler[ins->cmd].handler, - "Unimplemented handler (%llx) for %d (%x %x) \n", RIP(cpu), + "Unimplemented handler (%llx) for %d (%x %x) \n", RIP(env), ins->cmd, ins->opcode[0], ins->opcode_len > 1 ? 
ins->opcode[1] : 0); - _cmd_handler[ins->cmd].handler(cpu, ins); + _cmd_handler[ins->cmd].handler(env, ins); } return true; } =20 -void init_emu(struct CPUState *cpu) +void init_emu() { - init_cmd_handler(cpu); + init_cmd_handler(); } diff --git a/target/i386/hvf-utils/x86_emu.h b/target/i386/hvf-utils/x86_em= u.h index be80350ed8..cd4acb0030 100644 --- a/target/i386/hvf-utils/x86_emu.h +++ b/target/i386/hvf-utils/x86_emu.h @@ -20,9 +20,10 @@ =20 #include "x86.h" #include "x86_decode.h" +#include "cpu.h" =20 -void init_emu(struct CPUState *cpu); -bool exec_instruction(struct CPUState *cpu, struct x86_decode *ins); +void init_emu(void); +bool exec_instruction(struct CPUX86State *env, struct x86_decode *ins); =20 void load_regs(struct CPUState *cpu); void store_regs(struct CPUState *cpu); @@ -30,19 +31,19 @@ void store_regs(struct CPUState *cpu); void simulate_rdmsr(struct CPUState *cpu); void simulate_wrmsr(struct CPUState *cpu); =20 -addr_t read_reg(struct CPUState *cpu, int reg, int size); -void write_reg(struct CPUState *cpu, int reg, addr_t val, int size); +addr_t read_reg(CPUX86State *env, int reg, int size); +void write_reg(CPUX86State *env, int reg, addr_t val, int size); addr_t read_val_from_reg(addr_t reg_ptr, int size); void write_val_to_reg(addr_t reg_ptr, addr_t val, int size); -void write_val_ext(struct CPUState *cpu, addr_t ptr, addr_t val, int size); -uint8_t *read_mmio(struct CPUState *cpu, addr_t ptr, int bytes); -addr_t read_val_ext(struct CPUState *cpu, addr_t ptr, int size); +void write_val_ext(struct CPUX86State *env, addr_t ptr, addr_t val, int si= ze); +uint8_t *read_mmio(struct CPUX86State *env, addr_t ptr, int bytes); +addr_t read_val_ext(struct CPUX86State *env, addr_t ptr, int size); =20 -void exec_movzx(struct CPUState *cpu, struct x86_decode *decode); -void exec_shl(struct CPUState *cpu, struct x86_decode *decode); -void exec_movsx(struct CPUState *cpu, struct x86_decode *decode); -void exec_ror(struct CPUState *cpu, struct x86_decode *decode); -void exec_rol(struct CPUState *cpu, struct x86_decode *decode); -void exec_rcl(struct CPUState *cpu, struct x86_decode *decode); -void exec_rcr(struct CPUState *cpu, struct x86_decode *decode); +void exec_movzx(struct CPUX86State *env, struct x86_decode *decode); +void exec_shl(struct CPUX86State *env, struct x86_decode *decode); +void exec_movsx(struct CPUX86State *env, struct x86_decode *decode); +void exec_ror(struct CPUX86State *env, struct x86_decode *decode); +void exec_rol(struct CPUX86State *env, struct x86_decode *decode); +void exec_rcl(struct CPUX86State *env, struct x86_decode *decode); +void exec_rcr(struct CPUX86State *env, struct x86_decode *decode); #endif diff --git a/target/i386/hvf-utils/x86_flags.c b/target/i386/hvf-utils/x86_= flags.c index 187ab9b56b..c833774485 100644 --- a/target/i386/hvf-utils/x86_flags.c +++ b/target/i386/hvf-utils/x86_flags.c @@ -28,155 +28,155 @@ #include "x86_flags.h" #include "x86.h" =20 -void SET_FLAGS_OxxxxC(struct CPUState *cpu, uint32_t new_of, uint32_t new_= cf) +void SET_FLAGS_OxxxxC(CPUX86State *env, uint32_t new_of, uint32_t new_cf) { uint32_t temp_po =3D new_of ^ new_cf; - cpu->hvf_x86->lflags.auxbits &=3D ~(LF_MASK_PO | LF_MASK_CF); - cpu->hvf_x86->lflags.auxbits |=3D (temp_po << LF_BIT_PO) | + env->hvf_emul->lflags.auxbits &=3D ~(LF_MASK_PO | LF_MASK_CF); + env->hvf_emul->lflags.auxbits |=3D (temp_po << LF_BIT_PO) | (new_cf << LF_BIT_CF); } =20 -void SET_FLAGS_OSZAPC_SUB32(struct CPUState *cpu, uint32_t v1, uint32_t v2, +void SET_FLAGS_OSZAPC_SUB32(CPUX86State 
*env, uint32_t v1, uint32_t v2, uint32_t diff) { SET_FLAGS_OSZAPC_SUB_32(v1, v2, diff); } =20 -void SET_FLAGS_OSZAPC_SUB16(struct CPUState *cpu, uint16_t v1, uint16_t v2, +void SET_FLAGS_OSZAPC_SUB16(CPUX86State *env, uint16_t v1, uint16_t v2, uint16_t diff) { SET_FLAGS_OSZAPC_SUB_16(v1, v2, diff); } =20 -void SET_FLAGS_OSZAPC_SUB8(struct CPUState *cpu, uint8_t v1, uint8_t v2, +void SET_FLAGS_OSZAPC_SUB8(CPUX86State *env, uint8_t v1, uint8_t v2, uint8_t diff) { SET_FLAGS_OSZAPC_SUB_8(v1, v2, diff); } =20 -void SET_FLAGS_OSZAPC_ADD32(struct CPUState *cpu, uint32_t v1, uint32_t v2, +void SET_FLAGS_OSZAPC_ADD32(CPUX86State *env, uint32_t v1, uint32_t v2, uint32_t diff) { SET_FLAGS_OSZAPC_ADD_32(v1, v2, diff); } =20 -void SET_FLAGS_OSZAPC_ADD16(struct CPUState *cpu, uint16_t v1, uint16_t v2, +void SET_FLAGS_OSZAPC_ADD16(CPUX86State *env, uint16_t v1, uint16_t v2, uint16_t diff) { SET_FLAGS_OSZAPC_ADD_16(v1, v2, diff); } =20 -void SET_FLAGS_OSZAPC_ADD8(struct CPUState *cpu, uint8_t v1, uint8_t v2, +void SET_FLAGS_OSZAPC_ADD8(CPUX86State *env, uint8_t v1, uint8_t v2, uint8_t diff) { SET_FLAGS_OSZAPC_ADD_8(v1, v2, diff); } =20 -void SET_FLAGS_OSZAP_SUB32(struct CPUState *cpu, uint32_t v1, uint32_t v2, +void SET_FLAGS_OSZAP_SUB32(CPUX86State *env, uint32_t v1, uint32_t v2, uint32_t diff) { SET_FLAGS_OSZAP_SUB_32(v1, v2, diff); } =20 -void SET_FLAGS_OSZAP_SUB16(struct CPUState *cpu, uint16_t v1, uint16_t v2, +void SET_FLAGS_OSZAP_SUB16(CPUX86State *env, uint16_t v1, uint16_t v2, uint16_t diff) { SET_FLAGS_OSZAP_SUB_16(v1, v2, diff); } =20 -void SET_FLAGS_OSZAP_SUB8(struct CPUState *cpu, uint8_t v1, uint8_t v2, +void SET_FLAGS_OSZAP_SUB8(CPUX86State *env, uint8_t v1, uint8_t v2, uint8_t diff) { SET_FLAGS_OSZAP_SUB_8(v1, v2, diff); } =20 -void SET_FLAGS_OSZAP_ADD32(struct CPUState *cpu, uint32_t v1, uint32_t v2, +void SET_FLAGS_OSZAP_ADD32(CPUX86State *env, uint32_t v1, uint32_t v2, uint32_t diff) { SET_FLAGS_OSZAP_ADD_32(v1, v2, diff); } =20 -void SET_FLAGS_OSZAP_ADD16(struct CPUState *cpu, uint16_t v1, uint16_t v2, +void SET_FLAGS_OSZAP_ADD16(CPUX86State *env, uint16_t v1, uint16_t v2, uint16_t diff) { SET_FLAGS_OSZAP_ADD_16(v1, v2, diff); } =20 -void SET_FLAGS_OSZAP_ADD8(struct CPUState *cpu, uint8_t v1, uint8_t v2, +void SET_FLAGS_OSZAP_ADD8(CPUX86State *env, uint8_t v1, uint8_t v2, uint8_t diff) { SET_FLAGS_OSZAP_ADD_8(v1, v2, diff); } =20 =20 -void SET_FLAGS_OSZAPC_LOGIC32(struct CPUState *cpu, uint32_t diff) +void SET_FLAGS_OSZAPC_LOGIC32(CPUX86State *env, uint32_t diff) { SET_FLAGS_OSZAPC_LOGIC_32(diff); } =20 -void SET_FLAGS_OSZAPC_LOGIC16(struct CPUState *cpu, uint16_t diff) +void SET_FLAGS_OSZAPC_LOGIC16(CPUX86State *env, uint16_t diff) { SET_FLAGS_OSZAPC_LOGIC_16(diff); } =20 -void SET_FLAGS_OSZAPC_LOGIC8(struct CPUState *cpu, uint8_t diff) +void SET_FLAGS_OSZAPC_LOGIC8(CPUX86State *env, uint8_t diff) { SET_FLAGS_OSZAPC_LOGIC_8(diff); } =20 -void SET_FLAGS_SHR32(struct CPUState *cpu, uint32_t v, int count, uint32_t= res) +void SET_FLAGS_SHR32(CPUX86State *env, uint32_t v, int count, uint32_t res) { int cf =3D (v >> (count - 1)) & 0x1; int of =3D (((res << 1) ^ res) >> 31); =20 SET_FLAGS_OSZAPC_LOGIC_32(res); - SET_FLAGS_OxxxxC(cpu, of, cf); + SET_FLAGS_OxxxxC(env, of, cf); } =20 -void SET_FLAGS_SHR16(struct CPUState *cpu, uint16_t v, int count, uint16_t= res) +void SET_FLAGS_SHR16(CPUX86State *env, uint16_t v, int count, uint16_t res) { int cf =3D (v >> (count - 1)) & 0x1; int of =3D (((res << 1) ^ res) >> 15); =20 SET_FLAGS_OSZAPC_LOGIC_16(res); - SET_FLAGS_OxxxxC(cpu, of, cf); + 
SET_FLAGS_OxxxxC(env, of, cf); } =20 -void SET_FLAGS_SHR8(struct CPUState *cpu, uint8_t v, int count, uint8_t re= s) +void SET_FLAGS_SHR8(CPUX86State *env, uint8_t v, int count, uint8_t res) { int cf =3D (v >> (count - 1)) & 0x1; int of =3D (((res << 1) ^ res) >> 7); =20 SET_FLAGS_OSZAPC_LOGIC_8(res); - SET_FLAGS_OxxxxC(cpu, of, cf); + SET_FLAGS_OxxxxC(env, of, cf); } =20 -void SET_FLAGS_SAR32(struct CPUState *cpu, int32_t v, int count, uint32_t = res) +void SET_FLAGS_SAR32(CPUX86State *env, int32_t v, int count, uint32_t res) { int cf =3D (v >> (count - 1)) & 0x1; =20 SET_FLAGS_OSZAPC_LOGIC_32(res); - SET_FLAGS_OxxxxC(cpu, 0, cf); + SET_FLAGS_OxxxxC(env, 0, cf); } =20 -void SET_FLAGS_SAR16(struct CPUState *cpu, int16_t v, int count, uint16_t = res) +void SET_FLAGS_SAR16(CPUX86State *env, int16_t v, int count, uint16_t res) { int cf =3D (v >> (count - 1)) & 0x1; =20 SET_FLAGS_OSZAPC_LOGIC_16(res); - SET_FLAGS_OxxxxC(cpu, 0, cf); + SET_FLAGS_OxxxxC(env, 0, cf); } =20 -void SET_FLAGS_SAR8(struct CPUState *cpu, int8_t v, int count, uint8_t res) +void SET_FLAGS_SAR8(CPUX86State *env, int8_t v, int count, uint8_t res) { int cf =3D (v >> (count - 1)) & 0x1; =20 SET_FLAGS_OSZAPC_LOGIC_8(res); - SET_FLAGS_OxxxxC(cpu, 0, cf); + SET_FLAGS_OxxxxC(env, 0, cf); } =20 =20 -void SET_FLAGS_SHL32(struct CPUState *cpu, uint32_t v, int count, uint32_t= res) +void SET_FLAGS_SHL32(CPUX86State *env, uint32_t v, int count, uint32_t res) { int of, cf; =20 @@ -184,10 +184,10 @@ void SET_FLAGS_SHL32(struct CPUState *cpu, uint32_t v= , int count, uint32_t res) of =3D cf ^ (res >> 31); =20 SET_FLAGS_OSZAPC_LOGIC_32(res); - SET_FLAGS_OxxxxC(cpu, of, cf); + SET_FLAGS_OxxxxC(env, of, cf); } =20 -void SET_FLAGS_SHL16(struct CPUState *cpu, uint16_t v, int count, uint16_t= res) +void SET_FLAGS_SHL16(CPUX86State *env, uint16_t v, int count, uint16_t res) { int of =3D 0, cf =3D 0; =20 @@ -197,10 +197,10 @@ void SET_FLAGS_SHL16(struct CPUState *cpu, uint16_t v= , int count, uint16_t res) } =20 SET_FLAGS_OSZAPC_LOGIC_16(res); - SET_FLAGS_OxxxxC(cpu, of, cf); + SET_FLAGS_OxxxxC(env, of, cf); } =20 -void SET_FLAGS_SHL8(struct CPUState *cpu, uint8_t v, int count, uint8_t re= s) +void SET_FLAGS_SHL8(CPUX86State *env, uint8_t v, int count, uint8_t res) { int of =3D 0, cf =3D 0; =20 @@ -210,124 +210,124 @@ void SET_FLAGS_SHL8(struct CPUState *cpu, uint8_t v= , int count, uint8_t res) } =20 SET_FLAGS_OSZAPC_LOGIC_8(res); - SET_FLAGS_OxxxxC(cpu, of, cf); + SET_FLAGS_OxxxxC(env, of, cf); } =20 -bool get_PF(struct CPUState *cpu) +bool get_PF(CPUX86State *env) { - uint32_t temp =3D (255 & cpu->hvf_x86->lflags.result); - temp =3D temp ^ (255 & (cpu->hvf_x86->lflags.auxbits >> LF_BIT_PDB)); + uint32_t temp =3D (255 & env->hvf_emul->lflags.result); + temp =3D temp ^ (255 & (env->hvf_emul->lflags.auxbits >> LF_BIT_PDB)); temp =3D (temp ^ (temp >> 4)) & 0x0F; return (0x9669U >> temp) & 1; } =20 -void set_PF(struct CPUState *cpu, bool val) +void set_PF(CPUX86State *env, bool val) { - uint32_t temp =3D (255 & cpu->hvf_x86->lflags.result) ^ (!val); - cpu->hvf_x86->lflags.auxbits &=3D ~(LF_MASK_PDB); - cpu->hvf_x86->lflags.auxbits |=3D (temp << LF_BIT_PDB); + uint32_t temp =3D (255 & env->hvf_emul->lflags.result) ^ (!val); + env->hvf_emul->lflags.auxbits &=3D ~(LF_MASK_PDB); + env->hvf_emul->lflags.auxbits |=3D (temp << LF_BIT_PDB); } =20 -bool _get_OF(struct CPUState *cpu) +bool _get_OF(CPUX86State *env) { - return ((cpu->hvf_x86->lflags.auxbits + (1U << LF_BIT_PO)) >> LF_BIT_C= F) & 1; + return ((env->hvf_emul->lflags.auxbits + (1U << 
LF_BIT_PO)) >> LF_BIT_= CF) & 1; } =20 -bool get_OF(struct CPUState *cpu) +bool get_OF(CPUX86State *env) { - return _get_OF(cpu); + return _get_OF(env); } =20 -bool _get_CF(struct CPUState *cpu) +bool _get_CF(CPUX86State *env) { - return (cpu->hvf_x86->lflags.auxbits >> LF_BIT_CF) & 1; + return (env->hvf_emul->lflags.auxbits >> LF_BIT_CF) & 1; } =20 -bool get_CF(struct CPUState *cpu) +bool get_CF(CPUX86State *env) { - return _get_CF(cpu); + return _get_CF(env); } =20 -void set_OF(struct CPUState *cpu, bool val) +void set_OF(CPUX86State *env, bool val) { - SET_FLAGS_OxxxxC(cpu, val, _get_CF(cpu)); + SET_FLAGS_OxxxxC(env, val, _get_CF(env)); } =20 -void set_CF(struct CPUState *cpu, bool val) +void set_CF(CPUX86State *env, bool val) { - SET_FLAGS_OxxxxC(cpu, _get_OF(cpu), (val)); + SET_FLAGS_OxxxxC(env, _get_OF(env), (val)); } =20 -bool get_AF(struct CPUState *cpu) +bool get_AF(CPUX86State *env) { - return (cpu->hvf_x86->lflags.auxbits >> LF_BIT_AF) & 1; + return (env->hvf_emul->lflags.auxbits >> LF_BIT_AF) & 1; } =20 -void set_AF(struct CPUState *cpu, bool val) +void set_AF(CPUX86State *env, bool val) { - cpu->hvf_x86->lflags.auxbits &=3D ~(LF_MASK_AF); - cpu->hvf_x86->lflags.auxbits |=3D (val) << LF_BIT_AF; + env->hvf_emul->lflags.auxbits &=3D ~(LF_MASK_AF); + env->hvf_emul->lflags.auxbits |=3D (val) << LF_BIT_AF; } =20 -bool get_ZF(struct CPUState *cpu) +bool get_ZF(CPUX86State *env) { - return !cpu->hvf_x86->lflags.result; + return !env->hvf_emul->lflags.result; } =20 -void set_ZF(struct CPUState *cpu, bool val) +void set_ZF(CPUX86State *env, bool val) { if (val) { - cpu->hvf_x86->lflags.auxbits ^=3D - (((cpu->hvf_x86->lflags.result >> LF_SIGN_BIT) & 1) << LF_BIT_SD); + env->hvf_emul->lflags.auxbits ^=3D + (((env->hvf_emul->lflags.result >> LF_SIGN_BIT) & 1) << LF_BIT_SD= ); /* merge the parity bits into the Parity Delta Byte */ - uint32_t temp_pdb =3D (255 & cpu->hvf_x86->lflags.result); - cpu->hvf_x86->lflags.auxbits ^=3D (temp_pdb << LF_BIT_PDB); + uint32_t temp_pdb =3D (255 & env->hvf_emul->lflags.result); + env->hvf_emul->lflags.auxbits ^=3D (temp_pdb << LF_BIT_PDB); /* now zero the .result value */ - cpu->hvf_x86->lflags.result =3D 0; + env->hvf_emul->lflags.result =3D 0; } else { - cpu->hvf_x86->lflags.result |=3D (1 << 8); + env->hvf_emul->lflags.result |=3D (1 << 8); } } =20 -bool get_SF(struct CPUState *cpu) +bool get_SF(CPUX86State *env) { - return ((cpu->hvf_x86->lflags.result >> LF_SIGN_BIT) ^ - (cpu->hvf_x86->lflags.auxbits >> LF_BIT_SD)) & 1; + return ((env->hvf_emul->lflags.result >> LF_SIGN_BIT) ^ + (env->hvf_emul->lflags.auxbits >> LF_BIT_SD)) & 1; } =20 -void set_SF(struct CPUState *cpu, bool val) +void set_SF(CPUX86State *env, bool val) { - bool temp_sf =3D get_SF(cpu); - cpu->hvf_x86->lflags.auxbits ^=3D (temp_sf ^ val) << LF_BIT_SD; + bool temp_sf =3D get_SF(env); + env->hvf_emul->lflags.auxbits ^=3D (temp_sf ^ val) << LF_BIT_SD; } =20 -void set_OSZAPC(struct CPUState *cpu, uint32_t flags32) +void set_OSZAPC(CPUX86State *env, uint32_t flags32) { - set_OF(cpu, cpu->hvf_x86->rflags.of); - set_SF(cpu, cpu->hvf_x86->rflags.sf); - set_ZF(cpu, cpu->hvf_x86->rflags.zf); - set_AF(cpu, cpu->hvf_x86->rflags.af); - set_PF(cpu, cpu->hvf_x86->rflags.pf); - set_CF(cpu, cpu->hvf_x86->rflags.cf); + set_OF(env, env->hvf_emul->rflags.of); + set_SF(env, env->hvf_emul->rflags.sf); + set_ZF(env, env->hvf_emul->rflags.zf); + set_AF(env, env->hvf_emul->rflags.af); + set_PF(env, env->hvf_emul->rflags.pf); + set_CF(env, env->hvf_emul->rflags.cf); } =20 -void lflags_to_rflags(struct CPUState 
*cpu) +void lflags_to_rflags(CPUX86State *env) { - cpu->hvf_x86->rflags.cf =3D get_CF(cpu); - cpu->hvf_x86->rflags.pf =3D get_PF(cpu); - cpu->hvf_x86->rflags.af =3D get_AF(cpu); - cpu->hvf_x86->rflags.zf =3D get_ZF(cpu); - cpu->hvf_x86->rflags.sf =3D get_SF(cpu); - cpu->hvf_x86->rflags.of =3D get_OF(cpu); + env->hvf_emul->rflags.cf =3D get_CF(env); + env->hvf_emul->rflags.pf =3D get_PF(env); + env->hvf_emul->rflags.af =3D get_AF(env); + env->hvf_emul->rflags.zf =3D get_ZF(env); + env->hvf_emul->rflags.sf =3D get_SF(env); + env->hvf_emul->rflags.of =3D get_OF(env); } =20 -void rflags_to_lflags(struct CPUState *cpu) +void rflags_to_lflags(CPUX86State *env) { - cpu->hvf_x86->lflags.auxbits =3D cpu->hvf_x86->lflags.result =3D 0; - set_OF(cpu, cpu->hvf_x86->rflags.of); - set_SF(cpu, cpu->hvf_x86->rflags.sf); - set_ZF(cpu, cpu->hvf_x86->rflags.zf); - set_AF(cpu, cpu->hvf_x86->rflags.af); - set_PF(cpu, cpu->hvf_x86->rflags.pf); - set_CF(cpu, cpu->hvf_x86->rflags.cf); + env->hvf_emul->lflags.auxbits =3D env->hvf_emul->lflags.result =3D 0; + set_OF(env, env->hvf_emul->rflags.of); + set_SF(env, env->hvf_emul->rflags.sf); + set_ZF(env, env->hvf_emul->rflags.zf); + set_AF(env, env->hvf_emul->rflags.af); + set_PF(env, env->hvf_emul->rflags.pf); + set_CF(env, env->hvf_emul->rflags.cf); } diff --git a/target/i386/hvf-utils/x86_flags.h b/target/i386/hvf-utils/x86_= flags.h index 68a0c10b90..57a524240c 100644 --- a/target/i386/hvf-utils/x86_flags.h +++ b/target/i386/hvf-utils/x86_flags.h @@ -24,14 +24,10 @@ #define __X86_FLAGS_H__ =20 #include "x86_gen.h" +#include "cpu.h" =20 /* this is basically bocsh code */ =20 -typedef struct lazy_flags { - addr_t result; - addr_t auxbits; -} lazy_flags; - #define LF_SIGN_BIT 31 =20 #define LF_BIT_SD (0) /* lazy Sign Flag Delta */ @@ -63,7 +59,7 @@ typedef struct lazy_flags { #define SET_FLAGS_OSZAPC_SIZE(size, lf_carries, lf_result) { \ addr_t temp =3D ((lf_carries) & (LF_MASK_AF)) | \ (((lf_carries) >> (size - 2)) << LF_BIT_PO); \ - cpu->hvf_x86->lflags.result =3D (addr_t)(int##size##_t)(lf_result); \ + env->hvf_emul->lflags.result =3D (addr_t)(int##size##_t)(lf_result); \ if ((size) =3D=3D 32) { \ temp =3D ((lf_carries) & ~(LF_MASK_PDB | LF_MASK_SD)); \ } else if ((size) =3D=3D 16) { \ @@ -73,7 +69,7 @@ typedef struct lazy_flags { } else { \ VM_PANIC("unimplemented"); \ } \ - cpu->hvf_x86->lflags.auxbits =3D (addr_t)(uint32_t)temp; \ + env->hvf_emul->lflags.auxbits =3D (addr_t)(uint32_t)temp; \ } =20 /* carries, result */ @@ -135,10 +131,10 @@ typedef struct lazy_flags { } else { \ VM_PANIC("unimplemented"); \ } \ - cpu->hvf_x86->lflags.result =3D (addr_t)(int##size##_t)(lf_result); \ - addr_t delta_c =3D (cpu->hvf_x86->lflags.auxbits ^ temp) & LF_MASK_CF;= \ + env->hvf_emul->lflags.result =3D (addr_t)(int##size##_t)(lf_result); \ + addr_t delta_c =3D (env->hvf_emul->lflags.auxbits ^ temp) & LF_MASK_CF= ; \ delta_c ^=3D (delta_c >> 1); \ - cpu->hvf_x86->lflags.auxbits =3D (addr_t)(uint32_t)(temp ^ delta_c); \ + env->hvf_emul->lflags.auxbits =3D (addr_t)(uint32_t)(temp ^ delta_c); \ } =20 /* carries, result */ @@ -179,69 +175,69 @@ typedef struct lazy_flags { #define SET_FLAGS_OSZAxC_LOGIC_32(result_32) \ SET_FLAGS_OSZAxC_LOGIC_SIZE(32, (result_32)) =20 -void lflags_to_rflags(struct CPUState *cpu); -void rflags_to_lflags(struct CPUState *cpu); - -bool get_PF(struct CPUState *cpu); -void set_PF(struct CPUState *cpu, bool val); -bool get_CF(struct CPUState *cpu); -void set_CF(struct CPUState *cpu, bool val); -bool get_AF(struct CPUState *cpu); -void set_AF(struct 
CPUState *cpu, bool val); -bool get_ZF(struct CPUState *cpu); -void set_ZF(struct CPUState *cpu, bool val); -bool get_SF(struct CPUState *cpu); -void set_SF(struct CPUState *cpu, bool val); -bool get_OF(struct CPUState *cpu); -void set_OF(struct CPUState *cpu, bool val); -void set_OSZAPC(struct CPUState *cpu, uint32_t flags32); - -void SET_FLAGS_OxxxxC(struct CPUState *cpu, uint32_t new_of, uint32_t new_= cf); - -void SET_FLAGS_OSZAPC_SUB32(struct CPUState *cpu, uint32_t v1, uint32_t v2, +void lflags_to_rflags(CPUX86State *env); +void rflags_to_lflags(CPUX86State *env); + +bool get_PF(CPUX86State *env); +void set_PF(CPUX86State *env, bool val); +bool get_CF(CPUX86State *env); +void set_CF(CPUX86State *env, bool val); +bool get_AF(CPUX86State *env); +void set_AF(CPUX86State *env, bool val); +bool get_ZF(CPUX86State *env); +void set_ZF(CPUX86State *env, bool val); +bool get_SF(CPUX86State *env); +void set_SF(CPUX86State *env, bool val); +bool get_OF(CPUX86State *env); +void set_OF(CPUX86State *env, bool val); +void set_OSZAPC(CPUX86State *env, uint32_t flags32); + +void SET_FLAGS_OxxxxC(CPUX86State *env, uint32_t new_of, uint32_t new_cf); + +void SET_FLAGS_OSZAPC_SUB32(CPUX86State *env, uint32_t v1, uint32_t v2, uint32_t diff); -void SET_FLAGS_OSZAPC_SUB16(struct CPUState *cpu, uint16_t v1, uint16_t v2, +void SET_FLAGS_OSZAPC_SUB16(CPUX86State *env, uint16_t v1, uint16_t v2, uint16_t diff); -void SET_FLAGS_OSZAPC_SUB8(struct CPUState *cpu, uint8_t v1, uint8_t v2, +void SET_FLAGS_OSZAPC_SUB8(CPUX86State *env, uint8_t v1, uint8_t v2, uint8_t diff); =20 -void SET_FLAGS_OSZAPC_ADD32(struct CPUState *cpu, uint32_t v1, uint32_t v2, +void SET_FLAGS_OSZAPC_ADD32(CPUX86State *env, uint32_t v1, uint32_t v2, uint32_t diff); -void SET_FLAGS_OSZAPC_ADD16(struct CPUState *cpu, uint16_t v1, uint16_t v2, +void SET_FLAGS_OSZAPC_ADD16(CPUX86State *env, uint16_t v1, uint16_t v2, uint16_t diff); -void SET_FLAGS_OSZAPC_ADD8(struct CPUState *cpu, uint8_t v1, uint8_t v2, +void SET_FLAGS_OSZAPC_ADD8(CPUX86State *env, uint8_t v1, uint8_t v2, uint8_t diff); =20 -void SET_FLAGS_OSZAP_SUB32(struct CPUState *cpu, uint32_t v1, uint32_t v2, +void SET_FLAGS_OSZAP_SUB32(CPUX86State *env, uint32_t v1, uint32_t v2, uint32_t diff); -void SET_FLAGS_OSZAP_SUB16(struct CPUState *cpu, uint16_t v1, uint16_t v2, +void SET_FLAGS_OSZAP_SUB16(CPUX86State *env, uint16_t v1, uint16_t v2, uint16_t diff); -void SET_FLAGS_OSZAP_SUB8(struct CPUState *cpu, uint8_t v1, uint8_t v2, +void SET_FLAGS_OSZAP_SUB8(CPUX86State *env, uint8_t v1, uint8_t v2, uint8_t diff); =20 -void SET_FLAGS_OSZAP_ADD32(struct CPUState *cpu, uint32_t v1, uint32_t v2, +void SET_FLAGS_OSZAP_ADD32(CPUX86State *env, uint32_t v1, uint32_t v2, uint32_t diff); -void SET_FLAGS_OSZAP_ADD16(struct CPUState *cpu, uint16_t v1, uint16_t v2, +void SET_FLAGS_OSZAP_ADD16(CPUX86State *env, uint16_t v1, uint16_t v2, uint16_t diff); -void SET_FLAGS_OSZAP_ADD8(struct CPUState *cpu, uint8_t v1, uint8_t v2, +void SET_FLAGS_OSZAP_ADD8(CPUX86State *env, uint8_t v1, uint8_t v2, uint8_t diff); =20 -void SET_FLAGS_OSZAPC_LOGIC32(struct CPUState *cpu, uint32_t diff); -void SET_FLAGS_OSZAPC_LOGIC16(struct CPUState *cpu, uint16_t diff); -void SET_FLAGS_OSZAPC_LOGIC8(struct CPUState *cpu, uint8_t diff); +void SET_FLAGS_OSZAPC_LOGIC32(CPUX86State *env, uint32_t diff); +void SET_FLAGS_OSZAPC_LOGIC16(CPUX86State *env, uint16_t diff); +void SET_FLAGS_OSZAPC_LOGIC8(CPUX86State *env, uint8_t diff); =20 -void SET_FLAGS_SHR32(struct CPUState *cpu, uint32_t v, int count, uint32_t= res); -void 
SET_FLAGS_SHR16(struct CPUState *cpu, uint16_t v, int count, uint16_t= res); -void SET_FLAGS_SHR8(struct CPUState *cpu, uint8_t v, int count, uint8_t re= s); +void SET_FLAGS_SHR32(CPUX86State *env, uint32_t v, int count, uint32_t res= ); +void SET_FLAGS_SHR16(CPUX86State *env, uint16_t v, int count, uint16_t res= ); +void SET_FLAGS_SHR8(CPUX86State *env, uint8_t v, int count, uint8_t res); =20 -void SET_FLAGS_SAR32(struct CPUState *cpu, int32_t v, int count, uint32_t = res); -void SET_FLAGS_SAR16(struct CPUState *cpu, int16_t v, int count, uint16_t = res); -void SET_FLAGS_SAR8(struct CPUState *cpu, int8_t v, int count, uint8_t res= ); +void SET_FLAGS_SAR32(CPUX86State *env, int32_t v, int count, uint32_t res); +void SET_FLAGS_SAR16(CPUX86State *env, int16_t v, int count, uint16_t res); +void SET_FLAGS_SAR8(CPUX86State *env, int8_t v, int count, uint8_t res); =20 -void SET_FLAGS_SHL32(struct CPUState *cpu, uint32_t v, int count, uint32_t= res); -void SET_FLAGS_SHL16(struct CPUState *cpu, uint16_t v, int count, uint16_t= res); -void SET_FLAGS_SHL8(struct CPUState *cpu, uint8_t v, int count, uint8_t re= s); +void SET_FLAGS_SHL32(CPUX86State *env, uint32_t v, int count, uint32_t res= ); +void SET_FLAGS_SHL16(CPUX86State *env, uint16_t v, int count, uint16_t res= ); +void SET_FLAGS_SHL8(CPUX86State *env, uint8_t v, int count, uint8_t res); =20 -bool _get_OF(struct CPUState *cpu); -bool _get_CF(struct CPUState *cpu); +bool _get_OF(CPUX86State *env); +bool _get_CF(CPUX86State *env); #endif /* __X86_FLAGS_H__ */ diff --git a/target/i386/hvf-utils/x86hvf.c b/target/i386/hvf-utils/x86hvf.c index c920064659..1e687f4f89 100644 --- a/target/i386/hvf-utils/x86hvf.c +++ b/target/i386/hvf-utils/x86hvf.c @@ -403,9 +403,10 @@ void vmx_clear_int_window_exiting(CPUState *cpu) =20 void hvf_inject_interrupts(CPUState *cpu_state) { - X86CPU *x86cpu =3D X86_CPU(cpu_state); int allow_nmi =3D !(rvmcs(cpu_state->hvf_fd, VMCS_GUEST_INTERRUPTIBILI= TY) & - VMCS_INTERRUPTIBILITY_NMI_BLOCKING); + VMCS_INTERRUPTIBILITY_NMI_BLOCKING); + X86CPU *x86cpu =3D X86_CPU(cpu_state); + CPUX86State *env =3D &x86cpu->env; =20 uint64_t idt_info =3D rvmcs(cpu_state->hvf_fd, VMCS_IDT_VECTORING_INFO= ); uint64_t info =3D 0; @@ -462,9 +463,9 @@ void hvf_inject_interrupts(CPUState *cpu_state) } } =20 - if (cpu_state->hvf_x86->interruptable && + if (env->hvf_emul->interruptable && (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) && - (EFLAGS(cpu_state) & IF_MASK) && !(info & VMCS_INTR_VALID)) { + (EFLAGS(env) & IF_MASK) && !(info & VMCS_INTR_VALID)) { int line =3D cpu_get_pic_interrupt(&x86cpu->env); cpu_state->interrupt_request &=3D ~CPU_INTERRUPT_HARD; if (line >=3D 0) { @@ -481,8 +482,8 @@ int hvf_process_events(CPUState *cpu_state) { X86CPU *cpu =3D X86_CPU(cpu_state); CPUX86State *env =3D &cpu->env; - =20 - EFLAGS(cpu_state) =3D rreg(cpu_state->hvf_fd, HV_X86_RFLAGS); + + EFLAGS(env) =3D rreg(cpu_state->hvf_fd, HV_X86_RFLAGS); =20 if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) { hvf_cpu_synchronize_state(cpu_state); @@ -494,7 +495,7 @@ int hvf_process_events(CPUState *cpu_state) apic_poll_irq(cpu->apic_state); } if (((cpu_state->interrupt_request & CPU_INTERRUPT_HARD) && - (EFLAGS(cpu_state) & IF_MASK)) || + (EFLAGS(env) & IF_MASK)) || (cpu_state->interrupt_request & CPU_INTERRUPT_NMI)) { cpu_state->halted =3D 0; } @@ -510,4 +511,3 @@ int hvf_process_events(CPUState *cpu_state) } return cpu_state->halted; } - --=20 2.14.1 From nobody Sat Apr 27 16:08:04 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: 
domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org
From: Sergio Andres Gomez Del Real
To: qemu-devel@nongnu.org
Date: Mon, 4 Sep 2017 22:54:50 -0500
Message-Id: <20170905035457.3753-8-Sergio.G.DelReal@gmail.com>
In-Reply-To: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com>
References: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com>
X-detected-operating-system: by eggs.gnu.org: Genre and OS
details not recognized. X-Received-From: 2607:f8b0:400c:c05::242 Subject: [Qemu-devel] [PATCH v3 07/14] apic: add function to apic that will be used by hvf X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Sergio Andres Gomez Del Real , pbonzini@redhat.com, stefanha@gmail.com Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) X-ZohoMail: RDKM_2 RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" This commit adds the function apic_get_highest_priority_irr to apic.c and exports it through the interface in apic.h for use by hvf. Signed-off-by: Sergio Andres Gomez Del Real --- hw/intc/apic.c | 12 ++++++++++++ include/hw/i386/apic.h | 1 + 2 files changed, 13 insertions(+) diff --git a/hw/intc/apic.c b/hw/intc/apic.c index fe15fb6024..6fda52b86c 100644 --- a/hw/intc/apic.c +++ b/hw/intc/apic.c @@ -305,6 +305,18 @@ static void apic_set_tpr(APICCommonState *s, uint8_t v= al) } } =20 +int apic_get_highest_priority_irr(DeviceState *dev) +{ + APICCommonState *s; + + if (!dev) { + /* no interrupts */ + return -1; + } + s =3D APIC_COMMON(dev); + return get_highest_priority_int(s->irr); +} + static uint8_t apic_get_tpr(APICCommonState *s) { apic_sync_vapic(s, SYNC_FROM_VAPIC); diff --git a/include/hw/i386/apic.h b/include/hw/i386/apic.h index ea48ea9389..a9f6c0aa33 100644 --- a/include/hw/i386/apic.h +++ b/include/hw/i386/apic.h @@ -20,6 +20,7 @@ void apic_init_reset(DeviceState *s); void apic_sipi(DeviceState *s); void apic_poll_irq(DeviceState *d); void apic_designate_bsp(DeviceState *d, bool bsp); +int apic_get_highest_priority_irr(DeviceState *dev); =20 /* pc.c */ DeviceState *cpu_get_current_apic(void); --=20 2.14.1 From nobody Sat Apr 27 16:08:04 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1504583878437743.4865010119948; Mon, 4 Sep 2017 20:57:58 -0700 (PDT) Received: from localhost ([::1]:56713 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp4zt-0003Bc-4v for importer@patchew.org; Mon, 04 Sep 2017 23:57:57 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:41532) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp4xr-0001pD-2n for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:55:55 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1dp4xm-00081I-KO for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:55:51 -0400 Received: from mail-vk0-x243.google.com ([2607:f8b0:400c:c05::243]:37985) by eggs.gnu.org with esmtps (TLS1.0:RSA_AES_128_CBC_SHA1:16) (Exim 4.71) (envelope-from ) id 1dp4xm-00080l-FE for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:55:46 -0400 Received: by mail-vk0-x243.google.com with SMTP id o22so714500vke.5 for ; Mon, 04 Sep 2017 20:55:46 -0700 (PDT) Received: from localhost.localdomain ([161.10.80.59]) by 
smtp.gmail.com with ESMTPSA id d206sm1877252vka.29.2017.09.04.20.55.42 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Mon, 04 Sep 2017 20:55:43 -0700 (PDT)
From: Sergio Andres Gomez Del Real
To: qemu-devel@nongnu.org
Date: Mon, 4 Sep 2017 22:54:51 -0500
Message-Id: <20170905035457.3753-9-Sergio.G.DelReal@gmail.com>
In-Reply-To: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com>
Subject: [Qemu-devel] [PATCH v3 08/14] hvf: add compilation rules to Makefile.objs
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

This commit adds to target/i386/Makefile.objs the necessary rules so that
the new files for hvf are compiled by the build system. It also adds
handling of the -enable-hvf argument in the main function in vl.c.
Signed-off-by: Sergio Andres Gomez Del Real --- target/i386/Makefile.objs | 1 + 1 file changed, 1 insertion(+) diff --git a/target/i386/Makefile.objs b/target/i386/Makefile.objs index 6a26e9d9f0..0bef89c099 100644 --- a/target/i386/Makefile.objs +++ b/target/i386/Makefile.objs @@ -12,4 +12,5 @@ obj-$(CONFIG_HAX) +=3D hax-all.o hax-mem.o hax-windows.o endif ifdef CONFIG_DARWIN obj-$(CONFIG_HAX) +=3D hax-all.o hax-mem.o hax-darwin.o +obj-$(CONFIG_HVF) +=3D hvf-utils/ hvf-all.o endif --=20 2.14.1 From nobody Sat Apr 27 16:08:04 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1504584022592372.8755582412182; Mon, 4 Sep 2017 21:00:22 -0700 (PDT) Received: from localhost ([::1]:56723 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp52D-0005OB-KL for importer@patchew.org; Tue, 05 Sep 2017 00:00:21 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:41571) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp4xu-0001qE-Kl for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:56:00 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1dp4xp-00083x-IM for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:55:54 -0400 Received: from mail-vk0-x241.google.com ([2607:f8b0:400c:c05::241]:37785) by eggs.gnu.org with esmtps (TLS1.0:RSA_AES_128_CBC_SHA1:16) (Exim 4.71) (envelope-from ) id 1dp4xp-00083W-Dw for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:55:49 -0400 Received: by mail-vk0-x241.google.com with SMTP id 184so713166vkn.4 for ; Mon, 04 Sep 2017 20:55:49 -0700 (PDT) Received: from localhost.localdomain ([161.10.80.59]) by smtp.gmail.com with ESMTPSA id d206sm1877252vka.29.2017.09.04.20.55.45 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Mon, 04 Sep 2017 20:55:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=LBpWfEMArBFPyiPw0oC4Weg+m7hZafmBKKIoq+opbio=; b=vgeY5GghrfEaoGnikTRYIqUbxNDj9ghDaWgLF/UdpSwN/MLEIalUfAqWdTnjIScXZs sQw599mTDPhYJ4lVRxIjUkR+raj67sFYLRFH1wCUj7mAi32Jo/IUPa6rr9q5D8Q3QeZ9 JJ8AR6MITNdLzaP5k30ICrEYt18+PZLEgIJEJYGea+S+yj/Ptf8VGswFxo+DxhpB6RxC hOTwfhOzsMLLOMTNTTCxyKdbioPnl9GoIPEQAD79NkR8apFluZqZjscakODHEiNi6ktm utzraCGxzcILb42z8A7Knq3GBuM/moFO5jItZfmbDi+BcWL5Q2ShYPvZNYbLmZYB9bBo bjQg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=LBpWfEMArBFPyiPw0oC4Weg+m7hZafmBKKIoq+opbio=; b=U7NLRLE1oO4Z+MmywkpN39xL2+kvAI7CieRDOKHRe4hPuLE8hpImRfIIi8GoKlWm3n PP+JE3+vWJrWWULLPua30ZeWTlfqfLyGvFyoRSi1Y4szLCSMP6l+5ggDIFjUaO6LD3GC 7XREMGRv9D/tNEESr7+I67d0q3ENSrKckeIRJjKmLHI3i8i/glGfL8dQybMMG2X11u5k B9K7gokEZ8DNS3mkGP+KVMqoJNi2QXZEjDXUevw8cK9l50bh6lz5pLqV/ZoVrxNN68ck lryrYL8ErvW0FzGMFz5nmircYWubILVGSIMifYR9zc3gTdCluYseaRlfJgZ151vBwnzF JDOQ== X-Gm-Message-State: AHPjjUgsKccXVPY3siH0fZxOjcvlGf6CxTU1DWyJhVJx9ePGjZUuP58t rreYoLdP3d3y4Mfi X-Google-Smtp-Source: 
ADKCNb69UaTabTa0pSWe9cd7hkuGx1OTuEnuHYi7L5UIBpmIXkNWvsBMxEJToIxb8q1LKxNlMXsHsg== X-Received: by 10.31.131.19 with SMTP id f19mr1237413vkd.80.1504583748584; Mon, 04 Sep 2017 20:55:48 -0700 (PDT) From: Sergio Andres Gomez Del Real X-Google-Original-From: Sergio Andres Gomez Del Real To: qemu-devel@nongnu.org Date: Mon, 4 Sep 2017 22:54:52 -0500 Message-Id: <20170905035457.3753-10-Sergio.G.DelReal@gmail.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com> References: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com> X-detected-operating-system: by eggs.gnu.org: Genre and OS details not recognized. X-Received-From: 2607:f8b0:400c:c05::241 Subject: [Qemu-devel] [PATCH v3 09/14] hvf: use new helper functions for put/get xsave X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Sergio Andres Gomez Del Real , pbonzini@redhat.com, stefanha@gmail.com Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) X-ZohoMail: RDKM_2 RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" This commit makes use of the helper functions for handling xsave in xsave_helper.c, which are shared with kvm. Signed-off-by: Sergio Andres Gomez Del Real --- target/i386/hvf-utils/x86hvf.c | 63 ++++++--------------------------------= ---- 1 file changed, 8 insertions(+), 55 deletions(-) diff --git a/target/i386/hvf-utils/x86hvf.c b/target/i386/hvf-utils/x86hvf.c index 1e687f4f89..dd0710d056 100644 --- a/target/i386/hvf-utils/x86hvf.c +++ b/target/i386/hvf-utils/x86hvf.c @@ -76,36 +76,13 @@ void hvf_get_segment(SegmentCache *qseg, struct vmx_seg= ment *vmx_seg) void hvf_put_xsave(CPUState *cpu_state) { =20 - int x; struct hvf_xsave_buf *xsave; - =20 + xsave =3D X86_CPU(cpu_state)->env.kvm_xsave_buf; - memset(xsave, 0, sizeof(*xsave));=20 - =20 - memcpy(&xsave->data[4], &X86_CPU(cpu_state)->env.fpdp, sizeof(X86_CPU(= cpu_state)->env.fpdp)); - memcpy(&xsave->data[2], &X86_CPU(cpu_state)->env.fpip, sizeof(X86_CPU(= cpu_state)->env.fpip)); - memcpy(&xsave->data[8], &X86_CPU(cpu_state)->env.fpregs, sizeof(X86_CP= U(cpu_state)->env.fpregs)); - memcpy(&xsave->data[144], &X86_CPU(cpu_state)->env.ymmh_regs, sizeof(X= 86_CPU(cpu_state)->env.ymmh_regs)); - memcpy(&xsave->data[288], &X86_CPU(cpu_state)->env.zmmh_regs, sizeof(X= 86_CPU(cpu_state)->env.zmmh_regs)); - memcpy(&xsave->data[272], &X86_CPU(cpu_state)->env.opmask_regs, sizeof= (X86_CPU(cpu_state)->env.opmask_regs)); - memcpy(&xsave->data[240], &X86_CPU(cpu_state)->env.bnd_regs, sizeof(X8= 6_CPU(cpu_state)->env.bnd_regs)); - memcpy(&xsave->data[256], &X86_CPU(cpu_state)->env.bndcs_regs, sizeof(= X86_CPU(cpu_state)->env.bndcs_regs)); - memcpy(&xsave->data[416], &X86_CPU(cpu_state)->env.hi16_zmm_regs, size= of(X86_CPU(cpu_state)->env.hi16_zmm_regs)); - =20 - xsave->data[0] =3D (uint16_t)X86_CPU(cpu_state)->env.fpuc; - xsave->data[0] |=3D (X86_CPU(cpu_state)->env.fpus << 16); - xsave->data[0] |=3D (X86_CPU(cpu_state)->env.fpstt & 7) << 11; - =20 - for (x =3D 0; x < 8; ++x) - xsave->data[1] |=3D ((!X86_CPU(cpu_state)->env.fptags[x]) << x); - xsave->data[1] |=3D (uint32_t)(X86_CPU(cpu_state)->env.fpop << 16); - =20 - memcpy(&xsave->data[40], &X86_CPU(cpu_state)->env.xmm_regs, sizeof(X86= _CPU(cpu_state)->env.xmm_regs)); - =20 - xsave->data[6] =3D 
X86_CPU(cpu_state)->env.mxcsr; - *(uint64_t *)&xsave->data[128] =3D X86_CPU(cpu_state)->env.xstate_bv; - =20 - if (hv_vcpu_write_fpstate(cpu_state->hvf_fd, xsave->data, 4096)){ + + x86_cpu_xsave_all_areas(X86_CPU(cpu_state), xsave); + + if (hv_vcpu_write_fpstate(cpu_state->hvf_fd, xsave->data, 4096)) { abort(); } } @@ -187,39 +164,15 @@ void hvf_put_msrs(CPUState *cpu_state) =20 void hvf_get_xsave(CPUState *cpu_state) { - int x; struct hvf_xsave_buf *xsave; - =20 + xsave =3D X86_CPU(cpu_state)->env.kvm_xsave_buf; - =20 + if (hv_vcpu_read_fpstate(cpu_state->hvf_fd, xsave->data, 4096)) { abort(); } =20 - memcpy(&X86_CPU(cpu_state)->env.fpdp, &xsave->data[4], sizeof(X86_CPU(= cpu_state)->env.fpdp)); - memcpy(&X86_CPU(cpu_state)->env.fpip, &xsave->data[2], sizeof(X86_CPU(= cpu_state)->env.fpip)); - memcpy(&X86_CPU(cpu_state)->env.fpregs, &xsave->data[8], sizeof(X86_CP= U(cpu_state)->env.fpregs)); - memcpy(&X86_CPU(cpu_state)->env.ymmh_regs, &xsave->data[144], sizeof(X= 86_CPU(cpu_state)->env.ymmh_regs)); - memcpy(&X86_CPU(cpu_state)->env.zmmh_regs, &xsave->data[288], sizeof(X= 86_CPU(cpu_state)->env.zmmh_regs)); - memcpy(&X86_CPU(cpu_state)->env.opmask_regs, &xsave->data[272], sizeof= (X86_CPU(cpu_state)->env.opmask_regs)); - memcpy(&X86_CPU(cpu_state)->env.bnd_regs, &xsave->data[240], sizeof(X8= 6_CPU(cpu_state)->env.bnd_regs)); - memcpy(&X86_CPU(cpu_state)->env.bndcs_regs, &xsave->data[256], sizeof(= X86_CPU(cpu_state)->env.bndcs_regs)); - memcpy(&X86_CPU(cpu_state)->env.hi16_zmm_regs, &xsave->data[416], size= of(X86_CPU(cpu_state)->env.hi16_zmm_regs)); - =20 - =20 - X86_CPU(cpu_state)->env.fpuc =3D (uint16_t)xsave->data[0]; - X86_CPU(cpu_state)->env.fpus =3D (uint16_t)(xsave->data[0] >> 16); - X86_CPU(cpu_state)->env.fpstt =3D (X86_CPU(cpu_state)->env.fpus >> 11)= & 7; - X86_CPU(cpu_state)->env.fpop =3D (uint16_t)(xsave->data[1] >> 16); - =20 - for (x =3D 0; x < 8; ++x) - X86_CPU(cpu_state)->env.fptags[x] =3D - ((((uint16_t)xsave->data[1] >> x) & 1) =3D=3D 0); - =20 - memcpy(&X86_CPU(cpu_state)->env.xmm_regs, &xsave->data[40], sizeof(X86= _CPU(cpu_state)->env.xmm_regs)); - - X86_CPU(cpu_state)->env.mxcsr =3D xsave->data[6]; - X86_CPU(cpu_state)->env.xstate_bv =3D *(uint64_t *)&xsave->data[128]; + x86_cpu_xrstor_all_areas(X86_CPU(cpu_state), xsave); } =20 void hvf_get_segments(CPUState *cpu_state) --=20 2.14.1 From nobody Sat Apr 27 16:08:04 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1504583884863740.3507597115657; Mon, 4 Sep 2017 20:58:04 -0700 (PDT) Received: from localhost ([::1]:56714 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp4zz-0003GA-KT for importer@patchew.org; Mon, 04 Sep 2017 23:58:03 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:41604) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp4xv-0001sZ-Nd for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:56:01 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1dp4xq-00085l-Pw for qemu-devel@nongnu.org; Mon, 04 Sep 2017 
23:55:55 -0400
From: Sergio Andres Gomez Del Real
To: qemu-devel@nongnu.org
Date: Mon, 4 Sep 2017 22:54:53 -0500
Message-Id: <20170905035457.3753-11-Sergio.G.DelReal@gmail.com>
In-Reply-To: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com>
Subject: [Qemu-devel] [PATCH v3 10/14] hvf: implement hvf_get_supported_cpuid
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

This commit implements hvf_get_supported_cpuid, which returns the set of
features supported by both the host processor and the hypervisor.
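
For context, the intended use of hvf_get_supported_cpuid is to mask a
guest-visible feature word against what both the host CPU and
Hypervisor.framework can deliver. A minimal usage sketch (not part of the
patch; requested_ecx is a hypothetical local holding the guest's requested
CPUID.01H:ECX bits):

    /* keep only the leaf-1 ECX features HVF can expose on this host */
    uint32_t supported_ecx = hvf_get_supported_cpuid(1, 0, R_ECX);
    uint32_t filtered_ecx  = requested_ecx & supported_ecx;

    if (!(supported_ecx & CPUID_EXT_POPCNT)) {
        /* e.g. POPCNT would be filtered out of the guest's CPUID here */
    }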
Signed-off-by: Sergio Andres Gomez Del Real --- target/i386/hvf-utils/x86_cpuid.c | 138 ++++++++++++++++++++++++++++++++++= ++++ 1 file changed, 138 insertions(+) diff --git a/target/i386/hvf-utils/x86_cpuid.c b/target/i386/hvf-utils/x86_= cpuid.c index fe968cb638..0646588ae3 100644 --- a/target/i386/hvf-utils/x86_cpuid.c +++ b/target/i386/hvf-utils/x86_cpuid.c @@ -24,6 +24,7 @@ #include "x86_cpuid.h" #include "x86.h" #include "vmx.h" +#include "sysemu/hvf.h" =20 #define PPRO_FEATURES (CPUID_FP87 | CPUID_DE | CPUID_PSE | CPUID_TSC | \ CPUID_MSR | CPUID_MCE | CPUID_CX8 | CPUID_PGE | CPUID_CMOV | \ @@ -94,6 +95,27 @@ struct x86_cpuid builtin_cpus[] =3D { =20 static struct x86_cpuid *_cpuid; =20 +static uint64_t xgetbv(uint32_t xcr) +{ + uint32_t eax, edx; + + __asm__ volatile ("xgetbv" + : "=3Da" (eax), "=3Dd" (edx) + : "c" (xcr)); + + return (((uint64_t)edx) << 32) | eax; +} + +static bool vmx_mpx_supported() +{ + uint64_t cap_exit, cap_entry; + + hv_vmx_read_capability(HV_VMX_CAP_ENTRY, &cap_entry); + hv_vmx_read_capability(HV_VMX_CAP_EXIT, &cap_exit); + + return ((cap_exit & (1 << 23)) && (cap_entry & (1 << 16))); +} + void init_cpuid(struct CPUState *cpu) { _cpuid =3D &builtin_cpus[2]; /* core2duo */ @@ -277,3 +299,119 @@ void get_cpuid_func(struct CPUState *cpu, int func, i= nt cnt, uint32_t *eax, break; } } + +uint32_t hvf_get_supported_cpuid(uint32_t func, uint32_t idx, + int reg) +{ + uint64_t cap; + uint32_t eax, ebx, ecx, edx; + + host_cpuid(func, idx, &eax, &ebx, &ecx, &edx); + + switch (func) { + case 0: + eax =3D eax < (uint32_t)0xd ? eax : (uint32_t)0xd; + break; + case 1: + edx &=3D CPUID_FP87 | CPUID_VME | CPUID_DE | CPUID_PSE | CPUID_TSC= | + CPUID_MSR | CPUID_PAE | CPUID_MCE | CPUID_CX8 | CPUID_APIC | + CPUID_SEP | CPUID_MTRR | CPUID_PGE | CPUID_MCA | CPUID_CMOV | + CPUID_PAT | CPUID_PSE36 | CPUID_CLFLUSH | CPUID_MMX | + CPUID_FXSR | CPUID_SSE | CPUID_SSE2 | CPUID_SS; + ecx &=3D CPUID_EXT_SSE3 | CPUID_EXT_PCLMULQDQ | CPUID_EXT_SSSE3 | + CPUID_EXT_FMA | CPUID_EXT_CX16 | CPUID_EXT_PCID | + CPUID_EXT_SSE41 | CPUID_EXT_SSE42 | CPUID_EXT_MOVBE | + CPUID_EXT_POPCNT | CPUID_EXT_AES | CPUID_EXT_XSAVE | + CPUID_EXT_AVX | CPUID_EXT_F16C | CPUID_EXT_RDRAND; + break; + case 6: + eax =3D 4; + ebx =3D 0; + ecx =3D 0; + edx =3D 0; + break; + case 7: + if (idx =3D=3D 0) { + ebx &=3D CPUID_7_0_EBX_FSGSBASE | CPUID_7_0_EBX_BMI1 | + CPUID_7_0_EBX_HLE | CPUID_7_0_EBX_AVX2 | + CPUID_7_0_EBX_SMEP | CPUID_7_0_EBX_BMI2 | + CPUID_7_0_EBX_ERMS | CPUID_7_0_EBX_RTM | + CPUID_7_0_EBX_RDSEED | CPUID_7_0_EBX_ADX | + CPUID_7_0_EBX_SMAP | CPUID_7_0_EBX_AVX512IFMA | + CPUID_7_0_EBX_AVX512F | CPUID_7_0_EBX_AVX512PF | + CPUID_7_0_EBX_AVX512ER | CPUID_7_0_EBX_AVX512CD | + CPUID_7_0_EBX_CLFLUSHOPT | CPUID_7_0_EBX_CLWB | + CPUID_7_0_EBX_AVX512DQ | CPUID_7_0_EBX_SHA_NI | + CPUID_7_0_EBX_AVX512BW | CPUID_7_0_EBX_AVX512VL | + CPUID_7_0_EBX_INVPCID | CPUID_7_0_EBX_MPX; + + if (!vmx_mpx_supported()) { + ebx &=3D ~CPUID_7_0_EBX_MPX; + } + hv_vmx_read_capability(HV_VMX_CAP_PROCBASED2, &cap); + if (!(cap & CPU_BASED2_INVPCID)) { + ebx &=3D ~CPUID_7_0_EBX_INVPCID; + } + + ecx &=3D CPUID_7_0_ECX_AVX512BMI | CPUID_7_0_ECX_AVX512_VPOPCN= TDQ; + edx &=3D CPUID_7_0_EDX_AVX512_4VNNIW | CPUID_7_0_EDX_AVX512_4F= MAPS; + } else { + ebx =3D 0; + ecx =3D 0; + edx =3D 0; + } + eax =3D 0; + break; + case 0xD: + if (idx =3D=3D 0) { + uint64_t host_xcr0 =3D xgetbv(0); + uint64_t supp_xcr0 =3D host_xcr0 & (XSTATE_FP_MASK | XSTATE_SS= E_MASK | + XSTATE_YMM_MASK | XSTATE_BNDREGS_MASK | + XSTATE_BNDCSR_MASK | XSTATE_OPMASK_MASK | + 
XSTATE_ZMM_Hi256_MASK | XSTATE_Hi16_ZMM_= MASK); + eax &=3D supp_xcr0; + if (!vmx_mpx_supported()) { + eax &=3D ~(XSTATE_BNDREGS_MASK | XSTATE_BNDCSR_MASK); + } + } else if (idx =3D=3D 1) { + hv_vmx_read_capability(HV_VMX_CAP_PROCBASED2, &cap); + eax &=3D CPUID_XSAVE_XSAVEOPT | CPUID_XSAVE_XGETBV1; + if (!(cap & CPU_BASED2_XSAVES_XRSTORS)) { + eax &=3D ~CPUID_XSAVE_XSAVES; + } + } + break; + case 0x80000001: + /* LM only if HVF in 64-bit mode */ + edx &=3D CPUID_FP87 | CPUID_VME | CPUID_DE | CPUID_PSE | CPUID_TSC= | + CPUID_MSR | CPUID_PAE | CPUID_MCE | CPUID_CX8 | CPUID_APIC= | + CPUID_EXT2_SYSCALL | CPUID_MTRR | CPUID_PGE | CPUID_MCA | = CPUID_CMOV | + CPUID_PAT | CPUID_PSE36 | CPUID_EXT2_MMXEXT | CPUID_MMX | + CPUID_FXSR | CPUID_EXT2_FXSR | CPUID_EXT2_PDPE1GB | CPUID_= EXT2_3DNOWEXT | + CPUID_EXT2_3DNOW | CPUID_EXT2_LM | CPUID_EXT2_RDTSCP | CPU= ID_EXT2_NX; + hv_vmx_read_capability(HV_VMX_CAP_PROCBASED, &cap); + if (!(cap & CPU_BASED_TSC_OFFSET)) { + edx &=3D ~CPUID_EXT2_RDTSCP; + } + ecx &=3D CPUID_EXT3_LAHF_LM | CPUID_EXT3_CMP_LEG | CPUID_EXT3_CR8L= EG | + CPUID_EXT3_ABM | CPUID_EXT3_SSE4A | CPUID_EXT3_MISALIGNSSE= | + CPUID_EXT3_3DNOWPREFETCH | CPUID_EXT3_OSVW | CPUID_EXT3_XO= P | + CPUID_EXT3_FMA4 | CPUID_EXT3_TBM; + break; + default: + return 0; + } + + switch (reg) { + case R_EAX: + return eax; + case R_EBX: + return ebx; + case R_ECX: + return ecx; + case R_EDX: + return edx; + default: + return 0; + } +} --=20 2.14.1 From nobody Sat Apr 27 16:08:04 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1504584116044225.829831272172; Mon, 4 Sep 2017 21:01:56 -0700 (PDT) Received: from localhost ([::1]:56740 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp53j-0006wY-0Q for importer@patchew.org; Tue, 05 Sep 2017 00:01:55 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:41656) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp4xx-0001w3-TE for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:56:04 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1dp4xs-00087j-FX for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:55:57 -0400 Received: from mail-vk0-x243.google.com ([2607:f8b0:400c:c05::243]:35877) by eggs.gnu.org with esmtps (TLS1.0:RSA_AES_128_CBC_SHA1:16) (Exim 4.71) (envelope-from ) id 1dp4xs-00087D-9e for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:55:52 -0400 Received: by mail-vk0-x243.google.com with SMTP id x85so716182vkx.3 for ; Mon, 04 Sep 2017 20:55:52 -0700 (PDT) Received: from localhost.localdomain ([161.10.80.59]) by smtp.gmail.com with ESMTPSA id d206sm1877252vka.29.2017.09.04.20.55.50 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Mon, 04 Sep 2017 20:55:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=3B+S/pLggk8HK8ntvv0jXnke5aOuSKBHHNO2q38sR+g=; b=ZImWnHR7d5y0h3TOzjoNtUsdJzlxPFjpXjvuA5MfGdEhNmTpVOloovWRZQ8V+FLRRD 
From: Sergio Andres Gomez Del Real
To: qemu-devel@nongnu.org
Date: Mon, 4 Sep 2017 22:54:54 -0500
Message-Id: <20170905035457.3753-12-Sergio.G.DelReal@gmail.com>
In-Reply-To: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com>
Subject: [Qemu-devel] [PATCH v3 11/14] hvf: refactor cpuid code
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

This patch generalizes some code in cpu.c, sharing code and data between
hvf and kvm. It also begins calling the new hvf_get_supported_cpuid where
appropriate.

Signed-off-by: Sergio Andres Gomez Del Real --- target/i386/cpu-qom.h | 4 +-- target/i386/cpu.c | 76 +++++++++++++++++++++++++++++++++++++----------= ---- 2 files changed, 58 insertions(+), 22 deletions(-) diff --git a/target/i386/cpu-qom.h b/target/i386/cpu-qom.h index c2205e6077..22f95eb3a4 100644 --- a/target/i386/cpu-qom.h +++ b/target/i386/cpu-qom.h @@ -47,7 +47,7 @@ typedef struct X86CPUDefinition X86CPUDefinition; /** * X86CPUClass: * @cpu_def: CPU model definition - * @kvm_required: Whether CPU model requires KVM to be enabled. + * @host_cpuid_required: Whether CPU model requires cpuid from host. * @ordering: Ordering on the "-cpu help" CPU model list.
* @migration_safe: See CpuDefinitionInfo::migration_safe * @static_model: See CpuDefinitionInfo::static @@ -66,7 +66,7 @@ typedef struct X86CPUClass { */ X86CPUDefinition *cpu_def; =20 - bool kvm_required; + bool host_cpuid_required; int ordering; bool migration_safe; bool static_model; diff --git a/target/i386/cpu.c b/target/i386/cpu.c index ddc45abd70..c6ffd0c928 100644 --- a/target/i386/cpu.c +++ b/target/i386/cpu.c @@ -22,6 +22,7 @@ #include "cpu.h" #include "exec/exec-all.h" #include "sysemu/kvm.h" +#include "sysemu/hvf.h" #include "sysemu/cpus.h" #include "kvm_i386.h" =20 @@ -613,6 +614,11 @@ static uint32_t xsave_area_size(uint64_t mask) return ret; } =20 +static inline bool accel_uses_host_cpuid(void) +{ + return kvm_enabled() || hvf_enabled(); +} + static inline uint64_t x86_cpu_xsave_components(X86CPU *cpu) { return ((uint64_t)cpu->env.features[FEAT_XSAVE_COMP_HI]) << 32 | @@ -1643,10 +1649,15 @@ static void max_x86_cpu_initfn(Object *obj) */ cpu->max_features =3D true; =20 - if (kvm_enabled()) { + if (accel_uses_host_cpuid()) { char vendor[CPUID_VENDOR_SZ + 1] =3D { 0 }; char model_id[CPUID_MODEL_ID_SZ + 1] =3D { 0 }; int family, model, stepping; + X86CPUDefinition host_cpudef =3D { }; + uint32_t eax =3D 0, ebx =3D 0, ecx =3D 0, edx =3D 0; + + host_cpuid(0x0, 0, &eax, &ebx, &ecx, &edx); + x86_cpu_vendor_words2str(host_cpudef.vendor, ebx, edx, ecx); =20 host_vendor_fms(vendor, &family, &model, &stepping); =20 @@ -1660,12 +1671,21 @@ static void max_x86_cpu_initfn(Object *obj) object_property_set_str(OBJECT(cpu), model_id, "model-id", &error_abort); =20 - env->cpuid_min_level =3D - kvm_arch_get_supported_cpuid(s, 0x0, 0, R_EAX); - env->cpuid_min_xlevel =3D - kvm_arch_get_supported_cpuid(s, 0x80000000, 0, R_EAX); - env->cpuid_min_xlevel2 =3D - kvm_arch_get_supported_cpuid(s, 0xC0000000, 0, R_EAX); + if (kvm_enabled()) { + env->cpuid_min_level =3D + kvm_arch_get_supported_cpuid(s, 0x0, 0, R_EAX); + env->cpuid_min_xlevel =3D + kvm_arch_get_supported_cpuid(s, 0x80000000, 0, R_EAX); + env->cpuid_min_xlevel2 =3D + kvm_arch_get_supported_cpuid(s, 0xC0000000, 0, R_EAX); + } else { + env->cpuid_min_level =3D + hvf_get_supported_cpuid(0x0, 0, R_EAX); + env->cpuid_min_xlevel =3D + hvf_get_supported_cpuid(0x80000000, 0, R_EAX); + env->cpuid_min_xlevel2 =3D + hvf_get_supported_cpuid(0xC0000000, 0, R_EAX); + } =20 if (lmce_supported()) { object_property_set_bool(OBJECT(cpu), true, "lmce", &error_abo= rt); @@ -1691,18 +1711,21 @@ static const TypeInfo max_x86_cpu_type_info =3D { .class_init =3D max_x86_cpu_class_init, }; =20 -#ifdef CONFIG_KVM - +#if defined(CONFIG_KVM) || defined(CONFIG_HVF) static void host_x86_cpu_class_init(ObjectClass *oc, void *data) { X86CPUClass *xcc =3D X86_CPU_CLASS(oc); =20 - xcc->kvm_required =3D true; + xcc->host_cpuid_required =3D true; xcc->ordering =3D 8; =20 - xcc->model_description =3D - "KVM processor with all supported host features " - "(only available in KVM mode)"; + if (kvm_enabled()) { + xcc->model_description =3D + "KVM processor with all supported host features "; + } else if (hvf_enabled()) { + xcc->model_description =3D + "HVF processor with all supported host features "; + } } =20 static const TypeInfo host_x86_cpu_type_info =3D { @@ -1724,7 +1747,7 @@ static void report_unavailable_features(FeatureWord w= , uint32_t mask) assert(reg); fprintf(stderr, "warning: %s doesn't support requested feature= : " "CPUID.%02XH:%s%s%s [bit %d]\n", - kvm_enabled() ? "host" : "TCG", + accel_uses_host_cpuid() ? "host" : "TCG", f->cpuid_eax, reg, f->feat_names[i] ? 
"." : "", f->feat_names[i] ? f->feat_names[i] : "", i); @@ -2175,7 +2198,7 @@ static void x86_cpu_class_check_missing_features(X86C= PUClass *xcc, Error *err =3D NULL; strList **next =3D missing_feats; =20 - if (xcc->kvm_required && !kvm_enabled()) { + if (xcc->host_cpuid_required && !accel_uses_host_cpuid()) { strList *new =3D g_new0(strList, 1); new->value =3D g_strdup("kvm");; *missing_feats =3D new; @@ -2337,6 +2360,10 @@ static uint32_t x86_cpu_get_supported_feature_word(F= eatureWord w, r =3D kvm_arch_get_supported_cpuid(kvm_state, wi->cpuid_eax, wi->cpuid_ecx, wi->cpuid_reg); + } else if (hvf_enabled()) { + r =3D hvf_get_supported_cpuid(wi->cpuid_eax, + wi->cpuid_ecx, + wi->cpuid_reg); } else if (tcg_enabled()) { r =3D wi->tcg_features; } else { @@ -2396,6 +2423,7 @@ static void x86_cpu_load_def(X86CPU *cpu, X86CPUDefin= ition *def, Error **errp) } =20 /* Special cases not set in the X86CPUDefinition structs: */ + /* TODO: implement for hvf */ if (kvm_enabled()) { if (!kvm_irqchip_in_kernel()) { x86_cpu_change_kvm_default("x2apic", "off"); @@ -2416,7 +2444,7 @@ static void x86_cpu_load_def(X86CPU *cpu, X86CPUDefin= ition *def, Error **errp) * when doing cross vendor migration */ vendor =3D def->vendor; - if (kvm_enabled()) { + if (accel_uses_host_cpuid()) { uint32_t ebx =3D 0, ecx =3D 0, edx =3D 0; host_cpuid(0, 0, NULL, &ebx, &ecx, &edx); x86_cpu_vendor_words2str(host_vendor, ebx, edx, ecx); @@ -2872,6 +2900,11 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index,= uint32_t count, *ebx =3D kvm_arch_get_supported_cpuid(s, 0xA, count, R_EBX); *ecx =3D kvm_arch_get_supported_cpuid(s, 0xA, count, R_ECX); *edx =3D kvm_arch_get_supported_cpuid(s, 0xA, count, R_EDX); + } else if (hvf_enabled() && cpu->enable_pmu) { + *eax =3D hvf_get_supported_cpuid(0xA, count, R_EAX); + *ebx =3D hvf_get_supported_cpuid(0xA, count, R_EBX); + *ecx =3D hvf_get_supported_cpuid(0xA, count, R_ECX); + *edx =3D hvf_get_supported_cpuid(0xA, count, R_EDX); } else { *eax =3D 0; *ebx =3D 0; @@ -3220,6 +3253,7 @@ static void x86_cpu_reset(CPUState *s) =20 s->halted =3D !cpu_is_bsp(cpu); =20 + /* TODO: implement for hvf */ if (kvm_enabled()) { kvm_arch_reset_vcpu(cpu); } @@ -3262,6 +3296,7 @@ APICCommonClass *apic_get_class(void) { const char *apic_type =3D "apic"; =20 + /* TODO: implement for hvf */ if (kvm_apic_in_kernel()) { apic_type =3D "kvm-apic"; } else if (xen_enabled()) { @@ -3492,6 +3527,7 @@ static void x86_cpu_expand_features(X86CPU *cpu, Erro= r **errp) } } =20 + /* TODO: implement for hvf */ if (!kvm_enabled() || !cpu->expose_kvm) { env->features[FEAT_KVM] =3D 0; } @@ -3575,7 +3611,7 @@ static void x86_cpu_realizefn(DeviceState *dev, Error= **errp) Error *local_err =3D NULL; static bool ht_warned; =20 - if (xcc->kvm_required && !kvm_enabled()) { + if (xcc->host_cpuid_required && !accel_uses_host_cpuid()) { char *name =3D x86_cpu_class_get_model_name(xcc); error_setg(&local_err, "CPU model '%s' requires KVM", name); g_free(name); @@ -3597,7 +3633,7 @@ static void x86_cpu_realizefn(DeviceState *dev, Error= **errp) x86_cpu_report_filtered_features(cpu); if (cpu->enforce_cpuid) { error_setg(&local_err, - kvm_enabled() ? + accel_uses_host_cpuid() ? "Host doesn't support requested features" : "TCG doesn't support requested features"); goto out; @@ -3620,7 +3656,7 @@ static void x86_cpu_realizefn(DeviceState *dev, Error= **errp) * consumer AMD devices but nothing else. 
*/ if (env->features[FEAT_8000_0001_EDX] & CPUID_EXT2_LM) { - if (kvm_enabled()) { + if (accel_uses_host_cpuid()) { uint32_t host_phys_bits =3D x86_host_phys_bits(); static bool warned; =20 @@ -4207,7 +4243,7 @@ static void x86_cpu_register_types(void) } type_register_static(&max_x86_cpu_type_info); type_register_static(&x86_base_cpu_type_info); -#ifdef CONFIG_KVM +#if defined(CONFIG_KVM) || defined(CONFIG_HVF) type_register_static(&host_x86_cpu_type_info); #endif } --=20 2.14.1 From nobody Sat Apr 27 16:08:04 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 150458422753342.12940916112859; Mon, 4 Sep 2017 21:03:47 -0700 (PDT) Received: from localhost ([::1]:56745 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp55W-00088o-6q for importer@patchew.org; Tue, 05 Sep 2017 00:03:46 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:41679) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp4xz-0001xJ-KV for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:56:05 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1dp4xt-0008AG-Qr for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:55:59 -0400 Received: from mail-vk0-x242.google.com ([2607:f8b0:400c:c05::242]:35381) by eggs.gnu.org with esmtps (TLS1.0:RSA_AES_128_CBC_SHA1:16) (Exim 4.71) (envelope-from ) id 1dp4xt-00089k-K4 for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:55:53 -0400 Received: by mail-vk0-x242.google.com with SMTP id t10so716060vke.2 for ; Mon, 04 Sep 2017 20:55:53 -0700 (PDT) Received: from localhost.localdomain ([161.10.80.59]) by smtp.gmail.com with ESMTPSA id d206sm1877252vka.29.2017.09.04.20.55.51 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Mon, 04 Sep 2017 20:55:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=ITtPzY5/aC+DjLnpYSxCA2jwb9XT6rHPL+1xPQqsV2w=; b=nBzeMDpAHnnaYMUvpvvFL9G7y1uocy1kP5us0le+149FRIzBZhgNrELmIXL20YReZL TJU3wRgUDx5tqHGYs3auWy4IZgk6giwCeLYUndC3q9jQpmUbx47JKt5DK1hEHDF2zMWH 0vJoYFo2zrJ1N2Hrq4K0xceDMHiyNAq5BY4TbZbq3WyIlH7tnF8cUJJ1x77n47f41unq txX4aAQ+ju9wlypr3qTyzV/M/mbvJONNL6d2N/JxVLqIm0kRFpSa+AKDLCaXuJn7tBMb CBP98paIzpq0m4AX1iKfz+kPUgbLUZ8yJ3jO+HJHAxnb9RhR68rF5x0dk7ZvGPwO3vbH BMdg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=ITtPzY5/aC+DjLnpYSxCA2jwb9XT6rHPL+1xPQqsV2w=; b=U9CWLQl9FHpuE/HxZENjnZd7xU8f9jtw+mN3SwFw6iMxshRIG6cqfkTWbPtgaO5PxB iJ6RI2Y02Mhu6AOx6SpKhL7mB1FvNI+onUkUldAhL1pwebu4r1Vb2acK+eYieF9Tjq+q XfoV3s242tnA7XjJpDU+IIlC+Q0l4GRQ8g3rGsSydbRWKAbqJWcrIBnKX0rbhjIYjLab YwgtC9MOSA1eqnXuLyf0bofuYKcxyoyZOLVpLYsvsXoo6aFOVWw0ZDxla802rVuT8qlh VAIYU7HlMFmLm0wQrzZQ750oApdPnfQ6D9iJixq3s5OIGap2SVs8AyQKNEyu6fwgQiWD OyjA== X-Gm-Message-State: AHPjjUgMsHThUZPw8PJaGinmLwXQ/ARPlLQDeT7hk3jqIgIxoqeG+NBZ iIQqFMq9htAFTzQt X-Google-Smtp-Source: 
ADKCNb4SKikHYmB96BkhTgZsc3tIguCFSPQ+xcAn1RyWD8Q4oPNnttusjA24syKemvpTATeTSLHvgA== X-Received: by 10.31.61.136 with SMTP id k130mr1408219vka.45.1504583752777; Mon, 04 Sep 2017 20:55:52 -0700 (PDT) From: Sergio Andres Gomez Del Real X-Google-Original-From: Sergio Andres Gomez Del Real To: qemu-devel@nongnu.org Date: Mon, 4 Sep 2017 22:54:55 -0500 Message-Id: <20170905035457.3753-13-Sergio.G.DelReal@gmail.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com> References: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com> X-detected-operating-system: by eggs.gnu.org: Genre and OS details not recognized. X-Received-From: 2607:f8b0:400c:c05::242 Subject: [Qemu-devel] [PATCH v3 12/14] hvf: implement vga dirty page tracking X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Sergio Andres Gomez Del Real , pbonzini@redhat.com, stefanha@gmail.com Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) X-ZohoMail: RDKM_2 RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" This commit implements setting the tracking of dirty pages, using hvf's interface to protect guest memory. It uses the MemoryListener callback mechanism through .log_start/stop/sync Signed-off-by: Sergio Andres Gomez Del Real --- include/sysemu/hvf.h | 5 ++++ target/i386/hvf-all.c | 74 ++++++++++++++++++++++++++++++++++++++++++++++-= ---- 2 files changed, 72 insertions(+), 7 deletions(-) diff --git a/include/sysemu/hvf.h b/include/sysemu/hvf.h index 944b014596..43b02be63c 100644 --- a/include/sysemu/hvf.h +++ b/include/sysemu/hvf.h @@ -34,11 +34,16 @@ uint32_t hvf_get_supported_cpuid(uint32_t func, uint32_= t idx, #define hvf_get_supported_cpuid(func, idx, reg) 0 #endif =20 +/* hvf_slot flags */ +#define HVF_SLOT_LOG (1 << 0) + typedef struct hvf_slot { uint64_t start; uint64_t size; uint8_t *mem; int slot_id; + uint32_t flags; + MemoryRegion *region; } hvf_slot; =20 struct hvf_vcpu_caps { diff --git a/target/i386/hvf-all.c b/target/i386/hvf-all.c index be68c71ea0..5644fa8ae0 100644 --- a/target/i386/hvf-all.c +++ b/target/i386/hvf-all.c @@ -195,6 +195,7 @@ void hvf_set_phys_mem(MemoryRegionSection *section, boo= l add) mem->size =3D int128_get64(section->size); mem->mem =3D memory_region_get_ram_ptr(area) + section->offset_within_= region; mem->start =3D section->offset_within_address_space; + mem->region =3D area; =20 if (do_hvf_set_memory(mem)) { error_report("Error registering new memory slot\n"); @@ -291,8 +292,7 @@ void hvf_cpu_synchronize_post_init(CPUState *cpu_state) run_on_cpu(cpu_state, _hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL); } =20 -/* TODO: ept fault handlig */ -static bool ept_emulation_fault(uint64_t ept_qual) +static bool ept_emulation_fault(hvf_slot *slot, addr_t gpa, uint64_t ept_q= ual) { int read, write; =20 @@ -308,6 +308,14 @@ static bool ept_emulation_fault(uint64_t ept_qual) return false; } =20 + if (write && slot) { + if (slot->flags & HVF_SLOT_LOG) { + memory_region_set_dirty(slot->region, gpa - slot->start, 1); + hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size, + HV_MEMORY_READ | HV_MEMORY_WRITE); + } + } + /* * The EPT violation must have been caused by accessing a * guest-physical address that is a translation of a guest-linear @@ -318,7 +326,59 @@ static bool 
ept_emulation_fault(uint64_t ept_qual) return false; } =20 - return true; + return !slot; +} + +static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on) +{ + struct mac_slot *macslot; + hvf_slot *slot; + + slot =3D hvf_find_overlap_slot( + section->offset_within_address_space, + section->offset_within_address_space + int128_get64(section->s= ize)); + + /* protect region against writes; begin tracking it */ + if (on) { + slot->flags |=3D HVF_SLOT_LOG; + hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size, + HV_MEMORY_READ); + /* stop tracking region*/ + } else { + slot->flags &=3D ~HVF_SLOT_LOG; + hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size, + HV_MEMORY_READ | HV_MEMORY_WRITE); + } +} + +static void hvf_log_start(MemoryListener *listener, + MemoryRegionSection *section, int old, int new) +{ + if (old !=3D 0) { + return; + } + + hvf_set_dirty_tracking(section, 1); +} + +static void hvf_log_stop(MemoryListener *listener, + MemoryRegionSection *section, int old, int new) +{ + if (new !=3D 0) { + return; + } + + hvf_set_dirty_tracking(section, 0); +} + +static void hvf_log_sync(MemoryListener *listener, + MemoryRegionSection *section) +{ + /* + * sync of dirty pages is handled elsewhere; just make sure we keep + * tracking the region. + */ + hvf_set_dirty_tracking(section, 1); } =20 static void hvf_region_add(MemoryListener *listener, @@ -337,6 +397,9 @@ static MemoryListener hvf_memory_listener =3D { .priority =3D 10, .region_add =3D hvf_region_add, .region_del =3D hvf_region_del, + .log_start =3D hvf_log_start, + .log_stop =3D hvf_log_stop, + .log_sync =3D hvf_log_sync, }; =20 void vmx_reset_vcpu(CPUState *cpu) { @@ -609,7 +672,7 @@ int hvf_vcpu_exec(CPUState *cpu) =20 slot =3D hvf_find_overlap_slot(gpa, gpa); /* mmio */ - if (ept_emulation_fault(exit_qual) && !slot) { + if (ept_emulation_fault(slot, gpa, exit_qual)) { struct x86_decode decode; =20 load_regs(cpu); @@ -620,9 +683,6 @@ int hvf_vcpu_exec(CPUState *cpu) store_regs(cpu); break; } -#ifdef DIRTY_VGA_TRACKING - /* TODO: handle dirty page tracking */ -#endif break; } case EXIT_REASON_INOUT: --=20 2.14.1 From nobody Sat Apr 27 16:08:04 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1504584275772196.49328023962187; Mon, 4 Sep 2017 21:04:35 -0700 (PDT) Received: from localhost ([::1]:56746 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp56I-0000E5-PE for importer@patchew.org; Tue, 05 Sep 2017 00:04:34 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:41712) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dp4y1-0001y4-5V for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:56:08 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1dp4xv-0008C5-ES for qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:56:01 -0400 Received: from mail-vk0-x241.google.com ([2607:f8b0:400c:c05::241]:37787) by eggs.gnu.org with esmtps (TLS1.0:RSA_AES_128_CBC_SHA1:16) (Exim 4.71) (envelope-from ) id 1dp4xv-0008Ba-7D for 
qemu-devel@nongnu.org; Mon, 04 Sep 2017 23:55:55 -0400
From: Sergio Andres Gomez Del Real
To: qemu-devel@nongnu.org
Date: Mon, 4 Sep 2017 22:54:56 -0500
Message-Id: <20170905035457.3753-14-Sergio.G.DelReal@gmail.com>
In-Reply-To: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com>
Subject: [Qemu-devel] [PATCH v3 13/14] hvf: refactor event injection code for hvf
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

This commit refactors the event-injection code for hvf by using the
appropriate fields already provided by CPUX86State. At vmexit, it fills
these fields so that hvf_inject_interrupts can just retrieve them without
calling into hvf.
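
In outline, the two halves the patch introduces look like this (condensed,
illustrative sketch only; the complete logic is in hvf_store_events() and
hvf_inject_interrupts() in the diff below, and idtvec_info stands for the
value read from VMCS_IDT_VECTORING_INFO at vmexit):

    /* vmexit side (hvf_store_events): remember an event that was in flight */
    if (idtvec_info & VMCS_IDT_VEC_VALID) {
        env->interrupt_injected = idtvec_info & VMCS_IDT_VEC_VECNUM;
    }

    /* vmentry side (hvf_inject_interrupts): replay it from CPUX86State
     * instead of re-reading the IDT-vectoring info from the VMCS */
    if (env->interrupt_injected != -1) {
        uint64_t info = env->interrupt_injected | VMCS_INTR_T_SWINTR |
                        VMCS_INTR_VALID;
        wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info);
    }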
Signed-off-by: Sergio Andres Gomez Del Real --- target/i386/cpu.c | 3 ++ target/i386/hvf-all.c | 57 ++++++++++++++++++++++++++++++++---- target/i386/hvf-utils/vmcs.h | 3 ++ target/i386/hvf-utils/vmx.h | 8 ++++++ target/i386/hvf-utils/x86hvf.c | 65 ++++++++++++++++++++------------------= ---- target/i386/kvm.c | 2 -- 6 files changed, 97 insertions(+), 41 deletions(-) diff --git a/target/i386/cpu.c b/target/i386/cpu.c index c6ffd0c928..3b6a42aaa4 100644 --- a/target/i386/cpu.c +++ b/target/i386/cpu.c @@ -3247,6 +3247,9 @@ static void x86_cpu_reset(CPUState *s) memset(env->mtrr_var, 0, sizeof(env->mtrr_var)); memset(env->mtrr_fixed, 0, sizeof(env->mtrr_fixed)); =20 + env->interrupt_injected =3D -1; + env->exception_injected =3D -1; + env->nmi_injected =3D false; #if !defined(CONFIG_USER_ONLY) /* We hard-wire the BSP to the first CPU. */ apic_designate_bsp(cpu->apic_state, s->cpu_index =3D=3D 0); diff --git a/target/i386/hvf-all.c b/target/i386/hvf-all.c index 5644fa8ae0..8fc6a0b5d1 100644 --- a/target/i386/hvf-all.c +++ b/target/i386/hvf-all.c @@ -589,6 +589,55 @@ void hvf_disable(int shouldDisable) hvf_disabled =3D shouldDisable; } =20 +static void hvf_store_events(CPUState *cpu, uint32_t ins_len, uint64_t idt= vec_info) +{ + X86CPU *x86_cpu =3D X86_CPU(cpu); + CPUX86State *env =3D &x86_cpu->env; + + env->exception_injected =3D -1; + env->interrupt_injected =3D -1; + env->nmi_injected =3D false; + if (idtvec_info & VMCS_IDT_VEC_VALID) { + switch (idtvec_info & VMCS_IDT_VEC_TYPE) { + case VMCS_IDT_VEC_HWINTR: + case VMCS_IDT_VEC_SWINTR: + env->interrupt_injected =3D idtvec_info & VMCS_IDT_VEC_VECNUM; + break; + case VMCS_IDT_VEC_NMI: + env->nmi_injected =3D true; + break; + case VMCS_IDT_VEC_HWEXCEPTION: + case VMCS_IDT_VEC_SWEXCEPTION: + env->exception_injected =3D idtvec_info & VMCS_IDT_VEC_VECNUM; + break; + case VMCS_IDT_VEC_PRIV_SWEXCEPTION: + default: + abort(); + } + if ((idtvec_info & VMCS_IDT_VEC_TYPE) =3D=3D VMCS_IDT_VEC_SWEXCEPT= ION || + (idtvec_info & VMCS_IDT_VEC_TYPE) =3D=3D VMCS_IDT_VEC_SWINTR) { + env->ins_len =3D ins_len; + } + if (idtvec_info & VMCS_INTR_DEL_ERRCODE) { + env->has_error_code =3D true; + env->error_code =3D rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_ERRO= R); + } + } + if ((rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) & + VMCS_INTERRUPTIBILITY_NMI_BLOCKING)) { + env->hflags2 |=3D HF2_NMI_MASK; + } else { + env->hflags2 &=3D ~HF2_NMI_MASK; + } + if (rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) & + (VMCS_INTERRUPTIBILITY_STI_BLOCKING | + VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)) { + env->hflags |=3D HF_INHIBIT_IRQ_MASK; + } else { + env->hflags &=3D ~HF_INHIBIT_IRQ_MASK; + } +} + int hvf_vcpu_exec(CPUState *cpu) { X86CPU *x86_cpu =3D X86_CPU(cpu); @@ -608,11 +657,6 @@ int hvf_vcpu_exec(CPUState *cpu) cpu->vcpu_dirty =3D false; } =20 - env->hvf_emul->interruptable =3D - !(rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) & - (VMCS_INTERRUPTIBILITY_STI_BLOCKING | - VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)); - hvf_inject_interrupts(cpu); vmx_update_tpr(cpu); =20 @@ -631,7 +675,10 @@ int hvf_vcpu_exec(CPUState *cpu) uint64_t exit_qual =3D rvmcs(cpu->hvf_fd, VMCS_EXIT_QUALIFICATION); uint32_t ins_len =3D (uint32_t)rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRUCTION_LENGTH); + uint64_t idtvec_info =3D rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_INF= O); + + hvf_store_events(cpu, ins_len, idtvec_info); rip =3D rreg(cpu->hvf_fd, HV_X86_RIP); RFLAGS(env) =3D rreg(cpu->hvf_fd, HV_X86_RFLAGS); env->eflags =3D RFLAGS(env); diff --git a/target/i386/hvf-utils/vmcs.h b/target/i386/hvf-utils/vmcs.h 
index c410dcfaaa..0fae73dce5 100644
--- a/target/i386/hvf-utils/vmcs.h
+++ b/target/i386/hvf-utils/vmcs.h
@@ -299,6 +299,7 @@
 /*
  * VMCS IDT-Vectoring information fields
  */
+#define VMCS_IDT_VEC_VECNUM 0xFF
 #define VMCS_IDT_VEC_VALID (1U << 31)
 #define VMCS_IDT_VEC_TYPE 0x700
 #define VMCS_IDT_VEC_ERRCODE_VALID (1U << 11)
@@ -306,6 +307,8 @@
 #define VMCS_IDT_VEC_NMI (2 << 8)
 #define VMCS_IDT_VEC_HWEXCEPTION (3 << 8)
 #define VMCS_IDT_VEC_SWINTR (4 << 8)
+#define VMCS_IDT_VEC_PRIV_SWEXCEPTION (5 << 8)
+#define VMCS_IDT_VEC_SWEXCEPTION (6 << 8)
 
 /*
  * VMCS Guest interruptibility field
diff --git a/target/i386/hvf-utils/vmx.h b/target/i386/hvf-utils/vmx.h
index 44a5c6d554..102075d0d4 100644
--- a/target/i386/hvf-utils/vmx.h
+++ b/target/i386/hvf-utils/vmx.h
@@ -181,6 +181,10 @@ static inline void macvm_set_rip(CPUState *cpu, uint64_t rip)
 
 static inline void vmx_clear_nmi_blocking(CPUState *cpu)
 {
+    X86CPU *x86_cpu = X86_CPU(cpu);
+    CPUX86State *env = &x86_cpu->env;
+
+    env->hflags2 &= ~HF2_NMI_MASK;
     uint32_t gi = (uint32_t) rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
     gi &= ~VMCS_INTERRUPTIBILITY_NMI_BLOCKING;
     wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
@@ -188,6 +192,10 @@ static inline void vmx_clear_nmi_blocking(CPUState *cpu)
 
 static inline void vmx_set_nmi_blocking(CPUState *cpu)
 {
+    X86CPU *x86_cpu = X86_CPU(cpu);
+    CPUX86State *env = &x86_cpu->env;
+
+    env->hflags2 |= HF2_NMI_MASK;
     uint32_t gi = (uint32_t)rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
     gi |= VMCS_INTERRUPTIBILITY_NMI_BLOCKING;
     wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
diff --git a/target/i386/hvf-utils/x86hvf.c b/target/i386/hvf-utils/x86hvf.c
index dd0710d056..83eb3be065 100644
--- a/target/i386/hvf-utils/x86hvf.c
+++ b/target/i386/hvf-utils/x86hvf.c
@@ -356,50 +356,47 @@ void vmx_clear_int_window_exiting(CPUState *cpu)
 
 void hvf_inject_interrupts(CPUState *cpu_state)
 {
-    int allow_nmi = !(rvmcs(cpu_state->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) &
-                      VMCS_INTERRUPTIBILITY_NMI_BLOCKING);
     X86CPU *x86cpu = X86_CPU(cpu_state);
     CPUX86State *env = &x86cpu->env;
 
-    uint64_t idt_info = rvmcs(cpu_state->hvf_fd, VMCS_IDT_VECTORING_INFO);
+    uint8_t vector;
+    uint64_t intr_type;
+    bool have_event = true;
+    if (env->interrupt_injected != -1) {
+        vector = env->interrupt_injected;
+        intr_type = VMCS_INTR_T_SWINTR;
+    } else if (env->exception_injected != -1) {
+        vector = env->exception_injected;
+        if (vector == EXCP03_INT3 || vector == EXCP04_INTO) {
+            intr_type = VMCS_INTR_T_SWEXCEPTION;
+        } else {
+            intr_type = VMCS_INTR_T_HWEXCEPTION;
+        }
+    } else if (env->nmi_injected) {
+        vector = NMI_VEC;
+        intr_type = VMCS_INTR_T_NMI;
+    } else {
+        have_event = false;
+    }
+
     uint64_t info = 0;
-
-    if (idt_info & VMCS_IDT_VEC_VALID) {
-        uint8_t vector = idt_info & 0xff;
-        uint64_t intr_type = idt_info & VMCS_INTR_T_MASK;
-        info = idt_info;
-
+    if (have_event) {
+        info = vector | intr_type | VMCS_INTR_VALID;
         uint64_t reason = rvmcs(cpu_state->hvf_fd, VMCS_EXIT_REASON);
-        if (intr_type == VMCS_INTR_T_NMI && reason != EXIT_REASON_TASK_SWITCH) {
-            allow_nmi = 1;
+        if (env->nmi_injected && reason != EXIT_REASON_TASK_SWITCH) {
            vmx_clear_nmi_blocking(cpu_state);
         }
-
-        if ((allow_nmi || intr_type != VMCS_INTR_T_NMI)) {
+
+        if (!(env->hflags2 & HF2_NMI_MASK) || intr_type != VMCS_INTR_T_NMI) {
            info &= ~(1 << 12); /* clear undefined bit */
            if (intr_type == VMCS_INTR_T_SWINTR ||
-                intr_type == VMCS_INTR_T_PRIV_SWEXCEPTION ||
                intr_type == VMCS_INTR_T_SWEXCEPTION) {
-                uint64_t ins_len = rvmcs(cpu_state->hvf_fd,
-                                         VMCS_EXIT_INSTRUCTION_LENGTH);
-                wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INST_LENGTH, ins_len);
-            }
-            if (vector == EXCEPTION_BP || vector == EXCEPTION_OF) {
-                /*
-                 * VT-x requires #BP and #OF to be injected as software
-                 * exceptions.
-                 */
-                info &= ~VMCS_INTR_T_MASK;
-                info |= VMCS_INTR_T_SWEXCEPTION;
-                uint64_t ins_len = rvmcs(cpu_state->hvf_fd,
-                                         VMCS_EXIT_INSTRUCTION_LENGTH);
-                wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INST_LENGTH, ins_len);
+                wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INST_LENGTH, env->ins_len);
             }
 
-            uint64_t err = 0;
-            if (idt_info & VMCS_INTR_DEL_ERRCODE) {
-                err = rvmcs(cpu_state->hvf_fd, VMCS_IDT_VECTORING_ERROR);
-                wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_EXCEPTION_ERROR, err);
+            if (env->has_error_code) {
+                wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_EXCEPTION_ERROR,
+                      env->error_code);
             }
             /*printf("reinject %lx err %d\n", info, err);*/
             wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info);
@@ -407,7 +404,7 @@ void hvf_inject_interrupts(CPUState *cpu_state)
     }
 
     if (cpu_state->interrupt_request & CPU_INTERRUPT_NMI) {
-        if (allow_nmi && !(info & VMCS_INTR_VALID)) {
+        if (!(env->hflags2 & HF2_NMI_MASK) && !(info & VMCS_INTR_VALID)) {
            cpu_state->interrupt_request &= ~CPU_INTERRUPT_NMI;
            info = VMCS_INTR_VALID | VMCS_INTR_T_NMI | NMI_VEC;
            wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info);
@@ -416,7 +413,7 @@ void hvf_inject_interrupts(CPUState *cpu_state)
        }
     }
 
-    if (env->hvf_emul->interruptable &&
+    if (!(env->hflags & HF_INHIBIT_IRQ_MASK) &&
         (cpu_state->interrupt_request & CPU_INTERRUPT_HARD) &&
         (EFLAGS(env) & IF_MASK) && !(info & VMCS_INTR_VALID)) {
        int line = cpu_get_pic_interrupt(&x86cpu->env);
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index 6db7783edc..a695b8cd4e 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -1030,8 +1030,6 @@ void kvm_arch_reset_vcpu(X86CPU *cpu)
 {
     CPUX86State *env = &cpu->env;
 
-    env->exception_injected = -1;
-    env->interrupt_injected = -1;
     env->xcr0 = 1;
     if (kvm_irqchip_in_kernel()) {
         env->mp_state = cpu_is_bsp(cpu) ? KVM_MP_STATE_RUNNABLE :
-- 
2.14.1

From nobody Sat Apr 27 16:08:04 2024
From: Sergio Andres Gomez Del Real
To: qemu-devel@nongnu.org
Date: Mon, 4 Sep 2017 22:54:57 -0500
Message-Id: <20170905035457.3753-15-Sergio.G.DelReal@gmail.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com>
References: <20170905035457.3753-1-Sergio.G.DelReal@gmail.com>
Subject: [Qemu-devel] [PATCH v3 14/14] hvf: inject General Protection Fault when vmexit through vmcall
Cc: Sergio Andres Gomez Del Real, pbonzini@redhat.com, stefanha@gmail.com
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

This commit injects a General Protection Fault (#GP) when the guest
triggers a vmexit by executing the vmcall instruction.

Signed-off-by: Sergio Andres Gomez Del Real
---
 target/i386/hvf-all.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/i386/hvf-all.c b/target/i386/hvf-all.c
index 8fc6a0b5d1..cdf4d6f8e7 100644
--- a/target/i386/hvf-all.c
+++ b/target/i386/hvf-all.c
@@ -903,7 +903,9 @@ int hvf_vcpu_exec(CPUState *cpu)
             macvm_set_rip(cpu, rip + ins_len);
             break;
         case VMX_REASON_VMCALL:
-            /* TODO: inject #GP fault */
+            env->exception_injected = EXCP0D_GPF;
+            env->has_error_code = true;
+            env->error_code = 0;
             break;
         default:
             error_report("%llx: unhandled exit %llx\n", rip, exit_reason);
-- 
2.14.1
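Taken together with the previous patch, the vmcall handling illustrates
the new two-phase event flow: the exit handler only records the pending
#GP(0) in CPUX86State, and hvf_inject_interrupts delivers it on the next
VM entry. The standalone sketch below is not QEMU code; the struct and
function names are invented for the example, and only the queue-then-
deliver split is modeled (EXCP0D_GPF mirrors QEMU's value for the #GP
vector).

    /* Toy model of the queue-then-deliver split introduced by patches
     * 13 and 14: the exit side records the event, the entry side
     * consumes it. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define EXCP0D_GPF 13

    struct pending_event {
        int exception_injected;   /* -1 means nothing queued */
        bool has_error_code;
        uint32_t error_code;
    };

    /* Exit side: a VMX_REASON_VMCALL exit only queues a #GP(0). */
    static void handle_vmcall(struct pending_event *ev)
    {
        ev->exception_injected = EXCP0D_GPF;
        ev->has_error_code = true;
        ev->error_code = 0;
    }

    /* Entry side: what hvf_inject_interrupts conceptually does with the
     * queued state; the real code packs it into VMCS_ENTRY_INTR_INFO and
     * VMCS_ENTRY_EXCEPTION_ERROR via wvmcs. */
    static void inject_pending(struct pending_event *ev)
    {
        if (ev->exception_injected == -1) {
            return;                        /* nothing to deliver */
        }
        if (ev->has_error_code) {
            printf("inject exception %d, error code %u\n",
                   ev->exception_injected, ev->error_code);
        } else {
            printf("inject exception %d\n", ev->exception_injected);
        }
        ev->exception_injected = -1;       /* consumed */
        ev->has_error_code = false;
    }

    int main(void)
    {
        struct pending_event ev = { .exception_injected = -1 };
        handle_vmcall(&ev);    /* guest executed VMCALL */
        inject_pending(&ev);   /* next pass through hvf_vcpu_exec */
        return 0;
    }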