From nobody Fri May 17 08:24:54 2024
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, paul@nowt.org
Subject: [PATCH 01/17] target/i386: extract old decoder to a separate file
Date: Wed, 24 Aug 2022 19:31:07 +0200
Message-Id: <20220824173123.232018-2-pbonzini@redhat.com>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220824173123.232018-1-pbonzini@redhat.com>
References: <20220824173123.232018-1-pbonzini@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

To have a better idea of which functions are helpers to emit TCG ops and
which implement the decode tree, extract the latter to a separate file.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 target/i386/tcg/decode-old.c.inc | 5712 +++++++++++++++++++++++++++++
 target/i386/tcg/translate.c      | 5715 +-----------------------------
 2 files changed, 5713 insertions(+), 5714 deletions(-)
 create mode 100644 target/i386/tcg/decode-old.c.inc

diff --git a/target/i386/tcg/decode-old.c.inc b/target/i386/tcg/decode-old.c.inc
new file mode 100644
index 0000000000..61448ab7c9
--- /dev/null
+++ b/target/i386/tcg/decode-old.c.inc
@@ -0,0 +1,5712 @@
+/* For a switch indexed by MODRM, match all memory operands for a given OP. */
+#define CASE_MODRM_MEM_OP(OP) \
+    case (0 << 6) | (OP << 3) | 0 ... (0 << 6) | (OP << 3) | 7: \
+    case (1 << 6) | (OP << 3) | 0 ... (1 << 6) | (OP << 3) | 7: \
+    case (2 << 6) | (OP << 3) | 0 ... (2 << 6) | (OP << 3) | 7
+
+#define CASE_MODRM_OP(OP) \
+    case (0 << 6) | (OP << 3) | 0 ... (0 << 6) | (OP << 3) | 7: \
+    case (1 << 6) | (OP << 3) | 0 ... (1 << 6) | (OP << 3) | 7: \
+    case (2 << 6) | (OP << 3) | 0 ... (2 << 6) | (OP << 3) | 7: \
+    case (3 << 6) | (OP << 3) | 0 ...
(3 << 6) | (OP << 3) | 7 + +typedef void (*SSEFunc_i_ep)(TCGv_i32 val, TCGv_ptr env, TCGv_ptr reg); +typedef void (*SSEFunc_l_ep)(TCGv_i64 val, TCGv_ptr env, TCGv_ptr reg); +typedef void (*SSEFunc_0_epi)(TCGv_ptr env, TCGv_ptr reg, TCGv_i32 val); +typedef void (*SSEFunc_0_epl)(TCGv_ptr env, TCGv_ptr reg, TCGv_i64 val); +typedef void (*SSEFunc_0_epp)(TCGv_ptr env, TCGv_ptr reg_a, TCGv_ptr reg_b= ); +typedef void (*SSEFunc_0_eppi)(TCGv_ptr env, TCGv_ptr reg_a, TCGv_ptr reg_= b, + TCGv_i32 val); +typedef void (*SSEFunc_0_ppi)(TCGv_ptr reg_a, TCGv_ptr reg_b, TCGv_i32 val= ); +typedef void (*SSEFunc_0_eppt)(TCGv_ptr env, TCGv_ptr reg_a, TCGv_ptr reg_= b, + TCGv val); + +#define SSE_SPECIAL ((void *)1) +#define SSE_DUMMY ((void *)2) + +#define MMX_OP2(x) { gen_helper_ ## x ## _mmx, gen_helper_ ## x ## _xmm } +#define SSE_FOP(x) { gen_helper_ ## x ## ps, gen_helper_ ## x ## pd, \ + gen_helper_ ## x ## ss, gen_helper_ ## x ## sd, } + +static const SSEFunc_0_epp sse_op_table1[256][4] =3D { + /* 3DNow! extensions */ + [0x0e] =3D { SSE_DUMMY }, /* femms */ + [0x0f] =3D { SSE_DUMMY }, /* pf... 
*/ + /* pure SSE operations */ + [0x10] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* = movups, movupd, movss, movsd */ + [0x11] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* = movups, movupd, movss, movsd */ + [0x12] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* = movlps, movlpd, movsldup, movddup */ + [0x13] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* movlps, movlpd */ + [0x14] =3D { gen_helper_punpckldq_xmm, gen_helper_punpcklqdq_xmm }, + [0x15] =3D { gen_helper_punpckhdq_xmm, gen_helper_punpckhqdq_xmm }, + [0x16] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* movhps, movh= pd, movshdup */ + [0x17] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* movhps, movhpd */ + + [0x28] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* movaps, movapd */ + [0x29] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* movaps, movapd */ + [0x2a] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* = cvtpi2ps, cvtpi2pd, cvtsi2ss, cvtsi2sd */ + [0x2b] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* = movntps, movntpd, movntss, movntsd */ + [0x2c] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* = cvttps2pi, cvttpd2pi, cvttsd2si, cvttss2si */ + [0x2d] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* = cvtps2pi, cvtpd2pi, cvtsd2si, cvtss2si */ + [0x2e] =3D { gen_helper_ucomiss, gen_helper_ucomisd }, + [0x2f] =3D { gen_helper_comiss, gen_helper_comisd }, + [0x50] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* movmskps, movmskpd */ + [0x51] =3D SSE_FOP(sqrt), + [0x52] =3D { gen_helper_rsqrtps, NULL, gen_helper_rsqrtss, NULL }, + [0x53] =3D { gen_helper_rcpps, NULL, gen_helper_rcpss, NULL }, + [0x54] =3D { gen_helper_pand_xmm, gen_helper_pand_xmm }, /* andps, and= pd */ + [0x55] =3D { gen_helper_pandn_xmm, gen_helper_pandn_xmm }, /* andnps, = andnpd */ + [0x56] =3D { gen_helper_por_xmm, gen_helper_por_xmm }, /* orps, orpd */ + [0x57] =3D { gen_helper_pxor_xmm, gen_helper_pxor_xmm }, /* xorps, xor= pd */ + [0x58] =3D SSE_FOP(add), + 
[0x59] =3D SSE_FOP(mul), + [0x5a] =3D { gen_helper_cvtps2pd, gen_helper_cvtpd2ps, + gen_helper_cvtss2sd, gen_helper_cvtsd2ss }, + [0x5b] =3D { gen_helper_cvtdq2ps, gen_helper_cvtps2dq, gen_helper_cvtt= ps2dq }, + [0x5c] =3D SSE_FOP(sub), + [0x5d] =3D SSE_FOP(min), + [0x5e] =3D SSE_FOP(div), + [0x5f] =3D SSE_FOP(max), + + [0xc2] =3D SSE_FOP(cmpeq), + [0xc6] =3D { (SSEFunc_0_epp)gen_helper_shufps, + (SSEFunc_0_epp)gen_helper_shufpd }, /* XXX: casts */ + + /* SSSE3, SSE4, MOVBE, CRC32, BMI1, BMI2, ADX. */ + [0x38] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, + [0x3a] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, + + /* MMX ops and their SSE extensions */ + [0x60] =3D MMX_OP2(punpcklbw), + [0x61] =3D MMX_OP2(punpcklwd), + [0x62] =3D MMX_OP2(punpckldq), + [0x63] =3D MMX_OP2(packsswb), + [0x64] =3D MMX_OP2(pcmpgtb), + [0x65] =3D MMX_OP2(pcmpgtw), + [0x66] =3D MMX_OP2(pcmpgtl), + [0x67] =3D MMX_OP2(packuswb), + [0x68] =3D MMX_OP2(punpckhbw), + [0x69] =3D MMX_OP2(punpckhwd), + [0x6a] =3D MMX_OP2(punpckhdq), + [0x6b] =3D MMX_OP2(packssdw), + [0x6c] =3D { NULL, gen_helper_punpcklqdq_xmm }, + [0x6d] =3D { NULL, gen_helper_punpckhqdq_xmm }, + [0x6e] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* movd mm, ea */ + [0x6f] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* movq, movdqa,= , movqdu */ + [0x70] =3D { (SSEFunc_0_epp)gen_helper_pshufw_mmx, + (SSEFunc_0_epp)gen_helper_pshufd_xmm, + (SSEFunc_0_epp)gen_helper_pshufhw_xmm, + (SSEFunc_0_epp)gen_helper_pshuflw_xmm }, /* XXX: casts */ + [0x71] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* shiftw */ + [0x72] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* shiftd */ + [0x73] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* shiftq */ + [0x74] =3D MMX_OP2(pcmpeqb), + [0x75] =3D MMX_OP2(pcmpeqw), + [0x76] =3D MMX_OP2(pcmpeql), + [0x77] =3D { SSE_DUMMY }, /* emms */ + [0x78] =3D { NULL, SSE_SPECIAL, NULL, SSE_SPECIAL }, /* extrq_i, inser= tq_i */ + [0x79] =3D { NULL, gen_helper_extrq_r, NULL, gen_helper_insertq_r }, + [0x7c] =3D { NULL, 
gen_helper_haddpd, NULL, gen_helper_haddps }, + [0x7d] =3D { NULL, gen_helper_hsubpd, NULL, gen_helper_hsubps }, + [0x7e] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* movd, movd, ,= movq */ + [0x7f] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* movq, movdqa,= movdqu */ + [0xc4] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* pinsrw */ + [0xc5] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* pextrw */ + [0xd0] =3D { NULL, gen_helper_addsubpd, NULL, gen_helper_addsubps }, + [0xd1] =3D MMX_OP2(psrlw), + [0xd2] =3D MMX_OP2(psrld), + [0xd3] =3D MMX_OP2(psrlq), + [0xd4] =3D MMX_OP2(paddq), + [0xd5] =3D MMX_OP2(pmullw), + [0xd6] =3D { NULL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, + [0xd7] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* pmovmskb */ + [0xd8] =3D MMX_OP2(psubusb), + [0xd9] =3D MMX_OP2(psubusw), + [0xda] =3D MMX_OP2(pminub), + [0xdb] =3D MMX_OP2(pand), + [0xdc] =3D MMX_OP2(paddusb), + [0xdd] =3D MMX_OP2(paddusw), + [0xde] =3D MMX_OP2(pmaxub), + [0xdf] =3D MMX_OP2(pandn), + [0xe0] =3D MMX_OP2(pavgb), + [0xe1] =3D MMX_OP2(psraw), + [0xe2] =3D MMX_OP2(psrad), + [0xe3] =3D MMX_OP2(pavgw), + [0xe4] =3D MMX_OP2(pmulhuw), + [0xe5] =3D MMX_OP2(pmulhw), + [0xe6] =3D { NULL, gen_helper_cvttpd2dq, gen_helper_cvtdq2pd, gen_help= er_cvtpd2dq }, + [0xe7] =3D { SSE_SPECIAL , SSE_SPECIAL }, /* movntq, movntq */ + [0xe8] =3D MMX_OP2(psubsb), + [0xe9] =3D MMX_OP2(psubsw), + [0xea] =3D MMX_OP2(pminsw), + [0xeb] =3D MMX_OP2(por), + [0xec] =3D MMX_OP2(paddsb), + [0xed] =3D MMX_OP2(paddsw), + [0xee] =3D MMX_OP2(pmaxsw), + [0xef] =3D MMX_OP2(pxor), + [0xf0] =3D { NULL, NULL, NULL, SSE_SPECIAL }, /* lddqu */ + [0xf1] =3D MMX_OP2(psllw), + [0xf2] =3D MMX_OP2(pslld), + [0xf3] =3D MMX_OP2(psllq), + [0xf4] =3D MMX_OP2(pmuludq), + [0xf5] =3D MMX_OP2(pmaddwd), + [0xf6] =3D MMX_OP2(psadbw), + [0xf7] =3D { (SSEFunc_0_epp)gen_helper_maskmov_mmx, + (SSEFunc_0_epp)gen_helper_maskmov_xmm }, /* XXX: casts */ + [0xf8] =3D MMX_OP2(psubb), + [0xf9] =3D MMX_OP2(psubw), + [0xfa] =3D MMX_OP2(psubl), + [0xfb] =3D 
MMX_OP2(psubq), + [0xfc] =3D MMX_OP2(paddb), + [0xfd] =3D MMX_OP2(paddw), + [0xfe] =3D MMX_OP2(paddl), +}; + +static const SSEFunc_0_epp sse_op_table2[3 * 8][2] =3D { + [0 + 2] =3D MMX_OP2(psrlw), + [0 + 4] =3D MMX_OP2(psraw), + [0 + 6] =3D MMX_OP2(psllw), + [8 + 2] =3D MMX_OP2(psrld), + [8 + 4] =3D MMX_OP2(psrad), + [8 + 6] =3D MMX_OP2(pslld), + [16 + 2] =3D MMX_OP2(psrlq), + [16 + 3] =3D { NULL, gen_helper_psrldq_xmm }, + [16 + 6] =3D MMX_OP2(psllq), + [16 + 7] =3D { NULL, gen_helper_pslldq_xmm }, +}; + +static const SSEFunc_0_epi sse_op_table3ai[] =3D { + gen_helper_cvtsi2ss, + gen_helper_cvtsi2sd +}; + +#ifdef TARGET_X86_64 +static const SSEFunc_0_epl sse_op_table3aq[] =3D { + gen_helper_cvtsq2ss, + gen_helper_cvtsq2sd +}; +#endif + +static const SSEFunc_i_ep sse_op_table3bi[] =3D { + gen_helper_cvttss2si, + gen_helper_cvtss2si, + gen_helper_cvttsd2si, + gen_helper_cvtsd2si +}; + +#ifdef TARGET_X86_64 +static const SSEFunc_l_ep sse_op_table3bq[] =3D { + gen_helper_cvttss2sq, + gen_helper_cvtss2sq, + gen_helper_cvttsd2sq, + gen_helper_cvtsd2sq +}; +#endif + +static const SSEFunc_0_epp sse_op_table4[8][4] =3D { + SSE_FOP(cmpeq), + SSE_FOP(cmplt), + SSE_FOP(cmple), + SSE_FOP(cmpunord), + SSE_FOP(cmpneq), + SSE_FOP(cmpnlt), + SSE_FOP(cmpnle), + SSE_FOP(cmpord), +}; + +static const SSEFunc_0_epp sse_op_table5[256] =3D { + [0x0c] =3D gen_helper_pi2fw, + [0x0d] =3D gen_helper_pi2fd, + [0x1c] =3D gen_helper_pf2iw, + [0x1d] =3D gen_helper_pf2id, + [0x8a] =3D gen_helper_pfnacc, + [0x8e] =3D gen_helper_pfpnacc, + [0x90] =3D gen_helper_pfcmpge, + [0x94] =3D gen_helper_pfmin, + [0x96] =3D gen_helper_pfrcp, + [0x97] =3D gen_helper_pfrsqrt, + [0x9a] =3D gen_helper_pfsub, + [0x9e] =3D gen_helper_pfadd, + [0xa0] =3D gen_helper_pfcmpgt, + [0xa4] =3D gen_helper_pfmax, + [0xa6] =3D gen_helper_movq, /* pfrcpit1; no need to actually increase = precision */ + [0xa7] =3D gen_helper_movq, /* pfrsqit1 */ + [0xaa] =3D gen_helper_pfsubr, + [0xae] =3D gen_helper_pfacc, + [0xb0] =3D 
gen_helper_pfcmpeq, + [0xb4] =3D gen_helper_pfmul, + [0xb6] =3D gen_helper_movq, /* pfrcpit2 */ + [0xb7] =3D gen_helper_pmulhrw_mmx, + [0xbb] =3D gen_helper_pswapd, + [0xbf] =3D gen_helper_pavgb_mmx /* pavgusb */ +}; + +struct SSEOpHelper_epp { + SSEFunc_0_epp op[2]; + uint32_t ext_mask; +}; + +struct SSEOpHelper_eppi { + SSEFunc_0_eppi op[2]; + uint32_t ext_mask; +}; + +#define SSSE3_OP(x) { MMX_OP2(x), CPUID_EXT_SSSE3 } +#define SSE41_OP(x) { { NULL, gen_helper_ ## x ## _xmm }, CPUID_EXT_SSE41 } +#define SSE42_OP(x) { { NULL, gen_helper_ ## x ## _xmm }, CPUID_EXT_SSE42 } +#define SSE41_SPECIAL { { NULL, SSE_SPECIAL }, CPUID_EXT_SSE41 } +#define PCLMULQDQ_OP(x) { { NULL, gen_helper_ ## x ## _xmm }, \ + CPUID_EXT_PCLMULQDQ } +#define AESNI_OP(x) { { NULL, gen_helper_ ## x ## _xmm }, CPUID_EXT_AES } + +static const struct SSEOpHelper_epp sse_op_table6[256] =3D { + [0x00] =3D SSSE3_OP(pshufb), + [0x01] =3D SSSE3_OP(phaddw), + [0x02] =3D SSSE3_OP(phaddd), + [0x03] =3D SSSE3_OP(phaddsw), + [0x04] =3D SSSE3_OP(pmaddubsw), + [0x05] =3D SSSE3_OP(phsubw), + [0x06] =3D SSSE3_OP(phsubd), + [0x07] =3D SSSE3_OP(phsubsw), + [0x08] =3D SSSE3_OP(psignb), + [0x09] =3D SSSE3_OP(psignw), + [0x0a] =3D SSSE3_OP(psignd), + [0x0b] =3D SSSE3_OP(pmulhrsw), + [0x10] =3D SSE41_OP(pblendvb), + [0x14] =3D SSE41_OP(blendvps), + [0x15] =3D SSE41_OP(blendvpd), + [0x17] =3D SSE41_OP(ptest), + [0x1c] =3D SSSE3_OP(pabsb), + [0x1d] =3D SSSE3_OP(pabsw), + [0x1e] =3D SSSE3_OP(pabsd), + [0x20] =3D SSE41_OP(pmovsxbw), + [0x21] =3D SSE41_OP(pmovsxbd), + [0x22] =3D SSE41_OP(pmovsxbq), + [0x23] =3D SSE41_OP(pmovsxwd), + [0x24] =3D SSE41_OP(pmovsxwq), + [0x25] =3D SSE41_OP(pmovsxdq), + [0x28] =3D SSE41_OP(pmuldq), + [0x29] =3D SSE41_OP(pcmpeqq), + [0x2a] =3D SSE41_SPECIAL, /* movntqda */ + [0x2b] =3D SSE41_OP(packusdw), + [0x30] =3D SSE41_OP(pmovzxbw), + [0x31] =3D SSE41_OP(pmovzxbd), + [0x32] =3D SSE41_OP(pmovzxbq), + [0x33] =3D SSE41_OP(pmovzxwd), + [0x34] =3D SSE41_OP(pmovzxwq), + [0x35] =3D 
SSE41_OP(pmovzxdq), + [0x37] =3D SSE42_OP(pcmpgtq), + [0x38] =3D SSE41_OP(pminsb), + [0x39] =3D SSE41_OP(pminsd), + [0x3a] =3D SSE41_OP(pminuw), + [0x3b] =3D SSE41_OP(pminud), + [0x3c] =3D SSE41_OP(pmaxsb), + [0x3d] =3D SSE41_OP(pmaxsd), + [0x3e] =3D SSE41_OP(pmaxuw), + [0x3f] =3D SSE41_OP(pmaxud), + [0x40] =3D SSE41_OP(pmulld), + [0x41] =3D SSE41_OP(phminposuw), + [0xdb] =3D AESNI_OP(aesimc), + [0xdc] =3D AESNI_OP(aesenc), + [0xdd] =3D AESNI_OP(aesenclast), + [0xde] =3D AESNI_OP(aesdec), + [0xdf] =3D AESNI_OP(aesdeclast), +}; + +static const struct SSEOpHelper_eppi sse_op_table7[256] =3D { + [0x08] =3D SSE41_OP(roundps), + [0x09] =3D SSE41_OP(roundpd), + [0x0a] =3D SSE41_OP(roundss), + [0x0b] =3D SSE41_OP(roundsd), + [0x0c] =3D SSE41_OP(blendps), + [0x0d] =3D SSE41_OP(blendpd), + [0x0e] =3D SSE41_OP(pblendw), + [0x0f] =3D SSSE3_OP(palignr), + [0x14] =3D SSE41_SPECIAL, /* pextrb */ + [0x15] =3D SSE41_SPECIAL, /* pextrw */ + [0x16] =3D SSE41_SPECIAL, /* pextrd/pextrq */ + [0x17] =3D SSE41_SPECIAL, /* extractps */ + [0x20] =3D SSE41_SPECIAL, /* pinsrb */ + [0x21] =3D SSE41_SPECIAL, /* insertps */ + [0x22] =3D SSE41_SPECIAL, /* pinsrd/pinsrq */ + [0x40] =3D SSE41_OP(dpps), + [0x41] =3D SSE41_OP(dppd), + [0x42] =3D SSE41_OP(mpsadbw), + [0x44] =3D PCLMULQDQ_OP(pclmulqdq), + [0x60] =3D SSE42_OP(pcmpestrm), + [0x61] =3D SSE42_OP(pcmpestri), + [0x62] =3D SSE42_OP(pcmpistrm), + [0x63] =3D SSE42_OP(pcmpistri), + [0xdf] =3D AESNI_OP(aeskeygenassist), +}; + +static void gen_sse(CPUX86State *env, DisasContext *s, int b, + target_ulong pc_start) +{ + int b1, op1_offset, op2_offset, is_xmm, val; + int modrm, mod, rm, reg; + SSEFunc_0_epp sse_fn_epp; + SSEFunc_0_eppi sse_fn_eppi; + SSEFunc_0_ppi sse_fn_ppi; + SSEFunc_0_eppt sse_fn_eppt; + MemOp ot; + + b &=3D 0xff; + if (s->prefix & PREFIX_DATA) + b1 =3D 1; + else if (s->prefix & PREFIX_REPZ) + b1 =3D 2; + else if (s->prefix & PREFIX_REPNZ) + b1 =3D 3; + else + b1 =3D 0; + sse_fn_epp =3D sse_op_table1[b][b1]; + if (!sse_fn_epp) { 
+        goto unknown_op;
+    }
+    if ((b <= 0x5f && b >= 0x10) || b == 0xc6 || b == 0xc2) {
+        is_xmm = 1;
+    } else {
+        if (b1 == 0) {
+            /* MMX case */
+            is_xmm = 0;
+        } else {
+            is_xmm = 1;
+        }
+    }
+    /* simple MMX/SSE operation */
+    if (s->flags & HF_TS_MASK) {
+        gen_exception(s, EXCP07_PREX, pc_start - s->cs_base);
+        return;
+    }
+    if (s->flags & HF_EM_MASK) {
+    illegal_op:
+        gen_illegal_opcode(s);
+        return;
+    }
+    if (is_xmm
+        && !(s->flags & HF_OSFXSR_MASK)
+        && (b != 0x38 && b != 0x3a)) {
+        goto unknown_op;
+    }
+    if (b == 0x0e) {
+        if (!(s->cpuid_ext2_features & CPUID_EXT2_3DNOW)) {
+            /* If we were fully decoding this we might use illegal_op.  */
+            goto unknown_op;
+        }
+        /* femms */
+        gen_helper_emms(cpu_env);
+        return;
+    }
+    if (b == 0x77) {
+        /* emms */
+        gen_helper_emms(cpu_env);
+        return;
+    }
+    /* prepare MMX state (XXX: optimize by storing fptt and fptags in
+       the static cpu state) */
+    if (!is_xmm) {
+        gen_helper_enter_mmx(cpu_env);
+    }
+
+    modrm = x86_ldub_code(env, s);
+    reg = ((modrm >> 3) & 7);
+    if (is_xmm) {
+        reg |= REX_R(s);
+    }
+    mod = (modrm >> 6) & 3;
+    if (sse_fn_epp == SSE_SPECIAL) {
+        b |= (b1 << 8);
+        switch(b) {
+        case 0x0e7: /* movntq */
+            if (mod == 3) {
+                goto illegal_op;
+            }
+            gen_lea_modrm(env, s, modrm);
+            gen_stq_env_A0(s, offsetof(CPUX86State, fpregs[reg].mmx));
+            break;
+        case 0x1e7: /* movntdq */
+        case 0x02b: /* movntps */
+        case 0x12b: /* movntps */
+            if (mod == 3)
+                goto illegal_op;
+            gen_lea_modrm(env, s, modrm);
+            gen_sto_env_A0(s, offsetof(CPUX86State, xmm_regs[reg]));
+            break;
+        case 0x3f0: /* lddqu */
+            if (mod == 3)
+                goto illegal_op;
+            gen_lea_modrm(env, s, modrm);
+            gen_ldo_env_A0(s, offsetof(CPUX86State, xmm_regs[reg]));
+            break;
+        case 0x22b: /* movntss */
+        case 0x32b: /* movntsd */
+            if (mod == 3)
+                goto illegal_op;
+            gen_lea_modrm(env, s, modrm);
+            if (b1 & 1) {
+                gen_stq_env_A0(s, offsetof(CPUX86State,
+                                           xmm_regs[reg].ZMM_Q(0)));
+            } else {
+                tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State,
+                                                          xmm_regs[reg].ZMM_L(0)));
+                gen_op_st_v(s, MO_32, s->T0, s->A0);
+            }
+            break;
+        case 0x6e: /* movd mm, ea */
+#ifdef TARGET_X86_64
+            if (s->dflag == MO_64) {
+                gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 0);
+                tcg_gen_st_tl(s->T0, cpu_env,
+                              offsetof(CPUX86State, fpregs[reg].mmx));
+            }
else +#endif + { + gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 0); + tcg_gen_addi_ptr(s->ptr0, cpu_env, + offsetof(CPUX86State,fpregs[reg].mmx)); + tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); + gen_helper_movl_mm_T0_mmx(s->ptr0, s->tmp2_i32); + } + break; + case 0x16e: /* movd xmm, ea */ +#ifdef TARGET_X86_64 + if (s->dflag =3D=3D MO_64) { + gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 0); + tcg_gen_addi_ptr(s->ptr0, cpu_env, + offsetof(CPUX86State,xmm_regs[reg])); + gen_helper_movq_mm_T0_xmm(s->ptr0, s->T0); + } else +#endif + { + gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 0); + tcg_gen_addi_ptr(s->ptr0, cpu_env, + offsetof(CPUX86State,xmm_regs[reg])); + tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); + gen_helper_movl_mm_T0_xmm(s->ptr0, s->tmp2_i32); + } + break; + case 0x6f: /* movq mm, ea */ + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + gen_ldq_env_A0(s, offsetof(CPUX86State, fpregs[reg].mmx)); + } else { + rm =3D (modrm & 7); + tcg_gen_ld_i64(s->tmp1_i64, cpu_env, + offsetof(CPUX86State,fpregs[rm].mmx)); + tcg_gen_st_i64(s->tmp1_i64, cpu_env, + offsetof(CPUX86State,fpregs[reg].mmx)); + } + break; + case 0x010: /* movups */ + case 0x110: /* movupd */ + case 0x028: /* movaps */ + case 0x128: /* movapd */ + case 0x16f: /* movdqa xmm, ea */ + case 0x26f: /* movdqu xmm, ea */ + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + gen_ldo_env_A0(s, offsetof(CPUX86State, xmm_regs[reg])); + } else { + rm =3D (modrm & 7) | REX_B(s); + gen_op_movo(s, offsetof(CPUX86State, xmm_regs[reg]), + offsetof(CPUX86State,xmm_regs[rm])); + } + break; + case 0x210: /* movss xmm, ea */ + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + gen_op_ld_v(s, MO_32, s->T0, s->A0); + tcg_gen_st32_tl(s->T0, cpu_env, + offsetof(CPUX86State, xmm_regs[reg].ZMM_L(= 0))); + tcg_gen_movi_tl(s->T0, 0); + tcg_gen_st32_tl(s->T0, cpu_env, + offsetof(CPUX86State, xmm_regs[reg].ZMM_L(= 1))); + tcg_gen_st32_tl(s->T0, cpu_env, + offsetof(CPUX86State, xmm_regs[reg].ZMM_L(= 2))); + 
tcg_gen_st32_tl(s->T0, cpu_env, + offsetof(CPUX86State, xmm_regs[reg].ZMM_L(= 3))); + } else { + rm =3D (modrm & 7) | REX_B(s); + gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(0= )), + offsetof(CPUX86State,xmm_regs[rm].ZMM_L(0))); + } + break; + case 0x310: /* movsd xmm, ea */ + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + gen_ldq_env_A0(s, offsetof(CPUX86State, + xmm_regs[reg].ZMM_Q(0))); + tcg_gen_movi_tl(s->T0, 0); + tcg_gen_st32_tl(s->T0, cpu_env, + offsetof(CPUX86State, xmm_regs[reg].ZMM_L(= 2))); + tcg_gen_st32_tl(s->T0, cpu_env, + offsetof(CPUX86State, xmm_regs[reg].ZMM_L(= 3))); + } else { + rm =3D (modrm & 7) | REX_B(s); + gen_op_movq(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q(0= )), + offsetof(CPUX86State,xmm_regs[rm].ZMM_Q(0))); + } + break; + case 0x012: /* movlps */ + case 0x112: /* movlpd */ + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + gen_ldq_env_A0(s, offsetof(CPUX86State, + xmm_regs[reg].ZMM_Q(0))); + } else { + /* movhlps */ + rm =3D (modrm & 7) | REX_B(s); + gen_op_movq(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q(0= )), + offsetof(CPUX86State,xmm_regs[rm].ZMM_Q(1))); + } + break; + case 0x212: /* movsldup */ + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + gen_ldo_env_A0(s, offsetof(CPUX86State, xmm_regs[reg])); + } else { + rm =3D (modrm & 7) | REX_B(s); + gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(0= )), + offsetof(CPUX86State,xmm_regs[rm].ZMM_L(0))); + gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(2= )), + offsetof(CPUX86State,xmm_regs[rm].ZMM_L(2))); + } + gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(1)), + offsetof(CPUX86State,xmm_regs[reg].ZMM_L(0))); + gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(3)), + offsetof(CPUX86State,xmm_regs[reg].ZMM_L(2))); + break; + case 0x312: /* movddup */ + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + gen_ldq_env_A0(s, offsetof(CPUX86State, + xmm_regs[reg].ZMM_Q(0))); + } else { + rm =3D (modrm & 7) | REX_B(s); + 
gen_op_movq(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q(0= )), + offsetof(CPUX86State,xmm_regs[rm].ZMM_Q(0))); + } + gen_op_movq(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q(1)), + offsetof(CPUX86State,xmm_regs[reg].ZMM_Q(0))); + break; + case 0x016: /* movhps */ + case 0x116: /* movhpd */ + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + gen_ldq_env_A0(s, offsetof(CPUX86State, + xmm_regs[reg].ZMM_Q(1))); + } else { + /* movlhps */ + rm =3D (modrm & 7) | REX_B(s); + gen_op_movq(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q(1= )), + offsetof(CPUX86State,xmm_regs[rm].ZMM_Q(0))); + } + break; + case 0x216: /* movshdup */ + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + gen_ldo_env_A0(s, offsetof(CPUX86State, xmm_regs[reg])); + } else { + rm =3D (modrm & 7) | REX_B(s); + gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(1= )), + offsetof(CPUX86State,xmm_regs[rm].ZMM_L(1))); + gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(3= )), + offsetof(CPUX86State,xmm_regs[rm].ZMM_L(3))); + } + gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(0)), + offsetof(CPUX86State,xmm_regs[reg].ZMM_L(1))); + gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(2)), + offsetof(CPUX86State,xmm_regs[reg].ZMM_L(3))); + break; + case 0x178: + case 0x378: + { + int bit_index, field_length; + + if (b1 =3D=3D 1 && reg !=3D 0) + goto illegal_op; + field_length =3D x86_ldub_code(env, s) & 0x3F; + bit_index =3D x86_ldub_code(env, s) & 0x3F; + tcg_gen_addi_ptr(s->ptr0, cpu_env, + offsetof(CPUX86State,xmm_regs[reg])); + if (b1 =3D=3D 1) + gen_helper_extrq_i(cpu_env, s->ptr0, + tcg_const_i32(bit_index), + tcg_const_i32(field_length)); + else + gen_helper_insertq_i(cpu_env, s->ptr0, + tcg_const_i32(bit_index), + tcg_const_i32(field_length)); + } + break; + case 0x7e: /* movd ea, mm */ +#ifdef TARGET_X86_64 + if (s->dflag =3D=3D MO_64) { + tcg_gen_ld_i64(s->T0, cpu_env, + offsetof(CPUX86State,fpregs[reg].mmx)); + gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 1); + } else 
+#endif + { + tcg_gen_ld32u_tl(s->T0, cpu_env, + offsetof(CPUX86State,fpregs[reg].mmx.MMX_= L(0))); + gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 1); + } + break; + case 0x17e: /* movd ea, xmm */ +#ifdef TARGET_X86_64 + if (s->dflag =3D=3D MO_64) { + tcg_gen_ld_i64(s->T0, cpu_env, + offsetof(CPUX86State,xmm_regs[reg].ZMM_Q(0)= )); + gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 1); + } else +#endif + { + tcg_gen_ld32u_tl(s->T0, cpu_env, + offsetof(CPUX86State,xmm_regs[reg].ZMM_L(= 0))); + gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 1); + } + break; + case 0x27e: /* movq xmm, ea */ + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + gen_ldq_env_A0(s, offsetof(CPUX86State, + xmm_regs[reg].ZMM_Q(0))); + } else { + rm =3D (modrm & 7) | REX_B(s); + gen_op_movq(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q(0= )), + offsetof(CPUX86State,xmm_regs[rm].ZMM_Q(0))); + } + gen_op_movq_env_0(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q= (1))); + break; + case 0x7f: /* movq ea, mm */ + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + gen_stq_env_A0(s, offsetof(CPUX86State, fpregs[reg].mmx)); + } else { + rm =3D (modrm & 7); + gen_op_movq(s, offsetof(CPUX86State, fpregs[rm].mmx), + offsetof(CPUX86State,fpregs[reg].mmx)); + } + break; + case 0x011: /* movups */ + case 0x111: /* movupd */ + case 0x029: /* movaps */ + case 0x129: /* movapd */ + case 0x17f: /* movdqa ea, xmm */ + case 0x27f: /* movdqu ea, xmm */ + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + gen_sto_env_A0(s, offsetof(CPUX86State, xmm_regs[reg])); + } else { + rm =3D (modrm & 7) | REX_B(s); + gen_op_movo(s, offsetof(CPUX86State, xmm_regs[rm]), + offsetof(CPUX86State,xmm_regs[reg])); + } + break; + case 0x211: /* movss ea, xmm */ + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + tcg_gen_ld32u_tl(s->T0, cpu_env, + offsetof(CPUX86State, xmm_regs[reg].ZMM_L= (0))); + gen_op_st_v(s, MO_32, s->T0, s->A0); + } else { + rm =3D (modrm & 7) | REX_B(s); + gen_op_movl(s, offsetof(CPUX86State, 
xmm_regs[rm].ZMM_L(0)),
+                            offsetof(CPUX86State,xmm_regs[reg].ZMM_L(0)));
+            }
+            break;
+        case 0x311: /* movsd ea, xmm */
+            if (mod != 3) {
+                gen_lea_modrm(env, s, modrm);
+                gen_stq_env_A0(s, offsetof(CPUX86State,
+                                           xmm_regs[reg].ZMM_Q(0)));
+            } else {
+                rm = (modrm & 7) | REX_B(s);
+                gen_op_movq(s, offsetof(CPUX86State, xmm_regs[rm].ZMM_Q(0)),
+                            offsetof(CPUX86State,xmm_regs[reg].ZMM_Q(0)));
+            }
+            break;
+        case 0x013: /* movlps */
+        case 0x113: /* movlpd */
+            if (mod != 3) {
+                gen_lea_modrm(env, s, modrm);
+                gen_stq_env_A0(s, offsetof(CPUX86State,
+                                           xmm_regs[reg].ZMM_Q(0)));
+            } else {
+                goto illegal_op;
+            }
+            break;
+        case 0x017: /* movhps */
+        case 0x117: /* movhpd */
+            if (mod != 3) {
+                gen_lea_modrm(env, s, modrm);
+                gen_stq_env_A0(s, offsetof(CPUX86State,
+                                           xmm_regs[reg].ZMM_Q(1)));
+            } else {
+                goto illegal_op;
+            }
+            break;
+        case 0x71: /* shift mm, im */
+        case 0x72:
+        case 0x73:
+        case 0x171: /* shift xmm, im */
+        case 0x172:
+        case 0x173:
+            val = x86_ldub_code(env, s);
+            if (is_xmm) {
+                tcg_gen_movi_tl(s->T0, val);
+                tcg_gen_st32_tl(s->T0, cpu_env,
+                                offsetof(CPUX86State, xmm_t0.ZMM_L(0)));
+                tcg_gen_movi_tl(s->T0, 0);
+                tcg_gen_st32_tl(s->T0, cpu_env,
+                                offsetof(CPUX86State, xmm_t0.ZMM_L(1)));
+                op1_offset = offsetof(CPUX86State,xmm_t0);
+            } else {
+                tcg_gen_movi_tl(s->T0, val);
+                tcg_gen_st32_tl(s->T0, cpu_env,
+                                offsetof(CPUX86State, mmx_t0.MMX_L(0)));
+                tcg_gen_movi_tl(s->T0, 0);
+                tcg_gen_st32_tl(s->T0, cpu_env,
+                                offsetof(CPUX86State, mmx_t0.MMX_L(1)));
+                op1_offset = offsetof(CPUX86State,mmx_t0);
+            }
+            sse_fn_epp = sse_op_table2[((b - 1) & 3) * 8 +
+                                       (((modrm >> 3)) & 7)][b1];
+            if (!sse_fn_epp) {
+                goto unknown_op;
+            }
+            if (is_xmm) {
+                rm = (modrm & 7) | REX_B(s);
+                op2_offset = offsetof(CPUX86State,xmm_regs[rm]);
+            } else {
+                rm = (modrm & 7);
+                op2_offset = offsetof(CPUX86State,fpregs[rm].mmx);
+            }
+            tcg_gen_addi_ptr(s->ptr0, cpu_env, op2_offset);
+            tcg_gen_addi_ptr(s->ptr1, cpu_env, op1_offset);
+            sse_fn_epp(cpu_env, s->ptr0, s->ptr1);
+            break;
+        case 0x050: /* movmskps */
+            rm = (modrm & 7) | REX_B(s);
+            tcg_gen_addi_ptr(s->ptr0, cpu_env,
+                             offsetof(CPUX86State,xmm_regs[rm]));
+            gen_helper_movmskps(s->tmp2_i32,
cpu_env, s->ptr0); + tcg_gen_extu_i32_tl(cpu_regs[reg], s->tmp2_i32); + break; + case 0x150: /* movmskpd */ + rm =3D (modrm & 7) | REX_B(s); + tcg_gen_addi_ptr(s->ptr0, cpu_env, + offsetof(CPUX86State,xmm_regs[rm])); + gen_helper_movmskpd(s->tmp2_i32, cpu_env, s->ptr0); + tcg_gen_extu_i32_tl(cpu_regs[reg], s->tmp2_i32); + break; + case 0x02a: /* cvtpi2ps */ + case 0x12a: /* cvtpi2pd */ + gen_helper_enter_mmx(cpu_env); + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + op2_offset =3D offsetof(CPUX86State,mmx_t0); + gen_ldq_env_A0(s, op2_offset); + } else { + rm =3D (modrm & 7); + op2_offset =3D offsetof(CPUX86State,fpregs[rm].mmx); + } + op1_offset =3D offsetof(CPUX86State,xmm_regs[reg]); + tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset); + tcg_gen_addi_ptr(s->ptr1, cpu_env, op2_offset); + switch(b >> 8) { + case 0x0: + gen_helper_cvtpi2ps(cpu_env, s->ptr0, s->ptr1); + break; + default: + case 0x1: + gen_helper_cvtpi2pd(cpu_env, s->ptr0, s->ptr1); + break; + } + break; + case 0x22a: /* cvtsi2ss */ + case 0x32a: /* cvtsi2sd */ + ot =3D mo_64_32(s->dflag); + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); + op1_offset =3D offsetof(CPUX86State,xmm_regs[reg]); + tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset); + if (ot =3D=3D MO_32) { + SSEFunc_0_epi sse_fn_epi =3D sse_op_table3ai[(b >> 8) & 1]; + tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); + sse_fn_epi(cpu_env, s->ptr0, s->tmp2_i32); + } else { +#ifdef TARGET_X86_64 + SSEFunc_0_epl sse_fn_epl =3D sse_op_table3aq[(b >> 8) & 1]; + sse_fn_epl(cpu_env, s->ptr0, s->T0); +#else + goto illegal_op; +#endif + } + break; + case 0x02c: /* cvttps2pi */ + case 0x12c: /* cvttpd2pi */ + case 0x02d: /* cvtps2pi */ + case 0x12d: /* cvtpd2pi */ + gen_helper_enter_mmx(cpu_env); + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + op2_offset =3D offsetof(CPUX86State,xmm_t0); + gen_ldo_env_A0(s, op2_offset); + } else { + rm =3D (modrm & 7) | REX_B(s); + op2_offset =3D offsetof(CPUX86State,xmm_regs[rm]); + } + op1_offset =3D 
offsetof(CPUX86State,fpregs[reg & 7].mmx); + tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset); + tcg_gen_addi_ptr(s->ptr1, cpu_env, op2_offset); + switch(b) { + case 0x02c: + gen_helper_cvttps2pi(cpu_env, s->ptr0, s->ptr1); + break; + case 0x12c: + gen_helper_cvttpd2pi(cpu_env, s->ptr0, s->ptr1); + break; + case 0x02d: + gen_helper_cvtps2pi(cpu_env, s->ptr0, s->ptr1); + break; + case 0x12d: + gen_helper_cvtpd2pi(cpu_env, s->ptr0, s->ptr1); + break; + } + break; + case 0x22c: /* cvttss2si */ + case 0x32c: /* cvttsd2si */ + case 0x22d: /* cvtss2si */ + case 0x32d: /* cvtsd2si */ + ot =3D mo_64_32(s->dflag); + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + if ((b >> 8) & 1) { + gen_ldq_env_A0(s, offsetof(CPUX86State, xmm_t0.ZMM_Q(0= ))); + } else { + gen_op_ld_v(s, MO_32, s->T0, s->A0); + tcg_gen_st32_tl(s->T0, cpu_env, + offsetof(CPUX86State, xmm_t0.ZMM_L(0))= ); + } + op2_offset =3D offsetof(CPUX86State,xmm_t0); + } else { + rm =3D (modrm & 7) | REX_B(s); + op2_offset =3D offsetof(CPUX86State,xmm_regs[rm]); + } + tcg_gen_addi_ptr(s->ptr0, cpu_env, op2_offset); + if (ot =3D=3D MO_32) { + SSEFunc_i_ep sse_fn_i_ep =3D + sse_op_table3bi[((b >> 7) & 2) | (b & 1)]; + sse_fn_i_ep(s->tmp2_i32, cpu_env, s->ptr0); + tcg_gen_extu_i32_tl(s->T0, s->tmp2_i32); + } else { +#ifdef TARGET_X86_64 + SSEFunc_l_ep sse_fn_l_ep =3D + sse_op_table3bq[((b >> 7) & 2) | (b & 1)]; + sse_fn_l_ep(s->T0, cpu_env, s->ptr0); +#else + goto illegal_op; +#endif + } + gen_op_mov_reg_v(s, ot, reg, s->T0); + break; + case 0xc4: /* pinsrw */ + case 0x1c4: + s->rip_offset =3D 1; + gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0); + val =3D x86_ldub_code(env, s); + if (b1) { + val &=3D 7; + tcg_gen_st16_tl(s->T0, cpu_env, + offsetof(CPUX86State,xmm_regs[reg].ZMM_W(v= al))); + } else { + val &=3D 3; + tcg_gen_st16_tl(s->T0, cpu_env, + offsetof(CPUX86State,fpregs[reg].mmx.MMX_W= (val))); + } + break; + case 0xc5: /* pextrw */ + case 0x1c5: + if (mod !=3D 3) + goto illegal_op; + ot =3D mo_64_32(s->dflag); + 
val =3D x86_ldub_code(env, s); + if (b1) { + val &=3D 7; + rm =3D (modrm & 7) | REX_B(s); + tcg_gen_ld16u_tl(s->T0, cpu_env, + offsetof(CPUX86State,xmm_regs[rm].ZMM_W(v= al))); + } else { + val &=3D 3; + rm =3D (modrm & 7); + tcg_gen_ld16u_tl(s->T0, cpu_env, + offsetof(CPUX86State,fpregs[rm].mmx.MMX_W(= val))); + } + reg =3D ((modrm >> 3) & 7) | REX_R(s); + gen_op_mov_reg_v(s, ot, reg, s->T0); + break; + case 0x1d6: /* movq ea, xmm */ + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + gen_stq_env_A0(s, offsetof(CPUX86State, + xmm_regs[reg].ZMM_Q(0))); + } else { + rm =3D (modrm & 7) | REX_B(s); + gen_op_movq(s, offsetof(CPUX86State, xmm_regs[rm].ZMM_Q(0)= ), + offsetof(CPUX86State,xmm_regs[reg].ZMM_Q(0))); + gen_op_movq_env_0(s, + offsetof(CPUX86State, xmm_regs[rm].ZMM_Q= (1))); + } + break; + case 0x2d6: /* movq2dq */ + gen_helper_enter_mmx(cpu_env); + rm =3D (modrm & 7); + gen_op_movq(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q(0)), + offsetof(CPUX86State,fpregs[rm].mmx)); + gen_op_movq_env_0(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q= (1))); + break; + case 0x3d6: /* movdq2q */ + gen_helper_enter_mmx(cpu_env); + rm =3D (modrm & 7) | REX_B(s); + gen_op_movq(s, offsetof(CPUX86State, fpregs[reg & 7].mmx), + offsetof(CPUX86State,xmm_regs[rm].ZMM_Q(0))); + break; + case 0xd7: /* pmovmskb */ + case 0x1d7: + if (mod !=3D 3) + goto illegal_op; + if (b1) { + rm =3D (modrm & 7) | REX_B(s); + tcg_gen_addi_ptr(s->ptr0, cpu_env, + offsetof(CPUX86State, xmm_regs[rm])); + gen_helper_pmovmskb_xmm(s->tmp2_i32, cpu_env, s->ptr0); + } else { + rm =3D (modrm & 7); + tcg_gen_addi_ptr(s->ptr0, cpu_env, + offsetof(CPUX86State, fpregs[rm].mmx)); + gen_helper_pmovmskb_mmx(s->tmp2_i32, cpu_env, s->ptr0); + } + reg =3D ((modrm >> 3) & 7) | REX_R(s); + tcg_gen_extu_i32_tl(cpu_regs[reg], s->tmp2_i32); + break; + + case 0x138: + case 0x038: + b =3D modrm; + if ((b & 0xf0) =3D=3D 0xf0) { + goto do_0f_38_fx; + } + modrm =3D x86_ldub_code(env, s); + rm =3D modrm & 7; + reg =3D ((modrm >> 
3) & 7) | REX_R(s); + mod =3D (modrm >> 6) & 3; + + assert(b1 < 2); + sse_fn_epp =3D sse_op_table6[b].op[b1]; + if (!sse_fn_epp) { + goto unknown_op; + } + if (!(s->cpuid_ext_features & sse_op_table6[b].ext_mask)) + goto illegal_op; + + if (b1) { + op1_offset =3D offsetof(CPUX86State,xmm_regs[reg]); + if (mod =3D=3D 3) { + op2_offset =3D offsetof(CPUX86State,xmm_regs[rm | REX_= B(s)]); + } else { + op2_offset =3D offsetof(CPUX86State,xmm_t0); + gen_lea_modrm(env, s, modrm); + switch (b) { + case 0x20: case 0x30: /* pmovsxbw, pmovzxbw */ + case 0x23: case 0x33: /* pmovsxwd, pmovzxwd */ + case 0x25: case 0x35: /* pmovsxdq, pmovzxdq */ + gen_ldq_env_A0(s, op2_offset + + offsetof(ZMMReg, ZMM_Q(0))); + break; + case 0x21: case 0x31: /* pmovsxbd, pmovzxbd */ + case 0x24: case 0x34: /* pmovsxwq, pmovzxwq */ + tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, + s->mem_index, MO_LEUL); + tcg_gen_st_i32(s->tmp2_i32, cpu_env, op2_offset + + offsetof(ZMMReg, ZMM_L(0))); + break; + case 0x22: case 0x32: /* pmovsxbq, pmovzxbq */ + tcg_gen_qemu_ld_tl(s->tmp0, s->A0, + s->mem_index, MO_LEUW); + tcg_gen_st16_tl(s->tmp0, cpu_env, op2_offset + + offsetof(ZMMReg, ZMM_W(0))); + break; + case 0x2a: /* movntqda */ + gen_ldo_env_A0(s, op1_offset); + return; + default: + gen_ldo_env_A0(s, op2_offset); + } + } + } else { + op1_offset =3D offsetof(CPUX86State,fpregs[reg].mmx); + if (mod =3D=3D 3) { + op2_offset =3D offsetof(CPUX86State,fpregs[rm].mmx); + } else { + op2_offset =3D offsetof(CPUX86State,mmx_t0); + gen_lea_modrm(env, s, modrm); + gen_ldq_env_A0(s, op2_offset); + } + } + if (sse_fn_epp =3D=3D SSE_SPECIAL) { + goto unknown_op; + } + + tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset); + tcg_gen_addi_ptr(s->ptr1, cpu_env, op2_offset); + sse_fn_epp(cpu_env, s->ptr0, s->ptr1); + + if (b =3D=3D 0x17) { + set_cc_op(s, CC_OP_EFLAGS); + } + break; + + case 0x238: + case 0x338: + do_0f_38_fx: + /* Various integer extensions at 0f 38 f[0-f]. 
*/ + b =3D modrm | (b1 << 8); + modrm =3D x86_ldub_code(env, s); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + + switch (b) { + case 0x3f0: /* crc32 Gd,Eb */ + case 0x3f1: /* crc32 Gd,Ey */ + do_crc32: + if (!(s->cpuid_ext_features & CPUID_EXT_SSE42)) { + goto illegal_op; + } + if ((b & 0xff) =3D=3D 0xf0) { + ot =3D MO_8; + } else if (s->dflag !=3D MO_64) { + ot =3D (s->prefix & PREFIX_DATA ? MO_16 : MO_32); + } else { + ot =3D MO_64; + } + + tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[reg]); + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); + gen_helper_crc32(s->T0, s->tmp2_i32, + s->T0, tcg_const_i32(8 << ot)); + + ot =3D mo_64_32(s->dflag); + gen_op_mov_reg_v(s, ot, reg, s->T0); + break; + + case 0x1f0: /* crc32 or movbe */ + case 0x1f1: + /* For these insns, the f3 prefix is supposed to have prio= rity + over the 66 prefix, but that's not what we implement ab= ove + setting b1. */ + if (s->prefix & PREFIX_REPNZ) { + goto do_crc32; + } + /* FALLTHRU */ + case 0x0f0: /* movbe Gy,My */ + case 0x0f1: /* movbe My,Gy */ + if (!(s->cpuid_ext_features & CPUID_EXT_MOVBE)) { + goto illegal_op; + } + if (s->dflag !=3D MO_64) { + ot =3D (s->prefix & PREFIX_DATA ? 
MO_16 : MO_32); + } else { + ot =3D MO_64; + } + + gen_lea_modrm(env, s, modrm); + if ((b & 1) =3D=3D 0) { + tcg_gen_qemu_ld_tl(s->T0, s->A0, + s->mem_index, ot | MO_BE); + gen_op_mov_reg_v(s, ot, reg, s->T0); + } else { + tcg_gen_qemu_st_tl(cpu_regs[reg], s->A0, + s->mem_index, ot | MO_BE); + } + break; + + case 0x0f2: /* andn Gy, By, Ey */ + if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI1) + || !(s->prefix & PREFIX_VEX) + || s->vex_l !=3D 0) { + goto illegal_op; + } + ot =3D mo_64_32(s->dflag); + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); + tcg_gen_andc_tl(s->T0, s->T0, cpu_regs[s->vex_v]); + gen_op_mov_reg_v(s, ot, reg, s->T0); + gen_op_update1_cc(s); + set_cc_op(s, CC_OP_LOGICB + ot); + break; + + case 0x0f7: /* bextr Gy, Ey, By */ + if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI1) + || !(s->prefix & PREFIX_VEX) + || s->vex_l !=3D 0) { + goto illegal_op; + } + ot =3D mo_64_32(s->dflag); + { + TCGv bound, zero; + + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); + /* Extract START, and shift the operand. + Shifts larger than operand size get zeros. */ + tcg_gen_ext8u_tl(s->A0, cpu_regs[s->vex_v]); + tcg_gen_shr_tl(s->T0, s->T0, s->A0); + + bound =3D tcg_const_tl(ot =3D=3D MO_64 ? 63 : 31); + zero =3D tcg_c8); + tcg_gen_movcond_tl(TCG_COND_LEU, s->A0, s->A0, bound, + s->A0, bound); + tcg_temp_free(bound); + tcg_gen_movi_tl(s->T1, 1); + tcg_gen_shl_tl(s->T1, s->T1, s->A0); + tcg_gen_subi_tl(s->T1, s->T1, 1); + tcg_gen_and_tl(s->T0, s->T0, s->T1); + + gen_op_mov_reg_v(s, ot, reg, s->T0); + gen_op_update1_cc(s); + set_cc_op(s, CC_OP_LOGICB + ot); + } + break; + + case 0x0f5: /* bzhi Gy, Ey, By */ + if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI2) + || !(s->prefix & PREFIX_VEX) + || s->vex_l !=3D 0) { + goto illegal_op; + } + ot =3D mo_64_32(s->dflag); + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); + tcg_gen_ext8u_tl(s->T1, cpu_regs[s->vex_v]); + { + TCGv bound =3D tcg_const_tl(ot =3D=3D MO_64 ? 
63 : 31); + /* Note that since we're using BMILG (in order to get O + cleared) we need to store the inverse into C. */ + tcg_gen_setcond_tl(TCG_COND_LT, cpu_cc_src, + s->T1, bound); + tcg_gen_movcond_tl(TCG_COND_GT, s->T1, s->T1, + bound, bound, s->T1); + tcg_temp_free(bound); + } + tcg_gen_movi_tl(s->A0, -1); + tcg_gen_shl_tl(s->A0, s->A0, s->T1); + tcg_gen_andc_tl(s->T0, s->T0, s->A0); + gen_op_mov_reg_v(s, ot, reg, s->T0); + gen_op_update1_cc(s); + set_cc_op(s, CC_OP_BMILGB + ot); + break; + + case 0x3f6: /* mulx By, Gy, rdx, Ey */ + if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI2) + || !(s->prefix & PREFIX_VEX) + || s->vex_l !=3D 0) { + goto illegal_op; + } + ot =3D mo_64_32(s->dflag); + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); + switch (ot) { + default: + tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); + tcg_gen_trunc_tl_i32(s->tmp3_i32, cpu_regs[R_EDX]); + tcg_gen_mulu2_i32(s->tmp2_i32, s->tmp3_i32, + s->tmp2_i32, s->tmp3_i32); + tcg_gen_extu_i32_tl(cpu_regs[s->vex_v], s->tmp2_i32); + tcg_gen_extu_i32_tl(cpu_regs[reg], s->tmp3_i32); + break; +#ifdef TARGET_X86_64 + case MO_64: + tcg_gen_mulu2_i64(s->T0, s->T1, + s->T0, cpu_regs[R_EDX]); + tcg_gen_mov_i64(cpu_regs[s->vex_v], s->T0); + tcg_gen_mov_i64(cpu_regs[reg], s->T1); + break; +#endif + } + break; + + case 0x3f5: /* pdep Gy, By, Ey */ + if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI2) + || !(s->prefix & PREFIX_VEX) + || s->vex_l !=3D 0) { + goto illegal_op; + } + ot =3D mo_64_32(s->dflag); + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); + /* Note that by zero-extending the source operand, we + automatically handle zero-extending the result. 
*/ + if (ot =3D=3D MO_64) { + tcg_gen_mov_tl(s->T1, cpu_regs[s->vex_v]); + } else { + tcg_gen_ext32u_tl(s->T1, cpu_regs[s->vex_v]); + } + gen_helper_pdep(cpu_regs[reg], s->T1, s->T0); + break; + + case 0x2f5: /* pext Gy, By, Ey */ + if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI2) + || !(s->prefix & PREFIX_VEX) + || s->vex_l !=3D 0) { + goto illegal_op; + } + ot =3D mo_64_32(s->dflag); + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); + /* Note that by zero-extending the source operand, we + automatically handle zero-extending the result. */ + if (ot =3D=3D MO_64) { + tcg_gen_mov_tl(s->T1, cpu_regs[s->vex_v]); + } else { + tcg_gen_ext32u_tl(s->T1, cpu_regs[s->vex_v]); + } + gen_helper_pext(cpu_regs[reg], s->T1, s->T0); + break; + + case 0x1f6: /* adcx Gy, Ey */ + case 0x2f6: /* adox Gy, Ey */ + if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_ADX)) { + goto illegal_op; + } else { + TCGv carry_in, carry_out, zero; + int end_op; + + ot =3D mo_64_32(s->dflag); + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); + + /* Re-use the carry-out from a previous round. */ + carry_in =3D NULL; + carry_out =3D (b =3D=3D 0x1f6 ? cpu_cc_dst : cpu_cc_sr= c2); + switch (s->cc_op) { + case CC_OP_ADCX: + if (b =3D=3D 0x1f6) { + carry_in =3D cpu_cc_dst; + end_op =3D CC_OP_ADCX; + } else { + end_op =3D CC_OP_ADCOX; + } + break; + case CC_OP_ADOX: + if (b =3D=3D 0x1f6) { + end_op =3D CC_OP_ADCOX; + } else { + carry_in =3D cpu_cc_src2; + end_op =3D CC_OP_ADOX; + } + break; + case CC_OP_ADCOX: + end_op =3D CC_OP_ADCOX; + carry_in =3D carry_out; + break; + default: + end_op =3D (b =3D=3D 0x1f6 ? CC_OP_ADCX : CC_OP_AD= OX); + break; + } + /* If we can't reuse carry-out, get it out of EFLAGS. = */ + if (!carry_in) { + if (s->cc_op !=3D CC_OP_ADCX && s->cc_op !=3D CC_O= P_ADOX) { + gen_compute_eflags(s); + } + carry_in =3D s->tmp0; + tcg_gen_extract_tl(carry_in, cpu_cc_src, + ctz32(b =3D=3D 0x1f6 ? 
CC_C : C= C_O), 1); + } + + switch (ot) { +#ifdef TARGET_X86_64 + case MO_32: + /* If we know TL is 64-bit, and we want a 32-bit + result, just do everything in 64-bit arithmetic= . */ + tcg_gen_ext32u_i64(cpu_regs[reg], cpu_regs[reg]); + tcg_gen_ext32u_i64(s->T0, s->T0); + tcg_gen_add_i64(s->T0, s->T0, cpu_regs[reg]); + tcg_gen_add_i64(s->T0, s->T0, carry_in); + tcg_gen_ext32u_i64(cpu_regs[reg], s->T0); + tcg_gen_shri_i64(carry_out, s->T0, 32); + break; +#endif + default: + /* Otherwise compute the carry-out in two steps. = */ + zero =3D tcg_const_tl(0); + tcg_gen_add2_tl(s->T0, carry_out, + s->T0, zero, + carry_in, zero); + tcg_gen_add2_tl(cpu_regs[reg], carry_out, + cpu_regs[reg], carry_out, + s->T0, zero); + tcg_temp_free(zero); + break; + } + set_cc_op(s, end_op); + } + break; + + case 0x1f7: /* shlx Gy, Ey, By */ + case 0x2f7: /* sarx Gy, Ey, By */ + case 0x3f7: /* shrx Gy, Ey, By */ + if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI2) + || !(s->prefix & PREFIX_VEX) + || s->vex_l !=3D 0) { + goto illegal_op; + } + ot =3D mo_64_32(s->dflag); + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); + if (ot =3D=3D MO_64) { + tcg_gen_andi_tl(s->T1, cpu_regs[s->vex_v], 63); + } else { + tcg_gen_andi_tl(s->T1, cpu_regs[s->vex_v], 31); + } + if (b =3D=3D 0x1f7) { + tcg_gen_shl_tl(s->T0, s->T0, s->T1); + } else if (b =3D=3D 0x2f7) { + if (ot !=3D MO_64) { + tcg_gen_ext32s_tl(s->T0, s->T0); + } + tcg_gen_sar_tl(s->T0, s->T0, s->T1); + } else { + if (ot !=3D MO_64) { + tcg_gen_ext32u_tl(s->T0, s->T0); + } + tcg_gen_shr_tl(s->T0, s->T0, s->T1); + } + gen_op_mov_reg_v(s, ot, reg, s->T0); + break; + + case 0x0f3: + case 0x1f3: + case 0x2f3: + case 0x3f3: /* Group 17 */ + if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI1) + || !(s->prefix & PREFIX_VEX) + || s->vex_l !=3D 0) { + goto illegal_op; + } + ot =3D mo_64_32(s->dflag); + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); + + tcg_gen_mov_tl(cpu_cc_src, s->T0); + switch (reg & 7) { + case 1: /* blsr By,Ey */ + 
tcg_gen_subi_tl(s->T1, s->T0, 1); + tcg_gen_and_tl(s->T0, s->T0, s->T1); + break; + case 2: /* blsmsk By,Ey */ + tcg_gen_subi_tl(s->T1, s->T0, 1); + tcg_gen_xor_tl(s->T0, s->T0, s->T1); + break; + case 3: /* blsi By, Ey */ + tcg_gen_neg_tl(s->T1, s->T0); + tcg_gen_and_tl(s->T0, s->T0, s->T1); + break; + default: + goto unknown_op; + } + tcg_gen_mov_tl(cpu_cc_dst, s->T0); + gen_op_mov_reg_v(s, ot, s->vex_v, s->T0); + set_cc_op(s, CC_OP_BMILGB + ot); + break; + + default: + goto unknown_op; + } + break; + + case 0x03a: + case 0x13a: + b =3D modrm; + modrm =3D x86_ldub_code(env, s); + rm =3D modrm & 7; + reg =3D ((modrm >> 3) & 7) | REX_R(s); + mod =3D (modrm >> 6) & 3; + + assert(b1 < 2); + sse_fn_eppi =3D sse_op_table7[b].op[b1]; + if (!sse_fn_eppi) { + goto unknown_op; + } + if (!(s->cpuid_ext_features & sse_op_table7[b].ext_mask)) + goto illegal_op; + + s->rip_offset =3D 1; + + if (sse_fn_eppi =3D=3D SSE_SPECIAL) { + ot =3D mo_64_32(s->dflag); + rm =3D (modrm & 7) | REX_B(s); + if (mod !=3D 3) + gen_lea_modrm(env, s, modrm); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + val =3D x86_ldub_code(env, s); + switch (b) { + case 0x14: /* pextrb */ + tcg_gen_ld8u_tl(s->T0, cpu_env, offsetof(CPUX86State, + xmm_regs[reg].ZMM_B(val & 15))= ); + if (mod =3D=3D 3) { + gen_op_mov_reg_v(s, ot, rm, s->T0); + } else { + tcg_gen_qemu_st_tl(s->T0, s->A0, + s->mem_index, MO_UB); + } + break; + case 0x15: /* pextrw */ + tcg_gen_ld16u_tl(s->T0, cpu_env, offsetof(CPUX86State, + xmm_regs[reg].ZMM_W(val & 7))); + if (mod =3D=3D 3) { + gen_op_mov_reg_v(s, ot, rm, s->T0); + } else { + tcg_gen_qemu_st_tl(s->T0, s->A0, + s->mem_index, MO_LEUW); + } + break; + case 0x16: + if (ot =3D=3D MO_32) { /* pextrd */ + tcg_gen_ld_i32(s->tmp2_i32, cpu_env, + offsetof(CPUX86State, + xmm_regs[reg].ZMM_L(val & = 3))); + if (mod =3D=3D 3) { + tcg_gen_extu_i32_tl(cpu_regs[rm], s->tmp2_i32); + } else { + tcg_gen_qemu_st_i32(s->tmp2_i32, s->A0, + s->mem_index, MO_LEUL); + } + } else { /* pextrq */ +#ifdef 
TARGET_X86_64 + tcg_gen_ld_i64(s->tmp1_i64, cpu_env, + offsetof(CPUX86State, + xmm_regs[reg].ZMM_Q(val & = 1))); + if (mod =3D=3D 3) { + tcg_gen_mov_i64(cpu_regs[rm], s->tmp1_i64); + } else { + tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0, + s->mem_index, MO_LEUQ); + } +#else + goto illegal_op; +#endif + } + break; + case 0x17: /* extractps */ + tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, + xmm_regs[reg].ZMM_L(val & 3))); + if (mod =3D=3D 3) { + gen_op_mov_reg_v(s, ot, rm, s->T0); + } else { + tcg_gen_qemu_st_tl(s->T0, s->A0, + s->mem_index, MO_LEUL); + } + break; + case 0x20: /* pinsrb */ + if (mod =3D=3D 3) { + gen_op_mov_v_reg(s, MO_32, s->T0, rm); + } else { + tcg_gen_qemu_ld_tl(s->T0, s->A0, + s->mem_index, MO_UB); + } + tcg_gen_st8_tl(s->T0, cpu_env, offsetof(CPUX86State, + xmm_regs[reg].ZMM_B(val & 15))= ); + break; + case 0x21: /* insertps */ + if (mod =3D=3D 3) { + tcg_gen_ld_i32(s->tmp2_i32, cpu_env, + offsetof(CPUX86State,xmm_regs[rm] + .ZMM_L((val >> 6) & 3))); + } else { + tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, + s->mem_index, MO_LEUL); + } + tcg_gen_st_i32(s->tmp2_i32, cpu_env, + offsetof(CPUX86State,xmm_regs[reg] + .ZMM_L((val >> 4) & 3))); + if ((val >> 0) & 1) + tcg_gen_st_i32(tcg_const_i32(0 /*float32_zero*/) = xmm_regs[reg].ZMM_L(1))); + if ((val >> 2) & 1) + tcg_gen_st_i32(tcg_const_i32(0 /*float32_zero*/), + cpu_env, offsetof(CPUX86State, + xmm_regs[reg].ZMM_L(2))); + if ((val >> 3) & 1) + tcg_gen_st_i32(tcg_const_i32(0 /*float32_zero*/), + cpu_env, offsetof(CPUX86State, + xmm_regs[reg].ZMM_L(3))); + break; + case 0x22: + if (ot =3D=3D MO_32) { /* pinsrd */ + if (mod =3D=3D 3) { + tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[rm]= ); + } else { + tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, + s->mem_index, MO_LEUL); + } + tcg_gen_st_i32(s->tmp2_i32, cpu_env, + offsetof(CPUX86State, + xmm_regs[reg].ZMM_L(val & = 3))); + } else { /* pinsrq */ +#ifdef TARGET_X86_64 + if (mod =3D=3D 3) { + gen_op_mov_v_reg(s, ot, s->tmp1_i64, rm); + } else { + 
tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0, + s->mem_index, MO_LEUQ); + } + tcg_gen_st_i64(s->tmp1_i64, cpu_env, + offsetof(CPUX86State, + xmm_regs[reg].ZMM_Q(val & = 1))); +#else + goto illegal_op; +#endif + } + break; + } + return; + } + + if (b1) { + op1_offset =3D offsetof(CPUX86State,xmm_regs[reg]); + if (mod =3D=3D 3) { + op2_offset =3D offsetof(CPUX86State,xmm_regs[rm | REX_= B(s)]); + } else { + op2_offset =3D offsetof(CPUX86State,xmm_t0); + gen_lea_modrm(env, s, modrm); + gen_ldo_env_A0(s, op2_offset); + } + } else { + op1_offset =3D offsetof(CPUX86State,fpregs[reg].mmx); + if (mod =3D=3D 3) { + op2_offset =3D offsetof(CPUX86State,fpregs[rm].mmx); + } else { + op2_offset =3D offsetof(CPUX86State,mmx_t0); + gen_lea_modrm(env, s, modrm); + gen_ldq_env_A0(s, op2_offset); + } + } + val =3D x86_ldub_code(env, s); + + if ((b & 0xfc) =3D=3D 0x60) { /* pcmpXstrX */ + set_cc_op(s, CC_OP_EFLAGS); + + if (s->dflag =3D=3D MO_64) { + /* The helper must use entire 64-bit gp registers */ + val |=3D 1 << 8; + } + } + + tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset); + tcg_gen_addi_ptr(s->ptr1, cpu_env, op2_offset); + sse_fn_eppi(cpu_env, s->ptr0, s->ptr1, tcg_const_i32(val)); + break; + + case 0x33a: + /* Various integer extensions at 0f 3a f[0-f]. 
*/ + b =3D modrm | (b1 << 8); + modrm =3D x86_ldub_code(env, s); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + + switch (b) { + case 0x3f0: /* rorx Gy,Ey, Ib */ + if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI2) + || !(s->prefix & PREFIX_VEX) + || s->vex_l !=3D 0) { + goto illegal_op; + } + ot =3D mo_64_32(s->dflag); + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); + b =3D x86_ldub_code(env, s); + if (ot =3D=3D MO_64) { + tcg_gen_rotri_tl(s->T0, s->T0, b & 63); + } else { + tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); + tcg_gen_rotri_i32(s->tmp2_i32, s->tmp2_i32, b & 31); + tcg_gen_extu_i32_tl(s->T0, s->tmp2_i32); + } + gen_op_mov_reg_v(s, ot, reg, s->T0); + break; + + default: + goto unknown_op; + } + break; + + default: + unknown_op: + gen_unknown_opcode(env, s); + return; + } + } else { + /* generic MMX or SSE operation */ + switch(b) { + case 0x70: /* pshufx insn */ + case 0xc6: /* pshufx insn */ + case 0xc2: /* compare insns */ + s->rip_offset =3D 1; + break; + default: + break; + } + if (is_xmm) { + op1_offset =3D offsetof(CPUX86State,xmm_regs[reg]); + if (mod !=3D 3) { + int sz =3D 4; + + gen_lea_modrm(env, s, modrm); + op2_offset =3D offsetof(CPUX86State,xmm_t0); + + switch (b) { + case 0x50 ... 0x5a: + case 0x5c ... 0x5f: + case 0xc2: + /* Most sse scalar operations. 
*/ + if (b1 =3D=3D 2) { + sz =3D 2; + } else if (b1 =3D=3D 3) { + sz =3D 3; + } + break; + + case 0x2e: /* ucomis[sd] */ + case 0x2f: /* comis[sd] */ + if (b1 =3D=3D 0) { + sz =3D 2; + } else { + sz =3D 3; + } + break; + } + + switch (sz) { + case 2: + /* 32 bit access */ + gen_op_ld_v(s, MO_32, s->T0, s->A0); + tcg_gen_st32_tl(s->T0, cpu_env, + offsetof(CPUX86State,xmm_t0.ZMM_L(0))); + break; + case 3: + /* 64 bit access */ + gen_ldq_env_A0(s, offsetof(CPUX86State, xmm_t0.ZMM_D(0= ))); + break; + default: + /* 128 bit access */ + gen_ldo_env_A0(s, op2_offset); + break; + } + } else { + rm =3D (modrm & 7) | REX_B(s); + op2_offset =3D offsetof(CPUX86State,xmm_regs[rm]); + } + } else { + op1_offset =3D offsetof(CPUX86State,fpregs[reg].mmx); + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + op2_offset =3D offsetof(CPUX86State,mmx_t0); + gen_ldq_env_A0(s, op2_offset); + } else { + rm =3D (modrm & 7); + op2_offset =3D offsetof(CPUX86State,fpregs[rm].mmx); + } + } + switch(b) { + case 0x0f: /* 3DNow! data insns */ + val =3D x86_ldub_code(env, s); + sse_fn_epp =3D sse_op_table5[val]; + if (!sse_fn_epp) { + goto unknown_op; + } + if (!(s->cpuid_ext2_features & CPUID_EXT2_3DNOW)) { + goto illegal_op; + } + tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset); + tcg_gen_addi_ptr(s->ptr1, cpu_env, op2_offset); + sse_fn_epp(cpu_env, s->ptr0, s->ptr1); + break; + case 0x70: /* pshufx insn */ + case 0xc6: /* pshufx insn */ + val =3D x86_ldub_code(env, s); + tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset); + tcg_gen_addi_ptr(s->ptr1, cpu_env, op2_offset); + /* XXX: introduce a new table? 
*/ + sse_fn_ppi =3D (SSEFunc_0_ppi)sse_fn_epp; + sse_fn_ppi(s->ptr0, s->ptr1, tcg_const_i32(val)); + break; + case 0xc2: + /* compare insns, bits 7:3 (7:5 for AVX) are ignored */ + val =3D x86_ldub_code(env, s) & 7; + sse_fn_epp =3D sse_op_table4[val][b1]; + + tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset); + tcg_gen_addi_ptr(s->ptr1, cpu_env, op2_offset); + sse_fn_epp(cpu_env, s->ptr0, s->ptr1); + break; + case 0xf7: + /* maskmov : we must prepare A0 */ + if (mod !=3D 3) + goto illegal_op; + tcg_gen_mov_tl(s->A0, cpu_regs[R_EDI]); + gen_extu(s->aflag, s->A0); + gen_add_A0_ds_seg(s); + + tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset); + tcg_gen_addi_ptr(s->ptr1, cpu_env, op2_offset); + /* XXX: introduce a new table? */ + sse_fn_eppt =3D (SSEFunc_0_eppt)sse_fn_epp; + sse_fn_eppt(cpu_env, s->ptr0, s->ptr1, s->A0); + break; + default: + tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset); + tcg_gen_addi_ptr(s->ptr1, cpu_env, op2_offset); + sse_fn_epp(cpu_env, s->ptr0, s->ptr1); + break; + } + if (b =3D=3D 0x2e || b =3D=3D 0x2f) { + set_cc_op(s, CC_OP_EFLAGS); + } + } +} + +/* convert one instruction. s->base.is_jmp is set if the translation must + be stopped. Return the next pc value */ +static target_ulong disas_insn(DisasContext *s, CPUState *cpu) +{ + CPUX86State *env =3D cpu->env_ptr; + int b, prefixes; + int shift; + MemOp ot, aflag, dflag; + int modrm, reg, rm, mod, op, opreg, val; + target_ulong next_eip, tval; + target_ulong pc_start =3D s->base.pc_next; + + s->pc_start =3D s->pc =3D pc_start; + s->override =3D -1; +#ifdef TARGET_X86_64 + s->rex_w =3D false; + s->rex_r =3D 0; + s->rex_x =3D 0; + s->rex_b =3D 0; +#endif + s->rip_offset =3D 0; /* for relative ip address */ + s->vex_l =3D 0; + s->vex_v =3D 0; + if (sigsetjmp(s->jmpbuf, 0) !=3D 0) { + gen_exception_gpf(s); + return s->pc; + } + + prefixes =3D 0; + + next_byte: + b =3D x86_ldub_code(env, s); + /* Collect prefixes. 
*/ + switch (b) { + case 0xf3: + prefixes |=3D PREFIX_REPZ; + goto next_byte; + case 0xf2: + prefixes |=3D PREFIX_REPNZ; + goto next_byte; + case 0xf0: + prefixes |=3D PREFIX_LOCK; + goto next_byte; + case 0x2e: + s->override =3D R_CS; + goto next_byte; + case 0x36: + s->override =3D R_SS; + goto next_byte; + case 0x3e: + s->override =3D R_DS; + goto next_byte; + case 0x26: + s->override =3D R_ES; + goto next_byte; + case 0x64: + s->override =3D R_FS; + goto next_byte; + case 0x65: + s->override =3D R_GS; + goto next_byte; + case 0x66: + prefixes |=3D PREFIX_DATA; + goto next_byte; + case 0x67: + prefixes |=3D PREFIX_ADR; + goto next_byte; +#ifdef TARGET_X86_64 + case 0x40 ... 0x4f: + if (CODE64(s)) { + /* REX prefix */ + prefixes |=3D PREFIX_REX; + s->rex_w =3D (b >> 3) & 1; + s->rex_r =3D (b & 0x4) << 1; + s->rex_x =3D (b & 0x2) << 2; + s->rex_b =3D (b & 0x1) << 3; + goto next_byte; + } + break; +#endif + case 0xc5: /* 2-byte VEX */ + case 0xc4: /* 3-byte VEX */ + /* VEX prefixes cannot be used except in 32-bit mode. + Otherwise the instruction is LES or LDS. */ + if (CODE32(s) && !VM86(s)) { + static const int pp_prefix[4] =3D { + 0, PREFIX_DATA, PREFIX_REPZ, PREFIX_REPNZ + }; + int vex3, vex2 =3D x86_ldub_code(env, s); + + if (!CODE64(s) && (vex2 & 0xc0) !=3D 0xc0) { + /* 4.1.4.6: In 32-bit mode, bits [7:6] must be 11b, + otherwise the instruction is LES or LDS. */ + s->pc--; /* rewind the advance_pc() x86_ldub_code() did */ + break; + } + + /* 4.1.1-4.1.3: No preceding lock, 66, f2, f3, or rex prefixes= . 
*/ + if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ + | PREFIX_LOCK | PREFIX_DATA | PREFIX_REX)) { + goto illegal_op; + } +#ifdef TARGET_X86_64 + s->rex_r =3D (~vex2 >> 4) & 8; +#endif + if (b =3D=3D 0xc5) { + /* 2-byte VEX prefix: RVVVVlpp, implied 0f leading opcode = byte */ + vex3 =3D vex2; + b =3D x86_ldub_code(env, s) | 0x100; + } else { + /* 3-byte VEX prefix: RXBmmmmm wVVVVlpp */ + vex3 =3D x86_ldub_code(env, s); +#ifdef TARGET_X86_64 + s->rex_x =3D (~vex2 >> 3) & 8; + s->rex_b =3D (~vex2 >> 2) & 8; + s->rex_w =3D (vex3 >> 7) & 1; +#endif + switch (vex2 & 0x1f) { + case 0x01: /* Implied 0f leading opcode bytes. */ + b =3D x86_ldub_code(env, s) | 0x100; + break; + case 0x02: /* Implied 0f 38 leading opcode bytes. */ + b =3D 0x138; + break; + case 0x03: /* Implied 0f 3a leading opcode bytes. */ + b =3D 0x13a; + break; + default: /* Reserved for future use. */ + goto unknown_op; + } + } + s->vex_v =3D (~vex3 >> 3) & 0xf; + s->vex_l =3D (vex3 >> 2) & 1; + prefixes |=3D pp_prefix[vex3 & 3] | PREFIX_VEX; + } + break; + } + + /* Post-process prefixes. */ + if (CODE64(s)) { + /* In 64-bit mode, the default data size is 32-bit. Select 64-bit + data with rex_w, and 16-bit data with 0x66; rex_w takes precede= nce + over 0x66 if both are present. */ + dflag =3D (REX_W(s) ? MO_64 : prefixes & PREFIX_DATA ? MO_16 : MO_= 32); + /* In 64-bit mode, 0x67 selects 32-bit addressing. */ + aflag =3D (prefixes & PREFIX_ADR ? MO_32 : MO_64); + } else { + /* In 16/32-bit mode, 0x66 selects the opposite data size. */ + if (CODE32(s) ^ ((prefixes & PREFIX_DATA) !=3D 0)) { + dflag =3D MO_32; + } else { + dflag =3D MO_16; + } + /* In 16/32-bit mode, 0x67 selects the opposite addressing. 
*/ + if (CODE32(s) ^ ((prefixes & PREFIX_ADR) !=3D 0)) { + aflag =3D MO_32; + } else { + aflag =3D MO_16; + } + } + + s->prefix =3D prefixes; + s->aflag =3D aflag; + s->dflag =3D dflag; + + /* now check op code */ + reswitch: + switch(b) { + case 0x0f: + /**************************/ + /* extended op code */ + b =3D x86_ldub_code(env, s) | 0x100; + goto reswitch; + + /**************************/ + /* arith & logic */ + case 0x00 ... 0x05: + case 0x08 ... 0x0d: + case 0x10 ... 0x15: + case 0x18 ... 0x1d: + case 0x20 ... 0x25: + case 0x28 ... 0x2d: + case 0x30 ... 0x35: + case 0x38 ... 0x3d: + { + int op, f, val; + op =3D (b >> 3) & 7; + f =3D (b >> 1) & 3; + + ot =3D mo_b_d(b, dflag); + + switch(f) { + case 0: /* OP Ev, Gv */ + modrm =3D x86_ldub_code(env, s); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + mod =3D (modrm >> 6) & 3; + rm =3D (modrm & 7) | REX_B(s); + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + opreg =3D OR_TMP0; + } else if (op =3D=3D OP_XORL && rm =3D=3D reg) { + xor_zero: + /* xor reg, reg optimisation */ + set_cc_op(s, CC_OP_CLR); + tcg_gen_movi_tl(s->T0, 0); + gen_op_mov_ rm =3D (modrm & 7) | REX_B(s); + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + gen_op_ld_v(s, ot, s->T1, s->A0); + } else if (op =3D=3D OP_XORL && rm =3D=3D reg) { + goto xor_zero; + } else { + gen_op_mov_v_reg(s, ot, s->T1, rm); + } + gen_op(s, op, ot, reg); + break; + case 2: /* OP A, Iv */ + val =3D insn_get(env, s, ot); + tcg_gen_movi_tl(s->T1, val); + gen_op(s, op, ot, OR_EAX); + break; + } + } + break; + + case 0x82: + if (CODE64(s)) + goto illegal_op; + /* fall through */ + case 0x80: /* GRP1 */ + case 0x81: + case 0x83: + { + int val; + + ot =3D mo_b_d(b, dflag); + + modrm =3D x86_ldub_code(env, s); + mod =3D (modrm >> 6) & 3; + rm =3D (modrm & 7) | REX_B(s); + op =3D (modrm >> 3) & 7; + + if (mod !=3D 3) { + if (b =3D=3D 0x83) + s->rip_offset =3D 1; + else + s->rip_offset =3D insn_const_size(ot); + gen_lea_modrm(env, s, modrm); + opreg =3D OR_TMP0; + } else { + 
opreg =3D rm; + } + + switch(b) { + default: + case 0x80: + case 0x81: + case 0x82: + val =3D insn_get(env, s, ot); + break; + case 0x83: + val =3D (int8_t)insn_get(env, s, MO_8); + break; + } + tcg_gen_movi_tl(s->T1, val); + gen_op(s, op, ot, opreg); + } + break; + + /**************************/ + /* inc, dec, and other misc arith */ + case 0x40 ... 0x47: /* inc Gv */ + ot =3D dflag; + gen_inc(s, ot, OR_EAX + (b & 7), 1); + break; + case 0x48 ... 0x4f: /* dec Gv */ + ot =3D dflag; + gen_inc(s, ot, OR_EAX + (b & 7), -1); + break; + case 0xf6: /* GRP3 */ + case 0xf7: + ot =3D mo_b_d(b, dflag); + + modrm =3D x86_ldub_code(env, s); + mod =3D (modrm >> 6) & 3; + rm =3D (modrm & 7) | REX_B(s); + op =3D (modrm >> 3) & 7; + if (mod !=3D 3) { + if (op =3D=3D 0) { + s->rip_offset =3D insn_const_size(ot); + } + gen_lea_modrm(env, s, modrm); + /* For those below that handle locked memory, don't load here.= */ + if (!(s->prefix & PREFIX_LOCK) + || op !=3D 2) { + gen_op_ld_v(s, ot, s->T0, s->A0); + } + } else { + gen_op_mov_v_reg(s, ot, s->T0, rm); + } + + switch(op) { + case 0: /* test */ + val =3D insn_get(env, s, ot); + tcg_gen_movi_tl(s->T1, val); + gen_op_testl_T0_T1_cc(s); + set_cc_op(s, CC_OP_LOGICB + ot); + break; + case 2: /* not */ + if (s->prefix & PREFIX_LOCK) { + if (mod =3D=3D 3) { + goto illegal_op; + } + tcg_gen_movi_tl(s->T0, ~0); + tcg_gen_atomic_xor_fetch_tl(s->T0, s->A0, s->T0, + s->mem_index, ot | MO_LE); + } else { + tcg_gen_not_tl(s->T0, s->T0); + if (mod !=3D 3) { + gen_op_st_v(s, ot, s->T0, s->A0); + } else { + gen_op_mov_reg_v(s, ot, rm, s->T0); + } + } + break; + case 3: /* neg */ + if (s->prefix & PREFIX_LOCK) { + TCGLabel *label1; + TCGv a0, t0, t1, t2; + + if (mod =3D=3D 3) { + goto illegal_op; + } + a0 =3D tcg_temp_local_new(); + t0 =3D tcg_temp_local_new(); + label1 =3D gen_new_label(); + + tcg_gen_mov_tl(a0, s->A0); + tcg_gen_mov_tl(t0, s->T0); + + gen_set_label(label1); + t1 =3D tcg_temp_new(); + t2 =3D tcg_temp_new(); + tcg_gen_mov_tl(t2, t0); 
+ tcg_gen_neg_tl(t1, t0); + tcg_gen_atomic_cmpxchg_tl(t0, a0, t0, t1, + s->mem_index, ot | MO_LE); + tcg_temp_free(t1); + tcg_gen_brcond_tl(TCG_COND_NE, t0, t2, label1); + + tcg_temp_free(t2); + tcg_temp_free(a0); + tcg_gen_mov_tl(s->T0, t0); + tcg_temp_free(t0); + } else { + tcg_gen_neg_tl(s->T0, s->T0); + if (mod !=3D 3) { + gen_op_st_v(s, ot, s->T0, s->A0); + } else { + gen_op_mov_reg_v(s, ot, rm, s->T0); + } + } + gen_op_update_neg_cc(s); + set_cc_op(s, CC_OP_SUBB + ot); + break; + case 4: /* mul */ + switch(ot) { + case MO_8: + gen_op_mov_v_reg(s, MO_8, s->T1, R_EAX); + tcg_gen_ext8u_tl(s->T0, s->T0); + tcg_gen_ext8u_tl(s->T1, s->T1); + /* XXX: use 32 bit mul which could be faster */ + tcg_gen_mul_tl(s->T0, s->T0, s->T1); + gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0); + tcg_gen_mov_tl(cpu_cc_dst, s->T0); + tcg_gen_andi_tl(cpu_cc_src, s->T0, 0xff00); + set_cc_op(s, CC_OP_MULB); + break; + case MO_16: + gen_op_mov_v_reg(s, MO_16, s->T1, R_EAX); + tcg_gen_ext16u_tl(s->T0, s->T0); + tcg_gen_ext16u_tl(s->T1, s->T1); + /* XXX: use 32 bit mul which could be faster */ + tcg_gen_mul_tl(s->T0, s->T0, s->T1); + gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0); + tcg_gen_mov_tl(cpu_cc_dst, s->T0); + tcg_gen_shri_tl(s->T0, s->T0, 16); + gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0); + tcg_gen_mov_tl(cpu_cc_src, s->T0); + set_cc_op(s, CC_OP_MULW); + break; + default: + case MO_32: + tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); + tcg_gen_trunc_tl_i32(s->tmp3_i32, cpu_regs[R_EAX]); + tcg_gen_mulu2_i32(s->tmp2_i32, s->tmp3_i32, + s->tmp2_i32, s->tmp3_i32); + tcg_gen_extu_i32_tl(cpu_regs[R_EAX], s->tmp2_i32); + tcg_gen_extu_i32_tl(cpu_regs[R_EDX], s->tmp3_i32); + tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[R_EAX]); + tcg_gen_mov_tl(cpu_cc_src, cpu_regs[R_EDX]); + set_cc_op(s, CC_OP_MULL); + break; +#ifdef TARGET_X86_64 + case MO_64: + tcg_gen_mulu2_i64(cpu_regs[R_EAX], cpu_regs[R_EDX], + s->T0, cpu_regs[R_EAX]); + tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[R_EAX]); + tcg_gen_mov_tl(cpu_cc_src, cpu_regs[R_EDX]); 
+ set_cc_op(s, CC_OP_MULQ); + break; +#endif + } + break; + case 5: /* imul */ + switch(ot) { + case MO_8: + gen_op_mov_v_reg(s, MO_8, s->T1, R_EAX); + tcg_gen_ext8s_tl(s->T0, s->T0); + tcg_gen_ext8s_tl(s->T1, s->T1); + /* XXX: use 32 bit mul which could be faster */ + tcg_gen_mul_tl(s->T0, s->T0, s->T1); + gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0); + tcg_gen_mov_tl(cpu_cc_dst, s->T0); + tcg_gen_ext8s_tl(s->tmp0, s->T0); + tcg_gen_sub_tl(cpu_cc_src, s->T0, s->tmp0); + set_cc_op(s, CC_OP_MULB); + break; + case MO_16: + gen_op_mov_v_reg(s, MO_16, s->T1, R_EAX); + tcg_gen_ext16s_tl(s->T0, s->T0); + tcg_gen_ext16s_tl(s->T1, s->T1); + /* XXX: use 32 bit mul which could be faster */ + tcg_gen_mul_tl(s->T0, s->T0, s->T1); + gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0); + tcg_gen_mov_tl(cpu_cc_dst, s->T0); + tcg_gen_ext16s_tl(s->tmp0, s->T0); + tcg_gen_sub_tl(cpu_cc_src, s->T0, s->tmp0); + tcg_gen_shri_tl(s->T0, s->T0, 16); + gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0); + set_cc_op(s, CC_OP_MULW); + break; + default: + case MO_32: + tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); + tcg_gen_trunc_tl_i32(s->tmp3_i32, cpu_regs[R_EAX]); + tcg_gen_muls2_i32(s->tmp2_i32, s->tmp3_i32, + s->tmp2_i32, s->tmp3_i32); + tcg_gen_extu_i32_tl(cpu_regs[R_EAX], s->tmp2_i32); + tcg_gen_extu_i32_tl(cpu_regs[R_EDX], s->tmp3_i32); + tcg_gen_sari_i32(s->tmp2_i32, s->tmp2_i32, 31); + tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[R_EAX]); + tcg_gen_sub_i32(s->tmp2_i32, s->tmp2_i32, s->tmp3_i32); + tcg_gen_extu_i32_tl(cpu_cc_src, s->tmp2_i32); + set_cc_op(s, CC_OP_MULL); + break; +#ifdef TARGET_X86_64 + case MO_64: + tcg_gen_muls2_i64(cpu_regs[R_EAX], cpu_regs[R_EDX], + s->T0, cpu_regs[R_EAX]); + tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[R_EAX]); + tcg_gen_sari_tl(cpu_cc_src, cpu_regs[R_EAX], 63); + tcg_gen_sub_tl(cpu_cc_src, cpu_cc_src, cpu_regs[R_EDX]); + set_cc_op(s, CC_OP_MULQ); + break; +#endif + } + break; + case 6: /* div */ + switch(ot) { + case MO_8: + gen_helper_divb_AL(cpu_env, s->T0); + break; + case MO_16: + 
gen_helper_divw_AX(cpu_env, s->T0); + break; + default: + case MO_32: + gen_helper_divl_EAX(cpu_env, s->T0); + break; +#ifdef TARGET_X86_64 + case MO_64: + gen_helper_divq_EAX(cpu_env, s->T0); + break; +#endif + } + break; + case 7: /* idiv */ + switch(ot) { + case MO_8: + gen_helper_idivb_AL(cpu_env, s->T0); + break; + case MO_16: + gen_helper_idivw_AX(cpu_env, s->T0); + break; + default: + case MO_32: + gen_helper_idivl_EAX(cpu_env, s->T0); + break; +#ifdef TARGET_X86_64 + case MO_64: + gen_helper_idivq_EAX(cpu_env, s->T0); + break; +#endif + } + break; + default: + goto unknown_op; + } + break; + + case 0xfe: /* GRP4 */ + case 0xff: /* GRP5 */ + ot =3D mo_b_d(b, dflag); + + modrm =3D x86_ldub_code(env, s); + mod =3D (modrm >> 6) & 3; + rm =3D (modrm & 7) | REX_B(s); + op =3D (modrm >> 3) & 7; + if (op >=3D 2 && b =3D=3D 0xfe) { + goto unknown_op; + } + if (CODE64(s)) { + if (op =3D=3D 2 || op =3D=3D 4) { + /* operand size for jumps is 64 bit */ + ot =3D MO_64; + } else if (op =3D=3D 3 || op =3D=3D 5) { + ot =3D dflag !=3D MO_16 ? 
MO_32 + REX_W(s) : MO_16; + } else if (op =3D=3D 6) { + /* default push size is 64 bit */ + ot =3D mo_pushpop(s, dflag); + } + } + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + if (op >=3D 2 && op !=3D 3 && op !=3D 5) + gen_op_ld_v(s, ot, s->T0, s->A0); + } else { + gen_op_mov_v_reg(s, ot, s->T0, rm); + } + + switch(op) { + case 0: /* inc Ev */ + if (mod !=3D 3) + opreg =3D OR_TMP0; + else + opreg =3D rm; + gen_inc(s, ot, opreg, 1); + break; + case 1: /* dec Ev */ + if (mod !=3D 3) + opreg =3D OR_TMP0; + else + opreg =3D rm; + gen_inc(s, ot, opreg, -1); + break; + case 2: /* call Ev */ + /* XXX: optimize if memory (no 'and' is necessary) */ + if (dflag =3D=3D MO_16) { + tcg_gen_ext16u_tl(s->T0, s->T0); + } + next_eip =3D s->pc - s->cs_base; + tcg_gen_movi_tl(s->T1, next_eip); + gen_push_v(s, s->T1); + gen_op_jmp_v(s->T0); + gen_bnd_jmp(s); + gen_jr(s, s->T0); + break; + case 3: /* lcall Ev */ + if (mod =3D=3D 3) { + goto illegal_op; + } + gen_op_ld_v(s, ot, s->T1, s->A0); + gen_add_A0_im(s, 1 << ot); + gen_op_ld_v(s, MO_16, s->T0, s->A0); + do_lcall: + if (PE(s) && !VM86(s)) { + tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); + gen_helper_lcall_protected(cpu_env, s->tmp2_i32, s->T1, + tcg_const_i32(dflag - 1), + tcg_const_tl(s->pc - s->cs_base= )); + } else { + tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); + gen_helper_lcall_real(cpu_env, s->tmp2_i32, s->T1, + tcg_const_i32(dflag - 1), + tcg_const_i32(s->pc - s->cs_base)); + } + tcg_gen_ld_tl(s->tmp4, cpu_env, offsetof(CPUX86State, eip)); + gen_jr(s, s->tmp4); + break; + case 4: /* jmp Ev */ + if (dflag =3D=3D MO_16) { + tcg_gen_ext16u_tl(s->T0, s->T0); + } + gen_op_jmp_v(s->T0); + gen_bnd_jmp(s); + gen_jr(s, s->T0); + break; + case 5: /* ljmp Ev */ + if (mod =3D=3D 3) { + goto illegal_op; + } + gen_op_ld_v(s, ot, s->T1, s->A0); + gen_add_A0_im(s, 1 << ot); + gen_op_ld_v(s, MO_16, s->T0, s->A0); + do_ljmp: + if (PE(s) && !VM86(s)) { + tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); + gen_helper_ljmp_protected(cpu_env, 
+                                          s->tmp2_i32, s->T1,
+                                          tcg_const_tl(s->pc - s->cs_base));
+            } else {
+                gen_op_movl_seg_T0_vm(s, R_CS);
+                gen_op_jmp_v(s->T1);
+            }
+            tcg_gen_ld_tl(s->tmp4, cpu_env, offsetof(CPUX86State, eip));
+            gen_jr(s, s->tmp4);
+            break;
+        case 6: /* push Ev */
+            gen_push_v(s, s->T0);
+            break;
+        default:
+            goto unknown_op;
+        }
+        break;
+
+    case 0x84: /* test Ev, Gv */
+    case 0x85:
+        ot = mo_b_d(b, dflag);
+
+        modrm = x86_ldub_code(env, s);
+        reg = ((modrm >> 3) & 7) | REX_R(s);
+
+        gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
+        gen_op_mov_v_reg(s, ot, s->T1, reg);
+        gen_op_testl_T0_T1_cc(s);
+        set_cc_op(s, CC_OP_LOGICB + ot);
+        break;
+
+    case 0xa8: /* test eAX, Iv */
+    case 0xa9:
+        ot = mo_b_d(b, dflag);
+        val = insn_get(env, s, ot);
+
+        gen_op_mov_v_reg(s, ot, s->T0, OR_EAX);
+        tcg_gen_movi_tl(s->T1, val);
+        gen_op_testl_T0_T1_cc(s);
+        set_cc_op(s, CC_OP_LOGICB + ot);
+        break;
+
+    case 0x98: /* CWDE/CBW */
+        switch (dflag) {
+#ifdef TARGET_X86_64
+        case MO_64:
+            gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
+            tcg_gen_ext32s_tl(s->T0, s->T0);
+            gen_op_mov_reg_v(s, MO_64, R_EAX, s->T0);
+            break;
+#endif
+        case MO_32:
+            gen_op_mov_v_reg(s, MO_16, s->T0, R_EAX);
+            tcg_gen_ext16s_tl(s->T0, s->T0);
+            gen_op_mov_reg_v(s, MO_32, R_EAX, s->T0);
+            break;
+        case MO_16:
+            gen_op_mov_v_reg(s, MO_8, s->T0, R_EAX);
+            tcg_gen_ext8s_tl(s->T0, s->T0);
+            gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+            break;
+        default:
+            tcg_abort();
+        }
+        break;
+    case 0x99: /* CDQ/CWD */
+        switch (dflag) {
+#ifdef TARGET_X86_64
+        case MO_64:
+            gen_op_mov_v_reg(s, MO_64, s->T0, R_EAX);
+            tcg_gen_sari_tl(s->T0, s->T0, 63);
+            gen_op_mov_reg_v(s, MO_64, R_EDX, s->T0);
+            break;
+#endif
+        case MO_32:
+            gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
+            tcg_gen_ext32s_tl(s->T0, s->T0);
+            tcg_gen_sari_tl(s->T0, s->T0, 31);
+            gen_op_mov_reg_v(s, MO_32, R_EDX, s->T0);
+            break;
+        case MO_16:
+            gen_op_mov_v_reg(s, MO_16, s->T0, R_EAX);
+            tcg_gen_ext16s_tl(s->T0, s->T0);
+            tcg_gen_sari_tl(s->T0, s->T0, 15);
+            gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0);
+            break;
+        default:
+            tcg_abort();
+        }
+        break;
+    case 0x1af: /* imul Gv, Ev */
+    case 0x69: /* imul Gv, Ev, I */
+    case 0x6b:
+        ot = dflag;
+        modrm = x86_ldub_code(env, s);
+        reg = ((modrm >> 3) & 7) | REX_R(s);
+        if (b == 0x69)
+            s->rip_offset = insn_const_size(ot);
+        else if (b ==
0x6b) + s->rip_offset =3D 1; + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); + if (b =3D=3D 0x69) { + val =3D insn_get(env, s, ot); + tcg_gen_movi_tl(s->T1, val); + } else if (b =3D=3D 0x6b) { + val =3D (int8_t)insn_get(env, s, MO_8); + tcg_gen_movi_tl(s->T1, val); + } else { + gen_op_mov_v_reg(s, ot, s->T1, reg); + } + switch (ot) { +#ifdef TARGET_X86_64 + case MO_64: + tcg_gen_muls2_i64(cpu_regs[reg], s->T1, s->T0, s->T1); + tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[reg]); + tcg_gen_sari_tl(cpu_cc_src, cpu_cc_dst, 63); + tcg_gen_sub_tl(cpu_cc_src, cpu_cc_src, s->T1); + break; +#endif + case MO_32: + tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); + tcg_gen_trunc_tl_i32(s->tmp3_i32, s->T1); + tcg_gen_muls2_i32(s->tmp2_i32, s->tmp3_i32, + s->tmp2_i32, s->tmp3_i32); + tcg_gen_extu_i32_tl(cpu_regs[reg], s->tmp2_i32); + tcg_gen_sari_i32(s->tmp2_i32, s->tmp2_i32, 31); + tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[reg]); + tcg_gen_sub_i32(s->tmp2_i32, s->tmp2_i32, s->tmp3_i32); + tcg_gen_extu_i32_tl(cpu_cc_src, s->tmp2_i32); + break; + default: + tcg_gen_ext16s_tl(s->T0, s->T0); + tcg_gen_ext16s_tl(s->T1, s->T1); + /* XXX: use 32 bit mul which could be faster */ + tcg_gen_mul_tl(s->T0, s->T0, s->T1); + tcg_gen_mov_tl(cpu_cc_dst, s->T0); + tcg_gen_ext16s_tl(s->tmp0, s->T0); + tcg_gen_sub_tl(cpu_cc_src, s->T0, s->tmp0); + gen_op_mov_reg_v(s, ot, reg, s->T0); + break; + } + set_cc_op(s, CC_OP_MULB + ot); + break; + case 0x1c0: + case 0x1c1: /* xadd Ev, Gv */ + ot =3D mo_b_d(b, dflag); + modrm =3D x86_ldub_code(env, s); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + mod =3D (modrm >> 6) & 3; + gen_op_mov_v_reg(s, ot, s->T0, reg); + if (mod =3D=3D 3) { + rm =3D (modrm & 7) | REX_B(s); + gen_op_mov_v_reg(s, ot, s->T1, rm); + tcg_gen_add_tl(s->T0, s->T0, s->T1); + gen_op_mov_reg_v(s, ot, reg, s->T1); + gen_op_mov_reg_v(s, ot, rm, s->T0); + } else { + gen_lea_modrm(env, s, modrm); + if (s->prefix & PREFIX_LOCK) { + tcg_gen_atomic_fetch_add_tl(s->T1, s->A0, s->T0, + s->mem_index, ot | MO_LE); + 
tcg_gen_add_tl(s->T0, s->T0, s->T1); + } else { + gen_op_ld_v(s, ot, s->T1, s->A0); + tcg_gen_add_tl(s->T0, s->T0, s->T1); + gen_op_st_v(s, ot, s->T0, s->A0); + } + gen_op_mov_reg_v(s, ot, reg, s->T1); + } + gen_op_update2_cc(s); + set_cc_op(s, CC_OP_ADDB + ot); + break; + case 0x1b0: + case 0x1b1: /* cmpxchg Ev, Gv */ + { + TCGv oldv, newv, cmpv; + + ot =3D mo_b_d(b, dflag); + modrm =3D x86_ldub_code(env, s); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + mod =3D (modrm >> 6) & 3; + oldv =3D tcg_temp_new(); + newv =3D tcg_temp_new(); + cmpv =3D tcg_temp_new(); + gen_op_mov_v_reg(s, ot, newv, reg); + tcg_gen_mov_tl(cmpv, cpu_regs[R_EAX]); + + if (s->prefix & PREFIX_LOCK) { + if (mod =3D=3D 3) { + goto illegal_op; + } + gen_lea_modrm(env, s, modrm); + tcg_gen_atomic_cmpxchg_tl(oldv, s->A0, cmpv, newv, + s->mem_index, ot | MO_LE); + gen_op_mov_reg_v(s, ot, R_EAX, oldv); + } else { + if (mod =3D=3D 3) { + rm =3D (modrm & 7) | REX_B(s); + gen_op_mov_v_reg(s, ot, oldv, rm); + } else { + gen_lea_modrm(env, s, modrm); + gen_op_ld_v(s, ot, oldv, s->A0); + rm =3D 0; /* avoid warning */ + } + gen_extu(ot, oldv); + gen_extu(ot, cmpv); + /* store value =3D (old =3D=3D cmp ? 
new : old); */ + tcg_gen_movcond_tl(TCG_COND_EQ, newv, oldv, cmpv, newv, ol= dv); + if (mod =3D=3D 3) { + gen_op_mov_reg_v(s, ot, R_EAX, oldv); + gen_op_mov_reg_v(s, ot, rm, newv); + } else { + /* Perform an unconditional store cycle like physical = cpu; + must be before changing accumulator to ensure + idempotency if the store faults and the instruction + is restarted */ + gen_op_st_v(s, ot, newv, s->A0); + gen_op_mov_reg_v(s, ot, R_EAX, oldv); + } + } + tcg_gen_mov_tl(cpu_cc_src, oldv); + tcg_gen_mov_tl(s->cc_srcT, cmpv); + tcg_gen_sub_tl(cpu_cc_dst, cmpv, oldv); + set_cc_op(s, CC_OP_SUBB + ot); + tcg_temp_free(oldv); + tcg_temp_free(newv); + tcg_temp_free(cmpv); + } + break; + case 0x1c7: /* cmpxchg8b */ + modrm =3D x86_ldub_code(env, s); + mod =3D (modrm >> 6) & 3; + switch ((modrm >> 3) & 7) { + case 1: /* CMPXCHG8, CMPXCHG16 */ + if (mod =3D=3D 3) { + goto illegal_op; + } +#ifdef TARGET_X86_64 + if (dflag =3D=3D MO_64) { + if (!(s->cpuid_ext_features & CPUID_EXT_CX16)) { + goto illegal_op; + } + gen_lea_modrm(env, s, modrm); + if ((s->prefix & PREFIX_LOCK) && + (tb_cflags(s->base.tb) & CF_PARALLEL)) { + gen_helper_cmpxchg16b(cpu_env, s->A0); + } else { + gen_helper_cmpxchg16b_unlocked(cpu_env, s->A0); + } + set_cc_op(s, CC_OP_EFLAGS); + break; + } +#endif =20 + if (!(s->cpuid_features & CPUID_CX8)) { + goto illegal_op; + } + gen_lea_modrm(env, s, modrm); + if ((s->prefix & PREFIX_LOCK) && + (tb_cflags(s->base.tb) & CF_PARALLEL)) { + gen_helper_cmpxchg8b(cpu_env, s->A0); + } else { + gen_helper_cmpxchg8b_unlocked(cpu_env, s->A0); + } + set_cc_op(s, CC_OP_EFLAGS); + break; + + case 7: /* RDSEED */ + case 6: /* RDRAND */ + if (mod !=3D 3 || + (s->prefix & (PREFIX_LOCK | PREFIX_REPZ | PREFIX_REPNZ)) || + !(s->cpuid_ext_features & CPUID_EXT_RDRAND)) { + goto illegal_op; + } + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_io_start(); + } + gen_helper_rdrand(s->T0, cpu_env); + rm =3D (modrm & 7) | REX_B(s); + gen_op_mov_reg_v(s, dflag, rm, s->T0); + 
set_cc_op(s, CC_OP_EFLAGS); + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_jmp(s, s->pc - s->cs_base); + } + break; + + default: + goto illegal_op; + } + break; + + /**************************/ + /* push/pop */ + case 0x50 ... 0x57: /* push */ + gen_op_mov_v_reg(s, MO_32, s->T0, (b & 7) | REX_B(s)); + gen_push_v(s, s->T0); + break; + case 0x58 ... 0x5f: /* pop */ + ot =3D gen_pop_T0(s); + /* NOTE: order is important for pop %sp */ + gen_pop_update(s, ot); + gen_op_mov_reg_v(s, ot, (b & 7) | REX_B(s), s->T0); + break; + case 0x60: /* pusha */ + if (CODE64(s)) + goto illegal_op; + gen_pusha(s); + break; + case 0x61: /* popa */ + if (CODE64(s)) + goto illegal_op; + gen_popa(s); + break; + case 0x68: /* push Iv */ + case 0x6a: + ot =3D mo_pushpop(s, dflag); + if (b =3D=3D 0x68) + val =3D insn_get(env, s, ot); + else + val =3D (int8_t)insn_get(env, s, MO_8); + tcg_gen_movi_tl(s->T0, val); + gen_push_v(s, s->T0); + break; + case 0x8f: /* pop Ev */ + modrm =3D x86_ldub_code(env, s); + mod =3D (modrm >> 6) & 3; + ot =3D gen_pop_T0(s); + if (mod =3D=3D 3) { + /* NOTE: order is important for pop %sp */ + gen_pop_update(s, ot); + rm =3D (modrm & 7) | REX_B(s); + gen_op_mov_reg_v(s, ot, rm, s->T0); + } else { + /* NOTE: order is important too for MMU exceptions */ + s->popl_esp_hack =3D 1 << ot; + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1); + s->popl_esp_hack =3D 0; + gen_pop_update(s, ot); + } + break; + case 0xc8: /* enter */ + { + int level; + val =3D x86_lduw_code(env, s); + level =3D x86_ldub_code(env, s); + gen_enter(s, val, level); + } + break; + case 0xc9: /* leave */ + gen_leave(s); + break; + case 0x06: /* push es */ + case 0x0e: /* push cs */ + case 0x16: /* push ss */ + case 0x1e: /* push ds */ + if (CODE64(s)) + goto illegal_op; + gen_op_movl_T0_seg(s, b >> 3); + gen_push_v(s, s->T0); + break; + case 0x1a0: /* push fs */ + case 0x1a8: /* push gs */ + gen_op_movl_T0_seg(s, (b >> 3) & 7); + gen_push_v(s, s->T0); + break; + case 0x07: /* pop es */ + case 
0x17: /* pop ss */ + case 0x1f: /* pop ds */ + if (CODE64(s)) + goto illegal_op; + reg =3D b >> 3; + ot =3D gen_pop_T0(s); + gen_movl_seg_T0(s, reg); + gen_pop_update(s, ot); + /* Note that reg =3D=3D R_SS in gen_movl_seg_T0 always sets is_jmp= . */ + if (s->base.is_jmp) { + gen_jmp_im(s, s->pc - s->cs_base); + if (reg =3D=3D R_SS) { + s->flags &=3D ~HF_TF_MASK; + gen_eob_inhibit_irq(s, true); + } else { + gen_eob(s); + } + } + break; + case 0x1a1: /* pop fs */ + case 0x1a9: /* pop gs */ + ot =3D gen_pop_T0(s); + gen_movl_seg_T0(s, (b >> 3) & 7); + gen_pop_update(s, ot); + if (s->base.is_jmp) { + gen_jmp_im(s, s->pc - s->cs_base); + gen_eob(s); + } + break; + + /**************************/ + /* mov */ + case 0x88: + case 0x89: /* mov Gv, Ev */ + ot =3D mo_b_d(b, dflag); + modrm =3D x86_ldub_code(env, s); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + + /* generate a generic store */ + gen_ldst_modrm(env, s, modrm, ot, reg, 1); + break; + case 0xc6: + case 0xc7: /* mov Ev, Iv */ + ot =3D mo_b_d(b, dflag); + modrm =3D x86_ldub_code(env, s); + mod =3D (modrm >> 6) & 3; + if (mod !=3D 3) { + s->rip_offset =3D insn_const_size(ot); + gen_lea_modrm(env, s, modrm); + } + val =3D insn_get(env, s, ot); + tcg_gen_movi_tl(s->T0, val); + if (mod !=3D 3) { + gen_op_st_v(s, ot, s->T0, s->A0); + } else { + gen_op_mov_reg_v(s, ot, (modrm & 7) | REX_B(s), s->T0); + } + break; + case 0x8a: + case 0x8b: /* mov Ev, Gv */ + ot =3D mo_b_d(b, dflag); + modrm =3D x86_ldub_code(env, s); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); + gen_op_mov_reg_v(s, ot, reg, s->T0); + break; + case 0x8e: /* mov seg, Gv */ + modrm =3D x86_ldub_code(env, s); + reg =3D (modrm >> 3) & 7; + if (reg >=3D 6 || reg =3D=3D R_CS) + goto illegal_op; + gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0); + gen_movl_seg_T0(s, reg); + /* Note that reg =3D=3D R_SS in gen_movl_seg_T0 always sets is_jmp= . 
*/ + if (s->base.is_jmp) { + gen_jmp_im(s, s->pc - s->cs_base); + if (reg =3D=3D R_SS) { + s->flags &=3D ~HF_TF_MASK; + gen_eob_inhibit_irq(s, true); + } else { + gen_eob(s); + } + } + break; + case 0x8c: /* mov Gv, seg */ + modrm =3D x86_ldub_code(env, s); + reg =3D (modrm >> 3) & 7; + mod =3D (modrm >> 6) & 3; + if (reg >=3D 6) + goto illegal_op; + gen_op_movl_T0_seg(s, reg); + ot =3D mod =3D=3D 3 ? dflag : MO_16; + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1); + break; + + case 0x1b6: /* movzbS Gv, Eb */ + case 0x1b7: /* movzwS Gv, Eb */ + case 0x1be: /* movsbS Gv, Eb */ + case 0x1bf: /* movswS Gv, Eb */ + { + MemOp d_ot; + MemOp s_ot; + + /* d_ot is the size of destination */ + d_ot =3D dflag; + /* ot is the size of source */ + ot =3D (b & 1) + MO_8; + /* s_ot is the sign+size of source */ + s_ot =3D b & 8 ? MO_SIGN | ot : ot; + + modrm =3D x86_ldub_code(env, s); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + mod =3D (modrm >> 6) & 3; + rm =3D (modrm & 7) | REX_B(s); + + if (mod =3D=3D 3) { + if (s_ot =3D=3D MO_SB && byte_reg_is_xH(s, rm)) { + tcg_gen_sextract_tl(s->T0, cpu_regs[rm - 4], 8, 8); + } else { + gen_op_mov_v_reg(s, ot, s->T0, rm); + switch (s_ot) { + case MO_UB: + tcg_gen_ext8u_tl(s->T0, s->T0); + break; + case MO_SB: + tcg_gen_ext8s_tl(s->T0, s->T0); + break; + case MO_UW: + tcg_gen_ext16u_tl(s->T0, s->T0); + break; + default: + case MO_SW: + tcg_gen_ext16s_tl(s->T0, s->T0); + break; + } + } + gen_op_mov_reg_v(s, d_ot, reg, s->T0); + } else { + gen_lea_modrm(env, s, modrm); + gen_op_ld_v(s, s_ot, s->T0, s->A0); + gen_op_mov_reg_v(s, d_ot, reg, s->T0); + } + } + break; + + case 0xa0: /* mov EAX, Ov */ + case 0xa1: + case 0xa2: /* mov Ov, EAX */ + case 0xa3: + { + target_ulong offset_addr; + + ot =3D mo_b_d(b, dflag); + switch (s->aflag) { +#ifdef TARGET_X86_64 + case MO_64: + offset_addr =3D x86_ldq_code(env, s); + break; +#endif + default: + offset_addr =3D insn_get(env, s, s->aflag); + break; + } + tcg_gen_movi_tl(s->A0, offset_addr); + 
gen_add_A0_ds_seg(s); + if ((b & 2) =3D=3D 0) { + gen_op_ld_v(s, ot, s->T0, s->A0); + gen_op_mov_reg_v(s, ot, R_EAX, s->T0); + } else { + gen_op_mov_v_reg(s, ot, s->T0, R_EAX); + gen_op_st_v(s, ot, s->T0, s->A0); + } + } + break; + case 0xd7: /* xlat */ + tcg_gen_mov_tl(s->A0, cpu_regs[R_EBX]); + tcg_gen_ext8u_tl(s->T0, cpu_regs[R_EAX]); + tcg_gen_add_tl(s->A0, s->A0, s->T0); + gen_extu(s->aflag, s->A0); + gen_add_A0_ds_seg(s); + gen_op_ld_v(s, MO_8, s->T0, s->A0); + gen_op_mov_reg_v(s, MO_8, R_EAX, s->T0); + break; + case 0xb0 ... 0xb7: /* mov R, Ib */ + val =3D insn_get(env, s, MO_8); + tcg_gen_movi_tl(s->T0, val); + gen_op_mov_reg_v(s, MO_8, (b & 7) | REX_B(s), s->T0); + break; + case 0xb8 ... 0xbf: /* mov R, Iv */ +#ifdef TARGET_X86_64 + if (dflag =3D=3D MO_64) { + uint64_t tmp; + /* 64 bit case */ + tmp =3D x86_ldq_code(env, s); + reg =3D (b & 7) | REX_B(s); + tcg_gen_movi_tl(s->T0, tmp); + gen_op_mov_reg_v(s, MO_64, reg, s->T0); + } else +#endif + { + ot =3D dflag; + val =3D insn_get(env, s, ot); + reg =3D (b & 7) | REX_B(s); + tcg_gen_movi_tl(s->T0, val); + gen_op_mov_reg_v(s, ot, reg, s->T0); + } + break; + + case 0x91 ... 0x97: /* xchg R, EAX */ + do_xchg_reg_eax: + ot =3D dflag; + reg =3D (b & 7) | REX_B(s); + rm =3D R_EAX; + goto do_xchg_reg; + case 0x86: + case 0x87: /* xchg Ev, Gv */ + ot =3D mo_b_d(b, dflag); + modrm =3D x86_ldub_code(env, s); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + mod =3D (modrm >> 6) & 3; + if (mod =3D=3D 3) { + rm =3D (modrm & 7) | REX_B(s); + do_xchg_reg: + gen_op_mov_v_reg(s, ot, s->T0, reg); + gen_op_mov_v_reg(s, ot, s->T1, rm); + gen_op_mov_reg_v(s, ot, rm, s->T0); + gen_op_mov_reg_v(s, ot, reg, s->T1); + } else { + gen_lea_modrm(env, s, modrm); + gen_op_mov_v_reg(s, ot, s->T0, reg); + /* for xchg, lock is implicit */ + tcg_gen_atomic_xchg_tl(s->T1, s->A0, s->T0, + s->mem_index, ot | MO_LE); + gen_op_mov_reg_v(s, ot, reg, s->T1); + } + break; + case 0xc4: /* les Gv */ + /* In CODE64 this is VEX3; see above. 
*/ + op =3D R_ES; + goto do_lxx; + case 0xc5: /* lds Gv */ + /* In CODE64 this is VEX2; see above. */ + op =3D R_DS; + goto do_lxx; + case 0x1b2: /* lss Gv */ + op =3D R_SS; + goto do_lxx; + case 0x1b4: /* lfs Gv */ + op =3D R_FS; + goto do_lxx; + case 0x1b5: /* lgs Gv */ + op =3D R_GS; + do_lxx: + ot =3D dflag !=3D MO_16 ? MO_32 : MO_16; + modrm =3D x86_ldub_code(env, s); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + mod =3D (modrm >> 6) & 3; + if (mod =3D=3D 3) + goto illegal_op; + gen_lea_modrm(env, s, modrm); + gen_op_ld_v(s, ot, s->T1, s->A0); + gen_add_A0_im(s, 1 << ot); + /* load the segment first to handle exceptions properly */ + gen_op_ld_v(s, MO_16, s->T0, s->A0); + gen_movl_seg_T0(s, op); + /* then put the data */ + gen_op_mov_reg_v(s, ot, reg, s->T1); + if (s->base.is_jmp) { + gen_jmp_im(s, s->pc - s->cs_base); + gen_eob(s); + } + break; + + /************************/ + /* shifts */ + case 0xc0: + case 0xc1: + /* shift Ev,Ib */ + shift =3D 2; + grp2: + { + ot =3D mo_b_d(b, dflag); + modrm =3D x86_ldub_code(env, s); + mod =3D (modrm >> 6) & 3; + op =3D (modrm >> 3) & 7; + + if (mod !=3D 3) { + if (shift =3D=3D 2) { + s->rip_offset =3D 1; + } + gen_lea_modrm(env, s, modrm); + opreg =3D OR_TMP0; + } else { + opreg =3D (modrm & 7) | REX_B(s); + } + + /* simpler op */ + if (shift =3D=3D 0) { + gen_shift(s, op, ot, opreg, OR_ECX); + } else { + if (shift =3D=3D 2) { + shift =3D x86_ldub_code(env, s); + } + gen_shifti(s, op, ot, opreg, shift); + } + } + break; + case 0xd0: + case 0xd1: + /* shift Ev,1 */ + shift =3D 1; + goto grp2; + case 0xd2: + case 0xd3: + /* shift Ev,cl */ + shift =3D 0; + goto grp2; + + case 0x1a4: /* shld imm */ + op =3D 0; + shift =3D 1; + goto do_shiftd; + case 0x1a5: /* shld cl */ + op =3D 0; + shift =3D 0; + goto do_shiftd; + case 0x1ac: /* shrd imm */ + op =3D 1; + shift =3D 1; + goto do_shiftd; + case 0x1ad: /* shrd cl */ + op =3D 1; + shift =3D 0; + do_shiftd: + ot =3D dflag; + modrm =3D x86_ldub_code(env, s); + mod =3D (modrm >> 6) 
& 3; + rm =3D (modrm & 7) | REX_B(s); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + if (mod !=3D 3) { + gen_lea_modrm(env, s, modrm); + opreg =3D OR_TMP0; + } else { + opreg =3D rm; + } + gen_op_mov_v_reg(s, ot, s->T1, reg); + + if (shift) { + TCGv imm =3D tcg_const_tl(x86_ldub_code(env, s)); + gen_shiftd_rm_T1(s, ot, opreg, op, imm); + tcg_temp_free(imm); + } else { + gen_shiftd_rm_T1(s, ot, opreg, op, cpu_regs[R_ECX]); + } + break; + + /************************/ + /* floats */ + case 0xd8 ... 0xdf: + { + bool update_fip =3D true; + + if (s->flags & (HF_EM_MASK | HF_TS_MASK)) { + /* if CR0.EM or CR0.TS are set, generate an FPU exception = */ + /* XXX: what to do if illegal op ? */ + gen_exception(s, EXCP07_PREX, pc_start - s->cs_base); + break; + } + modrm =3D x86_ldub_code(env, s); + mod =3D (modrm >> 6) & 3; + rm =3D modrm & 7; + op =3D ((b & 7) << 3) | ((modrm >> 3) & 7); + if (mod !=3D 3) { + /* memory op */ + AddressParts a =3D gen_lea_modrm_0(env, s, modrm); + TCGv ea =3D gen_lea_modrm_1(s, a); + TCGv last_addr =3D tcg_temp_new(); + bool update_fdp =3D true; + + tcg_gen_mov_tl(last_addr, ea); + gen_lea_v_seg(s, s->aflag, ea, a.def_seg, s->override); + + switch (op) { + case 0x00 ... 0x07: /* fxxxs */ + case 0x10 ... 0x17: /* fixxxl */ + case 0x20 ... 0x27: /* fxxxl */ + case 0x30 ... 
0x37: /* fixxx */ + { + int op1; + op1 =3D op & 7; + + switch (op >> 4) { + case 0: + tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, + s->mem_index, MO_LEUL); + gen_helper_flds_FT0(cpu_env, s->tmp2_i32); + break; + case 1: + tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, + s->mem_index, MO_LEUL); + gen_helper_fildl_FT0(cpu_env, s->tmp2_i32); + break; + case 2: + tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0, + s->mem_index, MO_LEUQ); + gen_helper_fldl_FT0(cpu_env, s->tmp1_i64); + break; + case 3: + default: + tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, + s->mem_index, MO_LESW); + gen_helper_fildl_FT0(cpu_env, s->tmp2_i32); + break; + } + + gen_helper_fp_arith_ST0_FT0(op1); + if (op1 =3D=3D 3) { + /* fcomp needs pop */ + gen_helper_fpop(cpu_env); + } + } + break; + case 0x08: /* flds */ + case 0x0a: /* fsts */ + case 0x0b: /* fstps */ + case 0x18 ... 0x1b: /* fildl, fisttpl, fistl, fistpl */ + case 0x28 ... 0x2b: /* fldl, fisttpll, fstl, fstpl */ + case 0x38 ... 0x3b: /* filds, fisttps, fists, fistps */ + switch (op & 7) { + case 0: + switch (op >> 4) { + case 0: + tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, + s->mem_index, MO_LEUL); + gen_helper_flds_ST0(cpu_env, s->tmp2_i32); + break; + case 1: + tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, + s->mem_index, MO_LEUL); + gen_helper_fildl_ST0(cpu_env, s->tmp2_i32); + break; + case 2: + tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0, + s->mem_index, MO_LEUQ); + gen_helper_fldl_ST0(cpu_env, s->tmp1_i64); + break; + case 3: + default: + tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, + s->mem_index, MO_LESW); + gen_helper_fildl_ST0(cpu_env, s->tmp2_i32); + break; + } + break; + case 1: + /* XXX: the corresponding CPUID bit must be tested= ! 
*/ + switch (op >> 4) { + case 1: + gen_helper_fisttl_ST0(s->tmp2_i32, cpu_env); + tcg_gen_qemu_st_i32(s->tmp2_i32, s->A0, + s->mem_index, MO_LEUL); + break; + case 2: + gen_helper_fisttll_ST0(s->tmp1_i64, cpu_env); + tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0, + s->mem_index, MO_LEUQ); + break; + case 3: + default: + gen_helper_fistt_ST0(s->tmp2_i32, cpu_env); + tcg_gen_qemu_st_i32(s->tmp2_i32, s->A0, + s->mem_index, MO_LEUW); + break; + } + gen_helper_fpop(cpu_env); + break; + default: + switch (op >> 4) { + case 0: + gen_helper_fsts_ST0(s->tmp2_i32, cpu_env); + tcg_gen_qemu_st_i32(s->tmp2_i32, s->A0, + s->mem_index, MO_LEUL); + break; + case 1: + gen_helper_fistl_ST0(s->tmp2_i32, cpu_env); + tcg_gen_qemu_st_i32(s->tmp2_i32, s->A0, + s->mem_index, MO_LEUL); + break; + case 2: + gen_helper_fstl_ST0(s->tmp1_i64, cpu_env); + tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0, + s->mem_index, MO_LEUQ); + break; + case 3: + default: + gen_helper_fist_ST0(s->tmp2_i32, cpu_env); + tcg_gen_qemu_st_i32(s->tmp2_i32, s->A0, + s->mem_index, MO_LEUW); + break; + } + if ((op & 7) =3D=3D 3) { + gen_helper_fpop(cpu_env); + } + break; + } + break; + case 0x0c: /* fldenv mem */ + gen_helper_fldenv(cpu_env, s->A0, + tcg_const_i32(dflag - 1)); + update_fip =3D update_fdp =3D false; + break; + case 0x0d: /* fldcw mem */ + tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, + s->mem_index, MO_LEUW); + gen_helper_fldcw(cpu_env, s->tmp2_i32); + update_fip =3D update_fdp =3D false; + break; + case 0x0e: /* fnstenv mem */ + gen_helper_fstenv(cpu_env, s->A0, + tcg_const_i32(dflag - 1)); + update_fip =3D update_fdp =3D false; + break; + case 0x0f: /* fnstcw mem */ + gen_helper_fnstcw(s->tmp2_i32, cpu_env); + tcg_gen_qemu_st_i32(s->tmp2_i32, s->A0, + s->mem_index, MO_LEUW); + update_fip =3D update_fdp =3D false; + break; + case 0x1d: /* fldt mem */ + gen_helper_fldt_ST0(cpu_env, s->A0); + break; + case 0x1f: /* fstpt mem */ + gen_helper_fstt_ST0(cpu_env, s->A0); + gen_helper_fpop(cpu_env); + break; + case 0x2c: /* 
frstor mem */
+                gen_helper_frstor(cpu_env, s->A0,
+                                  tcg_const_i32(dflag - 1));
+                update_fip = update_fdp = false;
+                break;
+            case 0x2e: /* fnsave mem */
+                gen_helper_fsave(cpu_env, s->A0,
+                                 tcg_const_i32(dflag - 1));
+                update_fip = update_fdp = false;
+                break;
+            case 0x2f: /* fnstsw mem */
+                gen_helper_fnstsw(s->tmp2_i32, cpu_env);
+                tcg_gen_qemu_st_i32(s->tmp2_i32, s->A0,
+                                    s->mem_index, MO_LEUW);
+                update_fip = update_fdp = false;
+                break;
+            case 0x3c: /* fbld */
+                gen_helper_fbld_ST0(cpu_env, s->A0);
+                break;
+            case 0x3e: /* fbstp */
+                gen_helper_fbst_ST0(cpu_env, s->A0);
+                gen_helper_fpop(cpu_env);
+                break;
+            case 0x3d: /* fildll */
+                tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0,
+                                    s->mem_index, MO_LEUQ);
+                gen_helper_fildll_ST0(cpu_env, s->tmp1_i64);
+                break;
+            case 0x3f: /* fistpll */
+                gen_helper_fistll_ST0(s->tmp1_i64, cpu_env);
+                tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0,
+                                    s->mem_index, MO_LEUQ);
+                gen_helper_fpop(cpu_env);
+                break;
+            default:
+                goto unknown_op;
+            }
+
+            if (update_fdp) {
+                int last_seg = s->override >= 0 ? s->override : a.def_seg;
+
+                tcg_gen_ld_i32(s->tmp2_i32, cpu_env,
+                               offsetof(CPUX86State,
+                                        segs[last_seg].selector));
+                tcg_gen_st16_i32(s->tmp2_i32, cpu_env,
+                                 offsetof(CPUX86State, fpds));
+                tcg_gen_st_tl(last_addr, cpu_env,
+                              offsetof(CPUX86State, fpdp));
+            }
+            tcg_temp_free(last_addr);
+        } else {
+            /* register float ops */
+            opreg = rm;
+
+            switch (op) {
+            case 0x08: /* fld sti */
+                gen_helper_fpush(cpu_env);
+                gen_helper_fmov_ST0_STN(cpu_env,
+                                        tcg_const_i32((opreg + 1) & 7));
+                break;
+            case 0x09: /* fxchg sti */
+            case 0x29: /* fxchg4 sti, undocumented op */
+            case 0x39: /* fxchg7 sti, undocumented op */
+                gen_helper_fxchg_ST0_STN(cpu_env, tcg_const_i32(opreg));
+                break;
+            case 0x0a: /* grp d9/2 */
+                switch (rm) {
+                case 0: /* fnop */
+                    /* check exceptions (FreeBSD FPU probe) */
+                    gen_helper_fwait(cpu_env);
+                    update_fip = false;
+                    break;
+                default:
+                    goto unknown_op;
+                }
+                break;
+            case 0x0c: /* grp d9/4 */
+                switch (rm) {
+                case 0: /* fchs */
+                    gen_helper_fchs_ST0(cpu_env);
+                    break;
+                case 1: /* fabs */
+                    gen_helper_fabs_ST0(cpu_env);
+                    break;
+                case 4: /* ftst */
+                    gen_helper_fldz_FT0(cpu_env);
+                    gen_helper_fcom_ST0_FT0(cpu_env);
+                    break;
+                case 5: /*
fxam */ + gen_helper_fxam_ST0(cpu_env); + break; + default: + goto unknown_op; + } + break; + case 0x0d: /* grp d9/5 */ + { + switch (rm) { + case 0: + gen_helper_fpush(cpu_env); + gen_helper_fld1_ST0(cpu_env); + break; + case 1: + gen_helper_fpush(cpu_env); + gen_helper_fldl2t_ST0(cpu_env); + break; + case 2: + gen_helper_fpush(cpu_env); + gen_helper_fldl2e_ST0(cpu_env); + break; + case 3: + gen_helper_fpush(cpu_env); + gen_helper_fldpi_ST0(cpu_env); + break; + case 4: + gen_helper_fpush(cpu_env); + gen_helper_fldlg2_ST0(cpu_env); + break; + case 5: + gen_helper_fpush(cpu_env); + gen_helper_fldln2_ST0(cpu_env); + break; + case 6: + gen_helper_fpush(cpu_env); + gen_helper_fldz_ST0(cpu_env); + break; + default: + goto unknown_op; + } + } + break; + case 0x0e: /* grp d9/6 */ + switch (rm) { + case 0: /* f2xm1 */ + gen_helper_f2xm1(cpu_env); + break; + case 1: /* fyl2x */ + gen_helper_fyl2x(cpu_env); + break; + case 2: /* fptan */ + gen_helper_fptan(cpu_env); + break; + case 3: /* fpatan */ + gen_helper_fpatan(cpu_env); + break; + case 4: /* fxtract */ + gen_helper_fxtract(cpu_env); + break; + case 5: /* fprem1 */ + gen_helper_fprem1(cpu_env); + break; + case 6: /* fdecstp */ + gen_helper_fdecstp(cpu_env); + break; + default: + case 7: /* fincstp */ + gen_helper_fincstp(cpu_env); + break; + } + break; + case 0x0f: /* grp d9/7 */ + switch (rm) { + case 0: /* fprem */ + gen_helper_fprem(cpu_env); + break; + case 1: /* fyl2xp1 */ + gen_helper_fyl2xp1(cpu_env); + break; + case 2: /* fsqrt */ + gen_helper_fsqrt(cpu_env); + break; + case 3: /* fsincos */ + gen_helper_fsincos(cpu_env); + break; + case 5: /* fscale */ + gen_helper_fscale(cpu_env); + break; + case 4: /* frndint */ + gen_helper_frndint(cpu_env); + break; + case 6: /* fsin */ + gen_helper_fsin(cpu_env); + break; + default: + case 7: /* fcos */ + gen_helper_fcos(cpu_env); + break; + } + break; + case 0x00: case 0x01: case 0x04 ... 0x07: /* fxxx st, sti = */ + case 0x20: case 0x21: case 0x24 ... 
0x27: /* fxxx sti, st = */ + case 0x30: case 0x31: case 0x34 ... 0x37: /* fxxxp sti, st= */ + { + int op1; + + op1 =3D op & 7; + if (op >=3D 0x20) { + gen_helper_fp_arith_STN_ST0(op1, opreg); + if (op >=3D 0x30) { + gen_helper_fpop(cpu_env); + } + } else { + gen_helper_fmov_FT0_STN(cpu_env, + tcg_const_i32(opreg)); + gen_helper_fp_arith_ST0_FT0(op1); + } + } + break; + case 0x02: /* fcom */ + case 0x22: /* fcom2, undocumented op */ + gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(opreg)); + gen_helper_fcom_ST0_FT0(cpu_env); + break; + case 0x03: /* fcomp */ + case 0x23: /* fcomp3, undocumented op */ + case 0x32: /* fcomp5, undocumented op */ + gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(opreg)); + gen_helper_fcom_ST0_FT0(cpu_env); + gen_helper_fpop(cpu_env); + break; + case 0x15: /* da/5 */ + switch (rm) { + case 1: /* fucompp */ + gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(1)); + gen_helper_fucom_ST0_FT0(cpu_env); + gen_helper_fpop(cpu_env); + gen_helper_fpop(cpu_env); + break; + default: + goto unknown_op; + } + break; + case 0x1c: + switch (rm) { + case 0: /* feni (287 only, just do nop here) */ + break; + case 1: /* fdisi (287 only, just do nop here) */ + break; + case 2: /* fclex */ + gen_helper_fclex(cpu_env); + update_fip =3D false; + break; + case 3: /* fninit */ + gen_helper_fninit(cpu_env); + update_fip =3D false; + break; + case 4: /* fsetpm (287 only, just do nop here) */ + break; + default: + goto unknown_op; + } + break; + case 0x1d: /* fucomi */ + if (!(s->cpuid_features & CPUID_CMOV)) { + goto illegal_op; + } + gen_update_cc_op(s); + gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(opreg)); + gen_helper_fucomi_ST0_FT0(cpu_env); + set_cc_op(s, CC_OP_EFLAGS); + break; + case 0x1e: /* fcomi */ + if (!(s->cpuid_features & CPUID_CMOV)) { + goto illegal_op; + } + gen_update_cc_op(s); + gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(opreg)); + gen_helper_fcomi_ST0_FT0(cpu_env); + set_cc_op(s, CC_OP_EFLAGS); + break; + case 0x28: /* ffree sti */ + 
gen_helper_ffree_STN(cpu_env, tcg_const_i32(opreg)); + break; + case 0x2a: /* fst sti */ + gen_helper_fmov_STN_ST0(cpu_env, tcg_const_i32(opreg)); + break; + case 0x2b: /* fstp sti */ + case 0x0b: /* fstp1 sti, undocumented op */ + case 0x3a: /* fstp8 sti, undocumented op */ + case 0x3b: /* fstp9 sti, undocumented op */ + gen_helper_fmov_STN_ST0(cpu_env, tcg_const_i32(opreg)); + gen_helper_fpop(cpu_env); + break; + case 0x2c: /* fucom st(i) */ + gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(opreg)); + gen_helper_fucom_ST0_FT0(cpu_env); + break; + case 0x2d: /* fucomp st(i) */ + gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(opreg)); + gen_helper_fucom_ST0_FT0(cpu_env); + gen_helper_fpop(cpu_env); + break; + case 0x33: /* de/3 */ + switch (rm) { + case 1: /* fcompp */ + gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(1)); + gen_helper_fcom_ST0_FT0(cpu_env); + gen_helper_fpop(cpu_env); + gen_helper_fpop(cpu_env); + break; + default: + goto unknown_op; + } + break; + case 0x38: /* ffreep sti, undocumented op */ + gen_helper_ffree_STN(cpu_env, tcg_const_i32(opreg)); + gen_helper_fpop(cpu_env); + break; + case 0x3c: /* df/4 */ + switch (rm) { + case 0: + gen_helper_fnstsw(s->tmp2_i32, cpu_env); + tcg_gen_extu_i32_tl(s->T0, s->tmp2_i32); + gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0); + break; + default: + goto unknown_op; + } + break; + case 0x3d: /* fucomip */ + if (!(s->cpuid_features & CPUID_CMOV)) { + goto illegal_op; + } + gen_update_cc_op(s); + gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(opreg)); + gen_helper_fucomi_ST0_FT0(cpu_env); + gen_helper_fpop(cpu_env); + set_cc_op(s, CC_OP_EFLAGS); + break; + case 0x3e: /* fcomip */ + if (!(s->cpuid_features & CPUID_CMOV)) { + goto illegal_op; + } + gen_update_cc_op(s); + gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(opreg)); + gen_helper_fcomi_ST0_FT0(cpu_env); + gen_helper_fpop(cpu_env); + set_cc_op(s, CC_OP_EFLAGS); + break; + case 0x10 ... 0x13: /* fcmovxx */ + case 0x18 ... 
0x1b:
+            {
+                int op1;
+                TCGLabel *l1;
+                static const uint8_t fcmov_cc[8] = {
+                    (JCC_B << 1),
+                    (JCC_Z << 1),
+                    (JCC_BE << 1),
+                    (JCC_P << 1),
+                };
+
+                if (!(s->cpuid_features & CPUID_CMOV)) {
+                    goto illegal_op;
+                }
+                op1 = fcmov_cc[op & 3] | (((op >> 3) & 1) ^ 1);
+                l1 = gen_new_label();
+                gen_jcc1_noeob(s, op1, l1);
+                gen_helper_fmov_ST0_STN(cpu_env, tcg_const_i32(opreg));
+                gen_set_label(l1);
+            }
+            break;
+            default:
+                goto unknown_op;
+            }
+        }
+
+        if (update_fip) {
+            tcg_gen_ld_i32(s->tmp2_i32, cpu_env,
+                           offsetof(CPUX86State, segs[R_CS].selector));
+            tcg_gen_st16_i32(s->tmp2_i32, cpu_env,
+                             offsetof(CPUX86State, fpcs));
+            tcg_gen_st_tl(tcg_constant_tl(pc_start - s->cs_base),
+                          cpu_env, offsetof(CPUX86State, fpip));
+        }
+    }
+    break;
+        /************************/
+        /* string ops */
+
+    case 0xa4: /* movsS */
+    case 0xa5:
+        ot = mo_b_d(b, dflag);
+        if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) {
+            gen_repz_movs(s, ot, pc_start - s->cs_base, s->pc - s->cs_base);
+        } else {
+            gen_movs(s, ot);
+        }
+        break;
+
+    case 0xaa: /* stosS */
+    case 0xab:
+        ot = mo_b_d(b, dflag);
+        if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) {
+            gen_repz_stos(s, ot, pc_start - s->cs_base, s->pc - s->cs_base);
+        } else {
+            gen_stos(s, ot);
+        }
+        break;
+    case 0xac: /* lodsS */
+    case 0xad:
+        ot = mo_b_d(b, dflag);
+        if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) {
+            gen_repz_lods(s, ot, pc_start - s->cs_base, s->pc - s->cs_base);
+        } else {
+            gen_lods(s, ot);
+        }
+        break;
+    case 0xae: /* scasS */
+    case 0xaf:
+        ot = mo_b_d(b, dflag);
+        if (prefixes & PREFIX_REPNZ) {
+            gen_repz_scas(s, ot, pc_start - s->cs_base, s->pc - s->cs_base, 1);
+        } else if (prefixes & PREFIX_REPZ) {
+            gen_repz_scas(s, ot, pc_start - s->cs_base, s->pc - s->cs_base, 0);
+        } else {
+            gen_scas(s, ot);
+        }
+        break;
+
+    case 0xa6: /* cmpsS */
+    case 0xa7:
+        ot = mo_b_d(b, dflag);
+        if (prefixes & PREFIX_REPNZ) {
+            gen_repz_cmps(s, ot, pc_start - s->cs_base, s->pc - s->cs_base, 1);
+        } else if (prefixes & PREFIX_REPZ) {
+            gen_repz_cmps(s, ot, pc_start - s->cs_base, s->pc - s->cs_base, 0);
+        } else {
+            gen_cmps(s, ot);
+        }
+        break;
+    case 0x6c: /* insS */
+    case 0x6d:
+        ot = mo_b_d32(b, dflag);
+        tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[R_EDX]);
+        tcg_gen_ext16u_i32(s->tmp2_i32, s->tmp2_i32);
+        if 
(!gen_check_io(s, ot, s->tmp2_i32, + SVM_IOIO_TYPE_MASK | SVM_IOIO_STR_MASK)) { + break; + } + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_io_start(); + } + if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) { + gen_repz_ins(s, ot, pc_start - s->cs_base, s->pc - s->cs_base); + /* jump generated by gen_repz_ins */ + } else { + gen_ins(s, ot); + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_jmp(s, s->pc - s->cs_base); + } + } + break; + case 0x6e: /* outsS */ + case 0x6f: + ot =3D mo_b_d32(b, dflag); + tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[R_EDX]); + tcg_gen_ext16u_i32(s->tmp2_i32, s->tmp2_i32); + if (!gen_check_io(s, ot, s->tmp2_i32, SVM_IOIO_STR_MASK)) { + break; + } + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_io_start(); + } + if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) { + gen_repz_outs(s, ot, pc_start - s->cs_base, s->pc - s->cs_base= ); + /* jump generated by gen_repz_outs */ + } else { + gen_outs(s, ot); + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_jmp(s, s->pc - s->cs_base); + } + } + break; + + /************************/ + /* port I/O */ + + case 0xe4: + case 0xe5: + ot =3D mo_b_d32(b, dflag); + val =3D x86_ldub_code(env, s); + tcg_gen_movi_i32(s->tmp2_i32, val); + if (!gen_check_io(s, ot, s->tmp2_i32, SVM_IOIO_TYPE_MASK)) { + break; + } + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_io_start(); + } + gen_helper_in_func(ot, s->T1, s->tmp2_i32); + gen_op_mov_reg_v(s, ot, R_EAX, s->T1); + gen_bpt_io(s, s->tmp2_i32, ot); + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_jmp(s, s->pc - s->cs_base); + } + break; + case 0xe6: + case 0xe7: + ot =3D mo_b_d32(b, dflag); + val =3D x86_ldub_code(env, s); + tcg_gen_movi_i32(s->tmp2_i32, val); + if (!gen_check_io(s, ot, s->tmp2_i32, 0)) { + break; + } + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_io_start(); + } + gen_op_mov_v_reg(s, ot, s->T1, R_EAX); + tcg_gen_trunc_tl_i32(s->tmp3_i32, s->T1); + gen_helper_out_func(ot, s->tmp2_i32, s->tmp3_i32); + gen_bpt_io(s, s->tmp2_i32, 
ot); + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_jmp(s, s->pc - s->cs_base); + } + break; + case 0xec: + case 0xed: + ot =3D mo_b_d32(b, dflag); + tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[R_EDX]); + tcg_gen_ext16u_i32(s->tmp2_i32, s->tmp2_i32); + if (!gen_check_io(s, ot, s->tmp2_i32, SVM_IOIO_TYPE_MASK)) { + break; + } + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_io_start(); + } + gen_helper_in_func(ot, s->T1, s->tmp2_i32); + gen_op_mov_reg_v(s, ot, R_EAX, s->T1); + gen_bpt_io(s, s->tmp2_i32, ot); + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_jmp(s, s->pc - s->cs_base); + } + break; + case 0xee: + case 0xef: + ot =3D mo_b_d32(b, dflag); + tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[R_EDX]); + tcg_gen_ext16u_i32(s->tmp2_i32, s->tmp2_i32); + if (!gen_check_io(s, ot, s->tmp2_i32, 0)) { + break; + } + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_io_start(); + } + gen_op_mov_v_reg(s, ot, s->T1, R_EAX); + tcg_gen_trunc_tl_i32(s->tmp3_i32, s->T1); + gen_helper_out_func(ot, s->tmp2_i32, s->tmp3_i32); + gen_bpt_io(s, s->tmp2_i32, ot); + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_jmp(s, s->pc - s->cs_base); + } + break; + + /************************/ + /* control */ + case 0xc2: /* ret im */ + val =3D x86_ldsw_code(env, s); + ot =3D gen_pop_T0(s); + gen_stack_update(s, val + (1 << ot)); + /* Note that gen_pop_T0 uses a zero-extending load. */ + gen_op_jmp_v(s->T0); + gen_bnd_jmp(s); + gen_jr(s, s->T0); + break; + case 0xc3: /* ret */ + ot =3D gen_pop_T0(s); + gen_pop_update(s, ot); + /* Note that gen_pop_T0 uses a zero-extending load. 
*/ + gen_op_jmp_v(s->T0); + gen_bnd_jmp(s); + gen_jr(s, s->T0); + break; + case 0xca: /* lret im */ + val =3D x86_ldsw_code(env, s); + do_lret: + if (PE(s) && !VM86(s)) { + gen_update_cc_op(s); + gen_jmp_im(s, pc_start - s->cs_base); + gen_helper_lret_protected(cpu_env, tcg_const_i32(dflag - 1), + tcg_const_i32(val)); + } else { + gen_stack_A0(s); + /* pop offset */ + gen_op_ld_v(s, dflag, s->T0, s->A0); + /* NOTE: keeping EIP updated is not a problem in case of + exception */ + gen_op_jmp_v(s->T0); + /* pop selector */ + gen_add_A0_im(s, 1 << dflag); + gen_op_ld_v(s, dflag, s->T0, s->A0); + gen_op_movl_seg_T0_vm(s, R_CS); + /* add stack offset */ + gen_stack_update(s, val + (2 << dflag)); + } + gen_eob(s); + break; + case 0xcb: /* lret */ + val =3D 0; + goto do_lret; + case 0xcf: /* iret */ + gen_svm_check_intercept(s, SVM_EXIT_IRET); + if (!PE(s) || VM86(s)) { + /* real mode or vm86 mode */ + if (!check_vm86_iopl(s)) { + break; + } + gen_helper_iret_real(cpu_env, tcg_const_i32(dflag - 1)); + } else { + gen_helper_iret_protected(cpu_env, tcg_const_i32(dflag - 1), + tcg_const_i32(s->pc - s->cs_base)); + } + set_cc_op(s, CC_OP_EFLAGS); + gen_eob(s); + break; + case 0xe8: /* call im */ + { + if (dflag !=3D MO_16) { + tval =3D (int32_t)insn_get(env, s, MO_32); + } else { + tval =3D (int16_t)insn_get(env, s, MO_16); + } + next_eip =3D s->pc - s->cs_base; + tval +=3D next_eip; + if (dflag =3D=3D MO_16) { + tval &=3D 0xffff; + } else if (!CODE64(s)) { + tval &=3D 0xffffffff; + } + tcg_gen_movi_tl(s->T0, next_eip); + gen_push_v(s, s->T0); + gen_bnd_jmp(s); + gen_jmp(s, tval); + } + break; + case 0x9a: /* lcall im */ + { + unsigned int selector, offset; + + if (CODE64(s)) + goto illegal_op; + ot =3D dflag; + offset =3D insn_get(env, s, ot); + selector =3D insn_get(env, s, MO_16); + + tcg_gen_movi_tl(s->T0, selector); + tcg_gen_movi_tl(s->T1, offset); + } + goto do_lcall; + case 0xe9: /* jmp im */ + if (dflag !=3D MO_16) { + tval =3D (int32_t)insn_get(env, s, MO_32); + } 
else { + tval =3D (int16_t)insn_get(env, s, MO_16); + } + tval +=3D s->pc - s->cs_base; + if (dflag =3D=3D MO_16) { + tval &=3D 0xffff; + } else if (!CODE64(s)) { + tval &=3D 0xffffffff; + } + gen_bnd_jmp(s); + gen_jmp(s, tval); + break; + case 0xea: /* ljmp im */ + { + unsigned int selector, offset; + + if (CODE64(s)) + goto illegal_op; + ot =3D dflag; + offset =3D insn_get(env, s, ot); + selector =3D insn_get(env, s, MO_16); + + tcg_gen_movi_tl(s->T0, selector); + tcg_gen_movi_tl(s->T1, offset); + } + goto do_ljmp; + case 0xeb: /* jmp Jb */ + tval =3D (int8_t)insn_get(env, s, MO_8); + tval +=3D s->pc - s->cs_base; + if (dflag =3D=3D MO_16) { + tval &=3D 0xffff; + } + gen_jmp(s, tval); + break; + case 0x70 ... 0x7f: /* jcc Jb */ + tval =3D (int8_t)insn_get(env, s, MO_8); + goto do_jcc; + case 0x180 ... 0x18f: /* jcc Jv */ + if (dflag !=3D MO_16) { + tval =3D (int32_t)insn_get(env, s, MO_32); + } else { + tval =3D (int16_t)insn_get(env, s, MO_16); + } + do_jcc: + next_eip =3D s->pc - s->cs_base; + tval +=3D next_eip; + if (dflag =3D=3D MO_16) { + tval &=3D 0xffff; + } + gen_bnd_jmp(s); + gen_jcc(s, b, tval, next_eip); + break; + + case 0x190 ... 0x19f: /* setcc Gv */ + modrm =3D x86_ldub_code(env, s); + gen_setcc1(s, b, s->T0); + gen_ldst_modrm(env, s, modrm, MO_8, OR_TMP0, 1); + break; + case 0x140 ... 
0x14f: /* cmov Gv, Ev */ + if (!(s->cpuid_features & CPUID_CMOV)) { + goto illegal_op; + } + ot =3D dflag; + modrm =3D x86_ldub_code(env, s); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + gen_cmovcc1(env, s, ot, b, modrm, reg); + break; + + /************************/ + /* flags */ + case 0x9c: /* pushf */ + gen_svm_check_intercept(s, SVM_EXIT_PUSHF); + if (check_vm86_iopl(s)) { + gen_update_cc_op(s); + gen_helper_read_eflags(s->T0, cpu_env); + gen_push_v(s, s->T0); + } + break; + case 0x9d: /* popf */ + gen_svm_check_intercept(s, SVM_EXIT_POPF); + if (check_vm86_iopl(s)) { + ot =3D gen_pop_T0(s); + if (CPL(s) =3D=3D 0) { + if (dflag !=3D MO_16) { + gen_helper_write_eflags(cpu_env, s->T0, + tcg_const_i32((TF_MASK | AC_MA= SK | + ID_MASK | NT_MA= SK | + IF_MASK | + IOPL_MASK))); + } else { + gen_helper_write_eflags(cpu_env, s->T0, + tcg_const_i32((TF_MASK | AC_MA= SK | + ID_MASK | NT_MA= SK | + IF_MASK | IOPL_= MASK) + & 0xffff)); + } + } else { + if (CPL(s) <=3D IOPL(s)) { + if (dflag !=3D MO_16) { + gen_helper_write_eflags(cpu_env, s->T0, + tcg_const_i32((TF_MASK | + AC_MASK | + ID_MASK | + NT_MASK | + IF_MASK))); + } else { + gen_helper_write_eflags(cpu_env, s->T0, + tcg_const_i32((TF_MASK | + AC_MASK | + ID_MASK | + NT_MASK | + IF_MASK) + & 0xffff)); + } + } else { + if (dflag !=3D MO_16) { + gen_helper_write_eflags(cpu_env, s->T0, + tcg_const_i32((TF_MASK | AC_MAS= K | + ID_MASK | NT_MAS= K))); + } else { + gen_helper_write_eflags(cpu_env, s->T0, + tcg_const_i32((TF_MASK | AC_MAS= K | + ID_MASK | NT_MAS= K) + & 0xffff)); + } + } + } + gen_pop_update(s, ot); + set_cc_op(s, CC_OP_EFLAGS); + /* abort translation because TF/AC flag may change */ + gen_jmp_im(s, s->pc - s->cs_base); + gen_eob(s); + } + break; + case 0x9e: /* sahf */ + if (CODE64(s) && !(s->cpuid_ext3_features & CPUID_EXT3_LAHF_LM)) + goto illegal_op; + gen_op_mov_v_reg(s, MO_8, s->T0, R_AH); + gen_compute_eflags(s); + tcg_gen_andi_tl(cpu_cc_src, cpu_cc_src, CC_O); + tcg_gen_andi_tl(s->T0, s->T0, CC_S | 
CC_Z | CC_A | CC_P | CC_C);
+        tcg_gen_or_tl(cpu_cc_src, cpu_cc_src, s->T0);
+        break;
+    case 0x9f: /* lahf */
+        if (CODE64(s) && !(s->cpuid_ext3_features & CPUID_EXT3_LAHF_LM))
+            goto illegal_op;
+        gen_compute_eflags(s);
+        /* Note: gen_compute_eflags() only gives the condition codes */
+        tcg_gen_ori_tl(s->T0, cpu_cc_src, 0x02);
+        gen_op_mov_reg_v(s, MO_8, R_AH, s->T0);
+        break;
+    case 0xf5: /* cmc */
+        gen_compute_eflags(s);
+        tcg_gen_xori_tl(cpu_cc_src, cpu_cc_src, CC_C);
+        break;
+    case 0xf8: /* clc */
+        gen_compute_eflags(s);
+        tcg_gen_andi_tl(cpu_cc_src, cpu_cc_src, ~CC_C);
+        break;
+    case 0xf9: /* stc */
+        gen_compute_eflags(s);
+        tcg_gen_ori_tl(cpu_cc_src, cpu_cc_src, CC_C);
+        break;
+    case 0xfc: /* cld */
+        tcg_gen_movi_i32(s->tmp2_i32, 1);
+        tcg_gen_st_i32(s->tmp2_i32, cpu_env, offsetof(CPUX86State, df));
+        break;
+    case 0xfd: /* std */
+        tcg_gen_movi_i32(s->tmp2_i32, -1);
+        tcg_gen_st_i32(s->tmp2_i32, cpu_env, offsetof(CPUX86State, df));
+        break;
+
+        /************************/
+        /* bit operations */
+    case 0x1ba: /* bt/bts/btr/btc Gv, im */
+        ot = dflag;
+        modrm = x86_ldub_code(env, s);
+        op = (modrm >> 3) & 7;
+        mod = (modrm >> 6) & 3;
+        rm = (modrm & 7) | REX_B(s);
+        if (mod != 3) {
+            s->rip_offset = 1;
+            gen_lea_modrm(env, s, modrm);
+            if (!(s->prefix & PREFIX_LOCK)) {
+                gen_op_ld_v(s, ot, s->T0, s->A0);
+            }
+        } else {
+            gen_op_mov_v_reg(s, ot, s->T0, rm);
+        }
+        /* load shift */
+        val = x86_ldub_code(env, s);
+        tcg_gen_movi_tl(s->T1, val);
+        if (op < 4)
+            goto unknown_op;
+        op -= 4;
+        goto bt_op;
+    case 0x1a3: /* bt Gv, Ev */
+        op = 0;
+        goto do_btx;
+    case 0x1ab: /* bts */
+        op = 1;
+        goto do_btx;
+    case 0x1b3: /* btr */
+        op = 2;
+        goto do_btx;
+    case 0x1bb: /* btc */
+        op = 3;
+    do_btx:
+        ot = dflag;
+        modrm = x86_ldub_code(env, s);
+        reg = ((modrm >> 3) & 7) | REX_R(s);
+        mod = (modrm >> 6) & 3;
+        rm = (modrm & 7) | REX_B(s);
+        gen_op_mov_v_reg(s, MO_32, s->T1, reg);
+        if (mod != 3) {
+            AddressParts a = gen_lea_modrm_0(env, s, modrm);
+            /* specific case: we need to add a displacement */
+            gen_exts(ot, s->T1);
+            tcg_gen_sari_tl(s->tmp0, s->T1, 3 + ot);
+            tcg_gen_shli_tl(s->tmp0, s->tmp0, ot);
+            tcg_gen_add_tl(s->A0, gen_lea_modrm_1(s, a), s->tmp0);
+            gen_lea_v_seg(s, s->aflag, s->A0, a.def_seg, s->override);
+            if (!(s->prefix & PREFIX_LOCK)) {
+                gen_op_ld_v(s, ot, 
s->T0, s->A0); + } + } else { + gen_op_mov_v_reg(s, ot, s->T0, rm); + } + bt_op: + tcg_gen_andi_tl(s->T1, s->T1, (1 << (3 + ot)) - 1); + tcg_gen_movi_tl(s->tmp0, 1); + tcg_gen_shl_tl(s->tmp0, s->tmp0, s->T1); + if (s->prefix & PREFIX_LOCK) { + switch (op) { + case 0: /* bt */ + /* Needs no atomic ops; we surpressed the normal + memory load for LOCK above so do it now. */ + gen_op_ld_v(s, ot, s->T0, s->A0); + break; + case 1: /* bts */ + tcg_gen_atomic_fetch_or_tl(s->T0, s->A0, s->tmp0, + s->mem_index, ot | MO_LE); + break; + case 2: /* btr */ + tcg_gen_not_tl(s->tmp0, s->tmp0); + tcg_gen_atomic_fetch_and_tl(s->T0, s->A0, s->tmp0, + s->mem_index, ot | MO_LE); + break; + default: + case 3: /* btc */ + tcg_gen_atomic_fetch_xor_tl(s->T0, s->A0, s->tmp0, + s->mem_index, ot | MO_LE); + break; + } + tcg_gen_shr_tl(s->tmp4, s->T0, s->T1); + } else { + tcg_gen_shr_tl(s->tmp4, s->T0, s->T1); + switch (op) { + case 0: /* bt */ + /* Data already loaded; nothing to do. */ + break; + case 1: /* bts */ + tcg_gen_or_tl(s->T0, s->T0, s->tmp0); + break; + case 2: /* btr */ + tcg_gen_andc_tl(s->T0, s->T0, s->tmp0); + break; + default: + case 3: /* btc */ + tcg_gen_xor_tl(s->T0, s->T0, s->tmp0); + break; + } + if (op !=3D 0) { + if (mod !=3D 3) { + gen_op_st_v(s, ot, s->T0, s->A0); + } else { + gen_op_mov_reg_v(s, ot, rm, s->T0); + } + } + } + + /* Delay all CC updates until after the store above. Note that + C is the result of the test, Z is unchanged, and the others + are all undefined. */ + switch (s->cc_op) { + case CC_OP_MULB ... CC_OP_MULQ: + case CC_OP_ADDB ... CC_OP_ADDQ: + case CC_OP_ADCB ... CC_OP_ADCQ: + case CC_OP_SUBB ... CC_OP_SUBQ: + case CC_OP_SBBB ... CC_OP_SBBQ: + case CC_OP_LOGICB ... CC_OP_LOGICQ: + case CC_OP_INCB ... CC_OP_INCQ: + case CC_OP_DECB ... CC_OP_DECQ: + case CC_OP_SHLB ... CC_OP_SHLQ: + case CC_OP_SARB ... CC_OP_SARQ: + case CC_OP_BMILGB ... CC_OP_BMILGQ: + /* Z was going to be computed from the non-zero status of CC_D= ST. 
+ We can get that same Z value (and the new C value) by leavi= ng + CC_DST alone, setting CC_SRC, and using a CC_OP_SAR of the + same width. */ + tcg_gen_mov_tl(cpu_cc_src, s->tmp4); + set_cc_op(s, ((s->cc_op - CC_OP_MULB) & 3) + CC_OP_SARB); + break; + default: + /* Otherwise, generate EFLAGS and replace the C bit. */ + gen_compute_eflags(s); + tcg_gen_deposit_tl(cpu_cc_src, cpu_cc_src, s->tmp4, + ctz32(CC_C), 1); + break; + } + break; + case 0x1bc: /* bsf / tzcnt */ + case 0x1bd: /* bsr / lzcnt */ + ot =3D dflag; + modrm =3D x86_ldub_code(env, s); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); + gen_extu(ot, s->T0); + + /* Note that lzcnt and tzcnt are in different extensions. */ + if ((prefixes & PREFIX_REPZ) + && (b & 1 + ? s->cpuid_ext3_features & CPUID_EXT3_ABM + : s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI1)) { + int size =3D 8 << ot; + /* For lzcnt/tzcnt, C bit is defined related to the input. */ + tcg_gen_mov_tl(cpu_cc_src, s->T0); + if (b & 1) { + /* For lzcnt, reduce the target_ulong result by the + number of zeros that we expect to find at the top. */ + tcg_gen_clzi_tl(s->T0, s->T0, TARGET_LONG_BITS); + tcg_gen_subi_tl(s->T0, s->T0, TARGET_LONG_BITS - size); + } else { + /* For tzcnt, a zero input must return the operand size. = */ + tcg_gen_ctzi_tl(s->T0, s->T0, size); + } + /* For lzcnt/tzcnt, Z bit is defined related to the result. */ + gen_op_update1_cc(s); + set_cc_op(s, CC_OP_BMILGB + ot); + } else { + /* For bsr/bsf, only the Z bit is defined and it is related + to the input and not the result. */ + tcg_gen_mov_tl(cpu_cc_dst, s->T0); + set_cc_op(s, CC_OP_LOGICB + ot); + + /* ??? The manual says that the output is undefined when the + input is zero, but real hardware leaves it unchanged, and + real programs appear to depend on that. Accomplish this + by passing the output as the value to return upon zero. 
*/ + if (b & 1) { + /* For bsr, return the bit index of the first 1 bit, + not the count of leading zeros. */ + tcg_gen_xori_tl(s->T1, cpu_regs[reg], TARGET_LONG_BITS - 1= ); + tcg_gen_clz_tl(s->T0, s->T0, s->T1); + tcg_gen_xori_tl(s->T0, s->T0, TARGET_LONG_BITS - 1); + } else { + tcg_gen_ctz_tl(s->T0, s->T0, cpu_regs[reg]); + } + } + gen_op_mov_reg_v(s, ot, reg, s->T0); + break; + /************************/ + /* bcd */ + case 0x27: /* daa */ + if (CODE64(s)) + goto illegal_op; + gen_update_cc_op(s); + gen_helper_daa(cpu_env); + set_cc_op(s, CC_OP_EFLAGS); + break; + case 0x2f: /* das */ + if (CODE64(s)) + goto illegal_op; + gen_update_cc_op(s); + gen_helper_das(cpu_env); + set_cc_op(s, CC_OP_EFLAGS); + break; + case 0x37: /* aaa */ + if (CODE64(s)) + goto illegal_op; + gen_update_cc_op(s); + gen_helper_aaa(cpu_env); + set_cc_op(s, CC_OP_EFLAGS); + break; + case 0x3f: /* aas */ + if (CODE64(s)) + goto illegal_op; + gen_update_cc_op(s); + gen_helper_aas(cpu_env); + set_cc_op(s, CC_OP_EFLAGS); + break; + case 0xd4: /* aam */ + if (CODE64(s)) + goto illegal_op; + val =3D x86_ldub_code(env, s); + if (val =3D=3D 0) { + gen_exception(s, EXCP00_DIVZ, pc_start - s->cs_base); + } else { + gen_helper_aam(cpu_env, tcg_const_i32(val)); + set_cc_op(s, CC_OP_LOGICB); + } + break; + case 0xd5: /* aad */ + if (CODE64(s)) + goto illegal_op; + val =3D x86_ldub_code(env, s); + gen_helper_aad(cpu_env, tcg_const_i32(val)); + set_cc_op(s, CC_OP_LOGICB); + break; + /************************/ + /* misc */ + case 0x90: /* nop */ + /* XXX: correct lock test for all insn */ + if (prefixes & PREFIX_LOCK) { + goto illegal_op; + } + /* If REX_B is set, then this is xchg eax, r8d, not a nop. 
*/ + if (REX_B(s)) { + goto do_xchg_reg_eax; + } + if (prefixes & PREFIX_REPZ) { + gen_update_cc_op(s); + gen_jmp_im(s, pc_start - s->cs_base); + gen_helper_pause(cpu_env, tcg_const_i32(s->pc - pc_start)); + s->base.is_jmp =3D DISAS_NORETURN; + } + break; + case 0x9b: /* fwait */ + if ((s->flags & (HF_MP_MASK | HF_TS_MASK)) =3D=3D + (HF_MP_MASK | HF_TS_MASK)) { + gen_exception(s, EXCP07_PREX, pc_start - s->cs_base); + } else { + gen_helper_fwait(cpu_env); + } + break; + case 0xcc: /* int3 */ + gen_interrupt(s, EXCP03_INT3, pc_start - s->cs_base, s->pc - s->cs= _base); + break; + case 0xcd: /* int N */ + val =3D x86_ldub_code(env, s); + if (check_vm86_iopl(s)) { + gen_interrupt(s, val, pc_start - s->cs_base, s->pc - s->cs_bas= e); + } + break; + case 0xce: /* into */ + if (CODE64(s)) + goto illegal_op; + gen_update_cc_op(s); + gen_jmp_im(s, pc_start - s->cs_base); + gen_helper_into(cpu_env, tcg_const_i32(s->pc - pc_start)); + break; +#ifdef WANT_ICEBP + case 0xf1: /* icebp (undocumented, exits to external debugger) */ + gen_svm_check_intercept(s, SVM_EXIT_ICEBP); + gen_debug(s); + break; +#endif + case 0xfa: /* cli */ + if (check_iopl(s)) { + gen_helper_cli(cpu_env); + } + break; + case 0xfb: /* sti */ + if (check_iopl(s)) { + gen_helper_sti(cpu_env); + /* interruptions are enabled only the first insn after sti */ + gen_jmp_im(s, s->pc - s->cs_base); + gen_eob_inhibit_irq(s, true); + } + break; + case 0x62: /* bound */ + if (CODE64(s)) + goto illegal_op; + ot =3D dflag; + modrm =3D x86_ldub_code(env, s); + reg =3D (modrm >> 3) & 7; + mod =3D (modrm >> 6) & 3; + if (mod =3D=3D 3) + goto illegal_op; + gen_op_mov_v_reg(s, ot, s->T0, reg); + gen_lea_modrm(env, s, modrm); + tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); + if (ot =3D=3D MO_16) { + gen_helper_boundw(cpu_env, s->A0, s->tmp2_i32); + } else { + gen_helper_boundl(cpu_env, s->A0, s->tmp2_i32); + } + break; + case 0x1c8 ... 
0x1cf: /* bswap reg */ + reg =3D (b & 7) | REX_B(s); +#ifdef TARGET_X86_64 + if (dflag =3D=3D MO_64) { + tcg_gen_bswap64_i64(cpu_regs[reg], cpu_regs[reg]); + break; + } +#endif + tcg_gen_bswap32_tl(cpu_regs[reg], cpu_regs[reg], TCG_BSWAP_OZ); + break; + case 0xd6: /* salc */ + if (CODE64(s)) + goto illegal_op; + gen_compute_eflags_c(s, s->T0); + tcg_gen_neg_tl(s->T0, s->T0); + gen_op_mov_reg_v(s, MO_8, R_EAX, s->T0); + break; + case 0xe0: /* loopnz */ + case 0xe1: /* loopz */ + case 0xe2: /* loop */ + case 0xe3: /* jecxz */ + { + TCGLabel *l1, *l2, *l3; + + tval =3D (int8_t)insn_get(env, s, MO_8); + next_eip =3D s->pc - s->cs_base; + tval +=3D next_eip; + if (dflag =3D=3D MO_16) { + tval &=3D 0xffff; + } + + l1 =3D gen_new_label(); + l2 =3D gen_new_label(); + l3 =3D gen_new_label(); + gen_update_cc_op(s); + b &=3D 3; + switch(b) { + case 0: /* loopnz */ + case 1: /* loopz */ + gen_op_add_reg_im(s, s->aflag, R_ECX, -1); + gen_op_jz_ecx(s, s->aflag, l3); + gen_jcc1(s, (JCC_Z << 1) | (b ^ 1), l1); + break; + case 2: /* loop */ + gen_op_add_reg_im(s, s->aflag, R_ECX, -1); + gen_op_jnz_ecx(s, s->aflag, l1); + break; + default: + case 3: /* jcxz */ + gen_op_jz_ecx(s, s->aflag, l1); + break; + } + + gen_set_label(l3); + gen_jmp_im(s, next_eip); + tcg_gen_br(l2); + + gen_set_label(l1); + gen_jmp_im(s, tval); + gen_set_label(l2); + gen_eob(s); + } + break; + case 0x130: /* wrmsr */ + case 0x132: /* rdmsr */ + if (check_cpl0(s)) { + gen_update_cc_op(s); + gen_jmp_im(s, pc_start - s->cs_base); + if (b & 2) { + gen_helper_rdmsr(cpu_env); + } else { + gen_helper_wrmsr(cpu_env); + gen_jmp_im(s, s->pc - s->cs_base); + gen_eob(s); + } + } + break; + case 0x131: /* rdtsc */ + gen_update_cc_op(s); + gen_jmp_im(s, pc_start - s->cs_base); + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_io_start(); + } + gen_helper_rdtsc(cpu_env); + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_jmp(s, s->pc - s->cs_base); + } + break; + case 0x133: /* rdpmc */ + gen_update_cc_op(s); + 
gen_jmp_im(s, pc_start - s->cs_base); + gen_helper_rdpmc(cpu_env); + s->base.is_jmp =3D DISAS_NORETURN; + break; + case 0x134: /* sysenter */ + /* For Intel SYSENTER is valid on 64-bit */ + if (CODE64(s) && env->cpuid_vendor1 !=3D CPUID_VENDOR_INTEL_1) + goto illegal_op; + if (!PE(s)) { + gen_exception_gpf(s); + } else { + gen_helper_sysenter(cpu_env); + gen_eob(s); + } + break; + case 0x135: /* sysexit */ + /* For Intel SYSEXIT is valid on 64-bit */ + if (CODE64(s) && env->cpuid_vendor1 !=3D CPUID_VENDOR_INTEL_1) + goto illegal_op; + if (!PE(s)) { + gen_exception_gpf(s); + } else { + gen_helper_sysexit(cpu_env, tcg_const_i32(dflag - 1)); + gen_eob(s); + } + break; +#ifdef TARGET_X86_64 + case 0x105: /* syscall */ + /* XXX: is it usable in real mode ? */ + gen_update_cc_op(s); + gen_jmp_im(s, pc_start - s->cs_base); + gen_helper_syscall(cpu_env, tcg_const_i32(s->pc - pc_start)); + /* TF handling for the syscall insn is different. The TF bit is c= hecked + after the syscall insn completes. This allows #DB to not be + generated after one has entered CPL0 if TF is set in FMASK. */ + gen_eob_worker(s, false, true); + break; + case 0x107: /* sysret */ + if (!PE(s)) { + gen_exception_gpf(s); + } else { + gen_helper_sysret(cpu_env, tcg_const_i32(dflag - 1)); + /* condition codes are modified only in long mode */ + if (LMA(s)) { + set_cc_op(s, CC_OP_EFLAGS); + } + /* TF handling for the sysret insn is different. The TF bit is + checked after the sysret insn completes. This allows #DB to= be + generated "as if" the syscall insn in userspace has just + completed. 
*/
+            gen_eob_worker(s, false, true);
+        }
+        break;
+#endif
+    case 0x1a2: /* cpuid */
+        gen_update_cc_op(s);
+        gen_jmp_im(s, pc_start - s->cs_base);
+        gen_helper_cpuid(cpu_env);
+        break;
+    case 0xf4: /* hlt */
+        if (check_cpl0(s)) {
+            gen_update_cc_op(s);
+            gen_jmp_im(s, pc_start - s->cs_base);
+            gen_helper_hlt(cpu_env, tcg_const_i32(s->pc - pc_start));
+            s->base.is_jmp = DISAS_NORETURN;
+        }
+        break;
+    case 0x100:
+        modrm = x86_ldub_code(env, s);
+        mod = (modrm >> 6) & 3;
+        op = (modrm >> 3) & 7;
+        switch(op) {
+        case 0: /* sldt */
+            if (!PE(s) || VM86(s))
+                goto illegal_op;
+            if (s->flags & HF_UMIP_MASK && !check_cpl0(s)) {
+                break;
+            }
+            gen_svm_check_intercept(s, SVM_EXIT_LDTR_READ);
+            tcg_gen_ld32u_tl(s->T0, cpu_env,
+                             offsetof(CPUX86State, ldt.selector));
+            ot = mod == 3 ? dflag : MO_16;
+            gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
+            break;
+        case 2: /* lldt */
+            if (!PE(s) || VM86(s))
+                goto illegal_op;
+            if (check_cpl0(s)) {
+                gen_svm_check_intercept(s, SVM_EXIT_LDTR_WRITE);
+                gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+                tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
+                gen_helper_lldt(cpu_env, s->tmp2_i32);
+            }
+            break;
+        case 1: /* str */
+            if (!PE(s) || VM86(s))
+                goto illegal_op;
+            if (s->flags & HF_UMIP_MASK && !check_cpl0(s)) {
+                break;
+            }
+            gen_svm_check_intercept(s, SVM_EXIT_TR_READ);
+            tcg_gen_ld32u_tl(s->T0, cpu_env,
+                             offsetof(CPUX86State, tr.selector));
+            ot = mod == 3 ? 
dflag : MO_16; + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1); + break; + case 3: /* ltr */ + if (!PE(s) || VM86(s)) + goto illegal_op; + if (check_cpl0(s)) { + gen_svm_check_intercept(s, SVM_EXIT_TR_WRITE); + gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0); + tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); + gen_helper_ltr(cpu_env, s->tmp2_i32); + } + break; + case 4: /* verr */ + case 5: /* verw */ + if (!PE(s) || VM86(s)) + goto illegal_op; + gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0); + gen_update_cc_op(s); + if (op =3D=3D 4) { + gen_helper_verr(cpu_env, s->T0); + } else { + gen_helper_verw(cpu_env, s->T0); + } + set_cc_op(s, CC_OP_EFLAGS); + break; + default: + goto unknown_op; + } + break; + + case 0x101: + modrm =3D x86_ldub_code(env, s); + switch (modrm) { + CASE_MODRM_MEM_OP(0): /* sgdt */ + if (s->flags & HF_UMIP_MASK && !check_cpl0(s)) { + break; + } + gen_svm_check_intercept(s, SVM_EXIT_GDTR_READ); + gen_lea_modrm(env, s, modrm); + tcg_gen_ld32u_tl(s->T0, + cpu_env, offsetof(CPUX86State, gdt.limit)); + gen_op_st_v(s, MO_16, s->T0, s->A0); + gen_add_A0_im(s, 2); + tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, gdt.base)); + if (dflag =3D=3D MO_16) { + tcg_gen_andi_tl(s->T0, s->T0, 0xffffff); + } + gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0); + break; + + case 0xc8: /* monitor */ + if (!(s->cpuid_ext_features & CPUID_EXT_MONITOR) || CPL(s) != =3D 0) { + goto illegal_op; + } + gen_update_cc_op(s); + gen_jmp_im(s, pc_start - s->cs_base); + tcg_gen_mov_tl(s->A0, cpu_regs[R_EAX]); + gen_extu(s->aflag, s->A0); + gen_add_A0_ds_seg(s); + gen_helper_monitor(cpu_env, s->A0); + break; + + case 0xc9: /* mwait */ + if (!(s->cpuid_ext_features & CPUID_EXT_MONITOR) || CPL(s) != =3D 0) { + goto illegal_op; + } + gen_update_cc_op(s); + gen_jmp_im(s, pc_start - s->cs_base); + gen_helper_mwait(cpu_env, tcg_const_i32(s->pc - pc_start)); + s->base.is_jmp =3D DISAS_NORETURN; + break; + + case 0xca: /* clac */ + if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_SMAP) 
+ || CPL(s) !=3D 0) { + goto illegal_op; + } + gen_helper_clac(cpu_env); + gen_jmp_im(s, s->pc - s->cs_base); + gen_eob(s); + break; + + case 0xcb: /* stac */ + if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_SMAP) + || CPL(s) !=3D 0) { + goto illegal_op; + } + gen_helper_stac(cpu_env); + gen_jmp_im(s, s->pc - s->cs_base); + gen_eob(s); + break; + + CASE_MODRM_MEM_OP(1): /* sidt */ + if (s->flags & HF_UMIP_MASK && !check_cpl0(s)) { + break; + } + gen_svm_check_intercept(s, SVM_EXIT_IDTR_READ); + gen_lea_modrm(env, s, modrm); + tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.lim= it)); + gen_op_st_v(s, MO_16, s->T0, s->A0); + gen_add_A0_im(s, 2); + tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.base)); + if (dflag =3D=3D MO_16) { + tcg_gen_andi_tl(s->T0, s->T0, 0xffffff); + } + gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0); + break; + + case 0xd0: /* xgetbv */ + if ((s->cpuid_ext_features & CPUID_EXT_XSAVE) =3D=3D 0 + || (s->prefix & (PREFIX_LOCK | PREFIX_DATA + | PREFIX_REPZ | PREFIX_REPNZ))) { + goto illegal_op; + } + tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[R_ECX]); + gen_helper_xgetbv(s->tmp1_i64, cpu_env, s->tmp2_i32); + tcg_gen_extr_i64_tl(cpu_regs[R_EAX], cpu_regs[R_EDX], s->tmp1_= i64); + break; + + case 0xd1: /* xsetbv */ + if ((s->cpuid_ext_features & CPUID_EXT_XSAVE) =3D=3D 0 + || (s->prefix & (PREFIX_LOCK | PREFIX_DATA + | PREFIX_REPZ | PREFIX_REPNZ))) { + goto illegal_op; + } + if (!check_cpl0(s)) { + break; + } + tcg_gen_concat_tl_i64(s->tmp1_i64, cpu_regs[R_EAX], + cpu_regs[R_EDX]); + tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[R_ECX]); + gen_helper_xsetbv(cpu_env, s->tmp2_i32, s->tmp1_i64); + /* End TB because translation flags may change. 
*/ + gen_jmp_im(s, s->pc - s->cs_base); + gen_eob(s); + break; + + case 0xd8: /* VMRUN */ + if (!SVME(s) || !PE(s)) { + goto illegal_op; + } + if (!check_cpl0(s)) { + break; + } + gen_update_cc_op(s); + gen_jmp_im(s, pc_start - s->cs_base); + gen_helper_vmrun(cpu_env, tcg_const_i32(s->aflag - 1), + tcg_const_i32(s->pc - pc_start)); + tcg_gen_exit_tb(NULL, 0); + s->base.is_jmp =3D DISAS_NORETURN; + break; + + case 0xd9: /* VMMCALL */ + if (!SVME(s)) { + goto illegal_op; + } + gen_update_cc_op(s); + gen_jmp_im(s, pc_start - s->cs_base); + gen_helper_vmmcall(cpu_env); + break; + + case 0xda: /* VMLOAD */ + if (!SVME(s) || !PE(s)) { + goto illegal_op; + } + if (!check_cpl0(s)) { + break; + } + gen_update_cc_op(s); + gen_jmp_im(s, pc_start - s->cs_base); + gen_helper_vmload(cpu_env, tcg_const_i32(s->aflag - 1)); + break; + + case 0xdb: /* VMSAVE */ + if (!SVME(s) || !PE(s)) { + goto illegal_op; + } + if (!check_cpl0(s)) { + break; + } + gen_update_cc_op(s); + gen_jmp_im(s, pc_start - s->cs_base); + gen_helper_vmsave(cpu_env, tcg_const_i32(s->aflag - 1)); + break; + + case 0xdc: /* STGI */ + if ((!SVME(s) && !(s->cpuid_ext3_features & CPUID_EXT3_SKINIT)) + || !PE(s)) { + goto illegal_op; + } + if (!check_cpl0(s)) { + break; + } + gen_update_cc_op(s); + gen_helper_stgi(cpu_env); + gen_jmp_im(s, s->pc - s->cs_base); + gen_eob(s); + break; + + case 0xdd: /* CLGI */ + if (!SVME(s) || !PE(s)) { + goto illegal_op; + } + if (!check_cpl0(s)) { + break; + } + gen_update_cc_op(s); + gen_jmp_im(s, pc_start - s->cs_base); + gen_helper_clgi(cpu_env); + break; + + case 0xde: /* SKINIT */ + if ((!SVME(s) && !(s->cpuid_ext3_features & CPUID_EXT3_SKINIT)) + || !PE(s)) { + goto illegal_op; + } + gen_svm_check_intercept(s, SVM_EXIT_SKINIT); + /* If not intercepted, not implemented -- raise #UD. 
*/ + goto illegal_op; + + case 0xdf: /* INVLPGA */ + if (!SVME(s) || !PE(s)) { + goto illegal_op; + } + if (!check_cpl0(s)) { + break; + } + gen_svm_check_intercept(s, SVM_EXIT_INVLPGA); + if (s->aflag =3D=3D MO_64) { + tcg_gen_mov_tl(s->A0, cpu_regs[R_EAX]); + } else { + tcg_gen_ext32u_tl(s->A0, cpu_regs[R_EAX]); + } + gen_helper_flush_page(cpu_env, s->A0); + gen_jmp_im(s, s->pc - s->cs_base); + gen_eob(s); + break; + + CASE_MODRM_MEM_OP(2): /* lgdt */ + if (!check_cpl0(s)) { + break; + } + gen_svm_check_intercept(s, SVM_EXIT_GDTR_WRITE); + gen_lea_modrm(env, s, modrm); + gen_op_ld_v(s, MO_16, s->T1, s->A0); + gen_add_A0_im(s, 2); + gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0); + if (dflag =3D=3D MO_16) { + tcg_gen_andi_tl(s->T0, s->T0, 0xffffff); + } + tcg_gen_st_tl(s->T0, cpu_env, offsetof(CPUX86State, gdt.base)); + tcg_gen_st32_tl(s->T1, cpu_env, offsetof(CPUX86State, gdt.limi= t)); + break; + + CASE_MODRM_MEM_OP(3): /* lidt */ + if (!check_cpl0(s)) { + break; + } + gen_svm_check_intercept(s, SVM_EXIT_IDTR_WRITE); + gen_lea_modrm(env, s, modrm); + gen_op_ld_v(s, MO_16, s->T1, s->A0); + gen_add_A0_im(s, 2); + gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0); + if (dflag =3D=3D MO_16) { + tcg_gen_andi_tl(s->T0, s->T0, 0xffffff); + } + tcg_gen_st_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.base)); + tcg_gen_st32_tl(s->T1, cpu_env, offsetof(CPUX86State, idt.limi= t)); + break; + + CASE_MODRM_OP(4): /* smsw */ + if (s->flags & HF_UMIP_MASK && !check_cpl0(s)) { + break; + } + gen_svm_check_intercept(s, SVM_EXIT_READ_CR0); + tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, cr[0])); + /* + * In 32-bit mode, the higher 16 bits of the destination + * register are undefined. In practice CR0[31:0] is stored + * just like in 64-bit mode. + */ + mod =3D (modrm >> 6) & 3; + ot =3D (mod !=3D 3 ? 
MO_16 : s->dflag); + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1); + break; + case 0xee: /* rdpkru */ + if (prefixes & PREFIX_LOCK) { + goto illegal_op; + } + tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[R_ECX]); + gen_helper_rdpkru(s->tmp1_i64, cpu_env, s->tmp2_i32); + tcg_gen_extr_i64_tl(cpu_regs[R_EAX], cpu_regs[R_EDX], s->tmp1_= i64); + break; + case 0xef: /* wrpkru */ + if (prefixes & PREFIX_LOCK) { + goto illegal_op; + } + tcg_gen_concat_tl_i64(s->tmp1_i64, cpu_regs[R_EAX], + cpu_regs[R_EDX]); + tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[R_ECX]); + gen_helper_wrpkru(cpu_env, s->tmp2_i32, s->tmp1_i64); + break; + + CASE_MODRM_OP(6): /* lmsw */ + if (!check_cpl0(s)) { + break; + } + gen_svm_check_intercept(s, SVM_EXIT_WRITE_CR0); + gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0); + /* + * Only the 4 lower bits of CR0 are modified. + * PE cannot be set to zero if already set to one. + */ + tcg_gen_ld_tl(s->T1, cpu_env, offsetof(CPUX86State, cr[0])); + tcg_gen_andi_tl(s->T0, s->T0, 0xf); + tcg_gen_andi_tl(s->T1, s->T1, ~0xe); + tcg_gen_or_tl(s->T0, s->T0, s->T1); + gen_helper_write_crN(cpu_env, tcg_constant_i32(0), s->T0); + gen_jmp_im(s, s->pc - s->cs_base); + gen_eob(s); + break; + + CASE_MODRM_MEM_OP(7): /* invlpg */ + if (!check_cpl0(s)) { + break; + } + gen_svm_check_intercept(s, SVM_EXIT_INVLPG); + gen_lea_modrm(env, s, modrm); + gen_helper_flush_page(cpu_env, s->A0); + gen_jmp_im(s, s->pc - s->cs_base); + gen_eob(s); + break; + + case 0xf8: /* swapgs */ +#ifdef TARGET_X86_64 + if (CODE64(s)) { + if (check_cpl0(s)) { + tcg_gen_mov_tl(s->T0, cpu_seg_base[R_GS]); + tcg_gen_ld_tl(cpu_seg_base[R_GS], cpu_env, + offsetof(CPUX86State, kernelgsbase)); + tcg_gen_st_tl(s->T0, cpu_env, + offsetof(CPUX86State, kernelgsbase)); + } + break; + } +#endif + goto illegal_op; + + case 0xf9: /* rdtscp */ + if (!(s->cpuid_ext2_features & CPUID_EXT2_RDTSCP)) { + goto illegal_op; + } + gen_update_cc_op(s); + gen_jmp_im(s, pc_start - s->cs_base); + if (tb_cflags(s->base.tb) & 
CF_USE_ICOUNT) {
+                gen_io_start();
+            }
+            gen_helper_rdtscp(cpu_env);
+            if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
+                gen_jmp(s, s->pc - s->cs_base);
+            }
+            break;
+
+        default:
+            goto unknown_op;
+        }
+        break;
+
+    case 0x108: /* invd */
+    case 0x109: /* wbinvd */
+        if (check_cpl0(s)) {
+            gen_svm_check_intercept(s, (b & 2) ? SVM_EXIT_INVD : SVM_EXIT_WBINVD);
+            /* nothing to do */
+        }
+        break;
+    case 0x63: /* arpl or movslS (x86_64) */
+#ifdef TARGET_X86_64
+        if (CODE64(s)) {
+            int d_ot;
+            /* d_ot is the size of destination */
+            d_ot = dflag;
+
+            modrm = x86_ldub_code(env, s);
+            reg = ((modrm >> 3) & 7) | REX_R(s);
+            mod = (modrm >> 6) & 3;
+            rm = (modrm & 7) | REX_B(s);
+
+            if (mod == 3) {
+                gen_op_mov_v_reg(s, MO_32, s->T0, rm);
+                /* sign extend */
+                if (d_ot == MO_64) {
+                    tcg_gen_ext32s_tl(s->T0, s->T0);
+                }
+                gen_op_mov_reg_v(s, d_ot, reg, s->T0);
+            } else {
+                gen_lea_modrm(env, s, modrm);
+                gen_op_ld_v(s, MO_32 | MO_SIGN, s->T0, s->A0);
+                gen_op_mov_reg_v(s, d_ot, reg, s->T0);
+            }
+        } else
+#endif
+        {
+            TCGLabel *label1;
+            TCGv t0, t1, t2, a0;
+
+            if (!PE(s) || VM86(s))
+                goto illegal_op;
+            t0 = tcg_temp_local_new();
+            t1 = tcg_temp_local_new();
+            t2 = tcg_temp_local_new();
+            ot = MO_16;
+            modrm = x86_ldub_code(env, s);
+            reg = (modrm >> 3) & 7;
+            mod = (modrm >> 6) & 3;
+            rm = modrm & 7;
+            if (mod != 3) {
+                gen_lea_modrm(env, s, modrm);
+                gen_op_ld_v(s, ot, t0, s->A0);
+                a0 = tcg_temp_local_new();
+                tcg_gen_mov_tl(a0, s->A0);
+            } else {
+                gen_op_mov_v_reg(s, ot, t0, rm);
+                a0 = NULL;
+            }
+            gen_op_mov_v_reg(s, ot, t1, reg);
+            tcg_gen_andi_tl(s->tmp0, t0, 3);
+            tcg_gen_andi_tl(t1, t1, 3);
+            tcg_gen_movi_tl(t2, 0);
+            label1 = gen_new_label();
+            tcg_gen_brcond_tl(TCG_COND_GE, s->tmp0, t1, label1);
+            tcg_gen_andi_tl(t0, t0, ~3);
+            tcg_gen_or_tl(t0, t0, t1);
+            tcg_gen_movi_tl(t2, CC_Z);
+            gen_set_label(label1);
+            if (mod != 3) {
+                gen_op_st_v(s, ot, t0, a0);
+                tcg_temp_free(a0);
+            } else {
+                gen_op_mov_reg_v(s, ot, rm, t0);
+            }
+            gen_compute_eflags(s);
+            tcg_gen_andi_tl(cpu_cc_src, cpu_cc_src, ~CC_Z);
+            tcg_gen_or_tl(cpu_cc_src, cpu_cc_src, t2);
+            tcg_temp_free(t0);
+            tcg_temp_free(t1);
+            tcg_temp_free(t2);
+        }
+        break;
+    case 0x102: /* lar */
+    case 0x103: /* lsl */
+        {
+            TCGLabel *label1;
+            TCGv t0;
+            if (!PE(s) || VM86(s))
+                goto illegal_op;
+
ot =3D dflag !=3D MO_16 ? MO_32 : MO_16; + modrm =3D x86_ldub_code(env, s); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0); + t0 =3D tcg_temp_local_new(); + gen_update_cc_op(s); + if (b =3D=3D 0x102) { + gen_helper_lar(t0, cpu_env, s->T0); + } else { + gen_helper_lsl(t0, cpu_env, s->T0); + } + tcg_gen_andi_tl(s->tmp0, cpu_cc_src, CC_Z); + label1 =3D gen_new_label(); + tcg_gen_brcondi_tl(TCG_COND_EQ, s->tmp0, 0, label1); + gen_op_mov_reg_v(s, ot, reg, t0); + gen_set_label(label1); + set_cc_op(s, CC_OP_EFLAGS); + tcg_temp_free(t0); + } + break; + case 0x118: + modrm =3D x86_ldub_code(env, s); + mod =3D (modrm >> 6) & 3; + op =3D (modrm >> 3) & 7; + switch(op) { + case 0: /* prefetchnta */ + case 1: /* prefetchnt0 */ + case 2: /* prefetchnt0 */ + case 3: /* prefetchnt0 */ + if (mod =3D=3D 3) + goto illegal_op; + gen_nop_modrm(env, s, modrm); + /* nothing more to do */ + break; + default: /* nop (multi byte) */ + gen_nop_modrm(env, s, modrm); + break; + } + break; + case 0x11a: + modrm =3D x86_ldub_code(env, s); + if (s->flags & HF_MPX_EN_MASK) { + mod =3D (modrm >> 6) & 3; + reg =3D ((modrm >> 3) & 7) | REX_R(s); + if (prefixes & PREFIX_REPZ) { + /* bndcl */ + if (reg >=3D 4 + || (prefixes & PREFIX_LOCK) + || s->aflag =3D=3D MO_16) { + goto illegal_op; + } + gen_bndck(env, s, modrm, TCG_COND_LTU, cpu_bndl[reg]); + } else if (prefixes & PREFIX_REPNZ) { + /* bndcu */ + if (reg >=3D 4 + || (prefixes & PREFIX_LOCK) + || s->aflag =3D=3D MO_16) { + goto illegal_op; + } + TCGv_i64 notu =3D tcg_temp_new_i64(); + tcg_gen_not_i64(notu, cpu_bndu[reg]); + gen_bndck(env, s, modrm, TCG_COND_GTU, notu); + tcg_temp_free_i64(notu); + } else if (prefixes & PREFIX_DATA) { + /* bndmov -- from reg/mem */ + if (reg >=3D 4 || s->aflag =3D=3D MO_16) { + goto illegal_op; + } + if (mod =3D=3D 3) { + int reg2 =3D (modrm & 7) | REX_B(s); + if (reg2 >=3D 4 || (prefixes & PREFIX_LOCK)) { + goto illegal_op; + } + if (s->flags & HF_MPX_IU_MASK) { + 
tcg_gen_mov_i64(cpu_bndl[reg], cpu_bndl[reg2]); + tcg_gen_mov_i64(cpu_bndu[reg], cpu_bndu[reg2]); + } + } else { + gen_lea_modrm(env, s, modrm); + if (CODE64(s)) { + tcg_gen_qemu_ld_i64(cpu_bndl[reg], s->A0, + s->mem_index, MO_LEUQ); + tcg_gen_addi_tl(s->A0, s->A0, 8); + tcg_gen_qemu_ld_i64(cpu_bndu[reg], s->A0, + s->mem_index, MO_LEUQ); + } else { + tcg_gen_qemu_ld_i64(cpu_bndl[reg], s->A0, + s->mem_index, MO_LEUL); + tcg_gen_addi_tl(s->A0, s->A0, 4); + tcg_gen_qemu_ld_i64(cpu_bndu[reg], s->A0, + s->mem_index, MO_LEUL); + } + /* bnd registers are now in-use */ + gen_set_hflag(s, HF_MPX_IU_MASK); + } + } else if (mod !=3D 3) { + /* bndldx */ + AddressParts a =3D gen_lea_modrm_0(env, s, modrm); + if (reg >=3D 4 + || (prefixes & PREFIX_LOCK) + || s->aflag =3D=3D MO_16 + || a.base < -1) { + goto illegal_op; + } + if (a.base >=3D 0) { + tcg_gen_addi_tl(s->A0, cpu_regs[a.base], a.disp); + } else { + tcg_gen_movi_tl(s->A0, 0); + } + gen_lea_v_seg(s, s->aflag, s->A0, a.def_seg, s->override); + if (a.index >=3D 0) { + tcg_gen_mov_tl(s->T0, cpu_regs[a.index]); + } else { + tcg_gen_movi_tl(s->T0, 0); + } + if (CODE64(s)) { + gen_helper_bndldx64(cpu_bndl[reg], cpu_env, s->A0, s->= T0); + tcg_gen_ld_i64(cpu_bndu[reg], cpu_env, + offsetof(CPUX86State, mmx_t0.MMX_Q(0))); + } else { + gen_helper_bndldx32(cpu_bndu[reg], cpu_env, s->A0, s->= T0); + tcg_gen_ext32u_i64(cpu_bndl[reg], cpu_bndu[reg]); + tcg_gen_shri_i64(cpu_bndu[reg], cpu_bndu[reg], 32); + } + gen_set_hflag(s, HF_MPX_IU_MASK); + } + } + gen_nop_modrm(env, s, modrm); + break; + case 0x11b: + modrm =3D x86_ldub_code(env, s); + if (s->flags & HF_MPX_EN_MASK) { + mod =3D (modrm >> 6) & 3; + reg =3D ((modrm >> 3) & 7) | REX_R(s); + if (mod !=3D 3 && (prefixes & PREFIX_REPZ)) { + /* bndmk */ + if (reg >=3D 4 + || (prefixes & PREFIX_LOCK) + || s->aflag =3D=3D MO_16) { + goto illegal_op; + } + AddressParts a =3D gen_lea_modrm_0(env, s, modrm); + if (a.base >=3D 0) { + tcg_gen_extu_tl_i64(cpu_bndl[reg], cpu_regs[a.base]); + if 
(!CODE64(s)) { + tcg_gen_ext32u_i64(cpu_bndl[reg], cpu_bndl[reg]); + } + } else if (a.base =3D=3D -1) { + /* no base register has lower bound of 0 */ + tcg_gen_movi_i64(cpu_bndl[reg], 0); + } else { + /* rip-relative generates #ud */ + goto illegal_op; + } + tcg_gen_not_tl(s->A0, gen_lea_modrm_1(s, a)); + if (!CODE64(s)) { + tcg_gen_ext32u_tl(s->A0, s->A0); + } + tcg_gen_extu_tl_i64(cpu_bndu[reg], s->A0); + /* bnd registers are now in-use */ + gen_set_hflag(s, HF_MPX_IU_MASK); + break; + } else if (prefixes & PREFIX_REPNZ) { + /* bndcn */ + if (reg >=3D 4 + || (prefixes & PREFIX_LOCK) + || s->aflag =3D=3D MO_16) { + goto illegal_op; + } + gen_bndck(env, s, modrm, TCG_COND_GTU, cpu_bndu[reg]); + } else if (prefixes & PREFIX_DATA) { + /* bndmov -- to reg/mem */ + if (reg >=3D 4 || s->aflag =3D=3D MO_16) { + goto illegal_op; + } + if (mod =3D=3D 3) { + int reg2 =3D (modrm & 7) | REX_B(s); + if (reg2 >=3D 4 || (prefixes & PREFIX_LOCK)) { + goto illegal_op; + } + if (s->flags & HF_MPX_IU_MASK) { + tcg_gen_mov_i64(cpu_bndl[reg2], cpu_bndl[reg]); + tcg_gen_mov_i64(cpu_bndu[reg2], cpu_bndu[reg]); + } + } else { + gen_lea_modrm(env, s, modrm); + if (CODE64(s)) { + tcg_gen_qemu_st_i64(cpu_bndl[reg], s->A0, + s->mem_index, MO_LEUQ); + tcg_gen_addi_tl(s->A0, s->A0, 8); + tcg_gen_qemu_st_i64(cpu_bndu[reg], s->A0, + s->mem_index, MO_LEUQ); + } else { + tcg_gen_qemu_st_i64(cpu_bndl[reg], s->A0, + s->mem_index, MO_LEUL); + tcg_gen_addi_tl(s->A0, s->A0, 4); + tcg_gen_qemu_st_i64(cpu_bndu[reg], s->A0, + s->mem_index, MO_LEUL); + } + } + } else if (mod !=3D 3) { + /* bndstx */ + AddressParts a =3D gen_lea_modrm_0(env, s, modrm); + if (reg >=3D 4 + || (prefixes & PREFIX_LOCK) + || s->aflag =3D=3D MO_16 + || a.base < -1) { + goto illegal_op; + } + if (a.base >=3D 0) { + tcg_gen_addi_tl(s->A0, cpu_regs[a.base], a.disp); + } else { + tcg_gen_movi_tl(s->A0, 0); + } + gen_lea_v_seg(s, s->aflag, s->A0, a.def_seg, s->override); + if (a.index >=3D 0) { + tcg_gen_mov_tl(s->T0, 
cpu_regs[a.index]); + } else { + tcg_gen_movi_tl(s->T0, 0); + } + if (CODE64(s)) { + gen_helper_bndstx64(cpu_env, s->A0, s->T0, + cpu_bndl[reg], cpu_bndu[reg]); + } else { + gen_helper_bndstx32(cpu_env, s->A0, s->T0, + cpu_bndl[reg], cpu_bndu[reg]); + } + } + } + gen_nop_modrm(env, s, modrm); + break; + case 0x119: case 0x11c ... 0x11f: /* nop (multi byte) */ + modrm =3D x86_ldub_code(env, s); + gen_nop_modrm(env, s, modrm); + break; + + case 0x120: /* mov reg, crN */ + case 0x122: /* mov crN, reg */ + if (!check_cpl0(s)) { + break; + } + modrm =3D x86_ldub_code(env, s); + /* + * Ignore the mod bits (assume (modrm&0xc0)=3D=3D0xc0). + * AMD documentation (24594.pdf) and testing of Intel 386 and 486 + * processors all show that the mod bits are assumed to be 1's, + * regardless of actual values. + */ + rm =3D (modrm & 7) | REX_B(s); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + switch (reg) { + case 0: + if ((prefixes & PREFIX_LOCK) && + (s->cpuid_ext3_features & CPUID_EXT3_CR8LEG)) { + reg =3D 8; + } + break; + case 2: + case 3: + case 4: + case 8: + break; + default: + goto unknown_op; + } + ot =3D (CODE64(s) ? MO_64 : MO_32); + + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_io_start(); + } + if (b & 2) { + gen_svm_check_intercept(s, SVM_EXIT_WRITE_CR0 + reg); + gen_op_mov_v_reg(s, ot, s->T0, rm); + gen_helper_write_crN(cpu_env, tcg_constant_i32(reg), s->T0); + gen_jmp_im(s, s->pc - s->cs_base); + gen_eob(s); + } else { + gen_svm_check_intercept(s, SVM_EXIT_READ_CR0 + reg); + gen_helper_read_crN(s->T0, cpu_env, tcg_constant_i32(reg)); + gen_op_mov_reg_v(s, ot, rm, s->T0); + if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { + gen_jmp(s, s->pc - s->cs_base); + } + } + break; + + case 0x121: /* mov reg, drN */ + case 0x123: /* mov drN, reg */ + if (check_cpl0(s)) { + modrm =3D x86_ldub_code(env, s); + /* Ignore the mod bits (assume (modrm&0xc0)=3D=3D0xc0). 
+ * AMD documentation (24594.pdf) and testing of + * intel 386 and 486 processors all show that the mod bits + * are assumed to be 1's, regardless of actual values. + */ + rm =3D (modrm & 7) | REX_B(s); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + if (CODE64(s)) + ot =3D MO_64; + else + ot =3D MO_32; + if (reg >=3D 8) { + goto illegal_op; + } + if (b & 2) { + gen_svm_check_intercept(s, SVM_EXIT_WRITE_DR0 + reg); + gen_op_mov_v_reg(s, ot, s->T0, rm); + tcg_gen_movi_i32(s->tmp2_i32, reg); + gen_helper_set_dr(cpu_env, s->tmp2_i32, s->T0); + gen_jmp_im(s, s->pc - s->cs_base); + gen_eob(s); + } else { + gen_svm_check_intercept(s, SVM_EXIT_READ_DR0 + reg); + tcg_gen_movi_i32(s->tmp2_i32, reg); + gen_helper_get_dr(s->T0, cpu_env, s->tmp2_i32); + gen_op_mov_reg_v(s, ot, rm, s->T0); + } + } + break; + case 0x106: /* clts */ + if (check_cpl0(s)) { + gen_svm_check_intercept(s, SVM_EXIT_WRITE_CR0); + gen_helper_clts(cpu_env); + /* abort block because static cpu state changed */ + gen_jmp_im(s, s->pc - s->cs_base); + gen_eob(s); + } + break; + /* MMX/3DNow!/SSE/SSE2/SSE3/SSSE3/SSE4 support */ + case 0x1c3: /* MOVNTI reg, mem */ + if (!(s->cpuid_features & CPUID_SSE2)) + goto illegal_op; + ot =3D mo_64_32(dflag); + modrm =3D x86_ldub_code(env, s); + mod =3D (modrm >> 6) & 3; + if (mod =3D=3D 3) + goto illegal_op; + reg =3D ((modrm >> 3) & 7) | REX_R(s); + /* generate a generic store */ + gen_ldst_modrm(env, s, modrm, ot, reg, 1); + break; + case 0x1ae: + modrm =3D x86_ldub_code(env, s); + switch (modrm) { + CASE_MODRM_MEM_OP(0): /* fxsave */ + if (!(s->cpuid_features & CPUID_FXSR) + || (prefixes & PREFIX_LOCK)) { + goto illegal_op; + } + if ((s->flags & HF_EM_MASK) || (s->flags & HF_TS_MASK)) { + gen_exception(s, EXCP07_PREX, pc_start - s->cs_base); + break; + } + gen_lea_modrm(env, s, modrm); + gen_helper_fxsave(cpu_env, s->A0); + break; + + CASE_MODRM_MEM_OP(1): /* fxrstor */ + if (!(s->cpuid_features & CPUID_FXSR) + || (prefixes & PREFIX_LOCK)) { + goto illegal_op; + } + if 
((s->flags & HF_EM_MASK) || (s->flags & HF_TS_MASK)) {
+                gen_exception(s, EXCP07_PREX, pc_start - s->cs_base);
+                break;
+            }
+            gen_lea_modrm(env, s, modrm);
+            gen_helper_fxrstor(cpu_env, s->A0);
+            break;
+
+        CASE_MODRM_MEM_OP(2): /* ldmxcsr */
+            if ((s->flags & HF_EM_MASK) || !(s->flags & HF_OSFXSR_MASK)) {
+                goto illegal_op;
+            }
+            if (s->flags & HF_TS_MASK) {
+                gen_exception(s, EXCP07_PREX, pc_start - s->cs_base);
+                break;
+            }
+            gen_lea_modrm(env, s, modrm);
+            tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, s->mem_index, MO_LEUL);
+            gen_helper_ldmxcsr(cpu_env, s->tmp2_i32);
+            break;
+
+        CASE_MODRM_MEM_OP(3): /* stmxcsr */
+            if ((s->flags & HF_EM_MASK) || !(s->flags & HF_OSFXSR_MASK)) {
+                goto illegal_op;
+            }
+            if (s->flags & HF_TS_MASK) {
+                gen_exception(s, EXCP07_PREX, pc_start - s->cs_base);
+                break;
+            }
+            gen_helper_update_mxcsr(cpu_env);
+            gen_lea_modrm(env, s, modrm);
+            tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, mxcsr));
+            gen_op_st_v(s, MO_32, s->T0, s->A0);
+            break;
+
+        CASE_MODRM_MEM_OP(4): /* xsave */
+            if ((s->cpuid_ext_features & CPUID_EXT_XSAVE) == 0
+                || (prefixes & (PREFIX_LOCK | PREFIX_DATA
+                                | PREFIX_REPZ | PREFIX_REPNZ))) {
+                goto illegal_op;
+            }
+            gen_lea_modrm(env, s, modrm);
+            tcg_gen_concat_tl_i64(s->tmp1_i64, cpu_regs[R_EAX],
+                                  cpu_regs[R_EDX]);
+            gen_helper_xsave(cpu_env, s->A0, s->tmp1_i64);
+            break;
+
+        CASE_MODRM_MEM_OP(5): /* xrstor */
+            if ((s->cpuid_ext_features & CPUID_EXT_XSAVE) == 0
+                || (prefixes & (PREFIX_LOCK | PREFIX_DATA
+                                | PREFIX_REPZ | PREFIX_REPNZ))) {
+                goto illegal_op;
+            }
+            gen_lea_modrm(env, s, modrm);
+            tcg_gen_concat_tl_i64(s->tmp1_i64, cpu_regs[R_EAX],
+                                  cpu_regs[R_EDX]);
+            gen_helper_xrstor(cpu_env, s->A0, s->tmp1_i64);
+            /* XRSTOR is how MPX is enabled, which changes how
+               we translate.  Thus we need to end the TB.
*/ + gen_update_cc_op(s); + gen_jmp_im(s, s->pc - s->cs_base); + gen_eob(s); + break; + + CASE_MODRM_MEM_OP(6): /* xsaveopt / clwb */ + if (prefixes & PREFIX_LOCK) { + goto illegal_op; + } + if (prefixes & PREFIX_DATA) { + /* clwb */ + if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_CLWB)) { + goto illegal_op; + } + gen_nop_modrm(env, s, modrm); + } else { + /* xsaveopt */ + if ((s->cpuid_ext_features & CPUID_EXT_XSAVE) =3D=3D 0 + || (s->cpuid_xsave_features & CPUID_XSAVE_XSAVEOPT) = =3D=3D 0 + || (prefixes & (PREFIX_REPZ | PREFIX_REPNZ))) { + goto illegal_op; + } + gen_lea_modrm(env, s, modrm); + tcg_gen_concat_tl_i64(s->tmp1_i64, cpu_regs[R_EAX], + cpu_regs[R_EDX]); + gen_helper_xsaveopt(cpu_env, s->A0, s->tmp1_i64); + } + break; + + CASE_MODRM_MEM_OP(7): /* clflush / clflushopt */ + if (prefixes & PREFIX_LOCK) { + goto illegal_op; + } + if (prefixes & PREFIX_DATA) { + /* clflushopt */ + if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_CLFLUSHOPT= )) { + goto illegal_op; + } + } else { + /* clflush */ + if ((s->prefix & (PREFIX_REPZ | PREFIX_REPNZ)) + || !(s->cpuid_features & CPUID_CLFLUSH)) { + goto illegal_op; + } + } + gen_nop_modrm(env, s, modrm); + break; + + case 0xc0 ... 0xc7: /* rdfsbase (f3 0f ae /0) */ + case 0xc8 ... 0xcf: /* rdgsbase (f3 0f ae /1) */ + case 0xd0 ... 0xd7: /* wrfsbase (f3 0f ae /2) */ + case 0xd8 ... 0xdf: /* wrgsbase (f3 0f ae /3) */ + if (CODE64(s) + && (prefixes & PREFIX_REPZ) + && !(prefixes & PREFIX_LOCK) + && (s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_FSGSBASE)) { + TCGv base, treg, src, dst; + + /* Preserve hflags bits by testing CR4 at runtime. */ + tcg_gen_movi_i32(s->tmp2_i32, CR4_FSGSBASE_MASK); + gen_helper_cr4_testbit(cpu_env, s->tmp2_i32); + + base =3D cpu_seg_base[modrm & 8 ? 
R_GS : R_FS]; + treg =3D cpu_regs[(modrm & 7) | REX_B(s)]; + + if (modrm & 0x10) { + /* wr*base */ + dst =3D base, src =3D treg; + } else { + /* rd*base */ + dst =3D treg, src =3D base; + } + + if (s->dflag =3D=3D MO_32) { + tcg_gen_ext32u_tl(dst, src); + } else { + tcg_gen_mov_tl(dst, src); + } + break; + } + goto unknown_op; + + case 0xf8: /* sfence / pcommit */ + if (prefixes & PREFIX_DATA) { + /* pcommit */ + if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_PCOMMIT) + || (prefixes & PREFIX_LOCK)) { + goto illegal_op; + } + break; + } + /* fallthru */ + case 0xf9 ... 0xff: /* sfence */ + if (!(s->cpuid_features & CPUID_SSE) + || (prefixes & PREFIX_LOCK)) { + goto illegal_op; + } + tcg_gen_mb(TCG_MO_ST_ST | TCG_BAR_SC); + break; + case 0xe8 ... 0xef: /* lfence */ + if (!(s->cpuid_features & CPUID_SSE) + || (prefixes & PREFIX_LOCK)) { + goto illegal_op; + } + tcg_gen_mb(TCG_MO_LD_LD | TCG_BAR_SC); + break; + case 0xf0 ... 0xf7: /* mfence */ + if (!(s->cpuid_features & CPUID_SSE2) + || (prefixes & PREFIX_LOCK)) { + goto illegal_op; + } + tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC); + break; + + default: + goto unknown_op; + } + break; + + case 0x10d: /* 3DNow! 
prefetch(w) */ + modrm =3D x86_ldub_code(env, s); + mod =3D (modrm >> 6) & 3; + if (mod =3D=3D 3) + goto illegal_op; + gen_nop_modrm(env, s, modrm); + break; + case 0x1aa: /* rsm */ + gen_svm_check_intercept(s, SVM_EXIT_RSM); + if (!(s->flags & HF_SMM_MASK)) + goto illegal_op; +#ifdef CONFIG_USER_ONLY + /* we should not be in SMM mode */ + g_assert_not_reached(); +#else + gen_update_cc_op(s); + gen_jmp_im(s, s->pc - s->cs_base); + gen_helper_rsm(cpu_env); +#endif /* CONFIG_USER_ONLY */ + gen_eob(s); + break; + case 0x1b8: /* SSE4.2 popcnt */ + if ((prefixes & (PREFIX_REPZ | PREFIX_LOCK | PREFIX_REPNZ)) !=3D + PREFIX_REPZ) + goto illegal_op; + if (!(s->cpuid_ext_features & CPUID_EXT_POPCNT)) + goto illegal_op; + + modrm =3D x86_ldub_code(env, s); + reg =3D ((modrm >> 3) & 7) | REX_R(s); + + if (s->prefix & PREFIX_DATA) { + ot =3D MO_16; + } else { + ot =3D mo_64_32(dflag); + } + + gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); + gen_extu(ot, s->T0); + tcg_gen_mov_tl(cpu_cc_src, s->T0); + tcg_gen_ctpop_tl(s->T0, s->T0); + gen_op_mov_reg_v(s, ot, reg, s->T0); + + set_cc_op(s, CC_OP_POPCNT); + break; + case 0x10e ... 0x10f: + /* 3DNow! instructions, ignore prefixes */ + s->prefix &=3D ~(PREFIX_REPZ | PREFIX_REPNZ | PREFIX_DATA); + /* fall through */ + case 0x110 ... 0x117: + case 0x128 ... 0x12f: + case 0x138 ... 0x13a: + case 0x150 ... 0x179: + case 0x17c ... 0x17f: + case 0x1c2: + case 0x1c4 ... 0x1c6: + case 0x1d0 ... 0x1fe: + gen_sse(env, s, b, pc_start); + break; + default: + goto unknown_op; + } + return s->pc; + illegal_op: + gen_illegal_opcode(s); + return s->pc; + unknown_op: + gen_unknown_opcode(env, s); + return s->pc; +} + diff --git a/target/i386/tcg/translate.c b/target/i386/tcg/translate.c index b7972f0ff5..d2d6eb89e7 100644 --- a/target/i386/tcg/translate.c +++ b/target/i386/tcg/translate.c @@ -48,20 +48,6 @@ # define clztl clz32 #endif =20 -/* For a switch indexed by MODRM, match all memory operands for a given OP= . 
*/ -#define CASE_MODRM_MEM_OP(OP) \ - case (0 << 6) | (OP << 3) | 0 ... (0 << 6) | (OP << 3) | 7: \ - case (1 << 6) | (OP << 3) | 0 ... (1 << 6) | (OP << 3) | 7: \ - case (2 << 6) | (OP << 3) | 0 ... (2 << 6) | (OP << 3) | 7 - -#define CASE_MODRM_OP(OP) \ - case (0 << 6) | (OP << 3) | 0 ... (0 << 6) | (OP << 3) | 7: \ - case (1 << 6) | (OP << 3) | 0 ... (1 << 6) | (OP << 3) | 7: \ - case (2 << 6) | (OP << 3) | 0 ... (2 << 6) | (OP << 3) | 7: \ - case (3 << 6) | (OP << 3) | 0 ... (3 << 6) | (OP << 3) | 7 - -//#define MACRO_TEST 1 - /* global register indexes */ static TCGv cpu_cc_dst, cpu_cc_src, cpu_cc_src2; static TCGv_i32 cpu_cc_op; @@ -2776,5706 +2762,7 @@ static inline void gen_op_movq_env_0(DisasContext = *s, int d_offset) tcg_gen_movi_i64(s->tmp1_i64, 0); tcg_gen_st_i64(s->tmp1_i64, cpu_env, d_offset); } - -typedef void (*SSEFunc_i_ep)(TCGv_i32 val, TCGv_ptr env, TCGv_ptr reg); -typedef void (*SSEFunc_l_ep)(TCGv_i64 val, TCGv_ptr env, TCGv_ptr reg); -typedef void (*SSEFunc_0_epi)(TCGv_ptr env, TCGv_ptr reg, TCGv_i32 val); -typedef void (*SSEFunc_0_epl)(TCGv_ptr env, TCGv_ptr reg, TCGv_i64 val); -typedef void (*SSEFunc_0_epp)(TCGv_ptr env, TCGv_ptr reg_a, TCGv_ptr reg_b= ); -typedef void (*SSEFunc_0_eppi)(TCGv_ptr env, TCGv_ptr reg_a, TCGv_ptr reg_= b, - TCGv_i32 val); -typedef void (*SSEFunc_0_ppi)(TCGv_ptr reg_a, TCGv_ptr reg_b, TCGv_i32 val= ); -typedef void (*SSEFunc_0_eppt)(TCGv_ptr env, TCGv_ptr reg_a, TCGv_ptr reg_= b, - TCGv val); - -#define SSE_SPECIAL ((void *)1) -#define SSE_DUMMY ((void *)2) - -#define MMX_OP2(x) { gen_helper_ ## x ## _mmx, gen_helper_ ## x ## _xmm } -#define SSE_FOP(x) { gen_helper_ ## x ## ps, gen_helper_ ## x ## pd, \ - gen_helper_ ## x ## ss, gen_helper_ ## x ## sd, } - -static const SSEFunc_0_epp sse_op_table1[256][4] =3D { - /* 3DNow! extensions */ - [0x0e] =3D { SSE_DUMMY }, /* femms */ - [0x0f] =3D { SSE_DUMMY }, /* pf... 
*/ - /* pure SSE operations */ - [0x10] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* = movups, movupd, movss, movsd */ - [0x11] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* = movups, movupd, movss, movsd */ - [0x12] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* = movlps, movlpd, movsldup, movddup */ - [0x13] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* movlps, movlpd */ - [0x14] =3D { gen_helper_punpckldq_xmm, gen_helper_punpcklqdq_xmm }, - [0x15] =3D { gen_helper_punpckhdq_xmm, gen_helper_punpckhqdq_xmm }, - [0x16] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* movhps, movh= pd, movshdup */ - [0x17] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* movhps, movhpd */ - - [0x28] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* movaps, movapd */ - [0x29] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* movaps, movapd */ - [0x2a] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* = cvtpi2ps, cvtpi2pd, cvtsi2ss, cvtsi2sd */ - [0x2b] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* = movntps, movntpd, movntss, movntsd */ - [0x2c] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* = cvttps2pi, cvttpd2pi, cvttsd2si, cvttss2si */ - [0x2d] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* = cvtps2pi, cvtpd2pi, cvtsd2si, cvtss2si */ - [0x2e] =3D { gen_helper_ucomiss, gen_helper_ucomisd }, - [0x2f] =3D { gen_helper_comiss, gen_helper_comisd }, - [0x50] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* movmskps, movmskpd */ - [0x51] =3D SSE_FOP(sqrt), - [0x52] =3D { gen_helper_rsqrtps, NULL, gen_helper_rsqrtss, NULL }, - [0x53] =3D { gen_helper_rcpps, NULL, gen_helper_rcpss, NULL }, - [0x54] =3D { gen_helper_pand_xmm, gen_helper_pand_xmm }, /* andps, and= pd */ - [0x55] =3D { gen_helper_pandn_xmm, gen_helper_pandn_xmm }, /* andnps, = andnpd */ - [0x56] =3D { gen_helper_por_xmm, gen_helper_por_xmm }, /* orps, orpd */ - [0x57] =3D { gen_helper_pxor_xmm, gen_helper_pxor_xmm }, /* xorps, xor= pd */ - [0x58] =3D SSE_FOP(add), - 
[0x59] =3D SSE_FOP(mul), - [0x5a] =3D { gen_helper_cvtps2pd, gen_helper_cvtpd2ps, - gen_helper_cvtss2sd, gen_helper_cvtsd2ss }, - [0x5b] =3D { gen_helper_cvtdq2ps, gen_helper_cvtps2dq, gen_helper_cvtt= ps2dq }, - [0x5c] =3D SSE_FOP(sub), - [0x5d] =3D SSE_FOP(min), - [0x5e] =3D SSE_FOP(div), - [0x5f] =3D SSE_FOP(max), - - [0xc2] =3D SSE_FOP(cmpeq), - [0xc6] =3D { (SSEFunc_0_epp)gen_helper_shufps, - (SSEFunc_0_epp)gen_helper_shufpd }, /* XXX: casts */ - - /* SSSE3, SSE4, MOVBE, CRC32, BMI1, BMI2, ADX. */ - [0x38] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, - [0x3a] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, - - /* MMX ops and their SSE extensions */ - [0x60] =3D MMX_OP2(punpcklbw), - [0x61] =3D MMX_OP2(punpcklwd), - [0x62] =3D MMX_OP2(punpckldq), - [0x63] =3D MMX_OP2(packsswb), - [0x64] =3D MMX_OP2(pcmpgtb), - [0x65] =3D MMX_OP2(pcmpgtw), - [0x66] =3D MMX_OP2(pcmpgtl), - [0x67] =3D MMX_OP2(packuswb), - [0x68] =3D MMX_OP2(punpckhbw), - [0x69] =3D MMX_OP2(punpckhwd), - [0x6a] =3D MMX_OP2(punpckhdq), - [0x6b] =3D MMX_OP2(packssdw), - [0x6c] =3D { NULL, gen_helper_punpcklqdq_xmm }, - [0x6d] =3D { NULL, gen_helper_punpckhqdq_xmm }, - [0x6e] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* movd mm, ea */ - [0x6f] =3D { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* movq, movdqa,= , movqdu */ - [0x70] =3D { (SSEFunc_0_epp)gen_helper_pshufw_mmx, - (SSEFunc_0_epp)gen_helper_pshufd_xmm, - (SSEFunc_0_epp)gen_helper_pshufhw_xmm, - (SSEFunc_0_epp)gen_helper_pshuflw_xmm }, /* XXX: casts */ - [0x71] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* shiftw */ - [0x72] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* shiftd */ - [0x73] =3D { SSE_SPECIAL, SSE_SPECIAL }, /* shiftq */ - [0x74] =3D MMX_OP2(pcmpeqb), - [0x75] =3D MMX_OP2(pcmpeqw), - [0x76] =3D MMX_OP2(pcmpeql), - [0x77] =3D { SSE_DUMMY }, /* emms */ - [0x78] =3D { NULL, SSE_SPECIAL, NULL, SSE_SPECIAL }, /* extrq_i, inser= tq_i */ - [0x79] =3D { NULL, gen_helper_extrq_r, NULL, gen_helper_insertq_r }, - [0x7c] =3D { NULL, 
gen_helper_haddpd, NULL, gen_helper_haddps },
-    [0x7d] = { NULL, gen_helper_hsubpd, NULL, gen_helper_hsubps },
-    [0x7e] = { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* movd, movd, , movq */
-    [0x7f] = { SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL }, /* movq, movdqa, movdqu */
-    [0xc4] = { SSE_SPECIAL, SSE_SPECIAL }, /* pinsrw */
-    [0xc5] = { SSE_SPECIAL, SSE_SPECIAL }, /* pextrw */
-    [0xd0] = { NULL, gen_helper_addsubpd, NULL, gen_helper_addsubps },
-    [0xd1] = MMX_OP2(psrlw),
-    [0xd2] = MMX_OP2(psrld),
-    [0xd3] = MMX_OP2(psrlq),
-    [0xd4] = MMX_OP2(paddq),
-    [0xd5] = MMX_OP2(pmullw),
-    [0xd6] = { NULL, SSE_SPECIAL, SSE_SPECIAL, SSE_SPECIAL },
-    [0xd7] = { SSE_SPECIAL, SSE_SPECIAL }, /* pmovmskb */
-    [0xd8] = MMX_OP2(psubusb),
-    [0xd9] = MMX_OP2(psubusw),
-    [0xda] = MMX_OP2(pminub),
-    [0xdb] = MMX_OP2(pand),
-    [0xdc] = MMX_OP2(paddusb),
-    [0xdd] = MMX_OP2(paddusw),
-    [0xde] = MMX_OP2(pmaxub),
-    [0xdf] = MMX_OP2(pandn),
-    [0xe0] = MMX_OP2(pavgb),
-    [0xe1] = MMX_OP2(psraw),
-    [0xe2] = MMX_OP2(psrad),
-    [0xe3] = MMX_OP2(pavgw),
-    [0xe4] = MMX_OP2(pmulhuw),
-    [0xe5] = MMX_OP2(pmulhw),
-    [0xe6] = { NULL, gen_helper_cvttpd2dq, gen_helper_cvtdq2pd, gen_helper_cvtpd2dq },
-    [0xe7] = { SSE_SPECIAL , SSE_SPECIAL },  /* movntq, movntq */
-    [0xe8] = MMX_OP2(psubsb),
-    [0xe9] = MMX_OP2(psubsw),
-    [0xea] = MMX_OP2(pminsw),
-    [0xeb] = MMX_OP2(por),
-    [0xec] = MMX_OP2(paddsb),
-    [0xed] = MMX_OP2(paddsw),
-    [0xee] = MMX_OP2(pmaxsw),
-    [0xef] = MMX_OP2(pxor),
-    [0xf0] = { NULL, NULL, NULL, SSE_SPECIAL }, /* lddqu */
-    [0xf1] = MMX_OP2(psllw),
-    [0xf2] = MMX_OP2(pslld),
-    [0xf3] = MMX_OP2(psllq),
-    [0xf4] = MMX_OP2(pmuludq),
-    [0xf5] = MMX_OP2(pmaddwd),
-    [0xf6] = MMX_OP2(psadbw),
-    [0xf7] = { (SSEFunc_0_epp)gen_helper_maskmov_mmx,
-               (SSEFunc_0_epp)gen_helper_maskmov_xmm }, /* XXX: casts */
-    [0xf8] = MMX_OP2(psubb),
-    [0xf9] = MMX_OP2(psubw),
-    [0xfa] = MMX_OP2(psubl),
-    [0xfb] = MMX_OP2(psubq),
-    [0xfc] = MMX_OP2(paddb),
-    [0xfd] = MMX_OP2(paddw),
-    [0xfe] = MMX_OP2(paddl),
-};
-
-static const SSEFunc_0_epp sse_op_table2[3 * 8][2] = {
-    [0 + 2] = MMX_OP2(psrlw),
-    [0 + 4] = MMX_OP2(psraw),
-    [0 + 6] = MMX_OP2(psllw),
-    [8 + 2] = MMX_OP2(psrld),
-    [8 + 4] = MMX_OP2(psrad),
-    [8 + 6] = MMX_OP2(pslld),
-    [16 + 2] = MMX_OP2(psrlq),
-
[16 + 3] =3D { NULL, gen_helper_psrldq_xmm }, - [16 + 6] =3D MMX_OP2(psllq), - [16 + 7] =3D { NULL, gen_helper_pslldq_xmm }, -}; - -static const SSEFunc_0_epi sse_op_table3ai[] =3D { - gen_helper_cvtsi2ss, - gen_helper_cvtsi2sd -}; - -#ifdef TARGET_X86_64 -static const SSEFunc_0_epl sse_op_table3aq[] =3D { - gen_helper_cvtsq2ss, - gen_helper_cvtsq2sd -}; -#endif - -static const SSEFunc_i_ep sse_op_table3bi[] =3D { - gen_helper_cvttss2si, - gen_helper_cvtss2si, - gen_helper_cvttsd2si, - gen_helper_cvtsd2si -}; - -#ifdef TARGET_X86_64 -static const SSEFunc_l_ep sse_op_table3bq[] =3D { - gen_helper_cvttss2sq, - gen_helper_cvtss2sq, - gen_helper_cvttsd2sq, - gen_helper_cvtsd2sq -}; -#endif - -static const SSEFunc_0_epp sse_op_table4[8][4] =3D { - SSE_FOP(cmpeq), - SSE_FOP(cmplt), - SSE_FOP(cmple), - SSE_FOP(cmpunord), - SSE_FOP(cmpneq), - SSE_FOP(cmpnlt), - SSE_FOP(cmpnle), - SSE_FOP(cmpord), -}; - -static const SSEFunc_0_epp sse_op_table5[256] =3D { - [0x0c] =3D gen_helper_pi2fw, - [0x0d] =3D gen_helper_pi2fd, - [0x1c] =3D gen_helper_pf2iw, - [0x1d] =3D gen_helper_pf2id, - [0x8a] =3D gen_helper_pfnacc, - [0x8e] =3D gen_helper_pfpnacc, - [0x90] =3D gen_helper_pfcmpge, - [0x94] =3D gen_helper_pfmin, - [0x96] =3D gen_helper_pfrcp, - [0x97] =3D gen_helper_pfrsqrt, - [0x9a] =3D gen_helper_pfsub, - [0x9e] =3D gen_helper_pfadd, - [0xa0] =3D gen_helper_pfcmpgt, - [0xa4] =3D gen_helper_pfmax, - [0xa6] =3D gen_helper_movq, /* pfrcpit1; no need to actually increase = precision */ - [0xa7] =3D gen_helper_movq, /* pfrsqit1 */ - [0xaa] =3D gen_helper_pfsubr, - [0xae] =3D gen_helper_pfacc, - [0xb0] =3D gen_helper_pfcmpeq, - [0xb4] =3D gen_helper_pfmul, - [0xb6] =3D gen_helper_movq, /* pfrcpit2 */ - [0xb7] =3D gen_helper_pmulhrw_mmx, - [0xbb] =3D gen_helper_pswapd, - [0xbf] =3D gen_helper_pavgb_mmx /* pavgusb */ -}; - -struct SSEOpHelper_epp { - SSEFunc_0_epp op[2]; - uint32_t ext_mask; -}; - -struct SSEOpHelper_eppi { - SSEFunc_0_eppi op[2]; - uint32_t ext_mask; -}; - -#define 
SSSE3_OP(x) { MMX_OP2(x), CPUID_EXT_SSSE3 } -#define SSE41_OP(x) { { NULL, gen_helper_ ## x ## _xmm }, CPUID_EXT_SSE41 } -#define SSE42_OP(x) { { NULL, gen_helper_ ## x ## _xmm }, CPUID_EXT_SSE42 } -#define SSE41_SPECIAL { { NULL, SSE_SPECIAL }, CPUID_EXT_SSE41 } -#define PCLMULQDQ_OP(x) { { NULL, gen_helper_ ## x ## _xmm }, \ - CPUID_EXT_PCLMULQDQ } -#define AESNI_OP(x) { { NULL, gen_helper_ ## x ## _xmm }, CPUID_EXT_AES } - -static const struct SSEOpHelper_epp sse_op_table6[256] =3D { - [0x00] =3D SSSE3_OP(pshufb), - [0x01] =3D SSSE3_OP(phaddw), - [0x02] =3D SSSE3_OP(phaddd), - [0x03] =3D SSSE3_OP(phaddsw), - [0x04] =3D SSSE3_OP(pmaddubsw), - [0x05] =3D SSSE3_OP(phsubw), - [0x06] =3D SSSE3_OP(phsubd), - [0x07] =3D SSSE3_OP(phsubsw), - [0x08] =3D SSSE3_OP(psignb), - [0x09] =3D SSSE3_OP(psignw), - [0x0a] =3D SSSE3_OP(psignd), - [0x0b] =3D SSSE3_OP(pmulhrsw), - [0x10] =3D SSE41_OP(pblendvb), - [0x14] =3D SSE41_OP(blendvps), - [0x15] =3D SSE41_OP(blendvpd), - [0x17] =3D SSE41_OP(ptest), - [0x1c] =3D SSSE3_OP(pabsb), - [0x1d] =3D SSSE3_OP(pabsw), - [0x1e] =3D SSSE3_OP(pabsd), - [0x20] =3D SSE41_OP(pmovsxbw), - [0x21] =3D SSE41_OP(pmovsxbd), - [0x22] =3D SSE41_OP(pmovsxbq), - [0x23] =3D SSE41_OP(pmovsxwd), - [0x24] =3D SSE41_OP(pmovsxwq), - [0x25] =3D SSE41_OP(pmovsxdq), - [0x28] =3D SSE41_OP(pmuldq), - [0x29] =3D SSE41_OP(pcmpeqq), - [0x2a] =3D SSE41_SPECIAL, /* movntqda */ - [0x2b] =3D SSE41_OP(packusdw), - [0x30] =3D SSE41_OP(pmovzxbw), - [0x31] =3D SSE41_OP(pmovzxbd), - [0x32] =3D SSE41_OP(pmovzxbq), - [0x33] =3D SSE41_OP(pmovzxwd), - [0x34] =3D SSE41_OP(pmovzxwq), - [0x35] =3D SSE41_OP(pmovzxdq), - [0x37] =3D SSE42_OP(pcmpgtq), - [0x38] =3D SSE41_OP(pminsb), - [0x39] =3D SSE41_OP(pminsd), - [0x3a] =3D SSE41_OP(pminuw), - [0x3b] =3D SSE41_OP(pminud), - [0x3c] =3D SSE41_OP(pmaxsb), - [0x3d] =3D SSE41_OP(pmaxsd), - [0x3e] =3D SSE41_OP(pmaxuw), - [0x3f] =3D SSE41_OP(pmaxud), - [0x40] =3D SSE41_OP(pmulld), - [0x41] =3D SSE41_OP(phminposuw), - [0xdb] =3D 
AESNI_OP(aesimc), - [0xdc] =3D AESNI_OP(aesenc), - [0xdd] =3D AESNI_OP(aesenclast), - [0xde] =3D AESNI_OP(aesdec), - [0xdf] =3D AESNI_OP(aesdeclast), -}; - -static const struct SSEOpHelper_eppi sse_op_table7[256] =3D { - [0x08] =3D SSE41_OP(roundps), - [0x09] =3D SSE41_OP(roundpd), - [0x0a] =3D SSE41_OP(roundss), - [0x0b] =3D SSE41_OP(roundsd), - [0x0c] =3D SSE41_OP(blendps), - [0x0d] =3D SSE41_OP(blendpd), - [0x0e] =3D SSE41_OP(pblendw), - [0x0f] =3D SSSE3_OP(palignr), - [0x14] =3D SSE41_SPECIAL, /* pextrb */ - [0x15] =3D SSE41_SPECIAL, /* pextrw */ - [0x16] =3D SSE41_SPECIAL, /* pextrd/pextrq */ - [0x17] =3D SSE41_SPECIAL, /* extractps */ - [0x20] =3D SSE41_SPECIAL, /* pinsrb */ - [0x21] =3D SSE41_SPECIAL, /* insertps */ - [0x22] =3D SSE41_SPECIAL, /* pinsrd/pinsrq */ - [0x40] =3D SSE41_OP(dpps), - [0x41] =3D SSE41_OP(dppd), - [0x42] =3D SSE41_OP(mpsadbw), - [0x44] =3D PCLMULQDQ_OP(pclmulqdq), - [0x60] =3D SSE42_OP(pcmpestrm), - [0x61] =3D SSE42_OP(pcmpestri), - [0x62] =3D SSE42_OP(pcmpistrm), - [0x63] =3D SSE42_OP(pcmpistri), - [0xdf] =3D AESNI_OP(aeskeygenassist), -}; - -static void gen_sse(CPUX86State *env, DisasContext *s, int b, - target_ulong pc_start) -{ - int b1, op1_offset, op2_offset, is_xmm, val; - int modrm, mod, rm, reg; - SSEFunc_0_epp sse_fn_epp; - SSEFunc_0_eppi sse_fn_eppi; - SSEFunc_0_ppi sse_fn_ppi; - SSEFunc_0_eppt sse_fn_eppt; - MemOp ot; - - b &=3D 0xff; - if (s->prefix & PREFIX_DATA) - b1 =3D 1; - else if (s->prefix & PREFIX_REPZ) - b1 =3D 2; - else if (s->prefix & PREFIX_REPNZ) - b1 =3D 3; - else - b1 =3D 0; - sse_fn_epp =3D sse_op_table1[b][b1]; - if (!sse_fn_epp) { - goto unknown_op; - } - if ((b <=3D 0x5f && b >=3D 0x10) || b =3D=3D 0xc6 || b =3D=3D 0xc2) { - is_xmm =3D 1; - } else { - if (b1 =3D=3D 0) { - /* MMX case */ - is_xmm =3D 0; - } else { - is_xmm =3D 1; - } - } - /* simple MMX/SSE operation */ - if (s->flags & HF_TS_MASK) { - gen_exception(s, EXCP07_PREX, pc_start - s->cs_base); - return; - } - if (s->flags & HF_EM_MASK) { - 
illegal_op: - gen_illegal_opcode(s); - return; - } - if (is_xmm - && !(s->flags & HF_OSFXSR_MASK) - && (b !=3D 0x38 && b !=3D 0x3a)) { - goto unknown_op; - } - if (b =3D=3D 0x0e) { - if (!(s->cpuid_ext2_features & CPUID_EXT2_3DNOW)) { - /* If we were fully decoding this we might use illegal_op. */ - goto unknown_op; - } - /* femms */ - gen_helper_emms(cpu_env); - return; - } - if (b =3D=3D 0x77) { - /* emms */ - gen_helper_emms(cpu_env); - return; - } - /* prepare MMX state (XXX: optimize by storing fptt and fptags in - the static cpu state) */ - if (!is_xmm) { - gen_helper_enter_mmx(cpu_env); - } - - modrm =3D x86_ldub_code(env, s); - reg =3D ((modrm >> 3) & 7); - if (is_xmm) { - reg |=3D REX_R(s); - } - mod =3D (modrm >> 6) & 3; - if (sse_fn_epp =3D=3D SSE_SPECIAL) { - b |=3D (b1 << 8); - switch(b) { - case 0x0e7: /* movntq */ - if (mod =3D=3D 3) { - goto illegal_op; - } - gen_lea_modrm(env, s, modrm); - gen_stq_env_A0(s, offsetof(CPUX86State, fpregs[reg].mmx)); - break; - case 0x1e7: /* movntdq */ - case 0x02b: /* movntps */ - case 0x12b: /* movntps */ - if (mod =3D=3D 3) - goto illegal_op; - gen_lea_modrm(env, s, modrm); - gen_sto_env_A0(s, offsetof(CPUX86State, xmm_regs[reg])); - break; - case 0x3f0: /* lddqu */ - if (mod =3D=3D 3) - goto illegal_op; - gen_lea_modrm(env, s, modrm); - gen_ldo_env_A0(s, offsetof(CPUX86State, xmm_regs[reg])); - break; - case 0x22b: /* movntss */ - case 0x32b: /* movntsd */ - if (mod =3D=3D 3) - goto illegal_op; - gen_lea_modrm(env, s, modrm); - if (b1 & 1) { - gen_stq_env_A0(s, offsetof(CPUX86State, - xmm_regs[reg].ZMM_Q(0))); - } else { - tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, - xmm_regs[reg].ZMM_L(0))); - gen_op_st_v(s, MO_32, s->T0, s->A0); - } - break; - case 0x6e: /* movd mm, ea */ -#ifdef TARGET_X86_64 - if (s->dflag =3D=3D MO_64) { - gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 0); - tcg_gen_st_tl(s->T0, cpu_env, - offsetof(CPUX86State, fpregs[reg].mmx)); - } else -#endif - { - gen_ldst_modrm(env, s, 
modrm, MO_32, OR_TMP0, 0); - tcg_gen_addi_ptr(s->ptr0, cpu_env, - offsetof(CPUX86State,fpregs[reg].mmx)); - tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); - gen_helper_movl_mm_T0_mmx(s->ptr0, s->tmp2_i32); - } - break; - case 0x16e: /* movd xmm, ea */ -#ifdef TARGET_X86_64 - if (s->dflag =3D=3D MO_64) { - gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 0); - tcg_gen_addi_ptr(s->ptr0, cpu_env, - offsetof(CPUX86State,xmm_regs[reg])); - gen_helper_movq_mm_T0_xmm(s->ptr0, s->T0); - } else -#endif - { - gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 0); - tcg_gen_addi_ptr(s->ptr0, cpu_env, - offsetof(CPUX86State,xmm_regs[reg])); - tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); - gen_helper_movl_mm_T0_xmm(s->ptr0, s->tmp2_i32); - } - break; - case 0x6f: /* movq mm, ea */ - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - gen_ldq_env_A0(s, offsetof(CPUX86State, fpregs[reg].mmx)); - } else { - rm =3D (modrm & 7); - tcg_gen_ld_i64(s->tmp1_i64, cpu_env, - offsetof(CPUX86State,fpregs[rm].mmx)); - tcg_gen_st_i64(s->tmp1_i64, cpu_env, - offsetof(CPUX86State,fpregs[reg].mmx)); - } - break; - case 0x010: /* movups */ - case 0x110: /* movupd */ - case 0x028: /* movaps */ - case 0x128: /* movapd */ - case 0x16f: /* movdqa xmm, ea */ - case 0x26f: /* movdqu xmm, ea */ - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - gen_ldo_env_A0(s, offsetof(CPUX86State, xmm_regs[reg])); - } else { - rm =3D (modrm & 7) | REX_B(s); - gen_op_movo(s, offsetof(CPUX86State, xmm_regs[reg]), - offsetof(CPUX86State,xmm_regs[rm])); - } - break; - case 0x210: /* movss xmm, ea */ - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - gen_op_ld_v(s, MO_32, s->T0, s->A0); - tcg_gen_st32_tl(s->T0, cpu_env, - offsetof(CPUX86State, xmm_regs[reg].ZMM_L(= 0))); - tcg_gen_movi_tl(s->T0, 0); - tcg_gen_st32_tl(s->T0, cpu_env, - offsetof(CPUX86State, xmm_regs[reg].ZMM_L(= 1))); - tcg_gen_st32_tl(s->T0, cpu_env, - offsetof(CPUX86State, xmm_regs[reg].ZMM_L(= 2))); - tcg_gen_st32_tl(s->T0, cpu_env, - offsetof(CPUX86State, 
xmm_regs[reg].ZMM_L(= 3))); - } else { - rm =3D (modrm & 7) | REX_B(s); - gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(0= )), - offsetof(CPUX86State,xmm_regs[rm].ZMM_L(0))); - } - break; - case 0x310: /* movsd xmm, ea */ - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - gen_ldq_env_A0(s, offsetof(CPUX86State, - xmm_regs[reg].ZMM_Q(0))); - tcg_gen_movi_tl(s->T0, 0); - tcg_gen_st32_tl(s->T0, cpu_env, - offsetof(CPUX86State, xmm_regs[reg].ZMM_L(= 2))); - tcg_gen_st32_tl(s->T0, cpu_env, - offsetof(CPUX86State, xmm_regs[reg].ZMM_L(= 3))); - } else { - rm =3D (modrm & 7) | REX_B(s); - gen_op_movq(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q(0= )), - offsetof(CPUX86State,xmm_regs[rm].ZMM_Q(0))); - } - break; - case 0x012: /* movlps */ - case 0x112: /* movlpd */ - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - gen_ldq_env_A0(s, offsetof(CPUX86State, - xmm_regs[reg].ZMM_Q(0))); - } else { - /* movhlps */ - rm =3D (modrm & 7) | REX_B(s); - gen_op_movq(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q(0= )), - offsetof(CPUX86State,xmm_regs[rm].ZMM_Q(1))); - } - break; - case 0x212: /* movsldup */ - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - gen_ldo_env_A0(s, offsetof(CPUX86State, xmm_regs[reg])); - } else { - rm =3D (modrm & 7) | REX_B(s); - gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(0= )), - offsetof(CPUX86State,xmm_regs[rm].ZMM_L(0))); - gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(2= )), - offsetof(CPUX86State,xmm_regs[rm].ZMM_L(2))); - } - gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(1)), - offsetof(CPUX86State,xmm_regs[reg].ZMM_L(0))); - gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(3)), - offsetof(CPUX86State,xmm_regs[reg].ZMM_L(2))); - break; - case 0x312: /* movddup */ - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - gen_ldq_env_A0(s, offsetof(CPUX86State, - xmm_regs[reg].ZMM_Q(0))); - } else { - rm =3D (modrm & 7) | REX_B(s); - gen_op_movq(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q(0= 
)), - offsetof(CPUX86State,xmm_regs[rm].ZMM_Q(0))); - } - gen_op_movq(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q(1)), - offsetof(CPUX86State,xmm_regs[reg].ZMM_Q(0))); - break; - case 0x016: /* movhps */ - case 0x116: /* movhpd */ - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - gen_ldq_env_A0(s, offsetof(CPUX86State, - xmm_regs[reg].ZMM_Q(1))); - } else { - /* movlhps */ - rm =3D (modrm & 7) | REX_B(s); - gen_op_movq(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q(1= )), - offsetof(CPUX86State,xmm_regs[rm].ZMM_Q(0))); - } - break; - case 0x216: _regs[reg].ZMM_L(3)), - offsetof(CPUX86State,xmm_regs[rm].ZMM_L(3))); - } - gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(0)), - offsetof(CPUX86State,xmm_regs[reg].ZMM_L(1))); - gen_op_movl(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_L(2)), - offsetof(CPUX86State,xmm_regs[reg].ZMM_L(3))); - break; - case 0x178: - case 0x378: - { - int bit_index, field_length; - - if (b1 =3D=3D 1 && reg !=3D 0) - goto illegal_op; - field_length =3D x86_ldub_code(env, s) & 0x3F; - bit_index =3D x86_ldub_code(env, s) & 0x3F; - tcg_gen_addi_ptr(s->ptr0, cpu_env, - offsetof(CPUX86State,xmm_regs[reg])); - if (b1 =3D=3D 1) - gen_helper_extrq_i(cpu_env, s->ptr0, - tcg_const_i32(bit_index), - tcg_const_i32(field_length)); - else - gen_helper_insertq_i(cpu_env, s->ptr0, - tcg_const_i32(bit_index), - tcg_const_i32(field_length)); - } - break; - case 0x7e: /* movd ea, mm */ -#ifdef TARGET_X86_64 - if (s->dflag =3D=3D MO_64) { - tcg_gen_ld_i64(s->T0, cpu_env, - offsetof(CPUX86State,fpregs[reg].mmx)); - gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 1); - } else -#endif - { - tcg_gen_ld32u_tl(s->T0, cpu_env, - offsetof(CPUX86State,fpregs[reg].mmx.MMX_= L(0))); - gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 1); - } - break; - case 0x17e: /* movd ea, xmm */ -#ifdef TARGET_X86_64 - if (s->dflag =3D=3D MO_64) { - tcg_gen_ld_i64(s->T0, cpu_env, - offsetof(CPUX86State,xmm_regs[reg].ZMM_Q(0)= )); - gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 1); 
- } else -#endif - { - tcg_gen_ld32u_tl(s->T0, cpu_env, - offsetof(CPUX86State,xmm_regs[reg].ZMM_L(= 0))); - gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 1); - } - break; - case 0x27e: /* movq xmm, ea */ - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - gen_ldq_env_A0(s, offsetof(CPUX86State, - xmm_regs[reg].ZMM_Q(0))); - } else { - rm =3D (modrm & 7) | REX_B(s); - gen_op_movq(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q(0= )), - offsetof(CPUX86State,xmm_regs[rm].ZMM_Q(0))); - } - gen_op_movq_env_0(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q= (1))); - break; - case 0x7f: /* movq ea, mm */ - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - gen_stq_env_A0(s, offsetof(CPUX86State, fpregs[reg].mmx)); - } else { - rm =3D (modrm & 7); - gen_op_movq(s, offsetof(CPUX86State, fpregs[rm].mmx), - offsetof(CPUX86State,fpregs[reg].mmx)); - } - break; - case 0x011: /* movups */ - case 0x111: /* movupd */ - case 0x029: /* movaps */ - case 0x129: /* movapd */ - case 0x17f: /* movdqa ea, xmm */ - case 0x27f: /* movdqu ea, xmm */ - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - gen_sto_env_A0(s, offsetof(CPUX86State, xmm_regs[reg])); - } else { - rm =3D (modrm & 7) | REX_B(s); - gen_op_movo(s, offsetof(CPUX86State, xmm_regs[rm]), - offsetof(CPUX86State,xmm_regs[reg])); - } - break; - case 0x211: /* movss ea, xmm */ - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - tcg_gen_ld32u_tl(s->T0, cpu_env, - offsetof(CPUX86State, xmm_regs[reg].ZMM_L= (0))); - gen_op_st_v(s, MO_32, s->T0, s->A0); - } else { - rm =3D (modrm & 7) | REX_B(s); - gen_op_movl(s, offsetof(CPUX86State, xmm_regs[rm].ZMM_L(0)= ), - offsetof(CPUX86State,xmm_regs[reg].ZMM_L(0))); - } - break; - case 0x311: /* movsd ea, xmm */ - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - gen_stq_env_A0(s, offsetof(CPUX86State, - xmm_regs[reg].ZMM_Q(0))); - } else { - rm =3D (modrm & 7) | REX_B(s); - gen_op_movq(s, offsetof(CPUX86State, xmm_regs[rm].ZMM_Q(0)= ), - offsetof(CPUX86State,xmm_regs[reg].ZMM_Q(0))); - } 
- break; - case 0x013: /* movlps */ - case 0x113: /* movlpd */ - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - gen_stq_env_A0(s, offsetof(CPUX86State, - xmm_regs[reg].ZMM_Q(0))); - } else { - goto illegal_op; - } - break; - case 0x017: /* movhps */ - case 0x117: /* movhpd */ - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - gen_stq_env_A0(s, offsetof(CPUX86State, - xmm_regs[reg].ZMM_Q(1))); - } else { - goto illegal_op; - } - break; - case 0x71: /* shift mm, im */ - case 0x72: - case 0x73: - case 0x171: /* shift xmm, im */ - case 0x172: - case 0x173: - val =3D x86_ldub_code(env, s); - if (is_xmm) { - tcg_gen_movi_tl(s->T0, val); - tcg_gen_st32_tl(s->T0, cpu_env, - offsetof(CPUX86State, xmm_t0.ZMM_L(0))); - tcg_gen_movi_tl(s->T0, 0); - tcg_gen_st32_tl(s->T0, cpu_env, - offsetof(CPUX86State, xmm_t0.ZMM_L(1))); - op1_offset =3D offsetof(CPUX86State,xmm_t0); - } else { - tcg_gen_movi_tl(s->T0, val); - tcg_gen_st32_tl(s->T0, cpu_env, - offsetof(CPUX86State, mmx_t0.MMX_L(0))); - tcg_gen_movi_tl(s->T0, 0); - tcg_gen_st32_tl(s->T0, cpu_env, - offsetof(CPUX86State, mmx_t0.MMX_L(1))); - op1_offset =3D offsetof(CPUX86State,mmx_t0); - } - assert(b1 < 2); - sse_fn_epp =3D sse_op_table2[((b - 1) & 3) * 8 + - (((modrm >> 3)) & 7)][b1]; - if (!sse_fn_epp) { - goto unknown_op; - } - if (is_xmm) { - rm =3D (modrm & 7) | REX_B(s); - op2_offset =3D offsetof(CPUX86State,xmm_regs[rm]); - } else { - rm =3D (modrm & 7); - op2_offset =3D offsetof(CPUX86State,fpregs[rm].mmx); - } - tcg_gen_addi_ptr(s->ptr0, cpu_env, op2_offset); - tcg_gen_addi_ptr(s->ptr1, cpu_env, op1_offset); - sse_fn_epp(cpu_env, s->ptr0, s->ptr1); - break; - case 0x050: /* movmskps */ - rm =3D (modrm & 7) | REX_B(s); - tcg_gen_addi_ptr(s->ptr0, cpu_env, - offsetof(CPUX86State,xmm_regs[rm])); - gen_helper_movmskps(s->tmp2_i32, cpu_env, s->ptr0); - tcg_gen_extu_i32_tl(cpu_regs[reg], s->tmp2_i32); - break; - case 0x150: /* movmskpd */ - rm =3D (modrm & 7) | REX_B(s); - tcg_gen_addi_ptr(s->ptr0, cpu_env, - 
offsetof(CPUX86State,xmm_regs[rm])); - gen_helper_movmskpd(s->tmp2_i32, cpu_env, s->ptr0); - tcg_gen_extu_i32_tl(cpu_regs[reg], s->tmp2_i32); - break; - case 0x02a: /* cvtpi2ps */ - case 0x12a: /* cvtpi2pd */ - gen_helper_enter_mmx(cpu_env); - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - op2_offset =3D offsetof(CPUX86State,mmx_t0); - gen_ldq_env_A0(s, op2_offset); - } else { - rm =3D (modrm & 7); - op2_offset =3D offsetof(CPUX86State,fpregs[rm].mmx); - } - op1_offset =3D offsetof(CPUX86State,xmm_regs[reg]); - tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset); - tcg_gen_addi_ptr(s->ptr1, cpu_env, op2_offset); - switch(b >> 8) { - case 0x0: - gen_helper_cvtpi2ps(cpu_env, s->ptr0, s->ptr1); - break; - default: - case 0x1: - gen_helper_cvtpi2pd(cpu_env, s->ptr0, s->ptr1); - break; - } - break; - case 0x22a: /* cvtsi2ss */ - case 0x32a: /* cvtsi2sd */ - ot =3D mo_64_32(s->dflag); - gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); - op1_offset =3D offsetof(CPUX86State,xmm_regs[reg]); - tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset); - if (ot =3D=3D MO_32) { - SSEFunc_0_epi sse_fn_epi =3D sse_op_table3ai[(b >> 8) & 1]; - tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); - sse_fn_epi(cpu_env, s->ptr0, s->tmp2_i32); - } else { -#ifdef TARGET_X86_64 - SSEFunc_0_epl sse_fn_epl =3D sse_op_table3aq[(b >> 8) & 1]; - sse_fn_epl(cpu_env, s->ptr0, s->T0); -#else - goto illegal_op; -#endif - } - break; - case 0x02c: /* cvttps2pi */ - case 0x12c: /* cvttpd2pi */ - case 0x02d: /* cvtps2pi */ - case 0x12d: /* cvtpd2pi */ - gen_helper_enter_mmx(cpu_env); - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - op2_offset =3D offsetof(CPUX86State,xmm_t0); - gen_ldo_env_A0(s, op2_offset); - } else { - rm =3D (modrm & 7) | REX_B(s); - op2_offset =3D offsetof(CPUX86State,xmm_regs[rm]); - } - op1_offset =3D offsetof(CPUX86State,fpregs[reg & 7].mmx); - tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset); - tcg_gen_addi_ptr(s->ptr1, cpu_env, op2_offset); - switch(b) { - case 0x02c: - 
gen_helper_cvttps2pi(cpu_env, s->ptr0, s->ptr1); - break; - case 0x12c: - gen_helper_cvttpd2pi(cpu_env, s->ptr0, s->ptr1); - break; - case 0x02d: - gen_helper_cvtps2pi(cpu_env, s->ptr0, s->ptr1); - break; - case 0x12d: - gen_helper_cvtpd2pi(cpu_env, s->ptr0, s->ptr1); - break; - } - break; - case 0x22c: /* cvttss2si */ - case 0x32c: /* cvttsd2si */ - case 0x22d: /* cvtss2si */ - case 0x32d: /* cvtsd2si */ - ot =3D mo_64_32(s->dflag); - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - if ((b >> 8) & 1) { - gen_ldq_env_A0(s, offsetof(CPUX86State, xmm_t0.ZMM_Q(0= ))); - } else { - gen_op_ld_v(s, MO_32, s->T0, s->A0); - tcg_gen_st32_tl(s->T0, cpu_env, - offsetof(CPUX86State, xmm_t0.ZMM_L(0))= ); - } - op2_offset =3D offsetof(CPUX86State,xmm_t0); - } else { - rm =3D (modrm & 7) | REX_B(s); - op2_offset =3D offsetof(CPUX86State,xmm_regs[rm]); - } - tcg_gen_addi_ptr(s->ptr0, cpu_env, op2_offset); - if (ot =3D=3D MO_32) { - SSEFunc_i_ep sse_fn_i_ep =3D - sse_op_table3bi[((b >> 7) & 2) | (b & 1)]; - sse_fn_i_ep(s->tmp2_i32, cpu_env, s->ptr0); - tcg_gen_extu_i32_tl(s->T0, s->tmp2_i32); - } else { -#ifdef TARGET_X86_64 - SSEFunc_l_ep sse_fn_l_ep =3D - sse_op_table3bq[((b >> 7) & 2) | (b & 1)]; - sse_fn_l_ep(s->T0, cpu_env, s->ptr0); -#else - goto illegal_op; -#endif - } - gen_op_mov_reg_v(s, ot, reg, s->T0); - break; - case 0xc4: /* pinsrw */ - case 0x1c4: - s->rip_offset =3D 1; - gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0); - val =3D x86_ldub_code(env, s); - if (b1) { - val &=3D 7; - tcg_gen_st16_tl(s->T0, cpu_env, - offsetof(CPUX86State,xmm_regs[reg].ZMM_W(v= al))); - } else { - val &=3D 3; - tcg_gen_st16_tl(s->T0, cpu_env, - offsetof(CPUX86State,fpregs[reg].mmx.MMX_W= (val))); - } - break; - case 0xc5: /* pextrw */ - case 0x1c5: - if (mod !=3D 3) - goto illegal_op; - ot =3D mo_64_32(s->dflag); - val =3D x86_ldub_code(env, s); - if (b1) { - val &=3D 7; - rm =3D (modrm & 7) | REX_B(s); - tcg_gen_ld16u_tl(s->T0, cpu_env, - offsetof(CPUX86State,xmm_regs[rm].ZMM_W(v= 
al))); - } else { - val &=3D 3; - rm =3D (modrm & 7); - tcg_gen_ld16u_tl(s->T0, cpu_env, - offsetof(CPUX86State,fpregs[rm].mmx.MMX_W(= val))); - } - reg =3D ((modrm >> 3) & 7) | REX_R(s); - gen_op_mov_reg_v(s, ot, reg, s->T0); - break; - case 0x1d6: /* movq ea, xmm */ - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - gen_stq_env_A0(s, offsetof(CPUX86State, - xmm_regs[reg].ZMM_Q(0))); - } else { - rm =3D (modrm & 7) | REX_B(s); - gen_op_movq(s, offsetof(CPUX86State, xmm_regs[rm].ZMM_Q(0)= ), - offsetof(CPUX86State,xmm_regs[reg].ZMM_Q(0))); - gen_op_movq_env_0(s, - offsetof(CPUX86State, xmm_regs[rm].ZMM_Q= (1))); - } - break; - case 0x2d6: /* movq2dq */ - gen_helper_enter_mmx(cpu_env); - rm =3D (modrm & 7); - gen_op_movq(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q(0)), - offsetof(CPUX86State,fpregs[rm].mmx)); - gen_op_movq_env_0(s, offsetof(CPUX86State, xmm_regs[reg].ZMM_Q= (1))); - break; - case 0x3d6: /* movdq2q */ - gen_helper_enter_mmx(cpu_env); - rm =3D (modrm & 7) | REX_B(s); - gen_op_movq(s, offsetof(CPUX86State, fpregs[reg & 7].mmx), - offsetof(CPUX86State,xmm_regs[rm].ZMM_Q(0))); - break; - case 0xd7: /* pmovmskb */ - case 0x1d7: - if (mod !=3D 3) - goto illegal_op; - if (b1) { - rm =3D (modrm & 7) | REX_B(s); - tcg_gen_addi_ptr(s->ptr0, cpu_env, - offsetof(CPUX86State, xmm_regs[rm])); - gen_helper_pmovmskb_xmm(s->tmp2_i32, cpu_env, s->ptr0); - } else { - rm =3D (modrm & 7); - tcg_gen_addi_ptr(s->ptr0, cpu_env, - offsetof(CPUX86State, fpregs[rm].mmx)); - gen_helper_pmovmskb_mmx(s->tmp2_i32, cpu_env, s->ptr0); - } - ((modrm >> 3) & 7) | REX_R(s); - mod =3D (modrm >> 6) & 3; - - assert(b1 < 2); - sse_fn_epp =3D sse_op_table6[b].op[b1]; - if (!sse_fn_epp) { - goto unknown_op; - } - if (!(s->cpuid_ext_features & sse_op_table6[b].ext_mask)) - goto illegal_op; - - if (b1) { - op1_offset =3D offsetof(CPUX86State,xmm_regs[reg]); - if (mod =3D=3D 3) { - op2_offset =3D offsetof(CPUX86State,xmm_regs[rm | REX_= B(s)]); - } else { - op2_offset =3D 
offsetof(CPUX86State,xmm_t0); - gen_lea_modrm(env, s, modrm); - switch (b) { - case 0x20: case 0x30: /* pmovsxbw, pmovzxbw */ - case 0x23: case 0x33: /* pmovsxwd, pmovzxwd */ - case 0x25: case 0x35: /* pmovsxdq, pmovzxdq */ - gen_ldq_env_A0(s, op2_offset + - offsetof(ZMMReg, ZMM_Q(0))); - break; - case 0x21: case 0x31: /* pmovsxbd, pmovzxbd */ - case 0x24: case 0x34: /* pmovsxwq, pmovzxwq */ - tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, - s->mem_index, MO_LEUL); - tcg_gen_st_i32(s->tmp2_i32, cpu_env, op2_offset + - offsetof(ZMMReg, ZMM_L(0))); - break; - case 0x22: case 0x32: /* pmovsxbq, pmovzxbq */ - tcg_gen_qemu_ld_tl(s->tmp0, s->A0, - s->mem_index, MO_LEUW); - tcg_gen_st16_tl(s->tmp0, cpu_env, op2_offset + - offsetof(ZMMReg, ZMM_W(0))); - break; - case 0x2a: /* movntqda */ - gen_ldo_env_A0(s, op1_offset); - return; - default: - gen_ldo_env_A0(s, op2_offset); - } - } - } else { - op1_offset =3D offsetof(CPUX86State,fpregs[reg].mmx); - if (mod =3D=3D 3) { - op2_offset =3D offsetof(CPUX86State,fpregs[rm].mmx); - } else { - op2_offset =3D offsetof(CPUX86State,mmx_t0); - gen_lea_modrm(env, s, modrm); - gen_ldq_env_A0(s, op2_offset); - } - } - if (sse_fn_epp =3D=3D SSE_SPECIAL) { - goto unknown_op; - } - - tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset); - tcg_gen_addi_ptr(s->ptr1, cpu_env, op2_offset); - sse_fn_epp(cpu_env, s->ptr0, s->ptr1); - - if (b =3D=3D 0x17) { - set_cc_op(s, CC_OP_EFLAGS); - } - break; - - case 0x238: - case 0x338: - do_0f_38_fx: - /* Various integer extensions at 0f 38 f[0-f]. */ - b =3D modrm | (b1 << 8); - modrm =3D x86_ldub_code(env, s); - reg =3D ((modrm >> 3) & 7) | REX_R(s); - - switch (b) { - case 0x3f0: /* crc32 Gd,Eb */ - case 0x3f1: /* crc32 Gd,Ey */ - do_crc32: - if (!(s->cpuid_ext_features & CPUID_EXT_SSE42)) { - goto illegal_op; - } - if ((b & 0xff) =3D=3D 0xf0) { - ot =3D MO_8; - } else if (s->dflag !=3D MO_64) { - ot =3D (s->prefix & PREFIX_DATA ? 
MO_16 : MO_32); - } else { - ot =3D MO_64; - } - - tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[reg]); - gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); - gen_helper_crc32(s->T0, s->tmp2_i32, - s->T0, tcg_const_i32(8 << ot)); - - ot =3D mo_64_32(s->dflag); - gen_op_mov_reg_v(s, ot, reg, s->T0); - break; - - case 0x1f0: /* crc32 or movbe */ - case 0x1f1: - /* For these insns, the f3 prefix is supposed to have prio= rity - over the 66 prefix, but that's not what we implement ab= ove - setting b1. */ - if (s->prefix & PREFIX_REPNZ) { - goto do_crc32; - } - /* FALLTHRU */ - case 0x0f0: /* movbe Gy,My */ - case 0x0f1: /* movbe My,Gy */ - if (!(s->cpuid_ext_features & CPUID_EXT_MOVBE)) { - goto illegal_op; - } - if (s->dflag !=3D MO_64) { - ot =3D (s->prefix & PREFIX_DATA ? MO_16 : MO_32); - } else { - ot =3D MO_64; - } - - gen_lea_modrm(env, s, modrm); - if ((b & 1) =3D=3D 0) { - tcg_gen_qemu_ld_tl(s->T0, s->A0, - s->mem_index, ot | MO_BE); - gen_op_mov_reg_v(s, ot, reg, s->T0); - } else { - tcg_gen_qemu_st_tl(cpu_regs[reg], s->A0, - s->mem_index, ot | MO_BE); - } - break; - - case 0x0f2: /* andn Gy, By, Ey */ - if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI1) - || !(s->prefix & PREFIX_VEX) - || s->vex_l !=3D 0) { - goto illegal_op; - } - ot =3D mo_64_32(s->dflag); - gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); - tcg_gen_andc_tl(s->T0, s->T0, cpu_regs[s->vex_v]); - gen_op_mov_reg_v(s, ot, reg, s->T0); - gen_op_update1_cc(s); - set_cc_op(s, CC_OP_LOGICB + ot); - break; - - case 0x0f7: /* bextr Gy, Ey, By */ - if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI1) - || !(s->prefix & PREFIX_VEX) - || s->vex_l !=3D 0) { - goto illegal_op; - } - ot =3D mo_64_32(s->dflag); - { - TCGv bound, zero; - - gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); - /* Extract START, and shift the operand. - Shifts larger than operand size get zeros. */ - tcg_gen_ext8u_tl(s->A0, cpu_regs[s->vex_v]); - tcg_gen_shr_tl(s->T0, s->T0, s->A0); - - bound =3D tcg_const_tl(ot =3D=3D MO_64 ? 
63 : 31); - zero =3D tcg_const_tl(0); - tcg_gen_movcond_tl(TCG_COND_LEU, s->T0, s->A0, bound, - s->T0, zero); - tcg_temp_free(zero); - - /* Extract the LEN into a mask. Lengths larger than - operand size get all ones. */ - tcg_gen_extract_tl(s->A0, cpu_regs[s->vex_v], 8, 8); - tcg_gen_movcond_tl(TCG_COND_LEU, s->A0, s->A0, bound, - s->A0, bound); - tcg_temp_free(bound); - tcg_gen_movi_tl(s->T1, 1); - tcg_gen_shl_tl(s->T1, s->T1, s->A0); - tcg_gen_subi_tl(s->T1, s->T1, 1); - tcg_gen_and_tl(s->T0, s->T0, s->T1); - - gen_op_mov_reg_v(s, ot, reg, s->T0); - gen_op_update1_cc(s); - set_cc_op(s, CC_OP_LOGICB + ot); - } - break; - - case 0x0f5: /* bzhi Gy, Ey, By */ - if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI2) - || !(s->prefix & PREFIX_VEX) - || s->vex_l !=3D 0) { - goto illegal_op; - } - ot =3D mo_64_32(s->dflag); - gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); - tcg_gen_ext8u_tl(s->T1, cpu_regs[s->vex_v]); - { - TCGv bound =3D tcg_const_tl(ot =3D=3D MO_64 ? 63 : 31); - /* Note that since we're using BMILG (in order to get O - cleared) we need to store the inverse into C. 
*/ - tcg_gen_setcond_tl(TCG_COND_LT, cpu_cc_src, - s->T1, bound); - tcg_gen_movcond_tl(TCG_COND_GT, s->T1, s->T1, - bound, bound, s->T1); - tcg_temp_free(bound); - } - tcg_gen_movi_tl(s->A0, -1); - tcg_gen_shl_tl(s->A0, s->A0, s->T1); - tcg_gen_andc_tl(s->T0, s->T0, s->A0); - gen_op_mov_reg_v(s, ot, reg, s->T0); - gen_op_update1_cc(s); - set_cc_op(s, CC_OP_BMILGB + ot); - break; - - case 0x3f6: /* mulx By, Gy, rdx, Ey */ - if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI2) - || !(s->prefix & PREFIX_VEX) - || s->vex_l !=3D 0) { - goto illegal_op; - } - ot =3D mo_64_32(s->dflag); - gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); - switch (ot) { - default: - tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); - tcg_gen_trunc_tl_i32(s->tmp3_i32, cpu_regs[R_EDX]); - tcg_gen_mulu2_i32(s->tmp2_i32, s->tmp3_i32, - s->tmp2_i32, s->tmp3_i32); - tcg_gen_extu_i32_tl(cpu_regs[s->vex_v], s->tmp2_i32); - tcg_gen_extu_i32_tl(cpu_regs[reg], s->tmp3_i32); - break; -#ifdef TARGET_X86_64 - case MO_64: - tcg_gen_mulu2_i64(s->T0, s->T1, - s->T0, cpu_regs[R_EDX]); - tcg_gen_mov_i64(cpu_regs[s->vex_v], s->T0); - tcg_gen_mov_i64(cpu_regs[reg], s->T1); - break; -#endif - } - break; - - case 0x3f5: /* pdep Gy, By, Ey */ - if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI2) - || !(s->prefix & PREFIX_VEX) - || s->vex_l !=3D 0) { - goto illegal_op; - } - ot =3D mo_64_32(s->dflag); - gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); - /* Note that by zero-extending the source operand, we - automatically handle zero-extending the result. 
               */
-            if (ot == MO_64) {
-                tcg_gen_mov_tl(s->T1, cpu_regs[s->vex_v]);
-            } else {
-                tcg_gen_ext32u_tl(s->T1, cpu_regs[s->vex_v]);
-            }
-            gen_helper_pdep(cpu_regs[reg], s->T1, s->T0);
-            break;
-
-        case 0x2f5: /* pext Gy, By, Ey */
-            if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI2)
-                || !(s->prefix & PREFIX_VEX)
-                || s->vex_l != 0) {
-                goto illegal_op;
-            }
-            ot = mo_64_32(s->dflag);
-            gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
-            /* Note that by zero-extending the source operand, we
-               automatically handle zero-extending the result. */
-            if (ot == MO_64) {
-                tcg_gen_mov_tl(s->T1, cpu_regs[s->vex_v]);
-            } else {
-                tcg_gen_ext32u_tl(s->T1, cpu_regs[s->vex_v]);
-            }
-            gen_helper_pext(cpu_regs[reg], s->T1, s->T0);
-            break;
-
-        case 0x1f6: /* adcx Gy, Ey */
-        case 0x2f6: /* adox Gy, Ey */
-            if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_ADX)) {
-                goto illegal_op;
-            } else {
-                TCGv carry_in, carry_out, zero;
-                int end_op;
-
-                ot = mo_64_32(s->dflag);
-                gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
-
-                /* Re-use the carry-out from a previous round. */
-                carry_in = NULL;
-                carry_out = (b == 0x1f6 ? cpu_cc_dst : cpu_cc_src2);
-                switch (s->cc_op) {
-                case CC_OP_ADCX:
-                    if (b == 0x1f6) {
-                        carry_in = cpu_cc_dst;
-                        end_op = CC_OP_ADCX;
-                    } else {
-                        end_op = CC_OP_ADCOX;
-                    }
-                    break;
-                case CC_OP_ADOX:
-                    if (b == 0x1f6) {
-                        end_op = CC_OP_ADCOX;
-                    } else {
-                        carry_in = cpu_cc_src2;
-                        end_op = CC_OP_ADOX;
-                    }
-                    break;
-                case CC_OP_ADCOX:
-                    end_op = CC_OP_ADCOX;
-                    carry_in = carry_out;
-                    break;
-                default:
-                    end_op = (b == 0x1f6 ? CC_OP_ADCX : CC_OP_ADOX);
-                    break;
-                }
-                /* If we can't reuse carry-out, get it out of EFLAGS. */
-                if (!carry_in) {
-                    if (s->cc_op != CC_OP_ADCX && s->cc_op != CC_OP_ADOX) {
-                        gen_compute_eflags(s);
-                    }
-                    carry_in = s->tmp0;
-                    tcg_gen_extract_tl(carry_in, cpu_cc_src,
-                                       ctz32(b == 0x1f6 ? CC_C : CC_O), 1);
-                }
-
-                switch (ot) {
-#ifdef TARGET_X86_64
-                case MO_32:
-                    /* If we know TL is 64-bit, and we want a 32-bit
-                       result, just do everything in 64-bit arithmetic. */
-                    tcg_gen_ext32u_i64(cpu_regs[reg], cpu_regs[reg]);
-                    tcg_gen_ext32u_i64(s->T0, s->T0);
-                    tcg_gen_add_i64(s->T0, s->T0, cpu_regs[reg]);
-                    tcg_gen_add_i64(s->T0, s->T0, carry_in);
-                    tcg_gen_ext32u_i64(cpu_regs[reg], s->T0);
-                    tcg_gen_shri_i64(carry_out, s->T0, 32);
-                    break;
-#endif
-                default:
-                    /* Otherwise compute the carry-out in two steps. */
-                    zero = tcg_const_tl(0);
-                    tcg_gen_add2_tl(s->T0, carry_out,
-                                    s->T0, zero,
-                                    carry_in, zero);
-                    tcg_gen_add2_tl(cpu_regs[reg], carry_out,
-                                    cpu_regs[reg], carry_out,
-                                    s->T0, zero);
-                    tcg_temp_free(zero);
-                    break;
-                }
-                set_cc_op(s, end_op);
-            }
-            break;
-
-        case 0x1f7: /* shlx Gy, Ey, By */
-        case 0x2f7: /* sarx Gy, Ey, By */
-        case 0x3f7: /* shrx Gy, Ey, By */
-            if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI2)
-                || !(s->prefix & PREFIX_VEX)
-                || s->vex_l != 0) {
-                goto illegal_op;
-            }
-            ot = mo_64_32(s->dflag);
-            gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
-            if (ot == MO_64) {
-                tcg_gen_andi_tl(s->T1, cpu_regs[s->vex_v], 63);
-            } else {
-                tcg_gen_andi_tl(s->T1, cpu_regs[s->vex_v], 31);
-            }
-            if (b == 0x1f7) {
-                tcg_gen_shl_tl(s->T0, s->T0, s->T1);
-            } else if (b == 0x2f7) {
-                if (ot != MO_64) {
-                    tcg_gen_ext32s_tl(s->T0, s->T0);
-                }
-                tcg_gen_sar_tl(s->T0, s->T0, s->T1);
-            } else {
-                if (ot != MO_64) {
-                    tcg_gen_ext32u_tl(s->T0, s->T0);
-                }
-                tcg_gen_shr_tl(s->T0, s->T0, s->T1);
-            }
-            gen_op_mov_reg_v(s, ot, reg, s->T0);
-            break;
-
-        case 0x0f3:
-        case 0x1f3:
-        case 0x2f3:
-        case 0x3f3: /* Group 17 */
-            if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI1)
-                || !(s->prefix & PREFIX_VEX)
-                || s->vex_l != 0) {
-                goto illegal_op;
-            }
-            ot = mo_64_32(s->dflag);
-            gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
-
-            tcg_gen_mov_tl(cpu_cc_src, s->T0);
-            switch (reg & 7) {
-            case 1: /* blsr By,Ey */
-                tcg_gen_subi_tl(s->T1, s->T0, 1);
-                tcg_gen_and_tl(s->T0, s->T0, s->T1);
-                break;
-            case 2: /* blsmsk By,Ey */
-                tcg_gen_subi_tl(s->T1, s->T0, 1);
-                tcg_gen_xor_tl(s->T0, s->T0, s->T1);
-                break;
-            case 3: /* blsi By, Ey */
-                tcg_gen_neg_tl(s->T1, s->T0);
-                tcg_gen_and_tl(s->T0, s->T0, s->T1);
-                break;
-            default:
-                goto unknown_op;
-            }
-            tcg_gen_mov_tl(cpu_cc_dst, s->T0);
-            gen_op_mov_reg_v(s, ot, s->vex_v, s->T0);
-            set_cc_op(s, CC_OP_BMILGB + ot);
-            break;
-
-        default:
-            goto unknown_op;
-        }
-        break;
-
-    case 0x03a:
-    case 0x13a:
-        b = modrm;
-        modrm = x86_ldub_code(env, s);
-        rm = modrm & 7;
-        reg = ((modrm >> 3) & 7) | REX_R(s);
-        mod = (modrm >> 6) & 3;
-
-        assert(b1 < 2);
-        sse_fn_eppi = sse_op_table7[b].op[b1];
-        if (!sse_fn_eppi) {
-            goto unknown_op;
-        }
-        if (!(s->cpuid_ext_features & sse_op_table7[b].ext_mask))
-            goto illegal_op;
-
-        s->rip_offset = 1;
-
-        if (sse_fn_eppi == SSE_SPECIAL) {
-            ot = mo_64_32(s->dflag);
-            rm = (modrm & 7) | REX_B(s);
-            if (mod != 3)
-                gen_lea_modrm(env, s, modrm);
-            reg = ((modrm >> 3) & 7) | REX_R(s);
-            val = x86_ldub_code(env, s);
-            switch (b) {
-            case 0x14: /* pextrb */
-                tcg_gen_ld8u_tl(s->T0, cpu_env, offsetof(CPUX86State,
-                                            xmm_regs[reg].ZMM_B(val & 15)));
-                if (mod == 3) {
-                    gen_op_mov_reg_v(s, ot, rm, s->T0);
-                } else {
-                    tcg_gen_qemu_st_tl(s->T0, s->A0,
-                                       s->mem_index, MO_UB);
-                }
-                break;
-            case 0x15: /* pextrw */
-                tcg_gen_ld16u_tl(s->T0, cpu_env, offsetof(CPUX86State,
-                                            xmm_regs[reg].ZMM_W(val & 7)));
-                if (mod == 3) {
-                    gen_op_mov_reg_v(s, ot, rm, s->T0);
-                } else {
-                    tcg_gen_qemu_st_tl(s->T0, s->A0,
-                                       s->mem_index, MO_LEUW);
-                }
-                break;
-            case 0x16:
-                if (ot == MO_32) { /* pextrd */
-                    tcg_gen_ld_i32(s->tmp2_i32, cpu_env,
-                                   offsetof(CPUX86State,
-                                            xmm_regs[reg].ZMM_L(val & 3)));
-                    if (mod == 3) {
-                        tcg_gen_extu_i32_tl(cpu_regs[rm], s->tmp2_i32);
-                    } else {
-                        tcg_gen_qemu_st_i32(s->tmp2_i32, s->A0,
-                                            s->mem_index, MO_LEUL);
-                    }
-                } else { /* pextrq */
-#ifdef TARGET_X86_64
-                    tcg_gen_ld_i64(s->tmp1_i64, cpu_env,
-                                   offsetof(CPUX86State,
-                                            xmm_regs[reg].ZMM_Q(val & 1)));
-                    if (mod == 3) {
-                        tcg_gen_mov_i64(cpu_regs[rm], s->tmp1_i64);
-                    } else {
-                        tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0,
-                                            s->mem_index, MO_LEUQ);
-                    }
-#else
-                    goto illegal_op;
-#endif
-                }
-                break;
-            case 0x17: /* extractps */
-                tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State,
-                                            xmm_regs[reg].ZMM_L(val & 3)));
-                if (mod == 3) {
-                    gen_op_mov_reg_v(s, ot, rm, s->T0);
-                } else {
-                    tcg_gen_qemu_st_tl(s->T0, s->A0,
-                                       s->mem_index, MO_LEUL);
-                }
-                break;
-            case 0x20: /* pinsrb */
-                if (mod == 3) {
-                    gen_op_mov_v_reg(s, MO_32, s->T0, rm);
-                } else {
-                    tcg_gen_qemu_ld_tl(s->T0, s->A0,
-                                       s->mem_index, MO_UB);
-                }
-                tcg_gen_st8_tl(s->T0, cpu_env, offsetof(CPUX86State,
-                                            xmm_regs[reg].ZMM_B(val & 15)));
-                break;
-            case 0x21: /* insertps */
-                if (mod == 3) {
-                    tcg_gen_ld_i32(s->tmp2_i32, cpu_env,
-                                   offsetof(CPUX86State,xmm_regs[rm]
-                                            .ZMM_L((val >> 6) & 3)));
-                } else {
-                    tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0,
-                                        s->mem_index, MO_LEUL);
-                }
-                tcg_gen_st_i32(s->tmp2_i32, cpu_env,
-                               offsetof(CPUX86State,xmm_regs[reg]
-                                        .ZMM_L((val >> 4) & 3)));
-                if ((val >> 0) & 1)
-                    tcg_gen_st_i32(tcg_const_i32(0 /*float32_zero*/),
-                                   cpu_env, offsetof(CPUX86State,
-                                            xmm_regs[reg].ZMM_L(0)));
-                if ((val >> 1) & 1)
-                    tcg_gen_st_i32(tcg_const_i32(0 /*float32_zero*/),
-                                   cpu_env, offsetof(CPUX86State,
-                                            xmm_regs[reg].ZMM_L(1)));
-                if ((val >> 2) & 1)
-                    tcg_gen_st_i32(tcg_const_i32(0 /*float32_zero*/),
-                                   cpu_env, offsetof(CPUX86State,
-                                            xmm_regs[reg].ZMM_L(2)));
-                if ((val >> 3) & 1)
-                    tcg_gen_st_i32(tcg_const_i32(0 /*float32_zero*/),
-                                   cpu_env, offsetof(CPUX86State,
-                                            xmm_regs[reg].ZMM_L(3)));
-                break;
-            case 0x22:
-                if (ot == MO_32) { /* pinsrd */
-                    if (mod == 3) {
-                        tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[rm]);
-                    } else {
-                        tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0,
-                                            s->mem_index, MO_LEUL);
-                    }
-                    tcg_gen_st_i32(s->tmp2_i32, cpu_env,
-                                   offsetof(CPUX86State,
-                                            xmm_regs[reg].ZMM_L(val & 3)));
-                } else { /* pinsrq */
-#ifdef TARGET_X86_64
-                    if (mod == 3) {
-                        gen_op_mov_v_reg(s, ot, s->tmp1_i64, rm);
-                    } else {
-                        tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0,
-                                            s->mem_index, MO_LEUQ);
-                    }
-                    tcg_gen_st_i64(s->tmp1_i64, cpu_env,
-                                   offsetof(CPUX86State,
-                                            xmm_regs[reg].ZMM_Q(val & 1)));
-#else
-                    goto illegal_op;
-#endif
-                }
-                break;
-            }
-            return;
-        }
-
-        if (b1) {
-            op1_offset = offsetof(CPUX86State,xmm_regs[reg]);
-            if (mod == 3) {
-                op2_offset = offsetof(CPUX86State,xmm_regs[rm | REX_B(s)]);
-            } else {
-                op2_offset = offsetof(CPUX86State,xmm_t0);
-                gen_lea_modrm(env, s, modrm);
-                gen_ldo_env_A0(s, op2_offset);
-            }
-        } else {
-            op1_offset = offsetof(CPUX86State,fpregs[reg].mmx);
-            if (mod == 3) {
-                op2_offset = offsetof(CPUX86State,fpregs[rm].mmx);
-            } else {
-                op2_offset = offsetof(CPUX86State,mmx_t0);
-                gen_lea_modrm(env, s, modrm);
-                gen_ldq_env_A0(s, op2_offset);
-            }
-        }
-        val = x86_ldub_code(env, s);
-
-        if ((b & 0xfc) == 0x60) { /* pcmpXstrX */
-            set_cc_op(s, CC_OP_EFLAGS);
-
-            if (s->dflag == MO_64) {
-                /* The helper must use entire 64-bit gp registers */
-                val |= 1 << 8;
-            }
-        }
-
-        tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset);
-        tcg_gen_addi_ptr(s->ptr1, cpu_env, op2_offset);
-        sse_fn_eppi(cpu_env, s->ptr0, s->ptr1, tcg_const_i32(val));
-        break;
-
-    case 0x33a:
-        /* Various integer extensions at 0f 3a f[0-f]. */
-        b = modrm | (b1 << 8);
-        modrm = x86_ldub_code(env, s);
-        reg = ((modrm >> 3) & 7) | REX_R(s);
-
-        switch (b) {
-        case 0x3f0: /* rorx Gy,Ey, Ib */
-            if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI2)
-                || !(s->prefix & PREFIX_VEX)
-                || s->vex_l != 0) {
-                goto illegal_op;
-            }
-            ot = mo_64_32(s->dflag);
-            gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
-            b = x86_ldub_code(env, s);
-            if (ot == MO_64) {
-                tcg_gen_rotri_tl(s->T0, s->T0, b & 63);
-            } else {
-                tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
-                tcg_gen_rotri_i32(s->tmp2_i32, s->tmp2_i32, b & 31);
-                tcg_gen_extu_i32_tl(s->T0, s->tmp2_i32);
-            }
-            gen_op_mov_reg_v(s, ot, reg, s->T0);
-            break;
-
-        default:
-            goto unknown_op;
-        }
-        break;
-
-    default:
-    unknown_op:
-        gen_unknown_opcode(env, s);
-        return;
-    }
-    } else {
-        /* generic MMX or SSE operation */
-        switch(b) {
-        case 0x70: /* pshufx insn */
-        case 0xc6: /* pshufx insn */
-        case 0xc2: /* compare insns */
-            s->rip_offset = 1;
-            break;
-        default:
-            break;
-        }
-        if (is_xmm) {
-            op1_offset = offsetof(CPUX86State,xmm_regs[reg]);
-            if (mod != 3) {
-                int sz = 4;
-
-                gen_lea_modrm(env, s, modrm);
-                op2_offset = offsetof(CPUX86State,xmm_t0);
-
-                switch (b) {
-                case 0x50 ... 0x5a:
-                case 0x5c ... 0x5f:
-                case 0xc2:
-                    /* Most sse scalar operations. */
-                    if (b1 == 2) {
-                        sz = 2;
-                    } else if (b1 == 3) {
-                        sz = 3;
-                    }
-                    break;
-
-                case 0x2e: /* ucomis[sd] */
-                case 0x2f: /* comis[sd] */
-                    if (b1 == 0) {
-                        sz = 2;
-                    } else {
-                        sz = 3;
-                    }
-                    break;
-                }
-
-                switch (sz) {
-                case 2:
-                    /* 32 bit access */
-                    gen_op_ld_v(s, MO_32, s->T0, s->A0);
-                    tcg_gen_st32_tl(s->T0, cpu_env,
-                                    offsetof(CPUX86State,xmm_t0.ZMM_L(0)));
-                    break;
-                case 3:
-                    /* 64 bit access */
-                    gen_ldq_env_A0(s, offsetof(CPUX86State, xmm_t0.ZMM_D(0)));
-                    break;
-                default:
-                    /* 128 bit access */
-                    gen_ldo_env_A0(s, op2_offset);
-                    break;
-                }
-            } else {
-                rm = (modrm & 7) | REX_B(s);
-                op2_offset = offsetof(CPUX86State,xmm_regs[rm]);
-            }
-        } else {
-            op1_offset = offsetof(CPUX86State,fpregs[reg].mmx);
-            if (mod != 3) {
-                gen_lea_modrm(env, s, modrm);
-                op2_offset = offsetof(CPUX86State,mmx_t0);
-                gen_ldq_env_A0(s, op2_offset);
-            } else {
-                rm = (modrm & 7);
-                op2_offset = offsetof(CPUX86State,fpregs[rm].mmx);
-            }
-        }
-        switch(b) {
-        case 0x0f: /* 3DNow! data insns */
-            val = x86_ldub_code(env, s);
-            sse_fn_epp = sse_op_table5[val];
-            if (!sse_fn_epp) {
-                goto unknown_op;
-            }
-            if (!(s->cpuid_ext2_features & CPUID_EXT2_3DNOW)) {
-                goto illegal_op;
-            }
-            tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset);
-            tcg_gen_addi_ptr(s->ptr1, cpu_env, op2_offset);
-            sse_fn_epp(cpu_env, s->ptr0, s->ptr1);
-            break;
-        case 0x70: /* pshufx insn */
-        case 0xc6: /* pshufx insn */
-            val = x86_ldub_code(env, s);
-            tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset);
-            tcg_gen_addi_ptr(s->ptr1, cpu_env, op2_offset);
-            /* XXX: introduce a new table? */
-            sse_fn_ppi = (SSEFunc_0_ppi)sse_fn_epp;
-            sse_fn_ppi(s->ptr0, s->ptr1, tcg_const_i32(val));
-            break;
-        case 0xc2:
-            /* compare insns, bits 7:3 (7:5 for AVX) are ignored */
-            val = x86_ldub_code(env, s) & 7;
-            sse_fn_epp = sse_op_table4[val][b1];
-
-            tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset);
-            tcg_gen_addi_ptr(s->ptr1, cpu_env, op2_offset);
-            sse_fn_epp(cpu_env, s->ptr0, s->ptr1);
-            break;
-        case 0xf7:
-            /* maskmov : we must prepare A0 */
-            if (mod != 3)
-                goto illegal_op;
-            tcg_gen_mov_tl(s->A0, cpu_regs[R_EDI]);
-            gen_extu(s->aflag, s->A0);
-            gen_add_A0_ds_seg(s);
-
-            tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset);
-            tcg_gen_addi_ptr(s->ptr1, cpu_env, op2_offset);
-            sse_fn_eppt = (SSEFunc_0_eppt)sse_fn_epp;
-            sse_fn_eppt(cpu_env, s->ptr0, s->ptr1, s->A0);
-            break;
-        }
-        if (b == 0x2e || b == 0x2f) {
-            set_cc_op(s, CC_OP_EFLAGS);
-        }
-    }
-}
-
-/* convert one instruction. s->base.is_jmp is set if the translation must
-   be stopped. Return the next pc value */
-static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
-{
-    CPUX86State *env = cpu->env_ptr;
-    int b, prefixes;
-    int shift;
-    MemOp ot, aflag, dflag;
-    int modrm, reg, rm, mod, op, opreg, val;
-    target_ulong next_eip, tval;
-    target_ulong pc_start = s->base.pc_next;
-
-    s->pc_start = s->pc = pc_start;
-    s->override = -1;
-#ifdef TARGET_X86_64
-    s->rex_w = false;
-    s->rex_r = 0;
-    s->rex_x = 0;
-    s->rex_b = 0;
-#endif
-    s->rip_offset = 0; /* for relative ip address */
-    s->vex_l = 0;
-    s->vex_v = 0;
-    if (sigsetjmp(s->jmpbuf, 0) != 0) {
-        gen_exception_gpf(s);
-        return s->pc;
-    }
-
-    prefixes = 0;
-
- next_byte:
-    b = x86_ldub_code(env, s);
-    /* Collect prefixes. */
-    switch (b) {
-    case 0xf3:
-        prefixes |= PREFIX_REPZ;
-        goto next_byte;
-    case 0xf2:
-        prefixes |= PREFIX_REPNZ;
-        goto next_byte;
-    case 0xf0:
-        prefixes |= PREFIX_LOCK;
-        goto next_byte;
-    case 0x2e:
-        s->override = R_CS;
-        goto next_byte;
-    case 0x36:
-        s->override = R_SS;
-        goto next_byte;
-    case 0x3e:
-        s->override = R_DS;
-        goto next_byte;
-    case 0x26:
-        s->override = R_ES;
-        goto next_byte;
-    case 0x64:
-        s->override = R_FS;
-        goto next_byte;
-    case 0x65:
-        s->override = R_GS;
-        goto next_byte;
-    case 0x66:
-        prefixes |= PREFIX_DATA;
-        goto next_byte;
-    case 0x67:
-        prefixes |= PREFIX_ADR;
-        goto next_byte;
-#ifdef TARGET_X86_64
-    case 0x40 ... 0x4f:
-        if (CODE64(s)) {
-            /* REX prefix */
-            prefixes |= PREFIX_REX;
-            s->rex_w = (b >> 3) & 1;
-            s->rex_r = (b & 0x4) << 1;
-            s->rex_x = (b & 0x2) << 2;
-            s->rex_b = (b & 0x1) << 3;
-            goto next_byte;
-        }
-        break;
-#endif
-    case 0xc5: /* 2-byte VEX */
-    case 0xc4: /* 3-byte VEX */
-        /* VEX prefixes cannot be used except in 32-bit mode.
-           Otherwise the instruction is LES or LDS. */
-        if (CODE32(s) && !VM86(s)) {
-            static const int pp_prefix[4] = {
-                0, PREFIX_DATA, PREFIX_REPZ, PREFIX_REPNZ
-            };
-            int vex3, vex2 = x86_ldub_code(env, s);
-
-            if (!CODE64(s) && (vex2 & 0xc0) != 0xc0) {
-                /* 4.1.4.6: In 32-bit mode, bits [7:6] must be 11b,
-                   otherwise the instruction is LES or LDS. */
-                s->pc--; /* rewind the advance_pc() x86_ldub_code() did */
-                break;
-            }
-
-            /* 4.1.1-4.1.3: No preceding lock, 66, f2, f3, or rex prefixes. */
-            if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ
-                            | PREFIX_LOCK | PREFIX_DATA | PREFIX_REX)) {
-                goto illegal_op;
-            }
-#ifdef TARGET_X86_64
-            s->rex_r = (~vex2 >> 4) & 8;
-#endif
-            if (b == 0xc5) {
-                /* 2-byte VEX prefix: RVVVVlpp, implied 0f leading opcode byte */
-                vex3 = vex2;
-                b = x86_ldub_code(env, s) | 0x100;
-            } else {
-                /* 3-byte VEX prefix: RXBmmmmm wVVVVlpp */
-                vex3 = x86_ldub_code(env, s);
-#ifdef TARGET_X86_64
-                s->rex_x = (~vex2 >> 3) & 8;
-                s->rex_b = (~vex2 >> 2) & 8;
-                s->rex_w = (vex3 >> 7) & 1;
-#endif
-                switch (vex2 & 0x1f) {
-                case 0x01: /* Implied 0f leading opcode bytes. */
-                    b = x86_ldub_code(env, s) | 0x100;
-                    break;
-                case 0x02: /* Implied 0f 38 leading opcode bytes. */
-                    b = 0x138;
-                    break;
-                case 0x03: /* Implied 0f 3a leading opcode bytes. */
-                    b = 0x13a;
-                    break;
-                default: /* Reserved for future use. */
-                    goto unknown_op;
-                }
-            }
-            s->vex_v = (~vex3 >> 3) & 0xf;
-            s->vex_l = (vex3 >> 2) & 1;
-            prefixes |= pp_prefix[vex3 & 3] | PREFIX_VEX;
-        }
-        break;
-    }
-
-    /* Post-process prefixes. */
-    if (CODE64(s)) {
-        /* In 64-bit mode, the default data size is 32-bit.  Select 64-bit
-           data with rex_w, and 16-bit data with 0x66; rex_w takes precedence
-           over 0x66 if both are present. */
-        dflag = (REX_W(s) ? MO_64 : prefixes & PREFIX_DATA ? MO_16 : MO_32);
-        /* In 64-bit mode, 0x67 selects 32-bit addressing. */
-        aflag = (prefixes & PREFIX_ADR ? MO_32 : MO_64);
-    } else {
-        /* In 16/32-bit mode, 0x66 selects the opposite data size. */
-        if (CODE32(s) ^ ((prefixes & PREFIX_DATA) != 0)) {
-            dflag = MO_32;
-        } else {
-            dflag = MO_16;
-        }
-        /* In 16/32-bit mode, 0x67 selects the opposite addressing. */
-        if (CODE32(s) ^ ((prefixes & PREFIX_ADR) != 0)) {
-            aflag = MO_32;
-        } else {
-            aflag = MO_16;
-        }
-    }
-
-    s->prefix = prefixes;
-    s->aflag = aflag;
-    s->dflag = dflag;
-
-    /* now check op code */
- reswitch:
-    switch(b) {
-    case 0x0f:
-        /**************************/
-        /* extended op code */
-        b = x86_ldub_code(env, s) | 0x100;
-        goto reswitch;
-
-        /**************************/
-        /* arith & logic */
-    case 0x00 ... 0x05:
-    case 0x08 ... 0x0d:
-    case 0x10 ... 0x15:
-    case 0x18 ... 0x1d:
-    case 0x20 ... 0x25:
-    case 0x28 ... 0x2d:
-    case 0x30 ... 0x35:
-    case 0x38 ... 0x3d:
-        {
-            int op, f, val;
-            op = (b >> 3) & 7;
-            f = (b >> 1) & 3;
-
-            ot = mo_b_d(b, dflag);
-
-            switch(f) {
-            case 0: /* OP Ev, Gv */
-                modrm = x86_ldub_code(env, s);
-                reg = ((modrm >> 3) & 7) | REX_R(s);
-                mod = (modrm >> 6) & 3;
-                rm = (modrm & 7) | REX_B(s);
-                if (mod != 3) {
-                    gen_lea_modrm(env, s, modrm);
-                    opreg = OR_TMP0;
-                } else if (op == OP_XORL && rm == reg) {
-                xor_zero:
-                    /* xor reg, reg optimisation */
-                    set_cc_op(s, CC_OP_CLR);
-                    tcg_gen_movi_tl(s->T0, 0);
-                    gen_op_mov_reg_v(s, ot, reg, s->T0);
-                    break;
-                } else {
-                    opreg = rm;
-                }
-                gen_op_mov_v_reg(s, ot, s->T1, reg);
-                gen_op(s, op, ot, opreg);
-                break;
-            case 1: /* OP Gv, Ev */
-                modrm = x86_ldub_code(env, s);
-                mod = (modrm >> 6) & 3;
-                reg = ((modrm >> 3) & 7) | REX_R(s);
-                rm = (modrm & 7) | REX_B(s);
-                if (mod != 3) {
-                    gen_lea_modrm(env, s, modrm);
-                    gen_op_ld_v(s, ot, s->T1, s->A0);
-                } else if (op == OP_XORL && rm == reg) {
-                    goto xor_zero;
-                } else {
-                    gen_op_mov_v_reg(s, ot, s->T1, rm);
-                }
-                gen_op(s, op, ot, reg);
-                break;
-            case 2: /* OP A, Iv */
-                val = insn_get(env, s, ot);
-                tcg_gen_movi_tl(s->T1, val);
-                gen_op(s, op, ot, OR_EAX);
-                break;
-            }
-        }
-        break;
-
-    case 0x82:
-        if (CODE64(s))
-            goto illegal_op;
-        /* fall through */
-    case 0x80: /* GRP1 */
-    case 0x81:
-    case 0x83:
-        {
-            int val;
-
-            ot = mo_b_d(b, dflag);
-
-            modrm = x86_ldub_code(env, s);
-            mod = (modrm >> 6) & 3;
-            rm = (modrm & 7) | REX_B(s);
-            op = (modrm >> 3) & 7;
-
-            if (mod != 3) {
-                if (b == 0x83)
-                    s->rip_offset = 1;
-                else
-                    s->rip_offset = insn_const_size(ot);
-                gen_lea_modrm(env, s, modrm);
-                opreg = OR_TMP0;
-            } else {
-                opreg = rm;
-            }
-
-            switch(b) {
-            default:
-            case 0x80:
-            case 0x81:
-            case 0x82:
-                val = insn_get(env, s, ot);
-                break;
-            case 0x83:
-                val = (int8_t)insn_get(env, s, MO_8);
-                break;
-            }
-            tcg_gen_movi_tl(s->T1, val);
-            gen_op(s, op, ot, opreg);
-        }
-        break;
-
-        /**************************/
-        /* inc, dec, and other misc arith */
-    case 0x40 ... 0x47: /* inc Gv */
-        ot = dflag;
-        gen_inc(s, ot, OR_EAX + (b & 7), 1);
-        break;
-    case 0x48 ... 0x4f: /* dec Gv */
-        ot = dflag;
-        gen_inc(s, ot, OR_EAX + (b & 7), -1);
-        break;
-    case 0xf6: /* GRP3 */
-    case 0xf7:
-        ot = mo_b_d(b, dflag);
-
-        modrm = x86_ldub_code(env, s);
-        mod = (modrm >> 6) & 3;
-        rm = (modrm & 7) | REX_B(s);
-        op = (modrm >> 3) & 7;
-        if (mod != 3) {
-            if (op == 0) {
-                s->rip_offset = insn_const_size(ot);
-            }
-            gen_lea_modrm(env, s, modrm);
-            /* For those below that handle locked memory, don't load here. */
-            if (!(s->prefix & PREFIX_LOCK)
-                || op != 2) {
-                gen_op_ld_v(s, ot, s->T0, s->A0);
-            }
-        } else {
-            gen_op_mov_v_reg(s, ot, s->T0, rm);
-        }
-
-        switch(op) {
-        case 0: /* test */
-            val = insn_get(env, s, ot);
-            tcg_gen_movi_tl(s->T1, val);
-            gen_op_testl_T0_T1_cc(s);
-            set_cc_op(s, CC_OP_LOGICB + ot);
-            break;
-        case 2: /* not */
-            if (s->prefix & PREFIX_LOCK) {
-                if (mod == 3) {
-                    goto illegal_op;
-                }
-                tcg_gen_movi_tl(s->T0, ~0);
-                tcg_gen_atomic_xor_fetch_tl(s->T0, s->A0, s->T0,
-                                            s->mem_index, ot | MO_LE);
-            } else {
-                tcg_gen_not_tl(s->T0, s->T0);
-                if (mod != 3) {
-                    gen_op_st_v(s, ot, s->T0, s->A0);
-                } else {
-                    gen_op_mov_reg_v(s, ot, rm, s->T0);
-                }
-            }
-            break;
-        case 3: /* neg */
-            if (s->prefix & PREFIX_LOCK) {
-                TCGLabel *label1;
-                TCGv a0, t0, t1, t2;
-
-                if (mod == 3) {
-                    goto illegal_op;
-                }
-                a0 = tcg_temp_local_new();
-                t0 = tcg_temp_local_new();
-                label1 = gen_new_label();
-
-                tcg_gen_mov_tl(a0, s->A0);
-                tcg_gen_mov_tl(t0, s->T0);
-
-                gen_set_label(label1);
-                t1 = tcg_temp_new();
-                t2 = tcg_temp_new();
-                tcg_gen_mov_tl(t2, t0);
-                tcg_gen_neg_tl(t1, t0);
-                tcg_gen_atomic_cmpxchg_tl(t0, a0, t0, t1,
-                                          s->mem_index, ot | MO_LE);
-                tcg_temp_free(t1);
-                tcg_gen_brcond_tl(TCG_COND_NE, t0, t2, label1);
-
-                tcg_temp_free(t2);
-                tcg_temp_free(a0);
-                tcg_gen_mov_tl(s->T0, t0);
-                tcg_temp_free(t0);
-            } else {
-                tcg_gen_neg_tl(s->T0, s->T0);
-                if (mod != 3) {
-                    gen_op_st_v(s, ot, s->T0, s->A0);
-                } else {
-                    gen_op_mov_reg_v(s, ot, rm, s->T0);
-                }
-            }
-            gen_op_update_neg_cc(s);
-            set_cc_op(s, CC_OP_SUBB + ot);
-            break;
-        case 4: /* mul */
-            switch(ot) {
-            case MO_8:
-                gen_op_mov_v_reg(s, MO_8, s->T1, R_EAX);
-                tcg_gen_ext8u_tl(s->T0, s->T0);
-                tcg_gen_ext8u_tl(s->T1, s->T1);
-                /* XXX: use 32 bit mul which could be faster */
-                tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
-                tcg_gen_mov_tl(cpu_cc_dst, s->T0);
-                tcg_gen_andi_tl(cpu_cc_src, s->T0, 0xff00);
-                set_cc_op(s, CC_OP_MULB);
-                break;
-            case MO_16:
-                gen_op_mov_v_reg(s, MO_16, s->T1, R_EAX);
-                tcg_gen_ext16u_tl(s->T0, s->T0);
-                tcg_gen_ext16u_tl(s->T1, s->T1);
-                /* XXX: use 32 bit mul which could be faster */
-                tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
-                tcg_gen_mov_tl(cpu_cc_dst, s->T0);
-                tcg_gen_shri_tl(s->T0, s->T0, 16);
-                gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0);
-                tcg_gen_mov_tl(cpu_cc_src, s->T0);
-                set_cc_op(s, CC_OP_MULW);
-                break;
-            default:
-            case MO_32:
-                tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
-                tcg_gen_trunc_tl_i32(s->tmp3_i32, cpu_regs[R_EAX]);
-                tcg_gen_mulu2_i32(s->tmp2_i32, s->tmp3_i32,
-                                  s->tmp2_i32, s->tmp3_i32);
-                tcg_gen_extu_i32_tl(cpu_regs[R_EAX], s->tmp2_i32);
-                tcg_gen_extu_i32_tl(cpu_regs[R_EDX], s->tmp3_i32);
-                tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[R_EAX]);
-                tcg_gen_mov_tl(cpu_cc_src, cpu_regs[R_EDX]);
-                set_cc_op(s, CC_OP_MULL);
-                break;
-#ifdef TARGET_X86_64
-            case MO_64:
-                tcg_gen_mulu2_i64(cpu_regs[R_EAX], cpu_regs[R_EDX],
-                                  s->T0, cpu_regs[R_EAX]);
-                tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[R_EAX]);
-                tcg_gen_mov_tl(cpu_cc_src, cpu_regs[R_EDX]);
-                set_cc_op(s, CC_OP_MULQ);
-                break;
-#endif
-            }
-            break;
-        case 5: /* imul */
-            switch(ot) {
-            case MO_8:
-                gen_op_mov_v_reg(s, MO_8, s->T1, R_EAX);
-                tcg_gen_ext8s_tl(s->T0, s->T0);
-                tcg_gen_ext8s_tl(s->T1, s->T1);
-                /* XXX: use 32 bit mul which could be faster */
-                tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
-                tcg_gen_mov_tl(cpu_cc_dst, s->T0);
-                tcg_gen_ext8s_tl(s->tmp0, s->T0);
-                tcg_gen_sub_tl(cpu_cc_src, s->T0, s->tmp0);
-                set_cc_op(s, CC_OP_MULB);
-                break;
-            case MO_16:
-                gen_op_mov_v_reg(s, MO_16, s->T1, R_EAX);
-                tcg_gen_ext16s_tl(s->T0, s->T0);
-                tcg_gen_ext16s_tl(s->T1, s->T1);
-                /* XXX: use 32 bit mul which could be faster */
-                tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
-                tcg_gen_mov_tl(cpu_cc_dst, s->T0);
-                tcg_gen_ext16s_tl(s->tmp0, s->T0);
-                tcg_gen_sub_tl(cpu_cc_src, s->T0, s->tmp0);
-                tcg_gen_shri_tl(s->T0, s->T0, 16);
-                gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0);
-                set_cc_op(s, CC_OP_MULW);
-                break;
-            default:
-            case MO_32:
-                tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
-                tcg_gen_trunc_tl_i32(s->tmp3_i32, cpu_regs[R_EAX]);
-                tcg_gen_muls2_i32(s->tmp2_i32, s->tmp3_i32,
-                                  s->tmp2_i32, s->tmp3_i32);
-                tcg_gen_extu_i32_tl(cpu_regs[R_EAX], s->tmp2_i32);
-                tcg_gen_extu_i32_tl(cpu_regs[R_EDX], s->tmp3_i32);
-                tcg_gen_sari_i32(s->tmp2_i32, s->tmp2_i32, 31);
-                tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[R_EAX]);
-                tcg_gen_sub_i32(s->tmp2_i32, s->tmp2_i32, s->tmp3_i32);
-                tcg_gen_extu_i32_tl(cpu_cc_src, s->tmp2_i32);
-                set_cc_op(s, CC_OP_MULL);
-                break;
-#ifdef TARGET_X86_64
-            case MO_64:
-                tcg_gen_muls2_i64(cpu_regs[R_EAX], cpu_regs[R_EDX],
-                                  s->T0, cpu_regs[R_EAX]);
-                tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[R_EAX]);
-                tcg_gen_sari_tl(cpu_cc_src, cpu_regs[R_EAX], 63);
-                tcg_gen_sub_tl(cpu_cc_src, cpu_cc_src, cpu_regs[R_EDX]);
-                set_cc_op(s, CC_OP_MULQ);
-                break;
-#endif
-            }
-            break;
-        case 6: /* div */
-            switch(ot) {
-            case MO_8:
-                gen_helper_divb_AL(cpu_env, s->T0);
-                break;
-            case MO_16:
-                gen_helper_divw_AX(cpu_env, s->T0);
-                break;
-            default:
-            case MO_32:
-                gen_helper_divl_EAX(cpu_env, s->T0);
-                break;
-#ifdef TARGET_X86_64
-            case MO_64:
-                gen_helper_divq_EAX(cpu_env, s->T0);
-                break;
-#endif
-            }
-            break;
-        case 7: /* idiv */
-            switch(ot) {
-            case MO_8:
-                gen_helper_idivb_AL(cpu_env, s->T0);
-                break;
-            case MO_16:
-                gen_helper_idivw_AX(cpu_env, s->T0);
-                break;
-            default:
-            case MO_32:
-                gen_helper_idivl_EAX(cpu_env, s->T0);
-                break;
-#ifdef TARGET_X86_64
-            case MO_64:
-                gen_helper_idivq_EAX(cpu_env, s->T0);
-                break;
-#endif
-            }
-            break;
-        default:
-            goto unknown_op;
-        }
-        break;
-
-    case 0xfe: /* GRP4 */
-    case 0xff: /* GRP5 */
-        ot = mo_b_d(b, dflag);
-
-        modrm = x86_ldub_code(env, s);
-        mod = (modrm >> 6) & 3;
-        rm = (modrm & 7) | REX_B(s);
-        op = (modrm >> 3) & 7;
-        if (op >= 2 && b == 0xfe) {
-            goto unknown_op;
-        }
-        if (CODE64(s)) {
-            if (op == 2 || op == 4) {
-                /* operand size for jumps is 64 bit */
-                ot = MO_64;
-            } else if (op == 3 || op == 5) {
-                ot = dflag != MO_16 ? MO_32 + REX_W(s) : MO_16;
-            } else if (op == 6) {
-                /* default push size is 64 bit */
-                ot = mo_pushpop(s, dflag);
-            }
-        }
-        if (mod != 3) {
-            gen_lea_modrm(env, s, modrm);
-            if (op >= 2 && op != 3 && op != 5)
-                gen_op_ld_v(s, ot, s->T0, s->A0);
-        } else {
-            gen_op_mov_v_reg(s, ot, s->T0, rm);
-        }
-
-        switch(op) {
-        case 0: /* inc Ev */
-            if (mod != 3)
-                opreg = OR_TMP0;
-            else
-                opreg = rm;
-            gen_inc(s, ot, opreg, 1);
-            break;
-        case 1: /* dec Ev */
-            if (mod != 3)
-                opreg = OR_TMP0;
-            else
-                opreg = rm;
-            gen_inc(s, ot, opreg, -1);
-            break;
-        case 2: /* call Ev */
-            /* XXX: optimize if memory (no 'and' is necessary) */
-            if (dflag == MO_16) {
-                tcg_gen_ext16u_tl(s->T0, s->T0);
-            }
-            next_eip = s->pc - s->cs_base;
-            tcg_gen_movi_tl(s->T1, next_eip);
-            gen_push_v(s, s->T1);
-            gen_op_jmp_v(s->T0);
-            gen_bnd_jmp(s);
-            gen_jr(s, s->T0);
-            break;
-        case 3: /* lcall Ev */
-            if (mod == 3) {
-                goto illegal_op;
-            }
-            gen_op_ld_v(s, ot, s->T1, s->A0);
-            gen_add_A0_im(s, 1 << ot);
-            gen_op_ld_v(s, MO_16, s->T0, s->A0);
-        do_lcall:
-            if (PE(s) && !VM86(s)) {
-                tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
-                gen_helper_lcall_protected(cpu_env, s->tmp2_i32, s->T1,
-                                           tcg_const_i32(dflag - 1),
-                                           tcg_const_tl(s->pc - s->cs_base));
-            } else {
-                tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
-                gen_helper_lcall_real(cpu_env, s->tmp2_i32, s->T1,
-                                      tcg_const_i32(dflag - 1),
-                                      tcg_const_i32(s->pc - s->cs_base));
-            }
-            tcg_gen_ld_tl(s->tmp4, cpu_env, offsetof(CPUX86State, eip));
-            gen_jr(s, s->tmp4);
-            break;
-        case 4: /* jmp Ev */
-            if (dflag == MO_16) {
-                tcg_gen_ext16u_tl(s->T0, s->T0);
-            }
-            gen_op_jmp_v(s->T0);
-            gen_bnd_jmp(s);
-            gen_jr(s, s->T0);
-            break;
-        case 5: /* ljmp Ev */
-            if (mod == 3) {
-                goto illegal_op;
-            }
-            gen_op_ld_v(s, ot, s->T1, s->A0);
-            gen_add_A0_im(s, 1 << ot);
-            gen_op_ld_v(s, MO_16, s->T0, s->A0);
-        do_ljmp:
-            if (PE(s) && !VM86(s)) {
-                tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
-                gen_helper_ljmp_protected(cpu_env, s->tmp2_i32, s->T1,
-                                          tcg_const_tl(s->pc - s->cs_base));
-            } else {
-                gen_op_movl_seg_T0_vm(s, R_CS);
-                gen_op_jmp_v(s->T1);
-            }
-            tcg_gen_ld_tl(s->tmp4, cpu_env, offsetof(CPUX86State, eip));
-            gen_jr(s, s->tmp4);
-            break;
-        case 6: /* push Ev */
-            gen_push_v(s, s->T0);
-            break;
-        default:
-            goto unknown_op;
-        }
-        break;
-
-    case 0x84: /* test Ev, Gv */
-    case 0x85:
-        ot = mo_b_d(b, dflag);
-
-        modrm = x86_ldub_code(env, s);
-        reg = ((modrm >> 3) & 7) | REX_R(s);
-
-        gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
-        gen_op_mov_v_reg(s, ot, s->T1, reg);
-        gen_op_testl_T0_T1_cc(s);
-        set_cc_op(s, CC_OP_LOGICB + ot);
-        break;
-
-    case 0xa8: /* test eAX, Iv */
-    case 0xa9:
-        ot = mo_b_d(b, dflag);
-        val = insn_get(env, s, ot);
-
-        gen_op_mov_v_reg(s, ot, s->T0, OR_EAX);
-        tcg_gen_movi_tl(s->T1, val);
-        gen_op_testl_T0_T1_cc(s);
-        set_cc_op(s, CC_OP_LOGICB + ot);
-        break;
-
-    case 0x98: /* CWDE/CBW */
-        switch (dflag) {
-#ifdef TARGET_X86_64
-        case MO_64:
-            gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
-            tcg_gen_ext32s_tl(s->T0, s->T0);
-            gen_op_mov_reg_v(s, MO_64, R_EAX, s->T0);
-            break;
-#endif
-        case MO_32:
-            gen_op_mov_v_reg(s, MO_16, s->T0, R_EAX);
-            tcg_gen_ext16s_tl(s->T0, s->T0);
-            gen_op_mov_reg_v(s, MO_32, R_EAX, s->T0);
-            break;
-        case MO_16:
-            gen_op_mov_v_reg(s, MO_8, s->T0, R_EAX);
-            tcg_gen_ext8s_tl(s->T0, s->T0);
-            gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
-            break;
-        default:
-            tcg_abort();
-        }
-        break;
-    case 0x99: /* CDQ/CWD */
-        switch (dflag) {
-#ifdef TARGET_X86_64
-        case MO_64:
-            gen_op_mov_v_reg(s, MO_64, s->T0, R_EAX);
-            tcg_gen_sari_tl(s->T0, s->T0, 63);
-            gen_op_mov_reg_v(s, MO_64, R_EDX, s->T0);
-            break;
-#endif
-        case MO_32:
-            gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
-            tcg_gen_ext32s_tl(s->T0, s->T0);
-            tcg_gen_sari_tl(s->T0, s->T0, 31);
-            gen_op_mov_reg_v(s, MO_32, R_EDX, s->T0);
-            break;
-        case MO_16:
-            gen_op_mov_v_reg(s, MO_16, s->T0, R_EAX);
-            tcg_gen_ext16s_tl(s->T0, s->T0);
-            tcg_gen_sari_tl(s->T0, s->T0, 15);
-            gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0);
-            break;
-        default:
-            tcg_abort();
-        }
-        break;
-    case 0x1af: /* imul Gv, Ev */
-    case 0x69: /* imul Gv, Ev, I */
-    case 0x6b:
-        ot = dflag;
-        modrm = x86_ldub_code(env, s);
-        reg = ((modrm >> 3) & 7) | REX_R(s);
-        if (b == 0x69)
-            s->rip_offset = insn_const_size(ot);
-        else if (b == 0x6b)
-            s->rip_offset = 1;
-        gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
-        if (b == 0x69) {
-            val = insn_get(env, s, ot);
-            tcg_gen_movi_tl(s->T1, val);
-        } else if (b == 0x6b) {
-            val = (int8_t)insn_get(env, s, MO_8);
-            tcg_gen_movi_tl(s->T1, val);
-        } else {
-            gen_op_mov_v_reg(s, ot, s->T1, reg);
-        }
-        switch (ot) {
-#ifdef TARGET_X86_64
-        case MO_64:
-            tcg_gen_muls2_i64(cpu_regs[reg], s->T1, s->T0, s->T1);
-            tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[reg]);
-            tcg_gen_sari_tl(cpu_cc_src, cpu_cc_dst, 63);
-            tcg_gen_sub_tl(cpu_cc_src, cpu_cc_src, s->T1);
-            break;
-#endif
-        case MO_32:
-            tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
-            tcg_gen_trunc_tl_i32(s->tmp3_i32, s->T1);
-            tcg_gen_muls2_i32(s->tmp2_i32, s->tmp3_i32,
-                              s->tmp2_i32, s->tmp3_i32);
-            tcg_gen_extu_i32_tl(cpu_regs[reg], s->tmp2_i32);
-            tcg_gen_sari_i32(s->tmp2_i32, s->tmp2_i32, 31);
-            tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[reg]);
-            tcg_gen_sub_i32(s->tmp2_i32, s->tmp2_i32, s->tmp3_i32);
-            tcg_gen_extu_i32_tl(cpu_cc_src, s->tmp2_i32);
-            break;
-        default:
-            tcg_gen_ext16s_tl(s->T0, s->T0);
-            tcg_gen_ext16s_tl(s->T1, s->T1);
-            /* XXX: use 32 bit mul which could be faster */
-            tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-            tcg_gen_mov_tl(cpu_cc_dst, s->T0);
-            tcg_gen_ext16s_tl(s->tmp0, s->T0);
-            tcg_gen_sub_tl(cpu_cc_src, s->T0, s->tmp0);
-            gen_op_mov_reg_v(s, ot, reg, s->T0);
-            break;
-        }
-        set_cc_op(s, CC_OP_MULB + ot);
-        break;
-    case 0x1c0:
-    case 0x1c1: /* xadd Ev, Gv */
-        ot = mo_b_d(b, dflag);
-        modrm = x86_ldub_code(env, s);
-        reg = ((modrm >> 3) & 7) | REX_R(s);
-        mod = (modrm >> 6) & 3;
-        gen_op_mov_v_reg(s, ot, s->T0, reg);
-        if (mod == 3) {
-            rm = (modrm & 7) | REX_B(s);
-            gen_op_mov_v_reg(s, ot, s->T1, rm);
-            tcg_gen_add_tl(s->T0, s->T0, s->T1);
-            gen_op_mov_reg_v(s, ot, reg, s->T1);
-            gen_op_mov_reg_v(s, ot, rm, s->T0);
-        } else {
-            gen_lea_modrm(env, s, modrm);
-            if (s->prefix & PREFIX_LOCK) {
-                tcg_gen_atomic_fetch_add_tl(s->T1, s->A0, s->T0,
-                                            s->mem_index, ot | MO_LE);
-                tcg_gen_add_tl(s->T0, s->T0, s->T1);
-            } else {
-                gen_op_ld_v(s, ot, s->T1, s->A0);
-                tcg_gen_add_tl(s->T0, s->T0, s->T1);
-                gen_op_st_v(s, ot, s->T0, s->A0);
-            }
-            gen_op_mov_reg_v(s, ot, reg, s->T1);
-        }
-        gen_op_update2_cc(s);
-        set_cc_op(s, CC_OP_ADDB + ot);
-        break;
-    case 0x1b0:
-    case 0x1b1: /* cmpxchg Ev, Gv */
-        {
-            TCGv oldv, newv, cmpv;
-
-            ot = mo_b_d(b, dflag);
-            modrm = x86_ldub_code(env, s);
-            reg = ((modrm >> 3) & 7) | REX_R(s);
-            mod = (modrm >> 6) & 3;
-            oldv = tcg_temp_new();
-            newv = tcg_temp_new();
-            cmpv = tcg_temp_new();
-            gen_op_mov_v_reg(s, ot, newv, reg);
-            tcg_gen_mov_tl(cmpv, cpu_regs[R_EAX]);
-
-            if (s->prefix & PREFIX_LOCK) {
-                if (mod == 3) {
-                    goto illegal_op;
-                }
-                gen_lea_modrm(env, s, modrm);
-                tcg_gen_atomic_cmpxchg_tl(oldv, s->A0, cmpv, newv,
-                                          s->mem_index, ot | MO_LE);
-                gen_op_mov_reg_v(s, ot, R_EAX, oldv);
-            } else {
-                if (mod == 3) {
-                    rm = (modrm & 7) | REX_B(s);
-                    gen_op_mov_v_reg(s, ot, oldv, rm);
-                } else {
-                    gen_lea_modrm(env, s, modrm);
-                    gen_op_ld_v(s, ot, oldv, s->A0);
-                    rm = 0; /* avoid warning */
-                }
-                gen_extu(ot, oldv);
-                gen_extu(ot, cmpv);
-                /* store value = (old == cmp ? new : old); */
-                tcg_gen_movcond_tl(TCG_COND_EQ, newv, oldv, cmpv, newv, oldv);
-                if (mod == 3) {
-                    gen_op_mov_reg_v(s, ot, R_EAX, oldv);
-                    gen_op_mov_reg_v(s, ot, rm, newv);
-                } else {
-                    /* Perform an unconditional store cycle like physical cpu;
-                       must be before changing accumulator to ensure
-                       idempotency if the store faults and the instruction
-                       is restarted */
-                    gen_op_st_v(s, ot, newv, s->A0);
-                    gen_op_mov_reg_v(s, ot, R_EAX, oldv);
-                }
-            }
-            tcg_gen_mov_tl(cpu_cc_src, oldv);
-            tcg_gen_mov_tl(s->cc_srcT, cmpv);
-            tcg_gen_sub_tl(cpu_cc_dst, cmpv, oldv);
-            set_cc_op(s, CC_OP_SUBB + ot);
-            tcg_temp_free(oldv);
-            tcg_temp_free(newv);
-            tcg_temp_free(cmpv);
-        }
-        break;
-    case 0x1c7: /* cmpxchg8b */
-        modrm = x86_ldub_code(env, s);
-        mod = (modrm >> 6) & 3;
-        switch ((modrm >> 3) & 7) {
-        case 1: /* CMPXCHG8, CMPXCHG16 */
-            if (mod == 3) {
-                goto illegal_op;
-            }
-#ifdef TARGET_X86_64
-            if (dflag == MO_64) {
-                if (!(s->cpuid_ext_features & CPUID_EXT_CX16)) {
-                    goto illegal_op;
-                }
-                gen_lea_modrm(env, s, modrm);
-                if ((s->prefix & PREFIX_LOCK) &&
-                    (tb_cflags(s->base.tb) & CF_PARALLEL)) {
-                    gen_helper_cmpxchg16b(cpu_env, s->A0);
-                } else {
-                    gen_helper_cmpxchg16b_unlocked(cpu_env, s->A0);
-                }
-                set_cc_op(s, CC_OP_EFLAGS);
-                break;
-            }
-#endif
-            if (!(s->cpuid_features & CPUID_CX8)) {
-                goto illegal_op;
-            }
-            gen_lea_modrm(env, s, modrm);
-            if ((s->prefix & PREFIX_LOCK) &&
-                (tb_cflags(s->base.tb) & CF_PARALLEL)) {
-                gen_helper_cmpxchg8b(cpu_env, s->A0);
-            } else {
-                gen_helper_cmpxchg8b_unlocked(cpu_env, s->A0);
-            }
-            set_cc_op(s, CC_OP_EFLAGS);
-            break;
-
-        case 7: /* RDSEED */
-        case 6: /* RDRAND */
-            if (mod != 3 ||
-                (s->prefix & (PREFIX_LOCK | PREFIX_REPZ | PREFIX_REPNZ)) ||
-                !(s->cpuid_ext_features & CPUID_EXT_RDRAND)) {
-                goto illegal_op;
-            }
-            if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
-                gen_io_start();
-            }
-            gen_helper_rdrand(s->T0, cpu_env);
-            rm = (modrm & 7) | REX_B(s);
-            gen_op_mov_reg_v(s, dflag, rm, s->T0);
-            set_cc_op(s, CC_OP_EFLAGS);
-            if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
-                gen_jmp(s, s->pc - s->cs_base);
-            }
-            break;
-
-        default:
-            goto unknown_op;
-        }
-        break;
-
-        /**************************/
-        /* push/pop */
-    case 0x50 ... 0x57: /* push */
-        gen_op_mov_v_reg(s, MO_32, s->T0, (b & 7) | REX_B(s));
-        gen_push_v(s, s->T0);
-        break;
-    case 0x58 ... 0x5f: /* pop */
-        ot = gen_pop_T0(s);
-        /* NOTE: order is important for pop %sp */
-        gen_pop_update(s, ot);
-        gen_op_mov_reg_v(s, ot, (b & 7) | REX_B(s), s->T0);
-        break;
-    case 0x60: /* pusha */
-        if (CODE64(s))
-            goto illegal_op;
-        gen_pusha(s);
-        break;
-    case 0x61: /* popa */
-        if (CODE64(s))
-            goto illegal_op;
-        gen_popa(s);
-        break;
-    case 0x68: /* push Iv */
-    case 0x6a:
-        ot = mo_pushpop(s, dflag);
-        if (b == 0x68)
-            val = insn_get(env, s, ot);
-        else
-            val = (int8_t)insn_get(env, s, MO_8);
-        tcg_gen_movi_tl(s->T0, val);
-        gen_push_v(s, s->T0);
-        break;
-    case 0x8f: /* pop Ev */
-        modrm = x86_ldub_code(env, s);
-        mod = (modrm >> 6) & 3;
-        ot = gen_pop_T0(s);
-        if (mod == 3) {
-            /* NOTE: order is important for pop %sp */
-            gen_pop_update(s, ot);
-            rm = (modrm & 7) | REX_B(s);
-            gen_op_mov_reg_v(s, ot, rm, s->T0);
-        } else {
-            /* NOTE: order is important too for MMU exceptions */
-            s->popl_esp_hack = 1 << ot;
-            gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
-            s->popl_esp_hack = 0;
-            gen_pop_update(s, ot);
-        }
-        break;
-    case 0xc8: /* enter */
-        {
-            int level;
-            val = x86_lduw_code(env, s);
-            level = x86_ldub_code(env, s);
-            gen_enter(s, val, level);
-        }
-        break;
-    case 0xc9: /* leave */
-        gen_leave(s);
-        break;
-    case 0x06: /* push es */
-    case 0x0e: /* push cs */
-    case 0x16: /* push ss */
-    case 0x1e: /* push ds */
-        if (CODE64(s))
-            goto illegal_op;
-        gen_op_movl_T0_seg(s, b >> 3);
-        gen_push_v(s, s->T0);
-        break;
-    case 0x1a0: /* push fs */
-    case 0x1a8: /* push gs */
-        gen_op_movl_T0_seg(s, (b >> 3) & 7);
-        gen_push_v(s, s->T0);
-        break;
-    case 0x07: /* pop es */
-    case 0x17: /* pop ss */
-    case 0x1f: /* pop ds */
-        if (CODE64(s))
-            goto illegal_op;
-        reg = b >> 3;
-        ot = gen_pop_T0(s);
-        gen_movl_seg_T0(s, reg);
-        gen_pop_update(s, ot);
-        /* Note that reg == R_SS in gen_movl_seg_T0 always sets is_jmp. */
-        if (s->base.is_jmp) {
-            gen_jmp_im(s, s->pc - s->cs_base);
-            if (reg == R_SS) {
-                s->flags &= ~HF_TF_MASK;
-                gen_eob_inhibit_irq(s, true);
-            } else {
-                gen_eob(s);
-            }
-        }
-        break;
-    case 0x1a1: /* pop fs */
-    case 0x1a9: /* pop gs */
-        ot = gen_pop_T0(s);
-        gen_movl_seg_T0(s, (b >> 3) & 7);
-        gen_pop_update(s, ot);
-        if (s->base.is_jmp) {
-            gen_jmp_im(s, s->pc - s->cs_base);
-            gen_eob(s);
-        }
-        break;
-
-        /**************************/
-        /* mov */
-    case 0x88:
-    case 0x89: /* mov Gv, Ev */
-        ot = mo_b_d(b, dflag);
-        modrm = x86_ldub_code(env, s);
-        reg = ((modrm >> 3) & 7) | REX_R(s);
-
-        /* generate a generic store */
-        gen_ldst_modrm(env, s, modrm, ot, reg, 1);
-        break;
-    case 0xc6:
-    case 0xc7: /* mov Ev, Iv */
-        ot = mo_b_d(b, dflag);
-        modrm = x86_ldub_code(env, s);
-        mod = (modrm >> 6) & 3;
-        if (mod != 3) {
-            s->rip_offset = insn_const_size(ot);
-            gen_lea_modrm(env, s, modrm);
-        }
-        val = insn_get(env, s, ot);
-        tcg_gen_movi_tl(s->T0, val);
-        if (mod != 3) {
-            gen_op_st_v(s, ot, s->T0, s->A0);
-        } else {
-            gen_op_mov_reg_v(s, ot, (modrm & 7) | REX_B(s), s->T0);
-        }
-        break;
-    case 0x8a:
-    case 0x8b: /* mov Ev, Gv */
-        ot = mo_b_d(b, dflag);
-        modrm = x86_ldub_code(env, s);
-        reg = ((modrm >> 3) & 7) | REX_R(s);
-
-        gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
-        gen_op_mov_reg_v(s, ot, reg, s->T0);
-        break;
-    case 0x8e: /* mov seg, Gv */
-        modrm = x86_ldub_code(env, s);
-        reg = (modrm >> 3) & 7;
-        if (reg >= 6 || reg == R_CS)
-            goto illegal_op;
-        gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
-        gen_movl_seg_T0(s, reg);
-        /* Note that reg == R_SS in gen_movl_seg_T0 always sets is_jmp. */
-        if (s->base.is_jmp) {
-            gen_jmp_im(s, s->pc - s->cs_base);
-            if (reg == R_SS) {
-                s->flags &= ~HF_TF_MASK;
-                gen_eob_inhibit_irq(s, true);
-            } else {
-                gen_eob(s);
-            }
-        }
-        break;
-    case 0x8c: /* mov Gv, seg */
-        modrm = x86_ldub_code(env, s);
-        reg = (modrm >> 3) & 7;
-        mod = (modrm >> 6) & 3;
-        if (reg >= 6)
-            goto illegal_op;
-        gen_op_movl_T0_seg(s, reg);
-        ot = mod == 3 ? dflag : MO_16;
-        gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
-        break;
-
-    case 0x1b6: /* movzbS Gv, Eb */
-    case 0x1b7: /* movzwS Gv, Eb */
-    case 0x1be: /* movsbS Gv, Eb */
-    case 0x1bf: /* movswS Gv, Eb */
-        {
-            MemOp d_ot;
-            MemOp s_ot;
-
-            /* d_ot is the size of destination */
-            d_ot = dflag;
-            /* ot is the size of source */
-            ot = (b & 1) + MO_8;
-            /* s_ot is the sign+size of source */
-            s_ot = b & 8 ? MO_SIGN | ot : ot;
-
-            modrm = x86_ldub_code(env, s);
-            reg = ((modrm >> 3) & 7) | REX_R(s);
-            mod = (modrm >> 6) & 3;
-            rm = (modrm & 7) | REX_B(s);
-
-            if (mod == 3) {
-                if (s_ot == MO_SB && byte_reg_is_xH(s, rm)) {
-                    tcg_gen_sextract_tl(s->T0, cpu_regs[rm - 4], 8, 8);
-                } else {
-                    gen_op_mov_v_reg(s, ot, s->T0, rm);
-                    switch (s_ot) {
-                    case MO_UB:
-                        tcg_gen_ext8u_tl(s->T0, s->T0);
-                        break;
-                    case MO_SB:
-                        tcg_gen_ext8s_tl(s->T0, s->T0);
-                        break;
-                    case MO_UW:
-                        tcg_gen_ext16u_tl(s->T0, s->T0);
-                        break;
-                    default:
-                    case MO_SW:
-                        tcg_gen_ext16s_tl(s->T0, s->T0);
-                        break;
-                    }
-                }
-                gen_op_mov_reg_v(s, d_ot, reg, s->T0);
-            } else {
-                gen_lea_modrm(env, s, modrm);
-                gen_op_ld_v(s, s_ot, s->T0, s->A0);
-                gen_op_mov_reg_v(s, d_ot, reg, s->T0);
-            }
-        }
-        break;
-
-    case 0x8d: /* lea */
-        modrm = x86_ldub_code(env, s);
-        mod = (modrm >> 6) & 3;
-        if (mod == 3)
-            goto illegal_op;
-        reg = ((modrm >> 3) & 7) | REX_R(s);
-        {
-            AddressParts a = gen_lea_modrm_0(env, s, modrm);
-            TCGv ea = gen_lea_modrm_1(s, a);
-            gen_lea_v_seg(s, s->aflag, ea, -1, -1);
-            gen_op_mov_reg_v(s, dflag, reg, s->A0);
-        }
-        break;
-
-    case 0xa0: /* mov EAX, Ov
*/ - case 0xa1: - case 0xa2: /* mov Ov, EAX */ - case 0xa3: - { - target_ulong offset_addr; - - ot =3D mo_b_d(b, dflag); - switch (s->aflag) { -#ifdef TARGET_X86_64 - case MO_64: - offset_addr =3D x86_ldq_code(env, s); - break; -#endif - default: - offset_addr =3D insn_get(env, s, s->aflag); - break; - } - tcg_gen_movi_tl(s->A0, offset_addr); - gen_add_A0_ds_seg(s); - if ((b & 2) =3D=3D 0) { - gen_op_ld_v(s, ot, s->T0, s->A0); - gen_op_mov_reg_v(s, ot, R_EAX, s->T0); - } else { - gen_op_mov_v_reg(s, ot, s->T0, R_EAX); - gen_op_st_v(s, ot, s->T0, s->A0); - } - } - break; - case 0xd7: /* xlat */ - tcg_gen_mov_tl(s->A0, cpu_regs[R_EBX]); - tcg_gen_ext8u_tl(s->T0, cpu_regs[R_EAX]); - tcg_gen_add_tl(s->A0, s->A0, s->T0); - gen_extu(s->aflag, s->A0); - gen_add_A0_ds_seg(s); - gen_op_ld_v(s, MO_8, s->T0, s->A0); - gen_op_mov_reg_v(s, MO_8, R_EAX, s->T0); - break; - case 0xb0 ... 0xb7: /* mov R, Ib */ - val =3D insn_get(env, s, MO_8); - tcg_gen_movi_tl(s->T0, val); - gen_op_mov_reg_v(s, MO_8, (b & 7) | REX_B(s), s->T0); - break; - case 0xb8 ... 0xbf: /* mov R, Iv */ -#ifdef TARGET_X86_64 - if (dflag =3D=3D MO_64) { - uint64_t tmp; - /* 64 bit case */ - tmp =3D x86_ldq_code(env, s); - reg =3D (b & 7) | REX_B(s); - tcg_gen_movi_tl(s->T0, tmp); - gen_op_mov_reg_v(s, MO_64, reg, s->T0); - } else -#endif - { - ot =3D dflag; - val =3D insn_get(env, s, ot); - reg =3D (b & 7) | REX_B(s); - tcg_gen_movi_tl(s->T0, val); - gen_op_mov_reg_v(s, ot, reg, s->T0); - } - break; - - case 0x91 ... 
0x97: /* xchg R, EAX */ - do_xchg_reg_eax: - ot =3D dflag; - reg =3D (b & 7) | REX_B(s); - rm =3D R_EAX; - goto do_xchg_reg; - case 0x86: - case 0x87: /* xchg Ev, Gv */ - ot =3D mo_b_d(b, dflag); - modrm =3D x86_ldub_code(env, s); - reg =3D ((modrm >> 3) & 7) | REX_R(s); - mod =3D (modrm >> 6) & 3; - if (mod =3D=3D 3) { - rm =3D (modrm & 7) | REX_B(s); - do_xchg_reg: - gen_op_mov_v_reg(s, ot, s->T0, reg); - gen_op_mov_v_reg(s, ot, s->T1, rm); - gen_op_mov_reg_v(s, ot, rm, s->T0); - gen_op_mov_reg_v(s, ot, reg, s->T1); - } else { - gen_lea_modrm(env, s, modrm); - gen_op_mov_v_reg(s, ot, s->T0, reg); - /* for xchg, lock is implicit */ - tcg_gen_atomic_xchg_tl(s->T1, s->A0, s->T0, - s->mem_index, ot | MO_LE); - gen_op_mov_reg_v(s, ot, reg, s->T1); - } - break; - case 0xc4: /* les Gv */ - /* In CODE64 this is VEX3; see above. */ - op =3D R_ES; - goto do_lxx; - case 0xc5: /* lds Gv */ - /* In CODE64 this is VEX2; see above. */ - op =3D R_DS; - goto do_lxx; - case 0x1b2: /* lss Gv */ - op =3D R_SS; - goto do_lxx; - case 0x1b4: /* lfs Gv */ - op =3D R_FS; - goto do_lxx; - case 0x1b5: /* lgs Gv */ - op =3D R_GS; - do_lxx: - ot =3D dflag !=3D MO_16 ? 
MO_32 : MO_16; - modrm =3D x86_ldub_code(env, s); - reg =3D ((modrm >> 3) & 7) | REX_R(s); - mod =3D (modrm >> 6) & 3; - if (mod =3D=3D 3) - goto illegal_op; - gen_lea_modrm(env, s, modrm); - gen_op_ld_v(s, ot, s->T1, s->A0); - gen_add_A0_im(s, 1 << ot); - /* load the segment first to handle exceptions properly */ - gen_op_ld_v(s, MO_16, s->T0, s->A0); - gen_movl_seg_T0(s, op); - /* then put the data */ - gen_op_mov_reg_v(s, ot, reg, s->T1); - if (s->base.is_jmp) { - gen_jmp_im(s, s->pc - s->cs_base); - gen_eob(s); - } - break; - - /************************/ - /* shifts */ - case 0xc0: - case 0xc1: - /* shift Ev,Ib */ - shift =3D 2; - grp2: - { - ot =3D mo_b_d(b, dflag); - modrm =3D x86_ldub_code(env, s); - mod =3D (modrm >> 6) & 3; - op =3D (modrm >> 3) & 7; - - if (mod !=3D 3) { - if (shift =3D=3D 2) { - s->rip_offset =3D 1; - } - gen_lea_modrm(env, s, modrm); - opreg =3D OR_TMP0; - } else { - opreg =3D (modrm & 7) | REX_B(s); - } - - /* simpler op */ - if (shift =3D=3D 0) { - gen_shift(s, op, ot, opreg, OR_ECX); - } else { - if (shift =3D=3D 2) { - shift =3D x86_ldub_code(env, s); - } - gen_shifti(s, op, ot, opreg, shift); - } - } - break; - case 0xd0: - case 0xd1: - /* shift Ev,1 */ - shift =3D 1; - goto grp2; - case 0xd2: - case 0xd3: - /* shift Ev,cl */ - shift =3D 0; - goto grp2; - - case 0x1a4: /* shld imm */ - op =3D 0; - shift =3D 1; - goto do_shiftd; - case 0x1a5: /* shld cl */ - op =3D 0; - shift =3D 0; - goto do_shiftd; - case 0x1ac: /* shrd imm */ - op =3D 1; - shift =3D 1; - goto do_shiftd; - case 0x1ad: /* shrd cl */ - op =3D 1; - shift =3D 0; - do_shiftd: - ot =3D dflag; - modrm =3D x86_ldub_code(env, s); - mod =3D (modrm >> 6) & 3; - rm =3D (modrm & 7) | REX_B(s); - reg =3D ((modrm >> 3) & 7) | REX_R(s); - if (mod !=3D 3) { - gen_lea_modrm(env, s, modrm); - opreg =3D OR_TMP0; - } else { - opreg =3D rm; - } - gen_op_mov_v_reg(s, ot, s->T1, reg); - - if (shift) { - TCGv imm =3D tcg_const_tl(x86_ldub_code(env, s)); - gen_shiftd_rm_T1(s, ot, opreg, 
op, imm); - tcg_temp_free(imm); - } else { - gen_shiftd_rm_T1(s, ot, opreg, op, cpu_regs[R_ECX]); - } - break; - - /************************/ - /* floats */ - case 0xd8 ... 0xdf: - { - bool update_fip =3D true; - - if (s->flags & (HF_EM_MASK | HF_TS_MASK)) { - /* if CR0.EM or CR0.TS are set, generate an FPU exception = */ - /* XXX: what to do if illegal op ? */ - gen_exception(s, EXCP07_PREX, pc_start - s->cs_base); - break; - } - modrm =3D x86_ldub_code(env, s); - mod =3D (modrm >> 6) & 3; - rm =3D modrm & 7; - op =3D ((b & 7) << 3) | ((modrm >> 3) & 7); - if (mod !=3D 3) { - /* memory op */ - AddressParts a =3D gen_lea_modrm_0(env, s, modrm); - TCGv ea =3D gen_lea_modrm_1(s, a); - TCGv last_addr =3D tcg_temp_new(); - bool update_fdp =3D true; - - tcg_gen_mov_tl(last_addr, ea); - gen_lea_v_seg(s, s->aflag, ea, a.def_seg, s->override); - - switch (op) { - case 0x00 ... 0x07: /* fxxxs */ - case 0x10 ... 0x17: /* fixxxl */ - case 0x20 ... 0x27: /* fxxxl */ - case 0x30 ... 0x37: /* fixxx */ - { - int op1; - op1 =3D op & 7; - - switch (op >> 4) { - case 0: - tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, - s->mem_index, MO_LEUL); - gen_helper_flds_FT0(cpu_env, s->tmp2_i32); - break; - case 1: - tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, - s->mem_index, MO_LEUL); - gen_helper_fildl_FT0(cpu_env, s->tmp2_i32); - break; - case 2: - tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0, - s->mem_index, MO_LEUQ); - gen_helper_fldl_FT0(cpu_env, s->tmp1_i64); - break; - case 3: - default: - tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, - case 0x08: /* flds */ - case 0x0a: /* fsts */ - case 0x0b: /* fstps */ - case 0x18 ... 0x1b: /* fildl, fisttpl, fistl, fistpl */ - case 0x28 ... 0x2b: /* fldl, fisttpll, fstl, fstpl */ - case 0x38 ... 
0x3b: /* filds, fisttps, fists, fistps */ - switch (op & 7) { - case 0: - switch (op >> 4) { - case 0: - tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, - s->mem_index, MO_LEUL); - gen_helper_flds_ST0(cpu_env, s->tmp2_i32); - break; - case 1: - tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, - s->mem_index, MO_LEUL); - gen_helper_fildl_ST0(cpu_env, s->tmp2_i32); - break; - case 2: - tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0, - s->mem_index, MO_LEUQ); - gen_helper_fldl_ST0(cpu_env, s->tmp1_i64); - break; - case 3: - default: - tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, - s->mem_index, MO_LESW); - gen_helper_fildl_ST0(cpu_env, s->tmp2_i32); - break; - } - break; - case 1: - /* XXX: the corresponding CPUID bit must be tested= ! */ - switch (op >> 4) { - case 1: - gen_helper_fisttl_ST0(s->tmp2_i32, cpu_env); - tcg_gen_qemu_st_i32(s->tmp2_i32, s->A0, - s->mem_index, MO_LEUL); - break; - case 2: - gen_helper_fisttll_ST0(s->tmp1_i64, cpu_env); - tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0, - s->mem_index, MO_LEUQ); - break; - case 3: - default: - gen_helper_fistt_ST0(s->tmp2_i32, cpu_env); - tcg_gen_qemu_st_i32(s->tmp2_i32, s->A0, - s->mem_index, MO_LEUW); - break; - } - gen_helper_fpop(cpu_env); - break; - default: - switch (op >> 4) { - case 0: - gen_helper_fsts_ST0(s->tmp2_i32, cpu_env); - tcg_gen_qemu_st_i32(s->tmp2_i32, s->A0, - s->mem_index, MO_LEUL); - break; - case 1: - gen_helper_fistl_ST0(s->tmp2_i32, cpu_env); - tcg_gen_qemu_st_i32(s->tmp2_i32, s->A0, - s->mem_index, MO_LEUL); - break; - case 2: - gen_helper_fstl_ST0(s->tmp1_i64, cpu_env); - tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0, - s->mem_index, MO_LEUQ); - break; - case 3: - default: - gen_helper_fist_ST0(s->tmp2_i32, cpu_env); - tcg_gen_qemu_st_i32(s->tmp2_i32, s->A0, - s->mem_index, MO_LEUW); - break; - } - if ((op & 7) =3D=3D 3) { - gen_helper_fpop(cpu_env); - } - break; - } - break; - case 0x0c: /* fldenv mem */ - gen_helper_fldenv(cpu_env, s->A0, - tcg_const_i32(dflag - 1)); - update_fip =3D update_fdp =3D false; - break; - case 
0x0d: /* fldcw mem */ - tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, - s->mem_index, MO_LEUW); - gen_helper_fldcw(cpu_env, s->tmp2_i32); - update_fip =3D update_fdp =3D false; - break; - case 0x0e: /* fnstenv mem */ - gen_helper_fstenv(cpu_env, s->A0, - tcg_const_i32(dflag - 1)); - update_fip =3D update_fdp =3D false; - break; - case 0x0f: /* fnstcw mem */ - gen_helper_fnstcw(s->tmp2_i32, cpu_env); - tcg_gen_qemu_st_i32(s->tmp2_i32, s->A0, - s->mem_index, MO_LEUW); - update_fip =3D update_fdp =3D false; - break; - case 0x1d: /* fldt mem */ - gen_helper_fldt_ST0(cpu_env, s->A0); - break; - case 0x1f: /* fstpt mem */ - gen_helper_fstt_ST0(cpu_env, s->A0); - gen_helper_fpop(cpu_env); - break; - case 0x2c: /* frstor mem */ - gen_helper_frstor(cpu_env, s->A0, - tcg_const_i32(dflag - 1)); - update_fip =3D update_fdp =3D false; - break; - case 0x2e: /* fnsave mem */ - gen_helper_fsave(cpu_env, s->A0, - tcg_const_i32(dflag - 1)); - update_fip =3D update_fdp =3D false; - break; - case 0x2f: /* fnstsw mem */ - gen_helper_fnstsw(s->tmp2_i32, cpu_env); - tcg_gen_qemu_st_i32(s->tmp2_i32, s->A0, - s->mem_index, MO_LEUW); - update_fip =3D update_fdp =3D false; - break; - case 0x3c: /* fbld */ - gen_helper_fbld_ST0(cpu_env, s->A0); - break; - case 0x3e: /* fbstp */ - gen_helper_fbst_ST0(cpu_env, s->A0); - gen_helper_fpop(cpu_env); - break; - case 0x3d: /* fildll */ - tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0, - s->mem_index, MO_LEUQ); - gen_helper_fildll_ST0(cpu_env, s->tmp1_i64); - break; - case 0x3f: /* fistpll */ - gen_helper_fistll_ST0(s->tmp1_i64, cpu_env); - tcg_gen_qemu_st_i64(s->tmp1_i64, s->A0, - s->mem_index, MO_LEUQ); - gen_helper_fpop(cpu_env); - break; - default: - goto unknown_op; - } - - if (update_fdp) { - int last_seg =3D s->override >=3D 0 ? 
s->override : a.= def_seg; - - tcg_gen_ld_i32(s->tmp2_i32, cpu_env, - offsetof(CPUX86State, - segs[last_seg].selector)); - tcg_gen_st16_i32(s->tmp2_i32, cpu_env, - offsetof(CPUX86State, fpds)); - tcg_gen_st_tl(last_addr, cpu_env, - offsetof(CPUX86State, fpdp)); - } - tcg_temp_free(last_addr); - } else { - /* register float ops */ - opreg =3D rm; - - switch (op) { - case 0x08: /* fld sti */ - gen_helper_fpush(cpu_env); - gen_helper_fmov_ST0_STN(cpu_env, - tcg_const_i32((opreg + 1) & 7)= ); - break; - case 0x09: /* fxchg sti */ - case 0x29: /* fxchg4 sti, undocumented op */ - case 0x39: /* fxchg7 sti, undocumented op */ - gen_helper_fxchg_ST0_STN(cpu_env, tcg_const_i32(opreg)= ); - break; - case 0x0a: /* grp d9/2 */ - switch (rm) { - case 0: /* fnop */ - /* check exceptions (FreeBSD FPU probe) */ - gen_helper_fwait(cpu_env); - update_fip =3D false; - break; - default: - goto unknown_op; - } - break; - case 0x0c: /* grp d9/4 */ - switch (rm) { - case 0: /* fchs */ - gen_helper_fchs_ST0(cpu_env); - break; - case 1: /* fabs */ - gen_helper_fabs_ST0(cpu_env); - break; - case 4: /* ftst */ - gen_helper_fldz_FT0(cpu_env); - gen_helper_fcom_ST0_FT0(cpu_env); - break; - case 5: /* fxam */ - gen_helper_fxam_ST0(cpu_env); - break; - default: - goto unknown_op; - } - break; - case 0x0d: /* grp d9/5 */ - { - switch (rm) { - case 0: - gen_helper_fpush(cpu_env); - gen_helper_fld1_ST0(cpu_env); - break; - case 1: - gen_helper_fpush(cpu_env); - gen_helper_fldl2t_ST0(cpu_env); - break; - case 2: - gen_helper_fpush(cpu_env); - gen_helper_fldl2e_ST0(cpu_env); - break; - case 3: - gen_helper_fpush(cpu_env); - gen_helper_fldpi_ST0(cpu_env); - break; - case 4: - gen_helper_fpush(cpu_env); - gen_helper_fldlg2_ST0(cpu_env); - break; - case 5: - gen_helper_fpush(cpu_env); - gen_helper_fldln2_ST0(cpu_env); - break; - case 6: - gen_helper_fpush(cpu_env); - gen_helper_fldz_ST0(cpu_env); - break; - default: - goto unknown_op; - } - } - break; - case 0x0e: /* grp d9/6 */ - switch (rm) { - case 0: 
/* f2xm1 */ - gen_helper_f2xm1(cpu_env); - break; - case 1: /* fyl2x */ - gen_helper_fyl2x(cpu_env); - break; - case 2: /* fptan */ - gen_helper_fptan(cpu_env); - break; - case 3: /* fpatan */ - gen_helper_fpatan(cpu_env); - break; - case 4: /* fxtract */ - gen_helper_fxtract(cpu_env); - break; - case 5: /* fprem1 */ - gen_helper_fprem1(cpu_env); - break; - case 6: /* fdecstp */ - gen_helper_fdecstp(cpu_env); - break; - default: - case 7: /* fincstp */ - gen_helper_fincstp(cpu_env); - break; - } - break; - case 0x0f: /* grp d9/7 */ - switch (rm) { - case 0: /* fprem */ - gen_helper_fprem(cpu_env); - break; - case 1: /* fyl2xp1 */ - gen_helper_fyl2xp1(cpu_env); - break; - case 2: /* fsqrt */ - gen_helper_fsqrt(cpu_env); - break; - case 3: /* fsincos */ - gen_helper_fsincos(cpu_env); - break; - case 5: /* fscale */ - gen_helper_fscale(cpu_env); - break; - case 4: /* frndint */ - gen_helper_frndint(cpu_env); - break; - case 6: /* fsin */ - gen_helper_fsin(cpu_env); - break; - default: - case 7: /* fcos */ - gen_helper_fcos(cpu_env); - break; - } - break; - case 0x00: case 0x01: case 0x04 ... 0x07: /* fxxx st, sti = */ - case 0x20: case 0x21: case 0x24 ... 0x27: /* fxxx sti, st = */ - case 0x30: case 0x31: case 0x34 ... 
0x37: /* fxxxp sti, st= */ - { - int op1; - - op1 =3D op & 7; - if (op >=3D 0x20) { - gen_helper_fp_arith_STN_ST0(op1, opreg); - if (op >=3D 0x30) { - gen_helper_fpop(cpu_env); - } - } else { - gen_helper_fmov_FT0_STN(cpu_env, - tcg_const_i32(opreg)); - gen_helper_fp_arith_ST0_FT0(op1); - } - } - break; - case 0x02: /* fcom */ - case 0x22: /* fcom2, undocumented op */ - gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(opreg)); - gen_helper_fcom_ST0_FT0(cpu_env); - break; - case 0x03: /* fcomp */ - case 0x23: /* fcomp3, undocumented op */ - case 0x32: /* fcomp5, undocumented op */ - gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(opreg)); - gen_helper_fcom_ST0_FT0(cpu_env); - gen_helper_fpop(cpu_env); - break; - case 0x15: /* da/5 */ - switch (rm) { - case 1: /* fucompp */ - gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(1)); - gen_helper_fucom_ST0_FT0(cpu_env); - gen_helper_fpop(cpu_env); - gen_helper_fpop(cpu_env); - break; - default: - goto unknown_op; - } - break; - case 0x1c: - switch (rm) { - case 0: /* feni (287 only, just do nop here) */ - gen_helper_fninit(cpu_env); - update_fip =3D false; - break; - case 4: /* fsetpm (287 only, just do nop here) */ - break; - default: - goto unknown_op; - } - break; - case 0x1d: /* fucomi */ - if (!(s->cpuid_features & CPUID_CMOV)) { - goto illegal_op; - } - gen_update_cc_op(s); - gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(opreg)); - gen_helper_fucomi_ST0_FT0(cpu_env); - set_cc_op(s, CC_OP_EFLAGS); - break; - case 0x1e: /* fcomi */ - if (!(s->cpuid_features & CPUID_CMOV)) { - goto illegal_op; - } - gen_update_cc_op(s); - gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(opreg)); - gen_helper_fcomi_ST0_FT0(cpu_env); - set_cc_op(s, CC_OP_EFLAGS); - break; - case 0x28: /* ffree sti */ - gen_helper_ffree_STN(cpu_env, tcg_const_i32(opreg)); - break; - case 0x2a: /* fst sti */ - gen_helper_fmov_STN_ST0(cpu_env, tcg_const_i32(opreg)); - break; - case 0x2b: /* fstp sti */ - case 0x0b: /* fstp1 sti, undocumented op */ - case 0x3a: /* 
fstp8 sti, undocumented op */ - case 0x3b: /* fstp9 sti, undocumented op */ - gen_helper_fmov_STN_ST0(cpu_env, tcg_const_i32(opreg)); - gen_helper_fpop(cpu_env); - break; - case 0x2c: /* fucom st(i) */ - gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(opreg)); - gen_helper_fucom_ST0_FT0(cpu_env); - break; - case 0x2d: /* fucomp st(i) */ - gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(opreg)); - gen_helper_fucom_ST0_FT0(cpu_env); - gen_helper_fpop(cpu_env); - break; - case 0x33: /* de/3 */ - switch (rm) { - case 1: /* fcompp */ - gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(1)); - gen_helper_fcom_ST0_FT0(cpu_env); - gen_helper_fpop(cpu_env); - gen_helper_fpop(cpu_env); - break; - default: - goto unknown_op; - } - break; - case 0x38: /* ffreep sti, undocumented op */ - gen_helper_ffree_STN(cpu_env, tcg_const_i32(opreg)); - gen_helper_fpop(cpu_env); - break; - case 0x3c: /* df/4 */ - switch (rm) { - case 0: - gen_helper_fnstsw(s->tmp2_i32, cpu_env); - tcg_gen_extu_i32_tl(s->T0, s->tmp2_i32); - gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0); - break; - default: - goto unknown_op; - } - break; - case 0x3d: /* fucomip */ - if (!(s->cpuid_features & CPUID_CMOV)) { - goto illegal_op; - } - gen_update_cc_op(s); - gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(opreg)); - gen_helper_fucomi_ST0_FT0(cpu_env); - gen_helper_fpop(cpu_env); - set_cc_op(s, CC_OP_EFLAGS); - break; - case 0x3e: /* fcomip */ - if (!(s->cpuid_features & CPUID_CMOV)) { - goto illegal_op; - } - gen_update_cc_op(s); - gen_helper_fmov_FT0_STN(cpu_env, tcg_const_i32(opreg)); - gen_helper_fcomi_ST0_FT0(cpu_env); - gen_helper_fpop(cpu_env); - set_cc_op(s, CC_OP_EFLAGS); - break; - case 0x10 ... 0x13: /* fcmovxx */ - case 0x18 ... 
0x1b: - { - int op1; - TCGLabel *l1; - static const uint8_t fcmov_cc[8] =3D { - (JCC_B << 1), - (JCC_Z << 1), - (JCC_BE << 1), - (JCC_P << 1), - }; - - if (!(s->cpuid_features & CPUID_CMOV)) { - goto illegal_op; - } - op1 =3D fcmov_cc[op & 3] | (((op >> 3) & 1) ^ 1); - l1 =3D gen_new_label(); - gen_jcc1_noeob(s, op1, l1); - gen_helper_fmov_ST0_STN(cpu_env, tcg_const_i32(opr= eg)); - gen_set_label(l1); - } - break; - default: - goto unknown_op; - } - } - - if (update_fip) { - tcg_gen_ld_i32(s->tmp2_i32, cpu_env, - offsetof(CPUX86State, segs[R_CS].selector)); - tcg_gen_st16_i32(s->tmp2_i32, cpu_env, - offsetof(CPUX86State, fpcs)); - tcg_gen_st_tl(tcg_constant_tl(pc_start - s->cs_base), - cpu_env, offsetof(CPUX86State, fpip)); - } - } - break; - /************************/ - /* string ops */ - - case 0xa4: /* movsS */ - case 0xa5: - ot =3D mo_b_d(b, dflag); - if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) { - gen_repz_movs(s, ot, pc_start - s->cs_base, s->pc - s->cs_base= ); - } else { - gen_movs(s, ot); - } - break; - - case 0xaa: /* stosS */ - case 0xab: - ot =3D mo_b_d(b, dflag); - if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) { - gen_repz_stos(s, ot, pc_start - s->cs_base, s->pc - s->cs_base= ); - } else { - gen_stos(s, ot); - } - break; - case 0xac: /* lodsS */ - case 0xad: - ot =3D mo_b_d(b, dflag); - if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) { - gen_repz_lods(s, ot, pc_start - s->cs_base, s->pc - s->cs_base= ); - } else { - gen_lods(s, ot); - } - break; - case 0xae: /* scasS */ - case 0xaf: - ot =3D mo_b_d(b, dflag); - if (prefixes & PREFIX_REPNZ) { - gen_repz_scas(s, ot, pc_start - s->cs_base, s->pc - s->cs_base= , 1); - } else if (prefixes & PREFIX_REPZ) { - gen_repz_scas(s, ot, pc_start - s->cs_base, s->pc - s->cs_base= , 0); - } else { - gen_scas(s, ot); - } - break; - - case 0xa6: /* cmpsS */ - case 0xa7: - ot =3D mo_b_d(b, dflag); - if (prefixes & PREFIX_REPNZ) { - gen_repz_cmps(s, ot, pc_start - s->cs_base, s->pc - s->cs_base= , 1); - } else if (prefixes 
& PREFIX_REPZ) { - gen_repz_cmps(s, ot, pc_start - s->cs_base, s->pc - s->cs_base= , 0); - } else { - gen_cmps(s, ot); - } - break; - case 0x6c: /* insS */ - case 0x6d: - ot =3D mo_b_d32(b, dflag); - tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[R_EDX]); - tcg_gen_ext16u_i32(s->tmp2_i32, s->tmp2_i32); - if (!gen_check_io(s, ot, s->tmp2_i32, - SVM_IOIO_TYPE_MASK | SVM_IOIO_STR_MASK)) { - break; - } - if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { - gen_io_start(); - } - if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) { - gen_repz_ins(s, ot, pc_start - s->cs_base, s->pc - s->cs_base); - /* jump generated by gen_repz_ins */ - } else { - gen_ins(s, ot); - if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { - gen_jmp(s, s->pc - s->cs_base); - } - } - break; - case 0x6e: /* outsS */ - case 0x6f: - ot =3D mo_b_d32(b, dflag); - tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[R_EDX]); - tcg_gen_ext16u_i32(s->tmp2_i32, s->tmp2_i32); - if (!gen_check_io(s, ot, s->tmp2_i32, SVM_IOIO_STR_MASK)) { - break; - } - if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { - gen_io_start(); - } - if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) { - gen_repz_outs(s, ot, pc_start - s->cs_base, s->pc - s->cs_base= ); - /* jump generated by gen_repz_outs */ - } else { - gen_outs(s, ot); - if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { - gen_jmp(s, s->pc - s->cs_base); - } - } - break; - - /************************/ - /* port I/O */ - - case 0xe4: - case 0xe5: - ot =3D mo_b_d32(b, dflag); - val =3D x86_ldub_code(env, s); - tcg_gen_movi_i32(s->tmp2_i32, val); - if (!gen_check_io(s, ot, s->tmp2_i32, SVM_IOIO_TYPE_MASK)) { - break; - } - if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { - gen_io_start(); - } - gen_helper_in_func(ot, s->T1, s->tmp2_i32); - gen_op_mov_reg_v(s, ot, R_EAX, s->T1); - gen_bpt_io(s, s->tmp2_i32, ot); - if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { - gen_jmp(s, s->pc - s->cs_base); - } - break; - case 0xe6: - case 0xe7: - ot =3D mo_b_d32(b, dflag); - val =3D x86_ldub_code(env, s); - 
tcg_gen_movi_i32(s->tmp2_i32, val); - if (!gen_check_io(s, ot, s->tmp2_i32, 0)) { - break; - } - if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { - gen_io_start(); - } - gen_op_mov_v_reg(s, ot, s->T1, R_EAX); - tcg_gen_trunc_tl_i32(s->tmp3_i32, s->T1); - gen_helper_out_func(ot, s->tmp2_i32, s->tmp3_i32); - gen_bpt_io(s, s->tmp2_i32, ot); - if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { - gen_jmp(s, s->pc - s->cs_base); - } - break; - case 0xec: - case 0xed: - ot =3D mo_b_d32(b, dflag); - tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[R_EDX]); - tcg_gen_ext16u_i32(s->tmp2_i32, s->tmp2_i32); - if (!gen_check_io(s, ot, s->tmp2_i32, SVM_IOIO_TYPE_MASK)) { - break; - } - if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { - gen_io_start(); - } - gen_helper_in_func(ot, s->T1, s->tmp2_i32); - gen_op_mov_reg_v(s, ot, R_EAX, s->T1); - gen_bpt_io(s, s->tmp2_i32, ot); - if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { - gen_jmp(s, s->pc - s->cs_base); - } - break; - case 0xee: - case 0xef: - ot =3D mo_b_d32(b, dflag); - tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[R_EDX]); - tcg_gen_ext16u_i32(s->tmp2_i32, s->tmp2_i32); - if (!gen_check_io(s, ot, s->tmp2_i32, 0)) { - break; - } - if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { - gen_io_start(); - } - gen_op_mov_v_reg(s, ot, s->T1, R_EAX); - tcg_gen_trunc_tl_i32(s->tmp3_i32, s->T1); - gen_helper_out_func(ot, s->tmp2_i32, s->tmp3_i32); - gen_bpt_io(s, s->tmp2_i32, ot); - if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) { - gen_jmp(s, s->pc - s->cs_base); - } - break; - - /************************/ - /* control */ - case 0xc2: /* ret im */ - val =3D x86_ldsw_code(env, s); - ot =3D gen_pop_T0(s); - gen_stack_update(s, val + (1 << ot)); - /* Note that gen_pop_T0 uses a zero-extending load. */ - gen_op_jmp_v(s->T0); - gen_bnd_jmp(s); - gen_jr(s, s->T0); - break; - case 0xc3: /* ret */ - ot =3D gen_pop_T0(s); - gen_pop_update(s, ot); - /* Note that gen_pop_T0 uses a zero-extending load. 
*/ - gen_op_jmp_v(s->T0); - gen_bnd_jmp(s); - gen_jr(s, s->T0); - break; - case 0xca: /* lret im */ - val =3D x86_ldsw_code(env, s); - do_lret: - if (PE(s) && !VM86(s)) { - gen_update_cc_op(s); - gen_jmp_im(s, pc_start - s->cs_base); - gen_helper_lret_protected(cpu_env, tcg_const_i32(dflag - 1), - tcg_const_i32(val)); - } else { - gen_stack_A0(s); - /* pop offset */ - gen_op_ld_v(s, dflag, s->T0, s->A0); - /* NOTE: keeping EIP updated is not a problem in case of - exception */ - gen_op_jmp_v(s->T0); - /* pop selector */ - gen_add_A0_im(s, 1 << dflag); - gen_op_ld_v(s, dflag, s->T0, s->A0); - gen_op_movl_seg_T0_vm(s, R_CS); - /* add stack offset */ - gen_stack_update(s, val + (2 << dflag)); - } - gen_eob(s); - break; - case 0xcb: /* lret */ - val =3D 0; - goto do_lret; - case 0xcf: /* iret */ - gen_svm_check_intercept(s, SVM_EXIT_IRET); - if (!PE(s) || VM86(s)) { - /* real mode or vm86 mode */ - if (!check_vm86_iopl(s)) { - break; - } - gen_helper_iret_real(cpu_env, tcg_const_i32(dflag - 1)); - } else { - gen_helper_iret_protected(cpu_env, tcg_const_i32(dflag - 1), - tcg_const_i32(s->pc - s->cs_base)); - } - set_cc_op(s, CC_OP_EFLAGS); - gen_eob(s); - break; - case 0xe8: /* call im */ - { - if (dflag !=3D MO_16) { - tval =3D (int32_t)insn_get(env, s, MO_32); - } else { - tval =3D (int16_t)insn_get(env, s, MO_16); - } - next_eip =3D s->pc - s->cs_base; - tval +=3D next_eip; - if (dflag =3D=3D MO_16) { - tval &=3D 0xffff; - } else if (!CODE64(s)) { - tval &=3D 0xffffffff; - } - tcg_gen_movi_tl(s->T0, next_eip); - gen_push_v(s, s->T0); - gen_bnd_jmp(s); - gen_jmp(s, tval); - } - break; - case 0x9a: /* lcall im */ - { - unsigned int selector, offset; - - if (CODE64(s)) - goto illegal_op; - ot =3D dflag; - offset =3D insn_get(env, s, ot); - selector =3D insn_get(env, s, MO_16); - - tcg_gen_movi_tl(s->T0, selector); - tcg_gen_movi_tl(s->T1, offset); - } - goto do_lcall; - case 0xe9: /* jmp im */ - if (dflag !=3D MO_16) { - tval =3D (int32_t)insn_get(env, s, MO_32); - } 
else {
-            tval = (int16_t)insn_get(env, s, MO_16);
-        }
-        tval += s->pc - s->cs_base;
-        if (dflag == MO_16) {
-            tval &= 0xffff;
-        } else if (!CODE64(s)) {
-            tval &= 0xffffffff;
-        }
-        gen_bnd_jmp(s);
-        gen_jmp(s, tval);
-        break;
-    case 0xea: /* ljmp im */
-        {
-            unsigned int selector, offset;
-
-            if (CODE64(s))
-                goto illegal_op;
-            ot = dflag;
-            offset = insn_get(env, s, ot);
-            selector = insn_get(env, s, MO_16);
-
-            tcg_gen_movi_tl(s->T0, selector);
-            tcg_gen_movi_tl(s->T1, offset);
-        }
-        goto do_ljmp;
-    case 0xeb: /* jmp Jb */
-        tval = (int8_t)insn_get(env, s, MO_8);
-        tval += s->pc - s->cs_base;
-        if (dflag == MO_16) {
-            tval &= 0xffff;
-        }
-        gen_jmp(s, tval);
-        break;
-    case 0x70 ... 0x7f: /* jcc Jb */
-        tval = (int8_t)insn_get(env, s, MO_8);
-        goto do_jcc;
-    case 0x180 ... 0x18f: /* jcc Jv */
-        if (dflag != MO_16) {
-            tval = (int32_t)insn_get(env, s, MO_32);
-        } else {
-            tval = (int16_t)insn_get(env, s, MO_16);
-        }
-    do_jcc:
-        next_eip = s->pc - s->cs_base;
-        tval += next_eip;
-        if (dflag == MO_16) {
-            tval &= 0xffff;
-        }
-        gen_bnd_jmp(s);
-        gen_jcc(s, b, tval, next_eip);
-        break;
-
-    case 0x190 ... 0x19f: /* setcc Gv */
-        modrm = x86_ldub_code(env, s);
-        gen_setcc1(s, b, s->T0);
-        gen_ldst_modrm(env, s, modrm, MO_8, OR_TMP0, 1);
-        break;
-    case 0x140 ... 0x14f: /* cmov Gv, Ev */
-        if (!(s->cpuid_features & CPUID_CMOV)) {
-            goto illegal_op;
-        }
-        ot = dflag;
-        modrm = x86_ldub_code(env, s);
-        reg = ((modrm >> 3) & 7) | REX_R(s);
-        gen_cmovcc1(env, s, ot, b, modrm, reg);
-        break;
-
-        /************************/
-        /* flags */
-    case 0x9c: /* pushf */
-        gen_svm_check_intercept(s, SVM_EXIT_PUSHF);
-        if (check_vm86_iopl(s)) {
-            gen_update_cc_op(s);
-            gen_helper_read_eflags(s->T0, cpu_env);
-            gen_push_v(s, s->T0);
-        }
-        break;
-    case 0x9d: /* popf */
-        gen_svm_check_intercept(s, SVM_EXIT_POPF);
-        if (check_vm86_iopl(s)) {
-            ot = gen_pop_T0(s);
-            if (CPL(s) == 0) {
-                if (dflag != MO_16) {
-                    gen_helper_write_eflags(cpu_env, s->T0,
-                                            tcg_const_i32((TF_MASK | AC_MASK |
-                                                           ID_MASK | NT_MASK |
-                                                           IF_MASK |
-                                                           IOPL_MASK)));
-                } else {
-                    gen_helper_write_eflags(cpu_env, s->T0,
-                                            tcg_const_i32((TF_MASK | AC_MASK |
-                                                           ID_MASK | NT_MASK |
-                                                           IF_MASK | IOPL_MASK)
-                                                          & 0xffff));
-                }
-            } else {
-                if (CPL(s) <= IOPL(s)) {
-                    if (dflag != MO_16) {
-                        gen_helper_write_eflags(cpu_env, s->T0,
-                                                tcg_const_i32((TF_MASK |
-                                                               AC_MASK |
-                                                               ID_MASK |
-                                                               NT_MASK |
-                                                               IF_MASK)));
-                    } else {
-                        gen_helper_write_eflags(cpu_env, s->T0,
-                                                tcg_const_i32((TF_MASK |
-                                                               AC_MASK |
-                                                               ID_MASK |
-                                                               NT_MASK |
-                                                               IF_MASK)
-                                                              & 0xffff));
-                    }
-                } else {
-                    if (dflag != MO_16) {
-                        gen_helper_write_eflags(cpu_env, s->T0,
-                                                tcg_const_i32((TF_MASK | AC_MASK |
-                                                               ID_MASK | NT_MASK)));
-                    } else {
-                        gen_helper_write_eflags(cpu_env, s->T0,
-                                                tcg_const_i32((TF_MASK | AC_MASK |
-                                                               ID_MASK | NT_MASK)
-                                                              & 0xffff));
-                    }
-                }
-            }
-            gen_pop_update(s, ot);
-            set_cc_op(s, CC_OP_EFLAGS);
-            /* abort translation because TF/AC flag may change */
-            gen_jmp_im(s, s->pc - s->cs_base);
-            gen_eob(s);
-        }
-        break;
-    case 0x9e: /* sahf */
-        if (CODE64(s) && !(s->cpuid_ext3_features & CPUID_EXT3_LAHF_LM))
-            goto illegal_op;
-        gen_op_mov_v_reg(s, MO_8, s->T0, R_AH);
-        gen_compute_eflags(s);
-        tcg_gen_andi_tl(cpu_cc_src, cpu_cc_src, CC_O);
-        tcg_gen_andi_tl(s->T0, s->T0, CC_S | CC_Z | CC_A | CC_P | CC_C);
-        tcg_gen_or_tl(cpu_cc_src, cpu_cc_src, s->T0);
-        break;
-    case 0x9f: /* lahf */
-        if (CODE64(s) && !(s->cpuid_ext3_features & CPUID_EXT3_LAHF_LM))
-            goto illegal_op;
-        gen_compute_eflags(s);
-        /* Note: gen_compute_eflags() only gives the condition codes */
-        tcg_gen_ori_tl(s->T0, cpu_cc_src, 0x02);
-        gen_op_mov_reg_v(s, MO_8, R_AH, s->T0);
-        break;
-    case 0xf5: /* cmc */
-        gen_compute_eflags(s);
-        tcg_gen_xori_tl(cpu_cc_src, cpu_cc_src, CC_C);
-        break;
-    case 0xf8: /* clc */
-        gen_compute_eflags(s);
-        tcg_gen_andi_tl(cpu_cc_src, cpu_cc_src, ~CC_C);
-        break;
-    case 0xf9: /* stc */
-        gen_compute_eflags(s);
-        tcg_gen_ori_tl(cpu_cc_src, cpu_cc_src, CC_C);
-        break;
-    case 0xfc: /* cld */
-        tcg_gen_movi_i32(s->tmp2_i32, 1);
-        tcg_gen_st_i32(s->tmp2_i32, cpu_env, offsetof(CPUX86State, df));
-        break;
-    case 0xfd: /* std */
-        tcg_gen_movi_i32(s->tmp2_i32, -1);
-        tcg_gen_st_i32(s->tmp2_i32, cpu_env, offsetof(CPUX86State, df));
-        break;
-
-        /************************/
-        /* bit operations */
-    case 0x1ba: /* bt/bts/btr/btc Gv, im */
-        ot = dflag;
-        modrm = x86_ldub_code(env, s);
-        op = (modrm >> 3) & 7;
-        mod = (modrm >> 6) & 3;
-        rm = (modrm & 7) | REX_B(s);
-        if (mod != 3) {
-            s->rip_offset = 1;
-            gen_lea_modrm(env, s, modrm);
-            if (!(s->prefix & PREFIX_LOCK)) {
-                gen_op_ld_v(s, ot, s->T0, s->A0);
-            }
-        } else {
-            gen_op_mov_v_reg(s, ot, s->T0, rm);
-        }
-        /* load shift */
-        val = x86_ldub_code(env, s);
-        tcg_gen_movi_tl(s->T1, val);
-        if (op < 4)
-            goto unknown_op;
-        op -= 4;
-        goto bt_op;
-    case 0x1a3: /* bt Gv, Ev */
-        op = 0;
-        goto do_btx;
-    case 0x1ab: /* bts */
-        op = 1;
-        goto do_btx;
-    case 0x1b3: /* btr */
-        op = 2;
-        goto do_btx;
-    case 0x1bb: /* btc */
-        op = 3;
-    do_btx:
-        ot = dflag;
-        modrm = x86_ldub_code(env, s);
-        reg = ((modrm >> 3) & 7) | REX_R(s);
-        mod = (modrm >> 6) & 3;
-        rm = (modrm & 7) | REX_B(s);
-        gen_op_mov_v_reg(s, MO_32, s->T1, reg);
-        if (mod != 3) {
-            AddressParts a = gen_lea_modrm_0(env, s, modrm);
-            /* specific case: we need to add a displacement */
-            gen_exts(ot, s->T1);
-            tcg_gen_sari_tl(s->tmp0, s->T1, 3 + ot);
-            tcg_gen_shli_tl(s->tmp0, s->tmp0, ot);
-            tcg_gen_add_tl(s->A0, gen_lea_modrm_1(s, a), s->tmp0);
-            gen_lea_v_seg(s, s->aflag, s->A0, a.def_seg, s->override);
-            if (!(s->prefix & PREFIX_LOCK)) {
-                gen_op_ld_v(s, ot, s->T0, s->A0);
-            }
-        } else {
-            gen_op_mov_v_reg(s, ot, s->T0, rm);
-        }
-    bt_op:
-        tcg_gen_andi_tl(s->T1, s->T1, (1 << (3 + ot)) - 1);
-        tcg_gen_movi_tl(s->tmp0, 1);
-        tcg_gen_shl_tl(s->tmp0, s->tmp0, s->T1);
-        if (s->prefix & PREFIX_LOCK) {
-            switch (op) {
-            case 0: /* bt */
-                /* Needs no atomic ops; we suppressed the normal
-                   memory load for LOCK above so do it now.  */
-                gen_op_ld_v(s, ot, s->T0, s->A0);
-                break;
-            case 1: /* bts */
-                tcg_gen_atomic_fetch_or_tl(s->T0, s->A0, s->tmp0,
-                                           s->mem_index, ot | MO_LE);
-                break;
-            case 2: /* btr */
-                tcg_gen_not_tl(s->tmp0, s->tmp0);
-                tcg_gen_atomic_fetch_and_tl(s->T0, s->A0, s->tmp0,
-                                            s->mem_index, ot | MO_LE);
-                break;
-            default:
-            case 3: /* btc */
-                tcg_gen_atomic_fetch_xor_tl(s->T0, s->A0, s->tmp0,
-                                            s->mem_index, ot | MO_LE);
-                break;
-            }
-            tcg_gen_shr_tl(s->tmp4, s->T0, s->T1);
-        } else {
-            tcg_gen_shr_tl(s->tmp4, s->T0, s->T1);
-            switch (op) {
-            case 0: /* bt */
-                /* Data already loaded; nothing to do.  */
-                break;
-            case 1: /* bts */
-                tcg_gen_or_tl(s->T0, s->T0, s->tmp0);
-                break;
-            case 2: /* btr */
-                tcg_gen_andc_tl(s->T0, s->T0, s->tmp0);
-                break;
-            default:
-            case 3: /* btc */
-                tcg_gen_xor_tl(s->T0, s->T0, s->tmp0);
-                break;
-            }
-            if (op != 0) {
-                if (mod != 3) {
-                    gen_op_st_v(s, ot, s->T0, s->A0);
-                } else {
-                    gen_op_mov_reg_v(s, ot, rm, s->T0);
-                }
-            }
-        }
-
-        /* Delay all CC updates until after the store above.  Note that
-           C is the result of the test, Z is unchanged, and the others
-           are all undefined.  */
-        switch (s->cc_op) {
-        case CC_OP_MULB ... CC_OP_MULQ:
-        case CC_OP_ADDB ... CC_OP_ADDQ:
-        case CC_OP_ADCB ... CC_OP_ADCQ:
-        case CC_OP_SUBB ... CC_OP_SUBQ:
-        case CC_OP_SBBB ... CC_OP_SBBQ:
-        case CC_OP_LOGICB ... CC_OP_LOGICQ:
-        case CC_OP_INCB ... CC_OP_INCQ:
-        case CC_OP_DECB ... CC_OP_DECQ:
-        case CC_OP_SHLB ... CC_OP_SHLQ:
-        case CC_OP_SARB ... CC_OP_SARQ:
-        case CC_OP_BMILGB ... CC_OP_BMILGQ:
-            /* Z was going to be computed from the non-zero status of CC_DST.
-               We can get that same Z value (and the new C value) by leaving
-               CC_DST alone, setting CC_SRC, and using a CC_OP_SAR of the
-               same width.  */
-            tcg_gen_mov_tl(cpu_cc_src, s->tmp4);
-            set_cc_op(s, ((s->cc_op - CC_OP_MULB) & 3) + CC_OP_SARB);
-            break;
-        default:
-            /* Otherwise, generate EFLAGS and replace the C bit.  */
-            gen_compute_eflags(s);
-            tcg_gen_deposit_tl(cpu_cc_src, cpu_cc_src, s->tmp4,
-                               ctz32(CC_C), 1);
-            break;
-        }
-        break;
-    case 0x1bc: /* bsf / tzcnt */
-    case 0x1bd: /* bsr / lzcnt */
-        ot = dflag;
-        modrm = x86_ldub_code(env, s);
-        reg = ((modrm >> 3) & 7) | REX_R(s);
-        gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
-        gen_extu(ot, s->T0);
-
-        /* Note that lzcnt and tzcnt are in different extensions.  */
-        if ((prefixes & PREFIX_REPZ)
-            && (b & 1
-                ? s->cpuid_ext3_features & CPUID_EXT3_ABM
-                : s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_BMI1)) {
-            int size = 8 << ot;
-            /* For lzcnt/tzcnt, C bit is defined related to the input. */
-            tcg_gen_mov_tl(cpu_cc_src, s->T0);
-            if (b & 1) {
-                /* For lzcnt, reduce the target_ulong result by the
-                   number of zeros that we expect to find at the top. */
-                tcg_gen_clzi_tl(s->T0, s->T0, TARGET_LONG_BITS);
-                tcg_gen_subi_tl(s->T0, s->T0, TARGET_LONG_BITS - size);
-            } else {
-                /* For tzcnt, a zero input must return the operand size.  */
-                tcg_gen_ctzi_tl(s->T0, s->T0, size);
-            }
-            /* For lzcnt/tzcnt, Z bit is defined related to the result.  */
-            gen_op_update1_cc(s);
-            set_cc_op(s, CC_OP_BMILGB + ot);
-        } else {
-            /* For bsr/bsf, only the Z bit is defined and it is related
-               to the input and not the result.  */
-            tcg_gen_mov_tl(cpu_cc_dst, s->T0);
-            set_cc_op(s, CC_OP_LOGICB + ot);
-
-            /* ??? The manual says that the output is undefined when the
-               input is zero, but real hardware leaves it unchanged, and
-               real programs appear to depend on that.  Accomplish this
-               by passing the output as the value to return upon zero.  */
-            if (b & 1) {
-                /* For bsr, return the bit index of the first 1 bit,
-                   not the count of leading zeros.  */
-                tcg_gen_xori_tl(s->T1, cpu_regs[reg], TARGET_LONG_BITS - 1);
-                tcg_gen_clz_tl(s->T0, s->T0, s->T1);
-                tcg_gen_xori_tl(s->T0, s->T0, TARGET_LONG_BITS - 1);
-            } else {
-                tcg_gen_ctz_tl(s->T0, s->T0, cpu_regs[reg]);
-            }
-        }
-        gen_op_mov_reg_v(s, ot, reg, s->T0);
-        break;
-        /************************/
-        /* bcd */
-    case 0x27: /* daa */
-        if (CODE64(s))
-            goto illegal_op;
-        gen_update_cc_op(s);
-        gen_helper_daa(cpu_env);
-        set_cc_op(s, CC_OP_EFLAGS);
-        break;
-    case 0x2f: /* das */
-        if (CODE64(s))
-            goto illegal_op;
-        gen_update_cc_op(s);
-        gen_helper_das(cpu_env);
-        set_cc_op(s, CC_OP_EFLAGS);
-        break;
-    case 0x37: /* aaa */
-        if (CODE64(s))
-            goto illegal_op;
-        gen_update_cc_op(s);
-        gen_helper_aaa(cpu_env);
-        set_cc_op(s, CC_OP_EFLAGS);
-        break;
-    case 0x3f: /* aas */
-        if (CODE64(s))
-            goto illegal_op;
-        gen_update_cc_op(s);
-        gen_helper_aas(cpu_env);
-        set_cc_op(s, CC_OP_EFLAGS);
-        break;
-    case 0xd4: /* aam */
-        if (CODE64(s))
-            goto illegal_op;
-        val = x86_ldub_code(env, s);
-        if (val == 0) {
-            gen_exception(s, EXCP00_DIVZ, pc_start - s->cs_base);
-        } else {
-            gen_helper_aam(cpu_env, tcg_const_i32(val));
-            set_cc_op(s, CC_OP_LOGICB);
-        }
-        break;
-    case 0xd5: /* aad */
-        if (CODE64(s))
-            goto illegal_op;
-        val = x86_ldub_code(env, s);
-        gen_helper_aad(cpu_env, tcg_const_i32(val));
-        set_cc_op(s, CC_OP_LOGICB);
-        break;
-        /************************/
-        /* misc */
-    case 0x90: /* nop */
-        /* XXX: correct lock test for all insn */
-        if (prefixes & PREFIX_LOCK) {
-            goto illegal_op;
-        }
-        /* If REX_B is set, then this is xchg eax, r8d, not a nop.  */
-        if (REX_B(s)) {
-            goto do_xchg_reg_eax;
-        }
-        if (prefixes & PREFIX_REPZ) {
-            gen_update_cc_op(s);
-            gen_jmp_im(s, pc_start - s->cs_base);
-            gen_helper_pause(cpu_env, tcg_const_i32(s->pc - pc_start));
-            s->base.is_jmp = DISAS_NORETURN;
-        }
-        break;
-    case 0x9b: /* fwait */
-        if ((s->flags & (HF_MP_MASK | HF_TS_MASK)) ==
-            (HF_MP_MASK | HF_TS_MASK)) {
-            gen_exception(s, EXCP07_PREX, pc_start - s->cs_base);
-        } else {
-            gen_helper_fwait(cpu_env);
-        }
-        break;
-    case 0xcc: /* int3 */
-        gen_interrupt(s, EXCP03_INT3, pc_start - s->cs_base, s->pc - s->cs_base);
-        break;
-    case 0xcd: /* int N */
-        val = x86_ldub_code(env, s);
-        if (check_vm86_iopl(s)) {
-            gen_interrupt(s, val, pc_start - s->cs_base, s->pc - s->cs_base);
-        }
-        break;
-    case 0xce: /* into */
-        if (CODE64(s))
-            goto illegal_op;
-        gen_update_cc_op(s);
-        gen_jmp_im(s, pc_start - s->cs_base);
-        gen_helper_into(cpu_env, tcg_const_i32(s->pc - pc_start));
-        break;
-    case 0xfa: /* cli */
-        if (check_iopl(s)) {
-            gen_helper_cli(cpu_env);
-        }
-        break;
-    case 0xfb: /* sti */
-        if (check_iopl(s)) {
-            gen_helper_sti(cpu_env);
-            /* interruptions are enabled only the first insn after sti */
-            gen_jmp_im(s, s->pc - s->cs_base);
-            gen_eob_inhibit_irq(s, true);
-        }
-        break;
-    case 0x62: /* bound */
-        if (CODE64(s))
-            goto illegal_op;
-        ot = dflag;
-        modrm = x86_ldub_code(env, s);
-        reg = (modrm >> 3) & 7;
-        mod = (modrm >> 6) & 3;
-        if (mod == 3)
-            goto illegal_op;
-        gen_op_mov_v_reg(s, ot, s->T0, reg);
-        gen_lea_modrm(env, s, modrm);
-        tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
-        if (ot == MO_16) {
-            gen_helper_boundw(cpu_env, s->A0, s->tmp2_i32);
-        } else {
-            gen_helper_boundl(cpu_env, s->A0, s->tmp2_i32);
-        }
-        break;
-    case 0x1c8 ... 0x1cf: /* bswap reg */
-        reg = (b & 7) | REX_B(s);
-#ifdef TARGET_X86_64
-        if (dflag == MO_64) {
-            tcg_gen_bswap64_i64(cpu_regs[reg], cpu_regs[reg]);
-            break;
-        }
-#endif
-        tcg_gen_bswap32_tl(cpu_regs[reg], cpu_regs[reg], TCG_BSWAP_OZ);
-        break;
-    case 0xd6: /* salc */
-        if (CODE64(s))
-            goto illegal_op;
-        gen_compute_eflags_c(s, s->T0);
-        tcg_gen_neg_tl(s->T0, s->T0);
-        gen_op_mov_reg_v(s, MO_8, R_EAX, s->T0);
-        break;
-    case 0xe0: /* loopnz */
-    case 0xe1: /* loopz */
-    case 0xe2: /* loop */
-    case 0xe3: /* jecxz */
-        {
-            TCGLabel *l1, *l2, *l3;
-
-            tval = (int8_t)insn_get(env, s, MO_8);
-            next_eip = s->pc - s->cs_base;
-            tval += next_eip;
-            if (dflag == MO_16) {
-                tval &= 0xffff;
-            }
-
-            l1 = gen_new_label();
-            l2 = gen_new_label();
-            l3 = gen_new_label();
-            gen_update_cc_op(s);
-            b &= 3;
-            switch(b) {
-            case 0: /* loopnz */
-            case 1: /* loopz */
-                gen_op_add_reg_im(s, s->aflag, R_ECX, -1);
-                gen_op_jz_ecx(s, s->aflag, l3);
-                gen_jcc1(s, (JCC_Z << 1) | (b ^ 1), l1);
-                break;
-            case 2: /* loop */
-                gen_op_add_reg_im(s, s->aflag, R_ECX, -1);
-                gen_op_jnz_ecx(s, s->aflag, l1);
-                break;
-            default:
-            case 3: /* jcxz */
-                gen_op_jz_ecx(s, s->aflag, l1);
-                break;
-            }
-
-            gen_set_label(l3);
-            gen_jmp_im(s, next_eip);
-            tcg_gen_br(l2);
-
-            gen_set_label(l1);
-            gen_jmp_im(s, tval);
-            gen_set_label(l2);
-            gen_eob(s);
-        }
-        break;
-    case 0x130: /* wrmsr */
-    case 0x132: /* rdmsr */
-        if (check_cpl0(s)) {
-            gen_update_cc_op(s);
-            gen_jmp_im(s, pc_start - s->cs_base);
-            if (b & 2) {
-                gen_helper_rdmsr(cpu_env);
-            } else {
-                gen_helper_wrmsr(cpu_env);
-                gen_jmp_im(s, s->pc - s->cs_base);
-                gen_eob(s);
-            }
-        }
-        break;
-    case 0x131: /* rdtsc */
-        gen_update_cc_op(s);
-        gen_jmp_im(s, pc_start - s->cs_base);
-        if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
-            gen_io_start();
-        }
-        gen_helper_rdtsc(cpu_env);
-        if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
-            gen_jmp(s, s->pc - s->cs_base);
-        }
-        break;
-    case 0x133: /* rdpmc */
-        gen_update_cc_op(s);
-        gen_jmp_im(s, pc_start - s->cs_base);
-        gen_helper_rdpmc(cpu_env);
-        s->base.is_jmp = DISAS_NORETURN;
-        break;
-    case 0x134: /* sysenter */
-        /* For Intel SYSENTER is valid on 64-bit */
-        if (CODE64(s) && env->cpuid_vendor1 != CPUID_VENDOR_INTEL_1)
-            goto illegal_op;
-        if (!PE(s)) {
-            gen_exception_gpf(s);
-        } else {
-            gen_helper_sysenter(cpu_env);
-            gen_eob(s);
-        }
-        break;
-    case 0x135: /* sysexit */
-        /* For Intel SYSEXIT is valid on 64-bit */
-        if (CODE64(s) && env->cpuid_vendor1 != CPUID_VENDOR_INTEL_1)
-            goto illegal_op;
-        if (!PE(s)) {
-            gen_exception_gpf(s);
-        } else {
-            gen_helper_sysexit(cpu_env, tcg_const_i32(dflag - 1));
-            gen_eob(s);
-        }
-        break;
-#ifdef TARGET_X86_64
-    case 0x105: /* syscall */
-        /* XXX: is it usable in real mode ? */
-        gen_update_cc_op(s);
-        gen_jmp_im(s, pc_start - s->cs_base);
-        gen_helper_syscall(cpu_env, tcg_const_i32(s->pc - pc_start));
-        /* TF handling for the syscall insn is different. The TF bit is checked
-           after the syscall insn completes. This allows #DB to not be
-           generated after one has entered CPL0 if TF is set in FMASK.  */
-        gen_eob_worker(s, false, true);
-        break;
-    case 0x107: /* sysret */
-        if (!PE(s)) {
-            gen_exception_gpf(s);
-        } else {
-            gen_helper_sysret(cpu_env, tcg_const_i32(dflag - 1));
-            /* condition codes are modified only in long mode */
-            if (LMA(s)) {
-                set_cc_op(s, CC_OP_EFLAGS);
-            }
-            /* TF handling for the sysret insn is different. The TF bit is
-               checked after the sysret insn completes. This allows #DB to be
-               generated "as if" the syscall insn in userspace has just
-               completed.  */
-            gen_eob_worker(s, false, true);
-        }
-        break;
-#endif
-    case 0x1a2: /* cpuid */
-        gen_update_cc_op(s);
-        gen_jmp_im(s, pc_start - s->cs_base);
-        gen_helper_cpuid(cpu_env);
-        break;
-    case 0xf4: /* hlt */
-        if (check_cpl0(s)) {
-            gen_update_cc_op(s);
-            gen_jmp_im(s, pc_start - s->cs_base);
-            gen_helper_hlt(cpu_env, tcg_const_i32(s->pc - pc_start));
-            s->base.is_jmp = DISAS_NORETURN;
-        }
-        break;
-    case 0x100:
-        modrm = x86_ldub_code(env, s);
-        mod = (modrm >> 6) & 3;
-        op = (modrm >> 3) & 7;
-        switch(op) {
-        case 0: /* sldt */
-            if (!PE(s) || VM86(s))
-                goto illegal_op;
-            if (s->flags & HF_UMIP_MASK && !check_cpl0(s)) {
-                break;
-            }
-            gen_svm_check_intercept(s, SVM_EXIT_LDTR_READ);
-            tcg_gen_ld32u_tl(s->T0, cpu_env,
-                             offsetof(CPUX86State, ldt.selector));
-            ot = mod == 3 ? dflag : MO_16;
-            gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
-            break;
-        case 2: /* lldt */
-            if (!PE(s) || VM86(s))
-                goto illegal_op;
-            if (check_cpl0(s)) {
-                gen_svm_check_intercept(s, SVM_EXIT_LDTR_WRITE);
-                gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
-                tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
-                gen_helper_lldt(cpu_env, s->tmp2_i32);
-            }
-            break;
-        case 1: /* str */
-            if (!PE(s) || VM86(s))
-                goto illegal_op;
-            if (s->flags & HF_UMIP_MASK && !check_cpl0(s)) {
-                break;
-            }
-            gen_svm_check_intercept(s, SVM_EXIT_TR_READ);
-            tcg_gen_ld32u_tl(s->T0, cpu_env,
-                             offsetof(CPUX86State, tr.selector));
-            ot = mod == 3 ? dflag : MO_16;
-            gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
-            break;
-        case 3: /* ltr */
-            if (!PE(s) || VM86(s))
-                goto illegal_op;
-            if (check_cpl0(s)) {
-                gen_svm_check_intercept(s, SVM_EXIT_TR_WRITE);
-                gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
-                tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
-                gen_helper_ltr(cpu_env, s->tmp2_i32);
-            }
-            break;
-        case 4: /* verr */
-        case 5: /* verw */
-            if (!PE(s) || VM86(s))
-                goto illegal_op;
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
-            gen_update_cc_op(s);
-            if (op == 4) {
-                gen_helper_verr(cpu_env, s->T0);
-            } else {
-                gen_helper_verw(cpu_env, s->T0);
-            }
-            set_cc_op(s, CC_OP_EFLAGS);
-            break;
-        default:
-            goto unknown_op;
-        }
-        break;
-
-    case 0x101:
-        modrm = x86_ldub_code(env, s);
-        switch (modrm) {
-        CASE_MODRM_MEM_OP(0): /* sgdt */
-            if (s->flags & HF_UMIP_MASK && !check_cpl0(s)) {
-                break;
-            }
-            gen_svm_check_intercept(s, SVM_EXIT_GDTR_READ);
-            gen_lea_modrm(env, s, modrm);
-            tcg_gen_ld32u_tl(s->T0,
-                             cpu_env, offsetof(CPUX86State, gdt.limit));
-            gen_op_st_v(s, MO_16, s->T0, s->A0);
-            gen_add_A0_im(s, 2);
-            tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, gdt.base));
-            if (dflag == MO_16) {
-                tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
-            }
-            gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0);
-            break;
-
-        case 0xc8: /* monitor */
-            if (!(s->cpuid_ext_features & CPUID_EXT_MONITOR) || CPL(s) != 0) {
-                goto illegal_op;
-            }
-            gen_update_cc_op(s);
-            gen_jmp_im(s, pc_start - s->cs_base);
-            tcg_gen_mov_tl(s->A0, cpu_regs[R_EAX]);
-            gen_extu(s->aflag, s->A0);
-            gen_add_A0_ds_seg(s);
-            gen_helper_monitor(cpu_env, s->A0);
-            break;
-
-        case 0xc9: /* mwait */
-            if (!(s->cpuid_ext_features & CPUID_EXT_MONITOR) || CPL(s) != 0) {
-                goto illegal_op;
-            }
-            gen_update_cc_op(s);
-            gen_jmp_im(s, pc_start - s->cs_base);
-            gen_helper_mwait(cpu_env, tcg_const_i32(s->pc - pc_start));
-            s->base.is_jmp = DISAS_NORETURN;
-            break;
-
-        case 0xca: /* clac */
-            if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_SMAP)
-                || CPL(s) != 0) {
-                goto illegal_op;
-            }
-            gen_helper_clac(cpu_env);
-            gen_jmp_im(s, s->pc - s->cs_base);
-            gen_eob(s);
-            break;
-
-        case 0xcb: /* stac */
-            if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_SMAP)
-                || CPL(s) != 0) {
-                goto illegal_op;
-            }
-            gen_helper_stac(cpu_env);
-            gen_jmp_im(s, s->pc - s->cs_base);
-            gen_eob(s);
-            break;
-
-        CASE_MODRM_MEM_OP(1): /* sidt */
-            if (s->flags & HF_UMIP_MASK && !check_cpl0(s)) {
-                break;
-            }
-            gen_svm_check_intercept(s, SVM_EXIT_IDTR_READ);
-            gen_lea_modrm(env, s, modrm);
-            tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.limit));
-            gen_op_st_v(s, MO_16, s->T0, s->A0);
-            gen_add_A0_im(s, 2);
-            tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.base));
-            if (dflag == MO_16) {
-                tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
-            }
-            gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0);
-            break;
-
-        case 0xd0: /* xgetbv */
-            if ((s->cpuid_ext_features & CPUID_EXT_XSAVE) == 0
-                || (s->prefix & (PREFIX_LOCK | PREFIX_DATA
-                                 | PREFIX_REPZ | PREFIX_REPNZ))) {
-                goto illegal_op;
-            }
-            tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[R_ECX]);
-            gen_helper_xgetbv(s->tmp1_i64, cpu_env, s->tmp2_i32);
-            tcg_gen_extr_i64_tl(cpu_regs[R_EAX], cpu_regs[R_EDX], s->tmp1_i64);
-            break;
-
-        case 0xd1: /* xsetbv */
-            if ((s->cpuid_ext_features & CPUID_EXT_XSAVE) == 0
-                || (s->prefix & (PREFIX_LOCK | PREFIX_DATA
-                                 | PREFIX_REPZ | PREFIX_REPNZ))) {
-                goto illegal_op;
-            }
-            if (!check_cpl0(s)) {
-                break;
-            }
-            tcg_gen_concat_tl_i64(s->tmp1_i64, cpu_regs[R_EAX],
-                                  cpu_regs[R_EDX]);
-            tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[R_ECX]);
-            gen_helper_xsetbv(cpu_env, s->tmp2_i32, s->tmp1_i64);
-            /* End TB because translation flags may change.  */
-            gen_jmp_im(s, s->pc - s->cs_base);
-            gen_eob(s);
-            break;
-
-        case 0xd8: /* VMRUN */
-            if (!SVME(s) || !PE(s)) {
-                goto illegal_op;
-            }
-            if (!check_cpl0(s)) {
-                break;
-            }
-            gen_update_cc_op(s);
-            gen_jmp_im(s, pc_start - s->cs_base);
-            gen_helper_vmrun(cpu_env, tcg_const_i32(s->aflag - 1),
-                             tcg_const_i32(s->pc - pc_start));
-            tcg_gen_exit_tb(NULL, 0);
-            s->base.is_jmp = DISAS_NORETURN;
-            break;
-
-        case 0xd9: /* VMMCALL */
-            if (!SVME(s)) {
-                goto illegal_op;
-            }
-            gen_update_cc_op(s);
-            gen_jmp_im(s, pc_start - s->cs_base);
-            gen_helper_vmmcall(cpu_env);
-            break;
-
-        case 0xda: /* VMLOAD */
-            if (!SVME(s) || !PE(s)) {
-                goto illegal_op;
-            }
-            if (!check_cpl0(s)) {
-                break;
-            }
-            gen_update_cc_op(s);
-            gen_jmp_im(s, pc_start - s->cs_base);
-            gen_helper_vmload(cpu_env, tcg_const_i32(s->aflag - 1));
-            break;
-
-        case 0xdb: /* VMSAVE */
-            if (!SVME(s) || !PE(s)) {
-                goto illegal_op;
-            }
-            if (!check_cpl0(s)) {
-                break;
-            }
-            gen_update_cc_op(s);
-            gen_jmp_im(s, pc_start - s->cs_base);
-            gen_helper_vmsave(cpu_env, tcg_const_i32(s->aflag - 1));
-            break;
-
-        case 0xdc: /* STGI */
-            if ((!SVME(s) && !(s->cpuid_ext3_features & CPUID_EXT3_SKINIT))
-                || !PE(s)) {
-                goto illegal_op;
-            }
-            if (!check_cpl0(s)) {
-                break;
-            }
-            gen_update_cc_op(s);
-            gen_helper_stgi(cpu_env);
-            gen_jmp_im(s, s->pc - s->cs_base);
-            gen_eob(s);
-            break;
-
-        case 0xdd: /* CLGI */
-            if (!SVME(s) || !PE(s)) {
-                goto illegal_op;
-            }
-            if (!check_cpl0(s)) {
-                break;
-            }
-            gen_update_cc_op(s);
-            gen_jmp_im(s, pc_start - s->cs_base);
-            gen_helper_clgi(cpu_env);
-            break;
-
-        case 0xde: /* SKINIT */
-            if ((!SVME(s) && !(s->cpuid_ext3_features & CPUID_EXT3_SKINIT))
-                || !PE(s)) {
-                goto illegal_op;
-            }
-            gen_svm_check_intercept(s, SVM_EXIT_SKINIT);
-            /* If not intercepted, not implemented -- raise #UD.  */
-            goto illegal_op;
-
-        case 0xdf: /* INVLPGA */
-            if (!SVME(s) || !PE(s)) {
-                goto illegal_op;
-            }
-            if (!check_cpl0(s)) {
-                break;
-            }
-            gen_svm_check_intercept(s, SVM_EXIT_INVLPGA);
-            if (s->aflag == MO_64) {
-                tcg_gen_mov_tl(s->A0, cpu_regs[R_EAX]);
-            } else {
-                tcg_gen_ext32u_tl(s->A0, cpu_regs[R_EAX]);
-            }
-            gen_helper_flush_page(cpu_env, s->A0);
-            gen_jmp_im(s, s->pc - s->cs_base);
-            gen_eob(s);
-            break;
-
-        CASE_MODRM_MEM_OP(2): /* lgdt */
-            if (!check_cpl0(s)) {
-                break;
-            }
-            gen_svm_check_intercept(s, SVM_EXIT_GDTR_WRITE);
-            gen_lea_modrm(env, s, modrm);
-            gen_op_ld_v(s, MO_16, s->T1, s->A0);
-            gen_add_A0_im(s, 2);
-            gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0);
-            if (dflag == MO_16) {
-                tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
-            }
-            tcg_gen_st_tl(s->T0, cpu_env, offsetof(CPUX86State, gdt.base));
-            tcg_gen_st32_tl(s->T1, cpu_env, offsetof(CPUX86State, gdt.limit));
-            break;
-
-        CASE_MODRM_MEM_OP(3): /* lidt */
-            if (!check_cpl0(s)) {
-                break;
-            }
-            gen_svm_check_intercept(s, SVM_EXIT_IDTR_WRITE);
-            gen_lea_modrm(env, s, modrm);
-            gen_op_ld_v(s, MO_16, s->T1, s->A0);
-            gen_add_A0_im(s, 2);
-            gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0);
-            if (dflag == MO_16) {
-                tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
-            }
-            tcg_gen_st_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.base));
-            tcg_gen_st32_tl(s->T1, cpu_env, offsetof(CPUX86State, idt.limit));
-            break;
-
-        CASE_MODRM_OP(4): /* smsw */
-            if (s->flags & HF_UMIP_MASK && !check_cpl0(s)) {
-                break;
-            }
-            gen_svm_check_intercept(s, SVM_EXIT_READ_CR0);
-            tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, cr[0]));
-            /*
-             * In 32-bit mode, the higher 16 bits of the destination
-             * register are undefined.  In practice CR0[31:0] is stored
-             * just like in 64-bit mode.
-             */
-            mod = (modrm >> 6) & 3;
-            ot = (mod != 3 ? MO_16 : s->dflag);
-            gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
-            break;
-        case 0xee: /* rdpkru */
-            if (prefixes & PREFIX_LOCK) {
-                goto illegal_op;
-            }
-            tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[R_ECX]);
-            gen_helper_rdpkru(s->tmp1_i64, cpu_env, s->tmp2_i32);
-            tcg_gen_extr_i64_tl(cpu_regs[R_EAX], cpu_regs[R_EDX], s->tmp1_i64);
-            break;
-        case 0xef: /* wrpkru */
-            if (prefixes & PREFIX_LOCK) {
-                goto illegal_op;
-            }
-            tcg_gen_concat_tl_i64(s->tmp1_i64, cpu_regs[R_EAX],
-                                  cpu_regs[R_EDX]);
-            tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[R_ECX]);
-            gen_helper_wrpkru(cpu_env, s->tmp2_i32, s->tmp1_i64);
-            break;
-
-        CASE_MODRM_OP(6): /* lmsw */
-            if (!check_cpl0(s)) {
-                break;
-            }
-            gen_svm_check_intercept(s, SVM_EXIT_WRITE_CR0);
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
-            /*
-             * Only the 4 lower bits of CR0 are modified.
-             * PE cannot be set to zero if already set to one.
-             */
-            tcg_gen_ld_tl(s->T1, cpu_env, offsetof(CPUX86State, cr[0]));
-            tcg_gen_andi_tl(s->T0, s->T0, 0xf);
-            tcg_gen_andi_tl(s->T1, s->T1, ~0xe);
-            tcg_gen_or_tl(s->T0, s->T0, s->T1);
-            gen_helper_write_crN(cpu_env, tcg_constant_i32(0), s->T0);
-            gen_jmp_im(s, s->pc - s->cs_base);
-            gen_eob(s);
-            break;
-
-        CASE_MODRM_MEM_OP(7): /* invlpg */
-            if (!check_cpl0(s)) {
-                break;
-            }
-            gen_svm_check_intercept(s, SVM_EXIT_INVLPG);
-            gen_lea_modrm(env, s, modrm);
-            gen_helper_flush_page(cpu_env, s->A0);
-            gen_jmp_im(s, s->pc - s->cs_base);
-            gen_eob(s);
-            break;
-
-        case 0xf8: /* swapgs */
-#ifdef TARGET_X86_64
-            if (CODE64(s)) {
-                if (check_cpl0(s)) {
-                    tcg_gen_mov_tl(s->T0, cpu_seg_base[R_GS]);
-                    tcg_gen_ld_tl(cpu_seg_base[R_GS], cpu_env,
-                                  offsetof(CPUX86State, kernelgsbase));
-                    tcg_gen_st_tl(s->T0, cpu_env,
-                                  offsetof(CPUX86State, kernelgsbase));
-                }
-                break;
-            }
-#endif
-            goto illegal_op;
-
-        case 0xf9: /* rdtscp */
-            if (!(s->cpuid_ext2_features & CPUID_EXT2_RDTSCP)) {
-                goto illegal_op;
-            }
-            gen_update_cc_op(s);
-            gen_jmp_im(s, pc_start - s->cs_base);
-            if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
-                gen_io_start();
-            }
-            gen_helper_rdtscp(cpu_env);
-            if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
-                gen_jmp(s, s->pc - s->cs_base);
-            }
-            break;
-
-        default:
-            goto unknown_op;
-        }
-        break;
-
-    case 0x108: /* invd */
-    case 0x109: /* wbinvd */
-        if (check_cpl0(s)) {
-            gen_svm_check_intercept(s, (b & 2) ? SVM_EXIT_INVD : SVM_EXIT_WBINVD);
-            /* nothing to do */
-        }
-        break;
-    case 0x63: /* arpl or movslS (x86_64) */
-#ifdef TARGET_X86_64
-        if (CODE64(s)) {
-            int d_ot;
-            /* d_ot is the size of destination */
-            d_ot = dflag;
-
-            modrm = x86_ldub_code(env, s);
-            reg = ((modrm >> 3) & 7) | REX_R(s);
-            mod = (modrm >> 6) & 3;
-            rm = (modrm & 7) | REX_B(s);
-
-            if (mod == 3) {
-                gen_op_mov_v_reg(s, MO_32, s->T0, rm);
-                /* sign extend */
-                if (d_ot == MO_64) {
-                    tcg_gen_ext32s_tl(s->T0, s->T0);
-                }
-                gen_op_mov_reg_v(s, d_ot, reg, s->T0);
-            } else {
-                gen_lea_modrm(env, s, modrm);
-                gen_op_ld_v(s, MO_32 | MO_SIGN, s->T0, s->A0);
-                gen_op_mov_reg_v(s, d_ot, reg, s->T0);
-            }
-        } else
-#endif
-        {
-            TCGLabel *label1;
-            TCGv t0, t1, t2, a0;
-
-            if (!PE(s) || VM86(s))
-                goto illegal_op;
-            t0 = tcg_temp_local_new();
-            t1 = tcg_temp_local_new();
-            t2 = tcg_temp_local_new();
-            ot = MO_16;
-            modrm = x86_ldub_code(env, s);
-            reg = (modrm >> 3) & 7;
-            mod = (modrm >> 6) & 3;
-            rm = modrm & 7;
-            if (mod != 3) {
-                gen_lea_modrm(env, s, modrm);
-                gen_op_ld_v(s, ot, t0, s->A0);
-                a0 = tcg_temp_local_new();
-                tcg_gen_mov_tl(a0, s->A0);
-            } else {
-                gen_op_mov_v_reg(s, ot, t0, rm);
-                a0 = NULL;
-            }
-            gen_op_mov_v_reg(s, ot, t1, reg);
-            tcg_gen_andi_tl(s->tmp0, t0, 3);
-            tcg_gen_andi_tl(t1, t1, 3);
-            tcg_gen_movi_tl(t2, 0);
-            label1 = gen_new_label();
-            tcg_gen_brcond_tl(TCG_COND_GE, s->tmp0, t1, label1);
-            tcg_gen_andi_tl(t0, t0, ~3);
-            tcg_gen_or_tl(t0, t0, t1);
-            tcg_gen_movi_tl(t2, CC_Z);
-            gen_set_label(label1);
-            if (mod != 3) {
-                gen_op_st_v(s, ot, t0, a0);
-                tcg_temp_free(a0);
-            } else {
-                gen_op_mov_reg_v(s, ot, rm, t0);
-            }
-            gen_compute_eflags(s);
-            tcg_gen_andi_tl(cpu_cc_src, cpu_cc_src, ~CC_Z);
-            tcg_gen_or_tl(cpu_cc_src, cpu_cc_src, t2);
-            tcg_temp_free(t0);
-            tcg_temp_free(t1);
-            tcg_temp_free(t2);
-        }
-        break;
-    case 0x102: /* lar */
-    case 0x103: /* lsl */
-        {
-            TCGLabel *label1;
-            TCGv t0;
-            if (!PE(s) || VM86(s))
-                goto illegal_op;
-            ot = dflag != MO_16 ? MO_32 : MO_16;
-            modrm = x86_ldub_code(env, s);
-            reg = ((modrm >> 3) & 7) | REX_R(s);
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
-            t0 = tcg_temp_local_new();
-            gen_update_cc_op(s);
-            if (b == 0x102) {
-                gen_helper_lar(t0, cpu_env, s->T0);
-            } else {
-                gen_helper_lsl(t0, cpu_env, s->T0);
-            }
-            tcg_gen_andi_tl(s->tmp0, cpu_cc_src, CC_Z);
-            label1 = gen_new_label();
-            tcg_gen_brcondi_tl(TCG_COND_EQ, s->tmp0, 0, label1);
-            gen_op_mov_reg_v(s, ot, reg, t0);
-            gen_set_label(label1);
-            set_cc_op(s, CC_OP_EFLAGS);
-            tcg_temp_free(t0);
-        }
-        break;
-    case 0x118:
-        modrm = x86_ldub_code(env, s);
-        mod = (modrm >> 6) & 3;
-        op = (modrm >> 3) & 7;
-        switch(op) {
-        case 0: /* prefetchnta */
-        case 1: /* prefetchnt0 */
-        case 2: /* prefetchnt0 */
-        case 3: /* prefetchnt0 */
-            if (mod == 3)
-                goto illegal_op;
-            gen_nop_modrm(env, s, modrm);
-            /* nothing more to do */
-            break;
-        default: /* nop (multi byte) */
-            gen_nop_modrm(env, s, modrm);
-            break;
-        }
-        break;
-    case 0x11a:
-        modrm = x86_ldub_code(env, s);
-        if (s->flags & HF_MPX_EN_MASK) {
-            mod = (modrm >> 6) & 3;
-            reg = ((modrm >> 3) & 7) | REX_R(s);
-            if (prefixes & PREFIX_REPZ) {
-                /* bndcl */
-                if (reg >= 4
-                    || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
-                    goto illegal_op;
-                }
-                gen_bndck(env, s, modrm, TCG_COND_LTU, cpu_bndl[reg]);
-            } else if (prefixes & PREFIX_REPNZ) {
-                /* bndcu */
-                if (reg >= 4
-                    || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
-                    goto illegal_op;
-                }
-                TCGv_i64 notu = tcg_temp_new_i64();
-                tcg_gen_not_i64(notu, cpu_bndu[reg]);
-                gen_bndck(env, s, modrm, TCG_COND_GTU, notu);
-                tcg_temp_free_i64(notu);
-            } else if (prefixes & PREFIX_DATA) {
-                /* bndmov -- from reg/mem */
-                if (reg >= 4 || s->aflag == MO_16) {
-                    goto illegal_op;
-                }
-                if (mod == 3) {
-                    int reg2 = (modrm & 7) | REX_B(s);
-                    if (reg2 >= 4 || (prefixes & PREFIX_LOCK)) {
-                        goto illegal_op;
-                    }
-                    if (s->flags & HF_MPX_IU_MASK) {
-                        tcg_gen_mov_i64(cpu_bndl[reg], cpu_bndl[reg2]);
-                        tcg_gen_mov_i64(cpu_bndu[reg], cpu_bndu[reg2]);
-                    }
-                } else {
-                    gen_lea_modrm(env, s, modrm);
-                    if (CODE64(s)) {
-                        tcg_gen_qemu_ld_i64(cpu_bndl[reg], s->A0,
-                                            s->mem_index, MO_LEUQ);
-                        tcg_gen_addi_tl(s->A0, s->A0, 8);
-                        tcg_gen_qemu_ld_i64(cpu_bndu[reg], s->A0,
-                                            s->mem_index, MO_LEUQ);
-                    } else {
-                        tcg_gen_qemu_ld_i64(cpu_bndl[reg], s->A0,
-                                            s->mem_index, MO_LEUL);
-                        tcg_gen_addi_tl(s->A0, s->A0, 4);
-                        tcg_gen_qemu_ld_i64(cpu_bndu[reg], s->A0,
-                                            s->mem_index, MO_LEUL);
-                    }
-                    /* bnd registers are now in-use */
-                    gen_set_hflag(s, HF_MPX_IU_MASK);
-                }
-            } else if (mod != 3) {
-                /* bndldx */
-                AddressParts a = gen_lea_modrm_0(env, s, modrm);
-                if (reg >= 4
-                    || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16
-                    || a.base < -1) {
-                    goto illegal_op;
-                }
-                if (a.base >= 0) {
-                    tcg_gen_addi_tl(s->A0, cpu_regs[a.base], a.disp);
-                } else {
-                    tcg_gen_movi_tl(s->A0, 0);
-                }
-                gen_lea_v_seg(s, s->aflag, s->A0, a.def_seg, s->override);
-                if (a.index >= 0) {
-                    tcg_gen_mov_tl(s->T0, cpu_regs[a.index]);
-                } else {
-                    tcg_gen_movi_tl(s->T0, 0);
-                }
-                if (CODE64(s)) {
-                    gen_helper_bndldx64(cpu_bndl[reg], cpu_env, s->A0, s->T0);
-                    tcg_gen_ld_i64(cpu_bndu[reg], cpu_env,
-                                   offsetof(CPUX86State, mmx_t0.MMX_Q(0)));
-                } else {
-                    gen_helper_bndldx32(cpu_bndu[reg], cpu_env, s->A0, s->T0);
-                    tcg_gen_ext32u_i64(cpu_bndl[reg], cpu_bndu[reg]);
-                    tcg_gen_shri_i64(cpu_bndu[reg], cpu_bndu[reg], 32);
-                }
-                gen_set_hflag(s, HF_MPX_IU_MASK);
-            }
-        }
-        gen_nop_modrm(env, s, modrm);
-        break;
-    case 0x11b:
-        modrm = x86_ldub_code(env, s);
-        if (s->flags & HF_MPX_EN_MASK) {
-            mod = (modrm >> 6) & 3;
-            reg = ((modrm >> 3) & 7) | REX_R(s);
-            if (mod != 3 && (prefixes & PREFIX_REPZ)) {
-                /* bndmk */
-                if (reg >= 4
-                    || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
-                    goto illegal_op;
-                }
-                AddressParts a = gen_lea_modrm_0(env, s, modrm);
-                if (a.base >= 0) {
-                    tcg_gen_extu_tl_i64(cpu_bndl[reg], cpu_regs[a.base]);
-                    if (!CODE64(s)) {
-                        tcg_gen_ext32u_i64(cpu_bndl[reg], cpu_bndl[reg]);
-                    }
-                } else if (a.base == -1) {
-                    /* no base register has lower bound of 0 */
-                    tcg_gen_movi_i64(cpu_bndl[reg], 0);
-                } else {
-                    /* rip-relative generates #ud */
-                    goto illegal_op;
-                }
-                tcg_gen_not_tl(s->A0, gen_lea_modrm_1(s, a));
-                if (!CODE64(s)) {
-                    tcg_gen_ext32u_tl(s->A0, s->A0);
-                }
-                tcg_gen_extu_tl_i64(cpu_bndu[reg], s->A0);
-                /* bnd registers are now in-use */
-                gen_set_hflag(s, HF_MPX_IU_MASK);
-                break;
-            } else if (prefixes & PREFIX_REPNZ) {
-                /* bndcn */
-                if (reg >= 4
-                    || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
-                    goto illegal_op;
-                }
-                gen_bndck(env, s, modrm, TCG_COND_GTU, cpu_bndu[reg]);
-            } else if (prefixes & PREFIX_DATA) {
-                /* bndmov -- to reg/mem */
-                if (reg >= 4 || s->aflag == MO_16) {
-                    goto illegal_op;
-                }
-                if (mod == 3) {
-                    int reg2 = (modrm & 7) | REX_B(s);
-                    if (reg2 >= 4 || (prefixes & PREFIX_LOCK)) {
-                        goto illegal_op;
-                    }
-                    if (s->flags & HF_MPX_IU_MASK) {
-                        tcg_gen_mov_i64(cpu_bndl[reg2], cpu_bndl[reg]);
-                        tcg_gen_mov_i64(cpu_bndu[reg2], cpu_bndu[reg]);
-                    }
-                } else {
-                    gen_lea_modrm(env, s, modrm);
-                    if (CODE64(s)) {
-                        tcg_gen_qemu_st_i64(cpu_bndl[reg], s->A0,
-                                            s->mem_index, MO_LEUQ);
-                        tcg_gen_addi_tl(s->A0, s->A0, 8);
-                        tcg_gen_qemu_st_i64(cpu_bndu[reg], s->A0,
-                                            s->mem_index, MO_LEUQ);
-                    } else {
-                        tcg_gen_qemu_st_i64(cpu_bndl[reg], s->A0,
-                                            s->mem_index, MO_LEUL);
-                        tcg_gen_addi_tl(s->A0, s->A0, 4);
-                        tcg_gen_qemu_st_i64(cpu_bndu[reg], s->A0,
-                                            s->mem_index, MO_LEUL);
-                    }
-                }
-            } else if (mod != 3) {
-                /* bndstx */
-                AddressParts a = gen_lea_modrm_0(env, s, modrm);
-                if (reg >= 4
-                    || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16
-                    || a.base < -1) {
-                    goto illegal_op;
-                }
-                if (a.base >= 0) {
-                    tcg_gen_addi_tl(s->A0, cpu_regs[a.base], a.disp);
-                } else {
-                    tcg_gen_movi_tl(s->A0, 0);
-                }
-                gen_lea_v_seg(s, s->aflag, s->A0, a.def_seg, s->override);
-                if (a.index >= 0) {
-                    tcg_gen_mov_tl(s->T0, cpu_regs[a.index]);
-                } else {
-                    tcg_gen_movi_tl(s->T0, 0);
-                }
-                if (CODE64(s)) {
-                    gen_helper_bndstx64(cpu_env, s->A0, s->T0,
-                                        cpu_bndl[reg], cpu_bndu[reg]);
-                } else {
-                    gen_helper_bndstx32(cpu_env, s->A0, s->T0,
-                                        cpu_bndl[reg], cpu_bndu[reg]);
-                }
-            }
-        }
-        gen_nop_modrm(env, s, modrm);
-        break;
-    case 0x119: case 0x11c ... 0x11f: /* nop (multi byte) */
-        modrm = x86_ldub_code(env, s);
-        gen_nop_modrm(env, s, modrm);
-        break;
-
-    case 0x120: /* mov reg, crN */
-    case 0x122: /* mov crN, reg */
-        if (!check_cpl0(s)) {
-            break;
-        }
-        modrm = x86_ldub_code(env, s);
-        /*
-         * Ignore the mod bits (assume (modrm&0xc0)==0xc0).
-         * AMD documentation (24594.pdf) and testing of Intel 386 and 486
-         * processors all show that the mod bits are assumed to be 1's,
-         * regardless of actual values.
-         */
-        rm = (modrm & 7) | REX_B(s);
-        reg = ((modrm >> 3) & 7) | REX_R(s);
-        switch (reg) {
-        case 0:
-            if ((prefixes & PREFIX_LOCK) &&
-                (s->cpuid_ext3_features & CPUID_EXT3_CR8LEG)) {
-                reg = 8;
-            }
-            break;
-        case 2:
-        case 3:
-        case 4:
-        case 8:
-            break;
-        default:
-            goto unknown_op;
-        }
-        ot = (CODE64(s) ? MO_64 : MO_32);
-
-        if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
-            gen_io_start();
-        }
-        if (b & 2) {
-            gen_svm_check_intercept(s, SVM_EXIT_WRITE_CR0 + reg);
-            gen_op_mov_v_reg(s, ot, s->T0, rm);
-            gen_helper_write_crN(cpu_env, tcg_constant_i32(reg), s->T0);
-            gen_jmp_im(s, s->pc - s->cs_base);
-            gen_eob(s);
-        } else {
-            gen_svm_check_intercept(s, SVM_EXIT_READ_CR0 + reg);
-            gen_helper_read_crN(s->T0, cpu_env, tcg_constant_i32(reg));
-            gen_op_mov_reg_v(s, ot, rm, s->T0);
-            if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
-                gen_jmp(s, s->pc - s->cs_base);
-            }
-        }
-        break;
-
-    case 0x121: /* mov reg, drN */
-    case 0x123: /* mov drN, reg */
-        if (check_cpl0(s)) {
-            modrm = x86_ldub_code(env, s);
-            /* Ignore the mod bits (assume (modrm&0xc0)==0xc0).
-             * AMD documentation (24594.pdf) and testing of
-             * intel 386 and 486 processors all show that the mod bits
-             * are assumed to be 1's, regardless of actual values.
-             */
-            rm = (modrm & 7) | REX_B(s);
-            reg = ((modrm >> 3) & 7) | REX_R(s);
-            if (CODE64(s))
-                ot = MO_64;
-            else
-                ot = MO_32;
-            if (reg >= 8) {
-                goto illegal_op;
-            }
-            if (b & 2) {
-                gen_svm_check_intercept(s, SVM_EXIT_WRITE_DR0 + reg);
-                gen_op_mov_v_reg(s, ot, s->T0, rm);
-                tcg_gen_movi_i32(s->tmp2_i32, reg);
-                gen_helper_set_dr(cpu_env, s->tmp2_i32, s->T0);
-                gen_jmp_im(s, s->pc - s->cs_base);
-                gen_eob(s);
-            } else {
-                gen_svm_check_intercept(s, SVM_EXIT_READ_DR0 + reg);
-                tcg_gen_movi_i32(s->tmp2_i32, reg);
-                gen_helper_get_dr(s->T0, cpu_env, s->tmp2_i32);
-                gen_op_mov_reg_v(s, ot, rm, s->T0);
-            }
-        }
-        break;
-    case 0x106: /* clts */
-        if (check_cpl0(s)) {
-            gen_svm_check_intercept(s, SVM_EXIT_WRITE_CR0);
-            gen_helper_clts(cpu_env);
-            /* abort block because static cpu state changed */
-            gen_jmp_im(s, s->pc - s->cs_base);
-            gen_eob(s);
-        }
-        break;
-        /* MMX/3DNow!/SSE/SSE2/SSE3/SSSE3/SSE4 support */
-    case 0x1c3: /* MOVNTI reg, mem */
-        if (!(s->cpuid_features & CPUID_SSE2))
-            goto illegal_op;
-        ot = mo_64_32(dflag);
-        modrm = x86_ldub_code(env, s);
-        mod = (modrm >> 6) & 3;
-        if (mod == 3)
-            goto illegal_op;
-        reg = ((modrm >> 3) & 7) | REX_R(s);
-        /* generate a generic store */
-        gen_ldst_modrm(env, s, modrm, ot, reg, 1);
-        break;
-    case 0x1ae:
-        modrm = x86_ldub_code(env, s);
-        switch (modrm) {
-        CASE_MODRM_MEM_OP(0): /* fxsave */
-            if (!(s->cpuid_features & CPUID_FXSR)
-                || (prefixes & PREFIX_LOCK)) {
-                goto illegal_op;
-            }
-            if ((s->flags & HF_EM_MASK) || (s->flags & HF_TS_MASK)) {
-                gen_exception(s, EXCP07_PREX, pc_start - s->cs_base);
-                break;
-            }
-            gen_lea_modrm(env, s, modrm);
-            gen_helper_fxsave(cpu_env, s->A0);
-            break;
-
-        CASE_MODRM_MEM_OP(1): /* fxrstor */
-            if (!(s->cpuid_features & CPUID_FXSR)
-                || (prefixes & PREFIX_LOCK)) {
-                goto illegal_op;
-            }
-            if ((s->flags & HF_EM_MASK) || (s->flags & HF_TS_MASK)) {
-                gen_exception(s, EXCP07_PREX, pc_start - s->cs_base);
-                break;
-            }
-            gen_lea_modrm(env, s, modrm);
-            gen_helper_fxrstor(cpu_env, s->A0);
-            break;
-
-        CASE_MODRM_MEM_OP(2): /* ldmxcsr */
-            if ((s->flags & HF_EM_MASK) || !(s->flags & HF_OSFXSR_MASK)) {
-                goto illegal_op;
-            }
-            if (s->flags & HF_TS_MASK) {
-                gen_exception(s, EXCP07_PREX, pc_start - s->cs_base);
-                break;
-            }
-            gen_lea_modrm(env, s, modrm);
-            tcg_gen_qemu_ld_i32(s->tmp2_i32, s->A0, s->mem_index, MO_LEUL);
-            gen_helper_ldmxcsr(cpu_env, s->tmp2_i32);
-            break;
-
-        CASE_MODRM_MEM_OP(3): /* stmxcsr */
-            if ((s->flags & HF_EM_MASK) || !(s->flags & HF_OSFXSR_MASK)) {
-                goto illegal_op;
-            }
-            if (s->flags & HF_TS_MASK) {
-                gen_exception(s, EXCP07_PREX, pc_start - s->cs_base);
-                break;
-            }
-            gen_helper_update_mxcsr(cpu_env);
-            gen_lea_modrm(env, s, modrm);
-            tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, mxcsr));
-            gen_op_st_v(s, MO_32, s->T0, s->A0);
-            break;
-
-        CASE_MODRM_MEM_OP(4): /* xsave */
-            if ((s->cpuid_ext_features & CPUID_EXT_XSAVE) == 0
-                || (prefixes & (PREFIX_LOCK | PREFIX_DATA
-                                | PREFIX_REPZ | PREFIX_REPNZ))) {
-                goto illegal_op;
-            }
-            gen_lea_modrm(env, s, modrm);
-            tcg_gen_concat_tl_i64(s->tmp1_i64, cpu_regs[R_EAX],
-                                  cpu_regs[R_EDX]);
-            gen_helper_xsave(cpu_env, s->A0, s->tmp1_i64);
-            break;
-
-        CASE_MODRM_MEM_OP(5): /* xrstor */
-            if ((s->cpuid_ext_features & CPUID_EXT_XSAVE) == 0
-                || (prefixes & (PREFIX_LOCK | PREFIX_DATA
-                                | PREFIX_REPZ | PREFIX_REPNZ))) {
-                goto illegal_op;
-            }
-            gen_lea_modrm(env, s, modrm);
-            tcg_gen_concat_tl_i64(s->tmp1_i64, cpu_regs[R_EAX],
-                                  cpu_regs[R_EDX]);
-            gen_helper_xrstor(cpu_env, s->A0, s->tmp1_i64);
-            /* XRSTOR is how MPX is enabled, which changes how
-               we translate.  Thus we need to end the TB.
*/ - gen_update_cc_op(s); - gen_jmp_im(s, s->pc - s->cs_base); - gen_eob(s); - break; - - CASE_MODRM_MEM_OP(6): /* xsaveopt / clwb */ - if (prefixes & PREFIX_LOCK) { - goto illegal_op; - } - if (prefixes & PREFIX_DATA) { - /* clwb */ - if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_CLWB)) { - goto illegal_op; - } - gen_nop_modrm(env, s, modrm); - } else { - /* xsaveopt */ - if ((s->cpuid_ext_features & CPUID_EXT_XSAVE) =3D=3D 0 - || (s->cpuid_xsave_features & CPUID_XSAVE_XSAVEOPT) = =3D=3D 0 - || (prefixes & (PREFIX_REPZ | PREFIX_REPNZ))) { - goto illegal_op; - } - gen_lea_modrm(env, s, modrm); - tcg_gen_concat_tl_i64(s->tmp1_i64, cpu_regs[R_EAX], - cpu_regs[R_EDX]); - gen_helper_xsaveopt(cpu_env, s->A0, s->tmp1_i64); - } - break; - - CASE_MODRM_MEM_OP(7): /* clflush / clflushopt */ - if (prefixes & PREFIX_LOCK) { - goto illegal_op; - } - if (prefixes & PREFIX_DATA) { - /* clflushopt */ - if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_CLFLUSHOPT= )) { - goto illegal_op; - } - } else { - /* clflush */ - if ((s->prefix & (PREFIX_REPZ | PREFIX_REPNZ)) - || !(s->cpuid_features & CPUID_CLFLUSH)) { - goto illegal_op; - } - } - gen_nop_modrm(env, s, modrm); - break; - - case 0xc0 ... 0xc7: /* rdfsbase (f3 0f ae /0) */ - case 0xc8 ... 0xcf: /* rdgsbase (f3 0f ae /1) */ - case 0xd0 ... 0xd7: /* wrfsbase (f3 0f ae /2) */ - case 0xd8 ... 0xdf: /* wrgsbase (f3 0f ae /3) */ - if (CODE64(s) - && (prefixes & PREFIX_REPZ) - && !(prefixes & PREFIX_LOCK) - && (s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_FSGSBASE)) { - TCGv base, treg, src, dst; - - /* Preserve hflags bits by testing CR4 at runtime. */ - tcg_gen_movi_i32(s->tmp2_i32, CR4_FSGSBASE_MASK); - gen_helper_cr4_testbit(cpu_env, s->tmp2_i32); - - base =3D cpu_seg_base[modrm & 8 ? 
R_GS : R_FS]; - treg =3D cpu_regs[(modrm & 7) | REX_B(s)]; - - if (modrm & 0x10) { - /* wr*base */ - dst =3D base, src =3D treg; - } else { - /* rd*base */ - dst =3D treg, src =3D base; - } - - if (s->dflag =3D=3D MO_32) { - tcg_gen_ext32u_tl(dst, src); - } else { - tcg_gen_mov_tl(dst, src); - } - break; - } - goto unknown_op; - - case 0xf8: /* sfence / pcommit */ - if (prefixes & PREFIX_DATA) { - /* pcommit */ - if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_PCOMMIT) - || (prefixes & PREFIX_LOCK)) { - goto illegal_op; - } - break; - } - /* fallthru */ - case 0xf9 ... 0xff: /* sfence */ - if (!(s->cpuid_features & CPUID_SSE) - || (prefixes & PREFIX_LOCK)) { - goto illegal_op; - } - tcg_gen_mb(TCG_MO_ST_ST | TCG_BAR_SC); - break; - case 0xe8 ... 0xef: /* lfence */ - if (!(s->cpuid_features & CPUID_SSE) - || (prefixes & PREFIX_LOCK)) { - goto illegal_op; - } - tcg_gen_mb(TCG_MO_LD_LD | TCG_BAR_SC); - break; - case 0xf0 ... 0xf7: /* mfence */ - if (!(s->cpuid_features & CPUID_SSE2) - || (prefixes & PREFIX_LOCK)) { - goto illegal_op; - } - tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC); - break; - - default: - goto unknown_op; - } - break; - - case 0x10d: /* 3DNow! 
prefetch(w) */ - modrm =3D x86_ldub_code(env, s); - mod =3D (modrm >> 6) & 3; - if (mod =3D=3D 3) - goto illegal_op; - gen_nop_modrm(env, s, modrm); - break; - case 0x1aa: /* rsm */ - gen_svm_check_intercept(s, SVM_EXIT_RSM); - if (!(s->flags & HF_SMM_MASK)) - goto illegal_op; -#ifdef CONFIG_USER_ONLY - /* we should not be in SMM mode */ - g_assert_not_reached(); -#else - gen_update_cc_op(s); - gen_jmp_im(s, s->pc - s->cs_base); - gen_helper_rsm(cpu_env); -#endif /* CONFIG_USER_ONLY */ - gen_eob(s); - break; - case 0x1b8: /* SSE4.2 popcnt */ - if ((prefixes & (PREFIX_REPZ | PREFIX_LOCK | PREFIX_REPNZ)) !=3D - PREFIX_REPZ) - goto illegal_op; - if (!(s->cpuid_ext_features & CPUID_EXT_POPCNT)) - goto illegal_op; - - modrm =3D x86_ldub_code(env, s); - reg =3D ((modrm >> 3) & 7) | REX_R(s); - - if (s->prefix & PREFIX_DATA) { - ot =3D MO_16; - } else { - ot =3D mo_64_32(dflag); - } - - gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); - gen_extu(ot, s->T0); - tcg_gen_mov_tl(cpu_cc_src, s->T0); - tcg_gen_ctpop_tl(s->T0, s->T0); - gen_op_mov_reg_v(s, ot, reg, s->T0); - - set_cc_op(s, CC_OP_POPCNT); - break; - case 0x10e ... 0x10f: - /* 3DNow! instructions, ignore prefixes */ - s->prefix &=3D ~(PREFIX_REPZ | PREFIX_REPNZ | PREFIX_DATA); - /* fall through */ - case 0x110 ... 0x117: - case 0x128 ... 0x12f: - case 0x138 ... 0x13a: - case 0x150 ... 0x179: - case 0x17c ... 0x17f: - case 0x1c2: - case 0x1c4 ... 0x1c6: - case 0x1d0 ... 
0x1fe: - gen_sse(env, s, b, pc_start); - break; - default: - goto unknown_op; - } - return s->pc; - illegal_op: - gen_illegal_opcode(s); - return s->pc; - unknown_op: - gen_unknown_opcode(env, s); - return s->pc; -} +#include "decode-old.c.inc" =20 void tcg_x86_init(void) { --=20 2.37.1

From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, paul@nowt.org
Subject: [PATCH 02/17] target/i386: introduce insn_get_addr
Date: Wed, 24 Aug 2022 19:31:08 +0200
Message-Id: <20220824173123.232018-3-pbonzini@redhat.com>
In-Reply-To: <20220824173123.232018-1-pbonzini@redhat.com>
References: <20220824173123.232018-1-pbonzini@redhat.com>

The "O" operand type in the Intel SDM needs to load an 8- to 64-bit unsigned value, while insn_get is limited to 32 bits. Extract the code out of disas_insn and into a separate function.

Signed-off-by: Paolo Bonzini
Reviewed-by: Richard Henderson
---
target/i386/tcg/decode-old.c.inc | 11 +---------- target/i386/tcg/translate.c | 25 +++++++++++++++++++++++++ 2 files changed, 26 insertions(+), 10 deletions(-)

diff --git a/target/i386/tcg/decode-old.c.inc b/target/i386/tcg/decode-old.c.inc index 61448ab7c9..603642d6e1 100644 --- a/target/i386/tcg/decode-old.c.inc +++ b/target/i386/tcg/decode-old.c.inc @@ -2930,16 +2930,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu) target_ulong offset_addr; =20 ot =3D mo_b_d(b, dflag); - switch (s->aflag) { -#ifdef TARGET_X86_64 - case MO_64: - offset_addr =3D x86_ldq_code(env, s); - break; -#endif - default: - offset_addr =3D insn_get(env, s, s->aflag); - break; - } + offset_addr =3D insn_get_addr(env, s, s->aflag); tcg_gen_movi_tl(s->A0, offset_addr); gen_add_A0_ds_seg(s); if ((b & 2) =3D=3D 0) { diff --git a/target/i386/tcg/translate.c b/target/i386/tcg/translate.c index d2d6eb89e7..5c4b685de7 100644 --- a/target/i386/tcg/translate.c +++ b/target/i386/tcg/translate.c @@ -2268,6 +2268,31 @@ static void gen_ldst_modrm(CPUX86State *env, DisasContext *s, int modrm, } } =20 +static
inline target_ulong insn_get_addr(CPUX86State *env, DisasContext *s, MemOp ot) +{ + target_ulong ret; + + switch (ot) { + case MO_8: + ret =3D x86_ldub_code(env, s); + break; + case MO_16: + ret =3D x86_lduw_code(env, s); + break; + case MO_32: + ret =3D x86_ldl_code(env, s); + break; +#ifdef TARGET_X86_64 + case MO_64: + ret =3D x86_ldq_code(env, s); + break; +#endif + default: + tcg_abort(); + } + return ret; +} + static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, MemOp ot) { uint32_t ret; --=20 2.37.1

From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, paul@nowt.org
Subject: [PATCH 03/17] target/i386: add core of new i386 decoder
Date: Wed, 24 Aug 2022 19:31:09 +0200
Message-Id: <20220824173123.232018-4-pbonzini@redhat.com>
In-Reply-To: <20220824173123.232018-1-pbonzini@redhat.com>
References: <20220824173123.232018-1-pbonzini@redhat.com>

The new decoder is based on three principles:

- use mostly table-driven decoding, using tables derived as much as possible from the Intel manual, keeping the code as "non-branchy" as possible

- keep address generation and (for ALU operands) memory loads and writeback as much in common code as possible (this is less relevant to non-ALU instructions because read-modify-write operations are rare)

- do minimal changes on the old decoder while allowing incremental replacement of the old decoder with the new one

This patch introduces the main decoder flow, and integrates the old decoder with the new one. The old decoder takes care of parsing prefixes and then optionally drops to the new one. There is a debugging mechanism through a "LIMIT" environment variable. In user-mode emulation, the variable is the number of instructions decoded by the new decoder before permanently switching to the old one. In system emulation, the variable is the highest opcode that is decoded by the new decoder (this is less friendly, but it's the best that can be done without requiring deterministic execution).
Signed-off-by: Paolo Bonzini --- target/i386/tcg/decode-new.c.inc | 998 +++++++++++++++++++++++++++++++ target/i386/tcg/decode-old.c.inc | 15 + target/i386/tcg/emit.c.inc | 10 + target/i386/tcg/translate.c | 31 + 4 files changed, 1054 insertions(+) create mode 100644 target/i386/tcg/decode-new.c.inc create mode 100644 target/i386/tcg/emit.c.inc diff --git a/target/i386/tcg/decode-new.c.inc b/target/i386/tcg/decode-new.= c.inc new file mode 100644 index 0000000000..d661f1f6f0 --- /dev/null +++ b/target/i386/tcg/decode-new.c.inc @@ -0,0 +1,998 @@ +/* Decode table flags, mostly based on Intel SDM. */ + +typedef enum X86OpType { + X86_TYPE_A, /* Implicit */ + X86_TYPE_B, /* VEX.vvvv selects a GPR */ + X86_TYPE_C, /* REG in the modrm byte selects a control register */ + X86_TYPE_D, /* REG in the modrm byte selects a debug register */ + X86_TYPE_E, /* ALU modrm operand */ + X86_TYPE_F, /* EFLAGS/RFLAGS */ + X86_TYPE_G, /* REG in the modrm byte selects a GPR */ + X86_TYPE_H, /* For AVX, VEX.vvvv selects an XMM/YMM register */ + X86_TYPE_I, /* Immediate */ + X86_TYPE_J, /* Relative offset for a jump */ + X86_TYPE_L, /* The upper 4 bits of the immediate select a 128-bit regi= ster */ + X86_TYPE_M, /* modrm byte selects a memory operand */ + X86_TYPE_N, /* R/M in the modrm byte selects an MMX register */ + X86_TYPE_O, /* Absolute address encoded in the instruction */ + X86_TYPE_P, /* reg in the modrm byte selects an MMX register */ + X86_TYPE_Q, /* MMX modrm operand */ + X86_TYPE_R, /* R/M in the modrm byte selects a register */ + X86_TYPE_S, /* reg selects a segment register */ + X86_TYPE_U, /* R/M in the modrm byte selects an XMM/YMM register */ + X86_TYPE_V, /* reg in the modrm byte selects an XMM/YMM register */ + X86_TYPE_W, /* XMM/YMM modrm operand */ + X86_TYPE_X, /* string source */ + X86_TYPE_Y, /* string destination */ + + /* Custom */ + X86_TYPE_None, + X86_TYPE_2op, /* 2-operand RMW instruction */ + X86_TYPE_LoBits, /* encoded in bits 0-2 of the operand + REX.B 
*/ + X86_TYPE_0, /* Hard-coded GPRs (RAX..RDI) */ + X86_TYPE_1, + X86_TYPE_2, + X86_TYPE_3, + X86_TYPE_4, + X86_TYPE_5, + X86_TYPE_6, + X86_TYPE_7, + X86_TYPE_ES, /* Hard-coded segment registers */ + X86_TYPE_CS, + X86_TYPE_SS, + X86_TYPE_DS, + X86_TYPE_FS, + X86_TYPE_GS, +} X86OpType; + +typedef enum X86OpSize { + X86_SIZE_a, /* BOUND operand */ + X86_SIZE_b, /* byte */ + X86_SIZE_c, /* 16/32-bit, based on operand size */ + X86_SIZE_d, /* 32-bit */ + X86_SIZE_dq, /* 2x64-bit */ + X86_SIZE_p, /* Far pointer */ + X86_SIZE_pd, /* SSE/AVX packed double precision */ + X86_SIZE_pi, /* MMX */ + X86_SIZE_ps, /* SSE/AVX packed single precision */ + X86_SIZE_q, /* 64-bit */ + X86_SIZE_qq, /* 2x128-bit */ + X86_SIZE_s, /* Descriptor */ + X86_SIZE_sd, /* SSE/AVX scalar double precision */ + X86_SIZE_ss, /* SSE/AVX scalar single precision */ + X86_SIZE_si, /* 32-bit GPR */ + X86_SIZE_v, /* 16/32/64-bit, based on operand size */ + X86_SIZE_w, /* 16-bit */ + X86_SIZE_x, /* 128/256-bit, based on operand size */ + X86_SIZE_y, /* 32/64-bit, based on operand size */ + X86_SIZE_z, /* 16-bit for 16-bit operand size, else 32-bit */ + + /* Custom */ + X86_SIZE_None, + X86_SIZE_d64, + X86_SIZE_f64, +} X86OpSize; + +/* Execution flags */ + +typedef enum X86ALUOpType { + X86_ALU_SKIP, + X86_ALU_SEG, + X86_ALU_CR, + X86_ALU_DR, + X86_ALU_GPR, + X86_ALU_MEM, + X86_ALU_IMM, +} X86ALUOpType; + +typedef enum X86OpSpecial { + X86_SPECIAL_None, + + /* Always locked if it has a memory operand (XCHG) */ + X86_SPECIAL_Locked, + + /* Fault outside protected mode */ + X86_SPECIAL_ProtMode, + + /* Writeback not needed or done manually in the callback */ + X86_SPECIAL_NoWriteback, + + /* Illegal or exclusive to 64-bit mode */ + X86_SPECIAL_i64, + X86_SPECIAL_o64, +} X86OpSpecial; + +typedef struct X86OpEntry X86OpEntry; +typedef struct X86DecodedInsn X86DecodedInsn; + +/* Decode function for multibyte opcodes. 
*/ +typedef void (*X86DecodeFunc)(DisasContext *s, CPUX86State *env, X86OpEntr= y *entry, uint8_t *b); + +/* Code generation function. */ +typedef void (*X86GenFunc)(DisasContext *s, CPUX86State *env, X86DecodedIn= sn *decode); + +struct X86OpEntry { + /* Based on the is_decode flags. */ + union { + X86GenFunc gen; + X86DecodeFunc decode; + }; + /* op0 is always written, op1 and op2 are always read. */ + X86OpType op0 : 8; + X86OpSize s0 : 8; + X86OpType op1 : 8; + X86OpSize s1 : 8; + X86OpType op2 : 8; + X86OpSize s2 : 8; + X86OpSpecial special; + bool is_decode; +}; + +typedef struct X86DecodedOp { + int8_t n; + MemOp ot; /* For b/c/d/p/s/q/v/w/y/z */ + X86ALUOpType alu_op_type; + bool has_ea; +} X86DecodedOp; + +struct X86DecodedInsn { + X86OpEntry e; + X86DecodedOp op[3]; + target_ulong immediate; + AddressParts mem; + + uint8_t b; +}; + +#include "emit.c.inc" + +#define X86_OP_NONE { 0 }, + +#define X86_OP_GROUP3(op, op0_, s0_, op1_, s1_, op2_, s2_, ...) { \ + .decode =3D glue(decode_group_, op), \ + .op0 =3D glue(X86_TYPE_, op0_), \ + .s0 =3D glue(X86_SIZE_, s0_), \ + .op1 =3D glue(X86_TYPE_, op1_), \ + .s1 =3D glue(X86_SIZE_, s1_), \ + .op2 =3D glue(X86_TYPE_, op2_), \ + .s2 =3D glue(X86_SIZE_, s2_), \ + .is_decode =3D true, \ + ## __VA_ARGS__ \ +} + +#define X86_OP_ENTRY3(op, op0_, s0_, op1_, s1_, op2_, s2_, ...) { \ + .gen =3D glue(gen_, op), \ + .op0 =3D glue(X86_TYPE_, op0_), \ + .s0 =3D glue(X86_SIZE_, s0_), \ + .op1 =3D glue(X86_TYPE_, op1_), \ + .s1 =3D glue(X86_SIZE_, s1_), \ + .op2 =3D glue(X86_TYPE_, op2_), \ + .s2 =3D glue(X86_SIZE_, s2_), \ + ## __VA_ARGS__ \ +} + +#define X86_OP_ENTRY2(op, op0, s0, op1, s1, ...) \ + X86_OP_ENTRY3(op, op0, s0, 2op, s0, op1, s1, ## __VA_ARGS__) +#define X86_OP_GROUP2(op, op0, s0, op1, s1, ...) \ + X86_OP_GROUP3(op, op0, s0, 2op, s0, op1, s1, ## __VA_ARGS__) + +#define X86_OP_ENTRYw(op, op0, s0, ...) \ + X86_OP_ENTRY3(op, op0, s0, None, None, None, None, ## __VA_ARGS__) +#define X86_OP_GROUPw(op, op0, s0, ...) 
\ + X86_OP_GROUP3(op, op0, s0, None, None, None, None, ## __VA_ARGS__) +#define X86_OP_ENTRYr(op, op0, s0, ...) \ + X86_OP_ENTRY3(op, None, None, None, None, op0, s0, ## __VA_ARGS__) +#define X86_OP_ENTRY1(op, op0, s0, ...) \ + X86_OP_ENTRY3(op, op0, s0, 2op, s0, None, None, ## __VA_ARGS__) + +#define X86_OP_ENTRY0(op, ...) \ + X86_OP_ENTRY3(op, None, None, None, None, None, None, ## __VA_ARGS__) +#define X86_OP_GROUP0(op, ...) \ + X86_OP_GROUP3(op, None, None, None, None, None, None, ## __VA_ARGS__) + +#define i64 .special =3D X86_SPECIAL_i64, +#define o64 .special =3D X86_SPECIAL_o64, +#define nowb .special =3D X86_SPECIAL_NoWriteback, +#define xchg .special =3D X86_SPECIAL_Locked, + +static uint8_t get_modrm(DisasContext *s, CPUX86State *env) +{ + if (!s->has_modrm) { + s->modrm =3D x86_ldub_code(env, s); + s->has_modrm =3D true; + } + return s->modrm; +} + +static X86OpEntry A4_00_F7[16][8] =3D { + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, +}; + +static X86OpEntry A4_08_FF[16][8] =3D { + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, +}; + +static void decode_threebyte_38(DisasContext *s, CPUX86State *env, X86OpEn= try *entry, uint8_t *b) +{ + *b =3D x86_ldub_code(env, s); + if (!(*b & 8)) { + *entry =3D A4_00_F7[*b >> 4][*b & 7]; + } else { + *entry =3D A4_08_FF[*b >> 4][*b & 7]; + } +} + +static X86OpEntry A5_00_F7[16][8] =3D { + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, +}; + +static X86OpEntry A5_08_FF[16][8] =3D { + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, +}; + +static void decode_threebyte_3a(DisasContext *s, CPUX86State *env, X86OpEn= try *entry, uint8_t *b) +{ + 
*b =3D x86_ldub_code(env, s); + if (!(*b & 8)) { + *entry =3D A5_00_F7[*b >> 4][*b & 7]; + } else { + *entry =3D A5_08_FF[*b >> 4][*b & 7]; + } +} + +static X86OpEntry A3_00_77[8][8] =3D { + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, +}; + +static X86OpEntry A3_08_7F[8][8] =3D { + { + }, + { + }, + { + }, + { + [0x38 & 7] =3D { .decode =3D decode_threebyte_38, .is_decode =3D t= rue }, + [0x3a & 7] =3D { .decode =3D decode_threebyte_3a, .is_decode =3D t= rue }, + }, + { + }, + { + }, + { + }, + { + }, +}; + +static X86OpEntry A3_80_F7[8][8] =3D { + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, +}; + +static X86OpEntry A3_88_FF[8][8] =3D { + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, +}; + +static void decode_twobyte(DisasContext *s, CPUX86State *env, X86OpEntry *= entry, uint8_t *b) +{ + *b =3D x86_ldub_code(env, s); + switch (*b & 0x88) { + case 0x00: + *entry =3D A3_00_77[*b >> 4][*b & 7]; + break; + case 0x08: + *entry =3D A3_08_7F[*b >> 4][*b & 7]; + break; + case 0x80: + *entry =3D A3_80_F7[(*b - 0x80) >> 4][*b & 7]; + break; + case 0x88: + *entry =3D A3_88_FF[(*b - 0x80) >> 4][*b & 7]; + break; + } +} + +static X86OpEntry A2_00_F7[16][8] =3D { + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, +}; + +static X86OpEntry A2_08_FF[16][8] =3D { + { + [7] =3D { .decode =3D decode_twobyte, .is_decode =3D true, } + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, + { + }, +}; + +/* + * Decode the fixed part of the opcode and place the last + * in b. 
+ */
+static void decode_root(DisasContext *s, CPUX86State *env, X86OpEntry *entry, uint8_t *b)
+{
+    if (!(*b & 8)) {
+        *entry = A2_00_F7[*b >> 4][*b & 7];
+    } else {
+        *entry = A2_08_FF[*b >> 4][*b & 7];
+    }
+}
+
+
+static int decode_modrm(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode,
+                        X86DecodedOp *op, X86OpType type)
+{
+    int modrm = get_modrm(s, env);
+    if ((modrm >> 6) == 3) {
+        if ((s->prefix & PREFIX_LOCK) || type == X86_TYPE_M) {
+            decode->e.gen = gen_illegal;
+            return 0xff;
+        }
+        op->n = (modrm & 7) | REX_B(s);
+    } else {
+        op->has_ea = true;
+        op->n = -1;
+        decode->mem = gen_lea_modrm_0(env, s, get_modrm(s, env));
+    }
+    return modrm;
+}
+
+static MemOp decode_op_size(DisasContext *s, X86OpSize size)
+{
+    switch (size) {
+    case X86_SIZE_b:  /* byte */
+        return MO_8;
+
+    case X86_SIZE_c:  /* 16/32-bit, based on operand size */
+        return s->dflag == MO_64 ? MO_32 : s->dflag;
+
+    case X86_SIZE_d:  /* 32-bit */
+        return MO_32;
+
+    case X86_SIZE_q:  /* 64-bit */
+        return MO_64;
+
+    case X86_SIZE_p:  /* Far pointer */
+    case X86_SIZE_s:  /* Descriptor */
+    case X86_SIZE_v:  /* 16/32/64-bit, based on operand size */
+        return s->dflag;
+
+    case X86_SIZE_w:  /* 16-bit */
+        return MO_16;
+
+    case X86_SIZE_y:  /* 32/64-bit, based on operand size */
+        return CODE64(s) ? MO_64 : MO_32;
+
+    case X86_SIZE_z:  /* 16-bit for 16-bit operand size, else 32-bit */
+        return s->dflag == MO_16 ? MO_16 : MO_32;
+
+    case X86_SIZE_d64:  /* Default to 64-bit in 64-bit mode */
+        return CODE64(s) && s->dflag == MO_32 ? MO_64 : s->dflag;
+
+    case X86_SIZE_f64:  /* Ignore size override prefix in 64-bit mode */
+        return CODE64(s) ? MO_64 : s->dflag;
+
+    default:
+        return -1;
+    }
+}
+
+static void decode_op(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode,
+                      X86DecodedOp *op, X86OpType type, int b)
+{
+    int modrm;
+
+    switch (type) {
+    case X86_TYPE_A:  /* Implicit */
+    case X86_TYPE_F:  /* EFLAGS/RFLAGS */
+        break;
+
+    case X86_TYPE_B:  /* VEX.vvvv selects a GPR */
+        op->alu_op_type = X86_ALU_GPR;
+        op->n = s->vex_v;
+        break;
+
+    case X86_TYPE_C:  /* REG in the modrm byte selects a control register */
+        op->n = ((get_modrm(s, env) >> 3) & 7) | REX_R(s);
+        break;
+
+    case X86_TYPE_E:  /* ALU modrm operand */
+        modrm = decode_modrm(s, env, decode, op, type);
+        op->alu_op_type = (modrm >> 6) == 3 ? X86_ALU_GPR : X86_ALU_MEM;
+        break;
+
+    case X86_TYPE_M:  /* modrm byte selects a memory operand */
+    case X86_TYPE_N:  /* R/M in the modrm byte selects an MMX register */
+    case X86_TYPE_Q:  /* MMX modrm operand */
+    case X86_TYPE_R:  /* R/M in the modrm byte selects a register */
+    case X86_TYPE_U:  /* R/M in the modrm byte selects an XMM/YMM register */
+    case X86_TYPE_W:  /* XMM/YMM modrm operand */
+        decode_modrm(s, env, decode, op, type);
+        break;
+
+    case X86_TYPE_O:  /* Absolute address encoded in the instruction */
+        op->alu_op_type = X86_ALU_MEM;
+        op->has_ea = true;
+        op->n = -1;
+        decode->mem = (AddressParts) {
+            .def_seg = R_DS,
+            .base = -1,
+            .index = -1,
+            .disp = insn_get_addr(env, s, s->aflag)
+        };
+        break;
+
+    case X86_TYPE_H:  /* For AVX, VEX.vvvv selects an XMM/YMM register */
+        /* SSE only for now */
+        *op = decode->op[0];
+        break;
+
+    case X86_TYPE_I:  /* Immediate */
+        op->alu_op_type = X86_ALU_IMM;
+        decode->immediate = insn_get_signed(env, s, op->ot);
+        break;
+
+    case X86_TYPE_J:  /* Relative offset for a jump */
+        op->alu_op_type = X86_ALU_IMM;
+        decode->immediate = insn_get_signed(env, s, op->ot);
+        decode->immediate += s->pc - s->cs_base;
+        if (s->dflag == MO_16) {
+            decode->immediate &= 0xffff;
+        } else if (!CODE64(s)) {
+            decode->immediate &= 0xffffffffu;
+        }
+
+        break;
+
+    case X86_TYPE_L:  /* The upper 4 bits of the immediate select a 128-bit register */
+        op->n = insn_get(env, s, op->ot) >> 4;
+        break;
+
+    case X86_TYPE_X:  /* string source */
+        op->n = -1;
+        decode->mem = (AddressParts) {
+            .def_seg = R_DS,
+            .base = R_ESI,
+            .index = -1,
+        };
+        break;
+
+    case X86_TYPE_Y:  /* string destination */
+        op->n = -1;
+        decode->mem = (AddressParts) {
+            .def_seg = R_ES,
+            .base = R_EDI,
+            .index = -1,
+        };
+        break;
+
+    case X86_TYPE_2op:
+        *op = decode->op[0];
+        break;
+
+    case X86_TYPE_LoBits:
+        op->n = (b & 7) | REX_B(s);
+        op->alu_op_type = X86_ALU_GPR;
+        break;
+
+    case X86_TYPE_0 ... X86_TYPE_7:
+        op->n = type - X86_TYPE_0;
+        op->alu_op_type = X86_ALU_GPR;
+        break;
+
+    case X86_TYPE_ES ... X86_TYPE_GS:
+        op->n = type - X86_TYPE_ES;
+        op->alu_op_type = X86_ALU_SEG;
+        break;
+
+    default:
+        abort();
+    }
+}
+
+static void decode_insn(DisasContext *s, CPUX86State *env, X86DecodeFunc decode_func,
+                        X86DecodedInsn *decode)
+{
+    X86OpEntry *e = &decode->e;
+
+    decode_func(s, env, e, &decode->b);
+    while (e->is_decode) {
+        e->is_decode = false;
+        e->decode(s, env, e, &decode->b);
+    }
+
+    /* First compute size of operands in order to initialize s->rip_offset. */
+    if (e->op0 != X86_TYPE_None) {
+        decode->op[0].ot = decode_op_size(s, e->s0);
+        if (e->op0 == X86_TYPE_I) {
+            s->rip_offset += 1 << decode->op[0].ot;
+        }
+    }
+    if (e->op1 != X86_TYPE_None) {
+        decode->op[1].ot = decode_op_size(s, e->s1);
+        if (e->op1 == X86_TYPE_I) {
+            s->rip_offset += 1 << decode->op[1].ot;
+        }
+    }
+    if (e->op2 != X86_TYPE_None) {
+        decode->op[2].ot = decode_op_size(s, e->s2);
+        if (e->op2 == X86_TYPE_I) {
+            s->rip_offset += 1 << decode->op[2].ot;
+        }
+    }
+
+    if (e->op0 != X86_TYPE_None) {
+        decode_op(s, env, decode, &decode->op[0], e->op0, decode->b);
+    }
+
+    if (e->op1 != X86_TYPE_None) {
+        decode_op(s, env, decode, &decode->op[1], e->op1, decode->b);
+    }
+
+    if (e->op2 != X86_TYPE_None) {
+        decode_op(s, env, decode, &decode->op[2], e->op2, decode->b);
+    }
+}
+
+/* convert one instruction. s->base.is_jmp is set if the translation must
+   be stopped. Return the next pc value */
+static target_ulong disas_insn_new(DisasContext *s, CPUState *cpu, int b)
+{
+    CPUX86State *env = cpu->env_ptr;
+    bool first = true;
+    X86DecodedInsn decode;
+    X86DecodeFunc decode_func = decode_root;
+
+#ifdef CONFIG_USER_ONLY
+    --limit;
+#endif
+    s->has_modrm = false;
+#if 0
+    s->pc_start = s->pc = s->base.pc_next;
+    s->override = -1;
+#ifdef TARGET_X86_64
+    s->rex_w = false;
+    s->rex_r = 0;
+    s->rex_x = 0;
+    s->rex_b = 0;
+#endif
+    s->prefix = 0;
+    s->rip_offset = 0; /* for relative ip address */
+    s->vex_l = 0;
+    s->vex_v = 0;
+    if (sigsetjmp(s->jmpbuf, 0) != 0) {
+        gen_exception_gpf(s);
+        return s->pc;
+    }
+#endif
+
+ next_byte:
+    if (first) {
+        first = false;
+    } else {
+        b = x86_ldub_code(env, s);
+    }
+    /* Collect prefixes.
+     */
+    switch (b) {
+    case 0xf3:
+        s->prefix |= PREFIX_REPZ;
+        goto next_byte;
+    case 0xf2:
+        s->prefix |= PREFIX_REPNZ;
+        goto next_byte;
+    case 0xf0:
+        s->prefix |= PREFIX_LOCK;
+        goto next_byte;
+    case 0x2e:
+        s->override = R_CS;
+        goto next_byte;
+    case 0x36:
+        s->override = R_SS;
+        goto next_byte;
+    case 0x3e:
+        s->override = R_DS;
+        goto next_byte;
+    case 0x26:
+        s->override = R_ES;
+        goto next_byte;
+    case 0x64:
+        s->override = R_FS;
+        goto next_byte;
+    case 0x65:
+        s->override = R_GS;
+        goto next_byte;
+    case 0x66:
+        s->prefix |= PREFIX_DATA;
+        goto next_byte;
+    case 0x67:
+        s->prefix |= PREFIX_ADR;
+        goto next_byte;
+#ifdef TARGET_X86_64
+    case 0x40 ... 0x4f:
+        if (CODE64(s)) {
+            /* REX prefix */
+            s->prefix |= PREFIX_REX;
+            s->rex_w = (b >> 3) & 1;
+            s->rex_r = (b & 0x4) << 1;
+            s->rex_x = (b & 0x2) << 2;
+            s->rex_b = (b & 0x1) << 3;
+            goto next_byte;
+        }
+        break;
+#endif
+    case 0xc5: /* 2-byte VEX */
+    case 0xc4: /* 3-byte VEX */
+        /* VEX prefixes cannot be used except in 32-bit mode.
+           Otherwise the instruction is LES or LDS. */
+        if (CODE32(s) && !VM86(s)) {
+            static const int pp_prefix[4] = {
+                0, PREFIX_DATA, PREFIX_REPZ, PREFIX_REPNZ
+            };
+            int vex3, vex2 = x86_ldub_code(env, s);
+
+            if (!CODE64(s) && (vex2 & 0xc0) != 0xc0) {
+                /* 4.1.4.6: In 32-bit mode, bits [7:6] must be 11b,
+                   otherwise the instruction is LES or LDS. */
+                s->pc--; /* rewind the advance_pc() x86_ldub_code() did */
+                break;
+            }
+
+            /* 4.1.1-4.1.3: No preceding lock, 66, f2, f3, or rex prefixes. */
+            if (s->prefix & (PREFIX_REPZ | PREFIX_REPNZ
+                             | PREFIX_LOCK | PREFIX_DATA | PREFIX_REX)) {
+                goto illegal_op;
+            }
+#ifdef TARGET_X86_64
+            s->rex_r = (~vex2 >> 4) & 8;
+#endif
+            if (b == 0xc5) {
+                /* 2-byte VEX prefix: RVVVVlpp, implied 0f leading opcode byte */
+                vex3 = vex2;
+                decode_func = decode_twobyte;
+            } else {
+                /* 3-byte VEX prefix: RXBmmmmm wVVVVlpp */
+                vex3 = x86_ldub_code(env, s);
+#ifdef TARGET_X86_64
+                s->rex_x = (~vex2 >> 3) & 8;
+                s->rex_b = (~vex2 >> 2) & 8;
+                s->rex_w = (vex3 >> 7) & 1;
+#endif
+                switch (vex2 & 0x1f) {
+                case 0x01: /* Implied 0f leading opcode bytes. */
+                    decode_func = decode_twobyte;
+                    break;
+                case 0x02: /* Implied 0f 38 leading opcode bytes. */
+                    decode_func = decode_threebyte_38;
+                    break;
+                case 0x03: /* Implied 0f 3a leading opcode bytes. */
+                    decode_func = decode_threebyte_3a;
+                    break;
+                default:   /* Reserved for future use. */
+                    goto unknown_op;
+                }
+            }
+            s->vex_v = (~vex3 >> 3) & 0xf;
+            s->vex_l = (vex3 >> 2) & 1;
+            s->prefix |= pp_prefix[vex3 & 3] | PREFIX_VEX;
+        }
+        break;
+    }
+
+    /* Post-process prefixes. */
+    if (CODE64(s)) {
+        /* In 64-bit mode, the default data size is 32-bit.  Select 64-bit
+           data with rex_w, and 16-bit data with 0x66; rex_w takes precedence
+           over 0x66 if both are present. */
+        s->dflag = (REX_W(s) ? MO_64 : s->prefix & PREFIX_DATA ? MO_16 : MO_32);
+        /* In 64-bit mode, 0x67 selects 32-bit addressing. */
+        s->aflag = (s->prefix & PREFIX_ADR ? MO_32 : MO_64);
+    } else {
+        /* In 16/32-bit mode, 0x66 selects the opposite data size. */
+        if (CODE32(s) ^ ((s->prefix & PREFIX_DATA) != 0)) {
+            s->dflag = MO_32;
+        } else {
+            s->dflag = MO_16;
+        }
+        /* In 16/32-bit mode, 0x67 selects the opposite addressing.
+         */
+        if (CODE32(s) ^ ((s->prefix & PREFIX_ADR) != 0)) {
+            s->aflag = MO_32;
+        } else {
+            s->aflag = MO_16;
+        }
+    }
+
+    memset(&decode, 0, sizeof(decode));
+    decode.b = b;
+    decode_insn(s, env, decode_func, &decode);
+    if (!decode.e.gen) {
+        goto unknown_op;
+    }
+
+    switch (decode.e.special) {
+    case X86_SPECIAL_None:
+        break;
+
+    case X86_SPECIAL_Locked:
+        if (decode.op[0].alu_op_type == X86_ALU_MEM) {
+            s->prefix |= PREFIX_LOCK;
+        }
+        break;
+
+    case X86_SPECIAL_ProtMode:
+        if (!PE(s) || VM86(s)) {
+            goto illegal_op;
+        }
+        break;
+
+    case X86_SPECIAL_i64:
+        if (CODE64(s)) {
+            goto illegal_op;
+        }
+        break;
+    case X86_SPECIAL_o64:
+        if (!CODE64(s)) {
+            goto illegal_op;
+        }
+        break;
+
+    case X86_SPECIAL_NoWriteback:
+        decode.op[0].alu_op_type = X86_ALU_SKIP;
+        break;
+    }
+
+    if (decode.op[0].has_ea || decode.op[1].has_ea || decode.op[2].has_ea) {
+        gen_load_ea(s, &decode.mem);
+    }
+    decode.e.gen(s, env, &decode);
+    return s->pc;
+ illegal_op:
+    gen_illegal_opcode(s);
+    return s->pc;
+ unknown_op:
+    gen_unknown_opcode(env, s);
+    return s->pc;
+}
diff --git a/target/i386/tcg/decode-old.c.inc b/target/i386/tcg/decode-old.c.inc
index 603642d6e1..fb86855501 100644
--- a/target/i386/tcg/decode-old.c.inc
+++ b/target/i386/tcg/decode-old.c.inc
@@ -1808,10 +1808,24 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
 
     prefixes = 0;
 
+    if (first) first = false, limit = getenv("LIMIT") ? atol(getenv("LIMIT")) : -1;
+    bool use_new = true;
 next_byte:
+    s->prefix = prefixes;
     b = x86_ldub_code(env, s);
     /* Collect prefixes. */
     switch (b) {
+    default:
+#ifdef CONFIG_USER_ONLY
+        use_new &= limit > 0;
+#else
+        use_new &= b <= limit;
+#endif
+        if (use_new && 0) {
+            return disas_insn_new(s, cpu, b);
+        }
+    case 0x0f:
+        break;
     case 0xf3:
         prefixes |= PREFIX_REPZ;
         goto next_byte;
@@ -1860,6 +1874,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
 #endif
     case 0xc5: /* 2-byte VEX */
     case 0xc4: /* 3-byte VEX */
+        use_new = false;
         /* VEX prefixes cannot be used except in 32-bit mode.
            Otherwise the instruction is LES or LDS. */
         if (CODE32(s) && !VM86(s)) {
diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc
new file mode 100644
index 0000000000..688aca86f6
--- /dev/null
+++ b/target/i386/tcg/emit.c.inc
@@ -0,0 +1,10 @@
+static void gen_illegal(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_illegal_opcode(s);
+}
+
+static void gen_load_ea(DisasContext *s, AddressParts *mem)
+{
+    TCGv ea = gen_lea_modrm_1(s, *mem);
+    gen_lea_v_seg(s, s->aflag, ea, mem->def_seg, s->override);
+}
diff --git a/target/i386/tcg/translate.c b/target/i386/tcg/translate.c
index 5c4b685de7..9b925c7ec8 100644
--- a/target/i386/tcg/translate.c
+++ b/target/i386/tcg/translate.c
@@ -71,6 +71,9 @@ typedef struct DisasContext {
     int8_t override; /* -1 if no override, else R_CS, R_DS, etc */
     uint8_t prefix;
 
+    bool has_modrm;
+    uint8_t modrm;
+
 #ifndef CONFIG_USER_ONLY
     uint8_t cpl;   /* code priv level */
     uint8_t iopl;  /* i/o priv level */
@@ -2316,6 +2319,31 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, MemOp ot)
     return ret;
 }
 
+static inline target_long insn_get_signed(CPUX86State *env, DisasContext *s, MemOp ot)
+{
+    target_long ret;
+
+    switch (ot) {
+    case MO_8:
+        ret = (int8_t) x86_ldub_code(env, s);
+        break;
+    case MO_16:
+        ret = (int16_t) x86_lduw_code(env, s);
+        break;
+    case MO_32:
+        ret = (int32_t) x86_ldl_code(env, s);
+        break;
+#ifdef TARGET_X86_64
+    case MO_64:
+        ret = x86_ldq_code(env, s);
+        break;
+#endif
+    default:
+        tcg_abort();
+    }
+    return ret;
+}
+
 static inline int insn_const_size(MemOp ot)
 {
     if (ot <= MO_32) {
@@ -2787,6 +2815,9 @@ static inline void gen_op_movq_env_0(DisasContext *s, int d_offset)
     tcg_gen_movi_i64(s->tmp1_i64, 0);
     tcg_gen_st_i64(s->tmp1_i64, cpu_env, d_offset);
 }
+
+static bool first = true; static unsigned long limit;
+#include "decode-new.c.inc"
 #include "decode-old.c.inc"
 
 void tcg_x86_init(void)
-- 
2.37.1
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, paul@nowt.org
Subject: [PATCH 04/17] target/i386: add ALU load/writeback core
Date: Wed, 24 Aug 2022 19:31:10 +0200
Message-Id: <20220824173123.232018-5-pbonzini@redhat.com>
In-Reply-To: <20220824173123.232018-1-pbonzini@redhat.com>
References: <20220824173123.232018-1-pbonzini@redhat.com>
Add generic code generation that takes care of preparing operands
around calls to decode.e.gen in a table-driven manner, so that ALU
operations need not take care of that.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 target/i386/tcg/decode-new.c.inc | 14 +++++++-
 target/i386/tcg/emit.c.inc       | 62 ++++++++++++++++++++++++++++++++
 2 files changed, 75 insertions(+), 1 deletion(-)

diff --git a/target/i386/tcg/decode-new.c.inc b/target/i386/tcg/decode-new.c.inc
index d661f1f6f0..b53afea9c8 100644
--- a/target/i386/tcg/decode-new.c.inc
+++ b/target/i386/tcg/decode-new.c.inc
@@ -133,6 +133,7 @@ typedef struct X86DecodedOp {
     MemOp ot;     /* For b/c/d/p/s/q/v/w/y/z */
     X86ALUOpType alu_op_type;
     bool has_ea;
+    TCGv v;
 } X86DecodedOp;
 
 struct X86DecodedInsn {
@@ -987,7 +988,18 @@ static target_ulong disas_insn_new(DisasContext *s, CPUState *cpu, int b)
     if (decode.op[0].has_ea || decode.op[1].has_ea || decode.op[2].has_ea) {
         gen_load_ea(s, &decode.mem);
     }
-    decode.e.gen(s, env, &decode);
+    if (s->prefix & PREFIX_LOCK) {
+        if (decode.op[0].alu_op_type != X86_ALU_MEM) {
+            goto illegal_op;
+        }
+        gen_load(s, s->T1, &decode.op[2], decode.immediate);
+        decode.e.gen(s, env, &decode);
+    } else {
+        gen_load(s, s->T0, &decode.op[1], decode.immediate);
+        gen_load(s, s->T1, &decode.op[2], decode.immediate);
+        decode.e.gen(s, env, &decode);
+        gen_writeback(s, &decode.op[0]);
+    }
     return s->pc;
 illegal_op:
     gen_illegal_opcode(s);
diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc
index 688aca86f6..93d14ff793 100644
--- a/target/i386/tcg/emit.c.inc
+++ b/target/i386/tcg/emit.c.inc
@@ -8,3 +8,65 @@ static void gen_load_ea(DisasContext *s, AddressParts *mem)
     TCGv ea = gen_lea_modrm_1(s, *mem);
     gen_lea_v_seg(s, s->aflag, ea, mem->def_seg, s->override);
 }
+
+static void gen_load(DisasContext *s, TCGv v, X86DecodedOp *op, uint64_t imm)
+{
+    switch (op->alu_op_type) {
+    case X86_ALU_SKIP:
+        return;
+    case X86_ALU_SEG:
+        tcg_gen_ld32u_tl(v, cpu_env,
+                         offsetof(CPUX86State,segs[op->n].selector));
+        break;
+    case X86_ALU_CR:
+        tcg_gen_ld_tl(v, cpu_env, offsetof(CPUX86State, cr[op->n]));
+        break;
+    case X86_ALU_DR:
+        tcg_gen_ld_tl(v, cpu_env, offsetof(CPUX86State, dr[op->n]));
+        break;
+    case X86_ALU_GPR:
+        gen_op_mov_v_reg(s, op->ot, v, op->n);
+        break;
+    case X86_ALU_MEM:
+        assert(op->has_ea);
+        gen_op_ld_v(s, op->ot, v, s->A0);
+        break;
+    case X86_ALU_IMM:
+        tcg_gen_movi_tl(v, imm);
+        break;
+    }
+    op->v = v;
+}
+
+static void gen_writeback(DisasContext *s, X86DecodedOp *op)
+{
+    switch (op->alu_op_type) {
+    case X86_ALU_SKIP:
+        break;
+    case X86_ALU_SEG:
+        /* Note that reg == R_SS in gen_movl_seg_T0 always sets is_jmp. */
+        gen_movl_seg_T0(s, op->n);
+        if (s->base.is_jmp) {
+            gen_jmp_im(s, s->pc - s->cs_base);
+            if (op->n == R_SS) {
+                s->flags &= ~HF_TF_MASK;
+                gen_eob_inhibit_irq(s, true);
+            } else {
+                gen_eob(s);
+            }
+        }
+        break;
+    case X86_ALU_CR:
+    case X86_ALU_DR:
+        /* TBD */
+        break;
+    case X86_ALU_GPR:
+        gen_op_mov_reg_v(s, op->ot, op->n, s->T0);
+        break;
+    case X86_ALU_MEM:
+        gen_op_st_v(s, op->ot, s->T0, s->A0);
+        break;
+    default:
+        abort();
+    }
+}
-- 
2.37.1
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, paul@nowt.org
Subject: [PATCH 05/17] target/i386: add 00-07, 10-17 opcodes
Date: Wed, 24 Aug 2022 19:31:11 +0200
Message-Id: <20220824173123.232018-6-pbonzini@redhat.com>
In-Reply-To: <20220824173123.232018-1-pbonzini@redhat.com>
References: <20220824173123.232018-1-pbonzini@redhat.com>
For simplicity, this also brings in the entire implementation of
ALU operations from the old decoder.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 target/i386/tcg/decode-new.c.inc |  16 +++++
 target/i386/tcg/emit.c.inc       | 109 +++++++++++++++++++++++++++++++
 2 files changed, 125 insertions(+)

diff --git a/target/i386/tcg/decode-new.c.inc b/target/i386/tcg/decode-new.c.inc
index b53afea9c8..6d0d6a683c 100644
--- a/target/i386/tcg/decode-new.c.inc
+++ b/target/i386/tcg/decode-new.c.inc
@@ -464,8 +464,24 @@ static void decode_twobyte(DisasContext *s, CPUX86State *env, X86OpEntry *entry,
 
 static X86OpEntry A2_00_F7[16][8] = {
     {
+        X86_OP_ENTRY2(ADD, E,b, G,b),
+        X86_OP_ENTRY2(ADD, E,v, G,v),
+        X86_OP_ENTRY2(ADD, G,b, E,b),
+        X86_OP_ENTRY2(ADD, G,v, E,v),
+        X86_OP_ENTRY2(ADD, 0,b, I,b),   /* AL, Ib */
+        X86_OP_ENTRY2(ADD, 0,v, I,z),   /* rAX, Iz */
+        X86_OP_ENTRYr(PUSH, ES, w, i64),
+        X86_OP_ENTRYw(POP, ES, w, i64)
     },
     {
+        X86_OP_ENTRY2(ADC, E,b, G,b),
+        X86_OP_ENTRY2(ADC, E,v, G,v),
+        X86_OP_ENTRY2(ADC, G,b, E,b),
+        X86_OP_ENTRY2(ADC, G,v, E,v),
+        X86_OP_ENTRY2(ADC, 0,b, I,b),   /* AL, Ib */
+        X86_OP_ENTRY2(ADC, 0,v, I,z),   /* rAX, Iz */
+        X86_OP_ENTRYr(PUSH, SS, w, i64),
+        X86_OP_ENTRYw(POP, SS, w, i64)
     },
     {
     },
diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc
index 93d14ff793..758e468a25 100644
--- a/target/i386/tcg/emit.c.inc
+++ b/target/i386/tcg/emit.c.inc
@@ -38,6 +38,115 @@ static void gen_load(DisasContext *s, TCGv v, X86DecodedOp *op, uint64_t imm)
     op->v = v;
 }
 
+static void gen_alu_op(DisasContext *s1, int op, MemOp ot)
+{
+    switch(op) {
+    case OP_ADCL:
+        gen_compute_eflags_c(s1, s1->tmp4);
+        if (s1->prefix & PREFIX_LOCK) {
+            tcg_gen_add_tl(s1->T0, s1->tmp4, s1->T1);
+            tcg_gen_atomic_add_fetch_tl(s1->T0, s1->A0, s1->T0,
+                                        s1->mem_index, ot | MO_LE);
+        } else {
+            tcg_gen_add_tl(s1->T0, s1->T0, s1->T1);
+            tcg_gen_add_tl(s1->T0, s1->T0, s1->tmp4);
+        }
+        gen_op_update3_cc(s1, s1->tmp4);
+        set_cc_op(s1, CC_OP_ADCB + ot);
+        break;
+    case OP_SBBL:
+        gen_compute_eflags_c(s1, s1->tmp4);
+        if (s1->prefix & PREFIX_LOCK) {
+            tcg_gen_add_tl(s1->T0, s1->T1, s1->tmp4);
+            tcg_gen_neg_tl(s1->T0, s1->T0);
+            tcg_gen_atomic_add_fetch_tl(s1->T0, s1->A0, s1->T0,
+                                        s1->mem_index, ot | MO_LE);
+        } else {
+            tcg_gen_sub_tl(s1->T0, s1->T0, s1->T1);
+            tcg_gen_sub_tl(s1->T0, s1->T0, s1->tmp4);
+        }
+        gen_op_update3_cc(s1, s1->tmp4);
+        set_cc_op(s1, CC_OP_SBBB + ot);
+        break;
+    case OP_ADDL:
+        if (s1->prefix & PREFIX_LOCK) {
+            tcg_gen_atomic_add_fetch_tl(s1->T0, s1->A0, s1->T1,
+                                        s1->mem_index, ot | MO_LE);
+        } else {
+            tcg_gen_add_tl(s1->T0, s1->T0, s1->T1);
+        }
+        gen_op_update2_cc(s1);
+        set_cc_op(s1, CC_OP_ADDB + ot);
+        break;
+    case OP_SUBL:
+        if (s1->prefix & PREFIX_LOCK) {
+            tcg_gen_neg_tl(s1->T0, s1->T1);
+            tcg_gen_atomic_fetch_add_tl(s1->cc_srcT, s1->A0, s1->T0,
+                                        s1->mem_index, ot | MO_LE);
+            tcg_gen_sub_tl(s1->T0, s1->cc_srcT, s1->T1);
+        } else {
+            tcg_gen_mov_tl(s1->cc_srcT, s1->T0);
+            tcg_gen_sub_tl(s1->T0, s1->T0, s1->T1);
+        }
+        gen_op_update2_cc(s1);
+        set_cc_op(s1, CC_OP_SUBB + ot);
+        break;
+    default:
+    case OP_ANDL:
+        if (s1->prefix & PREFIX_LOCK) {
+            tcg_gen_atomic_and_fetch_tl(s1->T0, s1->A0, s1->T1,
+                                        s1->mem_index, ot | MO_LE);
+        } else {
+            tcg_gen_and_tl(s1->T0, s1->T0, s1->T1);
+        }
+        gen_op_update1_cc(s1);
+        set_cc_op(s1, CC_OP_LOGICB + ot);
+        break;
+    case OP_ORL:
+        if (s1->prefix & PREFIX_LOCK) {
+            tcg_gen_atomic_or_fetch_tl(s1->T0, s1->A0, s1->T1,
+                                       s1->mem_index, ot | MO_LE);
+        } else {
+            tcg_gen_or_tl(s1->T0, s1->T0, s1->T1);
+        }
+        gen_op_update1_cc(s1);
+        set_cc_op(s1, CC_OP_LOGICB + ot);
+        break;
+    case OP_XORL:
+        if (s1->prefix & PREFIX_LOCK) {
+            tcg_gen_atomic_xor_fetch_tl(s1->T0, s1->A0, s1->T1,
+                                        s1->mem_index, ot | MO_LE);
+        } else {
+            tcg_gen_xor_tl(s1->T0, s1->T0, s1->T1);
+        }
+        gen_op_update1_cc(s1);
+        set_cc_op(s1, CC_OP_LOGICB + ot);
+        break;
+    }
+}
+
+static void gen_ADC(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_alu_op(s, OP_ADCL, decode->op[0].ot);
+}
+
+static void gen_ADD(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_alu_op(s, OP_ADDL, decode->op[0].ot);
+}
+
+static void gen_PUSH(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_push_v(s, decode->op[2].v);
+}
+
+static void gen_POP(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    MemOp ot = gen_pop_T0(s);
+    /* NOTE: order is important for pop %sp */
+    gen_pop_update(s, ot);
+}
+
 static void gen_writeback(DisasContext *s, X86DecodedOp *op)
 {
     switch (op->alu_op_type) {
-- 
2.37.1
From nobody Fri May 17 08:24:54 2024
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, paul@nowt.org
Subject: [PATCH 06/17] target/i386: add 08-0F, 18-1F opcodes
Date: Wed, 24 Aug 2022 19:31:12 +0200
Message-Id: <20220824173123.232018-7-pbonzini@redhat.com>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220824173123.232018-1-pbonzini@redhat.com>
References: <20220824173123.232018-1-pbonzini@redhat.com>

Using operands named "0-7" for fixed registers wasn't a great idea in
retrospect...  It only makes sense for 1-byte INC/DEC, and those could
even use LoBits instead.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 target/i386/tcg/decode-new.c.inc | 17 ++++++++++++++++-
 target/i386/tcg/decode-old.c.inc |  2 +-
 target/i386/tcg/emit.c.inc       | 10 ++++++++++
 3 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/target/i386/tcg/decode-new.c.inc b/target/i386/tcg/decode-new.c.inc
index 6d0d6a683c..b1e849b332 100644
--- a/target/i386/tcg/decode-new.c.inc
+++ b/target/i386/tcg/decode-new.c.inc
@@ -515,9 +515,24 @@ static X86OpEntry A2_00_F7[16][8] = {
 
 static X86OpEntry A2_08_FF[16][8] = {
     {
-        [7] = { .decode = decode_twobyte, .is_decode = true, }
+        X86_OP_ENTRY2(OR, E,b, G,b),
+        X86_OP_ENTRY2(OR, E,v, G,v),
+        X86_OP_ENTRY2(OR, G,b, E,b),
+        X86_OP_ENTRY2(OR, G,v, E,v),
+        X86_OP_ENTRY2(OR, 0,b, I,b),   /* AL, Ib */
+        X86_OP_ENTRY2(OR, 0,v, I,z),   /* rAX, Iz */
+        X86_OP_ENTRYr(PUSH, CS, w, i64),
+        { .decode = decode_twobyte, .is_decode = true, }
     }, {
+        X86_OP_ENTRY2(SBB, E,b, G,b),
+        X86_OP_ENTRY2(SBB, E,v, G,v),
+        X86_OP_ENTRY2(SBB, G,b, E,b),
+        X86_OP_ENTRY2(SBB, G,v, E,v),
+        X86_OP_ENTRY2(SBB, 0,b, I,b),   /* AL, Ib */
+        X86_OP_ENTRY2(SBB, 0,v, I,z),   /* rAX, Iz */
+        X86_OP_ENTRYr(PUSH, DS, w, i64),
+        X86_OP_ENTRYw(POP, DS, w, i64)
     }, {
     },

diff --git a/target/i386/tcg/decode-old.c.inc b/target/i386/tcg/decode-old.c.inc
index fb86855501..937975f69a 100644
--- a/target/i386/tcg/decode-old.c.inc
+++ b/target/i386/tcg/decode-old.c.inc
@@ -1821,7 +1821,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
 #else
     use_new &= b <= limit;
 #endif
-    if (use_new && 0) {
+    if (use_new && b <= 0x1f) {
         return disas_insn_new(s, cpu, b);
     }
     case 0x0f:

diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc
index 758e468a25..1f799d1f18 100644
--- a/target/i386/tcg/emit.c.inc
+++ b/target/i386/tcg/emit.c.inc
@@ -135,6 +135,11 @@ static void gen_ADD(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     gen_alu_op(s, OP_ADDL, decode->op[0].ot);
 }
 
+static void gen_OR(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_alu_op(s, OP_ORL, decode->op[0].ot);
+}
+
 static void gen_PUSH(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     gen_push_v(s, decode->op[2].v);
@@ -147,6 +152,11 @@ static void gen_POP(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     gen_pop_update(s, ot);
 }
 
+static void gen_SBB(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_alu_op(s, OP_SBBL, decode->op[0].ot);
+}
+
 static void gen_writeback(DisasContext *s, X86DecodedOp *op)
 {
     switch (op->alu_op_type) {
-- 
2.37.1
From nobody Fri May 17 08:24:54 2024
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, paul@nowt.org
Subject: [PATCH 07/17] target/i386: add 20-27, 30-37 opcodes
Date: Wed, 24 Aug 2022 19:32:40 +0200
Message-Id: <20220824173250.232491-1-pbonzini@redhat.com>
In-Reply-To: <20220824173123.232018-1-pbonzini@redhat.com>
References: <20220824173123.232018-1-pbonzini@redhat.com>

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 target/i386/tcg/decode-new.c.inc | 16 ++++++++++++++++
 target/i386/tcg/emit.c.inc       | 33 ++++++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/target/i386/tcg/decode-new.c.inc b/target/i386/tcg/decode-new.c.inc
index b1e849b332..de0364ac87 100644
--- a/target/i386/tcg/decode-new.c.inc
+++ b/target/i386/tcg/decode-new.c.inc
@@ -484,8 +484,24 @@ static X86OpEntry A2_00_F7[16][8] = {
         X86_OP_ENTRYw(POP, SS, w, i64)
     }, {
+        X86_OP_ENTRY2(AND, E,b, G,b),
+        X86_OP_ENTRY2(AND, E,v, G,v),
+        X86_OP_ENTRY2(AND, G,b, E,b),
+        X86_OP_ENTRY2(AND, G,v, E,v),
+        X86_OP_ENTRY2(AND, 0,b, I,b),   /* AL, Ib */
+        X86_OP_ENTRY2(AND, 0,v, I,z),   /* rAX, Iz */
+        {},
+        X86_OP_ENTRY0(DAA, i64),
     }, {
+        X86_OP_ENTRY2(XOR, E,b, G,b),
+        X86_OP_ENTRY2(XOR, E,v, G,v),
+        X86_OP_ENTRY2(XOR, G,b, E,b),
+        X86_OP_ENTRY2(XOR, G,v, E,v),
+        X86_OP_ENTRY2(XOR, 0,b, I,b),   /* AL, Ib */
+        X86_OP_ENTRY2(XOR, 0,v, I,z),   /* rAX, Iz */
+        {},
+        X86_OP_ENTRY0(AAA, i64),
     }, {
     },

diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc
index 1f799d1f18..33469098c2 100644
--- a/target/i386/tcg/emit.c.inc
+++ b/target/i386/tcg/emit.c.inc
@@ -125,6 +125,13 @@ static void gen_alu_op(DisasContext *s1, int op, MemOp ot)
     }
 }
 
+static void gen_AAA(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_update_cc_op(s);
+    gen_helper_aaa(cpu_env);
+    set_cc_op(s, CC_OP_EFLAGS);
+}
+
 static void gen_ADC(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     gen_alu_op(s, OP_ADCL, decode->op[0].ot);
@@ -135,6 +142,18 @@ static void gen_ADD(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     gen_alu_op(s, OP_ADDL, decode->op[0].ot);
 }
 
+static void gen_AND(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_alu_op(s, OP_ANDL, decode->op[0].ot);
+}
+
+static void gen_DAA(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_update_cc_op(s);
+    gen_helper_daa(cpu_env);
+    set_cc_op(s, CC_OP_EFLAGS);
+}
+
 static void gen_OR(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     gen_alu_op(s, OP_ORL, decode->op[0].ot);
@@ -157,6 +176,20 @@ static void gen_SBB(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     gen_alu_op(s, OP_SBBL, decode->op[0].ot);
 }
 
+static void gen_XOR(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    /* special case XOR reg, reg */
+    if (decode->op[1].alu_op_type == X86_ALU_GPR &&
+        decode->op[2].alu_op_type == X86_ALU_GPR &&
+        decode->op[1].n == decode->op[2].n) {
+        tcg_gen_movi_tl(s->T0, 0);
+        gen_op_update1_cc(s);
+        set_cc_op(s, CC_OP_LOGICB + decode->op[0].ot);
+    } else {
+        gen_alu_op(s, OP_XORL, decode->op[0].ot);
+    }
+}
+
 static void gen_writeback(DisasContext *s, X86DecodedOp *op)
 {
     switch (op->alu_op_type) {
-- 
2.37.1
From nobody Fri May 17 08:24:54 2024
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, paul@nowt.org
Subject: [PATCH 08/17] target/i386: add 28-2f, 38-3f opcodes
Date: Wed, 24 Aug 2022 19:32:41 +0200
Message-Id: <20220824173250.232491-2-pbonzini@redhat.com>
In-Reply-To: <20220824173123.232018-1-pbonzini@redhat.com>
References: <20220824173123.232018-1-pbonzini@redhat.com>

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 target/i386/tcg/decode-new.c.inc | 16 ++++++++++++++++
 target/i386/tcg/decode-old.c.inc |  2 +-
 target/i386/tcg/emit.c.inc       | 22 ++++++++++++++++++++--
 3 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/target/i386/tcg/decode-new.c.inc b/target/i386/tcg/decode-new.c.inc
index de0364ac87..c94cd7ac61 100644
--- a/target/i386/tcg/decode-new.c.inc
+++ b/target/i386/tcg/decode-new.c.inc
@@ -551,8 +551,24 @@ static X86OpEntry A2_08_FF[16][8] = {
         X86_OP_ENTRYw(POP, DS, w, i64)
     }, {
+        X86_OP_ENTRY2(SUB, E,b, G,b),
+        X86_OP_ENTRY2(SUB, E,v, G,v),
+        X86_OP_ENTRY2(SUB, G,b, E,b),
+        X86_OP_ENTRY2(SUB, G,v, E,v),
+        X86_OP_ENTRY2(SUB, 0,b, I,b),   /* AL, Ib */
+        X86_OP_ENTRY2(SUB, 0,v, I,z),   /* rAX, Iz */
+        {},
+        X86_OP_ENTRY0(DAS, i64),
     }, {
+        X86_OP_ENTRY2(SUB, E,b, G,b, nowb),
+        X86_OP_ENTRY2(SUB, E,v, G,v, nowb),
+        X86_OP_ENTRY2(SUB, G,b, E,b, nowb),
+        X86_OP_ENTRY2(SUB, G,v, E,v, nowb),
+        X86_OP_ENTRY2(SUB, 0,b, I,b, nowb),   /* AL, Ib */
+        X86_OP_ENTRY2(SUB, 0,v, I,z, nowb),   /* rAX, Iz */
+        {},
+        X86_OP_ENTRY0(AAS, i64),
     }, {
     },

diff --git a/target/i386/tcg/decode-old.c.inc b/target/i386/tcg/decode-old.c.inc
index 937975f69a..28edb62b5b 100644
--- a/target/i386/tcg/decode-old.c.inc
+++ b/target/i386/tcg/decode-old.c.inc
@@ -1821,7 +1821,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
 #else
     use_new &= b <= limit;
 #endif
-    if (use_new && b <= 0x1f) {
+    if (use_new && b <= 0x3f) {
         return disas_insn_new(s, cpu, b);
     }
     case 0x0f:

diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc
index 33469098c2..e247b542ed 100644
--- a/target/i386/tcg/emit.c.inc
+++ b/target/i386/tcg/emit.c.inc
@@ -132,6 +132,13 @@ static void gen_AAA(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     set_cc_op(s, CC_OP_EFLAGS);
 }
 
+static void gen_AAS(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_update_cc_op(s);
+    gen_helper_aas(cpu_env);
+    set_cc_op(s, CC_OP_EFLAGS);
+}
+
 static void gen_ADC(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     gen_alu_op(s, OP_ADCL, decode->op[0].ot);
@@ -154,6 +161,13 @@ static void gen_DAA(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     set_cc_op(s, CC_OP_EFLAGS);
 }
 
+static void gen_DAS(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_update_cc_op(s);
+    gen_helper_das(cpu_env);
+    set_cc_op(s, CC_OP_EFLAGS);
+}
+
 static void gen_OR(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     gen_alu_op(s, OP_ORL, decode->op[0].ot);
@@ -176,6 +190,11 @@ static void gen_SBB(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     gen_alu_op(s, OP_SBBL, decode->op[0].ot);
 }
 
+static void gen_SUB(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_alu_op(s, OP_SUBL, decode->op[0].ot);
+}
+
 static void gen_XOR(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     /* special case XOR reg, reg */
@@ -183,8 +202,7 @@ static void gen_XOR(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
         decode->op[2].alu_op_type == X86_ALU_GPR &&
         decode->op[1].n == decode->op[2].n) {
         tcg_gen_movi_tl(s->T0, 0);
-        gen_op_update1_cc(s);
-        set_cc_op(s, CC_OP_LOGICB + decode->op[0].ot);
+        set_cc_op(s, CC_OP_CLR);
     } else {
         gen_alu_op(s, OP_XORL, decode->op[0].ot);
     }
-- 
2.37.1
b=iD3owr5ofqgzzB9OQOz4A3vXlQqomfKfm9e2nBkkwDpnQbcl57SvtfCnf2DzIlCBXEkNhf1xoTMSoBQ44aLo8+TmukTy0k2iw1XGuh/HZXo589E1W3BvNNoCeo8w3pLcY+kHULFmm6DVausAg0mM7uq867nr01L4jSLydR/PN3k= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass header.from= (p=none dis=none) Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1661363561168618.3897461647466; Wed, 24 Aug 2022 10:52:41 -0700 (PDT) Received: from localhost ([::1]:35360 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1oQuYR-0006a3-55 for importer@patchew.org; Wed, 24 Aug 2022 13:52:39 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:46800) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1oQuFU-0002MN-1P for qemu-devel@nongnu.org; Wed, 24 Aug 2022 13:33:10 -0400 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]:40705) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1oQuFR-0003t8-Jf for qemu-devel@nongnu.org; Wed, 24 Aug 2022 13:33:03 -0400 Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-426-cu9ZvwmIPBGf8efmb0f2Cw-1; Wed, 24 Aug 2022 13:32:58 -0400 Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com [10.11.54.9]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 79D5C8B5903; Wed, 24 Aug 2022 17:32:54 +0000 (UTC) Received: from avogadro.redhat.com (unknown [10.39.192.21]) by smtp.corp.redhat.com (Postfix) with ESMTP id BBEB4492CA5; Wed, 24 Aug 2022 
17:32:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1661362380; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=5pDyarpE9L5NOEV/LKgy5wFkf1pfVmE7pKthjoV7Dfw=; b=SFvoLd/f7H5zvdmTV2SRNzp2FCSxIvbOt4SADGr8m4+PFbT2PYsfE34Nzwb87nYMtLcL/c UL4AJ9EeAIC2BR3STdt10+W4dnBHK0LIM/YOKt7uQnn8LIDd22hu2JX/Dq1BM5x71Enyma LAYpeFlpk3GAiH++QZZdoM3UDhRHqjA= X-MC-Unique: cu9ZvwmIPBGf8efmb0f2Cw-1 From: Paolo Bonzini To: qemu-devel@nongnu.org Cc: richard.henderson@linaro.org, paul@nowt.org Subject: [PATCH 09/17] target/i386: add 40-47, 50-57 opcodes Date: Wed, 24 Aug 2022 19:32:42 +0200 Message-Id: <20220824173250.232491-3-pbonzini@redhat.com> In-Reply-To: <20220824173123.232018-1-pbonzini@redhat.com> References: <20220824173123.232018-1-pbonzini@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 2.85 on 10.11.54.9 Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.133.124; envelope-from=pbonzini@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org 
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 target/i386/tcg/decode-new.c.inc | 16 ++++++++++++++++
 target/i386/tcg/emit.c.inc       | 30 +++++++++++++++++++++++++++++-
 target/i386/tcg/translate.c      |  2 ++
 3 files changed, 47 insertions(+), 1 deletion(-)

diff --git a/target/i386/tcg/decode-new.c.inc b/target/i386/tcg/decode-new.c.inc
index c94cd7ac61..bbd6ef07a1 100644
--- a/target/i386/tcg/decode-new.c.inc
+++ b/target/i386/tcg/decode-new.c.inc
@@ -504,8 +504,24 @@ static X86OpEntry A2_00_F7[16][8] = {
         X86_OP_ENTRY0(AAA, i64),
     },
     {
+        X86_OP_ENTRY1(INC, 0,v, i64),
+        X86_OP_ENTRY1(INC, 1,v, i64),
+        X86_OP_ENTRY1(INC, 2,v, i64),
+        X86_OP_ENTRY1(INC, 3,v, i64),
+        X86_OP_ENTRY1(INC, 4,v, i64),
+        X86_OP_ENTRY1(INC, 5,v, i64),
+        X86_OP_ENTRY1(INC, 6,v, i64),
+        X86_OP_ENTRY1(INC, 7,v, i64),
     },
     {
+        X86_OP_ENTRYr(PUSH, LoBits,d64),
+        X86_OP_ENTRYr(PUSH, LoBits,d64),
+        X86_OP_ENTRYr(PUSH, LoBits,d64),
+        X86_OP_ENTRYr(PUSH, LoBits,d64),
+        X86_OP_ENTRYr(PUSH, LoBits,d64),
+        X86_OP_ENTRYr(PUSH, LoBits,d64),
+        X86_OP_ENTRYr(PUSH, LoBits,d64),
+        X86_OP_ENTRYr(PUSH, LoBits,d64),
     },
     {
     },
diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc
index e247b542ed..d3d0f893fb 100644
--- a/target/i386/tcg/emit.c.inc
+++ b/target/i386/tcg/emit.c.inc
@@ -91,7 +91,30 @@ static void gen_alu_op(DisasContext *s1, int op, MemOp ot)
         gen_op_update2_cc(s1);
         set_cc_op(s1, CC_OP_SUBB + ot);
         break;
-    default:
+    case OP_DECL:
+        tcg_gen_movi_tl(s1->T1, -1);
+        if (s1->prefix & PREFIX_LOCK) {
+            tcg_gen_atomic_add_fetch_tl(s1->T0, s1->A0, s1->T1,
+                                        s1->mem_index, ot | MO_LE);
+        } else {
+            tcg_gen_add_tl(s1->T0, s1->T0, s1->T1);
+        }
+        gen_compute_eflags_c(s1, cpu_cc_src);
+        tcg_gen_mov_tl(cpu_cc_dst, s1->T0);
+        set_cc_op(s1, CC_OP_DECB + ot);
+        break;
+    case OP_INCL:
+        tcg_gen_movi_tl(s1->T1, 1);
+        if (s1->prefix & PREFIX_LOCK) {
+            tcg_gen_atomic_add_fetch_tl(s1->T0, s1->A0, s1->T1,
+                                        s1->mem_index, ot | MO_LE);
+        } else {
+            tcg_gen_add_tl(s1->T0, s1->T0, s1->T1);
+        }
+        gen_compute_eflags_c(s1, cpu_cc_src);
+        tcg_gen_mov_tl(cpu_cc_dst, s1->T0);
+        set_cc_op(s1, CC_OP_INCB + ot);
+        break;
     case OP_ANDL:
         if (s1->prefix & PREFIX_LOCK) {
             tcg_gen_atomic_and_fetch_tl(s1->T0, s1->A0, s1->T1,
@@ -168,6 +191,11 @@ static void gen_DAS(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     set_cc_op(s, CC_OP_EFLAGS);
 }
 
+static void gen_INC(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_alu_op(s, OP_INCL, decode->op[0].ot);
+}
+
 static void gen_OR(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     gen_alu_op(s, OP_ORL, decode->op[0].ot);
diff --git a/target/i386/tcg/translate.c b/target/i386/tcg/translate.c
index 9b925c7ec8..d0a8c0becb 100644
--- a/target/i386/tcg/translate.c
+++ b/target/i386/tcg/translate.c
@@ -223,6 +223,8 @@ enum {
     OP_SUBL,
     OP_XORL,
     OP_CMPL,
+    OP_INCL,
+    OP_DECL,
 };
 
 /* i386 shift ops */
-- 
2.37.1
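A minimal plain-C sketch (not QEMU code; `inc8` and `struct flags` are made up for illustration) of the flag behaviour the new OP_INCL/OP_DECL cases implement: INC and DEC update the arithmetic flags from the result but leave CF unchanged, which is why the generated code saves the old carry with gen_compute_eflags_c before overwriting cc_dst.

```c
#include <stdint.h>

/* Hypothetical flag container for the sketch. */
struct flags { int cf, zf; };

/* Behaves like the x86 INC r8: result flags updated, CF untouched. */
static uint8_t inc8(uint8_t x, struct flags *f)
{
    uint8_t r = (uint8_t)(x + 1);
    f->zf = (r == 0);          /* ZF follows the new result */
    /* f->cf deliberately not written: INC preserves CF */
    return r;
}
```

Wrapping 0xff to 0 sets ZF but leaves a previously-set CF alone, matching the carry-preserving CC_OP_INCB/CC_OP_DECB handling above.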
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, paul@nowt.org
Subject: [PATCH 10/17] target/i386: add 48-4f, 58-5f opcodes
Date: Wed, 24 Aug 2022 19:32:43 +0200
Message-Id: <20220824173250.232491-4-pbonzini@redhat.com>
In-Reply-To: <20220824173123.232018-1-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 target/i386/tcg/decode-new.c.inc | 16 ++++++++++++++++
 target/i386/tcg/decode-old.c.inc |  2 +-
 target/i386/tcg/emit.c.inc       |  5 +++++
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/target/i386/tcg/decode-new.c.inc b/target/i386/tcg/decode-new.c.inc
index bbd6ef07a1..586894e4ee 100644
--- a/target/i386/tcg/decode-new.c.inc
+++ b/target/i386/tcg/decode-new.c.inc
@@ -587,8 +587,24 @@ static X86OpEntry A2_08_FF[16][8] = {
         X86_OP_ENTRY0(AAS, i64),
     },
     {
+        X86_OP_ENTRY1(DEC, 0,v, i64),
+        X86_OP_ENTRY1(DEC, 1,v, i64),
+        X86_OP_ENTRY1(DEC, 2,v, i64),
+        X86_OP_ENTRY1(DEC, 3,v, i64),
+        X86_OP_ENTRY1(DEC, 4,v, i64),
+        X86_OP_ENTRY1(DEC, 5,v, i64),
+        X86_OP_ENTRY1(DEC, 6,v, i64),
+        X86_OP_ENTRY1(DEC, 7,v, i64),
     },
     {
+        X86_OP_ENTRYw(POP, LoBits,d64),
+        X86_OP_ENTRYw(POP, LoBits,d64),
+        X86_OP_ENTRYw(POP, LoBits,d64),
+        X86_OP_ENTRYw(POP, LoBits,d64),
+        X86_OP_ENTRYw(POP, LoBits,d64),
+        X86_OP_ENTRYw(POP, LoBits,d64),
+        X86_OP_ENTRYw(POP, LoBits,d64),
+        X86_OP_ENTRYw(POP, LoBits,d64),
     },
     {
     },
diff --git a/target/i386/tcg/decode-old.c.inc b/target/i386/tcg/decode-old.c.inc
index 28edb62b5b..a297d126a4 100644
--- a/target/i386/tcg/decode-old.c.inc
+++ b/target/i386/tcg/decode-old.c.inc
@@ -1821,7 +1821,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
 #else
         use_new &= b <= limit;
 #endif
-        if (use_new && b <= 0x3f) {
+        if (use_new && b <= 0x5f) {
             return disas_insn_new(s, cpu, b);
         }
     case 0x0f:
diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc
index d3d0f893fb..a76d6820e1 100644
--- a/target/i386/tcg/emit.c.inc
+++ b/target/i386/tcg/emit.c.inc
@@ -191,6 +191,11 @@ static void gen_DAS(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     set_cc_op(s, CC_OP_EFLAGS);
 }
 
+static void gen_DEC(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_alu_op(s, OP_DECL, decode->op[0].ot);
+}
+
 static void gen_INC(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     gen_alu_op(s, OP_INCL, decode->op[0].ot);
-- 
2.37.1
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, paul@nowt.org
Subject: [PATCH 11/17] target/i386: add 60-67, 70-77 opcodes
Date: Wed, 24 Aug 2022 19:32:44 +0200
Message-Id: <20220824173250.232491-5-pbonzini@redhat.com>
In-Reply-To: <20220824173123.232018-1-pbonzini@redhat.com>

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 target/i386/tcg/decode-new.c.inc | 30 +++++++++++++
 target/i386/tcg/emit.c.inc       | 77 ++++++++++++++++++++++++++++++++
 2 files changed, 107 insertions(+)

diff --git a/target/i386/tcg/decode-new.c.inc b/target/i386/tcg/decode-new.c.inc
index 586894e4ee..161a3b1554 100644
--- a/target/i386/tcg/decode-new.c.inc
+++ b/target/i386/tcg/decode-new.c.inc
@@ -462,6 +462,20 @@ static void decode_twobyte(DisasContext *s, CPUX86State *env, X86OpEntry *entry, uint8_t *b)
     }
 }
 
+static void decode_group_0x63(DisasContext *s, CPUX86State *env, X86OpEntry *entry, uint8_t *b)
+{
+    static X86OpEntry arpl   = X86_OP_ENTRY2(ARPL, E,w, G,w, .special = X86_SPECIAL_ProtMode);
+    static X86OpEntry mov    = X86_OP_ENTRY3(MOV, G,v, E,v, None, None);
+    static X86OpEntry movsxd = X86_OP_ENTRY3(MOVSXD, G,v, E,d, None, None);
+    if (!CODE64(s)) {
+        *entry = arpl;
+    } else if (REX_W(s)) {
+        *entry = movsxd;
+    } else {
+        *entry = mov;
+    }
+}
+
 static X86OpEntry A2_00_F7[16][8] = {
     {
         X86_OP_ENTRY2(ADD, E,b, G,b),
@@ -524,8 +538,24 @@ static X86OpEntry A2_00_F7[16][8] = {
         X86_OP_ENTRYr(PUSH, LoBits,d64),
     },
     {
+        X86_OP_ENTRY0(PUSHA, i64),
+        X86_OP_ENTRY0(POPA, i64),
+        X86_OP_ENTRY2(BOUND, G,v, M,a, i64),
+        X86_OP_GROUP0(0x63),
+        {},
+        {},
+        {},
+        {},
     },
     {
+        X86_OP_ENTRYr(Jcc, J,b),
+        X86_OP_ENTRYr(Jcc, J,b),
+        X86_OP_ENTRYr(Jcc, J,b),
+        X86_OP_ENTRYr(Jcc, J,b),
+        X86_OP_ENTRYr(Jcc, J,b),
+        X86_OP_ENTRYr(Jcc, J,b),
+        X86_OP_ENTRYr(Jcc, J,b),
+        X86_OP_ENTRYr(Jcc, J,b),
     },
     {
     },
diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc
index a76d6820e1..cf606e74c7 100644
--- a/target/i386/tcg/emit.c.inc
+++ b/target/i386/tcg/emit.c.inc
@@ -177,6 +177,56 @@ static void gen_AND(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     gen_alu_op(s, OP_ANDL, decode->op[0].ot);
 }
 
+static void gen_ARPL(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    TCGLabel *label1;
+    TCGv t0 = tcg_temp_local_new();
+    TCGv t1 = tcg_temp_local_new();
+    TCGv a0;
+
+    if (decode->op[0].alu_op_type == X86_ALU_MEM) {
+        a0 = tcg_temp_local_new();
+        tcg_gen_mov_tl(a0, s->A0);
+        decode->op[0].v = a0;
+    } else {
+        a0 = NULL;
+    }
+
+    gen_compute_eflags(s);
+    tcg_gen_andi_tl(cpu_cc_src, cpu_cc_src, ~CC_Z);
+
+    tcg_gen_mov_tl(t0, s->T0);
+    tcg_gen_andi_tl(s->T0, s->T0, 3);
+    tcg_gen_andi_tl(t1, s->T1, 3);
+    label1 = gen_new_label();
+    tcg_gen_brcond_tl(TCG_COND_GE, s->T0, t1, label1);
+    tcg_gen_andi_tl(t0, t0, ~3);
+    tcg_gen_or_tl(t0, t0, t1);
+    tcg_gen_ori_tl(cpu_cc_src, cpu_cc_src, CC_Z);
+    gen_set_label(label1);
+
+    /* Do writeback here due to temp locals. */
+    decode->op[0].alu_op_type = X86_ALU_SKIP;
+    if (a0) {
+        gen_op_st_v(s, MO_16, t0, a0);
+        tcg_temp_free(a0);
+    } else {
+        gen_op_mov_reg_v(s, MO_16, decode->op[0].n, t0);
+    }
+    tcg_temp_free(t0);
+    tcg_temp_free(t1);
+}
+
+static void gen_BOUND(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
+    if (decode->op[1].ot == MO_16) {
+        gen_helper_boundw(cpu_env, s->A0, s->tmp2_i32);
+    } else {
+        gen_helper_boundl(cpu_env, s->A0, s->tmp2_i32);
+    }
+}
+
 static void gen_DAA(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     gen_update_cc_op(s);
@@ -201,6 +251,23 @@ static void gen_INC(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     gen_alu_op(s, OP_INCL, decode->op[0].ot);
 }
 
+static void gen_Jcc(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    target_ulong next_eip = s->pc - s->cs_base;
+    gen_bnd_jmp(s);
+    gen_jcc(s, decode->b & 0xf, decode->immediate, next_eip);
+}
+
+static void gen_MOV(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    /* nothing to do! */
+}
+
+static void gen_MOVSXD(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    tcg_gen_ext32s_tl(s->T0, s->T0);
+}
+
 static void gen_OR(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     gen_alu_op(s, OP_ORL, decode->op[0].ot);
@@ -211,6 +278,11 @@ static void gen_PUSH(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     gen_push_v(s, decode->op[2].v);
 }
 
+static void gen_PUSHA(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_pusha(s);
+}
+
 static void gen_POP(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     MemOp ot = gen_pop_T0(s);
@@ -218,6 +290,11 @@ static void gen_POP(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     gen_pop_update(s, ot);
 }
 
+static void gen_POPA(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_popa(s);
+}
+
 static void gen_SBB(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     gen_alu_op(s, OP_SBBL, decode->op[0].ot);
-- 
2.37.1
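The architectural behaviour that gen_ARPL above generates TCG code for can be summarised in plain C. This is a hedged sketch, not QEMU code; `arpl16` is a made-up name, and the return value stands in for the ZF result.

```c
#include <stdint.h>

/* ARPL dest, src: if the RPL (low two bits) of the destination segment
 * selector is below the RPL of the source, raise it to the source's RPL
 * and set ZF; otherwise leave the selector alone and clear ZF. */
static int arpl16(uint16_t *dest, uint16_t src)
{
    if ((*dest & 3) < (src & 3)) {
        *dest = (uint16_t)((*dest & ~3) | (src & 3));
        return 1;              /* ZF set */
    }
    return 0;                  /* ZF clear */
}
```

The comparison on the masked RPL bits is exactly what the generated `tcg_gen_brcond_tl(TCG_COND_GE, ...)` branch implements, with ZF carried in cpu_cc_src.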
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, paul@nowt.org
Subject: [PATCH 12/17] target/i386: add 68-6f, 78-7f opcodes
Date: Wed, 24 Aug 2022 19:32:45 +0200
Message-Id: <20220824173250.232491-6-pbonzini@redhat.com>
In-Reply-To: <20220824173123.232018-1-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 target/i386/tcg/decode-new.c.inc | 16 ++++++
 target/i386/tcg/decode-old.c.inc |  2 +-
 target/i386/tcg/emit.c.inc       | 86 ++++++++++++++++++++++++++++++++
 3 files changed, 103 insertions(+), 1 deletion(-)

diff --git a/target/i386/tcg/decode-new.c.inc b/target/i386/tcg/decode-new.c.inc
index 161a3b1554..6892000aaf 100644
--- a/target/i386/tcg/decode-new.c.inc
+++ b/target/i386/tcg/decode-new.c.inc
@@ -637,8 +637,24 @@ static X86OpEntry A2_08_FF[16][8] = {
         X86_OP_ENTRYw(POP, LoBits,d64),
     },
     {
+        X86_OP_ENTRYr(PUSH, I,z),
+        X86_OP_ENTRY3(IMUL, G,v, E,v, I,z, nowb),
+        X86_OP_ENTRYr(PUSH, I,b),
+        X86_OP_ENTRY3(IMUL, G,v, E,v, I,b, nowb),
+        X86_OP_ENTRY2(INS, Y,b, 2,w, nowb), /* DX */
+        X86_OP_ENTRY2(INS, Y,z, 2,w, nowb), /* DX */
+        X86_OP_ENTRY2(OUTS, 2,w, X,b, nowb), /* DX */
+        X86_OP_ENTRY2(OUTS, 2,w, X,b, nowb), /* DX */
     },
     {
+        X86_OP_ENTRYr(Jcc, J,b),
+        X86_OP_ENTRYr(Jcc, J,b),
+        X86_OP_ENTRYr(Jcc, J,b),
+        X86_OP_ENTRYr(Jcc, J,b),
+        X86_OP_ENTRYr(Jcc, J,b),
+        X86_OP_ENTRYr(Jcc, J,b),
+        X86_OP_ENTRYr(Jcc, J,b),
+        X86_OP_ENTRYr(Jcc, J,b),
     },
     {
     },
diff --git a/target/i386/tcg/decode-old.c.inc b/target/i386/tcg/decode-old.c.inc
index a297d126a4..7763bef11d 100644
--- a/target/i386/tcg/decode-old.c.inc
+++ b/target/i386/tcg/decode-old.c.inc
@@ -1821,7 +1821,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
 #else
         use_new &= b <= limit;
 #endif
-        if (use_new && b <= 0x5f) {
+        if (use_new && b <= 0x7f) {
             return disas_insn_new(s, cpu, b);
         }
     case 0x0f:
diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc
index cf606e74c7..ae82ebd8c9 100644
--- a/target/i386/tcg/emit.c.inc
+++ b/target/i386/tcg/emit.c.inc
@@ -246,11 +246,74 @@ static void gen_DEC(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     gen_alu_op(s, OP_DECL, decode->op[0].ot);
 }
 
+static void gen_IMUL(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    int reg = decode->op[0].n;
+    MemOp ot = decode->op[0].ot;
+
+    switch (ot) {
+#ifdef TARGET_X86_64
+    case MO_64:
+        tcg_gen_muls2_i64(cpu_regs[reg], s->T1, s->T0, s->T1);
+        tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[reg]);
+        tcg_gen_sari_tl(cpu_cc_src, cpu_cc_dst, 63);
+        tcg_gen_sub_tl(cpu_cc_src, cpu_cc_src, s->T1);
+        break;
+#endif
+    case MO_32:
+        tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
+        tcg_gen_trunc_tl_i32(s->tmp3_i32, s->T1);
+        tcg_gen_muls2_i32(s->tmp2_i32, s->tmp3_i32,
+                          s->tmp2_i32, s->tmp3_i32);
+        tcg_gen_extu_i32_tl(cpu_regs[reg], s->tmp2_i32);
+        tcg_gen_sari_i32(s->tmp2_i32, s->tmp2_i32, 31);
+        tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[reg]);
+        tcg_gen_sub_i32(s->tmp2_i32, s->tmp2_i32, s->tmp3_i32);
+        tcg_gen_extu_i32_tl(cpu_cc_src, s->tmp2_i32);
+        break;
+    default:
+        tcg_gen_ext16s_tl(s->T0, s->T0);
+        tcg_gen_ext16s_tl(s->T1, s->T1);
+        /* XXX: use 32 bit mul which could be faster */
+        tcg_gen_mul_tl(s->T0, s->T0, s->T1);
+        tcg_gen_mov_tl(cpu_cc_dst, s->T0);
+        tcg_gen_ext16s_tl(s->tmp0, s->T0);
+        tcg_gen_sub_tl(cpu_cc_src, s->T0, s->tmp0);
+        gen_op_mov_reg_v(s, ot, reg, s->T0);
+        break;
+    }
+    set_cc_op(s, CC_OP_MULB + ot);
+}
+
 static void gen_INC(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     gen_alu_op(s, OP_INCL, decode->op[0].ot);
 }
 
+static void gen_INS(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    MemOp ot = decode->op[0].ot;
+
+    tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T1);
+    if (!gen_check_io(s, ot, s->tmp2_i32,
+                      SVM_IOIO_TYPE_MASK | SVM_IOIO_STR_MASK)) {
+        return;
+    }
+
+    if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
+        gen_io_start();
+    }
+    if (s->prefix & (PREFIX_REPZ | PREFIX_REPNZ)) {
+        gen_repz_ins(s, ot, s->pc_start - s->cs_base, s->pc - s->cs_base);
+        /* jump generated by gen_repz_ins */
+    } else {
+        gen_ins(s, ot);
+        if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
+            gen_jmp(s, s->pc - s->cs_base);
+        }
+    }
+}
+
 static void gen_Jcc(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     target_ulong next_eip = s->pc - s->cs_base;
@@ -273,6 +336,29 @@ static void gen_OR(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     gen_alu_op(s, OP_ORL, decode->op[0].ot);
 }
 
+static void gen_OUTS(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    MemOp ot = decode->op[1].ot;
+
+    tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
+    if (!gen_check_io(s, ot, s->tmp2_i32, SVM_IOIO_STR_MASK)) {
+        return;
+    }
+
+    if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
+        gen_io_start();
+    }
+    if (s->prefix & (PREFIX_REPZ | PREFIX_REPNZ)) {
+        gen_repz_outs(s, ot, s->pc_start - s->cs_base, s->pc - s->cs_base);
+        /* jump generated by gen_repz_ins */
+    } else {
+        gen_outs(s, ot);
+        if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
+            gen_jmp(s, s->pc - s->cs_base);
+        }
+    }
+}
+
 static void gen_PUSH(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     gen_push_v(s, decode->op[2].v);
-- 
2.37.1
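The overflow test that gen_IMUL's MO_64 path encodes with muls2/sari/sub can be sketched in host C (assuming a compiler with `__int128`; `imul64_overflow` is a made-up name, not a QEMU helper): after a widening signed multiply, CF and OF are set exactly when the high half of the product differs from the sign-extension of the low half, and the generated code keeps that difference in cc_src so a nonzero value means overflow.

```c
#include <stdint.h>

/* Returns nonzero iff a 64x64->64 signed multiply overflows, i.e. iff the
 * high 64 bits of the full product differ from the sign bit replication of
 * the low 64 bits; *lo receives the truncated result. */
static int imul64_overflow(int64_t a, int64_t b, int64_t *lo)
{
    __int128 p = (__int128)a * b;
    *lo = (int64_t)p;
    int64_t hi = (int64_t)(p >> 64);
    return hi != (*lo >> 63);  /* nonzero => CF and OF set */
}
```

The same test, shifted to 31 or 15 bits, is what the MO_32 and MO_16 cases compute.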
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, paul@nowt.org
Subject: [PATCH 13/17] target/i386: add 80-87, 90-97 opcodes
Date: Wed, 24 Aug 2022 19:32:46 +0200
Message-Id: <20220824173250.232491-7-pbonzini@redhat.com>
In-Reply-To: <20220824173123.232018-1-pbonzini@redhat.com>
"Qemu-devel" X-ZohoMail-DKIM: pass (identity @redhat.com) X-ZM-MESSAGEID: 1661363476811100001 Content-Type: text/plain; charset="utf-8" Signed-off-by: Paolo Bonzini --- target/i386/tcg/decode-new.c.inc | 30 ++++++++++++++++++++++++++++++ target/i386/tcg/emit.c.inc | 27 +++++++++++++++++++++++++++ 2 files changed, 57 insertions(+) diff --git a/target/i386/tcg/decode-new.c.inc b/target/i386/tcg/decode-new.= c.inc index 6892000aaf..07a2aea540 100644 --- a/target/i386/tcg/decode-new.c.inc +++ b/target/i386/tcg/decode-new.c.inc @@ -476,6 +476,20 @@ static void decode_group_0x63(DisasContext *s, CPUX86S= tate *env, X86OpEntry *ent } } =20 +static void decode_group_1(DisasContext *s, CPUX86State *env, X86OpEntry *= entry, uint8_t *b) +{ + static X86GenFunc group1_gen[8] =3D { + gen_ADD, gen_OR, gen_ADC, gen_SBB, gen_AND, gen_SUB, gen_XOR, gen_= SUB, + }; + int op =3D (get_modrm(s, env) >> 3) & 7; + entry->gen =3D group1_gen[op]; + + if (op =3D=3D 7) { + /* CMP */ + entry->special =3D X86_SPECIAL_NoWriteback; + } +} + static X86OpEntry A2_00_F7[16][8] =3D { { X86_OP_ENTRY2(ADD, E,b, G,b), @@ -558,8 +572,24 @@ static X86OpEntry A2_00_F7[16][8] =3D { X86_OP_ENTRYr(Jcc, J,b), }, { + X86_OP_GROUP2(1, E,b, I,b), + X86_OP_GROUP2(1, E,v, I,z), + X86_OP_GROUP2(1, E,b, I,b, i64), + X86_OP_GROUP2(1, E,v, I,b), + X86_OP_ENTRY2(AND, E,b, G,b, nowb), + X86_OP_ENTRY2(AND, E,v, G,v, nowb), + X86_OP_ENTRY2(XCHG, E,b, G,b, xchg), + X86_OP_ENTRY2(XCHG, E,v, G,v, xchg), }, { + X86_OP_ENTRY2(XCHG, 0,v, LoBits,v), + X86_OP_ENTRY2(XCHG, 0,v, LoBits,v), + X86_OP_ENTRY2(XCHG, 0,v, LoBits,v), + X86_OP_ENTRY2(XCHG, 0,v, LoBits,v), + X86_OP_ENTRY2(XCHG, 0,v, LoBits,v), + X86_OP_ENTRY2(XCHG, 0,v, LoBits,v), + X86_OP_ENTRY2(XCHG, 0,v, LoBits,v), + X86_OP_ENTRY2(XCHG, 0,v, LoBits,v), }, { }, diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc index ae82ebd8c9..80f6541464 100644 --- a/target/i386/tcg/emit.c.inc +++ b/target/i386/tcg/emit.c.inc @@ -325,6 +325,7 @@ static void 
gen_MOV(DisasContext *s, CPUX86State *env, = X86DecodedInsn *decode) { /* nothing to do! */ } +#define gen_NOP gen_MOV =20 static void gen_MOVSXD(DisasContext *s, CPUX86State *env, X86DecodedInsn *= decode) { @@ -391,6 +392,32 @@ static void gen_SUB(DisasContext *s, CPUX86State *env,= X86DecodedInsn *decode) gen_alu_op(s, OP_SUBL, decode->op[0].ot); } =20 +static void gen_XCHG(DisasContext *s, CPUX86State *env, X86DecodedInsn *de= code) +{ + if (decode->b =3D=3D 0x90 && !REX_B(s)) { + if (s->prefix & PREFIX_REPZ) { + gen_update_cc_op(s); + gen_jmp_im(s, s->pc_start - s->cs_base); + gen_helper_pause(cpu_env, tcg_const_i32(s->pc - s->pc_start)); + s->base.is_jmp =3D DISAS_NORETURN; + } + /* No writeback. */ + decode->op[0].alu_op_type =3D X86_ALU_SKIP; + return; + } + + if (s->prefix & PREFIX_LOCK) { + tcg_gen_atomic_xchg_tl(s->T0, s->A0, s->T1, + s->mem_index, decode->op[0].ot | MO_LE); + /* now store old value into register operand */ + gen_op_mov_reg_v(s, decode->op[2].ot, decode->op[2].n, s->T0); + } else { + /* move destination value into source operand, source preserved in= T1 */ + gen_op_mov_reg_v(s, decode->op[2].ot, decode->op[2].n, s->T0); + tcg_gen_mov_tl(s->T0, s->T1); + } +} + static void gen_XOR(DisasContext *s, CPUX86State *env, X86DecodedInsn *dec= ode) { /* special case XOR reg, reg */ --=20 2.37.1 From nobody Fri May 17 08:24:54 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass(p=none dis=none) header.from=redhat.com ARC-Seal: i=1; a=rsa-sha256; t=1661363490; cv=none; d=zohomail.com; s=zohoarc; b=jE5zAWTnnDJx5YZXZ4GaEa0VuAN6nlLwui8nyhdKsUoiLXr14XugvTk40PORR3a8bfHTqVugyKg6Sb3YJ/7UgNGANKT2TOHkP4eHToxbU9NqQ4oF6XIu7u7c/jzFIars6FR8Ky0A7s/2/hc3UpLD3gVCrhHVoo2ssfr/yw5KbGs= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; 
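[Editorial aside, not part of the original series: the `decode_group_1` helper above relies on the x86 convention that, for the 0x80-0x83 "group 1" opcodes, the /r field (bits 5:3) of the ModR/M byte selects the ALU operation, with /7 wired to SUB plus suppressed writeback, which is exactly CMP. A minimal stand-alone sketch of that extraction, with hypothetical names that are not QEMU APIs:]

```c
/* Hypothetical stand-in for the group1_gen table in decode_group_1:
 * the /r field (bits 5:3) of the ModR/M byte picks the group-1 ALU op.
 * Index 7 is CMP, i.e. SUB with the writeback suppressed. */
static const char *const group1_name[8] = {
    "ADD", "OR", "ADC", "SBB", "AND", "SUB", "XOR", "CMP"
};

/* Mirrors the (get_modrm() >> 3) & 7 extraction in the patch. */
static const char *group1_decode(unsigned char modrm)
{
    return group1_name[(modrm >> 3) & 7];
}
```

[End of aside.]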
From nobody Fri May 17 08:24:54 2024
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, paul@nowt.org
Subject: [PATCH 14/17] target/i386: add a0-a7, b0-b7 opcodes
Date: Wed, 24 Aug 2022 19:32:47 +0200
Message-Id: <20220824173250.232491-8-pbonzini@redhat.com>
In-Reply-To: <20220824173123.232018-1-pbonzini@redhat.com>
References: <20220824173123.232018-1-pbonzini@redhat.com>
Content-Type: text/plain; charset="utf-8"

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 target/i386/tcg/decode-new.c.inc | 16 ++++++++++++++++
 target/i386/tcg/emit.c.inc       | 22 ++++++++++++++++++++++
 2 files changed, 38 insertions(+)

diff --git a/target/i386/tcg/decode-new.c.inc b/target/i386/tcg/decode-new.c.inc
index 07a2aea540..3d96ac3adb 100644
--- a/target/i386/tcg/decode-new.c.inc
+++ b/target/i386/tcg/decode-new.c.inc
@@ -592,8 +592,24 @@ static X86OpEntry A2_00_F7[16][8] = {
         X86_OP_ENTRY2(XCHG, 0,v, LoBits,v),
     },
     {
+        X86_OP_ENTRY3(MOV, 0,b, O,b, None, None),     /* AL, Ob */
+        X86_OP_ENTRY3(MOV, 0,v, O,v, None, None),     /* rAX, Ov */
+        X86_OP_ENTRY3(MOV, O,b, 0,b, None, None),     /* Ob, AL */
+        X86_OP_ENTRY3(MOV, O,v, 0,v, None, None),     /* Ov, rAX */
+        X86_OP_ENTRY2(MOVS, Y,b, X,b),
+        X86_OP_ENTRY2(MOVS, Y,v, X,v),
+        X86_OP_ENTRY2(CMPS, Y,b, X,b),
+        X86_OP_ENTRY2(CMPS, Y,v, X,v),
     },
     {
+        X86_OP_ENTRY3(MOV, LoBits,b, I,b, None, None),
+        X86_OP_ENTRY3(MOV, LoBits,b, I,b, None, None),
+        X86_OP_ENTRY3(MOV, LoBits,b, I,b, None, None),
+        X86_OP_ENTRY3(MOV, LoBits,b, I,b, None, None),
+        X86_OP_ENTRY3(MOV, LoBits,b, I,b, None, None),
+        X86_OP_ENTRY3(MOV, LoBits,b, I,b, None, None),
+        X86_OP_ENTRY3(MOV, LoBits,b, I,b, None, None),
+        X86_OP_ENTRY3(MOV, LoBits,b, I,b, None, None),
     },
     {
     },
diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc
index 80f6541464..9395474302 100644
--- a/target/i386/tcg/emit.c.inc
+++ b/target/i386/tcg/emit.c.inc
@@ -227,6 +227,18 @@ static void gen_BOUND(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     }
 }
 
+static void gen_CMPS(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    MemOp ot = decode->op[0].ot;
+    if (s->prefix & PREFIX_REPNZ) {
+        gen_repz_cmps(s, ot, s->pc_start - s->cs_base, s->pc - s->cs_base, 1);
+    } else if (s->prefix & PREFIX_REPZ) {
+        gen_repz_cmps(s, ot, s->pc_start - s->cs_base, s->pc - s->cs_base, 0);
+    } else {
+        gen_cmps(s, ot);
+    }
+}
+
 static void gen_DAA(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     gen_update_cc_op(s);
@@ -327,6 +339,16 @@ static void gen_MOV(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 }
 #define gen_NOP gen_MOV
 
+static void gen_MOVS(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    MemOp ot = decode->op[0].ot;
+    if (s->prefix & (PREFIX_REPZ | PREFIX_REPNZ)) {
+        gen_repz_movs(s, ot, s->pc_start - s->cs_base, s->pc - s->cs_base);
+    } else {
+        gen_movs(s, ot);
+    }
+}
+
 static void gen_MOVSXD(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     tcg_gen_ext32s_tl(s->T0, s->T0);
-- 
2.37.1
From nobody Fri May 17 08:24:54 2024
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, paul@nowt.org
Subject: [PATCH 15/17] target/i386: do not clobber A0 in POP translation
Date: Wed, 24 Aug 2022 19:32:48 +0200
Message-Id: <20220824173250.232491-9-pbonzini@redhat.com>
In-Reply-To: <20220824173123.232018-1-pbonzini@redhat.com>
References: <20220824173123.232018-1-pbonzini@redhat.com>
Content-Type: text/plain; charset="utf-8"

The new decoder likes to compute the address in A0 very early, so the
gen_lea_v_seg in gen_pop_T0 would clobber the address of the memory
operand.  Instead use T0 since it is already available and will be
overwritten immediately after.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 target/i386/tcg/translate.c | 34 ++++++++++++++++++++--------------
 1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/target/i386/tcg/translate.c b/target/i386/tcg/translate.c
index d0a8c0becb..5c3742a9c7 100644
--- a/target/i386/tcg/translate.c
+++ b/target/i386/tcg/translate.c
@@ -508,17 +508,17 @@ static inline void gen_jmp_im(DisasContext *s, target_ulong pc)
     gen_op_jmp_v(s->tmp0);
 }
 
-/* Compute SEG:REG into A0.  SEG is selected from the override segment
+/* Compute SEG:REG into DEST.  SEG is selected from the override segment
    (OVR_SEG) and the default segment (DEF_SEG).  OVR_SEG may be -1 to
    indicate no override.  */
-static void gen_lea_v_seg(DisasContext *s, MemOp aflag, TCGv a0,
-                          int def_seg, int ovr_seg)
+static void gen_lea_v_seg_dest(DisasContext *s, MemOp aflag, TCGv dest, TCGv a0,
+                               int def_seg, int ovr_seg)
 {
     switch (aflag) {
 #ifdef TARGET_X86_64
     case MO_64:
         if (ovr_seg < 0) {
-            tcg_gen_mov_tl(s->A0, a0);
+            tcg_gen_mov_tl(dest, a0);
             return;
         }
         break;
@@ -529,14 +529,14 @@ static void gen_lea_v_seg(DisasContext *s, MemOp aflag, TCGv a0,
             ovr_seg = def_seg;
         }
         if (ovr_seg < 0) {
-            tcg_gen_ext32u_tl(s->A0, a0);
+            tcg_gen_ext32u_tl(dest, a0);
             return;
         }
         break;
     case MO_16:
         /* 16 bit address */
-        tcg_gen_ext16u_tl(s->A0, a0);
-        a0 = s->A0;
+        tcg_gen_ext16u_tl(dest, a0);
+        a0 = dest;
         if (ovr_seg < 0) {
             if (ADDSEG(s)) {
                 ovr_seg = def_seg;
@@ -553,17 +553,23 @@ static void gen_lea_v_seg(DisasContext *s, MemOp aflag, TCGv a0,
         TCGv seg = cpu_seg_base[ovr_seg];
 
         if (aflag == MO_64) {
-            tcg_gen_add_tl(s->A0, a0, seg);
+            tcg_gen_add_tl(dest, a0, seg);
         } else if (CODE64(s)) {
-            tcg_gen_ext32u_tl(s->A0, a0);
-            tcg_gen_add_tl(s->A0, s->A0, seg);
+            tcg_gen_ext32u_tl(dest, a0);
+            tcg_gen_add_tl(dest, dest, seg);
         } else {
-            tcg_gen_add_tl(s->A0, a0, seg);
-            tcg_gen_ext32u_tl(s->A0, s->A0);
+            tcg_gen_add_tl(dest, a0, seg);
+            tcg_gen_ext32u_tl(dest, dest);
         }
     }
 }
 
+static void gen_lea_v_seg(DisasContext *s, MemOp aflag, TCGv a0,
+                          int def_seg, int ovr_seg)
+{
+    gen_lea_v_seg_dest(s, aflag, s->A0, a0, def_seg, ovr_seg);
+}
+
 static inline void gen_string_movl_A0_ESI(DisasContext *s)
 {
     gen_lea_v_seg(s, s->aflag, cpu_regs[R_ESI], R_DS, s->override);
@@ -2506,8 +2512,8 @@ static MemOp gen_pop_T0(DisasContext *s)
 {
     MemOp d_ot = mo_pushpop(s, s->dflag);
 
-    gen_lea_v_seg(s, mo_stacksize(s), cpu_regs[R_ESP], R_SS, -1);
-    gen_op_ld_v(s, d_ot, s->T0, s->A0);
+    gen_lea_v_seg_dest(s, mo_stacksize(s), s->T0, cpu_regs[R_ESP], R_SS, -1);
+    gen_op_ld_v(s, d_ot, s->T0, s->T0);
 
     return d_ot;
 }
-- 
2.37.1
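[Editorial aside, not part of the original series: the hazard this patch fixes can be modeled with plain variables. "Pop into a memory operand" needs two addresses at once, so if the stack address were computed into the shared A0 scratch, it would overwrite the destination address placed there earlier by the decoder; computing it into T0 (which is about to be overwritten by the loaded value anyway) keeps A0 intact. A toy model with hypothetical names, not QEMU code:]

```c
#include <stdint.h>

/* Toy model of the fixed POP-to-memory sequence: mem is "memory",
 * a0/t0 stand in for the translator's two scratch values. */
static uint32_t pop_to_mem(uint32_t *mem, uint32_t *esp, uint32_t dest_addr)
{
    uint32_t a0 = dest_addr; /* operand's effective address, set early */
    uint32_t t0;

    t0 = *esp;               /* stack address goes into t0, not a0 */
    t0 = mem[t0];            /* t0 now holds the popped value */
    *esp += 1;               /* pop update (word-indexed for simplicity) */

    mem[a0] = t0;            /* a0 still holds the destination address */
    return t0;
}
```

[End of aside.]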
From nobody Fri May 17 08:24:54 2024
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, paul@nowt.org
Subject: [PATCH 16/17] target/i386: add 88-8f, 98-9f opcodes
Date: Wed, 24 Aug 2022 19:32:49 +0200
Message-Id: <20220824173250.232491-10-pbonzini@redhat.com>
In-Reply-To: <20220824173123.232018-1-pbonzini@redhat.com>
References: <20220824173123.232018-1-pbonzini@redhat.com>
Content-Type: text/plain; charset="utf-8"

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 target/i386/tcg/decode-new.c.inc |  41 ++++++++
 target/i386/tcg/decode-old.c.inc |  19 +---
 target/i386/tcg/emit.c.inc       | 166 ++++++++++++++++++++++++++++++-
 target/i386/tcg/translate.c      |  17 ++++
 4 files changed, 227 insertions(+), 16 deletions(-)

diff --git a/target/i386/tcg/decode-new.c.inc b/target/i386/tcg/decode-new.c.inc
index 3d96ac3adb..1e607b68fa 100644
--- a/target/i386/tcg/decode-new.c.inc
+++ b/target/i386/tcg/decode-new.c.inc
@@ -97,6 +97,9 @@ typedef enum X86OpSpecial {
     /* Writeback not needed or done manually in the callback */
     X86_SPECIAL_NoWriteback,
 
+    /* Do not apply segment base to effective address */
+    X86_SPECIAL_NoSeg,
+
     /* Illegal or exclusive to 64-bit mode */
     X86_SPECIAL_i64,
     X86_SPECIAL_o64,
@@ -194,8 +197,12 @@ struct X86DecodedInsn {
 #define i64 .special = X86_SPECIAL_i64,
 #define o64 .special = X86_SPECIAL_o64,
 #define nowb .special = X86_SPECIAL_NoWriteback,
+#define noseg .special = X86_SPECIAL_NoSeg,
 #define xchg .special = X86_SPECIAL_Locked,
 
+static X86OpEntry illegal_opcode =
+    X86_OP_ENTRY0(illegal);
+
 static uint8_t get_modrm(DisasContext *s, CPUX86State *env)
 {
     if (!s->has_modrm) {
@@ -490,6 +497,18 @@ static void decode_group_1(DisasContext *s, CPUX86State *env, X86OpEntry *entry,
     }
 }
 
+static void decode_group_1A(DisasContext *s, CPUX86State *env, X86OpEntry *entry, uint8_t *b)
+{
+    int op = (get_modrm(s, env) >> 3) & 7;
+    if (op != 0) {
+        *entry = illegal_opcode;
+    } else {
+        entry->gen = gen_POP;
+        /* The address must use the value of ESP after the pop.  */
+        s->popl_esp_hack = 1 << mo_pushpop(s, s->dflag);
+    }
+}
+
 static X86OpEntry A2_00_F7[16][8] = {
     {
         X86_OP_ENTRY2(ADD, E,b, G,b),
@@ -703,8 +722,24 @@ static X86OpEntry A2_08_FF[16][8] = {
         X86_OP_ENTRYr(Jcc, J,b),
     },
     {
+        X86_OP_ENTRY3(MOV, E,b, G,b, None, None),
+        X86_OP_ENTRY3(MOV, E,v, G,v, None, None),
+        X86_OP_ENTRY3(MOV, G,b, E,b, None, None),
+        X86_OP_ENTRY3(MOV, G,v, E,v, None, None),
+        X86_OP_ENTRY3(MOV, E,v, S,w, None, None),
+        X86_OP_ENTRY3(LEA, G,v, M,v, None, None, noseg),
+        X86_OP_ENTRY3(MOV, S,w, E,v, None, None),
+        X86_OP_GROUPw(1A, E,v)
     },
     {
+        X86_OP_ENTRY1(CBW, 0,v),                   /* rAX */
+        X86_OP_ENTRY3(CWD, 2,v, 0,v, None, None),  /* rDX, rAX */
+        X86_OP_ENTRYr(CALLF, A,p, i64),
+        X86_OP_ENTRY0(WAIT),
+        X86_OP_ENTRY0(PUSHF),
+        X86_OP_ENTRY0(POPF),
+        X86_OP_ENTRY0(SAHF),
+        X86_OP_ENTRY0(LAHF),
     },
     {
     },
@@ -982,6 +1017,7 @@ static target_ulong disas_insn_new(DisasContext *s, CPUState *cpu, int b)
 #if 0
     s->pc_start = s->pc = s->base.pc_next;
     s->override = -1;
+    s->popl_esp_hack = 0;
 #ifdef TARGET_X86_64
     s->rex_w = false;
     s->rex_r = 0;
@@ -1170,6 +1206,11 @@ static target_ulong disas_insn_new(DisasContext *s, CPUState *cpu, int b)
     case X86_SPECIAL_NoWriteback:
         decode.op[0].alu_op_type = X86_ALU_SKIP;
         break;
+
+    case X86_SPECIAL_NoSeg:
+        decode.mem.def_seg = -1;
+        s->override = -1;
+        break;
     }
 
     if (decode.op[0].has_ea || decode.op[1].has_ea || decode.op[2].has_ea) {
diff --git a/target/i386/tcg/decode-old.c.inc b/target/i386/tcg/decode-old.c.inc
index 7763bef11d..69ce70d141 100644
--- a/target/i386/tcg/decode-old.c.inc
+++ b/target/i386/tcg/decode-old.c.inc
@@ -1792,6 +1792,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
 
     s->pc_start = s->pc = pc_start;
     s->override = -1;
+    s->popl_esp_hack = 0;
 #ifdef TARGET_X86_64
     s->rex_w = false;
     s->rex_r = 0;
@@ -2380,20 +2381,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_op_ld_v(s, ot, s->T1, s->A0);
             gen_add_A0_im(s, 1 << ot);
             gen_op_ld_v(s, MO_16, s->T0, s->A0);
-        do_lcall:
-            if (PE(s) && !VM86(s)) {
-                tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
-                gen_helper_lcall_protected(cpu_env, s->tmp2_i32, s->T1,
-                                           tcg_const_i32(dflag - 1),
-                                           tcg_const_tl(s->pc - s->cs_base));
-            } else {
-                tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
-                gen_helper_lcall_real(cpu_env, s->tmp2_i32, s->T1,
-                                      tcg_const_i32(dflag - 1),
-                                      tcg_const_i32(s->pc - s->cs_base));
-            }
-            tcg_gen_ld_tl(s->tmp4, cpu_env, offsetof(CPUX86State, eip));
-            gen_jr(s, s->tmp4);
+            gen_far_call(s);
             break;
         case 4: /* jmp Ev */
             if (dflag == MO_16) {
@@ -3964,7 +3952,8 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 tcg_gen_movi_tl(s->T0, selector);
                 tcg_gen_movi_tl(s->T1, offset);
             }
-            goto do_lcall;
+            gen_far_call(s);
+            break;
         case 0xe9: /* jmp im */
             if (dflag != MO_16) {
                 tval = (int32_t)insn_get(env, s, MO_32);
diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc
index 9395474302..22f2fbde79 100644
--- a/target/i386/tcg/emit.c.inc
+++ b/target/i386/tcg/emit.c.inc
@@ -227,6 +227,42 @@ static void gen_BOUND(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     }
 }
 
+static void gen_CALLF(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    MemOp ot = decode->op[1].ot;
+    unsigned int selector, offset;
+
+    if (CODE64(s)) {
+        gen_illegal_opcode(s);
+        return;
+    }
+
+    offset = insn_get(env, s, ot);
+    selector = insn_get(env, s, MO_16);
+    tcg_gen_movi_tl(s->T0, selector);
+    tcg_gen_movi_tl(s->T1, offset);
+    return gen_far_call(s);
+}
+
+static void gen_CBW(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    switch(decode->op[0].ot) {
+#ifdef TARGET_X86_64
+    case MO_64:
+        tcg_gen_ext32s_tl(s->T0, s->T0);
+        break;
+#endif
+    case MO_32:
+        tcg_gen_ext16s_tl(s->T0, s->T0);
+        break;
+    case MO_16:
+        tcg_gen_ext8s_tl(s->T0, s->T0);
+        break;
+    default:
+        tcg_abort();
+    }
+}
+
 static void gen_CMPS(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     MemOp ot = decode->op[0].ot;
@@ -239,6 +275,24 @@ static void gen_CMPS(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     }
 }
 
+static void gen_CWD(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    int shift = 8 << decode->op[0].ot;
+    switch (shift) {
+    case 64:
+        break;
+    case 32:
+        tcg_gen_ext32s_tl(s->T0, s->T0);
+        break;
+    case 16:
+        tcg_gen_ext16s_tl(s->T0, s->T0);
+        break;
+    default:
+        tcg_abort();
+    }
+    tcg_gen_sari_tl(s->T0, s->T0, shift - 1);
+}
+
 static void gen_DAA(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     gen_update_cc_op(s);
@@ -333,6 +387,22 @@ static void gen_Jcc(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     gen_jcc(s, decode->b & 0xf, decode->immediate, next_eip);
 }
 
+static void gen_LAHF(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    if (CODE64(s) && !(s->cpuid_ext3_features & CPUID_EXT3_LAHF_LM)) {
+        return gen_illegal_opcode(s);
+    }
+    gen_compute_eflags(s);
+    /* Note: gen_compute_eflags() only gives the condition codes */
+    tcg_gen_ori_tl(s->T0, cpu_cc_src, 0x02);
+    gen_op_mov_reg_v(s, MO_8, R_AH, s->T0);
+}
+
+static void gen_LEA(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    tcg_gen_mov_tl(s->T0, s->A0);
+}
+
 static void gen_MOV(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     /* nothing to do! */
@@ -392,10 +462,25 @@ static void gen_PUSHA(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     gen_pusha(s);
 }
 
+static void gen_PUSHF(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_svm_check_intercept(s, SVM_EXIT_PUSHF);
+    if (check_vm86_iopl(s)) {
+        gen_update_cc_op(s);
+        gen_helper_read_eflags(s->T0, cpu_env);
+        gen_push_v(s, s->T0);
+    }
+}
+
 static void gen_POP(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     MemOp ot = gen_pop_T0(s);
-    /* NOTE: order is important for pop %sp */
+    if (decode->op[0].alu_op_type == X86_ALU_MEM) {
+        /* NOTE: order is important for MMU exceptions */
+        gen_op_st_v(s, ot, s->T0, s->A0);
+        decode->op[0].alu_op_type = X86_ALU_SKIP;
+    }
+    /* NOTE: writing back registers after update is important for pop %sp */
     gen_pop_update(s, ot);
 }
 
@@ -404,6 +489,76 @@ static void gen_POPA(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     gen_popa(s);
 }
 
+static void gen_POPF(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    gen_svm_check_intercept(s, SVM_EXIT_POPF);
+    if (check_vm86_iopl(s)) {
+        MemOp ot = gen_pop_T0(s);
+        if (CPL(s) == 0) {
+            if (s->dflag != MO_16) {
+                gen_helper_write_eflags(cpu_env, s->T0,
+                                        tcg_const_i32((TF_MASK | AC_MASK |
+                                                       ID_MASK | NT_MASK |
+                                                       IF_MASK |
+                                                       IOPL_MASK)));
+            } else {
+                gen_helper_write_eflags(cpu_env, s->T0,
+                                        tcg_const_i32((TF_MASK | AC_MASK |
+                                                       ID_MASK | NT_MASK |
+                                                       IF_MASK | IOPL_MASK)
+                                                      & 0xffff));
+            }
+        } else {
+            if (CPL(s) <= IOPL(s)) {
+                if (s->dflag != MO_16) {
+                    gen_helper_write_eflags(cpu_env, s->T0,
+                                            tcg_const_i32((TF_MASK |
+                                                           AC_MASK |
+                                                           ID_MASK |
+                                                           NT_MASK |
+                                                           IF_MASK)));
+                } else {
+                    gen_helper_write_eflags(cpu_env, s->T0,
+                                            tcg_const_i32((TF_MASK |
+                                                           AC_MASK |
+                                                           ID_MASK |
+                                                           NT_MASK |
+                                                           IF_MASK)
+                                                          & 0xffff));
+                }
+            } else {
+                if (s->dflag != MO_16) {
+                    gen_helper_write_eflags(cpu_env, s->T0,
+                                            tcg_const_i32((TF_MASK | AC_MASK |
+                                                           ID_MASK | NT_MASK)));
+                } else {
+                    gen_helper_write_eflags(cpu_env, s->T0,
+                                            tcg_const_i32((TF_MASK | AC_MASK |
+                                                           ID_MASK | NT_MASK)
+                                                          & 0xffff));
+                }
+            }
+        }
+        gen_pop_update(s, ot);
+        set_cc_op(s, CC_OP_EFLAGS);
+        /* abort translation because TF/AC flag may change */
+        gen_jmp_im(s, s->pc - s->cs_base);
+        gen_eob(s);
+    }
+}
+
+static void gen_SAHF(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    if (CODE64(s) && !(s->cpuid_ext3_features & CPUID_EXT3_LAHF_LM)) {
+        return gen_illegal_opcode(s);
+    }
+    gen_op_mov_v_reg(s, MO_8, s->T0, R_AH);
+    gen_compute_eflags(s);
+    tcg_gen_andi_tl(cpu_cc_src, cpu_cc_src, CC_O);
+    tcg_gen_andi_tl(s->T0, s->T0, CC_S | CC_Z | CC_A | CC_P | CC_C);
+    tcg_gen_or_tl(cpu_cc_src, cpu_cc_src, s->T0);
+}
+
 static void gen_SBB(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     gen_alu_op(s, OP_SBBL, decode->op[0].ot);
@@ -414,6 +569,15 @@ static void gen_SUB(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     gen_alu_op(s, OP_SUBL, decode->op[0].ot);
 }
 
+static void gen_WAIT(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    if ((s->flags & (HF_MP_MASK | HF_TS_MASK)) == (HF_MP_MASK | HF_TS_MASK)) {
+        gen_exception(s, EXCP07_PREX, s->pc_start - s->cs_base);
+    } else {
+        gen_helper_fwait(cpu_env);
+    }
+}
+
 static void gen_XCHG(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     if (decode->b == 0x90 && !REX_B(s)) {
diff --git a/target/i386/tcg/translate.c b/target/i386/tcg/translate.c
index 5c3742a9c7..b742e456b0 100644
--- a/target/i386/tcg/translate.c
+++ b/target/i386/tcg/translate.c
@@ -2471,6 +2471,23 @@ static void gen_movl_seg_T0(DisasContext *s, X86Seg seg_reg)
     }
 }
 
+static void gen_far_call(DisasContext *s)
+{
+    if (PE(s) && !VM86(s)) {
+        tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
+        gen_helper_lcall_protected(cpu_env, s->tmp2_i32, s->T1,
+                                   tcg_const_i32(s->dflag - 1),
+                                   tcg_const_tl(s->pc - s->cs_base));
+    } else {
+        tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
+        gen_helper_lcall_real(cpu_env,
s->tmp2_i32, s->T1, + tcg_const_i32(s->dflag - 1), + tcg_const_i32(s->pc - s->cs_base)); + } + tcg_gen_ld_tl(s->tmp4, cpu_env, offsetof(CPUX86State, eip)); + gen_jr(s, s->tmp4); +} + static void gen_svm_check_intercept(DisasContext *s, uint32_t type) { /* no SVM activated; fast case */ --=20 2.37.1 From nobody Fri May 17 08:24:54 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass(p=none dis=none) header.from=redhat.com ARC-Seal: i=1; a=rsa-sha256; t=1661363086; cv=none; d=zohomail.com; s=zohoarc; b=AD14MIGZMoAs6L02cnF4z3Hu6T8Q9g7xyo+YKQ7re7+u/UsmVkbqW+w4WLe2GlGpDVoGnOy5NIDfiAjGd2AzAfpSScgRx3lXCx60PIyjFXkuFhNafzS0ZHSzKntGrmxqSJXps1FcJ7ilqJpKEWvThRIB1qfKCqVqV5WdokDWHVA= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1661363086; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=ujtgSY45ospJjakzud3ukSnLoFm01n4BLS2olICm0ug=; b=UJKe2xlMbAlr7/gO+tkTRgUQXNuxOwA31rMK0V221n92FUucNEqKnmIAd8TSmq6+I/XrrahF/WGhgeLg411UKLLAHhv0q2WSZ+xJkjjnYiMHLF1P9ujsh+QLfGHEcthzCglCdR4Bgqsefz+n03fp4lbxa+Pv0fUYp0Ykbefv2xg= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass header.from= (p=none dis=none) Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1661363086752988.1876909832453; Wed, 24 Aug 2022 10:44:46 -0700 (PDT) Received: from localhost ([::1]:35334 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1oQuQi-0006XC-RA for 
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, paul@nowt.org
Subject: [PATCH 17/17] target/i386: add a8-af, b8-bf opcodes
Date: Wed, 24 Aug 2022 19:32:50 +0200
Message-Id: <20220824173250.232491-11-pbonzini@redhat.com>
In-Reply-To: <20220824173123.232018-1-pbonzini@redhat.com>
References: <20220824173123.232018-1-pbonzini@redhat.com>

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 target/i386/tcg/decode-new.c.inc | 16 +++++++++++++++
 target/i386/tcg/decode-old.c.inc |  2 +-
 target/i386/tcg/emit.c.inc       | 35 +++++++++++++++++++++++++++++++-
 3 files changed, 51 insertions(+), 2 deletions(-)

diff --git a/target/i386/tcg/decode-new.c.inc b/target/i386/tcg/decode-new.c.inc
index 1e607b68fa..832a8d8d25 100644
--- a/target/i386/tcg/decode-new.c.inc
+++ b/target/i386/tcg/decode-new.c.inc
@@ -742,8 +742,24 @@ static X86OpEntry A2_08_FF[16][8] = {
         X86_OP_ENTRY0(LAHF),
     },
     {
+        X86_OP_ENTRY2(AND, 0,b, I,b, nowb),   /* AL, Ib */
+        X86_OP_ENTRY2(AND, 0,v, I,z, nowb),   /* rAX, Iz */
+        X86_OP_ENTRY2(STOS, Y,b, 0,b),
+        X86_OP_ENTRY2(STOS, Y,v, 0,v),
+        X86_OP_ENTRY2(LODS, 0,b, X,b, nowb),
+        X86_OP_ENTRY2(LODS, 0,v, X,v, nowb),
+        X86_OP_ENTRY2(SCAS, 0,b, Y,b, nowb),
+        X86_OP_ENTRY2(SCAS, 0,v, Y,v, nowb),
     },
     {
+        X86_OP_ENTRY3(MOV, LoBits,v, I,v, None, None),
+        X86_OP_ENTRY3(MOV, LoBits,v, I,v, None, None),
+        X86_OP_ENTRY3(MOV, LoBits,v, I,v, None, None),
+        X86_OP_ENTRY3(MOV, LoBits,v, I,v, None, None),
+        X86_OP_ENTRY3(MOV, LoBits,v, I,v, None, None),
+        X86_OP_ENTRY3(MOV, LoBits,v, I,v, None, None),
+        X86_OP_ENTRY3(MOV, LoBits,v, I,v, None, None),
+        X86_OP_ENTRY3(MOV, LoBits,v, I,v, None, None),
     },
     {
     },
diff --git a/target/i386/tcg/decode-old.c.inc b/target/i386/tcg/decode-old.c.inc
index 69ce70d141..d17671b8eb 100644
--- a/target/i386/tcg/decode-old.c.inc
+++ b/target/i386/tcg/decode-old.c.inc
@@ -1822,7 +1822,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
 #else
         use_new &= b <= limit;
 #endif
-        if (use_new && b <= 0x7f) {
+        if (use_new && b <= 0xbf) {
             return disas_insn_new(s, cpu, b);
         }
     case 0x0f:
diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc
index 22f2fbde79..1d4f63322e 100644
--- a/target/i386/tcg/emit.c.inc
+++ b/target/i386/tcg/emit.c.inc
@@ -229,7 +229,7 @@ static void gen_BOUND(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 
 static void gen_CALLF(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
-    MemOp ot = decode->op[1].ot;
+    MemOp ot = decode->op[2].ot;
     unsigned int selector, offset;
 
     if (CODE64(s)) {
@@ -237,6 +237,7 @@ static void gen_CALLF(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
         return;
     }
 
+    assert(ot >= MO_16);
     offset = insn_get(env, s, ot);
     selector = insn_get(env, s, MO_16);
     tcg_gen_movi_tl(s->T0, selector);
@@ -403,6 +404,16 @@ static void gen_LEA(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     tcg_gen_mov_tl(s->T0, s->A0);
 }
 
+static void gen_LODS(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    MemOp ot = decode->op[0].ot;
+    if (s->prefix & (PREFIX_REPZ | PREFIX_REPNZ)) {
+        gen_repz_lods(s, ot, s->pc_start - s->cs_base, s->pc - s->cs_base);
+    } else {
+        gen_lods(s, ot);
+    }
+}
+
 static void gen_MOV(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     /* nothing to do! */
@@ -564,6 +575,28 @@ static void gen_SBB(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
     gen_alu_op(s, OP_SBBL, decode->op[0].ot);
 }
 
+static void gen_SCAS(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    MemOp ot = decode->op[0].ot;
+    if (s->prefix & PREFIX_REPNZ) {
+        gen_repz_scas(s, ot, s->pc_start - s->cs_base, s->pc - s->cs_base, 1);
+    } else if (s->prefix & PREFIX_REPZ) {
+        gen_repz_scas(s, ot, s->pc_start - s->cs_base, s->pc - s->cs_base, 0);
+    } else {
+        gen_scas(s, ot);
+    }
+}
+
+static void gen_STOS(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
+{
+    MemOp ot = decode->op[0].ot;
+    if (s->prefix & (PREFIX_REPZ | PREFIX_REPNZ)) {
+        gen_repz_stos(s, ot, s->pc_start - s->cs_base, s->pc - s->cs_base);
+    } else {
+        gen_stos(s, ot);
+    }
+}
+
 static void gen_SUB(DisasContext *s, CPUX86State *env, X86DecodedInsn *decode)
 {
     gen_alu_op(s, OP_SUBL, decode->op[0].ot);
-- 
2.37.1
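[Editorial note, not part of the series: the patches above translate guest instructions into TCG ops, so their correctness hinges on the architectural semantics of CBW/CWDE/CDQE, CWD/CDQ/CQO, and the REPNE-prefixed string ops. The Python sketch below models those guest-visible semantics as documented for x86; the function names and calling conventions are illustrative, not QEMU APIs.]

```python
def cbw_family(value: int, op_bits: int) -> int:
    """CBW/CWDE/CDQE: sign-extend the low half of the accumulator to
    op_bits, mirroring the ext8s/ext16s/ext32s cases in gen_CBW."""
    half = op_bits // 2
    low = value & ((1 << half) - 1)
    if low & (1 << (half - 1)):       # sign bit of the low half set
        low -= 1 << half              # reinterpret as negative
    return low & ((1 << op_bits) - 1)

def cwd_family(value: int, op_bits: int) -> int:
    """CWD/CDQ/CQO: the high register (DX/EDX/RDX) becomes the sign bit
    of the accumulator replicated, which gen_CWD obtains with a
    sign-extension followed by an arithmetic shift by op_bits - 1."""
    sign = (value >> (op_bits - 1)) & 1
    return ((1 << op_bits) - 1) if sign else 0

def repne_scas(mem: list, di: int, al: int, cx: int, df: int = 0):
    """REPNE SCASB: compare AL with [DI], stepping DI by the direction
    flag, until a match sets ZF or CX reaches 0; this is the branch
    gen_SCAS dispatches to gen_repz_scas(..., 1) for."""
    step = -1 if df else 1
    zf = 0
    while cx:
        zf = 1 if mem[di] == al else 0
        di += step
        cx -= 1
        if zf:                        # REPNE stops once ZF becomes 1
            break
    return di, cx, zf
```

For example, CBW with AL = 0x80 yields AX = 0xFF80, and CWD with a negative AX yields DX = 0xFFFF, matching the shift-by-`shift - 1` trick in gen_CWD.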