Subject: [Qemu-devel] [PATCH v2 01/20] tcg: Replace MO_8 with MO_UB alias
Date: Mon, 22 Jul 2019 15:38:09 +0000
Message-ID: <1563809888089.57547@bt.com>
Cc: peter.maydell@linaro.org, walling@linux.ibm.com, mst@redhat.com,
    palmer@sifive.com, mark.cave-ayland@ilande.co.uk, Alistair.Francis@wdc.com,
    arikalo@wavecomp.com, david@redhat.com, pasic@linux.ibm.com,
    borntraeger@de.ibm.com, rth@twiddle.net, atar4qemu@gmail.com,
    ehabkost@redhat.com, sw@weilnetz.de, qemu-s390x@nongnu.org,
    qemu-arm@nongnu.org, david@gibson.dropbear.id.au, qemu-riscv@nongnu.org,
    cohuck@redhat.com, claudio.fontana@huawei.com, alex.williamson@redhat.com,
    qemu-ppc@nongnu.org, amarkovic@wavecomp.com, pbonzini@redhat.com,
    aurelien@aurel32.net

Preparation for splitting MO_8 out from TCGMemOp into new accelerator
independent MemOp.

As MO_8 will be a value of MemOp, existing TCGMemOp comparisons and
coercions will trigger -Wenum-compare and -Wenum-conversion.

Signed-off-by: Tony Nguyen
---
 target/arm/sve_helper.c             |  4 +-
 target/arm/translate-a64.c          | 14 +++----
 target/arm/translate-sve.c          |  4 +-
 target/arm/translate.c              | 38 +++++++++----------
 target/i386/translate.c             | 72 +++++++++++++++++------------
 target/mips/translate.c             |  4 +-
 target/ppc/translate/vmx-impl.inc.c | 28 +++++++-------
 target/s390x/translate.c            |  2 +-
 target/s390x/translate_vx.inc.c     |  4 +-
 target/s390x/vec.h                  |  4 +-
 tcg/aarch64/tcg-target.inc.c        | 16 ++++----
 tcg/arm/tcg-target.inc.c            |  6 +--
 tcg/i386/tcg-target.inc.c           | 54 +++++++++++++-------------
 tcg/mips/tcg-target.inc.c           |  4 +-
 tcg/riscv/tcg-target.inc.c          |  4 +-
 tcg/sparc/tcg-target.inc.c          |  2 +-
 tcg/tcg-op-gvec.c                   | 76 ++++++++++++++++--------------
 tcg/tcg-op-vec.c                    | 10 ++---
 tcg/tcg-op.c                        |  6 +--
 tcg/tcg.h                           |  2 +-
 20 files changed, 177 insertions(+), 177 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index fc0c175..4c7e11f 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1531,7 +1531,7 @@ void HELPER(sve_cpy_m_b)(void *vd, void *vn, void *vg,
     uint64_t *d = vd, *n = vn;
     uint8_t *pg = vg;
 
-    mm = dup_const(MO_8, mm);
+    mm = dup_const(MO_UB, mm);
     for (i = 0; i < opr_sz; i += 1) {
         uint64_t nn = n[i];
         uint64_t pp = expand_pred_b(pg[H1(i)]);
@@ -1588,7 +1588,7 @@ void HELPER(sve_cpy_z_b)(void *vd, void *vg, uint64_t val, uint32_t desc)
     uint64_t *d = vd;
     uint8_t *pg = vg;
 
-    val = dup_const(MO_8, val);
+    val = dup_const(MO_UB, val);
     for (i = 0; i < opr_sz; i += 1) {
         d[i] = val & expand_pred_b(pg[H1(i)]);
     }
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index d323147..f840b43 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -993,7 +993,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_ld8u_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_16:
@@ -1002,7 +1002,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
     case MO_32:
         tcg_gen_ld32u_i64(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_8|MO_SIGN:
+    case MO_SB:
         tcg_gen_ld8s_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_16|MO_SIGN:
@@ -1025,13 +1025,13 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_ld8u_i32(tcg_dest, cpu_env, vect_off);
         break;
     case MO_16:
         tcg_gen_ld16u_i32(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_8|MO_SIGN:
+    case MO_SB:
         tcg_gen_ld8s_i32(tcg_dest, cpu_env, vect_off);
         break;
     case MO_16|MO_SIGN:
@@ -1052,7 +1052,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_st8_i64(tcg_src, cpu_env, vect_off);
         break;
     case MO_16:
@@ -1074,7 +1074,7 @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_st8_i32(tcg_src, cpu_env, vect_off);
         break;
     case MO_16:
@@ -12885,7 +12885,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
 
     default: /* integer */
         switch (size) {
-        case MO_8:
+        case MO_UB:
         case MO_64:
             unallocated_encoding(s);
             return;
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index fa068b0..ec5fb11 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -1665,7 +1665,7 @@ static void do_sat_addsub_vec(DisasContext *s, int esz, int rd, int rn,
     desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
 
     switch (esz) {
-    case MO_8:
+    case MO_UB:
         t32 = tcg_temp_new_i32();
         tcg_gen_extrl_i64_i32(t32, val);
         if (d) {
@@ -3308,7 +3308,7 @@ static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a)
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_sve_subri_b,
           .opt_opc = vecop_list,
-          .vece = MO_8,
+          .vece = MO_UB,
           .scalar_first = true },
         { .fni8 = tcg_gen_vec_sub16_i64,
           .fniv = tcg_gen_sub_vec,
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 7853462..39266cf 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -1474,7 +1474,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
     long offset = neon_element_offset(reg, ele, size);
 
     switch (size) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_st8_i32(var, cpu_env, offset);
         break;
     case MO_16:
@@ -1493,7 +1493,7 @@ static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
     long offset = neon_element_offset(reg, ele, size);
 
     switch (size) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_st8_i64(var, cpu_env, offset);
         break;
     case MO_16:
@@ -4262,7 +4262,7 @@ const GVecGen2i ssra_op[4] = {
       .fniv = gen_ssra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_ssra,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni8 = gen_ssra16_i64,
       .fniv = gen_ssra_vec,
       .load_dest = true,
@@ -4320,7 +4320,7 @@ const GVecGen2i usra_op[4] = {
       .fniv = gen_usra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_usra,
-      .vece = MO_8, },
+      .vece = MO_UB, },
     { .fni8 = gen_usra16_i64,
       .fniv = gen_usra_vec,
       .load_dest = true,
@@ -4341,7 +4341,7 @@ const GVecGen2i usra_op[4] = {
 
 static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
 {
-    uint64_t mask = dup_const(MO_8, 0xff >> shift);
+    uint64_t mask = dup_const(MO_UB, 0xff >> shift);
     TCGv_i64 t = tcg_temp_new_i64();
 
     tcg_gen_shri_i64(t, a, shift);
@@ -4400,7 +4400,7 @@ const GVecGen2i sri_op[4] = {
       .fniv = gen_shr_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sri,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni8 = gen_shr16_ins_i64,
       .fniv = gen_shr_ins_vec,
       .load_dest = true,
@@ -4421,7 +4421,7 @@ const GVecGen2i sri_op[4] = {
 
 static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
 {
-    uint64_t mask = dup_const(MO_8, 0xff << shift);
+    uint64_t mask = dup_const(MO_UB, 0xff << shift);
     TCGv_i64 t = tcg_temp_new_i64();
 
     tcg_gen_shli_i64(t, a, shift);
@@ -4478,7 +4478,7 @@ const GVecGen2i sli_op[4] = {
       .fniv = gen_shl_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sli,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni8 = gen_shl16_ins_i64,
       .fniv = gen_shl_ins_vec,
       .load_dest = true,
@@ -4574,7 +4574,7 @@ const GVecGen3 mla_op[4] = {
       .fniv = gen_mla_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mla,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni4 = gen_mla16_i32,
       .fniv = gen_mla_vec,
       .load_dest = true,
@@ -4598,7 +4598,7 @@ const GVecGen3 mls_op[4] = {
       .fniv = gen_mls_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mls,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni4 = gen_mls16_i32,
       .fniv = gen_mls_vec,
       .load_dest = true,
@@ -4645,7 +4645,7 @@ const GVecGen3 cmtst_op[4] = {
     { .fni4 = gen_helper_neon_tst_u8,
       .fniv = gen_cmtst_vec,
       .opt_opc = vecop_list_cmtst,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni4 = gen_helper_neon_tst_u16,
       .fniv = gen_cmtst_vec,
       .opt_opc = vecop_list_cmtst,
@@ -4681,7 +4681,7 @@ const GVecGen4 uqadd_op[4] = {
       .fno = gen_helper_gvec_uqadd_b,
       .write_aofs = true,
       .opt_opc = vecop_list_uqadd,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fniv = gen_uqadd_vec,
       .fno = gen_helper_gvec_uqadd_h,
       .write_aofs = true,
@@ -4719,7 +4719,7 @@ const GVecGen4 sqadd_op[4] = {
       .fno = gen_helper_gvec_sqadd_b,
       .opt_opc = vecop_list_sqadd,
       .write_aofs = true,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fniv = gen_sqadd_vec,
       .fno = gen_helper_gvec_sqadd_h,
       .opt_opc = vecop_list_sqadd,
@@ -4757,7 +4757,7 @@ const GVecGen4 uqsub_op[4] = {
       .fno = gen_helper_gvec_uqsub_b,
       .opt_opc = vecop_list_uqsub,
       .write_aofs = true,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fniv = gen_uqsub_vec,
       .fno = gen_helper_gvec_uqsub_h,
       .opt_opc = vecop_list_uqsub,
@@ -4795,7 +4795,7 @@ const GVecGen4 sqsub_op[4] = {
       .fno = gen_helper_gvec_sqsub_b,
       .opt_opc = vecop_list_sqsub,
       .write_aofs = true,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fniv = gen_sqsub_vec,
       .fno = gen_helper_gvec_sqsub_h,
       .opt_opc = vecop_list_sqsub,
@@ -4972,15 +4972,15 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                                     vec_size, vec_size);
                 break;
             case 5: /* VBSL */
-                tcg_gen_gvec_bitsel(MO_8, rd_ofs, rd_ofs, rn_ofs, rm_ofs,
+                tcg_gen_gvec_bitsel(MO_UB, rd_ofs, rd_ofs, rn_ofs, rm_ofs,
                                     vec_size, vec_size);
                 break;
             case 6: /* VBIT */
-                tcg_gen_gvec_bitsel(MO_8, rd_ofs, rm_ofs, rn_ofs, rd_ofs,
+                tcg_gen_gvec_bitsel(MO_UB, rd_ofs, rm_ofs, rn_ofs, rd_ofs,
                                     vec_size, vec_size);
                 break;
             case 7: /* VBIF */
-                tcg_gen_gvec_bitsel(MO_8, rd_ofs, rm_ofs, rd_ofs, rn_ofs,
+                tcg_gen_gvec_bitsel(MO_UB, rd_ofs, rm_ofs, rd_ofs, rn_ofs,
                                     vec_size, vec_size);
                 break;
             }
@@ -6873,7 +6873,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                 return 1;
             }
             if (insn & (1 << 16)) {
-                size = MO_8;
+                size = MO_UB;
                 element = (insn >> 17) & 7;
             } else if (insn & (1 << 17)) {
                 size = MO_16;
diff --git a/target/i386/translate.c b/target/i386/translate.c
index 03150a8..0e45300 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -349,20 +349,20 @@ static inline TCGMemOp mo_64_32(TCGMemOp ot)
    byte vs word opcodes. */
 static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
 {
-    return b & 1 ? ot : MO_8;
+    return b & 1 ? ot : MO_UB;
 }
 
 /* Select size 8 if lsb of B is clear, else OT capped at 32.
    Used for decoding operand size of port opcodes. */
 static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
 {
-    return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_8;
+    return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_UB;
 }
 
 static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
 {
     switch(ot) {
-    case MO_8:
+    case MO_UB:
         if (!byte_reg_is_xH(s, reg)) {
             tcg_gen_deposit_tl(cpu_regs[reg], cpu_regs[reg], t0, 0, 8);
         } else {
@@ -390,7 +390,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
 static inline
 void gen_op_mov_v_reg(DisasContext *s, TCGMemOp ot, TCGv t0, int reg)
 {
-    if (ot == MO_8 && byte_reg_is_xH(s, reg)) {
+    if (ot == MO_UB && byte_reg_is_xH(s, reg)) {
         tcg_gen_extract_tl(t0, cpu_regs[reg - 4], 8, 8);
     } else {
         tcg_gen_mov_tl(t0, cpu_regs[reg]);
@@ -523,7 +523,7 @@ static inline void gen_op_movl_T0_Dshift(DisasContext *s, TCGMemOp ot)
 static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
 {
     switch (size) {
-    case MO_8:
+    case MO_UB:
         if (sign) {
             tcg_gen_ext8s_tl(dst, src);
         } else {
@@ -580,7 +580,7 @@ void gen_op_jz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
 static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
 {
     switch (ot) {
-    case MO_8:
+    case MO_UB:
         gen_helper_inb(v, cpu_env, n);
         break;
     case MO_16:
@@ -597,7 +597,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
 static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
 {
     switch (ot) {
-    case MO_8:
+    case MO_UB:
         gen_helper_outb(cpu_env, v, n);
         break;
     case MO_16:
@@ -619,7 +619,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
     if (s->pe && (s->cpl > s->iopl || s->vm86)) {
         tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
         switch (ot) {
-        case MO_8:
+        case MO_UB:
             gen_helper_check_iob(cpu_env, s->tmp2_i32);
             break;
         case MO_16:
@@ -1557,7 +1557,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
     tcg_gen_andi_tl(s->T1, s->T1, mask);
 
     switch (ot) {
-    case MO_8:
+    case MO_UB:
         /* Replicate the 8-bit input so that a 32-bit rotate works.  */
         tcg_gen_ext8u_tl(s->T0, s->T0);
         tcg_gen_muli_tl(s->T0, s->T0, 0x01010101);
@@ -1661,7 +1661,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
             tcg_gen_rotli_tl(s->T0, s->T0, op2);
         }
         break;
-    case MO_8:
+    case MO_UB:
         mask = 7;
         goto do_shifts;
     case MO_16:
@@ -1719,7 +1719,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
 
     if (is_right) {
         switch (ot) {
-        case MO_8:
+        case MO_UB:
             gen_helper_rcrb(s->T0, cpu_env, s->T0, s->T1);
             break;
         case MO_16:
@@ -1738,7 +1738,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         }
     } else {
         switch (ot) {
-        case MO_8:
+        case MO_UB:
             gen_helper_rclb(s->T0, cpu_env, s->T0, s->T1);
             break;
         case MO_16:
@@ -2184,7 +2184,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
     uint32_t ret;
 
     switch (ot) {
-    case MO_8:
+    case MO_UB:
         ret = x86_ldub_code(env, s);
         break;
     case MO_16:
@@ -3784,7 +3784,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             goto illegal_op;
         }
         if ((b & 0xff) == 0xf0) {
-            ot = MO_8;
+            ot = MO_UB;
         } else if (s->dflag != MO_64) {
             ot = (s->prefix & PREFIX_DATA ? MO_16 : MO_32);
         } else {
@@ -4760,7 +4760,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 val = insn_get(env, s, ot);
                 break;
             case 0x83:
-                val = (int8_t)insn_get(env, s, MO_8);
+                val = (int8_t)insn_get(env, s, MO_UB);
                 break;
             }
             tcg_gen_movi_tl(s->T1, val);
@@ -4866,8 +4866,8 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
         case 4: /* mul */
             switch(ot) {
-            case MO_8:
-                gen_op_mov_v_reg(s, MO_8, s->T1, R_EAX);
+            case MO_UB:
+                gen_op_mov_v_reg(s, MO_UB, s->T1, R_EAX);
                 tcg_gen_ext8u_tl(s->T0, s->T0);
                 tcg_gen_ext8u_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
@@ -4915,8 +4915,8 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
         case 5: /* imul */
             switch(ot) {
-            case MO_8:
-                gen_op_mov_v_reg(s, MO_8, s->T1, R_EAX);
+            case MO_UB:
+                gen_op_mov_v_reg(s, MO_UB, s->T1, R_EAX);
                 tcg_gen_ext8s_tl(s->T0, s->T0);
                 tcg_gen_ext8s_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
@@ -4969,7 +4969,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
         case 6: /* div */
             switch(ot) {
-            case MO_8:
+            case MO_UB:
                 gen_helper_divb_AL(cpu_env, s->T0);
                 break;
             case MO_16:
@@ -4988,7 +4988,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
         case 7: /* idiv */
             switch(ot) {
-            case MO_8:
+            case MO_UB:
                 gen_helper_idivb_AL(cpu_env, s->T0);
                 break;
             case MO_16:
@@ -5157,7 +5157,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_op_mov_reg_v(s, MO_32, R_EAX, s->T0);
             break;
         case MO_16:
-            gen_op_mov_v_reg(s, MO_8, s->T0, R_EAX);
+            gen_op_mov_v_reg(s, MO_UB, s->T0, R_EAX);
             tcg_gen_ext8s_tl(s->T0, s->T0);
             gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
             break;
@@ -5205,7 +5205,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             val = insn_get(env, s, ot);
             tcg_gen_movi_tl(s->T1, val);
         } else if (b == 0x6b) {
-            val = (int8_t)insn_get(env, s, MO_8);
+            val = (int8_t)insn_get(env, s, MO_UB);
             tcg_gen_movi_tl(s->T1, val);
         } else {
             gen_op_mov_v_reg(s, ot, s->T1, reg);
@@ -5419,7 +5419,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         if (b == 0x68)
             val = insn_get(env, s, ot);
         else
-            val = (int8_t)insn_get(env, s, MO_8);
+            val = (int8_t)insn_get(env, s, MO_UB);
         tcg_gen_movi_tl(s->T0, val);
         gen_push_v(s, s->T0);
         break;
@@ -5573,7 +5573,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         /* d_ot is the size of destination */
         d_ot = dflag;
         /* ot is the size of source */
-        ot = (b & 1) + MO_8;
+        ot = (b & 1) + MO_UB;
         /* s_ot is the sign+size of source */
         s_ot = b & 8 ? MO_SIGN | ot : ot;
 
@@ -5661,13 +5661,13 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         tcg_gen_add_tl(s->A0, s->A0, s->T0);
         gen_extu(s->aflag, s->A0);
         gen_add_A0_ds_seg(s);
-        gen_op_ld_v(s, MO_8, s->T0, s->A0);
-        gen_op_mov_reg_v(s, MO_8, R_EAX, s->T0);
+        gen_op_ld_v(s, MO_UB, s->T0, s->A0);
+        gen_op_mov_reg_v(s, MO_UB, R_EAX, s->T0);
         break;
     case 0xb0 ... 0xb7: /* mov R, Ib */
-        val = insn_get(env, s, MO_8);
+        val = insn_get(env, s, MO_UB);
         tcg_gen_movi_tl(s->T0, val);
-        gen_op_mov_reg_v(s, MO_8, (b & 7) | REX_B(s), s->T0);
+        gen_op_mov_reg_v(s, MO_UB, (b & 7) | REX_B(s), s->T0);
         break;
     case 0xb8 ... 0xbf: /* mov R, Iv */
 #ifdef TARGET_X86_64
@@ -6637,7 +6637,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         }
         goto do_ljmp;
     case 0xeb: /* jmp Jb */
-        tval = (int8_t)insn_get(env, s, MO_8);
+        tval = (int8_t)insn_get(env, s, MO_UB);
         tval += s->pc - s->cs_base;
         if (dflag == MO_16) {
             tval &= 0xffff;
@@ -6645,7 +6645,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         gen_jmp(s, tval);
         break;
     case 0x70 ... 0x7f: /* jcc Jb */
-        tval = (int8_t)insn_get(env, s, MO_8);
+        tval = (int8_t)insn_get(env, s, MO_UB);
         goto do_jcc;
     case 0x180 ... 0x18f: /* jcc Jv */
         if (dflag != MO_16) {
@@ -6666,7 +6666,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x190 ... 0x19f: /* setcc Gv */
         modrm = x86_ldub_code(env, s);
         gen_setcc1(s, b, s->T0);
-        gen_ldst_modrm(env, s, modrm, MO_8, OR_TMP0, 1);
+        gen_ldst_modrm(env, s, modrm, MO_UB, OR_TMP0, 1);
         break;
     case 0x140 ... 0x14f: /* cmov Gv, Ev */
         if (!(s->cpuid_features & CPUID_CMOV)) {
@@ -6751,7 +6751,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x9e: /* sahf */
         if (CODE64(s) && !(s->cpuid_ext3_features & CPUID_EXT3_LAHF_LM))
             goto illegal_op;
-        gen_op_mov_v_reg(s, MO_8, s->T0, R_AH);
+        gen_op_mov_v_reg(s, MO_UB, s->T0, R_AH);
         gen_compute_eflags(s);
         tcg_gen_andi_tl(cpu_cc_src, cpu_cc_src, CC_O);
         tcg_gen_andi_tl(s->T0, s->T0, CC_S | CC_Z | CC_A | CC_P | CC_C);
@@ -6763,7 +6763,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         gen_compute_eflags(s);
         /* Note: gen_compute_eflags() only gives the condition codes */
         tcg_gen_ori_tl(s->T0, cpu_cc_src, 0x02);
-        gen_op_mov_reg_v(s, MO_8, R_AH, s->T0);
+        gen_op_mov_reg_v(s, MO_UB, R_AH, s->T0);
         break;
     case 0xf5: /* cmc */
         gen_compute_eflags(s);
@@ -7137,7 +7137,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             goto illegal_op;
         gen_compute_eflags_c(s, s->T0);
         tcg_gen_neg_tl(s->T0, s->T0);
-        gen_op_mov_reg_v(s, MO_8, R_EAX, s->T0);
+        gen_op_mov_reg_v(s, MO_UB, R_EAX, s->T0);
         break;
     case 0xe0: /* loopnz */
     case 0xe1: /* loopz */
@@ -7146,7 +7146,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         {
             TCGLabel *l1, *l2, *l3;
 
-            tval = (int8_t)insn_get(env, s, MO_8);
+            tval = (int8_t)insn_get(env, s, MO_UB);
             next_eip = s->pc - s->cs_base;
             tval += next_eip;
             if (dflag == MO_16) {
diff --git a/target/mips/translate.c b/target/mips/translate.c
index 3575eff..20a9777 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -3684,7 +3684,7 @@ static void gen_st(DisasContext *ctx, uint32_t opc, int rt,
         mem_idx = MIPS_HFLAG_UM;
         /* fall through */
     case OPC_SB:
-        tcg_gen_qemu_st_tl(t1, t0, mem_idx, MO_8);
+        tcg_gen_qemu_st_tl(t1, t0, mem_idx, MO_UB);
         break;
     case OPC_SWLE:
         mem_idx = MIPS_HFLAG_UM;
@@ -20193,7 +20193,7 @@ static void gen_p_lsx(DisasContext *ctx, int rd, int rs, int rt)
             check_nms(ctx);
             gen_load_gpr(t1, rd);
             tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx,
-                               MO_8);
+                               MO_UB);
             break;
         case NM_SHX:
         /*case NM_SHXS:*/
diff --git a/target/ppc/translate/vmx-impl.inc.c b/target/ppc/translate/vmx-impl.inc.c
index 663275b..4130dd1 100644
--- a/target/ppc/translate/vmx-impl.inc.c
+++ b/target/ppc/translate/vmx-impl.inc.c
@@ -403,7 +403,7 @@ static void glue(gen_, name)(DisasContext *ctx)                         \
     tcg_temp_free_ptr(rb);                                              \
 }
 
-GEN_VXFORM_V(vaddubm, MO_8, tcg_gen_gvec_add, 0, 0);
+GEN_VXFORM_V(vaddubm, MO_UB, tcg_gen_gvec_add, 0, 0);
 GEN_VXFORM_DUAL_EXT(vaddubm, PPC_ALTIVEC, PPC_NONE, 0,       \
                     vmul10cuq, PPC_NONE, PPC2_ISA300, 0x0000F800)
 GEN_VXFORM_V(vadduhm, MO_16, tcg_gen_gvec_add, 0, 1);
@@ -411,23 +411,23 @@ GEN_VXFORM_DUAL(vadduhm, PPC_ALTIVEC, PPC_NONE, \
                 vmul10ecuq, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vadduwm, MO_32, tcg_gen_gvec_add, 0, 2);
 GEN_VXFORM_V(vaddudm, MO_64, tcg_gen_gvec_add, 0, 3);
-GEN_VXFORM_V(vsububm, MO_8, tcg_gen_gvec_sub, 0, 16);
+GEN_VXFORM_V(vsububm, MO_UB, tcg_gen_gvec_sub, 0, 16);
 GEN_VXFORM_V(vsubuhm, MO_16, tcg_gen_gvec_sub, 0, 17);
 GEN_VXFORM_V(vsubuwm, MO_32, tcg_gen_gvec_sub, 0, 18);
 GEN_VXFORM_V(vsubudm, MO_64, tcg_gen_gvec_sub, 0, 19);
-GEN_VXFORM_V(vmaxub, MO_8, tcg_gen_gvec_umax, 1, 0);
+GEN_VXFORM_V(vmaxub, MO_UB, tcg_gen_gvec_umax, 1, 0);
 GEN_VXFORM_V(vmaxuh, MO_16, tcg_gen_gvec_umax, 1, 1);
 GEN_VXFORM_V(vmaxuw, MO_32, tcg_gen_gvec_umax, 1, 2);
 GEN_VXFORM_V(vmaxud, MO_64, tcg_gen_gvec_umax, 1, 3);
-GEN_VXFORM_V(vmaxsb, MO_8, tcg_gen_gvec_smax, 1, 4);
+GEN_VXFORM_V(vmaxsb, MO_UB, tcg_gen_gvec_smax, 1, 4);
 GEN_VXFORM_V(vmaxsh, MO_16, tcg_gen_gvec_smax, 1, 5);
 GEN_VXFORM_V(vmaxsw, MO_32, tcg_gen_gvec_smax, 1, 6);
 GEN_VXFORM_V(vmaxsd, MO_64, tcg_gen_gvec_smax, 1, 7);
-GEN_VXFORM_V(vminub, MO_8, tcg_gen_gvec_umin, 1, 8);
+GEN_VXFORM_V(vminub, MO_UB, tcg_gen_gvec_umin, 1, 8);
 GEN_VXFORM_V(vminuh, MO_16, tcg_gen_gvec_umin, 1, 9);
 GEN_VXFORM_V(vminuw, MO_32, tcg_gen_gvec_umin, 1, 10);
 GEN_VXFORM_V(vminud, MO_64, tcg_gen_gvec_umin, 1, 11);
-GEN_VXFORM_V(vminsb, MO_8, tcg_gen_gvec_smin, 1, 12);
+GEN_VXFORM_V(vminsb, MO_UB, tcg_gen_gvec_smin, 1, 12);
 GEN_VXFORM_V(vminsh, MO_16, tcg_gen_gvec_smin, 1, 13);
 GEN_VXFORM_V(vminsw, MO_32, tcg_gen_gvec_smin, 1, 14);
 GEN_VXFORM_V(vminsd, MO_64, tcg_gen_gvec_smin, 1, 15);
@@ -530,18 +530,18 @@ GEN_VXFORM(vmuleuw, 4, 10);
 GEN_VXFORM(vmulesb, 4, 12);
 GEN_VXFORM(vmulesh, 4, 13);
 GEN_VXFORM(vmulesw, 4, 14);
-GEN_VXFORM_V(vslb, MO_8, tcg_gen_gvec_shlv, 2, 4);
+GEN_VXFORM_V(vslb, MO_UB, tcg_gen_gvec_shlv, 2, 4);
 GEN_VXFORM_V(vslh, MO_16, tcg_gen_gvec_shlv, 2, 5);
 GEN_VXFORM_V(vslw, MO_32, tcg_gen_gvec_shlv, 2, 6);
 GEN_VXFORM(vrlwnm, 2, 6);
 GEN_VXFORM_DUAL(vslw, PPC_ALTIVEC, PPC_NONE, \
                 vrlwnm, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vsld, MO_64, tcg_gen_gvec_shlv, 2, 23);
-GEN_VXFORM_V(vsrb, MO_8, tcg_gen_gvec_shrv, 2, 8);
+GEN_VXFORM_V(vsrb, MO_UB, tcg_gen_gvec_shrv, 2, 8);
 GEN_VXFORM_V(vsrh, MO_16, tcg_gen_gvec_shrv, 2, 9);
 GEN_VXFORM_V(vsrw, MO_32, tcg_gen_gvec_shrv, 2, 10);
 GEN_VXFORM_V(vsrd, MO_64, tcg_gen_gvec_shrv, 2, 27);
-GEN_VXFORM_V(vsrab, MO_8, tcg_gen_gvec_sarv, 2, 12);
+GEN_VXFORM_V(vsrab, MO_UB, tcg_gen_gvec_sarv, 2, 12);
 GEN_VXFORM_V(vsrah, MO_16, tcg_gen_gvec_sarv, 2, 13);
 GEN_VXFORM_V(vsraw, MO_32, tcg_gen_gvec_sarv, 2, 14);
 GEN_VXFORM_V(vsrad, MO_64, tcg_gen_gvec_sarv, 2, 15);
@@ -589,20 +589,20 @@ static void glue(gen_, NAME)(DisasContext *ctx)                         \
                    16, 16, &g);                                         \
 }
 
-GEN_VXFORM_SAT(vaddubs, MO_8, add, usadd, 0, 8);
+GEN_VXFORM_SAT(vaddubs, MO_UB, add, usadd, 0, 8);
 GEN_VXFORM_DUAL_EXT(vaddubs, PPC_ALTIVEC, PPC_NONE, 0,       \
                     vmul10uq, PPC_NONE, PPC2_ISA300, 0x0000F800)
 GEN_VXFORM_SAT(vadduhs, MO_16, add, usadd, 0, 9);
 GEN_VXFORM_DUAL(vadduhs, PPC_ALTIVEC, PPC_NONE, \
                 vmul10euq, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_SAT(vadduws, MO_32, add, usadd, 0, 10);
-GEN_VXFORM_SAT(vaddsbs, MO_8, add, ssadd, 0, 12);
+GEN_VXFORM_SAT(vaddsbs, MO_UB, add, ssadd, 0, 12);
 GEN_VXFORM_SAT(vaddshs, MO_16, add, ssadd, 0, 13);
 GEN_VXFORM_SAT(vaddsws, MO_32, add, ssadd, 0, 14);
-GEN_VXFORM_SAT(vsububs, MO_8, sub, ussub, 0, 24);
+GEN_VXFORM_SAT(vsububs, MO_UB, sub, ussub, 0, 24);
 GEN_VXFORM_SAT(vsubuhs, MO_16, sub, ussub, 0, 25);
 GEN_VXFORM_SAT(vsubuws, MO_32, sub, ussub, 0, 26);
-GEN_VXFORM_SAT(vsubsbs, MO_8, sub, sssub, 0, 28);
+GEN_VXFORM_SAT(vsubsbs, MO_UB, sub, sssub, 0, 28);
 GEN_VXFORM_SAT(vsubshs, MO_16, sub, sssub, 0, 29);
 GEN_VXFORM_SAT(vsubsws, MO_32, sub, sssub, 0, 30);
 GEN_VXFORM(vadduqm, 0, 4);
@@ -912,7 +912,7 @@ static void glue(gen_, name)(DisasContext *ctx)                         \
     tcg_temp_free_ptr(rd);                                              \
 }
 
-GEN_VXFORM_VSPLT(vspltb, MO_8, 6, 8);
+GEN_VXFORM_VSPLT(vspltb, MO_UB, 6, 8);
 GEN_VXFORM_VSPLT(vsplth, MO_16, 6, 9);
 GEN_VXFORM_VSPLT(vspltw, MO_32, 6, 10);
 GEN_VXFORM_UIMM_SPLAT(vextractub, 6, 8, 15);
diff --git a/target/s390x/translate.c b/target/s390x/translate.c
index ac0d8b6..415747f 100644
--- a/target/s390x/translate.c
+++ b/target/s390x/translate.c
@@ -154,7 +154,7 @@ static inline int vec_full_reg_offset(uint8_t reg)
 
 static inline int vec_reg_offset(uint8_t reg, uint8_t enr, TCGMemOp es)
 {
-    /* Convert element size (es) - e.g. MO_8 - to bytes */
+    /* Convert element size (es) - e.g. MO_UB - to bytes */
     const uint8_t bytes = 1 << es;
     int offs = enr * bytes;
 
diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index 41d5cf8..bb424c8 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -30,7 +30,7 @@
  * Sizes:
  * On s390x, the operand size (oprsz) and the maximum size (maxsz) are
  * always 16 (128 bit). What gvec code calls "vece", s390x calls "es",
- * a.k.a. "element size". These values nicely map to MO_8 ... MO_64. Only
+ * a.k.a. "element size". These values nicely map to MO_UB ... MO_64. Only
  * 128 bit element size has to be treated in a special way (MO_64 + 1).
  * We will use ES_* instead of MO_* for this reason in this file.
  *
@@ -46,7 +46,7 @@
 #define NUM_VEC_ELEMENTS(es) (16 / NUM_VEC_ELEMENT_BYTES(es))
 #define NUM_VEC_ELEMENT_BITS(es) (NUM_VEC_ELEMENT_BYTES(es) * BITS_PER_BYTE)
 
-#define ES_8    MO_8
+#define ES_8    MO_UB
 #define ES_16   MO_16
 #define ES_32   MO_32
 #define ES_64   MO_64
diff --git a/target/s390x/vec.h b/target/s390x/vec.h
index a6e3618..b813054 100644
--- a/target/s390x/vec.h
+++ b/target/s390x/vec.h
@@ -76,7 +76,7 @@ static inline uint64_t s390_vec_read_element(const S390Vector *v, uint8_t enr,
                                              uint8_t es)
 {
     switch (es) {
-    case MO_8:
+    case MO_UB:
         return s390_vec_read_element8(v, enr);
     case MO_16:
         return s390_vec_read_element16(v, enr);
@@ -121,7 +121,7 @@ static inline void s390_vec_write_element(S390Vector *v, uint8_t enr,
                                           uint8_t es, uint64_t data)
 {
     switch (es) {
-    case MO_8:
+    case MO_UB:
         s390_vec_write_element8(v, enr, data);
         break;
     case MO_16:
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index 0713448..e4e0845 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -429,20 +429,20 @@ typedef enum {
     /* Load/store register.  Described here as 3.3.12, but the helper
        that emits them can transform to 3.3.10 or 3.3.13.  */
-    I3312_STRB      = 0x38000000 | LDST_ST << 22 | MO_8 << 30,
+    I3312_STRB      = 0x38000000 | LDST_ST << 22 | MO_UB << 30,
     I3312_STRH      = 0x38000000 | LDST_ST << 22 | MO_16 << 30,
     I3312_STRW      = 0x38000000 | LDST_ST << 22 | MO_32 << 30,
     I3312_STRX      = 0x38000000 | LDST_ST << 22 | MO_64 << 30,
 
-    I3312_LDRB      = 0x38000000 | LDST_LD << 22 | MO_8 << 30,
+    I3312_LDRB      = 0x38000000 | LDST_LD << 22 | MO_UB << 30,
     I3312_LDRH      = 0x38000000 | LDST_LD << 22 | MO_16 << 30,
     I3312_LDRW      = 0x38000000 | LDST_LD << 22 | MO_32 << 30,
     I3312_LDRX      = 0x38000000 | LDST_LD << 22 | MO_64 << 30,
 
-    I3312_LDRSBW    = 0x38000000 | LDST_LD_S_W << 22 | MO_8 << 30,
+    I3312_LDRSBW    = 0x38000000 | LDST_LD_S_W << 22 | MO_UB << 30,
     I3312_LDRSHW    = 0x38000000 | LDST_LD_S_W << 22 | MO_16 << 30,
-    I3312_LDRSBX    = 0x38000000 | LDST_LD_S_X << 22 | MO_8 << 30,
+    I3312_LDRSBX    = 0x38000000 | LDST_LD_S_X << 22 | MO_UB << 30,
     I3312_LDRSHX    = 0x38000000 | LDST_LD_S_X << 22 | MO_16 << 30,
     I3312_LDRSWX    = 0x38000000 | LDST_LD_S_X << 22 | MO_32 << 30,
 
@@ -862,7 +862,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType type,
     int cmode, imm8, i;
 
     /* Test all bytes equal first.  */
-    if (v64 == dup_const(MO_8, v64)) {
+    if (v64 == dup_const(MO_UB, v64)) {
         imm8 = (uint8_t)v64;
         tcg_out_insn(s, 3606, MOVI, q, rd, 0, 0xe, imm8);
         return;
@@ -1772,7 +1772,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
     const TCGMemOp bswap = memop & MO_BSWAP;
 
     switch (memop & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         tcg_out_ldst_r(s, I3312_STRB, data_r, addr_r, otype, off_r);
         break;
     case MO_16:
@@ -2186,7 +2186,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
 
     case INDEX_op_ext8s_i64:
     case INDEX_op_ext8s_i32:
-        tcg_out_sxt(s, ext, MO_8, a0, a1);
+        tcg_out_sxt(s, ext, MO_UB, a0, a1);
         break;
     case INDEX_op_ext16s_i64:
     case INDEX_op_ext16s_i32:
@@ -2198,7 +2198,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
     case INDEX_op_ext8u_i64:
     case INDEX_op_ext8u_i32:
-        tcg_out_uxt(s, MO_8, a0, a1);
+        tcg_out_uxt(s, MO_UB, a0, a1);
         break;
     case INDEX_op_ext16u_i64:
     case INDEX_op_ext16u_i32:
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index ece88dc..542ffa8 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1429,7 +1429,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     datalo = lb->datalo_reg;
     datahi = lb->datahi_reg;
     switch (opc & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         argreg = tcg_out_arg_reg8(s, argreg, datalo);
         break;
     case MO_16:
@@ -1621,7 +1621,7 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
     TCGMemOp bswap = opc & MO_BSWAP;
 
     switch (opc & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         tcg_out_st8_r(s, cond, datalo, addrlo, addend);
         break;
     case MO_16:
@@ -1666,7 +1666,7 @@ static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
     TCGMemOp bswap = opc & MO_BSWAP;
 
     switch (opc & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         tcg_out_st8_12(s, COND_AL, datalo, addrlo, 0);
         break;
     case MO_16:
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 6ddeebf..0d68ba4 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -888,7 +888,7 @@ static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece,
         tcg_out_vex_modrm(s, avx2_dup_insn[vece] + vex_l, r, 0, a);
     } else {
         switch (vece) {
-        case MO_8:
+        case MO_UB:
             /* ??? With zero in a register, use PSHUFB. */
             tcg_out_vex_modrm(s, OPC_PUNPCKLBW, r, a, a);
             a = r;
@@ -932,7 +932,7 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece,
             tcg_out8(s, 0); /* imm8 */
             tcg_out_dup_vec(s, type, vece, r, r);
             break;
-        case MO_8:
+        case MO_UB:
             tcg_out_vex_modrm_offset(s, OPC_VPINSRB, r, r, base, offset);
             tcg_out8(s, 0); /* imm8 */
             tcg_out_dup_vec(s, type, vece, r, r);
@@ -2154,7 +2154,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
     }
 
     switch (memop & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         /* In 32-bit mode, 8-bit stores can only happen from [abcd]x.
            Use the scratch register if necessary.  */
         if (TCG_TARGET_REG_BITS == 32 && datalo >= 4) {
@@ -2901,7 +2901,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         tcg_debug_assert(vece != MO_64);
         sub = 4;
     gen_shift:
-        tcg_debug_assert(vece != MO_8);
+        tcg_debug_assert(vece != MO_UB);
         insn = shift_imm_insn[vece];
         if (type == TCG_TYPE_V256) {
             insn |= P_VEXL;
@@ -3273,12 +3273,12 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
 
     case INDEX_op_shli_vec:
     case INDEX_op_shri_vec:
-        /* We must expand the operation for MO_8.  */
-        return vece == MO_8 ? -1 : 1;
+        /* We must expand the operation for MO_UB.  */
+        return vece == MO_UB ? -1 : 1;
 
     case INDEX_op_sari_vec:
-        /* We must expand the operation for MO_8.  */
-        if (vece == MO_8) {
+        /* We must expand the operation for MO_UB.
*/ + if (vece =3D=3D MO_UB) { return -1; } /* We can emulate this for MO_64, but it does not pay off @@ -3301,8 +3301,8 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, = unsigned vece) return have_avx2 && vece =3D=3D MO_32; case INDEX_op_mul_vec: - if (vece =3D=3D MO_8) { - /* We can expand the operation for MO_8. */ + if (vece =3D=3D MO_UB) { + /* We can expand the operation for MO_UB. */ return -1; } if (vece =3D=3D MO_64) { @@ -3332,7 +3332,7 @@ static void expand_vec_shi(TCGType type, unsigned vec= e, bool shr, { TCGv_vec t1, t2; - tcg_debug_assert(vece =3D=3D MO_8); + tcg_debug_assert(vece =3D=3D MO_UB); t1 =3D tcg_temp_new_vec(type); t2 =3D tcg_temp_new_vec(type); @@ -3346,9 +3346,9 @@ static void expand_vec_shi(TCGType type, unsigned vec= e, bool shr, (3) Step 2 leaves high half zero such that PACKUSWB (pack with unsigned saturation) does not modify the quantity. */ - vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_8, + vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_UB, tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(v1)); - vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_8, + vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB, tcgv_vec_arg(t2), tcgv_vec_arg(v1), tcgv_vec_arg(v1)); if (shr) { @@ -3361,7 +3361,7 @@ static void expand_vec_shi(TCGType type, unsigned vec= e, bool shr, tcg_gen_shri_vec(MO_16, t2, t2, 8); } - vec_gen_3(INDEX_op_x86_packus_vec, type, MO_8, + vec_gen_3(INDEX_op_x86_packus_vec, type, MO_UB, tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t2)); tcg_temp_free_vec(t1); tcg_temp_free_vec(t2); @@ -3373,17 +3373,17 @@ static void expand_vec_sari(TCGType type, unsigned = vece, TCGv_vec t1, t2; switch (vece) { - case MO_8: + case MO_UB: /* Unpack to W, shift, and repack, as in expand_vec_shi. 
*/ t1 =3D tcg_temp_new_vec(type); t2 =3D tcg_temp_new_vec(type); - vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_8, + vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_UB, tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(v1)); - vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_8, + vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB, tcgv_vec_arg(t2), tcgv_vec_arg(v1), tcgv_vec_arg(v1)); tcg_gen_sari_vec(MO_16, t1, t1, imm + 8); tcg_gen_sari_vec(MO_16, t2, t2, imm + 8); - vec_gen_3(INDEX_op_x86_packss_vec, type, MO_8, + vec_gen_3(INDEX_op_x86_packss_vec, type, MO_UB, tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t2)); tcg_temp_free_vec(t1); tcg_temp_free_vec(t2); @@ -3425,7 +3425,7 @@ static void expand_vec_mul(TCGType type, unsigned vec= e, { TCGv_vec t1, t2, t3, t4; - tcg_debug_assert(vece =3D=3D MO_8); + tcg_debug_assert(vece =3D=3D MO_UB); /* * Unpack v1 bytes to words, 0 | x. @@ -3442,13 +3442,13 @@ static void expand_vec_mul(TCGType type, unsigned v= ece, t1 =3D tcg_temp_new_vec(TCG_TYPE_V128); t2 =3D tcg_temp_new_vec(TCG_TYPE_V128); tcg_gen_dup16i_vec(t2, 0); - vec_gen_3(INDEX_op_x86_punpckl_vec, TCG_TYPE_V128, MO_8, + vec_gen_3(INDEX_op_x86_punpckl_vec, TCG_TYPE_V128, MO_UB, tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(t2)); - vec_gen_3(INDEX_op_x86_punpckl_vec, TCG_TYPE_V128, MO_8, + vec_gen_3(INDEX_op_x86_punpckl_vec, TCG_TYPE_V128, MO_UB, tcgv_vec_arg(t2), tcgv_vec_arg(t2), tcgv_vec_arg(v2)); tcg_gen_mul_vec(MO_16, t1, t1, t2); tcg_gen_shri_vec(MO_16, t1, t1, 8); - vec_gen_3(INDEX_op_x86_packus_vec, TCG_TYPE_V128, MO_8, + vec_gen_3(INDEX_op_x86_packus_vec, TCG_TYPE_V128, MO_UB, tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t1)); tcg_temp_free_vec(t1); tcg_temp_free_vec(t2); @@ -3461,19 +3461,19 @@ static void expand_vec_mul(TCGType type, unsigned v= ece, t3 =3D tcg_temp_new_vec(type); t4 =3D tcg_temp_new_vec(type); tcg_gen_dup16i_vec(t4, 0); - vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_8, + vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_UB, tcgv_vec_arg(t1), 
tcgv_vec_arg(v1), tcgv_vec_arg(t4)); - vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_8, + vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_UB, tcgv_vec_arg(t2), tcgv_vec_arg(t4), tcgv_vec_arg(v2)); - vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_8, + vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB, tcgv_vec_arg(t3), tcgv_vec_arg(v1), tcgv_vec_arg(t4)); - vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_8, + vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB, tcgv_vec_arg(t4), tcgv_vec_arg(t4), tcgv_vec_arg(v2)); tcg_gen_mul_vec(MO_16, t1, t1, t2); tcg_gen_mul_vec(MO_16, t3, t3, t4); tcg_gen_shri_vec(MO_16, t1, t1, 8); tcg_gen_shri_vec(MO_16, t3, t3, 8); - vec_gen_3(INDEX_op_x86_packus_vec, type, MO_8, + vec_gen_3(INDEX_op_x86_packus_vec, type, MO_UB, tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t3)); tcg_temp_free_vec(t1); tcg_temp_free_vec(t2); diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c index 41bff32..c6d13ea 100644 --- a/tcg/mips/tcg-target.inc.c +++ b/tcg/mips/tcg-target.inc.c @@ -1380,7 +1380,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, = TCGLabelQemuLdst *l) i =3D tcg_out_call_iarg_reg(s, i, l->addrlo_reg); } switch (s_bits) { - case MO_8: + case MO_UB: i =3D tcg_out_call_iarg_reg8(s, i, l->datalo_reg); break; case MO_16: @@ -1566,7 +1566,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCG= Reg lo, TCGReg hi, } switch (opc & (MO_SIZE | MO_BSWAP)) { - case MO_8: + case MO_UB: tcg_out_opc_imm(s, OPC_SB, lo, base, 0); break; diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c index 3e76bf5..9c60c0f 100644 --- a/tcg/riscv/tcg-target.inc.c +++ b/tcg/riscv/tcg-target.inc.c @@ -1101,7 +1101,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, = TCGLabelQemuLdst *l) tcg_out_mov(s, TCG_TYPE_PTR, a1, l->addrlo_reg); tcg_out_mov(s, TCG_TYPE_PTR, a2, l->datalo_reg); switch (s_bits) { - case MO_8: + case MO_UB: tcg_out_ext8u(s, a2, a2); break; case MO_16: @@ -1216,7 +1216,7 @@ static void 
tcg_out_qemu_st_direct(TCGContext *s, TCG= Reg lo, TCGReg hi, g_assert(!bswap); switch (opc & (MO_SSIZE)) { - case MO_8: + case MO_UB: tcg_out_opc_store(s, OPC_SB, base, lo, 0); break; case MO_16: diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c index 10b1cea..479ee2e 100644 --- a/tcg/sparc/tcg-target.inc.c +++ b/tcg/sparc/tcg-target.inc.c @@ -882,7 +882,7 @@ static void emit_extend(TCGContext *s, TCGReg r, int op) * required by the MO_* value op; do nothing for 64 bit. */ switch (op & MO_SIZE) { - case MO_8: + case MO_UB: tcg_out_arithi(s, r, r, 0xff, ARITH_AND); break; case MO_16: diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c index 17679b6..9658c36 100644 --- a/tcg/tcg-op-gvec.c +++ b/tcg/tcg-op-gvec.c @@ -306,7 +306,7 @@ static void expand_clr(uint32_t dofs, uint32_t maxsz); uint64_t (dup_const)(unsigned vece, uint64_t c) { switch (vece) { - case MO_8: + case MO_UB: return 0x0101010101010101ull * (uint8_t)c; case MO_16: return 0x0001000100010001ull * (uint16_t)c; @@ -323,7 +323,7 @@ uint64_t (dup_const)(unsigned vece, uint64_t c) static void gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in) { switch (vece) { - case MO_8: + case MO_UB: tcg_gen_ext8u_i32(out, in); tcg_gen_muli_i32(out, out, 0x01010101); break; @@ -341,7 +341,7 @@ static void gen_dup_i32(unsigned vece, TCGv_i32 out, TC= Gv_i32 in) static void gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in) { switch (vece) { - case MO_8: + case MO_UB: tcg_gen_ext8u_i64(out, in); tcg_gen_muli_i64(out, out, 0x0101010101010101ull); break; @@ -556,7 +556,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32= _t oprsz, t_32 =3D tcg_temp_new_i32(); if (in_64) { tcg_gen_extrl_i64_i32(t_32, in_64); - } else if (vece =3D=3D MO_8) { + } else if (vece =3D=3D MO_UB) { tcg_gen_movi_i32(t_32, in_c & 0xff); } else if (vece =3D=3D MO_16) { tcg_gen_movi_i32(t_32, in_c & 0xffff); @@ -581,7 +581,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32= _t oprsz, /* Likewise, but with zero. 
*/ static void expand_clr(uint32_t dofs, uint32_t maxsz) { - do_dup(MO_8, dofs, maxsz, maxsz, NULL, NULL, 0); + do_dup(MO_UB, dofs, maxsz, maxsz, NULL, NULL, 0); } /* Expand OPSZ bytes worth of two-operand operations using i32 elements. = */ @@ -1456,7 +1456,7 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dof= s, uint32_t aofs, } else if (vece <=3D MO_32) { TCGv_i32 in =3D tcg_temp_new_i32(); switch (vece) { - case MO_8: + case MO_UB: tcg_gen_ld8u_i32(in, cpu_env, aofs); break; case MO_16: @@ -1533,7 +1533,7 @@ void tcg_gen_gvec_dup8i(uint32_t dofs, uint32_t oprsz, uint32_t maxsz, uint8_t x) { check_size_align(oprsz, maxsz, dofs); - do_dup(MO_8, dofs, oprsz, maxsz, NULL, NULL, x); + do_dup(MO_UB, dofs, oprsz, maxsz, NULL, NULL, x); } void tcg_gen_gvec_not(unsigned vece, uint32_t dofs, uint32_t aofs, @@ -1572,7 +1572,7 @@ static void gen_addv_mask(TCGv_i64 d, TCGv_i64 a, TCG= v_i64 b, TCGv_i64 m) void tcg_gen_vec_add8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b) { - TCGv_i64 m =3D tcg_const_i64(dup_const(MO_8, 0x80)); + TCGv_i64 m =3D tcg_const_i64(dup_const(MO_UB, 0x80)); gen_addv_mask(d, a, b, m); tcg_temp_free_i64(m); } @@ -1608,7 +1608,7 @@ void tcg_gen_gvec_add(unsigned vece, uint32_t dofs, u= int32_t aofs, .fniv =3D tcg_gen_add_vec, .fno =3D gen_helper_gvec_add8, .opt_opc =3D vecop_list_add, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fni8 =3D tcg_gen_vec_add16_i64, .fniv =3D tcg_gen_add_vec, .fno =3D gen_helper_gvec_add16, @@ -1639,7 +1639,7 @@ void tcg_gen_gvec_adds(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_add_vec, .fno =3D gen_helper_gvec_adds8, .opt_opc =3D vecop_list_add, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fni8 =3D tcg_gen_vec_add16_i64, .fniv =3D tcg_gen_add_vec, .fno =3D gen_helper_gvec_adds16, @@ -1680,7 +1680,7 @@ void tcg_gen_gvec_subs(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_sub_vec, .fno =3D gen_helper_gvec_subs8, .opt_opc =3D vecop_list_sub, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fni8 
=3D tcg_gen_vec_sub16_i64, .fniv =3D tcg_gen_sub_vec, .fno =3D gen_helper_gvec_subs16, @@ -1725,7 +1725,7 @@ static void gen_subv_mask(TCGv_i64 d, TCGv_i64 a, TCG= v_i64 b, TCGv_i64 m) void tcg_gen_vec_sub8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b) { - TCGv_i64 m =3D tcg_const_i64(dup_const(MO_8, 0x80)); + TCGv_i64 m =3D tcg_const_i64(dup_const(MO_UB, 0x80)); gen_subv_mask(d, a, b, m); tcg_temp_free_i64(m); } @@ -1759,7 +1759,7 @@ void tcg_gen_gvec_sub(unsigned vece, uint32_t dofs, u= int32_t aofs, .fniv =3D tcg_gen_sub_vec, .fno =3D gen_helper_gvec_sub8, .opt_opc =3D vecop_list_sub, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fni8 =3D tcg_gen_vec_sub16_i64, .fniv =3D tcg_gen_sub_vec, .fno =3D gen_helper_gvec_sub16, @@ -1791,7 +1791,7 @@ void tcg_gen_gvec_mul(unsigned vece, uint32_t dofs, u= int32_t aofs, { .fniv =3D tcg_gen_mul_vec, .fno =3D gen_helper_gvec_mul8, .opt_opc =3D vecop_list_mul, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fniv =3D tcg_gen_mul_vec, .fno =3D gen_helper_gvec_mul16, .opt_opc =3D vecop_list_mul, @@ -1820,7 +1820,7 @@ void tcg_gen_gvec_muls(unsigned vece, uint32_t dofs, = uint32_t aofs, { .fniv =3D tcg_gen_mul_vec, .fno =3D gen_helper_gvec_muls8, .opt_opc =3D vecop_list_mul, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fniv =3D tcg_gen_mul_vec, .fno =3D gen_helper_gvec_muls16, .opt_opc =3D vecop_list_mul, @@ -1858,7 +1858,7 @@ void tcg_gen_gvec_ssadd(unsigned vece, uint32_t dofs,= uint32_t aofs, { .fniv =3D tcg_gen_ssadd_vec, .fno =3D gen_helper_gvec_ssadd8, .opt_opc =3D vecop_list, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fniv =3D tcg_gen_ssadd_vec, .fno =3D gen_helper_gvec_ssadd16, .opt_opc =3D vecop_list, @@ -1884,7 +1884,7 @@ void tcg_gen_gvec_sssub(unsigned vece, uint32_t dofs,= uint32_t aofs, { .fniv =3D tcg_gen_sssub_vec, .fno =3D gen_helper_gvec_sssub8, .opt_opc =3D vecop_list, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fniv =3D tcg_gen_sssub_vec, .fno =3D gen_helper_gvec_sssub16, .opt_opc =3D vecop_list, @@ -1926,7 +1926,7 @@ void 
tcg_gen_gvec_usadd(unsigned vece, uint32_t dofs,= uint32_t aofs, { .fniv =3D tcg_gen_usadd_vec, .fno =3D gen_helper_gvec_usadd8, .opt_opc =3D vecop_list, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fniv =3D tcg_gen_usadd_vec, .fno =3D gen_helper_gvec_usadd16, .opt_opc =3D vecop_list, @@ -1970,7 +1970,7 @@ void tcg_gen_gvec_ussub(unsigned vece, uint32_t dofs,= uint32_t aofs, { .fniv =3D tcg_gen_ussub_vec, .fno =3D gen_helper_gvec_ussub8, .opt_opc =3D vecop_list, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fniv =3D tcg_gen_ussub_vec, .fno =3D gen_helper_gvec_ussub16, .opt_opc =3D vecop_list, @@ -1998,7 +1998,7 @@ void tcg_gen_gvec_smin(unsigned vece, uint32_t dofs, = uint32_t aofs, { .fniv =3D tcg_gen_smin_vec, .fno =3D gen_helper_gvec_smin8, .opt_opc =3D vecop_list, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fniv =3D tcg_gen_smin_vec, .fno =3D gen_helper_gvec_smin16, .opt_opc =3D vecop_list, @@ -2026,7 +2026,7 @@ void tcg_gen_gvec_umin(unsigned vece, uint32_t dofs, = uint32_t aofs, { .fniv =3D tcg_gen_umin_vec, .fno =3D gen_helper_gvec_umin8, .opt_opc =3D vecop_list, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fniv =3D tcg_gen_umin_vec, .fno =3D gen_helper_gvec_umin16, .opt_opc =3D vecop_list, @@ -2054,7 +2054,7 @@ void tcg_gen_gvec_smax(unsigned vece, uint32_t dofs, = uint32_t aofs, { .fniv =3D tcg_gen_smax_vec, .fno =3D gen_helper_gvec_smax8, .opt_opc =3D vecop_list, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fniv =3D tcg_gen_smax_vec, .fno =3D gen_helper_gvec_smax16, .opt_opc =3D vecop_list, @@ -2082,7 +2082,7 @@ void tcg_gen_gvec_umax(unsigned vece, uint32_t dofs, = uint32_t aofs, { .fniv =3D tcg_gen_umax_vec, .fno =3D gen_helper_gvec_umax8, .opt_opc =3D vecop_list, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fniv =3D tcg_gen_umax_vec, .fno =3D gen_helper_gvec_umax16, .opt_opc =3D vecop_list, @@ -2120,7 +2120,7 @@ static void gen_negv_mask(TCGv_i64 d, TCGv_i64 b, TCG= v_i64 m) void tcg_gen_vec_neg8_i64(TCGv_i64 d, TCGv_i64 b) { - TCGv_i64 m =3D 
tcg_const_i64(dup_const(MO_8, 0x80)); + TCGv_i64 m =3D tcg_const_i64(dup_const(MO_UB, 0x80)); gen_negv_mask(d, b, m); tcg_temp_free_i64(m); } @@ -2155,7 +2155,7 @@ void tcg_gen_gvec_neg(unsigned vece, uint32_t dofs, u= int32_t aofs, .fniv =3D tcg_gen_neg_vec, .fno =3D gen_helper_gvec_neg8, .opt_opc =3D vecop_list, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fni8 =3D tcg_gen_vec_neg16_i64, .fniv =3D tcg_gen_neg_vec, .fno =3D gen_helper_gvec_neg16, @@ -2201,7 +2201,7 @@ static void gen_absv_mask(TCGv_i64 d, TCGv_i64 b, uns= igned vece) static void tcg_gen_vec_abs8_i64(TCGv_i64 d, TCGv_i64 b) { - gen_absv_mask(d, b, MO_8); + gen_absv_mask(d, b, MO_UB); } static void tcg_gen_vec_abs16_i64(TCGv_i64 d, TCGv_i64 b) @@ -2218,7 +2218,7 @@ void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, u= int32_t aofs, .fniv =3D tcg_gen_abs_vec, .fno =3D gen_helper_gvec_abs8, .opt_opc =3D vecop_list, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fni8 =3D tcg_gen_vec_abs16_i64, .fniv =3D tcg_gen_abs_vec, .fno =3D gen_helper_gvec_abs16, @@ -2454,7 +2454,7 @@ void tcg_gen_gvec_ori(unsigned vece, uint32_t dofs, u= int32_t aofs, void tcg_gen_vec_shl8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c) { - uint64_t mask =3D dup_const(MO_8, 0xff << c); + uint64_t mask =3D dup_const(MO_UB, 0xff << c); tcg_gen_shli_i64(d, a, c); tcg_gen_andi_i64(d, d, mask); } @@ -2475,7 +2475,7 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_shli_vec, .fno =3D gen_helper_gvec_shl8i, .opt_opc =3D vecop_list, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fni8 =3D tcg_gen_vec_shl16i_i64, .fniv =3D tcg_gen_shli_vec, .fno =3D gen_helper_gvec_shl16i, @@ -2505,7 +2505,7 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, = uint32_t aofs, void tcg_gen_vec_shr8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c) { - uint64_t mask =3D dup_const(MO_8, 0xff >> c); + uint64_t mask =3D dup_const(MO_UB, 0xff >> c); tcg_gen_shri_i64(d, a, c); tcg_gen_andi_i64(d, d, mask); } @@ -2526,7 +2526,7 @@ void 
tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_shri_vec, .fno =3D gen_helper_gvec_shr8i, .opt_opc =3D vecop_list, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fni8 =3D tcg_gen_vec_shr16i_i64, .fniv =3D tcg_gen_shri_vec, .fno =3D gen_helper_gvec_shr16i, @@ -2556,8 +2556,8 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, = uint32_t aofs, void tcg_gen_vec_sar8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c) { - uint64_t s_mask =3D dup_const(MO_8, 0x80 >> c); - uint64_t c_mask =3D dup_const(MO_8, 0xff >> c); + uint64_t s_mask =3D dup_const(MO_UB, 0x80 >> c); + uint64_t c_mask =3D dup_const(MO_UB, 0xff >> c); TCGv_i64 s =3D tcg_temp_new_i64(); tcg_gen_shri_i64(d, a, c); @@ -2591,7 +2591,7 @@ void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_sari_vec, .fno =3D gen_helper_gvec_sar8i, .opt_opc =3D vecop_list, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fni8 =3D tcg_gen_vec_sar16i_i64, .fniv =3D tcg_gen_sari_vec, .fno =3D gen_helper_gvec_sar16i, @@ -2880,7 +2880,7 @@ void tcg_gen_gvec_shlv(unsigned vece, uint32_t dofs, = uint32_t aofs, { .fniv =3D tcg_gen_shlv_mod_vec, .fno =3D gen_helper_gvec_shl8v, .opt_opc =3D vecop_list, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fniv =3D tcg_gen_shlv_mod_vec, .fno =3D gen_helper_gvec_shl16v, .opt_opc =3D vecop_list, @@ -2943,7 +2943,7 @@ void tcg_gen_gvec_shrv(unsigned vece, uint32_t dofs, = uint32_t aofs, { .fniv =3D tcg_gen_shrv_mod_vec, .fno =3D gen_helper_gvec_shr8v, .opt_opc =3D vecop_list, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fniv =3D tcg_gen_shrv_mod_vec, .fno =3D gen_helper_gvec_shr16v, .opt_opc =3D vecop_list, @@ -3006,7 +3006,7 @@ void tcg_gen_gvec_sarv(unsigned vece, uint32_t dofs, = uint32_t aofs, { .fniv =3D tcg_gen_sarv_mod_vec, .fno =3D gen_helper_gvec_sar8v, .opt_opc =3D vecop_list, - .vece =3D MO_8 }, + .vece =3D MO_UB }, { .fniv =3D tcg_gen_sarv_mod_vec, .fno =3D gen_helper_gvec_sar16v, .opt_opc =3D vecop_list, @@ -3129,7 +3129,7 @@ void 
tcg_gen_gvec_cmp(TCGCond cond, unsigned vece, ui= nt32_t dofs, check_overlap_3(dofs, aofs, bofs, maxsz); if (cond =3D=3D TCG_COND_NEVER || cond =3D=3D TCG_COND_ALWAYS) { - do_dup(MO_8, dofs, oprsz, maxsz, + do_dup(MO_UB, dofs, oprsz, maxsz, NULL, NULL, -(cond =3D=3D TCG_COND_ALWAYS)); return; } diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c index 6714991..d7ffc9e 100644 --- a/tcg/tcg-op-vec.c +++ b/tcg/tcg-op-vec.c @@ -275,7 +275,7 @@ void tcg_gen_dup16i_vec(TCGv_vec r, uint32_t a) void tcg_gen_dup8i_vec(TCGv_vec r, uint32_t a) { - do_dupi_vec(r, MO_REG, dup_const(MO_8, a)); + do_dupi_vec(r, MO_REG, dup_const(MO_UB, a)); } void tcg_gen_dupi_vec(unsigned vece, TCGv_vec r, uint64_t a) @@ -752,13 +752,13 @@ void tcg_gen_bitsel_vec(unsigned vece, TCGv_vec r, TC= Gv_vec a, tcg_debug_assert(ct->base_type >=3D type); if (TCG_TARGET_HAS_bitsel_vec) { - vec_gen_4(INDEX_op_bitsel_vec, type, MO_8, + vec_gen_4(INDEX_op_bitsel_vec, type, MO_UB, temp_arg(rt), temp_arg(at), temp_arg(bt), temp_arg(ct)); } else { TCGv_vec t =3D tcg_temp_new_vec(type); - tcg_gen_and_vec(MO_8, t, a, b); - tcg_gen_andc_vec(MO_8, r, c, a); - tcg_gen_or_vec(MO_8, r, r, t); + tcg_gen_and_vec(MO_UB, t, a, b); + tcg_gen_andc_vec(MO_UB, r, c, a); + tcg_gen_or_vec(MO_UB, r, r, t); tcg_temp_free_vec(t); } } diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c index 587d092..61eda33 100644 --- a/tcg/tcg-op.c +++ b/tcg/tcg-op.c @@ -2720,7 +2720,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemO= p op, bool is64, bool st) (void)get_alignment_bits(op); switch (op & MO_SIZE) { - case MO_8: + case MO_UB: op &=3D ~MO_BSWAP; break; case MO_16: @@ -3024,7 +3024,7 @@ typedef void (*gen_atomic_op_i64)(TCGv_i64, TCGv_env,= TCGv, TCGv_i64); #endif static void * const table_cmpxchg[16] =3D { - [MO_8] =3D gen_helper_atomic_cmpxchgb, + [MO_UB] =3D gen_helper_atomic_cmpxchgb, [MO_16 | MO_LE] =3D gen_helper_atomic_cmpxchgw_le, [MO_16 | MO_BE] =3D gen_helper_atomic_cmpxchgw_be, [MO_32 | MO_LE] =3D gen_helper_atomic_cmpxchgl_le, 
@@ -3248,7 +3248,7 @@ static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr,= TCGv_i64 val, #define GEN_ATOMIC_HELPER(NAME, OP, NEW) \ static void * const table_##NAME[16] =3D { \ - [MO_8] =3D gen_helper_atomic_##NAME##b, \ + [MO_UB] =3D gen_helper_atomic_##NAME##b, = \ [MO_16 | MO_LE] =3D gen_helper_atomic_##NAME##w_le, \ [MO_16 | MO_BE] =3D gen_helper_atomic_##NAME##w_be, \ [MO_32 | MO_LE] =3D gen_helper_atomic_##NAME##l_le, \ diff --git a/tcg/tcg.h b/tcg/tcg.h index b411e17..5636d6b 100644 --- a/tcg/tcg.h +++ b/tcg/tcg.h @@ -1302,7 +1302,7 @@ uint64_t dup_const(unsigned vece, uint64_t c); #define dup_const(VECE, C) \ (__builtin_constant_p(VECE) \ - ? ( (VECE) =3D=3D MO_8 ? 0x0101010101010101ull * (uint8_t)(C) \ + ? ((VECE) =3D=3D MO_UB ? 0x0101010101010101ull * (uint8_t)(C) \ : (VECE) =3D=3D MO_16 ? 0x0001000100010001ull * (uint16_t)(C) \ : (VECE) =3D=3D MO_32 ? 0x0000000100000001ull * (uint32_t)(C) \ : dup_const(VECE, C)) \ -- 1.8.3.1
From nobody Mon Feb 9 05:43:11 2026
From: 
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, walling@linux.ibm.com, mst@redhat.com, palmer@sifive.com, mark.cave-ayland@ilande.co.uk, Alistair.Francis@wdc.com, arikalo@wavecomp.com, david@redhat.com, pasic@linux.ibm.com, borntraeger@de.ibm.com, rth@twiddle.net, atar4qemu@gmail.com, ehabkost@redhat.com, sw@weilnetz.de, qemu-s390x@nongnu.org, qemu-arm@nongnu.org, david@gibson.dropbear.id.au, qemu-riscv@nongnu.org, cohuck@redhat.com, claudio.fontana@huawei.com, alex.williamson@redhat.com, qemu-ppc@nongnu.org, amarkovic@wavecomp.com, pbonzini@redhat.com, aurelien@aurel32.net
Subject: [Qemu-devel] [PATCH v2 02/20] tcg: Replace MO_16 with MO_UW alias
Date: Mon, 22 Jul 2019 15:40:17 +0000
Message-ID: <1563810016433.48708@bt.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

Preparation for splitting MO_16 out from TCGMemOp into new accelerator independent MemOp.
As MO_16 will be a value of MemOp, existing TCGMemOp comparisons and coercions will trigger -Wenum-compare and -Wenum-conversion. Signed-off-by: Tony Nguyen --- target/arm/sve_helper.c | 4 +- target/arm/translate-a64.c | 90 ++++++++-------- target/arm/translate-sve.c | 40 ++++---- target/arm/translate-vfp.inc.c | 2 +- target/arm/translate.c | 32 +++--- target/i386/translate.c | 200 ++++++++++++++++++--------------= ---- target/mips/translate.c | 2 +- target/ppc/translate/vmx-impl.inc.c | 28 ++--- target/s390x/translate_vx.inc.c | 2 +- target/s390x/vec.h | 4 +- tcg/aarch64/tcg-target.inc.c | 20 ++-- tcg/arm/tcg-target.inc.c | 6 +- tcg/i386/tcg-target.inc.c | 48 ++++----- tcg/mips/tcg-target.inc.c | 6 +- tcg/riscv/tcg-target.inc.c | 4 +- tcg/sparc/tcg-target.inc.c | 2 +- tcg/tcg-op-gvec.c | 72 ++++++------- tcg/tcg-op-vec.c | 2 +- tcg/tcg-op.c | 18 ++-- tcg/tcg.h | 2 +- 20 files changed, 292 insertions(+), 292 deletions(-) diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c index 4c7e11f..f6bef3d 100644 --- a/target/arm/sve_helper.c +++ b/target/arm/sve_helper.c @@ -1546,7 +1546,7 @@ void HELPER(sve_cpy_m_h)(void *vd, void *vn, void *vg, uint64_t *d =3D vd, *n =3D vn; uint8_t *pg =3D vg; - mm =3D dup_const(MO_16, mm); + mm =3D dup_const(MO_UW, mm); for (i =3D 0; i < opr_sz; i +=3D 1) { uint64_t nn =3D n[i]; uint64_t pp =3D expand_pred_h(pg[H1(i)]); @@ -1600,7 +1600,7 @@ void HELPER(sve_cpy_z_h)(void *vd, void *vg, uint64_t= val, uint32_t desc) uint64_t *d =3D vd; uint8_t *pg =3D vg; - val =3D dup_const(MO_16, val); + val =3D dup_const(MO_UW, val); for (i =3D 0; i < opr_sz; i +=3D 1) { d[i] =3D val & expand_pred_h(pg[H1(i)]); } diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c index f840b43..3acfccb 100644 --- a/target/arm/translate-a64.c +++ b/target/arm/translate-a64.c @@ -492,7 +492,7 @@ static TCGv_i32 read_fp_hreg(DisasContext *s, int reg) { TCGv_i32 v =3D tcg_temp_new_i32(); - tcg_gen_ld16u_i32(v, cpu_env, fp_reg_offset(s, reg, 
MO_16)); + tcg_gen_ld16u_i32(v, cpu_env, fp_reg_offset(s, reg, MO_UW)); return v; } @@ -996,7 +996,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 = tcg_dest, int srcidx, case MO_UB: tcg_gen_ld8u_i64(tcg_dest, cpu_env, vect_off); break; - case MO_16: + case MO_UW: tcg_gen_ld16u_i64(tcg_dest, cpu_env, vect_off); break; case MO_32: @@ -1005,7 +1005,7 @@ static void read_vec_element(DisasContext *s, TCGv_i6= 4 tcg_dest, int srcidx, case MO_SB: tcg_gen_ld8s_i64(tcg_dest, cpu_env, vect_off); break; - case MO_16|MO_SIGN: + case MO_SW: tcg_gen_ld16s_i64(tcg_dest, cpu_env, vect_off); break; case MO_32|MO_SIGN: @@ -1028,13 +1028,13 @@ static void read_vec_element_i32(DisasContext *s, T= CGv_i32 tcg_dest, int srcidx, case MO_UB: tcg_gen_ld8u_i32(tcg_dest, cpu_env, vect_off); break; - case MO_16: + case MO_UW: tcg_gen_ld16u_i32(tcg_dest, cpu_env, vect_off); break; case MO_SB: tcg_gen_ld8s_i32(tcg_dest, cpu_env, vect_off); break; - case MO_16|MO_SIGN: + case MO_SW: tcg_gen_ld16s_i32(tcg_dest, cpu_env, vect_off); break; case MO_32: @@ -1055,7 +1055,7 @@ static void write_vec_element(DisasContext *s, TCGv_i= 64 tcg_src, int destidx, case MO_UB: tcg_gen_st8_i64(tcg_src, cpu_env, vect_off); break; - case MO_16: + case MO_UW: tcg_gen_st16_i64(tcg_src, cpu_env, vect_off); break; case MO_32: @@ -1077,7 +1077,7 @@ static void write_vec_element_i32(DisasContext *s, TC= Gv_i32 tcg_src, case MO_UB: tcg_gen_st8_i32(tcg_src, cpu_env, vect_off); break; - case MO_16: + case MO_UW: tcg_gen_st16_i32(tcg_src, cpu_env, vect_off); break; case MO_32: @@ -5269,7 +5269,7 @@ static void handle_fp_compare(DisasContext *s, int si= ze, bool cmp_with_zero, bool signal_all_nans) { TCGv_i64 tcg_flags =3D tcg_temp_new_i64(); - TCGv_ptr fpst =3D get_fpstatus_ptr(size =3D=3D MO_16); + TCGv_ptr fpst =3D get_fpstatus_ptr(size =3D=3D MO_UW); if (size =3D=3D MO_64) { TCGv_i64 tcg_vn, tcg_vm; @@ -5306,7 +5306,7 @@ static void handle_fp_compare(DisasContext *s, int si= ze, 
gen_helper_vfp_cmps_a64(tcg_flags, tcg_vn, tcg_vm, fpst); } break; - case MO_16: + case MO_UW: if (signal_all_nans) { gen_helper_vfp_cmpeh_a64(tcg_flags, tcg_vn, tcg_vm, fpst); } else { @@ -5360,7 +5360,7 @@ static void disas_fp_compare(DisasContext *s, uint32_= t insn) size =3D MO_64; break; case 3: - size =3D MO_16; + size =3D MO_UW; if (dc_isar_feature(aa64_fp16, s)) { break; } @@ -5411,7 +5411,7 @@ static void disas_fp_ccomp(DisasContext *s, uint32_t = insn) size =3D MO_64; break; case 3: - size =3D MO_16; + size =3D MO_UW; if (dc_isar_feature(aa64_fp16, s)) { break; } @@ -5477,7 +5477,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t i= nsn) sz =3D MO_64; break; case 3: - sz =3D MO_16; + sz =3D MO_UW; if (dc_isar_feature(aa64_fp16, s)) { break; } @@ -6282,7 +6282,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t in= sn) sz =3D MO_64; break; case 3: - sz =3D MO_16; + sz =3D MO_UW; if (dc_isar_feature(aa64_fp16, s)) { break; } @@ -6593,7 +6593,7 @@ static void handle_fmov(DisasContext *s, int rd, int = rn, int type, bool itof) break; case 3: /* 16 bit */ - tcg_gen_ld16u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_16)= ); + tcg_gen_ld16u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_UW)= ); break; default: g_assert_not_reached(); @@ -7030,7 +7030,7 @@ static TCGv_i32 do_reduction_op(DisasContext *s, int = fpopcode, int rn, { if (esize =3D=3D size) { int element; - TCGMemOp msize =3D esize =3D=3D 16 ? MO_16 : MO_32; + TCGMemOp msize =3D esize =3D=3D 16 ? MO_UW : MO_32; TCGv_i32 tcg_elem; /* We should have one register left here */ @@ -7204,7 +7204,7 @@ static void disas_simd_across_lanes(DisasContext *s, = uint32_t insn) * Note that correct NaN propagation requires that we do these * operations in exactly the order specified by the pseudocode. 
*/ - TCGv_ptr fpst =3D get_fpstatus_ptr(size =3D=3D MO_16); + TCGv_ptr fpst =3D get_fpstatus_ptr(size =3D=3D MO_UW); int fpopcode =3D opcode | is_min << 4 | is_u << 5; int vmap =3D (1 << elements) - 1; TCGv_i32 tcg_res32 =3D do_reduction_op(s, fpopcode, rn, esize, @@ -7591,7 +7591,7 @@ static void disas_simd_mod_imm(DisasContext *s, uint3= 2_t insn) } else { if (o2) { /* FMOV (vector, immediate) - half-precision */ - imm =3D vfp_expand_imm(MO_16, abcdefgh); + imm =3D vfp_expand_imm(MO_UW, abcdefgh); /* now duplicate across the lanes */ imm =3D bitfield_replicate(imm, 16); } else { @@ -7699,7 +7699,7 @@ static void disas_simd_scalar_pairwise(DisasContext *= s, uint32_t insn) unallocated_encoding(s); return; } else { - size =3D MO_16; + size =3D MO_UW; } } else { size =3D extract32(size, 0, 1) ? MO_64 : MO_32; @@ -7709,7 +7709,7 @@ static void disas_simd_scalar_pairwise(DisasContext *= s, uint32_t insn) return; } - fpst =3D get_fpstatus_ptr(size =3D=3D MO_16); + fpst =3D get_fpstatus_ptr(size =3D=3D MO_UW); break; default: unallocated_encoding(s); @@ -7760,7 +7760,7 @@ static void disas_simd_scalar_pairwise(DisasContext *= s, uint32_t insn) read_vec_element_i32(s, tcg_op1, rn, 0, size); read_vec_element_i32(s, tcg_op2, rn, 1, size); - if (size =3D=3D MO_16) { + if (size =3D=3D MO_UW) { switch (opcode) { case 0xc: /* FMAXNMP */ gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst= ); @@ -8222,7 +8222,7 @@ static void handle_simd_intfp_conv(DisasContext *s, i= nt rd, int rn, int elements, int is_signed, int fracbits, int size) { - TCGv_ptr tcg_fpst =3D get_fpstatus_ptr(size =3D=3D MO_16); + TCGv_ptr tcg_fpst =3D get_fpstatus_ptr(size =3D=3D MO_UW); TCGv_i32 tcg_shift =3D NULL; TCGMemOp mop =3D size | (is_signed ? 
MO_SIGN : 0); @@ -8281,7 +8281,7 @@ static void handle_simd_intfp_conv(DisasContext *s, i= nt rd, int rn, } } break; - case MO_16: + case MO_UW: if (fracbits) { if (is_signed) { gen_helper_vfp_sltoh(tcg_float, tcg_int32, @@ -8339,7 +8339,7 @@ static void handle_simd_shift_intfp_conv(DisasContext= *s, bool is_scalar, } else if (immh & 4) { size =3D MO_32; } else if (immh & 2) { - size =3D MO_16; + size =3D MO_UW; if (!dc_isar_feature(aa64_fp16, s)) { unallocated_encoding(s); return; @@ -8384,7 +8384,7 @@ static void handle_simd_shift_fpint_conv(DisasContext= *s, bool is_scalar, } else if (immh & 0x4) { size =3D MO_32; } else if (immh & 0x2) { - size =3D MO_16; + size =3D MO_UW; if (!dc_isar_feature(aa64_fp16, s)) { unallocated_encoding(s); return; @@ -8403,7 +8403,7 @@ static void handle_simd_shift_fpint_conv(DisasContext= *s, bool is_scalar, assert(!(is_scalar && is_q)); tcg_rmode =3D tcg_const_i32(arm_rmode_to_sf(FPROUNDING_ZERO)); - tcg_fpstatus =3D get_fpstatus_ptr(size =3D=3D MO_16); + tcg_fpstatus =3D get_fpstatus_ptr(size =3D=3D MO_UW); gen_helper_set_rmode(tcg_rmode, tcg_rmode, tcg_fpstatus); fracbits =3D (16 << size) - immhb; tcg_shift =3D tcg_const_i32(fracbits); @@ -8429,7 +8429,7 @@ static void handle_simd_shift_fpint_conv(DisasContext= *s, bool is_scalar, int maxpass =3D is_scalar ? 
1 : ((8 << is_q) >> size); switch (size) { - case MO_16: + case MO_UW: if (is_u) { fn =3D gen_helper_vfp_touhh; } else { @@ -9388,7 +9388,7 @@ static void handle_2misc_fcmp_zero(DisasContext *s, i= nt opcode, return; } - fpst =3D get_fpstatus_ptr(size =3D=3D MO_16); + fpst =3D get_fpstatus_ptr(size =3D=3D MO_UW); if (is_double) { TCGv_i64 tcg_op =3D tcg_temp_new_i64(); @@ -9440,7 +9440,7 @@ static void handle_2misc_fcmp_zero(DisasContext *s, i= nt opcode, bool swap =3D false; int pass, maxpasses; - if (size =3D=3D MO_16) { + if (size =3D=3D MO_UW) { switch (opcode) { case 0x2e: /* FCMLT (zero) */ swap =3D true; @@ -11422,8 +11422,8 @@ static void disas_simd_three_reg_same_fp16(DisasCon= text *s, uint32_t insn) int passreg =3D pass < (maxpass / 2) ? rn : rm; int passelt =3D (pass << 1) & (maxpass - 1); - read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_16); - read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_16); + read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_UW); + read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_UW); tcg_res[pass] =3D tcg_temp_new_i32(); switch (fpopcode) { @@ -11450,7 +11450,7 @@ static void disas_simd_three_reg_same_fp16(DisasCon= text *s, uint32_t insn) } for (pass =3D 0; pass < maxpass; pass++) { - write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_16); + write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_UW); tcg_temp_free_i32(tcg_res[pass]); } @@ -11463,15 +11463,15 @@ static void disas_simd_three_reg_same_fp16(DisasC= ontext *s, uint32_t insn) TCGv_i32 tcg_op2 =3D tcg_temp_new_i32(); TCGv_i32 tcg_res =3D tcg_temp_new_i32(); - read_vec_element_i32(s, tcg_op1, rn, pass, MO_16); - read_vec_element_i32(s, tcg_op2, rm, pass, MO_16); + read_vec_element_i32(s, tcg_op1, rn, pass, MO_UW); + read_vec_element_i32(s, tcg_op2, rm, pass, MO_UW); switch (fpopcode) { case 0x0: /* FMAXNM */ gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst= ); break; case 0x1: /* FMLA */ - read_vec_element_i32(s, tcg_res, rd, 
pass, MO_16); + read_vec_element_i32(s, tcg_res, rd, pass, MO_UW); gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_= res, fpst); break; @@ -11496,7 +11496,7 @@ static void disas_simd_three_reg_same_fp16(DisasCon= text *s, uint32_t insn) case 0x9: /* FMLS */ /* As usual for ARM, separate negation for fused multiply-= add */ tcg_gen_xori_i32(tcg_op1, tcg_op1, 0x8000); - read_vec_element_i32(s, tcg_res, rd, pass, MO_16); + read_vec_element_i32(s, tcg_res, rd, pass, MO_UW); gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_= res, fpst); break; @@ -11537,7 +11537,7 @@ static void disas_simd_three_reg_same_fp16(DisasCon= text *s, uint32_t insn) g_assert_not_reached(); } - write_vec_element_i32(s, tcg_res, rd, pass, MO_16); + write_vec_element_i32(s, tcg_res, rd, pass, MO_UW); tcg_temp_free_i32(tcg_res); tcg_temp_free_i32(tcg_op1); tcg_temp_free_i32(tcg_op2); @@ -11727,7 +11727,7 @@ static void handle_2misc_widening(DisasContext *s, = int opcode, bool is_q, for (pass =3D 0; pass < 4; pass++) { tcg_res[pass] =3D tcg_temp_new_i32(); - read_vec_element_i32(s, tcg_res[pass], rn, srcelt + pass, MO_1= 6); + read_vec_element_i32(s, tcg_res[pass], rn, srcelt + pass, MO_U= W); gen_helper_vfp_fcvt_f16_to_f32(tcg_res[pass], tcg_res[pass], fpst, ahp); } @@ -11768,7 +11768,7 @@ static void handle_rev(DisasContext *s, int opcode,= bool u, read_vec_element(s, tcg_tmp, rn, i, grp_size); switch (grp_size) { - case MO_16: + case MO_UW: tcg_gen_bswap16_i64(tcg_tmp, tcg_tmp); break; case MO_32: @@ -12499,7 +12499,7 @@ static void disas_simd_two_reg_misc_fp16(DisasConte= xt *s, uint32_t insn) if (!fp_access_check(s)) { return; } - handle_simd_intfp_conv(s, rd, rn, elements, !u, 0, MO_16); + handle_simd_intfp_conv(s, rd, rn, elements, !u, 0, MO_UW); return; } break; @@ -12508,7 +12508,7 @@ static void disas_simd_two_reg_misc_fp16(DisasConte= xt *s, uint32_t insn) case 0x2e: /* FCMLT (zero) */ case 0x6c: /* FCMGE (zero) */ case 0x6d: /* FCMLE (zero) */ - 
handle_2misc_fcmp_zero(s, fpop, is_scalar, 0, is_q, MO_16, rn, rd); + handle_2misc_fcmp_zero(s, fpop, is_scalar, 0, is_q, MO_UW, rn, rd); return; case 0x3d: /* FRECPE */ case 0x3f: /* FRECPX */ @@ -12668,7 +12668,7 @@ static void disas_simd_two_reg_misc_fp16(DisasConte= xt *s, uint32_t insn) TCGv_i32 tcg_op =3D tcg_temp_new_i32(); TCGv_i32 tcg_res =3D tcg_temp_new_i32(); - read_vec_element_i32(s, tcg_op, rn, pass, MO_16); + read_vec_element_i32(s, tcg_op, rn, pass, MO_UW); switch (fpop) { case 0x1a: /* FCVTNS */ @@ -12715,7 +12715,7 @@ static void disas_simd_two_reg_misc_fp16(DisasConte= xt *s, uint32_t insn) g_assert_not_reached(); } - write_vec_element_i32(s, tcg_res, rd, pass, MO_16); + write_vec_element_i32(s, tcg_res, rd, pass, MO_UW); tcg_temp_free_i32(tcg_res); tcg_temp_free_i32(tcg_op); @@ -12839,7 +12839,7 @@ static void disas_simd_indexed(DisasContext *s, uin= t32_t insn) unallocated_encoding(s); return; } - size =3D MO_16; + size =3D MO_UW; /* is_fp, but we pass cpu_env not fp_status. */ break; default: @@ -12852,7 +12852,7 @@ static void disas_simd_indexed(DisasContext *s, uin= t32_t insn) /* convert insn encoded size to TCGMemOp size */ switch (size) { case 0: /* half-precision */ - size =3D MO_16; + size =3D MO_UW; is_fp16 =3D true; break; case MO_32: /* single precision */ @@ -12899,7 +12899,7 @@ static void disas_simd_indexed(DisasContext *s, uin= t32_t insn) /* Given TCGMemOp size, adjust register and indexing. 
*/ switch (size) { - case MO_16: + case MO_UW: index =3D h << 2 | l << 1 | m; break; case MO_32: diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c index ec5fb11..2bc1bd1 100644 --- a/target/arm/translate-sve.c +++ b/target/arm/translate-sve.c @@ -1679,7 +1679,7 @@ static void do_sat_addsub_vec(DisasContext *s, int es= z, int rd, int rn, tcg_temp_free_i32(t32); break; - case MO_16: + case MO_UW: t32 =3D tcg_temp_new_i32(); tcg_gen_extrl_i64_i32(t32, val); if (d) { @@ -3314,7 +3314,7 @@ static bool trans_SUBR_zzi(DisasContext *s, arg_rri_e= sz *a) .fniv =3D tcg_gen_sub_vec, .fno =3D gen_helper_sve_subri_h, .opt_opc =3D vecop_list, - .vece =3D MO_16, + .vece =3D MO_UW, .scalar_first =3D true }, { .fni4 =3D tcg_gen_sub_i32, .fniv =3D tcg_gen_sub_vec, @@ -3468,7 +3468,7 @@ static bool trans_FMLA_zzxz(DisasContext *s, arg_FMLA= _zzxz *a) if (sve_access_check(s)) { unsigned vsz =3D vec_full_reg_size(s); - TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_16); + TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_UW); tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd), vec_full_reg_offset(s, a->rn), vec_full_reg_offset(s, a->rm), @@ -3494,7 +3494,7 @@ static bool trans_FMUL_zzx(DisasContext *s, arg_FMUL_= zzx *a) if (sve_access_check(s)) { unsigned vsz =3D vec_full_reg_size(s); - TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_16); + TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_UW); tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd), vec_full_reg_offset(s, a->rn), vec_full_reg_offset(s, a->rm), @@ -3526,7 +3526,7 @@ static void do_reduce(DisasContext *s, arg_rpr_esz *a, tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, a->rn)); tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->pg)); - status =3D get_fpstatus_ptr(a->esz =3D=3D MO_16); + status =3D get_fpstatus_ptr(a->esz =3D=3D MO_UW); fn(temp, t_zn, t_pg, status, t_desc); tcg_temp_free_ptr(t_zn); @@ -3568,7 +3568,7 @@ DO_VPZ(FMAXV, fmaxv) static void do_zz_fp(DisasContext 
*s, arg_rr_esz *a, gen_helper_gvec_2_ptr= *fn) { unsigned vsz =3D vec_full_reg_size(s); - TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_16); + TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_UW); tcg_gen_gvec_2_ptr(vec_full_reg_offset(s, a->rd), vec_full_reg_offset(s, a->rn), @@ -3616,7 +3616,7 @@ static void do_ppz_fp(DisasContext *s, arg_rpr_esz *a, gen_helper_gvec_3_ptr *fn) { unsigned vsz =3D vec_full_reg_size(s); - TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_16); + TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_UW); tcg_gen_gvec_3_ptr(pred_full_reg_offset(s, a->rd), vec_full_reg_offset(s, a->rn), @@ -3668,7 +3668,7 @@ static bool trans_FTMAD(DisasContext *s, arg_FTMAD *a) } if (sve_access_check(s)) { unsigned vsz =3D vec_full_reg_size(s); - TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_16); + TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_UW); tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd), vec_full_reg_offset(s, a->rn), vec_full_reg_offset(s, a->rm), @@ -3708,7 +3708,7 @@ static bool trans_FADDA(DisasContext *s, arg_rprr_esz= *a) t_pg =3D tcg_temp_new_ptr(); tcg_gen_addi_ptr(t_rm, cpu_env, vec_full_reg_offset(s, a->rm)); tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->pg)); - t_fpst =3D get_fpstatus_ptr(a->esz =3D=3D MO_16); + t_fpst =3D get_fpstatus_ptr(a->esz =3D=3D MO_UW); t_desc =3D tcg_const_i32(simd_desc(vsz, vsz, 0)); fns[a->esz - 1](t_val, t_val, t_rm, t_pg, t_fpst, t_desc); @@ -3735,7 +3735,7 @@ static bool do_zzz_fp(DisasContext *s, arg_rrr_esz *a, } if (sve_access_check(s)) { unsigned vsz =3D vec_full_reg_size(s); - TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_16); + TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_UW); tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd), vec_full_reg_offset(s, a->rn), vec_full_reg_offset(s, a->rm), @@ -3777,7 +3777,7 @@ static bool do_zpzz_fp(DisasContext *s, arg_rprr_esz = *a, } if (sve_access_check(s)) { unsigned vsz =3D vec_full_reg_size(s); 
- TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_16); + TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_UW); tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd), vec_full_reg_offset(s, a->rn), vec_full_reg_offset(s, a->rm), @@ -3844,7 +3844,7 @@ static void do_fp_imm(DisasContext *s, arg_rpri_esz *= a, uint64_t imm, gen_helper_sve_fp2scalar *fn) { TCGv_i64 temp =3D tcg_const_i64(imm); - do_fp_scalar(s, a->rd, a->rn, a->pg, a->esz =3D=3D MO_16, temp, fn); + do_fp_scalar(s, a->rd, a->rn, a->pg, a->esz =3D=3D MO_UW, temp, fn); tcg_temp_free_i64(temp); } @@ -3893,7 +3893,7 @@ static bool do_fp_cmp(DisasContext *s, arg_rprr_esz *= a, } if (sve_access_check(s)) { unsigned vsz =3D vec_full_reg_size(s); - TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_16); + TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_UW); tcg_gen_gvec_4_ptr(pred_full_reg_offset(s, a->rd), vec_full_reg_offset(s, a->rn), vec_full_reg_offset(s, a->rm), @@ -3937,7 +3937,7 @@ static bool trans_FCADD(DisasContext *s, arg_FCADD *a) } if (sve_access_check(s)) { unsigned vsz =3D vec_full_reg_size(s); - TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_16); + TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_UW); tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd), vec_full_reg_offset(s, a->rn), vec_full_reg_offset(s, a->rm), @@ -4044,7 +4044,7 @@ static bool trans_FCMLA_zzxz(DisasContext *s, arg_FCM= LA_zzxz *a) tcg_debug_assert(a->rd =3D=3D a->ra); if (sve_access_check(s)) { unsigned vsz =3D vec_full_reg_size(s); - TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_16); + TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_UW); tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd), vec_full_reg_offset(s, a->rn), vec_full_reg_offset(s, a->rm), @@ -4186,7 +4186,7 @@ static bool trans_FRINTI(DisasContext *s, arg_rpr_esz= *a) if (a->esz =3D=3D 0) { return false; } - return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz =3D=3D MO_16, + return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz =3D=3D 
MO_UW, frint_fns[a->esz - 1]); } @@ -4200,7 +4200,7 @@ static bool trans_FRINTX(DisasContext *s, arg_rpr_esz= *a) if (a->esz =3D=3D 0) { return false; } - return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz =3D=3D MO_16, fns[a->= esz - 1]); + return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz =3D=3D MO_UW, fns[a->= esz - 1]); } static bool do_frint_mode(DisasContext *s, arg_rpr_esz *a, int mode) @@ -4211,7 +4211,7 @@ static bool do_frint_mode(DisasContext *s, arg_rpr_es= z *a, int mode) if (sve_access_check(s)) { unsigned vsz =3D vec_full_reg_size(s); TCGv_i32 tmode =3D tcg_const_i32(mode); - TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_16); + TCGv_ptr status =3D get_fpstatus_ptr(a->esz =3D=3D MO_UW); gen_helper_set_rmode(tmode, tmode, status); @@ -4262,7 +4262,7 @@ static bool trans_FRECPX(DisasContext *s, arg_rpr_esz= *a) if (a->esz =3D=3D 0) { return false; } - return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz =3D=3D MO_16, fns[a->= esz - 1]); + return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz =3D=3D MO_UW, fns[a->= esz - 1]); } static bool trans_FSQRT(DisasContext *s, arg_rpr_esz *a) @@ -4275,7 +4275,7 @@ static bool trans_FSQRT(DisasContext *s, arg_rpr_esz = *a) if (a->esz =3D=3D 0) { return false; } - return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz =3D=3D MO_16, fns[a->= esz - 1]); + return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz =3D=3D MO_UW, fns[a->= esz - 1]); } static bool trans_SCVTF_hh(DisasContext *s, arg_rpr_esz *a) diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c index 092eb5e..549874c 100644 --- a/target/arm/translate-vfp.inc.c +++ b/target/arm/translate-vfp.inc.c @@ -52,7 +52,7 @@ uint64_t vfp_expand_imm(int size, uint8_t imm8) (extract32(imm8, 0, 6) << 3); imm <<=3D 16; break; - case MO_16: + case MO_UW: imm =3D (extract32(imm8, 7, 1) ? 0x8000 : 0) | (extract32(imm8, 6, 1) ? 
0x3000 : 0x4000) | (extract32(imm8, 0, 6) << 6); diff --git a/target/arm/translate.c b/target/arm/translate.c index 39266cf..8d10922 100644 --- a/target/arm/translate.c +++ b/target/arm/translate.c @@ -1477,7 +1477,7 @@ static void neon_store_element(int reg, int ele, TCGM= emOp size, TCGv_i32 var) case MO_UB: tcg_gen_st8_i32(var, cpu_env, offset); break; - case MO_16: + case MO_UW: tcg_gen_st16_i32(var, cpu_env, offset); break; case MO_32: @@ -1496,7 +1496,7 @@ static void neon_store_element64(int reg, int ele, TC= GMemOp size, TCGv_i64 var) case MO_UB: tcg_gen_st8_i64(var, cpu_env, offset); break; - case MO_16: + case MO_UW: tcg_gen_st16_i64(var, cpu_env, offset); break; case MO_32: @@ -4267,7 +4267,7 @@ const GVecGen2i ssra_op[4] =3D { .fniv =3D gen_ssra_vec, .load_dest =3D true, .opt_opc =3D vecop_list_ssra, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D gen_ssra32_i32, .fniv =3D gen_ssra_vec, .load_dest =3D true, @@ -4325,7 +4325,7 @@ const GVecGen2i usra_op[4] =3D { .fniv =3D gen_usra_vec, .load_dest =3D true, .opt_opc =3D vecop_list_usra, - .vece =3D MO_16, }, + .vece =3D MO_UW, }, { .fni4 =3D gen_usra32_i32, .fniv =3D gen_usra_vec, .load_dest =3D true, @@ -4353,7 +4353,7 @@ static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, = int64_t shift) static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift) { - uint64_t mask =3D dup_const(MO_16, 0xffff >> shift); + uint64_t mask =3D dup_const(MO_UW, 0xffff >> shift); TCGv_i64 t =3D tcg_temp_new_i64(); tcg_gen_shri_i64(t, a, shift); @@ -4405,7 +4405,7 @@ const GVecGen2i sri_op[4] =3D { .fniv =3D gen_shr_ins_vec, .load_dest =3D true, .opt_opc =3D vecop_list_sri, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D gen_shr32_ins_i32, .fniv =3D gen_shr_ins_vec, .load_dest =3D true, @@ -4433,7 +4433,7 @@ static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, = int64_t shift) static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift) { - uint64_t mask =3D dup_const(MO_16, 0xffff << shift); + 
uint64_t mask =3D dup_const(MO_UW, 0xffff << shift); TCGv_i64 t =3D tcg_temp_new_i64(); tcg_gen_shli_i64(t, a, shift); @@ -4483,7 +4483,7 @@ const GVecGen2i sli_op[4] =3D { .fniv =3D gen_shl_ins_vec, .load_dest =3D true, .opt_opc =3D vecop_list_sli, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D gen_shl32_ins_i32, .fniv =3D gen_shl_ins_vec, .load_dest =3D true, @@ -4579,7 +4579,7 @@ const GVecGen3 mla_op[4] =3D { .fniv =3D gen_mla_vec, .load_dest =3D true, .opt_opc =3D vecop_list_mla, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D gen_mla32_i32, .fniv =3D gen_mla_vec, .load_dest =3D true, @@ -4603,7 +4603,7 @@ const GVecGen3 mls_op[4] =3D { .fniv =3D gen_mls_vec, .load_dest =3D true, .opt_opc =3D vecop_list_mls, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D gen_mls32_i32, .fniv =3D gen_mls_vec, .load_dest =3D true, @@ -4649,7 +4649,7 @@ const GVecGen3 cmtst_op[4] =3D { { .fni4 =3D gen_helper_neon_tst_u16, .fniv =3D gen_cmtst_vec, .opt_opc =3D vecop_list_cmtst, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D gen_cmtst_i32, .fniv =3D gen_cmtst_vec, .opt_opc =3D vecop_list_cmtst, @@ -4686,7 +4686,7 @@ const GVecGen4 uqadd_op[4] =3D { .fno =3D gen_helper_gvec_uqadd_h, .write_aofs =3D true, .opt_opc =3D vecop_list_uqadd, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fniv =3D gen_uqadd_vec, .fno =3D gen_helper_gvec_uqadd_s, .write_aofs =3D true, @@ -4724,7 +4724,7 @@ const GVecGen4 sqadd_op[4] =3D { .fno =3D gen_helper_gvec_sqadd_h, .opt_opc =3D vecop_list_sqadd, .write_aofs =3D true, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fniv =3D gen_sqadd_vec, .fno =3D gen_helper_gvec_sqadd_s, .opt_opc =3D vecop_list_sqadd, @@ -4762,7 +4762,7 @@ const GVecGen4 uqsub_op[4] =3D { .fno =3D gen_helper_gvec_uqsub_h, .opt_opc =3D vecop_list_uqsub, .write_aofs =3D true, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fniv =3D gen_uqsub_vec, .fno =3D gen_helper_gvec_uqsub_s, .opt_opc =3D vecop_list_uqsub, @@ -4800,7 +4800,7 @@ const GVecGen4 sqsub_op[4] =3D { 
.fno =3D gen_helper_gvec_sqsub_h, .opt_opc =3D vecop_list_sqsub, .write_aofs =3D true, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fniv =3D gen_sqsub_vec, .fno =3D gen_helper_gvec_sqsub_s, .opt_opc =3D vecop_list_sqsub, @@ -6876,7 +6876,7 @@ static int disas_neon_data_insn(DisasContext *s, uint= 32_t insn) size =3D MO_UB; element =3D (insn >> 17) & 7; } else if (insn & (1 << 17)) { - size =3D MO_16; + size =3D MO_UW; element =3D (insn >> 18) & 3; } else { size =3D MO_32; diff --git a/target/i386/translate.c b/target/i386/translate.c index 0e45300..0535bae 100644 --- a/target/i386/translate.c +++ b/target/i386/translate.c @@ -323,7 +323,7 @@ static inline bool byte_reg_is_xH(DisasContext *s, int = reg) static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot) { if (CODE64(s)) { - return ot =3D=3D MO_16 ? MO_16 : MO_64; + return ot =3D=3D MO_UW ? MO_UW : MO_64; } else { return ot; } @@ -332,7 +332,7 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGM= emOp ot) /* Select the size of the stack pointer. */ static inline TCGMemOp mo_stacksize(DisasContext *s) { - return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16; + return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_UW; } /* Select only size 64 else 32. Used for SSE operand sizes. */ @@ -356,7 +356,7 @@ static inline TCGMemOp mo_b_d(int b, TCGMemOp ot) Used for decoding operand size of port opcodes. */ static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot) { - return b & 1 ? (ot =3D=3D MO_16 ? MO_16 : MO_32) : MO_UB; + return b & 1 ? (ot =3D=3D MO_UW ? 
MO_UW : MO_32) : MO_UB; } static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t= 0) @@ -369,7 +369,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp = ot, int reg, TCGv t0) tcg_gen_deposit_tl(cpu_regs[reg - 4], cpu_regs[reg - 4], t0, 8= , 8); } break; - case MO_16: + case MO_UW: tcg_gen_deposit_tl(cpu_regs[reg], cpu_regs[reg], t0, 0, 16); break; case MO_32: @@ -473,7 +473,7 @@ static void gen_lea_v_seg(DisasContext *s, TCGMemOp afl= ag, TCGv a0, return; } break; - case MO_16: + case MO_UW: /* 16 bit address */ tcg_gen_ext16u_tl(s->A0, a0); a0 =3D s->A0; @@ -530,7 +530,7 @@ static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp siz= e, bool sign) tcg_gen_ext8u_tl(dst, src); } return dst; - case MO_16: + case MO_UW: if (sign) { tcg_gen_ext16s_tl(dst, src); } else { @@ -583,7 +583,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCG= v_i32 n) case MO_UB: gen_helper_inb(v, cpu_env, n); break; - case MO_16: + case MO_UW: gen_helper_inw(v, cpu_env, n); break; case MO_32: @@ -600,7 +600,7 @@ static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v= , TCGv_i32 n) case MO_UB: gen_helper_outb(cpu_env, v, n); break; - case MO_16: + case MO_UW: gen_helper_outw(cpu_env, v, n); break; case MO_32: @@ -622,7 +622,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, = target_ulong cur_eip, case MO_UB: gen_helper_check_iob(cpu_env, s->tmp2_i32); break; - case MO_16: + case MO_UW: gen_helper_check_iow(cpu_env, s->tmp2_i32); break; case MO_32: @@ -1562,7 +1562,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp o= t, int op1, int is_right) tcg_gen_ext8u_tl(s->T0, s->T0); tcg_gen_muli_tl(s->T0, s->T0, 0x01010101); goto do_long; - case MO_16: + case MO_UW: /* Replicate the 16-bit input so that a 32-bit rotate works. 
*/ tcg_gen_deposit_tl(s->T0, s->T0, s->T0, 16, 16); goto do_long; @@ -1664,7 +1664,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp o= t, int op1, int op2, case MO_UB: mask =3D 7; goto do_shifts; - case MO_16: + case MO_UW: mask =3D 15; do_shifts: shift =3D op2 & mask; @@ -1722,7 +1722,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp = ot, int op1, case MO_UB: gen_helper_rcrb(s->T0, cpu_env, s->T0, s->T1); break; - case MO_16: + case MO_UW: gen_helper_rcrw(s->T0, cpu_env, s->T0, s->T1); break; case MO_32: @@ -1741,7 +1741,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp = ot, int op1, case MO_UB: gen_helper_rclb(s->T0, cpu_env, s->T0, s->T1); break; - case MO_16: + case MO_UW: gen_helper_rclw(s->T0, cpu_env, s->T0, s->T1); break; case MO_32: @@ -1778,7 +1778,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemO= p ot, int op1, tcg_gen_andi_tl(count, count_in, mask); switch (ot) { - case MO_16: + case MO_UW: /* Note: we implement the Intel behaviour for shift count > 16. This means "shrdw C, B, A" shifts A:B:A >> C. Build the B:A portion by constructing it as a 32-bit value. */ @@ -1817,7 +1817,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemO= p ot, int op1, tcg_gen_shl_tl(s->T1, s->T1, s->tmp4); } else { tcg_gen_shl_tl(s->tmp0, s->T0, s->tmp0); - if (ot =3D=3D MO_16) { + if (ot =3D=3D MO_UW) { /* Only needed if count > 16, for Intel behaviour. 
*/ tcg_gen_subfi_tl(s->tmp4, 33, count); tcg_gen_shr_tl(s->tmp4, s->T1, s->tmp4); @@ -2026,7 +2026,7 @@ static AddressParts gen_lea_modrm_0(CPUX86State *env,= DisasContext *s, } break; - case MO_16: + case MO_UW: if (mod =3D=3D 0) { if (rm =3D=3D 6) { base =3D -1; @@ -2187,7 +2187,7 @@ static inline uint32_t insn_get(CPUX86State *env, Dis= asContext *s, TCGMemOp ot) case MO_UB: ret =3D x86_ldub_code(env, s); break; - case MO_16: + case MO_UW: ret =3D x86_lduw_code(env, s); break; case MO_32: @@ -2400,12 +2400,12 @@ static inline void gen_pop_update(DisasContext *s, = TCGMemOp ot) static inline void gen_stack_A0(DisasContext *s) { - gen_lea_v_seg(s, s->ss32 ? MO_32 : MO_16, cpu_regs[R_ESP], R_SS, -1); + gen_lea_v_seg(s, s->ss32 ? MO_32 : MO_UW, cpu_regs[R_ESP], R_SS, -1); } static void gen_pusha(DisasContext *s) { - TCGMemOp s_ot =3D s->ss32 ? MO_32 : MO_16; + TCGMemOp s_ot =3D s->ss32 ? MO_32 : MO_UW; TCGMemOp d_ot =3D s->dflag; int size =3D 1 << d_ot; int i; @@ -2421,7 +2421,7 @@ static void gen_pusha(DisasContext *s) static void gen_popa(DisasContext *s) { - TCGMemOp s_ot =3D s->ss32 ? MO_32 : MO_16; + TCGMemOp s_ot =3D s->ss32 ? MO_32 : MO_UW; TCGMemOp d_ot =3D s->dflag; int size =3D 1 << d_ot; int i; @@ -2443,7 +2443,7 @@ static void gen_popa(DisasContext *s) static void gen_enter(DisasContext *s, int esp_addend, int level) { TCGMemOp d_ot =3D mo_pushpop(s, s->dflag); - TCGMemOp a_ot =3D CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16; + TCGMemOp a_ot =3D CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_UW; int size =3D 1 << d_ot; /* Push BP; compute FrameTemp into T1. 
*/ @@ -3613,7 +3613,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s= , int b, case 0xc4: /* pinsrw */ case 0x1c4: s->rip_offset =3D 1; - gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0); + gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0); val =3D x86_ldub_code(env, s); if (b1) { val &=3D 7; @@ -3786,7 +3786,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s= , int b, if ((b & 0xff) =3D=3D 0xf0) { ot =3D MO_UB; } else if (s->dflag !=3D MO_64) { - ot =3D (s->prefix & PREFIX_DATA ? MO_16 : MO_32); + ot =3D (s->prefix & PREFIX_DATA ? MO_UW : MO_32); } else { ot =3D MO_64; } @@ -3815,7 +3815,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s= , int b, goto illegal_op; } if (s->dflag !=3D MO_64) { - ot =3D (s->prefix & PREFIX_DATA ? MO_16 : MO_32); + ot =3D (s->prefix & PREFIX_DATA ? MO_UW : MO_32); } else { ot =3D MO_64; } @@ -4630,7 +4630,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) /* In 64-bit mode, the default data size is 32-bit. Select 64-bit data with rex_w, and 16-bit data with 0x66; rex_w takes precede= nce over 0x66 if both are present. */ - dflag =3D (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_16 : MO= _32); + dflag =3D (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_UW : MO= _32); /* In 64-bit mode, 0x67 selects 32-bit addressing. */ aflag =3D (prefixes & PREFIX_ADR ? MO_32 : MO_64); } else { @@ -4638,13 +4638,13 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) if (s->code32 ^ ((prefixes & PREFIX_DATA) !=3D 0)) { dflag =3D MO_32; } else { - dflag =3D MO_16; + dflag =3D MO_UW; } /* In 16/32-bit mode, 0x67 selects the opposite addressing. 
*/ if (s->code32 ^ ((prefixes & PREFIX_ADR) !=3D 0)) { aflag =3D MO_32; } else { - aflag =3D MO_16; + aflag =3D MO_UW; } } @@ -4872,21 +4872,21 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) tcg_gen_ext8u_tl(s->T1, s->T1); /* XXX: use 32 bit mul which could be faster */ tcg_gen_mul_tl(s->T0, s->T0, s->T1); - gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0); + gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0); tcg_gen_mov_tl(cpu_cc_dst, s->T0); tcg_gen_andi_tl(cpu_cc_src, s->T0, 0xff00); set_cc_op(s, CC_OP_MULB); break; - case MO_16: - gen_op_mov_v_reg(s, MO_16, s->T1, R_EAX); + case MO_UW: + gen_op_mov_v_reg(s, MO_UW, s->T1, R_EAX); tcg_gen_ext16u_tl(s->T0, s->T0); tcg_gen_ext16u_tl(s->T1, s->T1); /* XXX: use 32 bit mul which could be faster */ tcg_gen_mul_tl(s->T0, s->T0, s->T1); - gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0); + gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0); tcg_gen_mov_tl(cpu_cc_dst, s->T0); tcg_gen_shri_tl(s->T0, s->T0, 16); - gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0); + gen_op_mov_reg_v(s, MO_UW, R_EDX, s->T0); tcg_gen_mov_tl(cpu_cc_src, s->T0); set_cc_op(s, CC_OP_MULW); break; @@ -4921,24 +4921,24 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) tcg_gen_ext8s_tl(s->T1, s->T1); /* XXX: use 32 bit mul which could be faster */ tcg_gen_mul_tl(s->T0, s->T0, s->T1); - gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0); + gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0); tcg_gen_mov_tl(cpu_cc_dst, s->T0); tcg_gen_ext8s_tl(s->tmp0, s->T0); tcg_gen_sub_tl(cpu_cc_src, s->T0, s->tmp0); set_cc_op(s, CC_OP_MULB); break; - case MO_16: - gen_op_mov_v_reg(s, MO_16, s->T1, R_EAX); + case MO_UW: + gen_op_mov_v_reg(s, MO_UW, s->T1, R_EAX); tcg_gen_ext16s_tl(s->T0, s->T0); tcg_gen_ext16s_tl(s->T1, s->T1); /* XXX: use 32 bit mul which could be faster */ tcg_gen_mul_tl(s->T0, s->T0, s->T1); - gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0); + gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0); tcg_gen_mov_tl(cpu_cc_dst, s->T0); tcg_gen_ext16s_tl(s->tmp0, s->T0); 
tcg_gen_sub_tl(cpu_cc_src, s->T0, s->tmp0); tcg_gen_shri_tl(s->T0, s->T0, 16); - gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0); + gen_op_mov_reg_v(s, MO_UW, R_EDX, s->T0); set_cc_op(s, CC_OP_MULW); break; default: @@ -4972,7 +4972,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) case MO_UB: gen_helper_divb_AL(cpu_env, s->T0); break; - case MO_16: + case MO_UW: gen_helper_divw_AX(cpu_env, s->T0); break; default: @@ -4991,7 +4991,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) case MO_UB: gen_helper_idivb_AL(cpu_env, s->T0); break; - case MO_16: + case MO_UW: gen_helper_idivw_AX(cpu_env, s->T0); break; default: @@ -5026,7 +5026,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) /* operand size for jumps is 64 bit */ ot =3D MO_64; } else if (op =3D=3D 3 || op =3D=3D 5) { - ot =3D dflag !=3D MO_16 ? MO_32 + (rex_w =3D=3D 1) : MO_16; + ot =3D dflag !=3D MO_UW ? MO_32 + (rex_w =3D=3D 1) : MO_UW; } else if (op =3D=3D 6) { /* default push size is 64 bit */ ot =3D mo_pushpop(s, dflag); @@ -5057,7 +5057,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) break; case 2: /* call Ev */ /* XXX: optimize if memory (no 'and' is necessary) */ - if (dflag =3D=3D MO_16) { + if (dflag =3D=3D MO_UW) { tcg_gen_ext16u_tl(s->T0, s->T0); } next_eip =3D s->pc - s->cs_base; @@ -5070,7 +5070,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) case 3: /* lcall Ev */ gen_op_ld_v(s, ot, s->T1, s->A0); gen_add_A0_im(s, 1 << ot); - gen_op_ld_v(s, MO_16, s->T0, s->A0); + gen_op_ld_v(s, MO_UW, s->T0, s->A0); do_lcall: if (s->pe && !s->vm86) { tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); @@ -5087,7 +5087,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) gen_jr(s, s->tmp4); break; case 4: /* jmp Ev */ - if (dflag =3D=3D MO_16) { + if (dflag =3D=3D MO_UW) { tcg_gen_ext16u_tl(s->T0, s->T0); } gen_op_jmp_v(s->T0); @@ -5097,7 +5097,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate 
*cpu) case 5: /* ljmp Ev */ gen_op_ld_v(s, ot, s->T1, s->A0); gen_add_A0_im(s, 1 << ot); - gen_op_ld_v(s, MO_16, s->T0, s->A0); + gen_op_ld_v(s, MO_UW, s->T0, s->A0); do_ljmp: if (s->pe && !s->vm86) { tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); @@ -5152,14 +5152,14 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) break; #endif case MO_32: - gen_op_mov_v_reg(s, MO_16, s->T0, R_EAX); + gen_op_mov_v_reg(s, MO_UW, s->T0, R_EAX); tcg_gen_ext16s_tl(s->T0, s->T0); gen_op_mov_reg_v(s, MO_32, R_EAX, s->T0); break; - case MO_16: + case MO_UW: gen_op_mov_v_reg(s, MO_UB, s->T0, R_EAX); tcg_gen_ext8s_tl(s->T0, s->T0); - gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0); + gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0); break; default: tcg_abort(); @@ -5180,11 +5180,11 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) tcg_gen_sari_tl(s->T0, s->T0, 31); gen_op_mov_reg_v(s, MO_32, R_EDX, s->T0); break; - case MO_16: - gen_op_mov_v_reg(s, MO_16, s->T0, R_EAX); + case MO_UW: + gen_op_mov_v_reg(s, MO_UW, s->T0, R_EAX); tcg_gen_ext16s_tl(s->T0, s->T0); tcg_gen_sari_tl(s->T0, s->T0, 15); - gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0); + gen_op_mov_reg_v(s, MO_UW, R_EDX, s->T0); break; default: tcg_abort(); @@ -5538,7 +5538,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) reg =3D (modrm >> 3) & 7; if (reg >=3D 6 || reg =3D=3D R_CS) goto illegal_op; - gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0); + gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0); gen_movl_seg_T0(s, reg); /* Note that reg =3D=3D R_SS in gen_movl_seg_T0 always sets is_jmp= . */ if (s->base.is_jmp) { @@ -5558,7 +5558,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) if (reg >=3D 6) goto illegal_op; gen_op_movl_T0_seg(s, reg); - ot =3D mod =3D=3D 3 ? dflag : MO_16; + ot =3D mod =3D=3D 3 ? 
dflag : MO_UW; gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1); break; @@ -5734,7 +5734,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) case 0x1b5: /* lgs Gv */ op =3D R_GS; do_lxx: - ot =3D dflag !=3D MO_16 ? MO_32 : MO_16; + ot =3D dflag !=3D MO_UW ? MO_32 : MO_UW; modrm =3D x86_ldub_code(env, s); reg =3D ((modrm >> 3) & 7) | rex_r; mod =3D (modrm >> 6) & 3; @@ -5744,7 +5744,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) gen_op_ld_v(s, ot, s->T1, s->A0); gen_add_A0_im(s, 1 << ot); /* load the segment first to handle exceptions properly */ - gen_op_ld_v(s, MO_16, s->T0, s->A0); + gen_op_ld_v(s, MO_UW, s->T0, s->A0); gen_movl_seg_T0(s, op); /* then put the data */ gen_op_mov_reg_v(s, ot, reg, s->T1); @@ -6287,7 +6287,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) case 0: gen_helper_fnstsw(s->tmp2_i32, cpu_env); tcg_gen_extu_i32_tl(s->T0, s->tmp2_i32); - gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0); + gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0); break; default: goto unknown_op; @@ -6575,14 +6575,14 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) break; case 0xe8: /* call im */ { - if (dflag !=3D MO_16) { + if (dflag !=3D MO_UW) { tval =3D (int32_t)insn_get(env, s, MO_32); } else { - tval =3D (int16_t)insn_get(env, s, MO_16); + tval =3D (int16_t)insn_get(env, s, MO_UW); } next_eip =3D s->pc - s->cs_base; tval +=3D next_eip; - if (dflag =3D=3D MO_16) { + if (dflag =3D=3D MO_UW) { tval &=3D 0xffff; } else if (!CODE64(s)) { tval &=3D 0xffffffff; @@ -6601,20 +6601,20 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) goto illegal_op; ot =3D dflag; offset =3D insn_get(env, s, ot); - selector =3D insn_get(env, s, MO_16); + selector =3D insn_get(env, s, MO_UW); tcg_gen_movi_tl(s->T0, selector); tcg_gen_movi_tl(s->T1, offset); } goto do_lcall; case 0xe9: /* jmp im */ - if (dflag !=3D MO_16) { + if (dflag !=3D MO_UW) { tval =3D (int32_t)insn_get(env, s, MO_32); } else { - tval =3D 
(int16_t)insn_get(env, s, MO_16); + tval =3D (int16_t)insn_get(env, s, MO_UW); } tval +=3D s->pc - s->cs_base; - if (dflag =3D=3D MO_16) { + if (dflag =3D=3D MO_UW) { tval &=3D 0xffff; } else if (!CODE64(s)) { tval &=3D 0xffffffff; @@ -6630,7 +6630,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) goto illegal_op; ot =3D dflag; offset =3D insn_get(env, s, ot); - selector =3D insn_get(env, s, MO_16); + selector =3D insn_get(env, s, MO_UW); tcg_gen_movi_tl(s->T0, selector); tcg_gen_movi_tl(s->T1, offset); @@ -6639,7 +6639,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) case 0xeb: /* jmp Jb */ tval =3D (int8_t)insn_get(env, s, MO_UB); tval +=3D s->pc - s->cs_base; - if (dflag =3D=3D MO_16) { + if (dflag =3D=3D MO_UW) { tval &=3D 0xffff; } gen_jmp(s, tval); @@ -6648,15 +6648,15 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) tval =3D (int8_t)insn_get(env, s, MO_UB); goto do_jcc; case 0x180 ... 0x18f: /* jcc Jv */ - if (dflag !=3D MO_16) { + if (dflag !=3D MO_UW) { tval =3D (int32_t)insn_get(env, s, MO_32); } else { - tval =3D (int16_t)insn_get(env, s, MO_16); + tval =3D (int16_t)insn_get(env, s, MO_UW); } do_jcc: next_eip =3D s->pc - s->cs_base; tval +=3D next_eip; - if (dflag =3D=3D MO_16) { + if (dflag =3D=3D MO_UW) { tval &=3D 0xffff; } gen_bnd_jmp(s); @@ -6697,7 +6697,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) } else { ot =3D gen_pop_T0(s); if (s->cpl =3D=3D 0) { - if (dflag !=3D MO_16) { + if (dflag !=3D MO_UW) { gen_helper_write_eflags(cpu_env, s->T0, tcg_const_i32((TF_MASK | AC_MA= SK | ID_MASK | NT_MA= SK | @@ -6712,7 +6712,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) } } else { if (s->cpl <=3D s->iopl) { - if (dflag !=3D MO_16) { + if (dflag !=3D MO_UW) { gen_helper_write_eflags(cpu_env, s->T0, tcg_const_i32((TF_MASK | AC_MASK | @@ -6729,7 +6729,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) & 0xffff)); } } else { - if (dflag 
!=3D MO_16) { + if (dflag !=3D MO_UW) { gen_helper_write_eflags(cpu_env, s->T0, tcg_const_i32((TF_MASK | AC_MAS= K | ID_MASK | NT_MAS= K))); @@ -7110,7 +7110,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) gen_op_mov_v_reg(s, ot, s->T0, reg); gen_lea_modrm(env, s, modrm); tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); - if (ot =3D=3D MO_16) { + if (ot =3D=3D MO_UW) { gen_helper_boundw(cpu_env, s->A0, s->tmp2_i32); } else { gen_helper_boundl(cpu_env, s->A0, s->tmp2_i32); @@ -7149,7 +7149,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) tval =3D (int8_t)insn_get(env, s, MO_UB); next_eip =3D s->pc - s->cs_base; tval +=3D next_eip; - if (dflag =3D=3D MO_16) { + if (dflag =3D=3D MO_UW) { tval &=3D 0xffff; } @@ -7291,7 +7291,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) gen_svm_check_intercept(s, pc_start, SVM_EXIT_LDTR_READ); tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, ldt.selector)); - ot =3D mod =3D=3D 3 ? dflag : MO_16; + ot =3D mod =3D=3D 3 ? dflag : MO_UW; gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1); break; case 2: /* lldt */ @@ -7301,7 +7301,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base); } else { gen_svm_check_intercept(s, pc_start, SVM_EXIT_LDTR_WRITE); - gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0); + gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0); tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); gen_helper_lldt(cpu_env, s->tmp2_i32); } @@ -7312,7 +7312,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) gen_svm_check_intercept(s, pc_start, SVM_EXIT_TR_READ); tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, tr.selector)); - ot =3D mod =3D=3D 3 ? dflag : MO_16; + ot =3D mod =3D=3D 3 ? 
dflag : MO_UW; gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1); break; case 3: /* ltr */ @@ -7322,7 +7322,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base); } else { gen_svm_check_intercept(s, pc_start, SVM_EXIT_TR_WRITE); - gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0); + gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0); tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); gen_helper_ltr(cpu_env, s->tmp2_i32); } @@ -7331,7 +7331,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) case 5: /* verw */ if (!s->pe || s->vm86) goto illegal_op; - gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0); + gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0); gen_update_cc_op(s); if (op =3D=3D 4) { gen_helper_verr(cpu_env, s->T0); @@ -7353,10 +7353,10 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) gen_lea_modrm(env, s, modrm); tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, gdt.limit)); - gen_op_st_v(s, MO_16, s->T0, s->A0); + gen_op_st_v(s, MO_UW, s->T0, s->A0); gen_add_A0_im(s, 2); tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, gdt.base)); - if (dflag =3D=3D MO_16) { + if (dflag =3D=3D MO_UW) { tcg_gen_andi_tl(s->T0, s->T0, 0xffffff); } gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0); @@ -7408,10 +7408,10 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) gen_svm_check_intercept(s, pc_start, SVM_EXIT_IDTR_READ); gen_lea_modrm(env, s, modrm); tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.lim= it)); - gen_op_st_v(s, MO_16, s->T0, s->A0); + gen_op_st_v(s, MO_UW, s->T0, s->A0); gen_add_A0_im(s, 2); tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.base)); - if (dflag =3D=3D MO_16) { + if (dflag =3D=3D MO_UW) { tcg_gen_andi_tl(s->T0, s->T0, 0xffffff); } gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0); @@ -7558,10 +7558,10 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) } gen_svm_check_intercept(s, pc_start, 
SVM_EXIT_GDTR_WRITE); gen_lea_modrm(env, s, modrm); - gen_op_ld_v(s, MO_16, s->T1, s->A0); + gen_op_ld_v(s, MO_UW, s->T1, s->A0); gen_add_A0_im(s, 2); gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0); - if (dflag =3D=3D MO_16) { + if (dflag =3D=3D MO_UW) { tcg_gen_andi_tl(s->T0, s->T0, 0xffffff); } tcg_gen_st_tl(s->T0, cpu_env, offsetof(CPUX86State, gdt.base)); @@ -7575,10 +7575,10 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) } gen_svm_check_intercept(s, pc_start, SVM_EXIT_IDTR_WRITE); gen_lea_modrm(env, s, modrm); - gen_op_ld_v(s, MO_16, s->T1, s->A0); + gen_op_ld_v(s, MO_UW, s->T1, s->A0); gen_add_A0_im(s, 2); gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0); - if (dflag =3D=3D MO_16) { + if (dflag =3D=3D MO_UW) { tcg_gen_andi_tl(s->T0, s->T0, 0xffffff); } tcg_gen_st_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.base)); @@ -7590,9 +7590,9 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, cr[0])); if (CODE64(s)) { mod =3D (modrm >> 6) & 3; - ot =3D (mod !=3D 3 ? MO_16 : s->dflag); + ot =3D (mod !=3D 3 ? 
MO_UW : s->dflag); } else { - ot =3D MO_16; + ot =3D MO_UW; } gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1); break; @@ -7619,7 +7619,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) break; } gen_svm_check_intercept(s, pc_start, SVM_EXIT_WRITE_CR0); - gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0); + gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0); gen_helper_lmsw(cpu_env, s->T0); gen_jmp_im(s, s->pc - s->cs_base); gen_eob(s); @@ -7720,7 +7720,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) t0 =3D tcg_temp_local_new(); t1 =3D tcg_temp_local_new(); t2 =3D tcg_temp_local_new(); - ot =3D MO_16; + ot =3D MO_UW; modrm =3D x86_ldub_code(env, s); reg =3D (modrm >> 3) & 7; mod =3D (modrm >> 6) & 3; @@ -7765,10 +7765,10 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) TCGv t0; if (!s->pe || s->vm86) goto illegal_op; - ot =3D dflag !=3D MO_16 ? MO_32 : MO_16; + ot =3D dflag !=3D MO_UW ? MO_32 : MO_UW; modrm =3D x86_ldub_code(env, s); reg =3D ((modrm >> 3) & 7) | rex_r; - gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0); + gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0); t0 =3D tcg_temp_local_new(); gen_update_cc_op(s); if (b =3D=3D 0x102) { @@ -7813,7 +7813,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) /* bndcl */ if (reg >=3D 4 || (prefixes & PREFIX_LOCK) - || s->aflag =3D=3D MO_16) { + || s->aflag =3D=3D MO_UW) { goto illegal_op; } gen_bndck(env, s, modrm, TCG_COND_LTU, cpu_bndl[reg]); @@ -7821,7 +7821,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) /* bndcu */ if (reg >=3D 4 || (prefixes & PREFIX_LOCK) - || s->aflag =3D=3D MO_16) { + || s->aflag =3D=3D MO_UW) { goto illegal_op; } TCGv_i64 notu =3D tcg_temp_new_i64(); @@ -7830,7 +7830,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) tcg_temp_free_i64(notu); } else if (prefixes & PREFIX_DATA) { /* bndmov -- from reg/mem */ - if (reg >=3D 4 || s->aflag =3D=3D MO_16) { + if (reg >=3D 4 || 
s->aflag =3D=3D MO_UW) { goto illegal_op; } if (mod =3D=3D 3) { @@ -7865,7 +7865,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) AddressParts a =3D gen_lea_modrm_0(env, s, modrm); if (reg >=3D 4 || (prefixes & PREFIX_LOCK) - || s->aflag =3D=3D MO_16 + || s->aflag =3D=3D MO_UW || a.base < -1) { goto illegal_op; } @@ -7903,7 +7903,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) /* bndmk */ if (reg >=3D 4 || (prefixes & PREFIX_LOCK) - || s->aflag =3D=3D MO_16) { + || s->aflag =3D=3D MO_UW) { goto illegal_op; } AddressParts a =3D gen_lea_modrm_0(env, s, modrm); @@ -7931,13 +7931,13 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) /* bndcn */ if (reg >=3D 4 || (prefixes & PREFIX_LOCK) - || s->aflag =3D=3D MO_16) { + || s->aflag =3D=3D MO_UW) { goto illegal_op; } gen_bndck(env, s, modrm, TCG_COND_GTU, cpu_bndu[reg]); } else if (prefixes & PREFIX_DATA) { /* bndmov -- to reg/mem */ - if (reg >=3D 4 || s->aflag =3D=3D MO_16) { + if (reg >=3D 4 || s->aflag =3D=3D MO_UW) { goto illegal_op; } if (mod =3D=3D 3) { @@ -7970,7 +7970,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) AddressParts a =3D gen_lea_modrm_0(env, s, modrm); if (reg >=3D 4 || (prefixes & PREFIX_LOCK) - || s->aflag =3D=3D MO_16 + || s->aflag =3D=3D MO_UW || a.base < -1) { goto illegal_op; } @@ -8341,7 +8341,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) reg =3D ((modrm >> 3) & 7) | rex_r; if (s->prefix & PREFIX_DATA) { - ot =3D MO_16; + ot =3D MO_UW; } else { ot =3D mo_64_32(dflag); } diff --git a/target/mips/translate.c b/target/mips/translate.c index 20a9777..525c7fe 100644 --- a/target/mips/translate.c +++ b/target/mips/translate.c @@ -21087,7 +21087,7 @@ static void gen_pool32a5_nanomips_insn(DisasContext= *ctx, int opc, imm =3D sextract32(ctx->opcode, 11, 11); imm =3D (int16_t)(imm << 6) >> 6; if (rt !=3D 0) { - tcg_gen_movi_tl(cpu_gpr[rt], dup_const(MO_16, imm)); + tcg_gen_movi_tl(cpu_gpr[rt], 
dup_const(MO_UW, imm)); } } break; diff --git a/target/ppc/translate/vmx-impl.inc.c b/target/ppc/translate/vmx= -impl.inc.c index 4130dd1..71efef4 100644 --- a/target/ppc/translate/vmx-impl.inc.c +++ b/target/ppc/translate/vmx-impl.inc.c @@ -406,29 +406,29 @@ static void glue(gen_, name)(DisasContext *ctx) = \ GEN_VXFORM_V(vaddubm, MO_UB, tcg_gen_gvec_add, 0, 0); GEN_VXFORM_DUAL_EXT(vaddubm, PPC_ALTIVEC, PPC_NONE, 0, \ vmul10cuq, PPC_NONE, PPC2_ISA300, 0x0000F800) -GEN_VXFORM_V(vadduhm, MO_16, tcg_gen_gvec_add, 0, 1); +GEN_VXFORM_V(vadduhm, MO_UW, tcg_gen_gvec_add, 0, 1); GEN_VXFORM_DUAL(vadduhm, PPC_ALTIVEC, PPC_NONE, \ vmul10ecuq, PPC_NONE, PPC2_ISA300) GEN_VXFORM_V(vadduwm, MO_32, tcg_gen_gvec_add, 0, 2); GEN_VXFORM_V(vaddudm, MO_64, tcg_gen_gvec_add, 0, 3); GEN_VXFORM_V(vsububm, MO_UB, tcg_gen_gvec_sub, 0, 16); -GEN_VXFORM_V(vsubuhm, MO_16, tcg_gen_gvec_sub, 0, 17); +GEN_VXFORM_V(vsubuhm, MO_UW, tcg_gen_gvec_sub, 0, 17); GEN_VXFORM_V(vsubuwm, MO_32, tcg_gen_gvec_sub, 0, 18); GEN_VXFORM_V(vsubudm, MO_64, tcg_gen_gvec_sub, 0, 19); GEN_VXFORM_V(vmaxub, MO_UB, tcg_gen_gvec_umax, 1, 0); -GEN_VXFORM_V(vmaxuh, MO_16, tcg_gen_gvec_umax, 1, 1); +GEN_VXFORM_V(vmaxuh, MO_UW, tcg_gen_gvec_umax, 1, 1); GEN_VXFORM_V(vmaxuw, MO_32, tcg_gen_gvec_umax, 1, 2); GEN_VXFORM_V(vmaxud, MO_64, tcg_gen_gvec_umax, 1, 3); GEN_VXFORM_V(vmaxsb, MO_UB, tcg_gen_gvec_smax, 1, 4); -GEN_VXFORM_V(vmaxsh, MO_16, tcg_gen_gvec_smax, 1, 5); +GEN_VXFORM_V(vmaxsh, MO_UW, tcg_gen_gvec_smax, 1, 5); GEN_VXFORM_V(vmaxsw, MO_32, tcg_gen_gvec_smax, 1, 6); GEN_VXFORM_V(vmaxsd, MO_64, tcg_gen_gvec_smax, 1, 7); GEN_VXFORM_V(vminub, MO_UB, tcg_gen_gvec_umin, 1, 8); -GEN_VXFORM_V(vminuh, MO_16, tcg_gen_gvec_umin, 1, 9); +GEN_VXFORM_V(vminuh, MO_UW, tcg_gen_gvec_umin, 1, 9); GEN_VXFORM_V(vminuw, MO_32, tcg_gen_gvec_umin, 1, 10); GEN_VXFORM_V(vminud, MO_64, tcg_gen_gvec_umin, 1, 11); GEN_VXFORM_V(vminsb, MO_UB, tcg_gen_gvec_smin, 1, 12); -GEN_VXFORM_V(vminsh, MO_16, tcg_gen_gvec_smin, 1, 13); +GEN_VXFORM_V(vminsh, 
MO_UW, tcg_gen_gvec_smin, 1, 13); GEN_VXFORM_V(vminsw, MO_32, tcg_gen_gvec_smin, 1, 14); GEN_VXFORM_V(vminsd, MO_64, tcg_gen_gvec_smin, 1, 15); GEN_VXFORM(vavgub, 1, 16); @@ -531,18 +531,18 @@ GEN_VXFORM(vmulesb, 4, 12); GEN_VXFORM(vmulesh, 4, 13); GEN_VXFORM(vmulesw, 4, 14); GEN_VXFORM_V(vslb, MO_UB, tcg_gen_gvec_shlv, 2, 4); -GEN_VXFORM_V(vslh, MO_16, tcg_gen_gvec_shlv, 2, 5); +GEN_VXFORM_V(vslh, MO_UW, tcg_gen_gvec_shlv, 2, 5); GEN_VXFORM_V(vslw, MO_32, tcg_gen_gvec_shlv, 2, 6); GEN_VXFORM(vrlwnm, 2, 6); GEN_VXFORM_DUAL(vslw, PPC_ALTIVEC, PPC_NONE, \ vrlwnm, PPC_NONE, PPC2_ISA300) GEN_VXFORM_V(vsld, MO_64, tcg_gen_gvec_shlv, 2, 23); GEN_VXFORM_V(vsrb, MO_UB, tcg_gen_gvec_shrv, 2, 8); -GEN_VXFORM_V(vsrh, MO_16, tcg_gen_gvec_shrv, 2, 9); +GEN_VXFORM_V(vsrh, MO_UW, tcg_gen_gvec_shrv, 2, 9); GEN_VXFORM_V(vsrw, MO_32, tcg_gen_gvec_shrv, 2, 10); GEN_VXFORM_V(vsrd, MO_64, tcg_gen_gvec_shrv, 2, 27); GEN_VXFORM_V(vsrab, MO_UB, tcg_gen_gvec_sarv, 2, 12); -GEN_VXFORM_V(vsrah, MO_16, tcg_gen_gvec_sarv, 2, 13); +GEN_VXFORM_V(vsrah, MO_UW, tcg_gen_gvec_sarv, 2, 13); GEN_VXFORM_V(vsraw, MO_32, tcg_gen_gvec_sarv, 2, 14); GEN_VXFORM_V(vsrad, MO_64, tcg_gen_gvec_sarv, 2, 15); GEN_VXFORM(vsrv, 2, 28); @@ -592,18 +592,18 @@ static void glue(gen_, NAME)(DisasContext *ctx) = \ GEN_VXFORM_SAT(vaddubs, MO_UB, add, usadd, 0, 8); GEN_VXFORM_DUAL_EXT(vaddubs, PPC_ALTIVEC, PPC_NONE, 0, \ vmul10uq, PPC_NONE, PPC2_ISA300, 0x0000F800) -GEN_VXFORM_SAT(vadduhs, MO_16, add, usadd, 0, 9); +GEN_VXFORM_SAT(vadduhs, MO_UW, add, usadd, 0, 9); GEN_VXFORM_DUAL(vadduhs, PPC_ALTIVEC, PPC_NONE, \ vmul10euq, PPC_NONE, PPC2_ISA300) GEN_VXFORM_SAT(vadduws, MO_32, add, usadd, 0, 10); GEN_VXFORM_SAT(vaddsbs, MO_UB, add, ssadd, 0, 12); -GEN_VXFORM_SAT(vaddshs, MO_16, add, ssadd, 0, 13); +GEN_VXFORM_SAT(vaddshs, MO_UW, add, ssadd, 0, 13); GEN_VXFORM_SAT(vaddsws, MO_32, add, ssadd, 0, 14); GEN_VXFORM_SAT(vsububs, MO_UB, sub, ussub, 0, 24); -GEN_VXFORM_SAT(vsubuhs, MO_16, sub, ussub, 0, 25); 
+GEN_VXFORM_SAT(vsubuhs, MO_UW, sub, ussub, 0, 25);
 GEN_VXFORM_SAT(vsubuws, MO_32, sub, ussub, 0, 26);
 GEN_VXFORM_SAT(vsubsbs, MO_UB, sub, sssub, 0, 28);
-GEN_VXFORM_SAT(vsubshs, MO_16, sub, sssub, 0, 29);
+GEN_VXFORM_SAT(vsubshs, MO_UW, sub, sssub, 0, 29);
 GEN_VXFORM_SAT(vsubsws, MO_32, sub, sssub, 0, 30);
 GEN_VXFORM(vadduqm, 0, 4);
 GEN_VXFORM(vaddcuq, 0, 5);
@@ -913,7 +913,7 @@ static void glue(gen_, name)(DisasContext *ctx)         \
 }
 GEN_VXFORM_VSPLT(vspltb, MO_UB, 6, 8);
-GEN_VXFORM_VSPLT(vsplth, MO_16, 6, 9);
+GEN_VXFORM_VSPLT(vsplth, MO_UW, 6, 9);
 GEN_VXFORM_VSPLT(vspltw, MO_32, 6, 10);
 GEN_VXFORM_UIMM_SPLAT(vextractub, 6, 8, 15);
 GEN_VXFORM_UIMM_SPLAT(vextractuh, 6, 9, 14);
diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index bb424c8..65da6b3 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -47,7 +47,7 @@
 #define NUM_VEC_ELEMENT_BITS(es) (NUM_VEC_ELEMENT_BYTES(es) * BITS_PER_BYTE)
 #define ES_8  MO_UB
-#define ES_16 MO_16
+#define ES_16 MO_UW
 #define ES_32 MO_32
 #define ES_64 MO_64
 #define ES_128 4
diff --git a/target/s390x/vec.h b/target/s390x/vec.h
index b813054..28e1b1d 100644
--- a/target/s390x/vec.h
+++ b/target/s390x/vec.h
@@ -78,7 +78,7 @@ static inline uint64_t s390_vec_read_element(const S390Vector *v, uint8_t enr,
     switch (es) {
     case MO_UB:
         return s390_vec_read_element8(v, enr);
-    case MO_16:
+    case MO_UW:
         return s390_vec_read_element16(v, enr);
     case MO_32:
         return s390_vec_read_element32(v, enr);
@@ -124,7 +124,7 @@ static inline void s390_vec_write_element(S390Vector *v, uint8_t enr,
     case MO_UB:
         s390_vec_write_element8(v, enr, data);
         break;
-    case MO_16:
+    case MO_UW:
         s390_vec_write_element16(v, enr, data);
         break;
     case MO_32:
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index e4e0845..3d90c4b 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -430,20 +430,20 @@ typedef enum {
     /* Load/store register.
Described here as 3.3.12, but the helper that emits them can transform to 3.3.10 or 3.3.13. */ I3312_STRB =3D 0x38000000 | LDST_ST << 22 | MO_UB << 30, - I3312_STRH =3D 0x38000000 | LDST_ST << 22 | MO_16 << 30, + I3312_STRH =3D 0x38000000 | LDST_ST << 22 | MO_UW << 30, I3312_STRW =3D 0x38000000 | LDST_ST << 22 | MO_32 << 30, I3312_STRX =3D 0x38000000 | LDST_ST << 22 | MO_64 << 30, I3312_LDRB =3D 0x38000000 | LDST_LD << 22 | MO_UB << 30, - I3312_LDRH =3D 0x38000000 | LDST_LD << 22 | MO_16 << 30, + I3312_LDRH =3D 0x38000000 | LDST_LD << 22 | MO_UW << 30, I3312_LDRW =3D 0x38000000 | LDST_LD << 22 | MO_32 << 30, I3312_LDRX =3D 0x38000000 | LDST_LD << 22 | MO_64 << 30, I3312_LDRSBW =3D 0x38000000 | LDST_LD_S_W << 22 | MO_UB << 30, - I3312_LDRSHW =3D 0x38000000 | LDST_LD_S_W << 22 | MO_16 << 30, + I3312_LDRSHW =3D 0x38000000 | LDST_LD_S_W << 22 | MO_UW << 30, I3312_LDRSBX =3D 0x38000000 | LDST_LD_S_X << 22 | MO_UB << 30, - I3312_LDRSHX =3D 0x38000000 | LDST_LD_S_X << 22 | MO_16 << 30, + I3312_LDRSHX =3D 0x38000000 | LDST_LD_S_X << 22 | MO_UW << 30, I3312_LDRSWX =3D 0x38000000 | LDST_LD_S_X << 22 | MO_32 << 30, I3312_LDRVS =3D 0x3c000000 | LDST_LD << 22 | MO_32 << 30, @@ -870,7 +870,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType typ= e, /* * Test all bytes 0x00 or 0xff second. This can match cases that - * might otherwise take 2 or 3 insns for MO_16 or MO_32 below. + * might otherwise take 2 or 3 insns for MO_UW or MO_32 below. */ for (i =3D imm8 =3D 0; i < 8; i++) { uint8_t byte =3D v64 >> (i * 8); @@ -889,7 +889,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType typ= e, * cannot find an expansion there's no point checking a larger * width because we already know by replication it cannot match. 
*/ - if (v64 =3D=3D dup_const(MO_16, v64)) { + if (v64 =3D=3D dup_const(MO_UW, v64)) { uint16_t v16 =3D v64; if (is_shimm16(v16, &cmode, &imm8)) { @@ -1733,7 +1733,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCG= MemOp memop, TCGType ext, if (bswap) { tcg_out_ldst_r(s, I3312_LDRH, data_r, addr_r, otype, off_r); tcg_out_rev16(s, data_r, data_r); - tcg_out_sxt(s, ext, MO_16, data_r, data_r); + tcg_out_sxt(s, ext, MO_UW, data_r, data_r); } else { tcg_out_ldst_r(s, (ext ? I3312_LDRSHX : I3312_LDRSHW), data_r, addr_r, otype, off_r); @@ -1775,7 +1775,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCG= MemOp memop, case MO_UB: tcg_out_ldst_r(s, I3312_STRB, data_r, addr_r, otype, off_r); break; - case MO_16: + case MO_UW: if (bswap && data_r !=3D TCG_REG_XZR) { tcg_out_rev16(s, TCG_REG_TMP, data_r); data_r =3D TCG_REG_TMP; @@ -2190,7 +2190,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, break; case INDEX_op_ext16s_i64: case INDEX_op_ext16s_i32: - tcg_out_sxt(s, ext, MO_16, a0, a1); + tcg_out_sxt(s, ext, MO_UW, a0, a1); break; case INDEX_op_ext_i32_i64: case INDEX_op_ext32s_i64: @@ -2202,7 +2202,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, break; case INDEX_op_ext16u_i64: case INDEX_op_ext16u_i32: - tcg_out_uxt(s, MO_16, a0, a1); + tcg_out_uxt(s, MO_UW, a0, a1); break; case INDEX_op_extu_i32_i64: case INDEX_op_ext32u_i64: diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c index 542ffa8..0bd400e 100644 --- a/tcg/arm/tcg-target.inc.c +++ b/tcg/arm/tcg-target.inc.c @@ -1432,7 +1432,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, = TCGLabelQemuLdst *lb) case MO_UB: argreg =3D tcg_out_arg_reg8(s, argreg, datalo); break; - case MO_16: + case MO_UW: argreg =3D tcg_out_arg_reg16(s, argreg, datalo); break; case MO_32: @@ -1624,7 +1624,7 @@ static inline void tcg_out_qemu_st_index(TCGContext *= s, int cond, TCGMemOp opc, case MO_UB: tcg_out_st8_r(s, cond, datalo, addrlo, addend); break; - case MO_16: + case MO_UW: if 
(bswap) { tcg_out_bswap16st(s, cond, TCG_REG_R0, datalo); tcg_out_st16_r(s, cond, TCG_REG_R0, addrlo, addend); @@ -1669,7 +1669,7 @@ static inline void tcg_out_qemu_st_direct(TCGContext = *s, TCGMemOp opc, case MO_UB: tcg_out_st8_12(s, COND_AL, datalo, addrlo, 0); break; - case MO_16: + case MO_UW: if (bswap) { tcg_out_bswap16st(s, COND_AL, TCG_REG_R0, datalo); tcg_out_st16_8(s, COND_AL, TCG_REG_R0, addrlo, 0); diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c index 0d68ba4..31c3664 100644 --- a/tcg/i386/tcg-target.inc.c +++ b/tcg/i386/tcg-target.inc.c @@ -893,7 +893,7 @@ static bool tcg_out_dup_vec(TCGContext *s, TCGType type= , unsigned vece, tcg_out_vex_modrm(s, OPC_PUNPCKLBW, r, a, a); a =3D r; /* FALLTHRU */ - case MO_16: + case MO_UW: tcg_out_vex_modrm(s, OPC_PUNPCKLWD, r, a, a); a =3D r; /* FALLTHRU */ @@ -927,7 +927,7 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType typ= e, unsigned vece, case MO_32: tcg_out_vex_modrm_offset(s, OPC_VBROADCASTSS, r, 0, base, offs= et); break; - case MO_16: + case MO_UW: tcg_out_vex_modrm_offset(s, OPC_VPINSRW, r, r, base, offset); tcg_out8(s, 0); /* imm8 */ tcg_out_dup_vec(s, type, vece, r, r); @@ -2164,7 +2164,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCG= Reg datalo, TCGReg datahi, tcg_out_modrm_sib_offset(s, OPC_MOVB_EvGv + P_REXB_R + seg, datalo, base, index, 0, ofs); break; - case MO_16: + case MO_UW: if (bswap) { tcg_out_mov(s, TCG_TYPE_I32, scratch, datalo); tcg_out_rolw_8(s, scratch); @@ -2747,15 +2747,15 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode= opc, OPC_PMAXUB, OPC_PMAXUW, OPC_PMAXUD, OPC_UD2 }; static int const shlv_insn[4] =3D { - /* TODO: AVX512 adds support for MO_16. */ + /* TODO: AVX512 adds support for MO_UW. */ OPC_UD2, OPC_UD2, OPC_VPSLLVD, OPC_VPSLLVQ }; static int const shrv_insn[4] =3D { - /* TODO: AVX512 adds support for MO_16. */ + /* TODO: AVX512 adds support for MO_UW. 
*/ OPC_UD2, OPC_UD2, OPC_VPSRLVD, OPC_VPSRLVQ }; static int const sarv_insn[4] =3D { - /* TODO: AVX512 adds support for MO_16, MO_64. */ + /* TODO: AVX512 adds support for MO_UW, MO_64. */ OPC_UD2, OPC_UD2, OPC_VPSRAVD, OPC_UD2 }; static int const shls_insn[4] =3D { @@ -2925,7 +2925,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode o= pc, sub =3D args[3]; goto gen_simd_imm8; case INDEX_op_x86_blend_vec: - if (vece =3D=3D MO_16) { + if (vece =3D=3D MO_UW) { insn =3D OPC_PBLENDW; } else if (vece =3D=3D MO_32) { insn =3D (have_avx2 ? OPC_VPBLENDD : OPC_BLENDPS); @@ -3290,9 +3290,9 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, = unsigned vece) case INDEX_op_shls_vec: case INDEX_op_shrs_vec: - return vece >=3D MO_16; + return vece >=3D MO_UW; case INDEX_op_sars_vec: - return vece >=3D MO_16 && vece <=3D MO_32; + return vece >=3D MO_UW && vece <=3D MO_32; case INDEX_op_shlv_vec: case INDEX_op_shrv_vec: @@ -3314,7 +3314,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, = unsigned vece) case INDEX_op_usadd_vec: case INDEX_op_sssub_vec: case INDEX_op_ussub_vec: - return vece <=3D MO_16; + return vece <=3D MO_UW; case INDEX_op_smin_vec: case INDEX_op_smax_vec: case INDEX_op_umin_vec: @@ -3352,13 +3352,13 @@ static void expand_vec_shi(TCGType type, unsigned v= ece, bool shr, tcgv_vec_arg(t2), tcgv_vec_arg(v1), tcgv_vec_arg(v1)); if (shr) { - tcg_gen_shri_vec(MO_16, t1, t1, imm + 8); - tcg_gen_shri_vec(MO_16, t2, t2, imm + 8); + tcg_gen_shri_vec(MO_UW, t1, t1, imm + 8); + tcg_gen_shri_vec(MO_UW, t2, t2, imm + 8); } else { - tcg_gen_shli_vec(MO_16, t1, t1, imm + 8); - tcg_gen_shli_vec(MO_16, t2, t2, imm + 8); - tcg_gen_shri_vec(MO_16, t1, t1, 8); - tcg_gen_shri_vec(MO_16, t2, t2, 8); + tcg_gen_shli_vec(MO_UW, t1, t1, imm + 8); + tcg_gen_shli_vec(MO_UW, t2, t2, imm + 8); + tcg_gen_shri_vec(MO_UW, t1, t1, 8); + tcg_gen_shri_vec(MO_UW, t2, t2, 8); } vec_gen_3(INDEX_op_x86_packus_vec, type, MO_UB, @@ -3381,8 +3381,8 @@ static void expand_vec_sari(TCGType 
type, unsigned ve= ce, tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(v1)); vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB, tcgv_vec_arg(t2), tcgv_vec_arg(v1), tcgv_vec_arg(v1)); - tcg_gen_sari_vec(MO_16, t1, t1, imm + 8); - tcg_gen_sari_vec(MO_16, t2, t2, imm + 8); + tcg_gen_sari_vec(MO_UW, t1, t1, imm + 8); + tcg_gen_sari_vec(MO_UW, t2, t2, imm + 8); vec_gen_3(INDEX_op_x86_packss_vec, type, MO_UB, tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t2)); tcg_temp_free_vec(t1); @@ -3446,8 +3446,8 @@ static void expand_vec_mul(TCGType type, unsigned vec= e, tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(t2)); vec_gen_3(INDEX_op_x86_punpckl_vec, TCG_TYPE_V128, MO_UB, tcgv_vec_arg(t2), tcgv_vec_arg(t2), tcgv_vec_arg(v2)); - tcg_gen_mul_vec(MO_16, t1, t1, t2); - tcg_gen_shri_vec(MO_16, t1, t1, 8); + tcg_gen_mul_vec(MO_UW, t1, t1, t2); + tcg_gen_shri_vec(MO_UW, t1, t1, 8); vec_gen_3(INDEX_op_x86_packus_vec, TCG_TYPE_V128, MO_UB, tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t1)); tcg_temp_free_vec(t1); @@ -3469,10 +3469,10 @@ static void expand_vec_mul(TCGType type, unsigned v= ece, tcgv_vec_arg(t3), tcgv_vec_arg(v1), tcgv_vec_arg(t4)); vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB, tcgv_vec_arg(t4), tcgv_vec_arg(t4), tcgv_vec_arg(v2)); - tcg_gen_mul_vec(MO_16, t1, t1, t2); - tcg_gen_mul_vec(MO_16, t3, t3, t4); - tcg_gen_shri_vec(MO_16, t1, t1, 8); - tcg_gen_shri_vec(MO_16, t3, t3, 8); + tcg_gen_mul_vec(MO_UW, t1, t1, t2); + tcg_gen_mul_vec(MO_UW, t3, t3, t4); + tcg_gen_shri_vec(MO_UW, t1, t1, 8); + tcg_gen_shri_vec(MO_UW, t3, t3, 8); vec_gen_3(INDEX_op_x86_packus_vec, type, MO_UB, tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t3)); tcg_temp_free_vec(t1); diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c index c6d13ea..1780cb1 100644 --- a/tcg/mips/tcg-target.inc.c +++ b/tcg/mips/tcg-target.inc.c @@ -1383,7 +1383,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, = TCGLabelQemuLdst *l) case MO_UB: i =3D tcg_out_call_iarg_reg8(s, i, 
l->datalo_reg); break; - case MO_16: + case MO_UW: i =3D tcg_out_call_iarg_reg16(s, i, l->datalo_reg); break; case MO_32: @@ -1570,12 +1570,12 @@ static void tcg_out_qemu_st_direct(TCGContext *s, T= CGReg lo, TCGReg hi, tcg_out_opc_imm(s, OPC_SB, lo, base, 0); break; - case MO_16 | MO_BSWAP: + case MO_UW | MO_BSWAP: tcg_out_opc_imm(s, OPC_ANDI, TCG_TMP1, lo, 0xffff); tcg_out_bswap16(s, TCG_TMP1, TCG_TMP1); lo =3D TCG_TMP1; /* FALLTHRU */ - case MO_16: + case MO_UW: tcg_out_opc_imm(s, OPC_SH, lo, base, 0); break; diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c index 9c60c0f..20bc19d 100644 --- a/tcg/riscv/tcg-target.inc.c +++ b/tcg/riscv/tcg-target.inc.c @@ -1104,7 +1104,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, = TCGLabelQemuLdst *l) case MO_UB: tcg_out_ext8u(s, a2, a2); break; - case MO_16: + case MO_UW: tcg_out_ext16u(s, a2, a2); break; default: @@ -1219,7 +1219,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCG= Reg lo, TCGReg hi, case MO_UB: tcg_out_opc_store(s, OPC_SB, base, lo, 0); break; - case MO_16: + case MO_UW: tcg_out_opc_store(s, OPC_SH, base, lo, 0); break; case MO_32: diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c index 479ee2e..85550b5 100644 --- a/tcg/sparc/tcg-target.inc.c +++ b/tcg/sparc/tcg-target.inc.c @@ -885,7 +885,7 @@ static void emit_extend(TCGContext *s, TCGReg r, int op) case MO_UB: tcg_out_arithi(s, r, r, 0xff, ARITH_AND); break; - case MO_16: + case MO_UW: tcg_out_arithi(s, r, r, 16, SHIFT_SLL); tcg_out_arithi(s, r, r, 16, SHIFT_SRL); break; diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c index 9658c36..da409f5 100644 --- a/tcg/tcg-op-gvec.c +++ b/tcg/tcg-op-gvec.c @@ -308,7 +308,7 @@ uint64_t (dup_const)(unsigned vece, uint64_t c) switch (vece) { case MO_UB: return 0x0101010101010101ull * (uint8_t)c; - case MO_16: + case MO_UW: return 0x0001000100010001ull * (uint16_t)c; case MO_32: return 0x0000000100000001ull * (uint32_t)c; @@ -327,7 +327,7 @@ static void 
gen_dup_i32(unsigned vece, TCGv_i32 out, TC= Gv_i32 in) tcg_gen_ext8u_i32(out, in); tcg_gen_muli_i32(out, out, 0x01010101); break; - case MO_16: + case MO_UW: tcg_gen_deposit_i32(out, in, in, 16, 16); break; case MO_32: @@ -345,7 +345,7 @@ static void gen_dup_i64(unsigned vece, TCGv_i64 out, TC= Gv_i64 in) tcg_gen_ext8u_i64(out, in); tcg_gen_muli_i64(out, out, 0x0101010101010101ull); break; - case MO_16: + case MO_UW: tcg_gen_ext16u_i64(out, in); tcg_gen_muli_i64(out, out, 0x0001000100010001ull); break; @@ -558,7 +558,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32= _t oprsz, tcg_gen_extrl_i64_i32(t_32, in_64); } else if (vece =3D=3D MO_UB) { tcg_gen_movi_i32(t_32, in_c & 0xff); - } else if (vece =3D=3D MO_16) { + } else if (vece =3D=3D MO_UW) { tcg_gen_movi_i32(t_32, in_c & 0xffff); } else { tcg_gen_movi_i32(t_32, in_c); @@ -1459,7 +1459,7 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dof= s, uint32_t aofs, case MO_UB: tcg_gen_ld8u_i32(in, cpu_env, aofs); break; - case MO_16: + case MO_UW: tcg_gen_ld16u_i32(in, cpu_env, aofs); break; default: @@ -1526,7 +1526,7 @@ void tcg_gen_gvec_dup16i(uint32_t dofs, uint32_t oprs= z, uint32_t maxsz, uint16_t x) { check_size_align(oprsz, maxsz, dofs); - do_dup(MO_16, dofs, oprsz, maxsz, NULL, NULL, x); + do_dup(MO_UW, dofs, oprsz, maxsz, NULL, NULL, x); } void tcg_gen_gvec_dup8i(uint32_t dofs, uint32_t oprsz, @@ -1579,7 +1579,7 @@ void tcg_gen_vec_add8_i64(TCGv_i64 d, TCGv_i64 a, TCG= v_i64 b) void tcg_gen_vec_add16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b) { - TCGv_i64 m =3D tcg_const_i64(dup_const(MO_16, 0x8000)); + TCGv_i64 m =3D tcg_const_i64(dup_const(MO_UW, 0x8000)); gen_addv_mask(d, a, b, m); tcg_temp_free_i64(m); } @@ -1613,7 +1613,7 @@ void tcg_gen_gvec_add(unsigned vece, uint32_t dofs, u= int32_t aofs, .fniv =3D tcg_gen_add_vec, .fno =3D gen_helper_gvec_add16, .opt_opc =3D vecop_list_add, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_add_i32, .fniv =3D tcg_gen_add_vec, .fno =3D 
gen_helper_gvec_add32, @@ -1644,7 +1644,7 @@ void tcg_gen_gvec_adds(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_add_vec, .fno =3D gen_helper_gvec_adds16, .opt_opc =3D vecop_list_add, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_add_i32, .fniv =3D tcg_gen_add_vec, .fno =3D gen_helper_gvec_adds32, @@ -1685,7 +1685,7 @@ void tcg_gen_gvec_subs(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_sub_vec, .fno =3D gen_helper_gvec_subs16, .opt_opc =3D vecop_list_sub, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_sub_i32, .fniv =3D tcg_gen_sub_vec, .fno =3D gen_helper_gvec_subs32, @@ -1732,7 +1732,7 @@ void tcg_gen_vec_sub8_i64(TCGv_i64 d, TCGv_i64 a, TCG= v_i64 b) void tcg_gen_vec_sub16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b) { - TCGv_i64 m =3D tcg_const_i64(dup_const(MO_16, 0x8000)); + TCGv_i64 m =3D tcg_const_i64(dup_const(MO_UW, 0x8000)); gen_subv_mask(d, a, b, m); tcg_temp_free_i64(m); } @@ -1764,7 +1764,7 @@ void tcg_gen_gvec_sub(unsigned vece, uint32_t dofs, u= int32_t aofs, .fniv =3D tcg_gen_sub_vec, .fno =3D gen_helper_gvec_sub16, .opt_opc =3D vecop_list_sub, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_sub_i32, .fniv =3D tcg_gen_sub_vec, .fno =3D gen_helper_gvec_sub32, @@ -1795,7 +1795,7 @@ void tcg_gen_gvec_mul(unsigned vece, uint32_t dofs, u= int32_t aofs, { .fniv =3D tcg_gen_mul_vec, .fno =3D gen_helper_gvec_mul16, .opt_opc =3D vecop_list_mul, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_mul_i32, .fniv =3D tcg_gen_mul_vec, .fno =3D gen_helper_gvec_mul32, @@ -1824,7 +1824,7 @@ void tcg_gen_gvec_muls(unsigned vece, uint32_t dofs, = uint32_t aofs, { .fniv =3D tcg_gen_mul_vec, .fno =3D gen_helper_gvec_muls16, .opt_opc =3D vecop_list_mul, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_mul_i32, .fniv =3D tcg_gen_mul_vec, .fno =3D gen_helper_gvec_muls32, @@ -1862,7 +1862,7 @@ void tcg_gen_gvec_ssadd(unsigned vece, uint32_t dofs,= uint32_t aofs, { .fniv 
=3D tcg_gen_ssadd_vec, .fno =3D gen_helper_gvec_ssadd16, .opt_opc =3D vecop_list, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fniv =3D tcg_gen_ssadd_vec, .fno =3D gen_helper_gvec_ssadd32, .opt_opc =3D vecop_list, @@ -1888,7 +1888,7 @@ void tcg_gen_gvec_sssub(unsigned vece, uint32_t dofs,= uint32_t aofs, { .fniv =3D tcg_gen_sssub_vec, .fno =3D gen_helper_gvec_sssub16, .opt_opc =3D vecop_list, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fniv =3D tcg_gen_sssub_vec, .fno =3D gen_helper_gvec_sssub32, .opt_opc =3D vecop_list, @@ -1930,7 +1930,7 @@ void tcg_gen_gvec_usadd(unsigned vece, uint32_t dofs,= uint32_t aofs, { .fniv =3D tcg_gen_usadd_vec, .fno =3D gen_helper_gvec_usadd16, .opt_opc =3D vecop_list, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_usadd_i32, .fniv =3D tcg_gen_usadd_vec, .fno =3D gen_helper_gvec_usadd32, @@ -1974,7 +1974,7 @@ void tcg_gen_gvec_ussub(unsigned vece, uint32_t dofs,= uint32_t aofs, { .fniv =3D tcg_gen_ussub_vec, .fno =3D gen_helper_gvec_ussub16, .opt_opc =3D vecop_list, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_ussub_i32, .fniv =3D tcg_gen_ussub_vec, .fno =3D gen_helper_gvec_ussub32, @@ -2002,7 +2002,7 @@ void tcg_gen_gvec_smin(unsigned vece, uint32_t dofs, = uint32_t aofs, { .fniv =3D tcg_gen_smin_vec, .fno =3D gen_helper_gvec_smin16, .opt_opc =3D vecop_list, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_smin_i32, .fniv =3D tcg_gen_smin_vec, .fno =3D gen_helper_gvec_smin32, @@ -2030,7 +2030,7 @@ void tcg_gen_gvec_umin(unsigned vece, uint32_t dofs, = uint32_t aofs, { .fniv =3D tcg_gen_umin_vec, .fno =3D gen_helper_gvec_umin16, .opt_opc =3D vecop_list, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_umin_i32, .fniv =3D tcg_gen_umin_vec, .fno =3D gen_helper_gvec_umin32, @@ -2058,7 +2058,7 @@ void tcg_gen_gvec_smax(unsigned vece, uint32_t dofs, = uint32_t aofs, { .fniv =3D tcg_gen_smax_vec, .fno =3D gen_helper_gvec_smax16, .opt_opc =3D vecop_list, - .vece =3D MO_16 }, + .vece 
=3D MO_UW }, { .fni4 =3D tcg_gen_smax_i32, .fniv =3D tcg_gen_smax_vec, .fno =3D gen_helper_gvec_smax32, @@ -2086,7 +2086,7 @@ void tcg_gen_gvec_umax(unsigned vece, uint32_t dofs, = uint32_t aofs, { .fniv =3D tcg_gen_umax_vec, .fno =3D gen_helper_gvec_umax16, .opt_opc =3D vecop_list, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_umax_i32, .fniv =3D tcg_gen_umax_vec, .fno =3D gen_helper_gvec_umax32, @@ -2127,7 +2127,7 @@ void tcg_gen_vec_neg8_i64(TCGv_i64 d, TCGv_i64 b) void tcg_gen_vec_neg16_i64(TCGv_i64 d, TCGv_i64 b) { - TCGv_i64 m =3D tcg_const_i64(dup_const(MO_16, 0x8000)); + TCGv_i64 m =3D tcg_const_i64(dup_const(MO_UW, 0x8000)); gen_negv_mask(d, b, m); tcg_temp_free_i64(m); } @@ -2160,7 +2160,7 @@ void tcg_gen_gvec_neg(unsigned vece, uint32_t dofs, u= int32_t aofs, .fniv =3D tcg_gen_neg_vec, .fno =3D gen_helper_gvec_neg16, .opt_opc =3D vecop_list, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_neg_i32, .fniv =3D tcg_gen_neg_vec, .fno =3D gen_helper_gvec_neg32, @@ -2206,7 +2206,7 @@ static void tcg_gen_vec_abs8_i64(TCGv_i64 d, TCGv_i64= b) static void tcg_gen_vec_abs16_i64(TCGv_i64 d, TCGv_i64 b) { - gen_absv_mask(d, b, MO_16); + gen_absv_mask(d, b, MO_UW); } void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, uint32_t aofs, @@ -2223,7 +2223,7 @@ void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, u= int32_t aofs, .fniv =3D tcg_gen_abs_vec, .fno =3D gen_helper_gvec_abs16, .opt_opc =3D vecop_list, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_abs_i32, .fniv =3D tcg_gen_abs_vec, .fno =3D gen_helper_gvec_abs32, @@ -2461,7 +2461,7 @@ void tcg_gen_vec_shl8i_i64(TCGv_i64 d, TCGv_i64 a, in= t64_t c) void tcg_gen_vec_shl16i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c) { - uint64_t mask =3D dup_const(MO_16, 0xffff << c); + uint64_t mask =3D dup_const(MO_UW, 0xffff << c); tcg_gen_shli_i64(d, a, c); tcg_gen_andi_i64(d, d, mask); } @@ -2480,7 +2480,7 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D 
tcg_gen_shli_vec, .fno =3D gen_helper_gvec_shl16i, .opt_opc =3D vecop_list, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_shli_i32, .fniv =3D tcg_gen_shli_vec, .fno =3D gen_helper_gvec_shl32i, @@ -2512,7 +2512,7 @@ void tcg_gen_vec_shr8i_i64(TCGv_i64 d, TCGv_i64 a, in= t64_t c) void tcg_gen_vec_shr16i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c) { - uint64_t mask =3D dup_const(MO_16, 0xffff >> c); + uint64_t mask =3D dup_const(MO_UW, 0xffff >> c); tcg_gen_shri_i64(d, a, c); tcg_gen_andi_i64(d, d, mask); } @@ -2531,7 +2531,7 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_shri_vec, .fno =3D gen_helper_gvec_shr16i, .opt_opc =3D vecop_list, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_shri_i32, .fniv =3D tcg_gen_shri_vec, .fno =3D gen_helper_gvec_shr32i, @@ -2570,8 +2570,8 @@ void tcg_gen_vec_sar8i_i64(TCGv_i64 d, TCGv_i64 a, in= t64_t c) void tcg_gen_vec_sar16i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c) { - uint64_t s_mask =3D dup_const(MO_16, 0x8000 >> c); - uint64_t c_mask =3D dup_const(MO_16, 0xffff >> c); + uint64_t s_mask =3D dup_const(MO_UW, 0x8000 >> c); + uint64_t c_mask =3D dup_const(MO_UW, 0xffff >> c); TCGv_i64 s =3D tcg_temp_new_i64(); tcg_gen_shri_i64(d, a, c); @@ -2596,7 +2596,7 @@ void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_sari_vec, .fno =3D gen_helper_gvec_sar16i, .opt_opc =3D vecop_list, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_sari_i32, .fniv =3D tcg_gen_sari_vec, .fno =3D gen_helper_gvec_sar32i, @@ -2884,7 +2884,7 @@ void tcg_gen_gvec_shlv(unsigned vece, uint32_t dofs, = uint32_t aofs, { .fniv =3D tcg_gen_shlv_mod_vec, .fno =3D gen_helper_gvec_shl16v, .opt_opc =3D vecop_list, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_shl_mod_i32, .fniv =3D tcg_gen_shlv_mod_vec, .fno =3D gen_helper_gvec_shl32v, @@ -2947,7 +2947,7 @@ void tcg_gen_gvec_shrv(unsigned vece, uint32_t dofs, = uint32_t aofs, { .fniv =3D 
tcg_gen_shrv_mod_vec, .fno =3D gen_helper_gvec_shr16v, .opt_opc =3D vecop_list, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_shr_mod_i32, .fniv =3D tcg_gen_shrv_mod_vec, .fno =3D gen_helper_gvec_shr32v, @@ -3010,7 +3010,7 @@ void tcg_gen_gvec_sarv(unsigned vece, uint32_t dofs, = uint32_t aofs, { .fniv =3D tcg_gen_sarv_mod_vec, .fno =3D gen_helper_gvec_sar16v, .opt_opc =3D vecop_list, - .vece =3D MO_16 }, + .vece =3D MO_UW }, { .fni4 =3D tcg_gen_sar_mod_i32, .fniv =3D tcg_gen_sarv_mod_vec, .fno =3D gen_helper_gvec_sar32v, diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c index d7ffc9e..b0a4d98 100644 --- a/tcg/tcg-op-vec.c +++ b/tcg/tcg-op-vec.c @@ -270,7 +270,7 @@ void tcg_gen_dup32i_vec(TCGv_vec r, uint32_t a) void tcg_gen_dup16i_vec(TCGv_vec r, uint32_t a) { - do_dupi_vec(r, MO_REG, dup_const(MO_16, a)); + do_dupi_vec(r, MO_REG, dup_const(MO_UW, a)); } void tcg_gen_dup8i_vec(TCGv_vec r, uint32_t a) diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c index 61eda33..21d448c 100644 --- a/tcg/tcg-op.c +++ b/tcg/tcg-op.c @@ -2723,7 +2723,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemO= p op, bool is64, bool st) case MO_UB: op &=3D ~MO_BSWAP; break; - case MO_16: + case MO_UW: break; case MO_32: if (!is64) { @@ -2810,7 +2810,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCG= Arg idx, TCGMemOp memop) if ((orig_memop ^ memop) & MO_BSWAP) { switch (orig_memop & MO_SIZE) { - case MO_16: + case MO_UW: tcg_gen_bswap16_i32(val, val); if (orig_memop & MO_SIGN) { tcg_gen_ext16s_i32(val, val); @@ -2837,7 +2837,7 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCG= Arg idx, TCGMemOp memop) if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) { swap =3D tcg_temp_new_i32(); switch (memop & MO_SIZE) { - case MO_16: + case MO_UW: tcg_gen_ext16u_i32(swap, val); tcg_gen_bswap16_i32(swap, swap); break; @@ -2890,7 +2890,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCG= Arg idx, TCGMemOp memop) if ((orig_memop ^ memop) & MO_BSWAP) { switch 
(orig_memop & MO_SIZE) { - case MO_16: + case MO_UW: tcg_gen_bswap16_i64(val, val); if (orig_memop & MO_SIGN) { tcg_gen_ext16s_i64(val, val); @@ -2928,7 +2928,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCG= Arg idx, TCGMemOp memop) if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) { swap =3D tcg_temp_new_i64(); switch (memop & MO_SIZE) { - case MO_16: + case MO_UW: tcg_gen_ext16u_i64(swap, val); tcg_gen_bswap16_i64(swap, swap); break; @@ -3025,8 +3025,8 @@ typedef void (*gen_atomic_op_i64)(TCGv_i64, TCGv_env,= TCGv, TCGv_i64); static void * const table_cmpxchg[16] =3D { [MO_UB] =3D gen_helper_atomic_cmpxchgb, - [MO_16 | MO_LE] =3D gen_helper_atomic_cmpxchgw_le, - [MO_16 | MO_BE] =3D gen_helper_atomic_cmpxchgw_be, + [MO_UW | MO_LE] =3D gen_helper_atomic_cmpxchgw_le, + [MO_UW | MO_BE] =3D gen_helper_atomic_cmpxchgw_be, [MO_32 | MO_LE] =3D gen_helper_atomic_cmpxchgl_le, [MO_32 | MO_BE] =3D gen_helper_atomic_cmpxchgl_be, WITH_ATOMIC64([MO_64 | MO_LE] =3D gen_helper_atomic_cmpxchgq_le) @@ -3249,8 +3249,8 @@ static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr,= TCGv_i64 val, #define GEN_ATOMIC_HELPER(NAME, OP, NEW) \ static void * const table_##NAME[16] =3D { \ [MO_UB] =3D gen_helper_atomic_##NAME##b, = \ - [MO_16 | MO_LE] =3D gen_helper_atomic_##NAME##w_le, \ - [MO_16 | MO_BE] =3D gen_helper_atomic_##NAME##w_be, \ + [MO_UW | MO_LE] =3D gen_helper_atomic_##NAME##w_le, \ + [MO_UW | MO_BE] =3D gen_helper_atomic_##NAME##w_be, \ [MO_32 | MO_LE] =3D gen_helper_atomic_##NAME##l_le, \ [MO_32 | MO_BE] =3D gen_helper_atomic_##NAME##l_be, \ WITH_ATOMIC64([MO_64 | MO_LE] =3D gen_helper_atomic_##NAME##q_le) \ diff --git a/tcg/tcg.h b/tcg/tcg.h index 5636d6b..a378887 100644 --- a/tcg/tcg.h +++ b/tcg/tcg.h @@ -1303,7 +1303,7 @@ uint64_t dup_const(unsigned vece, uint64_t c); #define dup_const(VECE, C) \ (__builtin_constant_p(VECE) \ ? ((VECE) =3D=3D MO_UB ? 0x0101010101010101ull * (uint8_t)(C) \ - : (VECE) =3D=3D MO_16 ? 
0x0001000100010001ull * (uint16_t)(C) \ + : (VECE) == MO_UW ? 0x0001000100010001ull * (uint16_t)(C) \ : (VECE) == MO_32 ? 0x0000000100000001ull * (uint32_t)(C) \ : dup_const(VECE, C)) \ : dup_const(VECE, C)) -- 1.8.3.1

From nobody Mon Feb 9 05:43:11 2026
Subject: [Qemu-devel] [PATCH v2 03/20] tcg: Replace MO_32 with MO_UL alias
Date: Mon, 22 Jul 2019 15:41:47 +0000
Message-ID: <1563810105644.28725@bt.com>
Cc: peter.maydell@linaro.org, walling@linux.ibm.com, mst@redhat.com, palmer@sifive.com, mark.cave-ayland@ilande.co.uk, Alistair.Francis@wdc.com, arikalo@wavecomp.com, david@redhat.com, pasic@linux.ibm.com, borntraeger@de.ibm.com, rth@twiddle.net, atar4qemu@gmail.com, ehabkost@redhat.com, sw@weilnetz.de, qemu-s390x@nongnu.org, qemu-arm@nongnu.org, david@gibson.dropbear.id.au, qemu-riscv@nongnu.org, cohuck@redhat.com, claudio.fontana@huawei.com, alex.williamson@redhat.com, qemu-ppc@nongnu.org, amarkovic@wavecomp.com, pbonzini@redhat.com, aurelien@aurel32.net

Preparation for splitting MO_32 out from TCGMemOp into a new accelerator-independent MemOp.

As MO_32 will be a value of MemOp, existing TCGMemOp comparisons and coercions will trigger -Wenum-compare and -Wenum-conversion.
Signed-off-by: Tony Nguyen
---
 target/arm/sve_helper.c             |   6 +-
 target/arm/translate-a64.c          | 148 +++++++++++++++++-------------
 target/arm/translate-sve.c          |  12 +--
 target/arm/translate-vfp.inc.c      |   4 +-
 target/arm/translate.c              |  34 ++++----
 target/i386/translate.c             | 150 ++++++++++++++++++------------
 target/ppc/translate/vmx-impl.inc.c |  28 +++---
 target/ppc/translate/vsx-impl.inc.c |   4 +-
 target/s390x/translate.c            |   4 +-
 target/s390x/translate_vx.inc.c     |   2 +-
 target/s390x/vec.h                  |   4 +-
 tcg/aarch64/tcg-target.inc.c        |  20 ++--
 tcg/arm/tcg-target.inc.c            |   6 +-
 tcg/i386/tcg-target.inc.c           |  28 +++---
 tcg/mips/tcg-target.inc.c           |   6 +-
 tcg/ppc/tcg-target.inc.c            |   2 +-
 tcg/riscv/tcg-target.inc.c          |   2 +-
 tcg/sparc/tcg-target.inc.c          |   2 +-
 tcg/tcg-op-gvec.c                   |  64 +++++++--------
 tcg/tcg-op-vec.c                    |   6 +-
 tcg/tcg-op.c                        |  18 ++---
 tcg/tcg.h                           |   2 +-
 22 files changed, 276 insertions(+), 276 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c index f6bef3d..fa705c4 100644 --- a/target/arm/sve_helper.c +++ b/target/arm/sve_helper.c @@ -1561,7 +1561,7 @@ void HELPER(sve_cpy_m_s)(void *vd, void *vn, void *vg, uint64_t *d = vd, *n = vn; uint8_t *pg = vg; - mm = dup_const(MO_32, mm); + mm = dup_const(MO_UL, mm); for (i = 0; i < opr_sz; i += 1) { uint64_t nn = n[i]; uint64_t pp = expand_pred_s(pg[H1(i)]); @@ -1612,7 +1612,7 @@ void HELPER(sve_cpy_z_s)(void *vd, void *vg, uint64_t val, uint32_t desc) uint64_t *d = vd; uint8_t *pg = vg; - val = dup_const(MO_32, val); + val = dup_const(MO_UL, val); for (i = 0; i < opr_sz; i += 1) { d[i] = val & expand_pred_s(pg[H1(i)]); } @@ -5123,7 +5123,7 @@ static inline void sve_ldff1_zs(CPUARMState *env, void *vd, void *vg, void *vm, target_ulong addr; /* Skip to the first true predicate. */ - reg_off = find_next_active(vg, 0, reg_max, MO_32); + reg_off = find_next_active(vg, 0, reg_max, MO_UL); if (likely(reg_off < reg_max)) { /* Perform one normal read, which will fault or not.
*/ set_helper_retaddr(ra); diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c index 3acfccb..0b92e6d 100644 --- a/target/arm/translate-a64.c +++ b/target/arm/translate-a64.c @@ -484,7 +484,7 @@ static TCGv_i32 read_fp_sreg(DisasContext *s, int reg) { TCGv_i32 v =3D tcg_temp_new_i32(); - tcg_gen_ld_i32(v, cpu_env, fp_reg_offset(s, reg, MO_32)); + tcg_gen_ld_i32(v, cpu_env, fp_reg_offset(s, reg, MO_UL)); return v; } @@ -999,7 +999,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 = tcg_dest, int srcidx, case MO_UW: tcg_gen_ld16u_i64(tcg_dest, cpu_env, vect_off); break; - case MO_32: + case MO_UL: tcg_gen_ld32u_i64(tcg_dest, cpu_env, vect_off); break; case MO_SB: @@ -1008,7 +1008,7 @@ static void read_vec_element(DisasContext *s, TCGv_i6= 4 tcg_dest, int srcidx, case MO_SW: tcg_gen_ld16s_i64(tcg_dest, cpu_env, vect_off); break; - case MO_32|MO_SIGN: + case MO_SL: tcg_gen_ld32s_i64(tcg_dest, cpu_env, vect_off); break; case MO_64: @@ -1037,8 +1037,8 @@ static void read_vec_element_i32(DisasContext *s, TCG= v_i32 tcg_dest, int srcidx, case MO_SW: tcg_gen_ld16s_i32(tcg_dest, cpu_env, vect_off); break; - case MO_32: - case MO_32|MO_SIGN: + case MO_UL: + case MO_SL: tcg_gen_ld_i32(tcg_dest, cpu_env, vect_off); break; default: @@ -1058,7 +1058,7 @@ static void write_vec_element(DisasContext *s, TCGv_i= 64 tcg_src, int destidx, case MO_UW: tcg_gen_st16_i64(tcg_src, cpu_env, vect_off); break; - case MO_32: + case MO_UL: tcg_gen_st32_i64(tcg_src, cpu_env, vect_off); break; case MO_64: @@ -1080,7 +1080,7 @@ static void write_vec_element_i32(DisasContext *s, TC= Gv_i32 tcg_src, case MO_UW: tcg_gen_st16_i32(tcg_src, cpu_env, vect_off); break; - case MO_32: + case MO_UL: tcg_gen_st_i32(tcg_src, cpu_env, vect_off); break; default: @@ -5299,7 +5299,7 @@ static void handle_fp_compare(DisasContext *s, int si= ze, } switch (size) { - case MO_32: + case MO_UL: if (signal_all_nans) { gen_helper_vfp_cmpes_a64(tcg_flags, tcg_vn, tcg_vm, fpst); } else { @@ -5354,7 
+5354,7 @@ static void disas_fp_compare(DisasContext *s, uint32_= t insn) switch (type) { case 0: - size =3D MO_32; + size =3D MO_UL; break; case 1: size =3D MO_64; @@ -5405,7 +5405,7 @@ static void disas_fp_ccomp(DisasContext *s, uint32_t = insn) switch (type) { case 0: - size =3D MO_32; + size =3D MO_UL; break; case 1: size =3D MO_64; @@ -5471,7 +5471,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t i= nsn) switch (type) { case 0: - sz =3D MO_32; + sz =3D MO_UL; break; case 1: sz =3D MO_64; @@ -6276,7 +6276,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t in= sn) switch (type) { case 0: - sz =3D MO_32; + sz =3D MO_UL; break; case 1: sz =3D MO_64; @@ -6581,7 +6581,7 @@ static void handle_fmov(DisasContext *s, int rd, int = rn, int type, bool itof) switch (type) { case 0: /* 32 bit */ - tcg_gen_ld32u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_32)= ); + tcg_gen_ld32u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_UL)= ); break; case 1: /* 64 bit */ @@ -7030,7 +7030,7 @@ static TCGv_i32 do_reduction_op(DisasContext *s, int = fpopcode, int rn, { if (esize =3D=3D size) { int element; - TCGMemOp msize =3D esize =3D=3D 16 ? MO_UW : MO_32; + TCGMemOp msize =3D esize =3D=3D 16 ? MO_UW : MO_UL; TCGv_i32 tcg_elem; /* We should have one register left here */ @@ -7702,7 +7702,7 @@ static void disas_simd_scalar_pairwise(DisasContext *= s, uint32_t insn) size =3D MO_UW; } } else { - size =3D extract32(size, 0, 1) ? MO_64 : MO_32; + size =3D extract32(size, 0, 1) ? MO_64 : MO_UL; } if (!fp_access_check(s)) { @@ -8181,7 +8181,7 @@ static void handle_simd_qshl(DisasContext *s, bool sc= alar, bool is_q, } }; NeonGenTwoOpEnvFn *genfn =3D fns[src_unsigned][dst_unsigned][size]; - TCGMemOp memop =3D scalar ? size : MO_32; + TCGMemOp memop =3D scalar ? size : MO_UL; int maxpass =3D scalar ? 1 : is_q ? 
4 : 2; for (pass =3D 0; pass < maxpass; pass++) { @@ -8204,7 +8204,7 @@ static void handle_simd_qshl(DisasContext *s, bool sc= alar, bool is_q, } write_fp_sreg(s, rd, tcg_op); } else { - write_vec_element_i32(s, tcg_op, rd, pass, MO_32); + write_vec_element_i32(s, tcg_op, rd, pass, MO_UL); } tcg_temp_free_i32(tcg_op); @@ -8264,7 +8264,7 @@ static void handle_simd_intfp_conv(DisasContext *s, i= nt rd, int rn, read_vec_element_i32(s, tcg_int32, rn, pass, mop); switch (size) { - case MO_32: + case MO_UL: if (fracbits) { if (is_signed) { gen_helper_vfp_sltos(tcg_float, tcg_int32, @@ -8337,7 +8337,7 @@ static void handle_simd_shift_intfp_conv(DisasContext= *s, bool is_scalar, return; } } else if (immh & 4) { - size =3D MO_32; + size =3D MO_UL; } else if (immh & 2) { size =3D MO_UW; if (!dc_isar_feature(aa64_fp16, s)) { @@ -8382,7 +8382,7 @@ static void handle_simd_shift_fpint_conv(DisasContext= *s, bool is_scalar, return; } } else if (immh & 0x4) { - size =3D MO_32; + size =3D MO_UL; } else if (immh & 0x2) { size =3D MO_UW; if (!dc_isar_feature(aa64_fp16, s)) { @@ -8436,7 +8436,7 @@ static void handle_simd_shift_fpint_conv(DisasContext= *s, bool is_scalar, fn =3D gen_helper_vfp_toshh; } break; - case MO_32: + case MO_UL: if (is_u) { fn =3D gen_helper_vfp_touls; } else { @@ -8588,8 +8588,8 @@ static void disas_simd_scalar_three_reg_diff(DisasCon= text *s, uint32_t insn) TCGv_i64 tcg_op2 =3D tcg_temp_new_i64(); TCGv_i64 tcg_res =3D tcg_temp_new_i64(); - read_vec_element(s, tcg_op1, rn, 0, MO_32 | MO_SIGN); - read_vec_element(s, tcg_op2, rm, 0, MO_32 | MO_SIGN); + read_vec_element(s, tcg_op1, rn, 0, MO_SL); + read_vec_element(s, tcg_op2, rm, 0, MO_SL); tcg_gen_mul_i64(tcg_res, tcg_op1, tcg_op2); gen_helper_neon_addl_saturate_s64(tcg_res, cpu_env, tcg_res, tcg_r= es); @@ -8631,7 +8631,7 @@ static void disas_simd_scalar_three_reg_diff(DisasCon= text *s, uint32_t insn) case 0x9: /* SQDMLAL, SQDMLAL2 */ { TCGv_i64 tcg_op3 =3D tcg_temp_new_i64(); - read_vec_element(s, tcg_op3, 
rd, 0, MO_32);
+    read_vec_element(s, tcg_op3, rd, 0, MO_UL);
     gen_helper_neon_addl_saturate_s32(tcg_res, cpu_env, tcg_res, tcg_op3);
     tcg_temp_free_i64(tcg_op3);
@@ -8831,8 +8831,8 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
         TCGv_i32 tcg_op2 = tcg_temp_new_i32();
         TCGv_i32 tcg_res = tcg_temp_new_i32();
-        read_vec_element_i32(s, tcg_op1, rn, pass, MO_32);
-        read_vec_element_i32(s, tcg_op2, rm, pass, MO_32);
+        read_vec_element_i32(s, tcg_op1, rn, pass, MO_UL);
+        read_vec_element_i32(s, tcg_op2, rm, pass, MO_UL);
         switch (fpopcode) {
         case 0x39: /* FMLS */
@@ -8840,7 +8840,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
             gen_helper_vfp_negs(tcg_op1, tcg_op1);
             /* fall through */
         case 0x19: /* FMLA */
-            read_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+            read_vec_element_i32(s, tcg_res, rd, pass, MO_UL);
             gen_helper_vfp_muladds(tcg_res, tcg_op1, tcg_op2, tcg_res, fpst);
             break;
@@ -8908,7 +8908,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
             write_vec_element(s, tcg_tmp, rd, pass, MO_64);
             tcg_temp_free_i64(tcg_tmp);
         } else {
-            write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+            write_vec_element_i32(s, tcg_res, rd, pass, MO_UL);
         }
         tcg_temp_free_i32(tcg_res);
@@ -9557,7 +9557,7 @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
     }
     for (pass = 0; pass < maxpasses; pass++) {
-        read_vec_element_i32(s, tcg_op, rn, pass, MO_32);
+        read_vec_element_i32(s, tcg_op, rn, pass, MO_UL);
         switch (opcode) {
         case 0x3c: /* URECPE */
@@ -9579,7 +9579,7 @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
         if (is_scalar) {
             write_fp_sreg(s, rd, tcg_res);
         } else {
-            write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+            write_vec_element_i32(s, tcg_res, rd, pass, MO_UL);
         }
     }
     tcg_temp_free_i32(tcg_res);
@@ -9693,7 +9693,7 @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
     }
     for (pass = 0; pass < 2; pass++) {
-        write_vec_element_i32(s, tcg_res[pass], rd, destelt + pass, MO_32);
+        write_vec_element_i32(s, tcg_res[pass], rd, destelt + pass, MO_UL);
         tcg_temp_free_i32(tcg_res[pass]);
     }
     clear_vec_high(s, is_q, rd);
@@ -9740,8 +9740,8 @@ static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,
             read_vec_element_i32(s, tcg_rn, rn, pass, size);
             read_vec_element_i32(s, tcg_rd, rd, pass, size);
         } else {
-            read_vec_element_i32(s, tcg_rn, rn, pass, MO_32);
-            read_vec_element_i32(s, tcg_rd, rd, pass, MO_32);
+            read_vec_element_i32(s, tcg_rn, rn, pass, MO_UL);
+            read_vec_element_i32(s, tcg_rd, rd, pass, MO_UL);
         }
         if (is_u) { /* USQADD */
@@ -9779,7 +9779,7 @@ static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,
             write_vec_element(s, tcg_zero, rd, 0, MO_64);
             tcg_temp_free_i64(tcg_zero);
         }
-        write_vec_element_i32(s, tcg_rd, rd, pass, MO_32);
+        write_vec_element_i32(s, tcg_rd, rd, pass, MO_UL);
     }
     tcg_temp_free_i32(tcg_rd);
     tcg_temp_free_i32(tcg_rn);
@@ -10347,7 +10347,7 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
             TCGv_i64 tcg_op1 = tcg_temp_new_i64();
             TCGv_i64 tcg_op2 = tcg_temp_new_i64();
             TCGv_i64 tcg_passres;
-            TCGMemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);
+            TCGMemOp memop = is_u ? MO_UL : MO_SL;
             int elt = pass + is_q * 2;
@@ -10426,8 +10426,8 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
             TCGv_i64 tcg_passres;
             int elt = pass + is_q * 2;
-            read_vec_element_i32(s, tcg_op1, rn, elt, MO_32);
-            read_vec_element_i32(s, tcg_op2, rm, elt, MO_32);
+            read_vec_element_i32(s, tcg_op1, rn, elt, MO_UL);
+            read_vec_element_i32(s, tcg_op2, rm, elt, MO_UL);
             if (accop == 0) {
                 tcg_passres = tcg_res[pass];
@@ -10547,7 +10547,7 @@ static void handle_3rd_wide(DisasContext *s, int is_q, int is_u, int size,
         NeonGenWidenFn *widenfn = widenfns[size][is_u];
         read_vec_element(s, tcg_op1, rn, pass, MO_64);
-        read_vec_element_i32(s, tcg_op2, rm, part + pass, MO_32);
+        read_vec_element_i32(s, tcg_op2, rm, part + pass, MO_UL);
         widenfn(tcg_op2_wide, tcg_op2);
         tcg_temp_free_i32(tcg_op2);
         tcg_res[pass] = tcg_temp_new_i64();
@@ -10603,7 +10603,7 @@ static void handle_3rd_narrowing(DisasContext *s, int is_q, int is_u, int size,
     }
     for (pass = 0; pass < 2; pass++) {
-        write_vec_element_i32(s, tcg_res[pass], rd, pass + part, MO_32);
+        write_vec_element_i32(s, tcg_res[pass], rd, pass + part, MO_UL);
         tcg_temp_free_i32(tcg_res[pass]);
     }
     clear_vec_high(s, is_q, rd);
@@ -10860,8 +10860,8 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
         int passreg = pass < (maxpass / 2) ? rn : rm;
         int passelt = (is_q && (pass & 1)) ? 2 : 0;
-        read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_32);
-        read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_32);
+        read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_UL);
+        read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_UL);
         tcg_res[pass] = tcg_temp_new_i32();
         switch (opcode) {
@@ -10925,7 +10925,7 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
     }
     for (pass = 0; pass < maxpass; pass++) {
-        write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_32);
+        write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_UL);
         tcg_temp_free_i32(tcg_res[pass]);
     }
     clear_vec_high(s, is_q, rd);
@@ -10971,7 +10971,7 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_32,
+        handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_UL,
                                rn, rm, rd);
         return;
     case 0x1b: /* FMULX */
@@ -11174,8 +11174,8 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
         NeonGenTwoOpFn *genfn = NULL;
         NeonGenTwoOpEnvFn *genenvfn = NULL;
-        read_vec_element_i32(s, tcg_op1, rn, pass, MO_32);
-        read_vec_element_i32(s, tcg_op2, rm, pass, MO_32);
+        read_vec_element_i32(s, tcg_op1, rn, pass, MO_UL);
+        read_vec_element_i32(s, tcg_op2, rm, pass, MO_UL);
         switch (opcode) {
         case 0x0: /* SHADD, UHADD */
@@ -11292,11 +11292,11 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
                 tcg_gen_add_i32,
             };
-            read_vec_element_i32(s, tcg_op1, rd, pass, MO_32);
+            read_vec_element_i32(s, tcg_op1, rd, pass, MO_UL);
             fns[size](tcg_res, tcg_op1, tcg_res);
         }
-        write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+        write_vec_element_i32(s, tcg_res, rd, pass, MO_UL);
         tcg_temp_free_i32(tcg_res);
         tcg_temp_free_i32(tcg_op1);
@@ -11578,7 +11578,7 @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
         break;
     case 0x02: /* SDOT (vector) */
     case 0x12: /* UDOT (vector) */
-        if (size != MO_32) {
+        if (size != MO_UL) {
             unallocated_encoding(s);
             return;
         }
@@ -11709,7 +11709,7 @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
             TCGv_i32 tcg_op = tcg_temp_new_i32();
             tcg_res[pass] = tcg_temp_new_i64();
-            read_vec_element_i32(s, tcg_op, rn, srcelt + pass, MO_32);
+            read_vec_element_i32(s, tcg_op, rn, srcelt + pass, MO_UL);
             gen_helper_vfp_fcvtds(tcg_res[pass], tcg_op, cpu_env);
             tcg_temp_free_i32(tcg_op);
         }
@@ -11732,7 +11732,7 @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
                                            fpst, ahp);
         }
         for (pass = 0; pass < 4; pass++) {
-            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_32);
+            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_UL);
             tcg_temp_free_i32(tcg_res[pass]);
         }
@@ -11771,7 +11771,7 @@ static void handle_rev(DisasContext *s, int opcode, bool u,
             case MO_UW:
                 tcg_gen_bswap16_i64(tcg_tmp, tcg_tmp);
                 break;
-            case MO_32:
+            case MO_UL:
                 tcg_gen_bswap32_i64(tcg_tmp, tcg_tmp);
                 break;
             case MO_64:
@@ -11900,7 +11900,7 @@ static void handle_shll(DisasContext *s, bool is_q, int size, int rn, int rd)
         NeonGenWidenFn *widenfn = widenfns[size];
         TCGv_i32 tcg_op = tcg_temp_new_i32();
-        read_vec_element_i32(s, tcg_op, rn, part + pass, MO_32);
+        read_vec_element_i32(s, tcg_op, rn, part + pass, MO_UL);
         tcg_res[pass] = tcg_temp_new_i64();
         widenfn(tcg_res[pass], tcg_op);
         tcg_gen_shli_i64(tcg_res[pass], tcg_res[pass], 8 << size);
@@ -12251,7 +12251,7 @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
             TCGv_i32 tcg_res = tcg_temp_new_i32();
             TCGCond cond;
-            read_vec_element_i32(s, tcg_op, rn, pass, MO_32);
+            read_vec_element_i32(s, tcg_op, rn, pass, MO_UL);
             if (size == 2) {
                 /* Special cases for 32 bit elements */
@@ -12418,7 +12418,7 @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
                 }
             }
-            write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+            write_vec_element_i32(s, tcg_res, rd, pass, MO_UL);
             tcg_temp_free_i32(tcg_res);
             tcg_temp_free_i32(tcg_op);
@@ -12816,7 +12816,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         break;
     case 0x0e: /* SDOT */
     case 0x1e: /* UDOT */
-        if (is_scalar || size != MO_32 || !dc_isar_feature(aa64_dp, s)) {
+        if (is_scalar || size != MO_UL || !dc_isar_feature(aa64_dp, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -12835,7 +12835,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
     case 0x04: /* FMLSL */
     case 0x18: /* FMLAL2 */
     case 0x1c: /* FMLSL2 */
-        if (is_scalar || size != MO_32 || !dc_isar_feature(aa64_fhm, s)) {
+        if (is_scalar || size != MO_UL || !dc_isar_feature(aa64_fhm, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -12855,7 +12855,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             size = MO_UW;
             is_fp16 = true;
             break;
-        case MO_32: /* single precision */
+        case MO_UL: /* single precision */
         case MO_64: /* double precision */
             break;
         default:
@@ -12868,7 +12868,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         /* Each indexable element is a complex pair. */
         size += 1;
         switch (size) {
-        case MO_32:
+        case MO_UL:
             if (h && !is_q) {
                 unallocated_encoding(s);
                 return;
@@ -12902,7 +12902,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
     case MO_UW:
         index = h << 2 | l << 1 | m;
         break;
-    case MO_32:
+    case MO_UL:
         index = h << 1 | l;
         rm |= m << 4;
         break;
@@ -13038,7 +13038,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             TCGv_i32 tcg_op = tcg_temp_new_i32();
             TCGv_i32 tcg_res = tcg_temp_new_i32();
-            read_vec_element_i32(s, tcg_op, rn, pass, is_scalar ? size : MO_32);
+            read_vec_element_i32(s, tcg_op, rn, pass, is_scalar ? size : MO_UL);
             switch (16 * u + opcode) {
             case 0x08: /* MUL */
@@ -13060,7 +13060,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 if (opcode == 0x8) {
                     break;
                 }
-                read_vec_element_i32(s, tcg_op, rd, pass, MO_32);
+                read_vec_element_i32(s, tcg_op, rd, pass, MO_UL);
                 genfn = fns[size - 1][is_sub];
                 genfn(tcg_res, tcg_op, tcg_res);
                 break;
@@ -13068,7 +13068,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             case 0x05: /* FMLS */
             case 0x01: /* FMLA */
                 read_vec_element_i32(s, tcg_res, rd, pass,
-                                     is_scalar ? size : MO_32);
+                                     is_scalar ? size : MO_UL);
                 switch (size) {
                 case 1:
                     if (opcode == 0x5) {
@@ -13153,7 +13153,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 break;
             case 0x1d: /* SQRDMLAH */
                 read_vec_element_i32(s, tcg_res, rd, pass,
-                                     is_scalar ? size : MO_32);
+                                     is_scalar ? size : MO_UL);
                 if (size == 1) {
                     gen_helper_neon_qrdmlah_s16(tcg_res, cpu_env, tcg_op, tcg_idx, tcg_res);
@@ -13164,7 +13164,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 break;
             case 0x1f: /* SQRDMLSH */
                 read_vec_element_i32(s, tcg_res, rd, pass,
-                                     is_scalar ? size : MO_32);
+                                     is_scalar ? size : MO_UL);
                 if (size == 1) {
                     gen_helper_neon_qrdmlsh_s16(tcg_res, cpu_env, tcg_op, tcg_idx, tcg_res);
@@ -13180,7 +13180,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             if (is_scalar) {
                 write_fp_sreg(s, rd, tcg_res);
             } else {
-                write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+                write_vec_element_i32(s, tcg_res, rd, pass, MO_UL);
             }
             tcg_temp_free_i32(tcg_op);
@@ -13194,7 +13194,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         TCGv_i64 tcg_res[2];
         int pass;
         bool satop = extract32(opcode, 0, 1);
-        TCGMemOp memop = MO_32;
+        TCGMemOp memop = MO_UL;
         if (satop || !u) {
             memop |= MO_SIGN;
@@ -13288,7 +13288,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 read_vec_element_i32(s, tcg_op, rn, pass, size);
             } else {
                 read_vec_element_i32(s, tcg_op, rn,
-                                     pass + (is_q * 2), MO_32);
+                                     pass + (is_q * 2), MO_UL);
             }
             tcg_res[pass] = tcg_temp_new_i64();
@@ -13780,19 +13780,19 @@ static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
         tcg_res = tcg_temp_new_i32();
         tcg_zero = tcg_const_i32(0);
-        read_vec_element_i32(s, tcg_op1, rn, 3, MO_32);
-        read_vec_element_i32(s, tcg_op2, rm, 3, MO_32);
-        read_vec_element_i32(s, tcg_op3, ra, 3, MO_32);
+        read_vec_element_i32(s, tcg_op1, rn, 3, MO_UL);
+        read_vec_element_i32(s, tcg_op2, rm, 3, MO_UL);
+        read_vec_element_i32(s, tcg_op3, ra, 3, MO_UL);
         tcg_gen_rotri_i32(tcg_res, tcg_op1, 20);
         tcg_gen_add_i32(tcg_res, tcg_res, tcg_op2);
         tcg_gen_add_i32(tcg_res, tcg_res, tcg_op3);
         tcg_gen_rotri_i32(tcg_res, tcg_res, 25);
-        write_vec_element_i32(s, tcg_zero, rd, 0, MO_32);
-        write_vec_element_i32(s, tcg_zero, rd, 1, MO_32);
-        write_vec_element_i32(s, tcg_zero, rd, 2, MO_32);
-        write_vec_element_i32(s, tcg_res, rd, 3, MO_32);
+        write_vec_element_i32(s, tcg_zero, rd, 0, MO_UL);
+        write_vec_element_i32(s, tcg_zero, rd, 1, MO_UL);
+        write_vec_element_i32(s, tcg_zero, rd, 2, MO_UL);
+        write_vec_element_i32(s, tcg_res, rd, 3, MO_UL);
         tcg_temp_free_i32(tcg_op1);
         tcg_temp_free_i32(tcg_op2);
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 2bc1bd1..f7c891d 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -1693,7 +1693,7 @@ static void do_sat_addsub_vec(DisasContext *s, int esz, int rd, int rn,
         tcg_temp_free_i32(t32);
         break;
-    case MO_32:
+    case MO_UL:
         t64 = tcg_temp_new_i64();
         if (d) {
             tcg_gen_neg_i64(t64, val);
@@ -3320,7 +3320,7 @@ static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a)
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_sve_subri_s,
           .opt_opc = vecop_list,
-          .vece = MO_32,
+          .vece = MO_UL,
           .scalar_first = true },
         { .fni8 = tcg_gen_sub_i64,
           .fniv = tcg_gen_sub_vec,
@@ -5258,7 +5258,7 @@ static bool trans_LD1_zprz(DisasContext *s, arg_LD1_zprz *a)
     }
     switch (a->esz) {
-    case MO_32:
+    case MO_UL:
         fn = gather_load_fn32[be][a->ff][a->xs][a->u][a->msz];
         break;
     case MO_64:
@@ -5286,7 +5286,7 @@ static bool trans_LD1_zpiz(DisasContext *s, arg_LD1_zpiz *a)
     }
     switch (a->esz) {
-    case MO_32:
+    case MO_UL:
         fn = gather_load_fn32[be][a->ff][0][a->u][a->msz];
         break;
     case MO_64:
@@ -5364,7 +5364,7 @@ static bool trans_ST1_zprz(DisasContext *s, arg_ST1_zprz *a)
         return true;
     }
     switch (a->esz) {
-    case MO_32:
+    case MO_UL:
         fn = scatter_store_fn32[be][a->xs][a->msz];
         break;
     case MO_64:
@@ -5392,7 +5392,7 @@ static bool trans_ST1_zpiz(DisasContext *s, arg_ST1_zpiz *a)
     }
     switch (a->esz) {
-    case MO_32:
+    case MO_UL:
         fn = scatter_store_fn32[be][0][a->msz];
         break;
     case MO_64:
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
index 549874c..5e0cd63 100644
--- a/target/arm/translate-vfp.inc.c
+++ b/target/arm/translate-vfp.inc.c
@@ -46,7 +46,7 @@ uint64_t vfp_expand_imm(int size, uint8_t imm8)
               extract32(imm8, 0, 6);
         imm <<= 48;
         break;
-    case MO_32:
+    case MO_UL:
         imm = (extract32(imm8, 7, 1) ? 0x8000 : 0) |
               (extract32(imm8, 6, 1) ? 0x3e00 : 0x4000) |
               (extract32(imm8, 0, 6) << 3);
@@ -1901,7 +1901,7 @@ static bool trans_VMOV_imm_sp(DisasContext *s, arg_VMOV_imm_sp *a)
         }
     }
-    fd = tcg_const_i32(vfp_expand_imm(MO_32, a->imm));
+    fd = tcg_const_i32(vfp_expand_imm(MO_UL, a->imm));
     for (;;) {
         neon_store_reg32(fd, vd);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 8d10922..5510ecd 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -1085,7 +1085,7 @@ static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
     tcg_gen_extu_i32_tl(addr, a32);
     /* Not needed for user-mode BE32, where we use MO_BE instead. */
-    if (!IS_USER_ONLY && s->sctlr_b && (op & MO_SIZE) < MO_32) {
+    if (!IS_USER_ONLY && s->sctlr_b && (op & MO_SIZE) < MO_UL) {
         tcg_gen_xori_tl(addr, addr, 4 - (1 << (op & MO_SIZE)));
     }
     return addr;
@@ -1480,7 +1480,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
     case MO_UW:
         tcg_gen_st16_i32(var, cpu_env, offset);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_gen_st_i32(var, cpu_env, offset);
         break;
     default:
@@ -1499,7 +1499,7 @@ static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
     case MO_UW:
         tcg_gen_st16_i64(var, cpu_env, offset);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_gen_st32_i64(var, cpu_env, offset);
         break;
     case MO_64:
@@ -4272,7 +4272,7 @@ const GVecGen2i ssra_op[4] = {
       .fniv = gen_ssra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_ssra,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_ssra64_i64,
       .fniv = gen_ssra_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4330,7 +4330,7 @@ const GVecGen2i usra_op[4] = {
       .fniv = gen_usra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_usra,
-      .vece = MO_32, },
+      .vece = MO_UL, },
     { .fni8 = gen_usra64_i64,
       .fniv = gen_usra_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4410,7 +4410,7 @@ const GVecGen2i sri_op[4] = {
      .fniv = gen_shr_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sri,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_shr64_ins_i64,
       .fniv = gen_shr_ins_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4488,7 +4488,7 @@ const GVecGen2i sli_op[4] = {
       .fniv = gen_shl_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sli,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_shl64_ins_i64,
       .fniv = gen_shl_ins_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4584,7 +4584,7 @@ const GVecGen3 mla_op[4] = {
       .fniv = gen_mla_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mla,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_mla64_i64,
       .fniv = gen_mla_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4608,7 +4608,7 @@ const GVecGen3 mls_op[4] = {
       .fniv = gen_mls_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mls,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_mls64_i64,
       .fniv = gen_mls_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4653,7 +4653,7 @@ const GVecGen3 cmtst_op[4] = {
     { .fni4 = gen_cmtst_i32,
       .fniv = gen_cmtst_vec,
       .opt_opc = vecop_list_cmtst,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_cmtst_i64,
       .fniv = gen_cmtst_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4691,7 +4691,7 @@ const GVecGen4 uqadd_op[4] = {
       .fno = gen_helper_gvec_uqadd_s,
       .write_aofs = true,
       .opt_opc = vecop_list_uqadd,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fniv = gen_uqadd_vec,
       .fno = gen_helper_gvec_uqadd_d,
       .write_aofs = true,
@@ -4729,7 +4729,7 @@ const GVecGen4 sqadd_op[4] = {
       .fno = gen_helper_gvec_sqadd_s,
       .opt_opc = vecop_list_sqadd,
       .write_aofs = true,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fniv = gen_sqadd_vec,
       .fno = gen_helper_gvec_sqadd_d,
       .opt_opc = vecop_list_sqadd,
@@ -4767,7 +4767,7 @@ const GVecGen4 uqsub_op[4] = {
       .fno = gen_helper_gvec_uqsub_s,
       .opt_opc = vecop_list_uqsub,
       .write_aofs = true,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fniv = gen_uqsub_vec,
       .fno = gen_helper_gvec_uqsub_d,
       .opt_opc = vecop_list_uqsub,
@@ -4805,7 +4805,7 @@ const GVecGen4 sqsub_op[4] = {
       .fno = gen_helper_gvec_sqsub_s,
       .opt_opc = vecop_list_sqsub,
       .write_aofs = true,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fniv = gen_sqsub_vec,
       .fno = gen_helper_gvec_sqsub_d,
       .opt_opc = vecop_list_sqsub,
@@ -5798,10 +5798,10 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                 /* The immediate value has already been inverted,
                  * so BIC becomes AND.
                  */
-                tcg_gen_gvec_andi(MO_32, reg_ofs, reg_ofs, imm,
+                tcg_gen_gvec_andi(MO_UL, reg_ofs, reg_ofs, imm,
                                   vec_size, vec_size);
             } else {
-                tcg_gen_gvec_ori(MO_32, reg_ofs, reg_ofs, imm,
+                tcg_gen_gvec_ori(MO_UL, reg_ofs, reg_ofs, imm,
                                  vec_size, vec_size);
             }
         } else {
@@ -6879,7 +6879,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                 size = MO_UW;
                 element = (insn >> 18) & 3;
             } else {
-                size = MO_32;
+                size = MO_UL;
                 element = (insn >> 19) & 1;
             }
             tcg_gen_gvec_dup_mem(size, neon_reg_offset(rd, 0),
diff --git a/target/i386/translate.c b/target/i386/translate.c
index 0535bae..0e863d4 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -332,16 +332,16 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 /* Select the size of the stack pointer. */
 static inline TCGMemOp mo_stacksize(DisasContext *s)
 {
-    return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_UW;
+    return CODE64(s) ? MO_64 : s->ss32 ? MO_UL : MO_UW;
 }
 /* Select only size 64 else 32.  Used for SSE operand sizes. */
 static inline TCGMemOp mo_64_32(TCGMemOp ot)
 {
 #ifdef TARGET_X86_64
-    return ot == MO_64 ? MO_64 : MO_32;
+    return ot == MO_64 ? MO_64 : MO_UL;
 #else
-    return MO_32;
+    return MO_UL;
 #endif
 }
@@ -356,7 +356,7 @@ static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
    Used for decoding operand size of port opcodes. */
 static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
 {
-    return b & 1 ? (ot == MO_UW ? MO_UW : MO_32) : MO_UB;
+    return b & 1 ? (ot == MO_UW ? MO_UW : MO_UL) : MO_UB;
 }
 static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
@@ -372,7 +372,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
     case MO_UW:
         tcg_gen_deposit_tl(cpu_regs[reg], cpu_regs[reg], t0, 0, 16);
         break;
-    case MO_32:
+    case MO_UL:
         /* For x86_64, this sets the higher half of register to zero.
            For i386, this is equivalent to a mov. */
         tcg_gen_ext32u_tl(cpu_regs[reg], t0);
@@ -463,7 +463,7 @@ static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
         }
         break;
 #endif
-    case MO_32:
+    case MO_UL:
         /* 32 bit address */
         if (ovr_seg < 0 && s->addseg) {
             ovr_seg = def_seg;
@@ -538,7 +538,7 @@ static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
         }
         return dst;
 #ifdef TARGET_X86_64
-    case MO_32:
+    case MO_UL:
         if (sign) {
             tcg_gen_ext32s_tl(dst, src);
         } else {
@@ -586,7 +586,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
     case MO_UW:
         gen_helper_inw(v, cpu_env, n);
         break;
-    case MO_32:
+    case MO_UL:
         gen_helper_inl(v, cpu_env, n);
         break;
     default:
@@ -603,7 +603,7 @@ static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
     case MO_UW:
         gen_helper_outw(cpu_env, v, n);
         break;
-    case MO_32:
+    case MO_UL:
         gen_helper_outl(cpu_env, v, n);
         break;
     default:
@@ -625,7 +625,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
     case MO_UW:
         gen_helper_check_iow(cpu_env, s->tmp2_i32);
         break;
-    case MO_32:
+    case MO_UL:
         gen_helper_check_iol(cpu_env, s->tmp2_i32);
         break;
     default:
@@ -1077,7 +1077,7 @@ static TCGLabel *gen_jz_ecx_string(DisasContext *s, target_ulong next_eip)
 static inline void gen_stos(DisasContext *s, TCGMemOp ot)
 {
-    gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
+    gen_op_mov_v_reg(s, MO_UL, s->T0, R_EAX);
     gen_string_movl_A0_EDI(s);
     gen_op_st_v(s, ot, s->T0, s->A0);
     gen_op_movl_T0_Dshift(s, ot);
@@ -1568,7 +1568,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
         goto do_long;
     do_long:
 #ifdef TARGET_X86_64
-    case MO_32:
+    case MO_UL:
         tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
         tcg_gen_trunc_tl_i32(s->tmp3_i32, s->T1);
         if (is_right) {
@@ -1644,7 +1644,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
     if (op2 != 0) {
         switch (ot) {
 #ifdef TARGET_X86_64
-        case MO_32:
+        case MO_UL:
             tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
             if (is_right) {
                 tcg_gen_rotri_i32(s->tmp2_i32, s->tmp2_i32, op2);
@@ -1725,7 +1725,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         case MO_UW:
             gen_helper_rcrw(s->T0, cpu_env, s->T0, s->T1);
             break;
-        case MO_32:
+        case MO_UL:
             gen_helper_rcrl(s->T0, cpu_env, s->T0, s->T1);
             break;
 #ifdef TARGET_X86_64
@@ -1744,7 +1744,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         case MO_UW:
             gen_helper_rclw(s->T0, cpu_env, s->T0, s->T1);
             break;
-        case MO_32:
+        case MO_UL:
             gen_helper_rcll(s->T0, cpu_env, s->T0, s->T1);
             break;
 #ifdef TARGET_X86_64
@@ -1791,7 +1791,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         }
         /* FALLTHRU */
 #ifdef TARGET_X86_64
-    case MO_32:
+    case MO_UL:
         /* Concatenate the two 32-bit values and use a 64-bit shift.  */
         tcg_gen_subi_tl(s->tmp0, count, 1);
         if (is_right) {
@@ -1984,7 +1984,7 @@ static AddressParts gen_lea_modrm_0(CPUX86State *env, DisasContext *s,
     switch (s->aflag) {
     case MO_64:
-    case MO_32:
+    case MO_UL:
         havesib = 0;
         if (rm == 4) {
             int code = x86_ldub_code(env, s);
@@ -2190,7 +2190,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
     case MO_UW:
         ret = x86_lduw_code(env, s);
         break;
-    case MO_32:
+    case MO_UL:
 #ifdef TARGET_X86_64
     case MO_64:
 #endif
@@ -2204,7 +2204,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
 static inline int insn_const_size(TCGMemOp ot)
 {
-    if (ot <= MO_32) {
+    if (ot <= MO_UL) {
         return 1 << ot;
     } else {
         return 4;
@@ -2400,12 +2400,12 @@ static inline void gen_pop_update(DisasContext *s, TCGMemOp ot)
 static inline void gen_stack_A0(DisasContext *s)
 {
-    gen_lea_v_seg(s, s->ss32 ? MO_32 : MO_UW, cpu_regs[R_ESP], R_SS, -1);
+    gen_lea_v_seg(s, s->ss32 ? MO_UL : MO_UW, cpu_regs[R_ESP], R_SS, -1);
 }
 static void gen_pusha(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_UW;
+    TCGMemOp s_ot = s->ss32 ? MO_UL : MO_UW;
     TCGMemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;
@@ -2421,7 +2421,7 @@ static void gen_pusha(DisasContext *s)
 static void gen_popa(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_UW;
+    TCGMemOp s_ot = s->ss32 ? MO_UL : MO_UW;
     TCGMemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;
@@ -2443,7 +2443,7 @@ static void gen_popa(DisasContext *s)
 static void gen_enter(DisasContext *s, int esp_addend, int level)
 {
     TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_UW;
+    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_UL : MO_UW;
     int size = 1 << d_ot;
     /* Push BP; compute FrameTemp into T1.  */
@@ -3145,7 +3145,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             } else {
                 tcg_gen_ld32u_tl(s->T0, cpu_env,
                                  offsetof(CPUX86State, xmm_regs[reg].ZMM_L(0)));
-                gen_op_st_v(s, MO_32, s->T0, s->A0);
+                gen_op_st_v(s, MO_UL, s->T0, s->A0);
             }
             break;
         case 0x6e: /* movd mm, ea */
@@ -3157,7 +3157,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             } else
 #endif
             {
-                gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 0);
+                gen_ldst_modrm(env, s, modrm, MO_UL, OR_TMP0, 0);
                 tcg_gen_addi_ptr(s->ptr0, cpu_env,
                                  offsetof(CPUX86State,fpregs[reg].mmx));
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
@@ -3174,7 +3174,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             } else
 #endif
             {
-                gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 0);
+                gen_ldst_modrm(env, s, modrm, MO_UL, OR_TMP0, 0);
                 tcg_gen_addi_ptr(s->ptr0, cpu_env,
                                  offsetof(CPUX86State,xmm_regs[reg]));
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
@@ -3211,7 +3211,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
         case 0x210: /* movss xmm, ea */
             if (mod != 3) {
                 gen_lea_modrm(env, s, modrm);
-                gen_op_ld_v(s, MO_32, s->T0, s->A0);
+                gen_op_ld_v(s, MO_UL, s->T0, s->A0);
                 tcg_gen_st32_tl(s->T0, cpu_env,
                                 offsetof(CPUX86State, xmm_regs[reg].ZMM_L(0)));
                 tcg_gen_movi_tl(s->T0, 0);
@@ -3346,7 +3346,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             {
                 tcg_gen_ld32u_tl(s->T0, cpu_env,
                                  offsetof(CPUX86State,fpregs[reg].mmx.MMX_L(0)));
-                gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 1);
+                gen_ldst_modrm(env, s, modrm, MO_UL, OR_TMP0, 1);
             }
             break;
         case 0x17e: /* movd ea, xmm */
@@ -3360,7 +3360,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             {
                 tcg_gen_ld32u_tl(s->T0, cpu_env,
                                  offsetof(CPUX86State,xmm_regs[reg].ZMM_L(0)));
-                gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 1);
+                gen_ldst_modrm(env, s, modrm, MO_UL, OR_TMP0, 1);
             }
             break;
         case 0x27e: /* movq xmm, ea */
@@ -3405,7 +3405,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 gen_lea_modrm(env, s, modrm);
                 tcg_gen_ld32u_tl(s->T0, cpu_env,
                                  offsetof(CPUX86State, xmm_regs[reg].ZMM_L(0)));
-                gen_op_st_v(s, MO_32, s->T0, s->A0);
+                gen_op_st_v(s, MO_UL, s->T0, s->A0);
             } else {
                 rm = (modrm & 7) | REX_B(s);
                 gen_op_movl(s, offsetof(CPUX86State, xmm_regs[rm].ZMM_L(0)),
@@ -3530,7 +3530,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
             op1_offset = offsetof(CPUX86State,xmm_regs[reg]);
             tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset);
-            if (ot == MO_32) {
+            if (ot == MO_UL) {
                 SSEFunc_0_epi sse_fn_epi = sse_op_table3ai[(b >> 8) & 1];
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
                 sse_fn_epi(cpu_env, s->ptr0, s->tmp2_i32);
@@ -3584,7 +3584,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 if ((b >> 8) & 1) {
                     gen_ldq_env_A0(s, offsetof(CPUX86State, xmm_t0.ZMM_Q(0)));
                 } else {
-                    gen_op_ld_v(s, MO_32, s->T0, s->A0);
+                    gen_op_ld_v(s, MO_UL, s->T0, s->A0);
                     tcg_gen_st32_tl(s->T0, cpu_env,
                                     offsetof(CPUX86State, xmm_t0.ZMM_L(0)));
                 }
@@ -3594,7 +3594,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 op2_offset = offsetof(CPUX86State,xmm_regs[rm]);
             }
             tcg_gen_addi_ptr(s->ptr0, cpu_env, op2_offset);
-            if (ot == MO_32) {
+            if (ot == MO_UL) {
                 SSEFunc_i_ep sse_fn_i_ep =
                     sse_op_table3bi[((b >> 7) & 2) | (b & 1)];
                 sse_fn_i_ep(s->tmp2_i32, cpu_env, s->ptr0);
@@ -3786,7 +3786,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             if ((b & 0xff) == 0xf0) {
                 ot = MO_UB;
             } else if (s->dflag != MO_64) {
-                ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_32);
+                ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_UL);
             } else {
                 ot = MO_64;
             }
@@ -3815,7 +3815,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 goto illegal_op;
             }
             if (s->dflag != MO_64) {
-                ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_32);
+                ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_UL);
             } else {
                 ot = MO_64;
             }
@@ -4026,7 +4026,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             switch (ot) {
 #ifdef TARGET_X86_64
-            case MO_32:
+            case MO_UL:
                 /* If we know TL is 64-bit, and we want a 32-bit
                    result, just do everything in 64-bit arithmetic.  */
                 tcg_gen_ext32u_i64(cpu_regs[reg], cpu_regs[reg]);
@@ -4172,7 +4172,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 }
                 break;
             case 0x16:
-                if (ot == MO_32) { /* pextrd */
+                if (ot == MO_UL) { /* pextrd */
                     tcg_gen_ld_i32(s->tmp2_i32, cpu_env,
                                    offsetof(CPUX86State,
                                             xmm_regs[reg].ZMM_L(val & 3)));
@@ -4210,7 +4210,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 break;
             case 0x20: /* pinsrb */
                 if (mod == 3) {
-                    gen_op_mov_v_reg(s, MO_32, s->T0, rm);
+                    gen_op_mov_v_reg(s, MO_UL, s->T0, rm);
                 } else {
                     tcg_gen_qemu_ld_tl(s->T0, s->A0,
                                        s->mem_index, MO_UB);
@@ -4248,7 +4248,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                                             xmm_regs[reg].ZMM_L(3)));
                 break;
             case 0x22:
-                if (ot == MO_32) { /* pinsrd */
+                if (ot == MO_UL) { /* pinsrd */
                     if (mod == 3) {
                         tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[rm]);
                     } else {
@@ -4393,7 +4393,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 switch (sz) {
                 case 2:
                     /* 32 bit access */
-                    gen_op_ld_v(s, MO_32, s->T0, s->A0);
+                    gen_op_ld_v(s, MO_UL, s->T0, s->A0);
                     tcg_gen_st32_tl(s->T0, cpu_env,
                                     offsetof(CPUX86State,xmm_t0.ZMM_L(0)));
                     break;
@@ -4630,19 +4630,19 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         /* In 64-bit mode, the default data size is 32-bit.  Select 64-bit
            data with rex_w, and 16-bit data with 0x66; rex_w takes precedence
            over 0x66 if both are present.  */
-        dflag = (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_UW : MO_32);
+        dflag = (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_UW : MO_UL);
         /* In 64-bit mode, 0x67 selects 32-bit addressing.  */
-        aflag = (prefixes & PREFIX_ADR ? MO_32 : MO_64);
+        aflag = (prefixes & PREFIX_ADR ? MO_UL : MO_64);
     } else {
         /* In 16/32-bit mode, 0x66 selects the opposite data size.  */
         if (s->code32 ^ ((prefixes & PREFIX_DATA) != 0)) {
-            dflag = MO_32;
+            dflag = MO_UL;
         } else {
             dflag = MO_UW;
         }
         /* In 16/32-bit mode, 0x67 selects the opposite addressing.  */
         if (s->code32 ^ ((prefixes & PREFIX_ADR) != 0)) {
-            aflag = MO_32;
+            aflag = MO_UL;
         } else {
             aflag = MO_UW;
         }
@@ -4891,7 +4891,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             set_cc_op(s, CC_OP_MULW);
             break;
         default:
-        case MO_32:
+        case MO_UL:
             tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
             tcg_gen_trunc_tl_i32(s->tmp3_i32, cpu_regs[R_EAX]);
             tcg_gen_mulu2_i32(s->tmp2_i32, s->tmp3_i32,
@@ -4942,7 +4942,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             set_cc_op(s, CC_OP_MULW);
             break;
         default:
-        case MO_32:
+        case MO_UL:
             tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
             tcg_gen_trunc_tl_i32(s->tmp3_i32, cpu_regs[R_EAX]);
             tcg_gen_muls2_i32(s->tmp2_i32, s->tmp3_i32,
@@ -4976,7 +4976,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_helper_divw_AX(cpu_env, s->T0);
             break;
         default:
-        case MO_32:
+        case MO_UL:
             gen_helper_divl_EAX(cpu_env, s->T0);
             break;
 #ifdef TARGET_X86_64
@@ -4995,7 +4995,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_helper_idivw_AX(cpu_env, s->T0);
             break;
         default:
-        case MO_32:
+        case MO_UL:
             gen_helper_idivl_EAX(cpu_env, s->T0);
             break;
 #ifdef TARGET_X86_64
@@ -5026,7 +5026,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             /* operand size for jumps is 64 bit */
             ot = MO_64;
         } else if (op == 3 || op == 5) {
-            ot = dflag != MO_UW ? MO_32 + (rex_w == 1) : MO_UW;
+            ot = dflag != MO_UW ? MO_UL + (rex_w == 1) : MO_UW;
         } else if (op == 6) {
             /* default push size is 64 bit */
             ot = mo_pushpop(s, dflag);
@@ -5146,15 +5146,15 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         switch (dflag) {
 #ifdef TARGET_X86_64
         case MO_64:
-            gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
+            gen_op_mov_v_reg(s, MO_UL, s->T0, R_EAX);
             tcg_gen_ext32s_tl(s->T0, s->T0);
             gen_op_mov_reg_v(s, MO_64, R_EAX, s->T0);
             break;
 #endif
-        case MO_32:
+        case MO_UL:
             gen_op_mov_v_reg(s, MO_UW, s->T0, R_EAX);
             tcg_gen_ext16s_tl(s->T0, s->T0);
-            gen_op_mov_reg_v(s, MO_32, R_EAX, s->T0);
+            gen_op_mov_reg_v(s, MO_UL, R_EAX, s->T0);
             break;
         case MO_UW:
             gen_op_mov_v_reg(s, MO_UB, s->T0, R_EAX);
@@ -5174,11 +5174,11 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_op_mov_reg_v(s, MO_64, R_EDX, s->T0);
             break;
 #endif
-        case MO_32:
-            gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
+        case MO_UL:
+            gen_op_mov_v_reg(s, MO_UL, s->T0, R_EAX);
             tcg_gen_ext32s_tl(s->T0, s->T0);
             tcg_gen_sari_tl(s->T0, s->T0, 31);
-            gen_op_mov_reg_v(s, MO_32, R_EDX, s->T0);
+            gen_op_mov_reg_v(s, MO_UL, R_EDX, s->T0);
             break;
         case MO_UW:
             gen_op_mov_v_reg(s, MO_UW, s->T0, R_EAX);
@@ -5219,7 +5219,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             tcg_gen_sub_tl(cpu_cc_src, cpu_cc_src, s->T1);
             break;
 #endif
-        case MO_32:
+        case MO_UL:
             tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
             tcg_gen_trunc_tl_i32(s->tmp3_i32, s->T1);
             tcg_gen_muls2_i32(s->tmp2_i32, s->tmp3_i32,
@@ -5394,7 +5394,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     /**************************/
     /* push/pop */
     case 0x50 ... 0x57: /* push */
-        gen_op_mov_v_reg(s, MO_32, s->T0, (b & 7) | REX_B(s));
+        gen_op_mov_v_reg(s, MO_UL, s->T0, (b & 7) | REX_B(s));
         gen_push_v(s, s->T0);
         break;
     case 0x58 ... 0x5f: /* pop */
@@ -5734,7 +5734,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x1b5: /* lgs Gv */
         op = R_GS;
     do_lxx:
-        ot = dflag != MO_UW ? MO_32 : MO_UW;
+        ot = dflag != MO_UW ? MO_UL : MO_UW;
         modrm = x86_ldub_code(env, s);
         reg = ((modrm >> 3) & 7) | rex_r;
         mod = (modrm >> 6) & 3;
@@ -6576,7 +6576,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0xe8: /* call im */
         {
             if (dflag != MO_UW) {
-                tval = (int32_t)insn_get(env, s, MO_32);
+                tval = (int32_t)insn_get(env, s, MO_UL);
             } else {
                 tval = (int16_t)insn_get(env, s, MO_UW);
             }
@@ -6609,7 +6609,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         goto do_lcall;
     case 0xe9: /* jmp im */
         if (dflag != MO_UW) {
-            tval = (int32_t)insn_get(env, s, MO_32);
+            tval = (int32_t)insn_get(env, s, MO_UL);
         } else {
             tval = (int16_t)insn_get(env, s, MO_UW);
         }
@@ -6649,7 +6649,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         goto do_jcc;
     case 0x180 ... 0x18f: /* jcc Jv */
         if (dflag != MO_UW) {
-            tval = (int32_t)insn_get(env, s, MO_32);
+            tval = (int32_t)insn_get(env, s, MO_UL);
         } else {
             tval = (int16_t)insn_get(env, s, MO_UW);
         }
@@ -6827,7 +6827,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         reg = ((modrm >> 3) & 7) | rex_r;
         mod = (modrm >> 6) & 3;
         rm = (modrm & 7) | REX_B(s);
-        gen_op_mov_v_reg(s, MO_32, s->T1, reg);
+        gen_op_mov_v_reg(s, MO_UL, s->T1, reg);
         if (mod != 3) {
             AddressParts a = gen_lea_modrm_0(env, s, modrm);
             /* specific case: we need to add a displacement */
@@ -7126,10 +7126,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         } else
 #endif
         {
-            gen_op_mov_v_reg(s, MO_32, s->T0, reg);
+            gen_op_mov_v_reg(s, MO_UL, s->T0, reg);
             tcg_gen_ext32u_tl(s->T0, s->T0);
             tcg_gen_bswap32_tl(s->T0, s->T0);
-            gen_op_mov_reg_v(s, MO_32, reg, s->T0);
+            gen_op_mov_reg_v(s, MO_UL, reg, s->T0);
         }
         break;
     case 0xd6: /* salc */
@@ -7359,7 +7359,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
-            gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0);
+            gen_op_st_v(s, CODE64(s) + MO_UL, s->T0, s->A0);
             break;
         case 0xc8: /* monitor */
@@ -7414,7 +7414,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
-            gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0);
+            gen_op_st_v(s, CODE64(s) + MO_UL, s->T0, s->A0);
             break;
         case 0xd0: /* xgetbv */
@@ -7560,7 +7560,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_lea_modrm(env, s, modrm);
             gen_op_ld_v(s, MO_UW, s->T1, s->A0);
             gen_add_A0_im(s, 2);
-            gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0);
+            gen_op_ld_v(s, CODE64(s) + MO_UL, s->T0, s->A0);
             if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
@@ -7577,7 +7577,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_lea_modrm(env, s, modrm);
             gen_op_ld_v(s, MO_UW, s->T1, s->A0);
             gen_add_A0_im(s, 2);
-            gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0);
+            gen_op_ld_v(s, CODE64(s) + MO_UL, s->T0, s->A0);
             if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
@@ -7698,7 +7698,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         rm = (modrm & 7) | REX_B(s);
         if (mod == 3) {
-            gen_op_mov_v_reg(s, MO_32, s->T0, rm);
+            gen_op_mov_v_reg(s, MO_UL, s->T0, rm);
             /* sign extend */
             if (d_ot == MO_64) {
                 tcg_gen_ext32s_tl(s->T0, s->T0);
@@ -7706,7 +7706,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_op_mov_reg_v(s, d_ot, reg, s->T0);
         } else {
             gen_lea_modrm(env, s, modrm);
-            gen_op_ld_v(s, MO_32 | MO_SIGN, s->T0, s->A0);
+            gen_op_ld_v(s, MO_SL, s->T0, s->A0);
             gen_op_mov_reg_v(s, d_ot, reg, s->T0);
         }
     } else
@@ -7765,7 +7765,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         TCGv t0;
         if (!s->pe || s->vm86)
             goto illegal_op;
-        ot = dflag != MO_UW ? MO_32 : MO_UW;
+        ot = dflag != MO_UW ?
gen_op_st_v(s, CODE64(s) + MO_UL, s->T0, s->A0); break; case 0xc8: /* monitor */ @@ -7414,7 +7414,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) if (dflag =3D=3D MO_UW) { tcg_gen_andi_tl(s->T0, s->T0, 0xffffff); } - gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0); + gen_op_st_v(s, CODE64(s) + MO_UL, s->T0, s->A0); break; case 0xd0: /* xgetbv */ @@ -7560,7 +7560,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) gen_lea_modrm(env, s, modrm); gen_op_ld_v(s, MO_UW, s->T1, s->A0); gen_add_A0_im(s, 2); - gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0); + gen_op_ld_v(s, CODE64(s) + MO_UL, s->T0, s->A0); if (dflag =3D=3D MO_UW) { tcg_gen_andi_tl(s->T0, s->T0, 0xffffff); } @@ -7577,7 +7577,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) gen_lea_modrm(env, s, modrm); gen_op_ld_v(s, MO_UW, s->T1, s->A0); gen_add_A0_im(s, 2); - gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0); + gen_op_ld_v(s, CODE64(s) + MO_UL, s->T0, s->A0); if (dflag =3D=3D MO_UW) { tcg_gen_andi_tl(s->T0, s->T0, 0xffffff); } @@ -7698,7 +7698,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) rm =3D (modrm & 7) | REX_B(s); if (mod =3D=3D 3) { - gen_op_mov_v_reg(s, MO_32, s->T0, rm); + gen_op_mov_v_reg(s, MO_UL, s->T0, rm); /* sign extend */ if (d_ot =3D=3D MO_64) { tcg_gen_ext32s_tl(s->T0, s->T0); @@ -7706,7 +7706,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) gen_op_mov_reg_v(s, d_ot, reg, s->T0); } else { gen_lea_modrm(env, s, modrm); - gen_op_ld_v(s, MO_32 | MO_SIGN, s->T0, s->A0); + gen_op_ld_v(s, MO_SL, s->T0, s->A0); gen_op_mov_reg_v(s, d_ot, reg, s->T0); } } else @@ -7765,7 +7765,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) TCGv t0; if (!s->pe || s->vm86) goto illegal_op; - ot =3D dflag !=3D MO_UW ? MO_32 : MO_UW; + ot =3D dflag !=3D MO_UW ? 
MO_UL : MO_UW; modrm =3D x86_ldub_code(env, s); reg =3D ((modrm >> 3) & 7) | rex_r; gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0); @@ -8016,7 +8016,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) if (CODE64(s)) ot =3D MO_64; else - ot =3D MO_32; + ot =3D MO_UL; if ((prefixes & PREFIX_LOCK) && (reg =3D=3D 0) && (s->cpuid_ext3_features & CPUID_EXT3_CR8LEG)) { reg =3D 8; @@ -8073,7 +8073,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) if (CODE64(s)) ot =3D MO_64; else - ot =3D MO_32; + ot =3D MO_UL; if (reg >=3D 8) { goto illegal_op; } @@ -8168,7 +8168,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) } gen_lea_modrm(env, s, modrm); tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, mxcsr)); - gen_op_st_v(s, MO_32, s->T0, s->A0); + gen_op_st_v(s, MO_UL, s->T0, s->A0); break; CASE_MODRM_MEM_OP(4): /* xsave */ @@ -8268,7 +8268,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) dst =3D treg, src =3D base; } - if (s->dflag =3D=3D MO_32) { + if (s->dflag =3D=3D MO_UL) { tcg_gen_ext32u_tl(dst, src); } else { tcg_gen_mov_tl(dst, src); diff --git a/target/ppc/translate/vmx-impl.inc.c b/target/ppc/translate/vmx= -impl.inc.c index 71efef4..8aa767e 100644 --- a/target/ppc/translate/vmx-impl.inc.c +++ b/target/ppc/translate/vmx-impl.inc.c @@ -409,27 +409,27 @@ GEN_VXFORM_DUAL_EXT(vaddubm, PPC_ALTIVEC, PPC_NONE, 0= , \ GEN_VXFORM_V(vadduhm, MO_UW, tcg_gen_gvec_add, 0, 1); GEN_VXFORM_DUAL(vadduhm, PPC_ALTIVEC, PPC_NONE, \ vmul10ecuq, PPC_NONE, PPC2_ISA300) -GEN_VXFORM_V(vadduwm, MO_32, tcg_gen_gvec_add, 0, 2); +GEN_VXFORM_V(vadduwm, MO_UL, tcg_gen_gvec_add, 0, 2); GEN_VXFORM_V(vaddudm, MO_64, tcg_gen_gvec_add, 0, 3); GEN_VXFORM_V(vsububm, MO_UB, tcg_gen_gvec_sub, 0, 16); GEN_VXFORM_V(vsubuhm, MO_UW, tcg_gen_gvec_sub, 0, 17); -GEN_VXFORM_V(vsubuwm, MO_32, tcg_gen_gvec_sub, 0, 18); +GEN_VXFORM_V(vsubuwm, MO_UL, tcg_gen_gvec_sub, 0, 18); GEN_VXFORM_V(vsubudm, MO_64, tcg_gen_gvec_sub, 0, 19); 
GEN_VXFORM_V(vmaxub, MO_UB, tcg_gen_gvec_umax, 1, 0); GEN_VXFORM_V(vmaxuh, MO_UW, tcg_gen_gvec_umax, 1, 1); -GEN_VXFORM_V(vmaxuw, MO_32, tcg_gen_gvec_umax, 1, 2); +GEN_VXFORM_V(vmaxuw, MO_UL, tcg_gen_gvec_umax, 1, 2); GEN_VXFORM_V(vmaxud, MO_64, tcg_gen_gvec_umax, 1, 3); GEN_VXFORM_V(vmaxsb, MO_UB, tcg_gen_gvec_smax, 1, 4); GEN_VXFORM_V(vmaxsh, MO_UW, tcg_gen_gvec_smax, 1, 5); -GEN_VXFORM_V(vmaxsw, MO_32, tcg_gen_gvec_smax, 1, 6); +GEN_VXFORM_V(vmaxsw, MO_UL, tcg_gen_gvec_smax, 1, 6); GEN_VXFORM_V(vmaxsd, MO_64, tcg_gen_gvec_smax, 1, 7); GEN_VXFORM_V(vminub, MO_UB, tcg_gen_gvec_umin, 1, 8); GEN_VXFORM_V(vminuh, MO_UW, tcg_gen_gvec_umin, 1, 9); -GEN_VXFORM_V(vminuw, MO_32, tcg_gen_gvec_umin, 1, 10); +GEN_VXFORM_V(vminuw, MO_UL, tcg_gen_gvec_umin, 1, 10); GEN_VXFORM_V(vminud, MO_64, tcg_gen_gvec_umin, 1, 11); GEN_VXFORM_V(vminsb, MO_UB, tcg_gen_gvec_smin, 1, 12); GEN_VXFORM_V(vminsh, MO_UW, tcg_gen_gvec_smin, 1, 13); -GEN_VXFORM_V(vminsw, MO_32, tcg_gen_gvec_smin, 1, 14); +GEN_VXFORM_V(vminsw, MO_UL, tcg_gen_gvec_smin, 1, 14); GEN_VXFORM_V(vminsd, MO_64, tcg_gen_gvec_smin, 1, 15); GEN_VXFORM(vavgub, 1, 16); GEN_VXFORM(vabsdub, 1, 16); @@ -532,18 +532,18 @@ GEN_VXFORM(vmulesh, 4, 13); GEN_VXFORM(vmulesw, 4, 14); GEN_VXFORM_V(vslb, MO_UB, tcg_gen_gvec_shlv, 2, 4); GEN_VXFORM_V(vslh, MO_UW, tcg_gen_gvec_shlv, 2, 5); -GEN_VXFORM_V(vslw, MO_32, tcg_gen_gvec_shlv, 2, 6); +GEN_VXFORM_V(vslw, MO_UL, tcg_gen_gvec_shlv, 2, 6); GEN_VXFORM(vrlwnm, 2, 6); GEN_VXFORM_DUAL(vslw, PPC_ALTIVEC, PPC_NONE, \ vrlwnm, PPC_NONE, PPC2_ISA300) GEN_VXFORM_V(vsld, MO_64, tcg_gen_gvec_shlv, 2, 23); GEN_VXFORM_V(vsrb, MO_UB, tcg_gen_gvec_shrv, 2, 8); GEN_VXFORM_V(vsrh, MO_UW, tcg_gen_gvec_shrv, 2, 9); -GEN_VXFORM_V(vsrw, MO_32, tcg_gen_gvec_shrv, 2, 10); +GEN_VXFORM_V(vsrw, MO_UL, tcg_gen_gvec_shrv, 2, 10); GEN_VXFORM_V(vsrd, MO_64, tcg_gen_gvec_shrv, 2, 27); GEN_VXFORM_V(vsrab, MO_UB, tcg_gen_gvec_sarv, 2, 12); GEN_VXFORM_V(vsrah, MO_UW, tcg_gen_gvec_sarv, 2, 13); -GEN_VXFORM_V(vsraw, MO_32, 
tcg_gen_gvec_sarv, 2, 14); +GEN_VXFORM_V(vsraw, MO_UL, tcg_gen_gvec_sarv, 2, 14); GEN_VXFORM_V(vsrad, MO_64, tcg_gen_gvec_sarv, 2, 15); GEN_VXFORM(vsrv, 2, 28); GEN_VXFORM(vslv, 2, 29); @@ -595,16 +595,16 @@ GEN_VXFORM_DUAL_EXT(vaddubs, PPC_ALTIVEC, PPC_NONE, 0= , \ GEN_VXFORM_SAT(vadduhs, MO_UW, add, usadd, 0, 9); GEN_VXFORM_DUAL(vadduhs, PPC_ALTIVEC, PPC_NONE, \ vmul10euq, PPC_NONE, PPC2_ISA300) -GEN_VXFORM_SAT(vadduws, MO_32, add, usadd, 0, 10); +GEN_VXFORM_SAT(vadduws, MO_UL, add, usadd, 0, 10); GEN_VXFORM_SAT(vaddsbs, MO_UB, add, ssadd, 0, 12); GEN_VXFORM_SAT(vaddshs, MO_UW, add, ssadd, 0, 13); -GEN_VXFORM_SAT(vaddsws, MO_32, add, ssadd, 0, 14); +GEN_VXFORM_SAT(vaddsws, MO_UL, add, ssadd, 0, 14); GEN_VXFORM_SAT(vsububs, MO_UB, sub, ussub, 0, 24); GEN_VXFORM_SAT(vsubuhs, MO_UW, sub, ussub, 0, 25); -GEN_VXFORM_SAT(vsubuws, MO_32, sub, ussub, 0, 26); +GEN_VXFORM_SAT(vsubuws, MO_UL, sub, ussub, 0, 26); GEN_VXFORM_SAT(vsubsbs, MO_UB, sub, sssub, 0, 28); GEN_VXFORM_SAT(vsubshs, MO_UW, sub, sssub, 0, 29); -GEN_VXFORM_SAT(vsubsws, MO_32, sub, sssub, 0, 30); +GEN_VXFORM_SAT(vsubsws, MO_UL, sub, sssub, 0, 30); GEN_VXFORM(vadduqm, 0, 4); GEN_VXFORM(vaddcuq, 0, 5); GEN_VXFORM3(vaddeuqm, 30, 0); @@ -914,7 +914,7 @@ static void glue(gen_, name)(DisasContext *ctx) = \ GEN_VXFORM_VSPLT(vspltb, MO_UB, 6, 8); GEN_VXFORM_VSPLT(vsplth, MO_UW, 6, 9); -GEN_VXFORM_VSPLT(vspltw, MO_32, 6, 10); +GEN_VXFORM_VSPLT(vspltw, MO_UL, 6, 10); GEN_VXFORM_UIMM_SPLAT(vextractub, 6, 8, 15); GEN_VXFORM_UIMM_SPLAT(vextractuh, 6, 9, 14); GEN_VXFORM_UIMM_SPLAT(vextractuw, 6, 10, 12); diff --git a/target/ppc/translate/vsx-impl.inc.c b/target/ppc/translate/vsx= -impl.inc.c index 3922686..212817e 100644 --- a/target/ppc/translate/vsx-impl.inc.c +++ b/target/ppc/translate/vsx-impl.inc.c @@ -1553,12 +1553,12 @@ static void gen_xxspltw(DisasContext *ctx) tofs =3D vsr_full_offset(rt); bofs =3D vsr_full_offset(rb); - bofs +=3D uim << MO_32; + bofs +=3D uim << MO_UL; #ifndef HOST_WORDS_BIG_ENDIAN bofs ^=3D 8 
| 4; #endif - tcg_gen_gvec_dup_mem(MO_32, tofs, bofs, 16, 16); + tcg_gen_gvec_dup_mem(MO_UL, tofs, bofs, 16, 16); } #define pattern(x) (((x) & 0xff) * (~(uint64_t)0 / 0xff)) diff --git a/target/s390x/translate.c b/target/s390x/translate.c index 415747f..9e646f1 100644 --- a/target/s390x/translate.c +++ b/target/s390x/translate.c @@ -196,7 +196,7 @@ static inline int freg64_offset(uint8_t reg) static inline int freg32_offset(uint8_t reg) { g_assert(reg < 16); - return vec_reg_offset(reg, 0, MO_32); + return vec_reg_offset(reg, 0, MO_UL); } static TCGv_i64 load_reg(int reg) @@ -2283,7 +2283,7 @@ static DisasJumpType op_csp(DisasContext *s, DisasOps= *o) /* Write back the output now, so that it happens before the following branch, so that we don't need local temps. */ - if ((mop & MO_SIZE) =3D=3D MO_32) { + if ((mop & MO_SIZE) =3D=3D MO_UL) { tcg_gen_deposit_i64(o->out, o->out, old, 0, 32); } else { tcg_gen_mov_i64(o->out, old); diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.in= c.c index 65da6b3..75d788c 100644 --- a/target/s390x/translate_vx.inc.c +++ b/target/s390x/translate_vx.inc.c @@ -48,7 +48,7 @@ #define ES_8 MO_UB #define ES_16 MO_UW -#define ES_32 MO_32 +#define ES_32 MO_UL #define ES_64 MO_64 #define ES_128 4 diff --git a/target/s390x/vec.h b/target/s390x/vec.h index 28e1b1d..f67392c 100644 --- a/target/s390x/vec.h +++ b/target/s390x/vec.h @@ -80,7 +80,7 @@ static inline uint64_t s390_vec_read_element(const S390Ve= ctor *v, uint8_t enr, return s390_vec_read_element8(v, enr); case MO_UW: return s390_vec_read_element16(v, enr); - case MO_32: + case MO_UL: return s390_vec_read_element32(v, enr); case MO_64: return s390_vec_read_element64(v, enr); @@ -127,7 +127,7 @@ static inline void s390_vec_write_element(S390Vector *v= , uint8_t enr, case MO_UW: s390_vec_write_element16(v, enr, data); break; - case MO_32: + case MO_UL: s390_vec_write_element32(v, enr, data); break; case MO_64: diff --git a/tcg/aarch64/tcg-target.inc.c 
b/tcg/aarch64/tcg-target.inc.c index 3d90c4b..dc4fd21 100644 --- a/tcg/aarch64/tcg-target.inc.c +++ b/tcg/aarch64/tcg-target.inc.c @@ -431,12 +431,12 @@ typedef enum { that emits them can transform to 3.3.10 or 3.3.13. */ I3312_STRB =3D 0x38000000 | LDST_ST << 22 | MO_UB << 30, I3312_STRH =3D 0x38000000 | LDST_ST << 22 | MO_UW << 30, - I3312_STRW =3D 0x38000000 | LDST_ST << 22 | MO_32 << 30, + I3312_STRW =3D 0x38000000 | LDST_ST << 22 | MO_UL << 30, I3312_STRX =3D 0x38000000 | LDST_ST << 22 | MO_64 << 30, I3312_LDRB =3D 0x38000000 | LDST_LD << 22 | MO_UB << 30, I3312_LDRH =3D 0x38000000 | LDST_LD << 22 | MO_UW << 30, - I3312_LDRW =3D 0x38000000 | LDST_LD << 22 | MO_32 << 30, + I3312_LDRW =3D 0x38000000 | LDST_LD << 22 | MO_UL << 30, I3312_LDRX =3D 0x38000000 | LDST_LD << 22 | MO_64 << 30, I3312_LDRSBW =3D 0x38000000 | LDST_LD_S_W << 22 | MO_UB << 30, @@ -444,10 +444,10 @@ typedef enum { I3312_LDRSBX =3D 0x38000000 | LDST_LD_S_X << 22 | MO_UB << 30, I3312_LDRSHX =3D 0x38000000 | LDST_LD_S_X << 22 | MO_UW << 30, - I3312_LDRSWX =3D 0x38000000 | LDST_LD_S_X << 22 | MO_32 << 30, + I3312_LDRSWX =3D 0x38000000 | LDST_LD_S_X << 22 | MO_UL << 30, - I3312_LDRVS =3D 0x3c000000 | LDST_LD << 22 | MO_32 << 30, - I3312_STRVS =3D 0x3c000000 | LDST_ST << 22 | MO_32 << 30, + I3312_LDRVS =3D 0x3c000000 | LDST_LD << 22 | MO_UL << 30, + I3312_STRVS =3D 0x3c000000 | LDST_ST << 22 | MO_UL << 30, I3312_LDRVD =3D 0x3c000000 | LDST_LD << 22 | MO_64 << 30, I3312_STRVD =3D 0x3c000000 | LDST_ST << 22 | MO_64 << 30, @@ -870,7 +870,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType typ= e, /* * Test all bytes 0x00 or 0xff second. This can match cases that - * might otherwise take 2 or 3 insns for MO_UW or MO_32 below. + * might otherwise take 2 or 3 insns for MO_UW or MO_UL below. 
*/ for (i =3D imm8 =3D 0; i < 8; i++) { uint8_t byte =3D v64 >> (i * 8); @@ -908,7 +908,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType typ= e, tcg_out_insn(s, 3606, MOVI, q, rd, 0, 0x8, v16 & 0xff); tcg_out_insn(s, 3606, ORR, q, rd, 0, 0xa, v16 >> 8); return; - } else if (v64 =3D=3D dup_const(MO_32, v64)) { + } else if (v64 =3D=3D dup_const(MO_UL, v64)) { uint32_t v32 =3D v64; uint32_t n32 =3D ~v32; @@ -1749,7 +1749,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCG= MemOp memop, TCGType ext, if (bswap) { tcg_out_ldst_r(s, I3312_LDRW, data_r, addr_r, otype, off_r); tcg_out_rev32(s, data_r, data_r); - tcg_out_sxt(s, TCG_TYPE_I64, MO_32, data_r, data_r); + tcg_out_sxt(s, TCG_TYPE_I64, MO_UL, data_r, data_r); } else { tcg_out_ldst_r(s, I3312_LDRSWX, data_r, addr_r, otype, off_r); } @@ -1782,7 +1782,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCG= MemOp memop, } tcg_out_ldst_r(s, I3312_STRH, data_r, addr_r, otype, off_r); break; - case MO_32: + case MO_UL: if (bswap && data_r !=3D TCG_REG_XZR) { tcg_out_rev32(s, TCG_REG_TMP, data_r); data_r =3D TCG_REG_TMP; @@ -2194,7 +2194,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, break; case INDEX_op_ext_i32_i64: case INDEX_op_ext32s_i64: - tcg_out_sxt(s, TCG_TYPE_I64, MO_32, a0, a1); + tcg_out_sxt(s, TCG_TYPE_I64, MO_UL, a0, a1); break; case INDEX_op_ext8u_i64: case INDEX_op_ext8u_i32: diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c index 0bd400e..05560a2 100644 --- a/tcg/arm/tcg-target.inc.c +++ b/tcg/arm/tcg-target.inc.c @@ -1435,7 +1435,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, = TCGLabelQemuLdst *lb) case MO_UW: argreg =3D tcg_out_arg_reg16(s, argreg, datalo); break; - case MO_32: + case MO_UL: default: argreg =3D tcg_out_arg_reg32(s, argreg, datalo); break; @@ -1632,7 +1632,7 @@ static inline void tcg_out_qemu_st_index(TCGContext *= s, int cond, TCGMemOp opc, tcg_out_st16_r(s, cond, datalo, addrlo, addend); } break; - case MO_32: + case MO_UL: 
default: if (bswap) { tcg_out_bswap32(s, cond, TCG_REG_R0, datalo); @@ -1677,7 +1677,7 @@ static inline void tcg_out_qemu_st_direct(TCGContext = *s, TCGMemOp opc, tcg_out_st16_8(s, COND_AL, datalo, addrlo, 0); } break; - case MO_32: + case MO_UL: default: if (bswap) { tcg_out_bswap32(s, COND_AL, TCG_REG_R0, datalo); diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c index 31c3664..93e4c63 100644 --- a/tcg/i386/tcg-target.inc.c +++ b/tcg/i386/tcg-target.inc.c @@ -897,7 +897,7 @@ static bool tcg_out_dup_vec(TCGContext *s, TCGType type= , unsigned vece, tcg_out_vex_modrm(s, OPC_PUNPCKLWD, r, a, a); a =3D r; /* FALLTHRU */ - case MO_32: + case MO_UL: tcg_out_vex_modrm(s, OPC_PSHUFD, r, 0, a); /* imm8 operand: all output lanes selected from input lane 0. = */ tcg_out8(s, 0); @@ -924,7 +924,7 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType typ= e, unsigned vece, case MO_64: tcg_out_vex_modrm_offset(s, OPC_MOVDDUP, r, 0, base, offset); break; - case MO_32: + case MO_UL: tcg_out_vex_modrm_offset(s, OPC_VBROADCASTSS, r, 0, base, offs= et); break; case MO_UW: @@ -2173,7 +2173,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCG= Reg datalo, TCGReg datahi, tcg_out_modrm_sib_offset(s, movop + P_DATA16 + seg, datalo, base, index, 0, ofs); break; - case MO_32: + case MO_UL: if (bswap) { tcg_out_mov(s, TCG_TYPE_I32, scratch, datalo); tcg_out_bswap32(s, scratch); @@ -2927,7 +2927,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode o= pc, case INDEX_op_x86_blend_vec: if (vece =3D=3D MO_UW) { insn =3D OPC_PBLENDW; - } else if (vece =3D=3D MO_32) { + } else if (vece =3D=3D MO_UL) { insn =3D (have_avx2 ? 
OPC_VPBLENDD : OPC_BLENDPS); } else { g_assert_not_reached(); @@ -3292,13 +3292,13 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type= , unsigned vece) case INDEX_op_shrs_vec: return vece >=3D MO_UW; case INDEX_op_sars_vec: - return vece >=3D MO_UW && vece <=3D MO_32; + return vece >=3D MO_UW && vece <=3D MO_UL; case INDEX_op_shlv_vec: case INDEX_op_shrv_vec: - return have_avx2 && vece >=3D MO_32; + return have_avx2 && vece >=3D MO_UL; case INDEX_op_sarv_vec: - return have_avx2 && vece =3D=3D MO_32; + return have_avx2 && vece =3D=3D MO_UL; case INDEX_op_mul_vec: if (vece =3D=3D MO_UB) { @@ -3320,7 +3320,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, = unsigned vece) case INDEX_op_umin_vec: case INDEX_op_umax_vec: case INDEX_op_abs_vec: - return vece <=3D MO_32; + return vece <=3D MO_UL; default: return 0; @@ -3396,9 +3396,9 @@ static void expand_vec_sari(TCGType type, unsigned ve= ce, * shift (note that the ISA says shift of 32 is valid). */ t1 =3D tcg_temp_new_vec(type); - tcg_gen_sari_vec(MO_32, t1, v1, imm); + tcg_gen_sari_vec(MO_UL, t1, v1, imm); tcg_gen_shri_vec(MO_64, v0, v1, imm); - vec_gen_4(INDEX_op_x86_blend_vec, type, MO_32, + vec_gen_4(INDEX_op_x86_blend_vec, type, MO_UL, tcgv_vec_arg(v0), tcgv_vec_arg(v0), tcgv_vec_arg(t1), 0xaa); tcg_temp_free_vec(t1); @@ -3515,28 +3515,28 @@ static bool expand_vec_cmp_noinv(TCGType type, unsi= gned vece, TCGv_vec v0, fixup =3D NEED_SWAP | NEED_INV; break; case TCG_COND_LEU: - if (vece <=3D MO_32) { + if (vece <=3D MO_UL) { fixup =3D NEED_UMIN; } else { fixup =3D NEED_BIAS | NEED_INV; } break; case TCG_COND_GTU: - if (vece <=3D MO_32) { + if (vece <=3D MO_UL) { fixup =3D NEED_UMIN | NEED_INV; } else { fixup =3D NEED_BIAS; } break; case TCG_COND_GEU: - if (vece <=3D MO_32) { + if (vece <=3D MO_UL) { fixup =3D NEED_UMAX; } else { fixup =3D NEED_BIAS | NEED_SWAP | NEED_INV; } break; case TCG_COND_LTU: - if (vece <=3D MO_32) { + if (vece <=3D MO_UL) { fixup =3D NEED_UMAX | NEED_INV; } else { fixup =3D 
NEED_BIAS | NEED_SWAP; diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c index 1780cb1..a78fe87 100644 --- a/tcg/mips/tcg-target.inc.c +++ b/tcg/mips/tcg-target.inc.c @@ -1386,7 +1386,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, = TCGLabelQemuLdst *l) case MO_UW: i =3D tcg_out_call_iarg_reg16(s, i, l->datalo_reg); break; - case MO_32: + case MO_UL: i =3D tcg_out_call_iarg_reg(s, i, l->datalo_reg); break; case MO_64: @@ -1579,11 +1579,11 @@ static void tcg_out_qemu_st_direct(TCGContext *s, T= CGReg lo, TCGReg hi, tcg_out_opc_imm(s, OPC_SH, lo, base, 0); break; - case MO_32 | MO_BSWAP: + case MO_UL | MO_BSWAP: tcg_out_bswap32(s, TCG_TMP3, lo); lo =3D TCG_TMP3; /* FALLTHRU */ - case MO_32: + case MO_UL: tcg_out_opc_imm(s, OPC_SW, lo, base, 0); break; diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c index 852b894..835336a 100644 --- a/tcg/ppc/tcg-target.inc.c +++ b/tcg/ppc/tcg-target.inc.c @@ -1714,7 +1714,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, = TCGLabelQemuLdst *lb) #endif tcg_out_mov(s, TCG_TYPE_I32, arg++, hi); /* FALLTHRU */ - case MO_32: + case MO_UL: tcg_out_mov(s, TCG_TYPE_I32, arg++, lo); break; default: diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c index 20bc19d..1905986 100644 --- a/tcg/riscv/tcg-target.inc.c +++ b/tcg/riscv/tcg-target.inc.c @@ -1222,7 +1222,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCG= Reg lo, TCGReg hi, case MO_UW: tcg_out_opc_store(s, OPC_SH, base, lo, 0); break; - case MO_32: + case MO_UL: tcg_out_opc_store(s, OPC_SW, base, lo, 0); break; case MO_64: diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c index 85550b5..ac0d3a3 100644 --- a/tcg/sparc/tcg-target.inc.c +++ b/tcg/sparc/tcg-target.inc.c @@ -889,7 +889,7 @@ static void emit_extend(TCGContext *s, TCGReg r, int op) tcg_out_arithi(s, r, r, 16, SHIFT_SLL); tcg_out_arithi(s, r, r, 16, SHIFT_SRL); break; - case MO_32: + case MO_UL: if (SPARC64) { tcg_out_arith(s, r, r, 
0, SHIFT_SRL); } diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c index da409f5..e63622c 100644 --- a/tcg/tcg-op-gvec.c +++ b/tcg/tcg-op-gvec.c @@ -310,7 +310,7 @@ uint64_t (dup_const)(unsigned vece, uint64_t c) return 0x0101010101010101ull * (uint8_t)c; case MO_UW: return 0x0001000100010001ull * (uint16_t)c; - case MO_32: + case MO_UL: return 0x0000000100000001ull * (uint32_t)c; case MO_64: return c; @@ -330,7 +330,7 @@ static void gen_dup_i32(unsigned vece, TCGv_i32 out, TC= Gv_i32 in) case MO_UW: tcg_gen_deposit_i32(out, in, in, 16, 16); break; - case MO_32: + case MO_UL: tcg_gen_mov_i32(out, in); break; default: @@ -349,7 +349,7 @@ static void gen_dup_i64(unsigned vece, TCGv_i64 out, TC= Gv_i64 in) tcg_gen_ext16u_i64(out, in); tcg_gen_muli_i64(out, out, 0x0001000100010001ull); break; - case MO_32: + case MO_UL: tcg_gen_deposit_i64(out, in, in, 32, 32); break; case MO_64: @@ -443,7 +443,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32= _t oprsz, TCGv_ptr t_ptr; uint32_t i; - assert(vece <=3D (in_32 ? MO_32 : MO_64)); + assert(vece <=3D (in_32 ? MO_UL : MO_64)); assert(in_32 =3D=3D NULL || in_64 =3D=3D NULL); /* If we're storing 0, expand oprsz to maxsz. */ @@ -485,7 +485,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32= _t oprsz, use a 64-bit operation unless the 32-bit operation would be simple enough. 
*/ if (TCG_TARGET_REG_BITS =3D=3D 64 - && (vece !=3D MO_32 || !check_size_impl(oprsz, 4))) { + && (vece !=3D MO_UL || !check_size_impl(oprsz, 4))) { t_64 =3D tcg_temp_new_i64(); tcg_gen_extu_i32_i64(t_64, in_32); gen_dup_i64(vece, t_64, t_64); @@ -1430,7 +1430,7 @@ void tcg_gen_gvec_dup_i32(unsigned vece, uint32_t dof= s, uint32_t oprsz, uint32_t maxsz, TCGv_i32 in) { check_size_align(oprsz, maxsz, dofs); - tcg_debug_assert(vece <=3D MO_32); + tcg_debug_assert(vece <=3D MO_UL); do_dup(vece, dofs, oprsz, maxsz, in, NULL, 0); } @@ -1453,7 +1453,7 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dof= s, uint32_t aofs, tcg_gen_dup_mem_vec(vece, t_vec, cpu_env, aofs); do_dup_store(type, dofs, oprsz, maxsz, t_vec); tcg_temp_free_vec(t_vec); - } else if (vece <=3D MO_32) { + } else if (vece <=3D MO_UL) { TCGv_i32 in =3D tcg_temp_new_i32(); switch (vece) { case MO_UB: @@ -1519,7 +1519,7 @@ void tcg_gen_gvec_dup32i(uint32_t dofs, uint32_t oprs= z, uint32_t maxsz, uint32_t x) { check_size_align(oprsz, maxsz, dofs); - do_dup(MO_32, dofs, oprsz, maxsz, NULL, NULL, x); + do_dup(MO_UL, dofs, oprsz, maxsz, NULL, NULL, x); } void tcg_gen_gvec_dup16i(uint32_t dofs, uint32_t oprsz, @@ -1618,7 +1618,7 @@ void tcg_gen_gvec_add(unsigned vece, uint32_t dofs, u= int32_t aofs, .fniv =3D tcg_gen_add_vec, .fno =3D gen_helper_gvec_add32, .opt_opc =3D vecop_list_add, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_add_i64, .fniv =3D tcg_gen_add_vec, .fno =3D gen_helper_gvec_add64, @@ -1649,7 +1649,7 @@ void tcg_gen_gvec_adds(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_add_vec, .fno =3D gen_helper_gvec_adds32, .opt_opc =3D vecop_list_add, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_add_i64, .fniv =3D tcg_gen_add_vec, .fno =3D gen_helper_gvec_adds64, @@ -1690,7 +1690,7 @@ void tcg_gen_gvec_subs(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_sub_vec, .fno =3D gen_helper_gvec_subs32, .opt_opc =3D vecop_list_sub, - .vece 
=3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_sub_i64, .fniv =3D tcg_gen_sub_vec, .fno =3D gen_helper_gvec_subs64, @@ -1769,7 +1769,7 @@ void tcg_gen_gvec_sub(unsigned vece, uint32_t dofs, u= int32_t aofs, .fniv =3D tcg_gen_sub_vec, .fno =3D gen_helper_gvec_sub32, .opt_opc =3D vecop_list_sub, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_sub_i64, .fniv =3D tcg_gen_sub_vec, .fno =3D gen_helper_gvec_sub64, @@ -1800,7 +1800,7 @@ void tcg_gen_gvec_mul(unsigned vece, uint32_t dofs, u= int32_t aofs, .fniv =3D tcg_gen_mul_vec, .fno =3D gen_helper_gvec_mul32, .opt_opc =3D vecop_list_mul, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_mul_i64, .fniv =3D tcg_gen_mul_vec, .fno =3D gen_helper_gvec_mul64, @@ -1829,7 +1829,7 @@ void tcg_gen_gvec_muls(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_mul_vec, .fno =3D gen_helper_gvec_muls32, .opt_opc =3D vecop_list_mul, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_mul_i64, .fniv =3D tcg_gen_mul_vec, .fno =3D gen_helper_gvec_muls64, @@ -1866,7 +1866,7 @@ void tcg_gen_gvec_ssadd(unsigned vece, uint32_t dofs,= uint32_t aofs, { .fniv =3D tcg_gen_ssadd_vec, .fno =3D gen_helper_gvec_ssadd32, .opt_opc =3D vecop_list, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fniv =3D tcg_gen_ssadd_vec, .fno =3D gen_helper_gvec_ssadd64, .opt_opc =3D vecop_list, @@ -1892,7 +1892,7 @@ void tcg_gen_gvec_sssub(unsigned vece, uint32_t dofs,= uint32_t aofs, { .fniv =3D tcg_gen_sssub_vec, .fno =3D gen_helper_gvec_sssub32, .opt_opc =3D vecop_list, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fniv =3D tcg_gen_sssub_vec, .fno =3D gen_helper_gvec_sssub64, .opt_opc =3D vecop_list, @@ -1935,7 +1935,7 @@ void tcg_gen_gvec_usadd(unsigned vece, uint32_t dofs,= uint32_t aofs, .fniv =3D tcg_gen_usadd_vec, .fno =3D gen_helper_gvec_usadd32, .opt_opc =3D vecop_list, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_usadd_i64, .fniv =3D tcg_gen_usadd_vec, .fno =3D gen_helper_gvec_usadd64, @@ 
-1979,7 +1979,7 @@ void tcg_gen_gvec_ussub(unsigned vece, uint32_t dofs,= uint32_t aofs, .fniv =3D tcg_gen_ussub_vec, .fno =3D gen_helper_gvec_ussub32, .opt_opc =3D vecop_list, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_ussub_i64, .fniv =3D tcg_gen_ussub_vec, .fno =3D gen_helper_gvec_ussub64, @@ -2007,7 +2007,7 @@ void tcg_gen_gvec_smin(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_smin_vec, .fno =3D gen_helper_gvec_smin32, .opt_opc =3D vecop_list, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_smin_i64, .fniv =3D tcg_gen_smin_vec, .fno =3D gen_helper_gvec_smin64, @@ -2035,7 +2035,7 @@ void tcg_gen_gvec_umin(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_umin_vec, .fno =3D gen_helper_gvec_umin32, .opt_opc =3D vecop_list, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_umin_i64, .fniv =3D tcg_gen_umin_vec, .fno =3D gen_helper_gvec_umin64, @@ -2063,7 +2063,7 @@ void tcg_gen_gvec_smax(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_smax_vec, .fno =3D gen_helper_gvec_smax32, .opt_opc =3D vecop_list, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_smax_i64, .fniv =3D tcg_gen_smax_vec, .fno =3D gen_helper_gvec_smax64, @@ -2091,7 +2091,7 @@ void tcg_gen_gvec_umax(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_umax_vec, .fno =3D gen_helper_gvec_umax32, .opt_opc =3D vecop_list, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_umax_i64, .fniv =3D tcg_gen_umax_vec, .fno =3D gen_helper_gvec_umax64, @@ -2165,7 +2165,7 @@ void tcg_gen_gvec_neg(unsigned vece, uint32_t dofs, u= int32_t aofs, .fniv =3D tcg_gen_neg_vec, .fno =3D gen_helper_gvec_neg32, .opt_opc =3D vecop_list, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_neg_i64, .fniv =3D tcg_gen_neg_vec, .fno =3D gen_helper_gvec_neg64, @@ -2228,7 +2228,7 @@ void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, u= int32_t aofs, .fniv =3D tcg_gen_abs_vec, .fno =3D 
gen_helper_gvec_abs32, .opt_opc =3D vecop_list, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_abs_i64, .fniv =3D tcg_gen_abs_vec, .fno =3D gen_helper_gvec_abs64, @@ -2485,7 +2485,7 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_shli_vec, .fno =3D gen_helper_gvec_shl32i, .opt_opc =3D vecop_list, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_shli_i64, .fniv =3D tcg_gen_shli_vec, .fno =3D gen_helper_gvec_shl64i, @@ -2536,7 +2536,7 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_shri_vec, .fno =3D gen_helper_gvec_shr32i, .opt_opc =3D vecop_list, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_shri_i64, .fniv =3D tcg_gen_shri_vec, .fno =3D gen_helper_gvec_shr64i, @@ -2601,7 +2601,7 @@ void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_sari_vec, .fno =3D gen_helper_gvec_sar32i, .opt_opc =3D vecop_list, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_sari_i64, .fniv =3D tcg_gen_sari_vec, .fno =3D gen_helper_gvec_sar64i, @@ -2736,7 +2736,7 @@ do_gvec_shifts(unsigned vece, uint32_t dofs, uint32_t= aofs, TCGv_i32 shift, } /* Otherwise fall back to integral... 
*/ - if (vece =3D=3D MO_32 && check_size_impl(oprsz, 4)) { + if (vece =3D=3D MO_UL && check_size_impl(oprsz, 4)) { expand_2s_i32(dofs, aofs, oprsz, shift, false, g->fni4); } else if (vece =3D=3D MO_64 && check_size_impl(oprsz, 8)) { TCGv_i64 sh64 =3D tcg_temp_new_i64(); @@ -2889,7 +2889,7 @@ void tcg_gen_gvec_shlv(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_shlv_mod_vec, .fno =3D gen_helper_gvec_shl32v, .opt_opc =3D vecop_list, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_shl_mod_i64, .fniv =3D tcg_gen_shlv_mod_vec, .fno =3D gen_helper_gvec_shl64v, @@ -2952,7 +2952,7 @@ void tcg_gen_gvec_shrv(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_shrv_mod_vec, .fno =3D gen_helper_gvec_shr32v, .opt_opc =3D vecop_list, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_shr_mod_i64, .fniv =3D tcg_gen_shrv_mod_vec, .fno =3D gen_helper_gvec_shr64v, @@ -3015,7 +3015,7 @@ void tcg_gen_gvec_sarv(unsigned vece, uint32_t dofs, = uint32_t aofs, .fniv =3D tcg_gen_sarv_mod_vec, .fno =3D gen_helper_gvec_sar32v, .opt_opc =3D vecop_list, - .vece =3D MO_32 }, + .vece =3D MO_UL }, { .fni8 =3D tcg_gen_sar_mod_i64, .fniv =3D tcg_gen_sarv_mod_vec, .fno =3D gen_helper_gvec_sar64v, @@ -3168,7 +3168,7 @@ void tcg_gen_gvec_cmp(TCGCond cond, unsigned vece, ui= nt32_t dofs, case 0: if (vece =3D=3D MO_64 && check_size_impl(oprsz, 8)) { expand_cmp_i64(dofs, aofs, bofs, oprsz, cond); - } else if (vece =3D=3D MO_32 && check_size_impl(oprsz, 4)) { + } else if (vece =3D=3D MO_UL && check_size_impl(oprsz, 4)) { expand_cmp_i32(dofs, aofs, bofs, oprsz, cond); } else { gen_helper_gvec_3 * const *fn =3D fns[cond]; diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c index b0a4d98..ff723ab 100644 --- a/tcg/tcg-op-vec.c +++ b/tcg/tcg-op-vec.c @@ -216,7 +216,7 @@ void tcg_gen_mov_vec(TCGv_vec r, TCGv_vec a) } } -#define MO_REG (TCG_TARGET_REG_BITS =3D=3D 64 ? MO_64 : MO_32) +#define MO_REG (TCG_TARGET_REG_BITS =3D=3D 64 ? 
MO_64 : MO_UL) static void do_dupi_vec(TCGv_vec r, unsigned vece, TCGArg a) { @@ -253,7 +253,7 @@ TCGv_vec tcg_const_ones_vec_matching(TCGv_vec m) void tcg_gen_dup64i_vec(TCGv_vec r, uint64_t a) { if (TCG_TARGET_REG_BITS == 32 && a == deposit64(a, 32, 32, a)) { - do_dupi_vec(r, MO_32, a); + do_dupi_vec(r, MO_UL, a); } else if (TCG_TARGET_REG_BITS == 64 || a == (uint64_t)(int32_t)a) { do_dupi_vec(r, MO_64, a); } else { @@ -265,7 +265,7 @@ void tcg_gen_dup64i_vec(TCGv_vec r, uint64_t a) void tcg_gen_dup32i_vec(TCGv_vec r, uint32_t a) { - do_dupi_vec(r, MO_REG, dup_const(MO_32, a)); + do_dupi_vec(r, MO_REG, dup_const(MO_UL, a)); } void tcg_gen_dup16i_vec(TCGv_vec r, uint32_t a) diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c index 21d448c..447683d 100644 --- a/tcg/tcg-op.c +++ b/tcg/tcg-op.c @@ -2725,7 +2725,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st) break; case MO_UW: break; - case MO_32: + case MO_UL: if (!is64) { op &= ~MO_SIGN; } @@ -2816,7 +2816,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop) tcg_gen_ext16s_i32(val, val); } break; - case MO_32: + case MO_UL: tcg_gen_bswap32_i32(val, val); break; default: @@ -2841,7 +2841,7 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop) tcg_gen_ext16u_i32(swap, val); tcg_gen_bswap16_i32(swap, swap); break; - case MO_32: + case MO_UL: tcg_gen_bswap32_i32(swap, val); break; default: @@ -2896,7 +2896,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop) tcg_gen_ext16s_i64(val, val); } break; - case MO_32: + case MO_UL: tcg_gen_bswap32_i64(val, val); if (orig_memop & MO_SIGN) { tcg_gen_ext32s_i64(val, val); @@ -2932,7 +2932,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop) tcg_gen_ext16u_i64(swap, val); tcg_gen_bswap16_i64(swap, swap); break; - case MO_32: + case MO_UL: tcg_gen_ext32u_i64(swap, val); tcg_gen_bswap32_i64(swap, swap); break; @@
-3027,8 +3027,8 @@ static void * const table_cmpxchg[16] = { [MO_UB] = gen_helper_atomic_cmpxchgb, [MO_UW | MO_LE] = gen_helper_atomic_cmpxchgw_le, [MO_UW | MO_BE] = gen_helper_atomic_cmpxchgw_be, - [MO_32 | MO_LE] = gen_helper_atomic_cmpxchgl_le, - [MO_32 | MO_BE] = gen_helper_atomic_cmpxchgl_be, + [MO_UL | MO_LE] = gen_helper_atomic_cmpxchgl_le, + [MO_UL | MO_BE] = gen_helper_atomic_cmpxchgl_be, WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_cmpxchgq_le) WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_cmpxchgq_be) }; @@ -3251,8 +3251,8 @@ static void * const table_##NAME[16] = { \ [MO_UB] = gen_helper_atomic_##NAME##b, \ [MO_UW | MO_LE] = gen_helper_atomic_##NAME##w_le, \ [MO_UW | MO_BE] = gen_helper_atomic_##NAME##w_be, \ - [MO_32 | MO_LE] = gen_helper_atomic_##NAME##l_le, \ - [MO_32 | MO_BE] = gen_helper_atomic_##NAME##l_be, \ + [MO_UL | MO_LE] = gen_helper_atomic_##NAME##l_le, \ + [MO_UL | MO_BE] = gen_helper_atomic_##NAME##l_be, \ WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_##NAME##q_le) \ WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be) \ }; \ diff --git a/tcg/tcg.h b/tcg/tcg.h index a378887..4b6ee89 100644 --- a/tcg/tcg.h +++ b/tcg/tcg.h @@ -1304,7 +1304,7 @@ uint64_t dup_const(unsigned vece, uint64_t c); (__builtin_constant_p(VECE) \ ? ((VECE) == MO_UB ? 0x0101010101010101ull * (uint8_t)(C) \ : (VECE) == MO_UW ? 0x0001000100010001ull * (uint16_t)(C) \ - : (VECE) == MO_32 ? 0x0000000100000001ull * (uint32_t)(C) \ + : (VECE) == MO_UL ? 
0x0000000100000001ull * (uint32_t)(C) \ : dup_const(VECE, C)) \ : dup_const(VECE, C)) -- 1.8.3.1 

From nobody Mon Feb 9 05:43:11 2026
Subject: [Qemu-devel] [PATCH v2 04/20] tcg: Replace MO_64 with MO_UQ alias
Date: Mon, 22 Jul 2019 15:42:43 +0000
Message-ID: <1563810162081.40323@bt.com>
Content-Type: text/plain; charset="utf-8"

Preparation for splitting MO_64 out from TCGMemOp into new accelerator independent MemOp. As MO_64 will be a value of MemOp, existing TCGMemOp comparisons and coercions will trigger -Wenum-compare and -Wenum-conversion. 
Signed-off-by: Tony Nguyen --- target/arm/sve_helper.c | 2 +- target/arm/translate-a64.c | 270 ++++++++++++++++++------------------ target/arm/translate-sve.c | 18 +- target/arm/translate-vfp.inc.c | 4 +- target/arm/translate.c | 30 +- target/i386/translate.c | 122 ++++++++-------- target/mips/translate.c | 2 +- target/ppc/translate.c | 28 ++-- target/ppc/translate/fp-impl.inc.c | 4 +- target/ppc/translate/vmx-impl.inc.c | 34 ++--- target/ppc/translate/vsx-impl.inc.c | 18 +-- target/s390x/translate.c | 4 +- target/s390x/translate_vx.inc.c | 6 +- target/s390x/vec.h | 4 +- target/sparc/translate.c | 4 +- tcg/aarch64/tcg-target.inc.c | 20 +-- tcg/arm/tcg-target.inc.c | 12 +- tcg/i386/tcg-target.inc.c | 42 +++--- tcg/mips/tcg-target.inc.c | 12 +- tcg/ppc/tcg-target.inc.c | 18 +-- tcg/riscv/tcg-target.inc.c | 6 +- tcg/s390/tcg-target.inc.c | 10 +- tcg/sparc/tcg-target.inc.c | 8 +- tcg/tcg-op-gvec.c | 132 +++++++++--------- tcg/tcg-op-vec.c | 14 +- tcg/tcg-op.c | 24 ++-- tcg/tcg.h | 9 +- 27 files changed, 430 insertions(+), 427 deletions(-) diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c index fa705c4..1cfd746 100644 --- a/target/arm/sve_helper.c +++ b/target/arm/sve_helper.c @@ -5165,7 +5165,7 @@ static inline void sve_ldff1_zd(CPUARMState *env, void *vd, void *vg, void *vm, target_ulong addr; /* Skip to the first true predicate. */ - reg_off = find_next_active(vg, 0, reg_max, MO_64); + reg_off = find_next_active(vg, 0, reg_max, MO_UQ); if (likely(reg_off < reg_max)) { /* Perform one normal read, which will fault or not. 
*/ set_helper_retaddr(ra); diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c index 0b92e6d..3f9d103 100644 --- a/target/arm/translate-a64.c +++ b/target/arm/translate-a64.c @@ -463,7 +463,7 @@ static inline int fp_reg_offset(DisasContext *s, int re= gno, TCGMemOp size) /* Offset of the high half of the 128 bit vector Qn */ static inline int fp_reg_hi_offset(DisasContext *s, int regno) { - return vec_reg_offset(s, regno, 1, MO_64); + return vec_reg_offset(s, regno, 1, MO_UQ); } /* Convenience accessors for reading and writing single and double @@ -476,7 +476,7 @@ static TCGv_i64 read_fp_dreg(DisasContext *s, int reg) { TCGv_i64 v =3D tcg_temp_new_i64(); - tcg_gen_ld_i64(v, cpu_env, fp_reg_offset(s, reg, MO_64)); + tcg_gen_ld_i64(v, cpu_env, fp_reg_offset(s, reg, MO_UQ)); return v; } @@ -501,7 +501,7 @@ static TCGv_i32 read_fp_hreg(DisasContext *s, int reg) */ static void clear_vec_high(DisasContext *s, bool is_q, int rd) { - unsigned ofs =3D fp_reg_offset(s, rd, MO_64); + unsigned ofs =3D fp_reg_offset(s, rd, MO_UQ); unsigned vsz =3D vec_full_reg_size(s); if (!is_q) { @@ -516,7 +516,7 @@ static void clear_vec_high(DisasContext *s, bool is_q, = int rd) void write_fp_dreg(DisasContext *s, int reg, TCGv_i64 v) { - unsigned ofs =3D fp_reg_offset(s, reg, MO_64); + unsigned ofs =3D fp_reg_offset(s, reg, MO_UQ); tcg_gen_st_i64(v, cpu_env, ofs); clear_vec_high(s, false, reg); @@ -918,7 +918,7 @@ static void do_fp_st(DisasContext *s, int srcidx, TCGv_= i64 tcg_addr, int size) { /* This writes the bottom N bits of a 128 bit wide vector to memory */ TCGv_i64 tmp =3D tcg_temp_new_i64(); - tcg_gen_ld_i64(tmp, cpu_env, fp_reg_offset(s, srcidx, MO_64)); + tcg_gen_ld_i64(tmp, cpu_env, fp_reg_offset(s, srcidx, MO_UQ)); if (size < 4) { tcg_gen_qemu_st_i64(tmp, tcg_addr, get_mem_index(s), s->be_data + size); @@ -928,10 +928,10 @@ static void do_fp_st(DisasContext *s, int srcidx, TCG= v_i64 tcg_addr, int size) tcg_gen_addi_i64(tcg_hiaddr, tcg_addr, 8); 
tcg_gen_qemu_st_i64(tmp, be ? tcg_hiaddr : tcg_addr, get_mem_index= (s), - s->be_data | MO_Q); + s->be_data | MO_UQ); tcg_gen_ld_i64(tmp, cpu_env, fp_reg_hi_offset(s, srcidx)); tcg_gen_qemu_st_i64(tmp, be ? tcg_addr : tcg_hiaddr, get_mem_index= (s), - s->be_data | MO_Q); + s->be_data | MO_UQ); tcg_temp_free_i64(tcg_hiaddr); } @@ -960,13 +960,13 @@ static void do_fp_ld(DisasContext *s, int destidx, TC= Gv_i64 tcg_addr, int size) tcg_gen_addi_i64(tcg_hiaddr, tcg_addr, 8); tcg_gen_qemu_ld_i64(tmplo, be ? tcg_hiaddr : tcg_addr, get_mem_ind= ex(s), - s->be_data | MO_Q); + s->be_data | MO_UQ); tcg_gen_qemu_ld_i64(tmphi, be ? tcg_addr : tcg_hiaddr, get_mem_ind= ex(s), - s->be_data | MO_Q); + s->be_data | MO_UQ); tcg_temp_free_i64(tcg_hiaddr); } - tcg_gen_st_i64(tmplo, cpu_env, fp_reg_offset(s, destidx, MO_64)); + tcg_gen_st_i64(tmplo, cpu_env, fp_reg_offset(s, destidx, MO_UQ)); tcg_gen_st_i64(tmphi, cpu_env, fp_reg_hi_offset(s, destidx)); tcg_temp_free_i64(tmplo); @@ -1011,8 +1011,8 @@ static void read_vec_element(DisasContext *s, TCGv_i6= 4 tcg_dest, int srcidx, case MO_SL: tcg_gen_ld32s_i64(tcg_dest, cpu_env, vect_off); break; - case MO_64: - case MO_64|MO_SIGN: + case MO_UQ: + case MO_SQ: tcg_gen_ld_i64(tcg_dest, cpu_env, vect_off); break; default: @@ -1061,7 +1061,7 @@ static void write_vec_element(DisasContext *s, TCGv_i= 64 tcg_src, int destidx, case MO_UL: tcg_gen_st32_i64(tcg_src, cpu_env, vect_off); break; - case MO_64: + case MO_UQ: tcg_gen_st_i64(tcg_src, cpu_env, vect_off); break; default: @@ -2207,7 +2207,7 @@ static void gen_load_exclusive(DisasContext *s, int r= t, int rt2, g_assert(size >=3D 2); if (size =3D=3D 2) { /* The pair must be single-copy atomic for the doubleword. 
*/ - memop |=3D MO_64 | MO_ALIGN; + memop |=3D MO_UQ | MO_ALIGN; tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop); if (s->be_data =3D=3D MO_LE) { tcg_gen_extract_i64(cpu_reg(s, rt), cpu_exclusive_val, 0, = 32); @@ -2219,7 +2219,7 @@ static void gen_load_exclusive(DisasContext *s, int r= t, int rt2, } else { /* The pair must be single-copy atomic for *each* doubleword, = not the entire quadword, however it must be quadword aligned. = */ - memop |=3D MO_64; + memop |=3D MO_UQ; tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop | MO_ALIGN_16); @@ -2271,7 +2271,7 @@ static void gen_store_exclusive(DisasContext *s, int = rd, int rt, int rt2, tcg_gen_atomic_cmpxchg_i64(tmp, cpu_exclusive_addr, cpu_exclusive_val, tmp, get_mem_index(s), - MO_64 | MO_ALIGN | s->be_data); + MO_UQ | MO_ALIGN | s->be_data); tcg_gen_setcond_i64(TCG_COND_NE, tmp, tmp, cpu_exclusive_val); } else if (tb_cflags(s->base.tb) & CF_PARALLEL) { if (!HAVE_CMPXCHG128) { @@ -2355,7 +2355,7 @@ static void gen_compare_and_swap_pair(DisasContext *s= , int rs, int rt, } tcg_gen_atomic_cmpxchg_i64(cmp, clean_addr, cmp, val, memidx, - MO_64 | MO_ALIGN | s->be_data); + MO_UQ | MO_ALIGN | s->be_data); tcg_temp_free_i64(val); if (s->be_data =3D=3D MO_LE) { @@ -2389,9 +2389,9 @@ static void gen_compare_and_swap_pair(DisasContext *s= , int rs, int rt, /* Load the two words, in memory order. */ tcg_gen_qemu_ld_i64(d1, clean_addr, memidx, - MO_64 | MO_ALIGN_16 | s->be_data); + MO_UQ | MO_ALIGN_16 | s->be_data); tcg_gen_addi_i64(a2, clean_addr, 8); - tcg_gen_qemu_ld_i64(d2, a2, memidx, MO_64 | s->be_data); + tcg_gen_qemu_ld_i64(d2, a2, memidx, MO_UQ | s->be_data); /* Compare the two words, also in memory order. */ tcg_gen_setcond_i64(TCG_COND_EQ, c1, d1, s1); @@ -2401,8 +2401,8 @@ static void gen_compare_and_swap_pair(DisasContext *s= , int rs, int rt, /* If compare equal, write back new data, else write back old data= . 
*/ tcg_gen_movcond_i64(TCG_COND_NE, c1, c2, zero, t1, d1); tcg_gen_movcond_i64(TCG_COND_NE, c2, c2, zero, t2, d2); - tcg_gen_qemu_st_i64(c1, clean_addr, memidx, MO_64 | s->be_data); - tcg_gen_qemu_st_i64(c2, a2, memidx, MO_64 | s->be_data); + tcg_gen_qemu_st_i64(c1, clean_addr, memidx, MO_UQ | s->be_data); + tcg_gen_qemu_st_i64(c2, a2, memidx, MO_UQ | s->be_data); tcg_temp_free_i64(a2); tcg_temp_free_i64(c1); tcg_temp_free_i64(c2); @@ -5271,7 +5271,7 @@ static void handle_fp_compare(DisasContext *s, int si= ze, TCGv_i64 tcg_flags =3D tcg_temp_new_i64(); TCGv_ptr fpst =3D get_fpstatus_ptr(size =3D=3D MO_UW); - if (size =3D=3D MO_64) { + if (size =3D=3D MO_UQ) { TCGv_i64 tcg_vn, tcg_vm; tcg_vn =3D read_fp_dreg(s, rn); @@ -5357,7 +5357,7 @@ static void disas_fp_compare(DisasContext *s, uint32_= t insn) size =3D MO_UL; break; case 1: - size =3D MO_64; + size =3D MO_UQ; break; case 3: size =3D MO_UW; @@ -5408,7 +5408,7 @@ static void disas_fp_ccomp(DisasContext *s, uint32_t = insn) size =3D MO_UL; break; case 1: - size =3D MO_64; + size =3D MO_UQ; break; case 3: size =3D MO_UW; @@ -5474,7 +5474,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t i= nsn) sz =3D MO_UL; break; case 1: - sz =3D MO_64; + sz =3D MO_UQ; break; case 3: sz =3D MO_UW; @@ -6279,7 +6279,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t in= sn) sz =3D MO_UL; break; case 1: - sz =3D MO_64; + sz =3D MO_UQ; break; case 3: sz =3D MO_UW; @@ -6585,7 +6585,7 @@ static void handle_fmov(DisasContext *s, int rd, int = rn, int type, bool itof) break; case 1: /* 64 bit */ - tcg_gen_ld_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_64)); + tcg_gen_ld_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_UQ)); break; case 2: /* 64 bits from top half */ @@ -6819,9 +6819,9 @@ static void disas_simd_ext(DisasContext *s, uint32_t = insn) * extracting 64 bits from a 64:64 concatenation. 
*/ if (!is_q) { - read_vec_element(s, tcg_resl, rn, 0, MO_64); + read_vec_element(s, tcg_resl, rn, 0, MO_UQ); if (pos !=3D 0) { - read_vec_element(s, tcg_resh, rm, 0, MO_64); + read_vec_element(s, tcg_resh, rm, 0, MO_UQ); do_ext64(s, tcg_resh, tcg_resl, pos); } tcg_gen_movi_i64(tcg_resh, 0); @@ -6839,22 +6839,22 @@ static void disas_simd_ext(DisasContext *s, uint32_= t insn) pos -=3D 64; } - read_vec_element(s, tcg_resl, elt->reg, elt->elt, MO_64); + read_vec_element(s, tcg_resl, elt->reg, elt->elt, MO_UQ); elt++; - read_vec_element(s, tcg_resh, elt->reg, elt->elt, MO_64); + read_vec_element(s, tcg_resh, elt->reg, elt->elt, MO_UQ); elt++; if (pos !=3D 0) { do_ext64(s, tcg_resh, tcg_resl, pos); tcg_hh =3D tcg_temp_new_i64(); - read_vec_element(s, tcg_hh, elt->reg, elt->elt, MO_64); + read_vec_element(s, tcg_hh, elt->reg, elt->elt, MO_UQ); do_ext64(s, tcg_hh, tcg_resh, pos); tcg_temp_free_i64(tcg_hh); } } - write_vec_element(s, tcg_resl, rd, 0, MO_64); + write_vec_element(s, tcg_resl, rd, 0, MO_UQ); tcg_temp_free_i64(tcg_resl); - write_vec_element(s, tcg_resh, rd, 1, MO_64); + write_vec_element(s, tcg_resh, rd, 1, MO_UQ); tcg_temp_free_i64(tcg_resh); } @@ -6895,12 +6895,12 @@ static void disas_simd_tb(DisasContext *s, uint32_t= insn) tcg_resh =3D tcg_temp_new_i64(); if (is_tblx) { - read_vec_element(s, tcg_resl, rd, 0, MO_64); + read_vec_element(s, tcg_resl, rd, 0, MO_UQ); } else { tcg_gen_movi_i64(tcg_resl, 0); } if (is_tblx && is_q) { - read_vec_element(s, tcg_resh, rd, 1, MO_64); + read_vec_element(s, tcg_resh, rd, 1, MO_UQ); } else { tcg_gen_movi_i64(tcg_resh, 0); } @@ -6908,11 +6908,11 @@ static void disas_simd_tb(DisasContext *s, uint32_t= insn) tcg_idx =3D tcg_temp_new_i64(); tcg_regno =3D tcg_const_i32(rn); tcg_numregs =3D tcg_const_i32(len + 1); - read_vec_element(s, tcg_idx, rm, 0, MO_64); + read_vec_element(s, tcg_idx, rm, 0, MO_UQ); gen_helper_simd_tbl(tcg_resl, cpu_env, tcg_resl, tcg_idx, tcg_regno, tcg_numregs); if (is_q) { - read_vec_element(s, 
tcg_idx, rm, 1, MO_64); + read_vec_element(s, tcg_idx, rm, 1, MO_UQ); gen_helper_simd_tbl(tcg_resh, cpu_env, tcg_resh, tcg_idx, tcg_regno, tcg_numregs); } @@ -6920,9 +6920,9 @@ static void disas_simd_tb(DisasContext *s, uint32_t i= nsn) tcg_temp_free_i32(tcg_regno); tcg_temp_free_i32(tcg_numregs); - write_vec_element(s, tcg_resl, rd, 0, MO_64); + write_vec_element(s, tcg_resl, rd, 0, MO_UQ); tcg_temp_free_i64(tcg_resl); - write_vec_element(s, tcg_resh, rd, 1, MO_64); + write_vec_element(s, tcg_resh, rd, 1, MO_UQ); tcg_temp_free_i64(tcg_resh); } @@ -7009,9 +7009,9 @@ static void disas_simd_zip_trn(DisasContext *s, uint3= 2_t insn) tcg_temp_free_i64(tcg_res); - write_vec_element(s, tcg_resl, rd, 0, MO_64); + write_vec_element(s, tcg_resl, rd, 0, MO_UQ); tcg_temp_free_i64(tcg_resl); - write_vec_element(s, tcg_resh, rd, 1, MO_64); + write_vec_element(s, tcg_resh, rd, 1, MO_UQ); tcg_temp_free_i64(tcg_resh); } @@ -7625,9 +7625,9 @@ static void disas_simd_mod_imm(DisasContext *s, uint3= 2_t insn) } else { /* ORR or BIC, with BIC negation to AND handled above. */ if (is_neg) { - gen_gvec_fn2i(s, is_q, rd, rd, imm, tcg_gen_gvec_andi, MO_64); + gen_gvec_fn2i(s, is_q, rd, rd, imm, tcg_gen_gvec_andi, MO_UQ); } else { - gen_gvec_fn2i(s, is_q, rd, rd, imm, tcg_gen_gvec_ori, MO_64); + gen_gvec_fn2i(s, is_q, rd, rd, imm, tcg_gen_gvec_ori, MO_UQ); } } } @@ -7702,7 +7702,7 @@ static void disas_simd_scalar_pairwise(DisasContext *= s, uint32_t insn) size =3D MO_UW; } } else { - size =3D extract32(size, 0, 1) ? MO_64 : MO_UL; + size =3D extract32(size, 0, 1) ? 
MO_UQ : MO_UL; } if (!fp_access_check(s)) { @@ -7716,13 +7716,13 @@ static void disas_simd_scalar_pairwise(DisasContext= *s, uint32_t insn) return; } - if (size =3D=3D MO_64) { + if (size =3D=3D MO_UQ) { TCGv_i64 tcg_op1 =3D tcg_temp_new_i64(); TCGv_i64 tcg_op2 =3D tcg_temp_new_i64(); TCGv_i64 tcg_res =3D tcg_temp_new_i64(); - read_vec_element(s, tcg_op1, rn, 0, MO_64); - read_vec_element(s, tcg_op2, rn, 1, MO_64); + read_vec_element(s, tcg_op1, rn, 0, MO_UQ); + read_vec_element(s, tcg_op2, rn, 1, MO_UQ); switch (opcode) { case 0x3b: /* ADDP */ @@ -8085,9 +8085,9 @@ static void handle_vec_simd_sqshrn(DisasContext *s, b= ool is_scalar, bool is_q, } if (!is_q) { - write_vec_element(s, tcg_final, rd, 0, MO_64); + write_vec_element(s, tcg_final, rd, 0, MO_UQ); } else { - write_vec_element(s, tcg_final, rd, 1, MO_64); + write_vec_element(s, tcg_final, rd, 1, MO_UQ); } if (round) { @@ -8155,9 +8155,9 @@ static void handle_simd_qshl(DisasContext *s, bool sc= alar, bool is_q, for (pass =3D 0; pass < maxpass; pass++) { TCGv_i64 tcg_op =3D tcg_temp_new_i64(); - read_vec_element(s, tcg_op, rn, pass, MO_64); + read_vec_element(s, tcg_op, rn, pass, MO_UQ); genfn(tcg_op, cpu_env, tcg_op, tcg_shift); - write_vec_element(s, tcg_op, rd, pass, MO_64); + write_vec_element(s, tcg_op, rd, pass, MO_UQ); tcg_temp_free_i64(tcg_op); } @@ -8228,11 +8228,11 @@ static void handle_simd_intfp_conv(DisasContext *s,= int rd, int rn, TCGMemOp mop =3D size | (is_signed ? 
MO_SIGN : 0); int pass; - if (fracbits || size =3D=3D MO_64) { + if (fracbits || size =3D=3D MO_UQ) { tcg_shift =3D tcg_const_i32(fracbits); } - if (size =3D=3D MO_64) { + if (size =3D=3D MO_UQ) { TCGv_i64 tcg_int64 =3D tcg_temp_new_i64(); TCGv_i64 tcg_double =3D tcg_temp_new_i64(); @@ -8249,7 +8249,7 @@ static void handle_simd_intfp_conv(DisasContext *s, i= nt rd, int rn, if (elements =3D=3D 1) { write_fp_dreg(s, rd, tcg_double); } else { - write_vec_element(s, tcg_double, rd, pass, MO_64); + write_vec_element(s, tcg_double, rd, pass, MO_UQ); } } @@ -8331,7 +8331,7 @@ static void handle_simd_shift_intfp_conv(DisasContext= *s, bool is_scalar, int immhb =3D immh << 3 | immb; if (immh & 8) { - size =3D MO_64; + size =3D MO_UQ; if (!is_scalar && !is_q) { unallocated_encoding(s); return; @@ -8376,7 +8376,7 @@ static void handle_simd_shift_fpint_conv(DisasContext= *s, bool is_scalar, TCGv_i32 tcg_rmode, tcg_shift; if (immh & 0x8) { - size =3D MO_64; + size =3D MO_UQ; if (!is_scalar && !is_q) { unallocated_encoding(s); return; @@ -8408,19 +8408,19 @@ static void handle_simd_shift_fpint_conv(DisasConte= xt *s, bool is_scalar, fracbits =3D (16 << size) - immhb; tcg_shift =3D tcg_const_i32(fracbits); - if (size =3D=3D MO_64) { + if (size =3D=3D MO_UQ) { int maxpass =3D is_scalar ? 
1 : 2; for (pass =3D 0; pass < maxpass; pass++) { TCGv_i64 tcg_op =3D tcg_temp_new_i64(); - read_vec_element(s, tcg_op, rn, pass, MO_64); + read_vec_element(s, tcg_op, rn, pass, MO_UQ); if (is_u) { gen_helper_vfp_touqd(tcg_op, tcg_op, tcg_shift, tcg_fpstat= us); } else { gen_helper_vfp_tosqd(tcg_op, tcg_op, tcg_shift, tcg_fpstat= us); } - write_vec_element(s, tcg_op, rd, pass, MO_64); + write_vec_element(s, tcg_op, rd, pass, MO_UQ); tcg_temp_free_i64(tcg_op); } clear_vec_high(s, is_q, rd); @@ -8601,7 +8601,7 @@ static void disas_simd_scalar_three_reg_diff(DisasCon= text *s, uint32_t insn) tcg_gen_neg_i64(tcg_res, tcg_res); /* fall through */ case 0x9: /* SQDMLAL, SQDMLAL2 */ - read_vec_element(s, tcg_op1, rd, 0, MO_64); + read_vec_element(s, tcg_op1, rd, 0, MO_UQ); gen_helper_neon_addl_saturate_s64(tcg_res, cpu_env, tcg_res, tcg_op1); break; @@ -8751,8 +8751,8 @@ static void handle_3same_float(DisasContext *s, int s= ize, int elements, TCGv_i64 tcg_op2 =3D tcg_temp_new_i64(); TCGv_i64 tcg_res =3D tcg_temp_new_i64(); - read_vec_element(s, tcg_op1, rn, pass, MO_64); - read_vec_element(s, tcg_op2, rm, pass, MO_64); + read_vec_element(s, tcg_op1, rn, pass, MO_UQ); + read_vec_element(s, tcg_op2, rm, pass, MO_UQ); switch (fpopcode) { case 0x39: /* FMLS */ @@ -8760,7 +8760,7 @@ static void handle_3same_float(DisasContext *s, int s= ize, int elements, gen_helper_vfp_negd(tcg_op1, tcg_op1); /* fall through */ case 0x19: /* FMLA */ - read_vec_element(s, tcg_res, rd, pass, MO_64); + read_vec_element(s, tcg_res, rd, pass, MO_UQ); gen_helper_vfp_muladdd(tcg_res, tcg_op1, tcg_op2, tcg_res, fpst); break; @@ -8820,7 +8820,7 @@ static void handle_3same_float(DisasContext *s, int s= ize, int elements, g_assert_not_reached(); } - write_vec_element(s, tcg_res, rd, pass, MO_64); + write_vec_element(s, tcg_res, rd, pass, MO_UQ); tcg_temp_free_i64(tcg_res); tcg_temp_free_i64(tcg_op1); @@ -8905,7 +8905,7 @@ static void handle_3same_float(DisasContext *s, int s= ize, int elements, TCGv_i64 
tcg_tmp =3D tcg_temp_new_i64(); tcg_gen_extu_i32_i64(tcg_tmp, tcg_res); - write_vec_element(s, tcg_tmp, rd, pass, MO_64); + write_vec_element(s, tcg_tmp, rd, pass, MO_UQ); tcg_temp_free_i64(tcg_tmp); } else { write_vec_element_i32(s, tcg_res, rd, pass, MO_UL); @@ -9381,7 +9381,7 @@ static void handle_2misc_fcmp_zero(DisasContext *s, i= nt opcode, bool is_scalar, bool is_u, bool is_q, int size, int rn, int rd) { - bool is_double =3D (size =3D=3D MO_64); + bool is_double =3D (size =3D=3D MO_UQ); TCGv_ptr fpst; if (!fp_access_check(s)) { @@ -9419,13 +9419,13 @@ static void handle_2misc_fcmp_zero(DisasContext *s,= int opcode, } for (pass =3D 0; pass < (is_scalar ? 1 : 2); pass++) { - read_vec_element(s, tcg_op, rn, pass, MO_64); + read_vec_element(s, tcg_op, rn, pass, MO_UQ); if (swap) { genfn(tcg_res, tcg_zero, tcg_op, fpst); } else { genfn(tcg_res, tcg_op, tcg_zero, fpst); } - write_vec_element(s, tcg_res, rd, pass, MO_64); + write_vec_element(s, tcg_res, rd, pass, MO_UQ); } tcg_temp_free_i64(tcg_res); tcg_temp_free_i64(tcg_zero); @@ -9526,7 +9526,7 @@ static void handle_2misc_reciprocal(DisasContext *s, = int opcode, int pass; for (pass =3D 0; pass < (is_scalar ? 
1 : 2); pass++) { - read_vec_element(s, tcg_op, rn, pass, MO_64); + read_vec_element(s, tcg_op, rn, pass, MO_UQ); switch (opcode) { case 0x3d: /* FRECPE */ gen_helper_recpe_f64(tcg_res, tcg_op, fpst); @@ -9540,7 +9540,7 @@ static void handle_2misc_reciprocal(DisasContext *s, = int opcode, default: g_assert_not_reached(); } - write_vec_element(s, tcg_res, rd, pass, MO_64); + write_vec_element(s, tcg_res, rd, pass, MO_UQ); } tcg_temp_free_i64(tcg_res); tcg_temp_free_i64(tcg_op); @@ -9615,7 +9615,7 @@ static void handle_2misc_narrow(DisasContext *s, bool= scalar, if (scalar) { read_vec_element(s, tcg_op, rn, pass, size + 1); } else { - read_vec_element(s, tcg_op, rn, pass, MO_64); + read_vec_element(s, tcg_op, rn, pass, MO_UQ); } tcg_res[pass] =3D tcg_temp_new_i32(); @@ -9711,15 +9711,15 @@ static void handle_2misc_satacc(DisasContext *s, bo= ol is_scalar, bool is_u, int pass; for (pass =3D 0; pass < (is_scalar ? 1 : 2); pass++) { - read_vec_element(s, tcg_rn, rn, pass, MO_64); - read_vec_element(s, tcg_rd, rd, pass, MO_64); + read_vec_element(s, tcg_rn, rn, pass, MO_UQ); + read_vec_element(s, tcg_rd, rd, pass, MO_UQ); if (is_u) { /* USQADD */ gen_helper_neon_uqadd_s64(tcg_rd, cpu_env, tcg_rn, tcg_rd); } else { /* SUQADD */ gen_helper_neon_sqadd_u64(tcg_rd, cpu_env, tcg_rn, tcg_rd); } - write_vec_element(s, tcg_rd, rd, pass, MO_64); + write_vec_element(s, tcg_rd, rd, pass, MO_UQ); } tcg_temp_free_i64(tcg_rd); tcg_temp_free_i64(tcg_rn); @@ -9776,7 +9776,7 @@ static void handle_2misc_satacc(DisasContext *s, bool= is_scalar, bool is_u, if (is_scalar) { TCGv_i64 tcg_zero =3D tcg_const_i64(0); - write_vec_element(s, tcg_zero, rd, 0, MO_64); + write_vec_element(s, tcg_zero, rd, 0, MO_UQ); tcg_temp_free_i64(tcg_zero); } write_vec_element_i32(s, tcg_rd, rd, pass, MO_UL); @@ -10146,7 +10146,7 @@ static void handle_vec_simd_wshli(DisasContext *s, = bool is_q, bool is_u, * so if rd =3D=3D rn we would overwrite parts of our input. 
* So load everything right now and use shifts in the main loop. */ - read_vec_element(s, tcg_rn, rn, is_q ? 1 : 0, MO_64); + read_vec_element(s, tcg_rn, rn, is_q ? 1 : 0, MO_UQ); for (i =3D 0; i < elements; i++) { tcg_gen_shri_i64(tcg_rd, tcg_rn, i * esize); @@ -10183,7 +10183,7 @@ static void handle_vec_simd_shrn(DisasContext *s, b= ool is_q, tcg_rn =3D tcg_temp_new_i64(); tcg_rd =3D tcg_temp_new_i64(); tcg_final =3D tcg_temp_new_i64(); - read_vec_element(s, tcg_final, rd, is_q ? 1 : 0, MO_64); + read_vec_element(s, tcg_final, rd, is_q ? 1 : 0, MO_UQ); if (round) { uint64_t round_const =3D 1ULL << (shift - 1); @@ -10201,9 +10201,9 @@ static void handle_vec_simd_shrn(DisasContext *s, b= ool is_q, } if (!is_q) { - write_vec_element(s, tcg_final, rd, 0, MO_64); + write_vec_element(s, tcg_final, rd, 0, MO_UQ); } else { - write_vec_element(s, tcg_final, rd, 1, MO_64); + write_vec_element(s, tcg_final, rd, 1, MO_UQ); } if (round) { tcg_temp_free_i64(tcg_round); @@ -10335,8 +10335,8 @@ static void handle_3rd_widening(DisasContext *s, in= t is_q, int is_u, int size, } if (accop !=3D 0) { - read_vec_element(s, tcg_res[0], rd, 0, MO_64); - read_vec_element(s, tcg_res[1], rd, 1, MO_64); + read_vec_element(s, tcg_res[0], rd, 0, MO_UQ); + read_vec_element(s, tcg_res[1], rd, 1, MO_UQ); } /* size =3D=3D 2 means two 32x32->64 operations; this is worth special @@ -10522,8 +10522,8 @@ static void handle_3rd_widening(DisasContext *s, in= t is_q, int is_u, int size, } } - write_vec_element(s, tcg_res[0], rd, 0, MO_64); - write_vec_element(s, tcg_res[1], rd, 1, MO_64); + write_vec_element(s, tcg_res[0], rd, 0, MO_UQ); + write_vec_element(s, tcg_res[1], rd, 1, MO_UQ); tcg_temp_free_i64(tcg_res[0]); tcg_temp_free_i64(tcg_res[1]); } @@ -10546,7 +10546,7 @@ static void handle_3rd_wide(DisasContext *s, int is= _q, int is_u, int size, }; NeonGenWidenFn *widenfn =3D widenfns[size][is_u]; - read_vec_element(s, tcg_op1, rn, pass, MO_64); + read_vec_element(s, tcg_op1, rn, pass, MO_UQ); 
read_vec_element_i32(s, tcg_op2, rm, part + pass, MO_UL); widenfn(tcg_op2_wide, tcg_op2); tcg_temp_free_i32(tcg_op2); @@ -10558,7 +10558,7 @@ static void handle_3rd_wide(DisasContext *s, int is= _q, int is_u, int size, } for (pass =3D 0; pass < 2; pass++) { - write_vec_element(s, tcg_res[pass], rd, pass, MO_64); + write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ); tcg_temp_free_i64(tcg_res[pass]); } } @@ -10589,8 +10589,8 @@ static void handle_3rd_narrowing(DisasContext *s, i= nt is_q, int is_u, int size, }; NeonGenNarrowFn *gennarrow =3D narrowfns[size][is_u]; - read_vec_element(s, tcg_op1, rn, pass, MO_64); - read_vec_element(s, tcg_op2, rm, pass, MO_64); + read_vec_element(s, tcg_op1, rn, pass, MO_UQ); + read_vec_element(s, tcg_op2, rm, pass, MO_UQ); gen_neon_addl(size, (opcode =3D=3D 6), tcg_wideres, tcg_op1, tcg_o= p2); @@ -10621,12 +10621,12 @@ static void handle_pmull_64(DisasContext *s, int = is_q, int rd, int rn, int rm) TCGv_i64 tcg_op2 =3D tcg_temp_new_i64(); TCGv_i64 tcg_res =3D tcg_temp_new_i64(); - read_vec_element(s, tcg_op1, rn, is_q, MO_64); - read_vec_element(s, tcg_op2, rm, is_q, MO_64); + read_vec_element(s, tcg_op1, rn, is_q, MO_UQ); + read_vec_element(s, tcg_op2, rm, is_q, MO_UQ); gen_helper_neon_pmull_64_lo(tcg_res, tcg_op1, tcg_op2); - write_vec_element(s, tcg_res, rd, 0, MO_64); + write_vec_element(s, tcg_res, rd, 0, MO_UQ); gen_helper_neon_pmull_64_hi(tcg_res, tcg_op1, tcg_op2); - write_vec_element(s, tcg_res, rd, 1, MO_64); + write_vec_element(s, tcg_res, rd, 1, MO_UQ); tcg_temp_free_i64(tcg_op1); tcg_temp_free_i64(tcg_op2); @@ -10814,8 +10814,8 @@ static void handle_simd_3same_pair(DisasContext *s,= int is_q, int u, int opcode, TCGv_i64 tcg_op2 =3D tcg_temp_new_i64(); int passreg =3D (pass =3D=3D 0) ? 
rn : rm; - read_vec_element(s, tcg_op1, passreg, 0, MO_64); - read_vec_element(s, tcg_op2, passreg, 1, MO_64); + read_vec_element(s, tcg_op1, passreg, 0, MO_UQ); + read_vec_element(s, tcg_op2, passreg, 1, MO_UQ); tcg_res[pass] =3D tcg_temp_new_i64(); switch (opcode) { @@ -10846,7 +10846,7 @@ static void handle_simd_3same_pair(DisasContext *s,= int is_q, int u, int opcode, } for (pass =3D 0; pass < 2; pass++) { - write_vec_element(s, tcg_res[pass], rd, pass, MO_64); + write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ); tcg_temp_free_i64(tcg_res[pass]); } } else { @@ -10971,7 +10971,7 @@ static void disas_simd_3same_float(DisasContext *s,= uint32_t insn) unallocated_encoding(s); return; } - handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_UL, + handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_UQ : MO_UL, rn, rm, rd); return; case 0x1b: /* FMULX */ @@ -11155,12 +11155,12 @@ static void disas_simd_3same_int(DisasContext *s,= uint32_t insn) TCGv_i64 tcg_op2 =3D tcg_temp_new_i64(); TCGv_i64 tcg_res =3D tcg_temp_new_i64(); - read_vec_element(s, tcg_op1, rn, pass, MO_64); - read_vec_element(s, tcg_op2, rm, pass, MO_64); + read_vec_element(s, tcg_op1, rn, pass, MO_UQ); + read_vec_element(s, tcg_op2, rm, pass, MO_UQ); handle_3same_64(s, opcode, u, tcg_res, tcg_op1, tcg_op2); - write_vec_element(s, tcg_res, rd, pass, MO_64); + write_vec_element(s, tcg_res, rd, pass, MO_UQ); tcg_temp_free_i64(tcg_res); tcg_temp_free_i64(tcg_op1); @@ -11714,7 +11714,7 @@ static void handle_2misc_widening(DisasContext *s, = int opcode, bool is_q, tcg_temp_free_i32(tcg_op); } for (pass =3D 0; pass < 2; pass++) { - write_vec_element(s, tcg_res[pass], rd, pass, MO_64); + write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ); tcg_temp_free_i64(tcg_res[pass]); } } else { @@ -11774,7 +11774,7 @@ static void handle_rev(DisasContext *s, int opcode,= bool u, case MO_UL: tcg_gen_bswap32_i64(tcg_tmp, tcg_tmp); break; - case MO_64: + case MO_UQ: tcg_gen_bswap64_i64(tcg_tmp, tcg_tmp); 
break; default: @@ -11803,8 +11803,8 @@ static void handle_rev(DisasContext *s, int opcode,= bool u, tcg_gen_deposit_i64(tcg_rd, tcg_rd, tcg_rn, off, esize); } } - write_vec_element(s, tcg_rd, rd, 0, MO_64); - write_vec_element(s, tcg_rd_hi, rd, 1, MO_64); + write_vec_element(s, tcg_rd, rd, 0, MO_UQ); + write_vec_element(s, tcg_rd_hi, rd, 1, MO_UQ); tcg_temp_free_i64(tcg_rd_hi); tcg_temp_free_i64(tcg_rd); @@ -11839,7 +11839,7 @@ static void handle_2misc_pairwise(DisasContext *s, = int opcode, bool u, read_vec_element(s, tcg_op2, rn, pass * 2 + 1, memop); tcg_gen_add_i64(tcg_res[pass], tcg_op1, tcg_op2); if (accum) { - read_vec_element(s, tcg_op1, rd, pass, MO_64); + read_vec_element(s, tcg_op1, rd, pass, MO_UQ); tcg_gen_add_i64(tcg_res[pass], tcg_res[pass], tcg_op1); } @@ -11859,11 +11859,11 @@ static void handle_2misc_pairwise(DisasContext *s= , int opcode, bool u, tcg_res[pass] =3D tcg_temp_new_i64(); - read_vec_element(s, tcg_op, rn, pass, MO_64); + read_vec_element(s, tcg_op, rn, pass, MO_UQ); genfn(tcg_res[pass], tcg_op); if (accum) { - read_vec_element(s, tcg_op, rd, pass, MO_64); + read_vec_element(s, tcg_op, rd, pass, MO_UQ); if (size =3D=3D 0) { gen_helper_neon_addl_u16(tcg_res[pass], tcg_res[pass], tcg_op); @@ -11879,7 +11879,7 @@ static void handle_2misc_pairwise(DisasContext *s, = int opcode, bool u, tcg_res[1] =3D tcg_const_i64(0); } for (pass =3D 0; pass < 2; pass++) { - write_vec_element(s, tcg_res[pass], rd, pass, MO_64); + write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ); tcg_temp_free_i64(tcg_res[pass]); } } @@ -11909,7 +11909,7 @@ static void handle_shll(DisasContext *s, bool is_q,= int size, int rn, int rd) } for (pass =3D 0; pass < 2; pass++) { - write_vec_element(s, tcg_res[pass], rd, pass, MO_64); + write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ); tcg_temp_free_i64(tcg_res[pass]); } } @@ -12233,12 +12233,12 @@ static void disas_simd_two_reg_misc(DisasContext = *s, uint32_t insn) TCGv_i64 tcg_op =3D tcg_temp_new_i64(); TCGv_i64 
tcg_res =3D tcg_temp_new_i64(); - read_vec_element(s, tcg_op, rn, pass, MO_64); + read_vec_element(s, tcg_op, rn, pass, MO_UQ); handle_2misc_64(s, opcode, u, tcg_res, tcg_op, tcg_rmode, tcg_fpstatus); - write_vec_element(s, tcg_res, rd, pass, MO_64); + write_vec_element(s, tcg_res, rd, pass, MO_UQ); tcg_temp_free_i64(tcg_res); tcg_temp_free_i64(tcg_op); @@ -12856,7 +12856,7 @@ static void disas_simd_indexed(DisasContext *s, uin= t32_t insn) is_fp16 =3D true; break; case MO_UL: /* single precision */ - case MO_64: /* double precision */ + case MO_UQ: /* double precision */ break; default: unallocated_encoding(s); @@ -12875,7 +12875,7 @@ static void disas_simd_indexed(DisasContext *s, uin= t32_t insn) } is_fp16 =3D true; break; - case MO_64: + case MO_UQ: break; default: unallocated_encoding(s); @@ -12886,7 +12886,7 @@ static void disas_simd_indexed(DisasContext *s, uin= t32_t insn) default: /* integer */ switch (size) { case MO_UB: - case MO_64: + case MO_UQ: unallocated_encoding(s); return; } @@ -12906,7 +12906,7 @@ static void disas_simd_indexed(DisasContext *s, uin= t32_t insn) index =3D h << 1 | l; rm |=3D m << 4; break; - case MO_64: + case MO_UQ: if (l || !is_q) { unallocated_encoding(s); return; @@ -12946,7 +12946,7 @@ static void disas_simd_indexed(DisasContext *s, uin= t32_t insn) vec_full_reg_offset(s, rn), vec_full_reg_offset(s, rm), fpst, is_q ? 16 : 8, vec_full_reg_size(s), data, - size =3D=3D MO_64 + size =3D=3D MO_UQ ? gen_helper_gvec_fcmlas_idx : gen_helper_gvec_fcmlah_idx); tcg_temp_free_ptr(fpst); @@ -12976,13 +12976,13 @@ static void disas_simd_indexed(DisasContext *s, u= int32_t insn) assert(is_fp && is_q && !is_long); - read_vec_element(s, tcg_idx, rm, index, MO_64); + read_vec_element(s, tcg_idx, rm, index, MO_UQ); for (pass =3D 0; pass < (is_scalar ? 
1 : 2); pass++) { TCGv_i64 tcg_op =3D tcg_temp_new_i64(); TCGv_i64 tcg_res =3D tcg_temp_new_i64(); - read_vec_element(s, tcg_op, rn, pass, MO_64); + read_vec_element(s, tcg_op, rn, pass, MO_UQ); switch (16 * u + opcode) { case 0x05: /* FMLS */ @@ -12990,7 +12990,7 @@ static void disas_simd_indexed(DisasContext *s, uin= t32_t insn) gen_helper_vfp_negd(tcg_op, tcg_op); /* fall through */ case 0x01: /* FMLA */ - read_vec_element(s, tcg_res, rd, pass, MO_64); + read_vec_element(s, tcg_res, rd, pass, MO_UQ); gen_helper_vfp_muladdd(tcg_res, tcg_op, tcg_idx, tcg_res, = fpst); break; case 0x09: /* FMUL */ @@ -13003,7 +13003,7 @@ static void disas_simd_indexed(DisasContext *s, uin= t32_t insn) g_assert_not_reached(); } - write_vec_element(s, tcg_res, rd, pass, MO_64); + write_vec_element(s, tcg_res, rd, pass, MO_UQ); tcg_temp_free_i64(tcg_op); tcg_temp_free_i64(tcg_res); } @@ -13241,7 +13241,7 @@ static void disas_simd_indexed(DisasContext *s, uin= t32_t insn) } /* Accumulating op: handle accumulate step */ - read_vec_element(s, tcg_res[pass], rd, pass, MO_64); + read_vec_element(s, tcg_res[pass], rd, pass, MO_UQ); switch (opcode) { case 0x2: /* SMLAL, SMLAL2, UMLAL, UMLAL2 */ @@ -13316,7 +13316,7 @@ static void disas_simd_indexed(DisasContext *s, uin= t32_t insn) } /* Accumulating op: handle accumulate step */ - read_vec_element(s, tcg_res[pass], rd, pass, MO_64); + read_vec_element(s, tcg_res[pass], rd, pass, MO_UQ); switch (opcode) { case 0x2: /* SMLAL, SMLAL2, UMLAL, UMLAL2 */ @@ -13352,7 +13352,7 @@ static void disas_simd_indexed(DisasContext *s, uin= t32_t insn) } for (pass =3D 0; pass < 2; pass++) { - write_vec_element(s, tcg_res[pass], rd, pass, MO_64); + write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ); tcg_temp_free_i64(tcg_res[pass]); } } @@ -13639,14 +13639,14 @@ static void disas_crypto_three_reg_sha512(DisasCo= ntext *s, uint32_t insn) tcg_res[1] =3D tcg_temp_new_i64(); for (pass =3D 0; pass < 2; pass++) { - read_vec_element(s, tcg_op1, rn, pass, MO_64); - 
read_vec_element(s, tcg_op2, rm, pass, MO_64); + read_vec_element(s, tcg_op1, rn, pass, MO_UQ); + read_vec_element(s, tcg_op2, rm, pass, MO_UQ); tcg_gen_rotli_i64(tcg_res[pass], tcg_op2, 1); tcg_gen_xor_i64(tcg_res[pass], tcg_res[pass], tcg_op1); } - write_vec_element(s, tcg_res[0], rd, 0, MO_64); - write_vec_element(s, tcg_res[1], rd, 1, MO_64); + write_vec_element(s, tcg_res[0], rd, 0, MO_UQ); + write_vec_element(s, tcg_res[1], rd, 1, MO_UQ); tcg_temp_free_i64(tcg_op1); tcg_temp_free_i64(tcg_op2); @@ -13750,9 +13750,9 @@ static void disas_crypto_four_reg(DisasContext *s, = uint32_t insn) tcg_res[1] =3D tcg_temp_new_i64(); for (pass =3D 0; pass < 2; pass++) { - read_vec_element(s, tcg_op1, rn, pass, MO_64); - read_vec_element(s, tcg_op2, rm, pass, MO_64); - read_vec_element(s, tcg_op3, ra, pass, MO_64); + read_vec_element(s, tcg_op1, rn, pass, MO_UQ); + read_vec_element(s, tcg_op2, rm, pass, MO_UQ); + read_vec_element(s, tcg_op3, ra, pass, MO_UQ); if (op0 =3D=3D 0) { /* EOR3 */ @@ -13763,8 +13763,8 @@ static void disas_crypto_four_reg(DisasContext *s, = uint32_t insn) } tcg_gen_xor_i64(tcg_res[pass], tcg_res[pass], tcg_op1); } - write_vec_element(s, tcg_res[0], rd, 0, MO_64); - write_vec_element(s, tcg_res[1], rd, 1, MO_64); + write_vec_element(s, tcg_res[0], rd, 0, MO_UQ); + write_vec_element(s, tcg_res[1], rd, 1, MO_UQ); tcg_temp_free_i64(tcg_op1); tcg_temp_free_i64(tcg_op2); @@ -13832,14 +13832,14 @@ static void disas_crypto_xar(DisasContext *s, uin= t32_t insn) tcg_res[1] =3D tcg_temp_new_i64(); for (pass =3D 0; pass < 2; pass++) { - read_vec_element(s, tcg_op1, rn, pass, MO_64); - read_vec_element(s, tcg_op2, rm, pass, MO_64); + read_vec_element(s, tcg_op1, rn, pass, MO_UQ); + read_vec_element(s, tcg_op2, rm, pass, MO_UQ); tcg_gen_xor_i64(tcg_res[pass], tcg_op1, tcg_op2); tcg_gen_rotri_i64(tcg_res[pass], tcg_res[pass], imm6); } - write_vec_element(s, tcg_res[0], rd, 0, MO_64); - write_vec_element(s, tcg_res[1], rd, 1, MO_64); + write_vec_element(s, 
tcg_res[0], rd, 0, MO_UQ); + write_vec_element(s, tcg_res[1], rd, 1, MO_UQ); tcg_temp_free_i64(tcg_op1); tcg_temp_free_i64(tcg_op2); diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c index f7c891d..423c461 100644 --- a/target/arm/translate-sve.c +++ b/target/arm/translate-sve.c @@ -1708,7 +1708,7 @@ static void do_sat_addsub_vec(DisasContext *s, int es= z, int rd, int rn, tcg_temp_free_i64(t64); break; - case MO_64: + case MO_UQ: if (u) { if (d) { gen_helper_sve_uqsubi_d(dptr, nptr, val, desc); @@ -1862,7 +1862,7 @@ static bool do_zz_dbm(DisasContext *s, arg_rr_dbm *a,= GVecGen2iFn *gvec_fn) } if (sve_access_check(s)) { unsigned vsz =3D vec_full_reg_size(s); - gvec_fn(MO_64, vec_full_reg_offset(s, a->rd), + gvec_fn(MO_UQ, vec_full_reg_offset(s, a->rd), vec_full_reg_offset(s, a->rn), imm, vsz, vsz); } return true; @@ -2076,7 +2076,7 @@ static bool trans_INSR_f(DisasContext *s, arg_rrr_esz= *a) { if (sve_access_check(s)) { TCGv_i64 t =3D tcg_temp_new_i64(); - tcg_gen_ld_i64(t, cpu_env, vec_reg_offset(s, a->rm, 0, MO_64)); + tcg_gen_ld_i64(t, cpu_env, vec_reg_offset(s, a->rm, 0, MO_UQ)); do_insr_i64(s, a, t); tcg_temp_free_i64(t); } @@ -3327,7 +3327,7 @@ static bool trans_SUBR_zzi(DisasContext *s, arg_rri_e= sz *a) .fno =3D gen_helper_sve_subri_d, .opt_opc =3D vecop_list, .prefer_i64 =3D TCG_TARGET_REG_BITS =3D=3D 64, - .vece =3D MO_64, + .vece =3D MO_UQ, .scalar_first =3D true } }; @@ -4571,7 +4571,7 @@ static const TCGMemOp dtype_mop[16] =3D { MO_UB, MO_UB, MO_UB, MO_UB, MO_SL, MO_UW, MO_UW, MO_UW, MO_SW, MO_SW, MO_UL, MO_UL, - MO_SB, MO_SB, MO_SB, MO_Q + MO_SB, MO_SB, MO_SB, MO_UQ }; #define dtype_msz(x) (dtype_mop[x] & MO_SIZE) @@ -5261,7 +5261,7 @@ static bool trans_LD1_zprz(DisasContext *s, arg_LD1_z= prz *a) case MO_UL: fn =3D gather_load_fn32[be][a->ff][a->xs][a->u][a->msz]; break; - case MO_64: + case MO_UQ: fn =3D gather_load_fn64[be][a->ff][a->xs][a->u][a->msz]; break; } @@ -5289,7 +5289,7 @@ static bool trans_LD1_zpiz(DisasContext *s, 
arg_LD1_z= piz *a) case MO_UL: fn =3D gather_load_fn32[be][a->ff][0][a->u][a->msz]; break; - case MO_64: + case MO_UQ: fn =3D gather_load_fn64[be][a->ff][2][a->u][a->msz]; break; } @@ -5367,7 +5367,7 @@ static bool trans_ST1_zprz(DisasContext *s, arg_ST1_z= prz *a) case MO_UL: fn =3D scatter_store_fn32[be][a->xs][a->msz]; break; - case MO_64: + case MO_UQ: fn =3D scatter_store_fn64[be][a->xs][a->msz]; break; default: @@ -5395,7 +5395,7 @@ static bool trans_ST1_zpiz(DisasContext *s, arg_ST1_z= piz *a) case MO_UL: fn =3D scatter_store_fn32[be][0][a->msz]; break; - case MO_64: + case MO_UQ: fn =3D scatter_store_fn64[be][2][a->msz]; break; } diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c index 5e0cd63..d71944d 100644 --- a/target/arm/translate-vfp.inc.c +++ b/target/arm/translate-vfp.inc.c @@ -40,7 +40,7 @@ uint64_t vfp_expand_imm(int size, uint8_t imm8) uint64_t imm; switch (size) { - case MO_64: + case MO_UQ: imm =3D (extract32(imm8, 7, 1) ? 0x8000 : 0) | (extract32(imm8, 6, 1) ? 
0x3fc0 : 0x4000) | extract32(imm8, 0, 6); @@ -1960,7 +1960,7 @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VM= OV_imm_dp *a) } } - fd =3D tcg_const_i64(vfp_expand_imm(MO_64, a->imm)); + fd =3D tcg_const_i64(vfp_expand_imm(MO_UQ, a->imm)); for (;;) { neon_store_reg64(fd, vd); diff --git a/target/arm/translate.c b/target/arm/translate.c index 5510ecd..306ef24 100644 --- a/target/arm/translate.c +++ b/target/arm/translate.c @@ -1171,7 +1171,7 @@ static void gen_aa32_ld_i64(DisasContext *s, TCGv_i64= val, TCGv_i32 a32, static inline void gen_aa32_ld64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32, int index) { - gen_aa32_ld_i64(s, val, a32, index, MO_Q | s->be_data); + gen_aa32_ld_i64(s, val, a32, index, MO_UQ | s->be_data); } static void gen_aa32_st_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32, @@ -1194,7 +1194,7 @@ static void gen_aa32_st_i64(DisasContext *s, TCGv_i64= val, TCGv_i32 a32, static inline void gen_aa32_st64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32, int index) { - gen_aa32_st_i64(s, val, a32, index, MO_Q | s->be_data); + gen_aa32_st_i64(s, val, a32, index, MO_UQ | s->be_data); } DO_GEN_LD(8s, MO_SB) @@ -1455,7 +1455,7 @@ static void neon_load_element64(TCGv_i64 var, int reg= , int ele, TCGMemOp mop) case MO_UL: tcg_gen_ld32u_i64(var, cpu_env, offset); break; - case MO_Q: + case MO_UQ: tcg_gen_ld_i64(var, cpu_env, offset); break; default: @@ -1502,7 +1502,7 @@ static void neon_store_element64(int reg, int ele, TC= GMemOp size, TCGv_i64 var) case MO_UL: tcg_gen_st32_i64(var, cpu_env, offset); break; - case MO_64: + case MO_UQ: tcg_gen_st_i64(var, cpu_env, offset); break; default: @@ -4278,7 +4278,7 @@ const GVecGen2i ssra_op[4] =3D { .prefer_i64 =3D TCG_TARGET_REG_BITS =3D=3D 64, .opt_opc =3D vecop_list_ssra, .load_dest =3D true, - .vece =3D MO_64 }, + .vece =3D MO_UQ }, }; static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift) @@ -4336,7 +4336,7 @@ const GVecGen2i usra_op[4] =3D { .prefer_i64 =3D TCG_TARGET_REG_BITS =3D=3D 64, 
.load_dest =3D true, .opt_opc =3D vecop_list_usra, - .vece =3D MO_64, }, + .vece =3D MO_UQ, }, }; static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift) @@ -4416,7 +4416,7 @@ const GVecGen2i sri_op[4] =3D { .prefer_i64 =3D TCG_TARGET_REG_BITS =3D=3D 64, .load_dest =3D true, .opt_opc =3D vecop_list_sri, - .vece =3D MO_64 }, + .vece =3D MO_UQ }, }; static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift) @@ -4494,7 +4494,7 @@ const GVecGen2i sli_op[4] =3D { .prefer_i64 =3D TCG_TARGET_REG_BITS =3D=3D 64, .load_dest =3D true, .opt_opc =3D vecop_list_sli, - .vece =3D MO_64 }, + .vece =3D MO_UQ }, }; static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b) @@ -4590,7 +4590,7 @@ const GVecGen3 mla_op[4] =3D { .prefer_i64 =3D TCG_TARGET_REG_BITS =3D=3D 64, .load_dest =3D true, .opt_opc =3D vecop_list_mla, - .vece =3D MO_64 }, + .vece =3D MO_UQ }, }; const GVecGen3 mls_op[4] =3D { @@ -4614,7 +4614,7 @@ const GVecGen3 mls_op[4] =3D { .prefer_i64 =3D TCG_TARGET_REG_BITS =3D=3D 64, .load_dest =3D true, .opt_opc =3D vecop_list_mls, - .vece =3D MO_64 }, + .vece =3D MO_UQ }, }; /* CMTST : test is "if (X & Y !=3D 0)". 
*/ @@ -4658,7 +4658,7 @@ const GVecGen3 cmtst_op[4] =3D { .fniv =3D gen_cmtst_vec, .prefer_i64 =3D TCG_TARGET_REG_BITS =3D=3D 64, .opt_opc =3D vecop_list_cmtst, - .vece =3D MO_64 }, + .vece =3D MO_UQ }, }; static void gen_uqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat, @@ -4696,7 +4696,7 @@ const GVecGen4 uqadd_op[4] =3D { .fno =3D gen_helper_gvec_uqadd_d, .write_aofs =3D true, .opt_opc =3D vecop_list_uqadd, - .vece =3D MO_64 }, + .vece =3D MO_UQ }, }; static void gen_sqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat, @@ -4734,7 +4734,7 @@ const GVecGen4 sqadd_op[4] =3D { .fno =3D gen_helper_gvec_sqadd_d, .opt_opc =3D vecop_list_sqadd, .write_aofs =3D true, - .vece =3D MO_64 }, + .vece =3D MO_UQ }, }; static void gen_uqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat, @@ -4772,7 +4772,7 @@ const GVecGen4 uqsub_op[4] =3D { .fno =3D gen_helper_gvec_uqsub_d, .opt_opc =3D vecop_list_uqsub, .write_aofs =3D true, - .vece =3D MO_64 }, + .vece =3D MO_UQ }, }; static void gen_sqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat, @@ -4810,7 +4810,7 @@ const GVecGen4 sqsub_op[4] =3D { .fno =3D gen_helper_gvec_sqsub_d, .opt_opc =3D vecop_list_sqsub, .write_aofs =3D true, - .vece =3D MO_64 }, + .vece =3D MO_UQ }, }; /* Translate a NEON data processing instruction. Return nonzero if the diff --git a/target/i386/translate.c b/target/i386/translate.c index 0e863d4..8d62b37 100644 --- a/target/i386/translate.c +++ b/target/i386/translate.c @@ -323,7 +323,7 @@ static inline bool byte_reg_is_xH(DisasContext *s, int = reg) static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot) { if (CODE64(s)) { - return ot =3D=3D MO_UW ? MO_UW : MO_64; + return ot =3D=3D MO_UW ? MO_UW : MO_UQ; } else { return ot; } @@ -332,14 +332,14 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TC= GMemOp ot) /* Select the size of the stack pointer. */ static inline TCGMemOp mo_stacksize(DisasContext *s) { - return CODE64(s) ? MO_64 : s->ss32 ? MO_UL : MO_UW; + return CODE64(s) ? MO_UQ : s->ss32 ? 
MO_UL : MO_UW; } /* Select only size 64 else 32. Used for SSE operand sizes. */ static inline TCGMemOp mo_64_32(TCGMemOp ot) { #ifdef TARGET_X86_64 - return ot =3D=3D MO_64 ? MO_64 : MO_UL; + return ot =3D=3D MO_UQ ? MO_UQ : MO_UL; #else return MO_UL; #endif @@ -378,7 +378,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp = ot, int reg, TCGv t0) tcg_gen_ext32u_tl(cpu_regs[reg], t0); break; #ifdef TARGET_X86_64 - case MO_64: + case MO_UQ: tcg_gen_mov_tl(cpu_regs[reg], t0); break; #endif @@ -456,7 +456,7 @@ static void gen_lea_v_seg(DisasContext *s, TCGMemOp afl= ag, TCGv a0, { switch (aflag) { #ifdef TARGET_X86_64 - case MO_64: + case MO_UQ: if (ovr_seg < 0) { tcg_gen_mov_tl(s->A0, a0); return; @@ -492,7 +492,7 @@ static void gen_lea_v_seg(DisasContext *s, TCGMemOp afl= ag, TCGv a0, if (ovr_seg >=3D 0) { TCGv seg =3D cpu_seg_base[ovr_seg]; - if (aflag =3D=3D MO_64) { + if (aflag =3D=3D MO_UQ) { tcg_gen_add_tl(s->A0, a0, seg); } else if (CODE64(s)) { tcg_gen_ext32u_tl(s->A0, a0); @@ -1469,7 +1469,7 @@ static void gen_shift_flags(DisasContext *s, TCGMemOp= ot, TCGv result, static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right, int is_arith) { - target_ulong mask =3D (ot =3D=3D MO_64 ? 0x3f : 0x1f); + target_ulong mask =3D (ot =3D=3D MO_UQ ? 0x3f : 0x1f); /* load */ if (op1 =3D=3D OR_TMP0) { @@ -1505,7 +1505,7 @@ static void gen_shift_rm_T1(DisasContext *s, TCGMemOp= ot, int op1, static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2, int is_right, int is_arith) { - int mask =3D (ot =3D=3D MO_64 ? 0x3f : 0x1f); + int mask =3D (ot =3D=3D MO_UQ ? 0x3f : 0x1f); /* load */ if (op1 =3D=3D OR_TMP0) @@ -1544,7 +1544,7 @@ static void gen_shift_rm_im(DisasContext *s, TCGMemOp= ot, int op1, int op2, static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_ri= ght) { - target_ulong mask =3D (ot =3D=3D MO_64 ? 0x3f : 0x1f); + target_ulong mask =3D (ot =3D=3D MO_UQ ? 
0x3f : 0x1f); TCGv_i32 t0, t1; /* load */ @@ -1630,7 +1630,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp o= t, int op1, int is_right) static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2, int is_right) { - int mask =3D (ot =3D=3D MO_64 ? 0x3f : 0x1f); + int mask =3D (ot =3D=3D MO_UQ ? 0x3f : 0x1f); int shift; /* load */ @@ -1729,7 +1729,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp = ot, int op1, gen_helper_rcrl(s->T0, cpu_env, s->T0, s->T1); break; #ifdef TARGET_X86_64 - case MO_64: + case MO_UQ: gen_helper_rcrq(s->T0, cpu_env, s->T0, s->T1); break; #endif @@ -1748,7 +1748,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp = ot, int op1, gen_helper_rcll(s->T0, cpu_env, s->T0, s->T1); break; #ifdef TARGET_X86_64 - case MO_64: + case MO_UQ: gen_helper_rclq(s->T0, cpu_env, s->T0, s->T1); break; #endif @@ -1764,7 +1764,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp = ot, int op1, static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1, bool is_right, TCGv count_in) { - target_ulong mask =3D (ot =3D=3D MO_64 ? 63 : 31); + target_ulong mask =3D (ot =3D=3D MO_UQ ? 63 : 31); TCGv count; /* load */ @@ -1983,7 +1983,7 @@ static AddressParts gen_lea_modrm_0(CPUX86State *env,= DisasContext *s, } switch (s->aflag) { - case MO_64: + case MO_UQ: case MO_UL: havesib =3D 0; if (rm =3D=3D 4) { @@ -2192,7 +2192,7 @@ static inline uint32_t insn_get(CPUX86State *env, Dis= asContext *s, TCGMemOp ot) break; case MO_UL: #ifdef TARGET_X86_64 - case MO_64: + case MO_UQ: #endif ret =3D x86_ldl_code(env, s); break; @@ -2443,7 +2443,7 @@ static void gen_popa(DisasContext *s) static void gen_enter(DisasContext *s, int esp_addend, int level) { TCGMemOp d_ot =3D mo_pushpop(s, s->dflag); - TCGMemOp a_ot =3D CODE64(s) ? MO_64 : s->ss32 ? MO_UL : MO_UW; + TCGMemOp a_ot =3D CODE64(s) ? MO_UQ : s->ss32 ? MO_UL : MO_UW; int size =3D 1 << d_ot; /* Push BP; compute FrameTemp into T1. 
*/ @@ -3150,8 +3150,8 @@ static void gen_sse(CPUX86State *env, DisasContext *s= , int b, break; case 0x6e: /* movd mm, ea */ #ifdef TARGET_X86_64 - if (s->dflag =3D=3D MO_64) { - gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 0); + if (s->dflag =3D=3D MO_UQ) { + gen_ldst_modrm(env, s, modrm, MO_UQ, OR_TMP0, 0); tcg_gen_st_tl(s->T0, cpu_env, offsetof(CPUX86State, fpregs[reg].mmx)); } else @@ -3166,8 +3166,8 @@ static void gen_sse(CPUX86State *env, DisasContext *s= , int b, break; case 0x16e: /* movd xmm, ea */ #ifdef TARGET_X86_64 - if (s->dflag =3D=3D MO_64) { - gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 0); + if (s->dflag =3D=3D MO_UQ) { + gen_ldst_modrm(env, s, modrm, MO_UQ, OR_TMP0, 0); tcg_gen_addi_ptr(s->ptr0, cpu_env, offsetof(CPUX86State,xmm_regs[reg])); gen_helper_movq_mm_T0_xmm(s->ptr0, s->T0); @@ -3337,10 +3337,10 @@ static void gen_sse(CPUX86State *env, DisasContext = *s, int b, break; case 0x7e: /* movd ea, mm */ #ifdef TARGET_X86_64 - if (s->dflag =3D=3D MO_64) { + if (s->dflag =3D=3D MO_UQ) { tcg_gen_ld_i64(s->T0, cpu_env, offsetof(CPUX86State,fpregs[reg].mmx)); - gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 1); + gen_ldst_modrm(env, s, modrm, MO_UQ, OR_TMP0, 1); } else #endif { @@ -3351,10 +3351,10 @@ static void gen_sse(CPUX86State *env, DisasContext = *s, int b, break; case 0x17e: /* movd ea, xmm */ #ifdef TARGET_X86_64 - if (s->dflag =3D=3D MO_64) { + if (s->dflag =3D=3D MO_UQ) { tcg_gen_ld_i64(s->T0, cpu_env, offsetof(CPUX86State,xmm_regs[reg].ZMM_Q(0)= )); - gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 1); + gen_ldst_modrm(env, s, modrm, MO_UQ, OR_TMP0, 1); } else #endif { @@ -3785,10 +3785,10 @@ static void gen_sse(CPUX86State *env, DisasContext = *s, int b, } if ((b & 0xff) =3D=3D 0xf0) { ot =3D MO_UB; - } else if (s->dflag !=3D MO_64) { + } else if (s->dflag !=3D MO_UQ) { ot =3D (s->prefix & PREFIX_DATA ? 
MO_UW : MO_UL); } else { - ot =3D MO_64; + ot =3D MO_UQ; } tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[reg]); @@ -3814,10 +3814,10 @@ static void gen_sse(CPUX86State *env, DisasContext = *s, int b, if (!(s->cpuid_ext_features & CPUID_EXT_MOVBE)) { goto illegal_op; } - if (s->dflag !=3D MO_64) { + if (s->dflag !=3D MO_UQ) { ot =3D (s->prefix & PREFIX_DATA ? MO_UW : MO_UL); } else { - ot =3D MO_64; + ot =3D MO_UQ; } gen_lea_modrm(env, s, modrm); @@ -3861,7 +3861,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s= , int b, tcg_gen_ext8u_tl(s->A0, cpu_regs[s->vex_v]); tcg_gen_shr_tl(s->T0, s->T0, s->A0); - bound =3D tcg_const_tl(ot =3D=3D MO_64 ? 63 : 31); + bound =3D tcg_const_tl(ot =3D=3D MO_UQ ? 63 : 31); zero =3D tcg_const_tl(0); tcg_gen_movcond_tl(TCG_COND_LEU, s->T0, s->A0, bound, s->T0, zero); @@ -3894,7 +3894,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s= , int b, gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); tcg_gen_ext8u_tl(s->T1, cpu_regs[s->vex_v]); { - TCGv bound =3D tcg_const_tl(ot =3D=3D MO_64 ? 63 : 31); + TCGv bound =3D tcg_const_tl(ot =3D=3D MO_UQ ? 63 : 31); /* Note that since we're using BMILG (in order to get O cleared) we need to store the inverse into C. */ tcg_gen_setcond_tl(TCG_COND_LT, cpu_cc_src, @@ -3929,7 +3929,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s= , int b, tcg_gen_extu_i32_tl(cpu_regs[reg], s->tmp3_i32); break; #ifdef TARGET_X86_64 - case MO_64: + case MO_UQ: tcg_gen_mulu2_i64(s->T0, s->T1, s->T0, cpu_regs[R_EDX]); tcg_gen_mov_i64(cpu_regs[s->vex_v], s->T0); @@ -3949,7 +3949,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s= , int b, gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); /* Note that by zero-extending the mask operand, we automatically handle zero-extending the result. 
*/ - if (ot =3D=3D MO_64) { + if (ot =3D=3D MO_UQ) { tcg_gen_mov_tl(s->T1, cpu_regs[s->vex_v]); } else { tcg_gen_ext32u_tl(s->T1, cpu_regs[s->vex_v]); @@ -3967,7 +3967,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s= , int b, gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); /* Note that by zero-extending the mask operand, we automatically handle zero-extending the result. */ - if (ot =3D=3D MO_64) { + if (ot =3D=3D MO_UQ) { tcg_gen_mov_tl(s->T1, cpu_regs[s->vex_v]); } else { tcg_gen_ext32u_tl(s->T1, cpu_regs[s->vex_v]); @@ -4063,7 +4063,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s= , int b, } ot =3D mo_64_32(s->dflag); gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); - if (ot =3D=3D MO_64) { + if (ot =3D=3D MO_UQ) { tcg_gen_andi_tl(s->T1, cpu_regs[s->vex_v], 63); } else { tcg_gen_andi_tl(s->T1, cpu_regs[s->vex_v], 31); @@ -4071,12 +4071,12 @@ static void gen_sse(CPUX86State *env, DisasContext = *s, int b, if (b =3D=3D 0x1f7) { tcg_gen_shl_tl(s->T0, s->T0, s->T1); } else if (b =3D=3D 0x2f7) { - if (ot !=3D MO_64) { + if (ot !=3D MO_UQ) { tcg_gen_ext32s_tl(s->T0, s->T0); } tcg_gen_sar_tl(s->T0, s->T0, s->T1); } else { - if (ot !=3D MO_64) { + if (ot !=3D MO_UQ) { tcg_gen_ext32u_tl(s->T0, s->T0); } tcg_gen_shr_tl(s->T0, s->T0, s->T1); @@ -4302,7 +4302,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s= , int b, if ((b & 0xfc) =3D=3D 0x60) { /* pcmpXstrX */ set_cc_op(s, CC_OP_EFLAGS); - if (s->dflag =3D=3D MO_64) { + if (s->dflag =3D=3D MO_UQ) { /* The helper must use entire 64-bit gp registers */ val |=3D 1 << 8; } @@ -4329,7 +4329,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s= , int b, ot =3D mo_64_32(s->dflag); gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0); b =3D x86_ldub_code(env, s); - if (ot =3D=3D MO_64) { + if (ot =3D=3D MO_UQ) { tcg_gen_rotri_tl(s->T0, s->T0, b & 63); } else { tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0); @@ -4630,9 +4630,9 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) /* In 64-bit 
mode, the default data size is 32-bit. Select 64-bit data with rex_w, and 16-bit data with 0x66; rex_w takes precede= nce over 0x66 if both are present. */ - dflag =3D (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_UW : MO= _UL); + dflag =3D (rex_w > 0 ? MO_UQ : prefixes & PREFIX_DATA ? MO_UW : MO= _UL); /* In 64-bit mode, 0x67 selects 32-bit addressing. */ - aflag =3D (prefixes & PREFIX_ADR ? MO_UL : MO_64); + aflag =3D (prefixes & PREFIX_ADR ? MO_UL : MO_UQ); } else { /* In 16/32-bit mode, 0x66 selects the opposite data size. */ if (s->code32 ^ ((prefixes & PREFIX_DATA) !=3D 0)) { @@ -4903,7 +4903,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) set_cc_op(s, CC_OP_MULL); break; #ifdef TARGET_X86_64 - case MO_64: + case MO_UQ: tcg_gen_mulu2_i64(cpu_regs[R_EAX], cpu_regs[R_EDX], s->T0, cpu_regs[R_EAX]); tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[R_EAX]); @@ -4956,7 +4956,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) set_cc_op(s, CC_OP_MULL); break; #ifdef TARGET_X86_64 - case MO_64: + case MO_UQ: tcg_gen_muls2_i64(cpu_regs[R_EAX], cpu_regs[R_EDX], s->T0, cpu_regs[R_EAX]); tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[R_EAX]); @@ -4980,7 +4980,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) gen_helper_divl_EAX(cpu_env, s->T0); break; #ifdef TARGET_X86_64 - case MO_64: + case MO_UQ: gen_helper_divq_EAX(cpu_env, s->T0); break; #endif @@ -4999,7 +4999,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) gen_helper_idivl_EAX(cpu_env, s->T0); break; #ifdef TARGET_X86_64 - case MO_64: + case MO_UQ: gen_helper_idivq_EAX(cpu_env, s->T0); break; #endif @@ -5024,7 +5024,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) if (CODE64(s)) { if (op =3D=3D 2 || op =3D=3D 4) { /* operand size for jumps is 64 bit */ - ot =3D MO_64; + ot =3D MO_UQ; } else if (op =3D=3D 3 || op =3D=3D 5) { ot =3D dflag !=3D MO_UW ? 
MO_UL + (rex_w =3D=3D 1) : MO_UW; } else if (op =3D=3D 6) { @@ -5145,10 +5145,10 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) case 0x98: /* CWDE/CBW */ switch (dflag) { #ifdef TARGET_X86_64 - case MO_64: + case MO_UQ: gen_op_mov_v_reg(s, MO_UL, s->T0, R_EAX); tcg_gen_ext32s_tl(s->T0, s->T0); - gen_op_mov_reg_v(s, MO_64, R_EAX, s->T0); + gen_op_mov_reg_v(s, MO_UQ, R_EAX, s->T0); break; #endif case MO_UL: @@ -5168,10 +5168,10 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) case 0x99: /* CDQ/CWD */ switch (dflag) { #ifdef TARGET_X86_64 - case MO_64: - gen_op_mov_v_reg(s, MO_64, s->T0, R_EAX); + case MO_UQ: + gen_op_mov_v_reg(s, MO_UQ, s->T0, R_EAX); tcg_gen_sari_tl(s->T0, s->T0, 63); - gen_op_mov_reg_v(s, MO_64, R_EDX, s->T0); + gen_op_mov_reg_v(s, MO_UQ, R_EDX, s->T0); break; #endif case MO_UL: @@ -5212,7 +5212,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) } switch (ot) { #ifdef TARGET_X86_64 - case MO_64: + case MO_UQ: tcg_gen_muls2_i64(cpu_regs[reg], s->T1, s->T0, s->T1); tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[reg]); tcg_gen_sari_tl(cpu_cc_src, cpu_cc_dst, 63); @@ -5338,7 +5338,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) goto illegal_op; } #ifdef TARGET_X86_64 - if (dflag =3D=3D MO_64) { + if (dflag =3D=3D MO_UQ) { if (!(s->cpuid_ext_features & CPUID_EXT_CX16)) { goto illegal_op; } @@ -5636,7 +5636,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) ot =3D mo_b_d(b, dflag); switch (s->aflag) { #ifdef TARGET_X86_64 - case MO_64: + case MO_UQ: offset_addr =3D x86_ldq_code(env, s); break; #endif @@ -5671,13 +5671,13 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) break; case 0xb8 ... 
0xbf: /* mov R, Iv */ #ifdef TARGET_X86_64 - if (dflag =3D=3D MO_64) { + if (dflag =3D=3D MO_UQ) { uint64_t tmp; /* 64 bit case */ tmp =3D x86_ldq_code(env, s); reg =3D (b & 7) | REX_B(s); tcg_gen_movi_tl(s->T0, tmp); - gen_op_mov_reg_v(s, MO_64, reg, s->T0); + gen_op_mov_reg_v(s, MO_UQ, reg, s->T0); } else #endif { @@ -7119,10 +7119,10 @@ static target_ulong disas_insn(DisasContext *s, CPU= State *cpu) case 0x1c8 ... 0x1cf: /* bswap reg */ reg =3D (b & 7) | REX_B(s); #ifdef TARGET_X86_64 - if (dflag =3D=3D MO_64) { - gen_op_mov_v_reg(s, MO_64, s->T0, reg); + if (dflag =3D=3D MO_UQ) { + gen_op_mov_v_reg(s, MO_UQ, s->T0, reg); tcg_gen_bswap64_i64(s->T0, s->T0); - gen_op_mov_reg_v(s, MO_64, reg, s->T0); + gen_op_mov_reg_v(s, MO_UQ, reg, s->T0); } else #endif { @@ -7700,7 +7700,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) if (mod =3D=3D 3) { gen_op_mov_v_reg(s, MO_UL, s->T0, rm); /* sign extend */ - if (d_ot =3D=3D MO_64) { + if (d_ot =3D=3D MO_UQ) { tcg_gen_ext32s_tl(s->T0, s->T0); } gen_op_mov_reg_v(s, d_ot, reg, s->T0); @@ -8014,7 +8014,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) rm =3D (modrm & 7) | REX_B(s); reg =3D ((modrm >> 3) & 7) | rex_r; if (CODE64(s)) - ot =3D MO_64; + ot =3D MO_UQ; else ot =3D MO_UL; if ((prefixes & PREFIX_LOCK) && (reg =3D=3D 0) && @@ -8071,7 +8071,7 @@ static target_ulong disas_insn(DisasContext *s, CPUSt= ate *cpu) rm =3D (modrm & 7) | REX_B(s); reg =3D ((modrm >> 3) & 7) | rex_r; if (CODE64(s)) - ot =3D MO_64; + ot =3D MO_UQ; else ot =3D MO_UL; if (reg >=3D 8) { diff --git a/target/mips/translate.c b/target/mips/translate.c index 525c7fe..1023f68 100644 --- a/target/mips/translate.c +++ b/target/mips/translate.c @@ -3766,7 +3766,7 @@ static void gen_scwp(DisasContext *ctx, uint32_t base= , int16_t offset, tcg_gen_ld_i64(llval, cpu_env, offsetof(CPUMIPSState, llval_wp)); tcg_gen_atomic_cmpxchg_i64(val, taddr, llval, tval, - eva ? MIPS_HFLAG_UM : ctx->mem_idx, MO_64); + eva ? 
MIPS_HFLAG_UM : ctx->mem_idx, MO_UQ); if (reg1 !=3D 0) { tcg_gen_movi_tl(cpu_gpr[reg1], 1); } diff --git a/target/ppc/translate.c b/target/ppc/translate.c index 4a5de28..f39dd94 100644 --- a/target/ppc/translate.c +++ b/target/ppc/translate.c @@ -2470,10 +2470,10 @@ GEN_QEMU_LOAD_64(ld8u, DEF_MEMOP(MO_UB)) GEN_QEMU_LOAD_64(ld16u, DEF_MEMOP(MO_UW)) GEN_QEMU_LOAD_64(ld32u, DEF_MEMOP(MO_UL)) GEN_QEMU_LOAD_64(ld32s, DEF_MEMOP(MO_SL)) -GEN_QEMU_LOAD_64(ld64, DEF_MEMOP(MO_Q)) +GEN_QEMU_LOAD_64(ld64, DEF_MEMOP(MO_UQ)) #if defined(TARGET_PPC64) -GEN_QEMU_LOAD_64(ld64ur, BSWAP_MEMOP(MO_Q)) +GEN_QEMU_LOAD_64(ld64ur, BSWAP_MEMOP(MO_UQ)) #endif #define GEN_QEMU_STORE_TL(stop, op) \ @@ -2502,10 +2502,10 @@ static void glue(gen_qemu_, glue(stop, _i64))(Disas= Context *ctx, \ GEN_QEMU_STORE_64(st8, DEF_MEMOP(MO_UB)) GEN_QEMU_STORE_64(st16, DEF_MEMOP(MO_UW)) GEN_QEMU_STORE_64(st32, DEF_MEMOP(MO_UL)) -GEN_QEMU_STORE_64(st64, DEF_MEMOP(MO_Q)) +GEN_QEMU_STORE_64(st64, DEF_MEMOP(MO_UQ)) #if defined(TARGET_PPC64) -GEN_QEMU_STORE_64(st64r, BSWAP_MEMOP(MO_Q)) +GEN_QEMU_STORE_64(st64r, BSWAP_MEMOP(MO_UQ)) #endif #define GEN_LD(name, ldop, opc, type) = \ @@ -2605,7 +2605,7 @@ GEN_LDEPX(lb, DEF_MEMOP(MO_UB), 0x1F, 0x02) GEN_LDEPX(lh, DEF_MEMOP(MO_UW), 0x1F, 0x08) GEN_LDEPX(lw, DEF_MEMOP(MO_UL), 0x1F, 0x00) #if defined(TARGET_PPC64) -GEN_LDEPX(ld, DEF_MEMOP(MO_Q), 0x1D, 0x00) +GEN_LDEPX(ld, DEF_MEMOP(MO_UQ), 0x1D, 0x00) #endif #if defined(TARGET_PPC64) @@ -2808,7 +2808,7 @@ GEN_STEPX(stb, DEF_MEMOP(MO_UB), 0x1F, 0x06) GEN_STEPX(sth, DEF_MEMOP(MO_UW), 0x1F, 0x0C) GEN_STEPX(stw, DEF_MEMOP(MO_UL), 0x1F, 0x04) #if defined(TARGET_PPC64) -GEN_STEPX(std, DEF_MEMOP(MO_Q), 0x1d, 0x04) +GEN_STEPX(std, DEF_MEMOP(MO_UQ), 0x1d, 0x04) #endif #if defined(TARGET_PPC64) @@ -3244,7 +3244,7 @@ static void gen_ld_atomic(DisasContext *ctx, TCGMemOp= memop) TCGv t1 =3D tcg_temp_new(); tcg_gen_qemu_ld_tl(t0, EA, ctx->mem_idx, memop); - if ((memop & MO_SIZE) =3D=3D MO_64 || TARGET_LONG_BITS =3D=3D = 32) { + if 
((memop & MO_SIZE) == MO_UQ || TARGET_LONG_BITS == 32) { tcg_gen_mov_tl(t1, src); } else { tcg_gen_ext32u_tl(t1, src); @@ -3302,7 +3302,7 @@ static void gen_lwat(DisasContext *ctx) #ifdef TARGET_PPC64 static void gen_ldat(DisasContext *ctx) { - gen_ld_atomic(ctx, DEF_MEMOP(MO_Q)); + gen_ld_atomic(ctx, DEF_MEMOP(MO_UQ)); } #endif @@ -3385,7 +3385,7 @@ static void gen_stwat(DisasContext *ctx) #ifdef TARGET_PPC64 static void gen_stdat(DisasContext *ctx) { - gen_st_atomic(ctx, DEF_MEMOP(MO_Q)); + gen_st_atomic(ctx, DEF_MEMOP(MO_UQ)); } #endif @@ -3437,9 +3437,9 @@ STCX(stwcx_, DEF_MEMOP(MO_UL)) #if defined(TARGET_PPC64) /* ldarx */ -LARX(ldarx, DEF_MEMOP(MO_Q)) +LARX(ldarx, DEF_MEMOP(MO_UQ)) /* stdcx. */ -STCX(stdcx_, DEF_MEMOP(MO_Q)) +STCX(stdcx_, DEF_MEMOP(MO_UQ)) /* lqarx */ static void gen_lqarx(DisasContext *ctx) @@ -3520,7 +3520,7 @@ static void gen_stqcx_(DisasContext *ctx) if (tb_cflags(ctx->base.tb) & CF_PARALLEL) { if (HAVE_CMPXCHG128) { - TCGv_i32 oi = tcg_const_i32(DEF_MEMOP(MO_Q) | MO_ALIGN_16); + TCGv_i32 oi = tcg_const_i32(DEF_MEMOP(MO_UQ) | MO_ALIGN_16); if (ctx->le_mode) { gen_helper_stqcx_le_parallel(cpu_crf[0], cpu_env, EA, lo, hi, oi); @@ -7366,7 +7366,7 @@ GEN_LDEPX(lb, DEF_MEMOP(MO_UB), 0x1F, 0x02) GEN_LDEPX(lh, DEF_MEMOP(MO_UW), 0x1F, 0x08) GEN_LDEPX(lw, DEF_MEMOP(MO_UL), 0x1F, 0x00) #if defined(TARGET_PPC64) -GEN_LDEPX(ld, DEF_MEMOP(MO_Q), 0x1D, 0x00) +GEN_LDEPX(ld, DEF_MEMOP(MO_UQ), 0x1D, 0x00) #endif #undef GEN_ST @@ -7412,7 +7412,7 @@ GEN_STEPX(stb, DEF_MEMOP(MO_UB), 0x1F, 0x06) GEN_STEPX(sth, DEF_MEMOP(MO_UW), 0x1F, 0x0C) GEN_STEPX(stw, DEF_MEMOP(MO_UL), 0x1F, 0x04) #if defined(TARGET_PPC64) -GEN_STEPX(std, DEF_MEMOP(MO_Q), 0x1D, 0x04) +GEN_STEPX(std, DEF_MEMOP(MO_UQ), 0x1D, 0x04) #endif #undef GEN_CRLOGIC diff --git a/target/ppc/translate/fp-impl.inc.c b/target/ppc/translate/fp-impl.inc.c index 9dcff94..3fd54ac 100644 --- a/target/ppc/translate/fp-impl.inc.c +++ b/target/ppc/translate/fp-impl.inc.c @@ -855,7 +855,7 @@ static
void gen_lfdepx(DisasContext *ctx) EA = tcg_temp_new(); t0 = tcg_temp_new_i64(); gen_addr_reg_index(ctx, EA); - tcg_gen_qemu_ld_i64(t0, EA, PPC_TLB_EPID_LOAD, DEF_MEMOP(MO_Q)); + tcg_gen_qemu_ld_i64(t0, EA, PPC_TLB_EPID_LOAD, DEF_MEMOP(MO_UQ)); set_fpr(rD(ctx->opcode), t0); tcg_temp_free(EA); tcg_temp_free_i64(t0); @@ -1091,7 +1091,7 @@ static void gen_stfdepx(DisasContext *ctx) t0 = tcg_temp_new_i64(); gen_addr_reg_index(ctx, EA); get_fpr(t0, rD(ctx->opcode)); - tcg_gen_qemu_st_i64(t0, EA, PPC_TLB_EPID_STORE, DEF_MEMOP(MO_Q)); + tcg_gen_qemu_st_i64(t0, EA, PPC_TLB_EPID_STORE, DEF_MEMOP(MO_UQ)); tcg_temp_free(EA); tcg_temp_free_i64(t0); } diff --git a/target/ppc/translate/vmx-impl.inc.c b/target/ppc/translate/vmx-impl.inc.c index 8aa767e..867dc52 100644 --- a/target/ppc/translate/vmx-impl.inc.c +++ b/target/ppc/translate/vmx-impl.inc.c @@ -290,14 +290,14 @@ static void glue(gen_, name)(DisasContext *ctx) \ } /* Logical operations */ -GEN_VXFORM_V(vand, MO_64, tcg_gen_gvec_and, 2, 16); -GEN_VXFORM_V(vandc, MO_64, tcg_gen_gvec_andc, 2, 17); -GEN_VXFORM_V(vor, MO_64, tcg_gen_gvec_or, 2, 18); -GEN_VXFORM_V(vxor, MO_64, tcg_gen_gvec_xor, 2, 19); -GEN_VXFORM_V(vnor, MO_64, tcg_gen_gvec_nor, 2, 20); -GEN_VXFORM_V(veqv, MO_64, tcg_gen_gvec_eqv, 2, 26); -GEN_VXFORM_V(vnand, MO_64, tcg_gen_gvec_nand, 2, 22); -GEN_VXFORM_V(vorc, MO_64, tcg_gen_gvec_orc, 2, 21); +GEN_VXFORM_V(vand, MO_UQ, tcg_gen_gvec_and, 2, 16); +GEN_VXFORM_V(vandc, MO_UQ, tcg_gen_gvec_andc, 2, 17); +GEN_VXFORM_V(vor, MO_UQ, tcg_gen_gvec_or, 2, 18); +GEN_VXFORM_V(vxor, MO_UQ, tcg_gen_gvec_xor, 2, 19); +GEN_VXFORM_V(vnor, MO_UQ, tcg_gen_gvec_nor, 2, 20); +GEN_VXFORM_V(veqv, MO_UQ, tcg_gen_gvec_eqv, 2, 26); +GEN_VXFORM_V(vnand, MO_UQ, tcg_gen_gvec_nand, 2, 22); +GEN_VXFORM_V(vorc, MO_UQ, tcg_gen_gvec_orc, 2, 21); #define GEN_VXFORM(name, opc2, opc3) \ static void glue(gen_, name)(DisasContext *ctx) \ @@ -410,27 +410,27 @@ GEN_VXFORM_V(vadduhm, MO_UW, tcg_gen_gvec_add, 0, 1); GEN_VXFORM_DUAL(vadduhm,
PPC_ALTIVEC, PPC_NONE, \ vmul10ecuq, PPC_NONE, PPC2_ISA300) GEN_VXFORM_V(vadduwm, MO_UL, tcg_gen_gvec_add, 0, 2); -GEN_VXFORM_V(vaddudm, MO_64, tcg_gen_gvec_add, 0, 3); +GEN_VXFORM_V(vaddudm, MO_UQ, tcg_gen_gvec_add, 0, 3); GEN_VXFORM_V(vsububm, MO_UB, tcg_gen_gvec_sub, 0, 16); GEN_VXFORM_V(vsubuhm, MO_UW, tcg_gen_gvec_sub, 0, 17); GEN_VXFORM_V(vsubuwm, MO_UL, tcg_gen_gvec_sub, 0, 18); -GEN_VXFORM_V(vsubudm, MO_64, tcg_gen_gvec_sub, 0, 19); +GEN_VXFORM_V(vsubudm, MO_UQ, tcg_gen_gvec_sub, 0, 19); GEN_VXFORM_V(vmaxub, MO_UB, tcg_gen_gvec_umax, 1, 0); GEN_VXFORM_V(vmaxuh, MO_UW, tcg_gen_gvec_umax, 1, 1); GEN_VXFORM_V(vmaxuw, MO_UL, tcg_gen_gvec_umax, 1, 2); -GEN_VXFORM_V(vmaxud, MO_64, tcg_gen_gvec_umax, 1, 3); +GEN_VXFORM_V(vmaxud, MO_UQ, tcg_gen_gvec_umax, 1, 3); GEN_VXFORM_V(vmaxsb, MO_UB, tcg_gen_gvec_smax, 1, 4); GEN_VXFORM_V(vmaxsh, MO_UW, tcg_gen_gvec_smax, 1, 5); GEN_VXFORM_V(vmaxsw, MO_UL, tcg_gen_gvec_smax, 1, 6); -GEN_VXFORM_V(vmaxsd, MO_64, tcg_gen_gvec_smax, 1, 7); +GEN_VXFORM_V(vmaxsd, MO_UQ, tcg_gen_gvec_smax, 1, 7); GEN_VXFORM_V(vminub, MO_UB, tcg_gen_gvec_umin, 1, 8); GEN_VXFORM_V(vminuh, MO_UW, tcg_gen_gvec_umin, 1, 9); GEN_VXFORM_V(vminuw, MO_UL, tcg_gen_gvec_umin, 1, 10); -GEN_VXFORM_V(vminud, MO_64, tcg_gen_gvec_umin, 1, 11); +GEN_VXFORM_V(vminud, MO_UQ, tcg_gen_gvec_umin, 1, 11); GEN_VXFORM_V(vminsb, MO_UB, tcg_gen_gvec_smin, 1, 12); GEN_VXFORM_V(vminsh, MO_UW, tcg_gen_gvec_smin, 1, 13); GEN_VXFORM_V(vminsw, MO_UL, tcg_gen_gvec_smin, 1, 14); -GEN_VXFORM_V(vminsd, MO_64, tcg_gen_gvec_smin, 1, 15); +GEN_VXFORM_V(vminsd, MO_UQ, tcg_gen_gvec_smin, 1, 15); GEN_VXFORM(vavgub, 1, 16); GEN_VXFORM(vabsdub, 1, 16); GEN_VXFORM_DUAL(vavgub, PPC_ALTIVEC, PPC_NONE, \ @@ -536,15 +536,15 @@ GEN_VXFORM_V(vslw, MO_UL, tcg_gen_gvec_shlv, 2, 6); GEN_VXFORM(vrlwnm, 2, 6); GEN_VXFORM_DUAL(vslw, PPC_ALTIVEC, PPC_NONE, \ vrlwnm, PPC_NONE, PPC2_ISA300) -GEN_VXFORM_V(vsld, MO_64, tcg_gen_gvec_shlv, 2, 23); +GEN_VXFORM_V(vsld, MO_UQ, tcg_gen_gvec_shlv, 2, 23); 
GEN_VXFORM_V(vsrb, MO_UB, tcg_gen_gvec_shrv, 2, 8); GEN_VXFORM_V(vsrh, MO_UW, tcg_gen_gvec_shrv, 2, 9); GEN_VXFORM_V(vsrw, MO_UL, tcg_gen_gvec_shrv, 2, 10); -GEN_VXFORM_V(vsrd, MO_64, tcg_gen_gvec_shrv, 2, 27); +GEN_VXFORM_V(vsrd, MO_UQ, tcg_gen_gvec_shrv, 2, 27); GEN_VXFORM_V(vsrab, MO_UB, tcg_gen_gvec_sarv, 2, 12); GEN_VXFORM_V(vsrah, MO_UW, tcg_gen_gvec_sarv, 2, 13); GEN_VXFORM_V(vsraw, MO_UL, tcg_gen_gvec_sarv, 2, 14); -GEN_VXFORM_V(vsrad, MO_64, tcg_gen_gvec_sarv, 2, 15); +GEN_VXFORM_V(vsrad, MO_UQ, tcg_gen_gvec_sarv, 2, 15); GEN_VXFORM(vsrv, 2, 28); GEN_VXFORM(vslv, 2, 29); GEN_VXFORM(vslo, 6, 16); diff --git a/target/ppc/translate/vsx-impl.inc.c b/target/ppc/translate/vsx= -impl.inc.c index 212817e..d607974 100644 --- a/target/ppc/translate/vsx-impl.inc.c +++ b/target/ppc/translate/vsx-impl.inc.c @@ -1475,14 +1475,14 @@ static void glue(gen_, name)(DisasContext *ctx) = \ vsr_full_offset(xB(ctx->opcode)), 16, 16); \ } -VSX_LOGICAL(xxland, MO_64, tcg_gen_gvec_and) -VSX_LOGICAL(xxlandc, MO_64, tcg_gen_gvec_andc) -VSX_LOGICAL(xxlor, MO_64, tcg_gen_gvec_or) -VSX_LOGICAL(xxlxor, MO_64, tcg_gen_gvec_xor) -VSX_LOGICAL(xxlnor, MO_64, tcg_gen_gvec_nor) -VSX_LOGICAL(xxleqv, MO_64, tcg_gen_gvec_eqv) -VSX_LOGICAL(xxlnand, MO_64, tcg_gen_gvec_nand) -VSX_LOGICAL(xxlorc, MO_64, tcg_gen_gvec_orc) +VSX_LOGICAL(xxland, MO_UQ, tcg_gen_gvec_and) +VSX_LOGICAL(xxlandc, MO_UQ, tcg_gen_gvec_andc) +VSX_LOGICAL(xxlor, MO_UQ, tcg_gen_gvec_or) +VSX_LOGICAL(xxlxor, MO_UQ, tcg_gen_gvec_xor) +VSX_LOGICAL(xxlnor, MO_UQ, tcg_gen_gvec_nor) +VSX_LOGICAL(xxleqv, MO_UQ, tcg_gen_gvec_eqv) +VSX_LOGICAL(xxlnand, MO_UQ, tcg_gen_gvec_nand) +VSX_LOGICAL(xxlorc, MO_UQ, tcg_gen_gvec_orc) #define VSX_XXMRG(name, high) \ static void glue(gen_, name)(DisasContext *ctx) \ @@ -1535,7 +1535,7 @@ static void gen_xxsel(DisasContext *ctx) gen_exception(ctx, POWERPC_EXCP_VSXU); return; } - tcg_gen_gvec_bitsel(MO_64, vsr_full_offset(rt), vsr_full_offset(rc), + tcg_gen_gvec_bitsel(MO_UQ, vsr_full_offset(rt), 
vsr_full_offset(rc), vsr_full_offset(rb), vsr_full_offset(ra), 16, 16); } diff --git a/target/s390x/translate.c b/target/s390x/translate.c index 9e646f1..5c72db1 100644 --- a/target/s390x/translate.c +++ b/target/s390x/translate.c @@ -180,7 +180,7 @@ static inline int vec_reg_offset(uint8_t reg, uint8_t e= nr, TCGMemOp es) * the two 8 byte elements have to be loaded separately. Let's force a= ll * 16 byte operations to handle it in a special way. */ - g_assert(es <=3D MO_64); + g_assert(es <=3D MO_UQ); #ifndef HOST_WORDS_BIGENDIAN offs ^=3D (8 - bytes); #endif @@ -190,7 +190,7 @@ static inline int vec_reg_offset(uint8_t reg, uint8_t e= nr, TCGMemOp es) static inline int freg64_offset(uint8_t reg) { g_assert(reg < 16); - return vec_reg_offset(reg, 0, MO_64); + return vec_reg_offset(reg, 0, MO_UQ); } static inline int freg32_offset(uint8_t reg) diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.in= c.c index 75d788c..6252262 100644 --- a/target/s390x/translate_vx.inc.c +++ b/target/s390x/translate_vx.inc.c @@ -30,8 +30,8 @@ * Sizes: * On s390x, the operand size (oprsz) and the maximum size (maxsz) are * always 16 (128 bit). What gvec code calls "vece", s390x calls "es", - * a.k.a. "element size". These values nicely map to MO_UB ... MO_64. Only - * 128 bit element size has to be treated in a special way (MO_64 + 1). + * a.k.a. "element size". These values nicely map to MO_UB ... MO_UQ. Only + * 128 bit element size has to be treated in a special way (MO_UQ + 1). * We will use ES_* instead of MO_* for this reason in this file. 
* * CC handling: @@ -49,7 +49,7 @@ #define ES_8 MO_UB #define ES_16 MO_UW #define ES_32 MO_UL -#define ES_64 MO_64 +#define ES_64 MO_UQ #define ES_128 4 /* Floating-Point Format */ diff --git a/target/s390x/vec.h b/target/s390x/vec.h index f67392c..b59da65 100644 --- a/target/s390x/vec.h +++ b/target/s390x/vec.h @@ -82,7 +82,7 @@ static inline uint64_t s390_vec_read_element(const S390Ve= ctor *v, uint8_t enr, return s390_vec_read_element16(v, enr); case MO_UL: return s390_vec_read_element32(v, enr); - case MO_64: + case MO_UQ: return s390_vec_read_element64(v, enr); default: g_assert_not_reached(); @@ -130,7 +130,7 @@ static inline void s390_vec_write_element(S390Vector *v= , uint8_t enr, case MO_UL: s390_vec_write_element32(v, enr, data); break; - case MO_64: + case MO_UQ: s390_vec_write_element64(v, enr, data); break; default: diff --git a/target/sparc/translate.c b/target/sparc/translate.c index 091bab5..499622b 100644 --- a/target/sparc/translate.c +++ b/target/sparc/translate.c @@ -2840,7 +2840,7 @@ static void gen_ldda_asi(DisasContext *dc, TCGv addr,= int insn, int rd) default: { TCGv_i32 r_asi =3D tcg_const_i32(da.asi); - TCGv_i32 r_mop =3D tcg_const_i32(MO_Q); + TCGv_i32 r_mop =3D tcg_const_i32(MO_UQ); save_state(dc); gen_helper_ld_asi(t64, cpu_env, addr, r_asi, r_mop); @@ -2896,7 +2896,7 @@ static void gen_stda_asi(DisasContext *dc, TCGv hi, T= CGv addr, default: { TCGv_i32 r_asi =3D tcg_const_i32(da.asi); - TCGv_i32 r_mop =3D tcg_const_i32(MO_Q); + TCGv_i32 r_mop =3D tcg_const_i32(MO_UQ); save_state(dc); gen_helper_st_asi(cpu_env, addr, t64, r_asi, r_mop); diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c index dc4fd21..d14afa9 100644 --- a/tcg/aarch64/tcg-target.inc.c +++ b/tcg/aarch64/tcg-target.inc.c @@ -432,12 +432,12 @@ typedef enum { I3312_STRB =3D 0x38000000 | LDST_ST << 22 | MO_UB << 30, I3312_STRH =3D 0x38000000 | LDST_ST << 22 | MO_UW << 30, I3312_STRW =3D 0x38000000 | LDST_ST << 22 | MO_UL << 30, - I3312_STRX =3D 
0x38000000 | LDST_ST << 22 | MO_64 << 30, + I3312_STRX =3D 0x38000000 | LDST_ST << 22 | MO_UQ << 30, I3312_LDRB =3D 0x38000000 | LDST_LD << 22 | MO_UB << 30, I3312_LDRH =3D 0x38000000 | LDST_LD << 22 | MO_UW << 30, I3312_LDRW =3D 0x38000000 | LDST_LD << 22 | MO_UL << 30, - I3312_LDRX =3D 0x38000000 | LDST_LD << 22 | MO_64 << 30, + I3312_LDRX =3D 0x38000000 | LDST_LD << 22 | MO_UQ << 30, I3312_LDRSBW =3D 0x38000000 | LDST_LD_S_W << 22 | MO_UB << 30, I3312_LDRSHW =3D 0x38000000 | LDST_LD_S_W << 22 | MO_UW << 30, @@ -449,8 +449,8 @@ typedef enum { I3312_LDRVS =3D 0x3c000000 | LDST_LD << 22 | MO_UL << 30, I3312_STRVS =3D 0x3c000000 | LDST_ST << 22 | MO_UL << 30, - I3312_LDRVD =3D 0x3c000000 | LDST_LD << 22 | MO_64 << 30, - I3312_STRVD =3D 0x3c000000 | LDST_ST << 22 | MO_64 << 30, + I3312_LDRVD =3D 0x3c000000 | LDST_LD << 22 | MO_UQ << 30, + I3312_STRVD =3D 0x3c000000 | LDST_ST << 22 | MO_UQ << 30, I3312_LDRVQ =3D 0x3c000000 | 3 << 22 | 0 << 30, I3312_STRVQ =3D 0x3c000000 | 2 << 22 | 0 << 30, @@ -1595,7 +1595,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, = TCGLabelQemuLdst *lb) if (opc & MO_SIGN) { tcg_out_sxt(s, lb->type, size, lb->datalo_reg, TCG_REG_X0); } else { - tcg_out_mov(s, size =3D=3D MO_64, lb->datalo_reg, TCG_REG_X0); + tcg_out_mov(s, size =3D=3D MO_UQ, lb->datalo_reg, TCG_REG_X0); } tcg_out_goto(s, lb->raddr); @@ -1614,7 +1614,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, = TCGLabelQemuLdst *lb) tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_X0, TCG_AREG0); tcg_out_mov(s, TARGET_LONG_BITS =3D=3D 64, TCG_REG_X1, lb->addrlo_reg); - tcg_out_mov(s, size =3D=3D MO_64, TCG_REG_X2, lb->datalo_reg); + tcg_out_mov(s, size =3D=3D MO_UQ, TCG_REG_X2, lb->datalo_reg); tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_X3, oi); tcg_out_adr(s, TCG_REG_X4, lb->raddr); tcg_out_call(s, qemu_st_helpers[opc & (MO_BSWAP | MO_SIZE)]); @@ -1754,7 +1754,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCG= MemOp memop, TCGType ext, tcg_out_ldst_r(s, I3312_LDRSWX, data_r, 
addr_r, otype, off_r); } break; - case MO_Q: + case MO_UQ: tcg_out_ldst_r(s, I3312_LDRX, data_r, addr_r, otype, off_r); if (bswap) { tcg_out_rev64(s, data_r, data_r); @@ -1789,7 +1789,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCG= MemOp memop, } tcg_out_ldst_r(s, I3312_STRW, data_r, addr_r, otype, off_r); break; - case MO_64: + case MO_UQ: if (bswap && data_r !=3D TCG_REG_XZR) { tcg_out_rev64(s, TCG_REG_TMP, data_r); data_r =3D TCG_REG_TMP; @@ -1838,7 +1838,7 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg dat= a_reg, TCGReg addr_reg, tcg_out_tlb_read(s, addr_reg, memop, &label_ptr, mem_index, 0); tcg_out_qemu_st_direct(s, memop, data_reg, TCG_REG_X1, otype, addr_reg); - add_qemu_ldst_label(s, false, oi, (memop & MO_SIZE)=3D=3D MO_64, + add_qemu_ldst_label(s, false, oi, (memop & MO_SIZE) =3D=3D MO_UQ, data_reg, addr_reg, s->code_ptr, label_ptr); #else /* !CONFIG_SOFTMMU */ if (USE_GUEST_BASE) { @@ -2506,7 +2506,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, = unsigned vece) case INDEX_op_smin_vec: case INDEX_op_umax_vec: case INDEX_op_umin_vec: - return vece < MO_64; + return vece < MO_UQ; default: return 0; diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c index 05560a2..70eeb8a 100644 --- a/tcg/arm/tcg-target.inc.c +++ b/tcg/arm/tcg-target.inc.c @@ -1389,7 +1389,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, = TCGLabelQemuLdst *lb) default: tcg_out_mov_reg(s, COND_AL, datalo, TCG_REG_R0); break; - case MO_Q: + case MO_UQ: if (datalo !=3D TCG_REG_R1) { tcg_out_mov_reg(s, COND_AL, datalo, TCG_REG_R0); tcg_out_mov_reg(s, COND_AL, datahi, TCG_REG_R1); @@ -1439,7 +1439,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, = TCGLabelQemuLdst *lb) default: argreg =3D tcg_out_arg_reg32(s, argreg, datalo); break; - case MO_64: + case MO_UQ: argreg =3D tcg_out_arg_reg64(s, argreg, datalo, datahi); break; } @@ -1487,7 +1487,7 @@ static inline void tcg_out_qemu_ld_index(TCGContext *= s, TCGMemOp opc, 
tcg_out_bswap32(s, COND_AL, datalo, datalo); } break; - case MO_Q: + case MO_UQ: { TCGReg dl =3D (bswap ? datahi : datalo); TCGReg dh =3D (bswap ? datalo : datahi); @@ -1548,7 +1548,7 @@ static inline void tcg_out_qemu_ld_direct(TCGContext = *s, TCGMemOp opc, tcg_out_bswap32(s, COND_AL, datalo, datalo); } break; - case MO_Q: + case MO_UQ: { TCGReg dl =3D (bswap ? datahi : datalo); TCGReg dh =3D (bswap ? datalo : datahi); @@ -1641,7 +1641,7 @@ static inline void tcg_out_qemu_st_index(TCGContext *= s, int cond, TCGMemOp opc, tcg_out_st32_r(s, cond, datalo, addrlo, addend); } break; - case MO_64: + case MO_UQ: /* Avoid strd for user-only emulation, to handle unaligned. */ if (bswap) { tcg_out_bswap32(s, cond, TCG_REG_R0, datahi); @@ -1686,7 +1686,7 @@ static inline void tcg_out_qemu_st_direct(TCGContext = *s, TCGMemOp opc, tcg_out_st32_12(s, COND_AL, datalo, addrlo, 0); } break; - case MO_64: + case MO_UQ: /* Avoid strd for user-only emulation, to handle unaligned. */ if (bswap) { tcg_out_bswap32(s, COND_AL, TCG_REG_R0, datahi); diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c index 93e4c63..3a73334 100644 --- a/tcg/i386/tcg-target.inc.c +++ b/tcg/i386/tcg-target.inc.c @@ -902,7 +902,7 @@ static bool tcg_out_dup_vec(TCGContext *s, TCGType type= , unsigned vece, /* imm8 operand: all output lanes selected from input lane 0. 
= */ tcg_out8(s, 0); break; - case MO_64: + case MO_UQ: tcg_out_vex_modrm(s, OPC_PUNPCKLQDQ, r, a, a); break; default: @@ -921,7 +921,7 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType typ= e, unsigned vece, r, 0, base, offset); } else { switch (vece) { - case MO_64: + case MO_UQ: tcg_out_vex_modrm_offset(s, OPC_MOVDDUP, r, 0, base, offset); break; case MO_UL: @@ -1868,7 +1868,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, = TCGLabelQemuLdst *l) case MO_UL: tcg_out_mov(s, TCG_TYPE_I32, data_reg, TCG_REG_EAX); break; - case MO_Q: + case MO_UQ: if (TCG_TARGET_REG_BITS =3D=3D 64) { tcg_out_mov(s, TCG_TYPE_I64, data_reg, TCG_REG_RAX); } else if (data_reg =3D=3D TCG_REG_EDX) { @@ -1923,7 +1923,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, = TCGLabelQemuLdst *l) tcg_out_st(s, TCG_TYPE_I32, l->datalo_reg, TCG_REG_ESP, ofs); ofs +=3D 4; - if (s_bits =3D=3D MO_64) { + if (s_bits =3D=3D MO_UQ) { tcg_out_st(s, TCG_TYPE_I32, l->datahi_reg, TCG_REG_ESP, ofs); ofs +=3D 4; } @@ -1937,7 +1937,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, = TCGLabelQemuLdst *l) } else { tcg_out_mov(s, TCG_TYPE_PTR, tcg_target_call_iarg_regs[0], TCG_ARE= G0); /* The second argument is already loaded with addrlo. */ - tcg_out_mov(s, (s_bits =3D=3D MO_64 ? TCG_TYPE_I64 : TCG_TYPE_I32), + tcg_out_mov(s, (s_bits =3D=3D MO_UQ ? 
TCG_TYPE_I64 : TCG_TYPE_I32), tcg_target_call_iarg_regs[2], l->datalo_reg); tcg_out_movi(s, TCG_TYPE_I32, tcg_target_call_iarg_regs[3], oi); @@ -2060,7 +2060,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCG= Reg datalo, TCGReg datahi, } break; #endif - case MO_Q: + case MO_UQ: if (TCG_TARGET_REG_BITS =3D=3D 64) { tcg_out_modrm_sib_offset(s, movop + P_REXW + seg, datalo, base, index, 0, ofs); @@ -2181,7 +2181,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCG= Reg datalo, TCGReg datahi, } tcg_out_modrm_sib_offset(s, movop + seg, datalo, base, index, 0, o= fs); break; - case MO_64: + case MO_UQ: if (TCG_TARGET_REG_BITS =3D=3D 64) { if (bswap) { tcg_out_mov(s, TCG_TYPE_I64, scratch, datalo); @@ -2755,7 +2755,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode o= pc, OPC_UD2, OPC_UD2, OPC_VPSRLVD, OPC_VPSRLVQ }; static int const sarv_insn[4] =3D { - /* TODO: AVX512 adds support for MO_UW, MO_64. */ + /* TODO: AVX512 adds support for MO_UW, MO_UQ. */ OPC_UD2, OPC_UD2, OPC_VPSRAVD, OPC_UD2 }; static int const shls_insn[4] =3D { @@ -2768,7 +2768,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode o= pc, OPC_UD2, OPC_PSRAW, OPC_PSRAD, OPC_UD2 }; static int const abs_insn[4] =3D { - /* TODO: AVX512 adds support for MO_64. */ + /* TODO: AVX512 adds support for MO_UQ. */ OPC_PABSB, OPC_PABSW, OPC_PABSD, OPC_UD2 }; @@ -2898,7 +2898,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode o= pc, sub =3D 2; goto gen_shift; case INDEX_op_sari_vec: - tcg_debug_assert(vece !=3D MO_64); + tcg_debug_assert(vece !=3D MO_UQ); sub =3D 4; gen_shift: tcg_debug_assert(vece !=3D MO_UB); @@ -3281,9 +3281,11 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type,= unsigned vece) if (vece =3D=3D MO_UB) { return -1; } - /* We can emulate this for MO_64, but it does not pay off - unless we're producing at least 4 values. */ - if (vece =3D=3D MO_64) { + /* + * We can emulate this for MO_UQ, but it does not pay off + * unless we're producing at least 4 values. 
+ */ + if (vece =3D=3D MO_UQ) { return type >=3D TCG_TYPE_V256 ? -1 : 0; } return 1; @@ -3305,7 +3307,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, = unsigned vece) /* We can expand the operation for MO_UB. */ return -1; } - if (vece =3D=3D MO_64) { + if (vece =3D=3D MO_UQ) { return 0; } return 1; @@ -3389,7 +3391,7 @@ static void expand_vec_sari(TCGType type, unsigned ve= ce, tcg_temp_free_vec(t2); break; - case MO_64: + case MO_UQ: if (imm <=3D 32) { /* We can emulate a small sign extend by performing an arithme= tic * 32-bit shift and overwriting the high half of a 64-bit logi= cal @@ -3397,7 +3399,7 @@ static void expand_vec_sari(TCGType type, unsigned ve= ce, */ t1 =3D tcg_temp_new_vec(type); tcg_gen_sari_vec(MO_UL, t1, v1, imm); - tcg_gen_shri_vec(MO_64, v0, v1, imm); + tcg_gen_shri_vec(MO_UQ, v0, v1, imm); vec_gen_4(INDEX_op_x86_blend_vec, type, MO_UL, tcgv_vec_arg(v0), tcgv_vec_arg(v0), tcgv_vec_arg(t1), 0xaa); @@ -3407,10 +3409,10 @@ static void expand_vec_sari(TCGType type, unsigned = vece, * the sign-extend, shift and merge. 
*/ t1 =3D tcg_const_zeros_vec(type); - tcg_gen_cmp_vec(TCG_COND_GT, MO_64, t1, t1, v1); - tcg_gen_shri_vec(MO_64, v0, v1, imm); - tcg_gen_shli_vec(MO_64, t1, t1, 64 - imm); - tcg_gen_or_vec(MO_64, v0, v0, t1); + tcg_gen_cmp_vec(TCG_COND_GT, MO_UQ, t1, t1, v1); + tcg_gen_shri_vec(MO_UQ, v0, v1, imm); + tcg_gen_shli_vec(MO_UQ, t1, t1, 64 - imm); + tcg_gen_or_vec(MO_UQ, v0, v0, t1); tcg_temp_free_vec(t1); } break; diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c index a78fe87..ef31fc8 100644 --- a/tcg/mips/tcg-target.inc.c +++ b/tcg/mips/tcg-target.inc.c @@ -1336,7 +1336,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, = TCGLabelQemuLdst *l) tcg_out_mov(s, TCG_TYPE_PTR, tcg_target_call_iarg_regs[0], TCG_AREG0); v0 =3D l->datalo_reg; - if (TCG_TARGET_REG_BITS =3D=3D 32 && (opc & MO_SIZE) =3D=3D MO_64) { + if (TCG_TARGET_REG_BITS =3D=3D 32 && (opc & MO_SIZE) =3D=3D MO_UQ) { /* We eliminated V0 from the possible output registers, so it cannot be clobbered here. So we must move V1 first. */ if (MIPS_BE) { @@ -1389,7 +1389,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, = TCGLabelQemuLdst *l) case MO_UL: i =3D tcg_out_call_iarg_reg(s, i, l->datalo_reg); break; - case MO_64: + case MO_UQ: if (TCG_TARGET_REG_BITS =3D=3D 32) { i =3D tcg_out_call_iarg_reg2(s, i, l->datalo_reg, l->datahi_re= g); } else { @@ -1470,7 +1470,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCG= Reg lo, TCGReg hi, case MO_SL: tcg_out_opc_imm(s, OPC_LW, lo, base, 0); break; - case MO_Q | MO_BSWAP: + case MO_UQ | MO_BSWAP: if (TCG_TARGET_REG_BITS =3D=3D 64) { if (use_mips32r2_instructions) { tcg_out_opc_imm(s, OPC_LD, lo, base, 0); @@ -1499,7 +1499,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCG= Reg lo, TCGReg hi, tcg_out_mov(s, TCG_TYPE_I32, MIPS_BE ? hi : lo, TCG_TMP3); } break; - case MO_Q: + case MO_UQ: /* Prefer to load from offset 0 first, but allow for overlap. 
*/ if (TCG_TARGET_REG_BITS =3D=3D 64) { tcg_out_opc_imm(s, OPC_LD, lo, base, 0); @@ -1587,7 +1587,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCG= Reg lo, TCGReg hi, tcg_out_opc_imm(s, OPC_SW, lo, base, 0); break; - case MO_64 | MO_BSWAP: + case MO_UQ | MO_BSWAP: if (TCG_TARGET_REG_BITS =3D=3D 64) { tcg_out_bswap64(s, TCG_TMP3, lo); tcg_out_opc_imm(s, OPC_SD, TCG_TMP3, base, 0); @@ -1605,7 +1605,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCG= Reg lo, TCGReg hi, tcg_out_opc_imm(s, OPC_SW, TCG_TMP3, base, 4); } break; - case MO_64: + case MO_UQ: if (TCG_TARGET_REG_BITS =3D=3D 64) { tcg_out_opc_imm(s, OPC_SD, lo, base, 0); } else { diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c index 835336a..13a2437 100644 --- a/tcg/ppc/tcg-target.inc.c +++ b/tcg/ppc/tcg-target.inc.c @@ -1445,24 +1445,24 @@ static const uint32_t qemu_ldx_opc[16] =3D { [MO_UB] =3D LBZX, [MO_UW] =3D LHZX, [MO_UL] =3D LWZX, - [MO_Q] =3D LDX, + [MO_UQ] =3D LDX, [MO_SW] =3D LHAX, [MO_SL] =3D LWAX, [MO_BSWAP | MO_UB] =3D LBZX, [MO_BSWAP | MO_UW] =3D LHBRX, [MO_BSWAP | MO_UL] =3D LWBRX, - [MO_BSWAP | MO_Q] =3D LDBRX, + [MO_BSWAP | MO_UQ] =3D LDBRX, }; static const uint32_t qemu_stx_opc[16] =3D { [MO_UB] =3D STBX, [MO_UW] =3D STHX, [MO_UL] =3D STWX, - [MO_Q] =3D STDX, + [MO_UQ] =3D STDX, [MO_BSWAP | MO_UB] =3D STBX, [MO_BSWAP | MO_UW] =3D STHBRX, [MO_BSWAP | MO_UL] =3D STWBRX, - [MO_BSWAP | MO_Q] =3D STDBRX, + [MO_BSWAP | MO_UQ] =3D STDBRX, }; static const uint32_t qemu_exts_opc[4] =3D { @@ -1663,7 +1663,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, = TCGLabelQemuLdst *lb) lo =3D lb->datalo_reg; hi =3D lb->datahi_reg; - if (TCG_TARGET_REG_BITS =3D=3D 32 && (opc & MO_SIZE) =3D=3D MO_64) { + if (TCG_TARGET_REG_BITS =3D=3D 32 && (opc & MO_SIZE) =3D=3D MO_UQ) { tcg_out_mov(s, TCG_TYPE_I32, lo, TCG_REG_R4); tcg_out_mov(s, TCG_TYPE_I32, hi, TCG_REG_R3); } else if (opc & MO_SIGN) { @@ -1708,7 +1708,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, = 
TCGLabelQemuLdst *lb) hi =3D lb->datahi_reg; if (TCG_TARGET_REG_BITS =3D=3D 32) { switch (s_bits) { - case MO_64: + case MO_UQ: #ifdef TCG_TARGET_CALL_ALIGN_ARGS arg |=3D 1; #endif @@ -1722,7 +1722,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, = TCGLabelQemuLdst *lb) break; } } else { - if (s_bits =3D=3D MO_64) { + if (s_bits =3D=3D MO_UQ) { tcg_out_mov(s, TCG_TYPE_I64, arg++, lo); } else { tcg_out_rld(s, RLDICL, arg++, lo, 0, 64 - (8 << s_bits)); @@ -1775,7 +1775,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGA= rg *args, bool is_64) } #endif - if (TCG_TARGET_REG_BITS =3D=3D 32 && s_bits =3D=3D MO_64) { + if (TCG_TARGET_REG_BITS =3D=3D 32 && s_bits =3D=3D MO_UQ) { if (opc & MO_BSWAP) { tcg_out32(s, ADDI | TAI(TCG_REG_R0, addrlo, 4)); tcg_out32(s, LWBRX | TAB(datalo, rbase, addrlo)); @@ -1850,7 +1850,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGA= rg *args, bool is_64) } #endif - if (TCG_TARGET_REG_BITS =3D=3D 32 && s_bits =3D=3D MO_64) { + if (TCG_TARGET_REG_BITS =3D=3D 32 && s_bits =3D=3D MO_UQ) { if (opc & MO_BSWAP) { tcg_out32(s, ADDI | TAI(TCG_REG_R0, addrlo, 4)); tcg_out32(s, STWBRX | SAB(datalo, rbase, addrlo)); diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c index 1905986..90363df 100644 --- a/tcg/riscv/tcg-target.inc.c +++ b/tcg/riscv/tcg-target.inc.c @@ -1068,7 +1068,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, = TCGLabelQemuLdst *l) tcg_out_movi(s, TCG_TYPE_PTR, a3, (tcg_target_long)l->raddr); tcg_out_call(s, qemu_ld_helpers[opc & (MO_BSWAP | MO_SSIZE)]); - tcg_out_mov(s, (opc & MO_SIZE) =3D=3D MO_64, l->datalo_reg, a0); + tcg_out_mov(s, (opc & MO_SIZE) =3D=3D MO_UQ, l->datalo_reg, a0); tcg_out_goto(s, l->raddr); return true; @@ -1150,7 +1150,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCG= Reg lo, TCGReg hi, case MO_SL: tcg_out_opc_imm(s, OPC_LW, lo, base, 0); break; - case MO_Q: + case MO_UQ: /* Prefer to load from offset 0 first, but allow for overlap. 
*/ if (TCG_TARGET_REG_BITS =3D=3D 64) { tcg_out_opc_imm(s, OPC_LD, lo, base, 0); @@ -1225,7 +1225,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCG= Reg lo, TCGReg hi, case MO_UL: tcg_out_opc_store(s, OPC_SW, base, lo, 0); break; - case MO_64: + case MO_UQ: if (TCG_TARGET_REG_BITS =3D=3D 64) { tcg_out_opc_store(s, OPC_SD, base, lo, 0); } else { diff --git a/tcg/s390/tcg-target.inc.c b/tcg/s390/tcg-target.inc.c index fe42939..db1102e 100644 --- a/tcg/s390/tcg-target.inc.c +++ b/tcg/s390/tcg-target.inc.c @@ -1477,10 +1477,10 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, T= CGMemOp opc, TCGReg data, tcg_out_insn(s, RXY, LGF, data, base, index, disp); break; - case MO_Q | MO_BSWAP: + case MO_UQ | MO_BSWAP: tcg_out_insn(s, RXY, LRVG, data, base, index, disp); break; - case MO_Q: + case MO_UQ: tcg_out_insn(s, RXY, LG, data, base, index, disp); break; @@ -1523,10 +1523,10 @@ static void tcg_out_qemu_st_direct(TCGContext *s, T= CGMemOp opc, TCGReg data, } break; - case MO_Q | MO_BSWAP: + case MO_UQ | MO_BSWAP: tcg_out_insn(s, RXY, STRVG, data, base, index, disp); break; - case MO_Q: + case MO_UQ: tcg_out_insn(s, RXY, STG, data, base, index, disp); break; @@ -1660,7 +1660,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, = TCGLabelQemuLdst *lb) case MO_UL: tgen_ext32u(s, TCG_REG_R4, data_reg); break; - case MO_Q: + case MO_UQ: tcg_out_mov(s, TCG_TYPE_I64, TCG_REG_R4, data_reg); break; default: diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c index ac0d3a3..7c50118 100644 --- a/tcg/sparc/tcg-target.inc.c +++ b/tcg/sparc/tcg-target.inc.c @@ -894,7 +894,7 @@ static void emit_extend(TCGContext *s, TCGReg r, int op) tcg_out_arith(s, r, r, 0, SHIFT_SRL); } break; - case MO_64: + case MO_UQ: break; } } @@ -977,7 +977,7 @@ static void build_trampolines(TCGContext *s) } else { ra +=3D 1; } - if ((i & MO_SIZE) =3D=3D MO_64) { + if ((i & MO_SIZE) =3D=3D MO_UQ) { /* Install the high part of the data. 
*/ tcg_out_arithi(s, ra, ra + 1, 32, SHIFT_SRLX); ra +=3D 2; @@ -1217,7 +1217,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg dat= a, TCGReg addr, tcg_out_mov(s, TCG_TYPE_REG, data, TCG_REG_O0); } } else { - if ((memop & MO_SIZE) =3D=3D MO_64) { + if ((memop & MO_SIZE) =3D=3D MO_UQ) { tcg_out_arithi(s, TCG_REG_O0, TCG_REG_O0, 32, SHIFT_SLLX); tcg_out_arithi(s, TCG_REG_O1, TCG_REG_O1, 0, SHIFT_SRL); tcg_out_arith(s, data, TCG_REG_O0, TCG_REG_O1, ARITH_OR); @@ -1274,7 +1274,7 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg dat= a, TCGReg addr, param++; } tcg_out_mov(s, TCG_TYPE_REG, param++, addrz); - if (!SPARC64 && (memop & MO_SIZE) =3D=3D MO_64) { + if (!SPARC64 && (memop & MO_SIZE) =3D=3D MO_UQ) { /* Skip the high-part; we'll perform the extract in the trampoline= . */ param++; } diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c index e63622c..0c0eea5 100644 --- a/tcg/tcg-op-gvec.c +++ b/tcg/tcg-op-gvec.c @@ -312,7 +312,7 @@ uint64_t (dup_const)(unsigned vece, uint64_t c) return 0x0001000100010001ull * (uint16_t)c; case MO_UL: return 0x0000000100000001ull * (uint32_t)c; - case MO_64: + case MO_UQ: return c; default: g_assert_not_reached(); @@ -352,7 +352,7 @@ static void gen_dup_i64(unsigned vece, TCGv_i64 out, TC= Gv_i64 in) case MO_UL: tcg_gen_deposit_i64(out, in, in, 32, 32); break; - case MO_64: + case MO_UQ: tcg_gen_mov_i64(out, in); break; default: @@ -443,7 +443,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32= _t oprsz, TCGv_ptr t_ptr; uint32_t i; - assert(vece <=3D (in_32 ? MO_UL : MO_64)); + assert(vece <=3D (in_32 ? MO_UL : MO_UQ)); assert(in_32 =3D=3D NULL || in_64 =3D=3D NULL); /* If we're storing 0, expand oprsz to maxsz. 
*/ @@ -459,7 +459,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32= _t oprsz, */ type =3D choose_vector_type(NULL, vece, oprsz, (TCG_TARGET_REG_BITS =3D=3D 64 && in_32 =3D= =3D NULL - && (in_64 =3D=3D NULL || vece =3D=3D MO_64)= )); + && (in_64 =3D=3D NULL || vece =3D=3D MO_UQ)= )); if (type !=3D 0) { TCGv_vec t_vec =3D tcg_temp_new_vec(type); @@ -502,7 +502,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32= _t oprsz, /* For 64-bit hosts, use 64-bit constants for "simple" constan= ts or when we'd need too many 32-bit stores, or when a 64-bit constant is really required. */ - if (vece =3D=3D MO_64 + if (vece =3D=3D MO_UQ || (TCG_TARGET_REG_BITS =3D=3D 64 && (in_c =3D=3D 0 || in_c =3D=3D -1 || !check_size_impl(oprsz, 4)))) { @@ -534,7 +534,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32= _t oprsz, tcg_gen_addi_ptr(t_ptr, cpu_env, dofs); t_desc =3D tcg_const_i32(simd_desc(oprsz, maxsz, 0)); - if (vece =3D=3D MO_64) { + if (vece =3D=3D MO_UQ) { if (in_64) { gen_helper_gvec_dup64(t_ptr, t_desc, in_64); } else { @@ -1438,7 +1438,7 @@ void tcg_gen_gvec_dup_i64(unsigned vece, uint32_t dof= s, uint32_t oprsz, uint32_t maxsz, TCGv_i64 in) { check_size_align(oprsz, maxsz, dofs); - tcg_debug_assert(vece <=3D MO_64); + tcg_debug_assert(vece <=3D MO_UQ); do_dup(vece, dofs, oprsz, maxsz, NULL, in, 0); } @@ -1446,7 +1446,7 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dof= s, uint32_t aofs, uint32_t oprsz, uint32_t maxsz) { check_size_align(oprsz, maxsz, dofs); - if (vece <=3D MO_64) { + if (vece <=3D MO_UQ) { TCGType type =3D choose_vector_type(NULL, vece, oprsz, 0); if (type !=3D 0) { TCGv_vec t_vec =3D tcg_temp_new_vec(type); @@ -1512,7 +1512,7 @@ void tcg_gen_gvec_dup64i(uint32_t dofs, uint32_t oprs= z, uint32_t maxsz, uint64_t x) { check_size_align(oprsz, maxsz, dofs); - do_dup(MO_64, dofs, oprsz, maxsz, NULL, NULL, x); + do_dup(MO_UQ, dofs, oprsz, maxsz, NULL, NULL, x); } void tcg_gen_gvec_dup32i(uint32_t dofs, uint32_t oprsz, @@ 
@@ -1624,10 +1624,10 @@ void tcg_gen_gvec_add(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_add64,
           .opt_opc = vecop_list_add,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
 
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }
@@ -1655,10 +1655,10 @@ void tcg_gen_gvec_adds(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_adds64,
           .opt_opc = vecop_list_add,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
 
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, c, &g[vece]);
 }
@@ -1696,10 +1696,10 @@ void tcg_gen_gvec_subs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_subs64,
           .opt_opc = vecop_list_sub,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
 
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, c, &g[vece]);
 }
@@ -1775,10 +1775,10 @@ void tcg_gen_gvec_sub(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_sub64,
           .opt_opc = vecop_list_sub,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
 
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }
@@ -1806,10 +1806,10 @@ void tcg_gen_gvec_mul(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_mul64,
           .opt_opc = vecop_list_mul,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
 
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }
@@ -1835,10 +1835,10 @@ void tcg_gen_gvec_muls(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_muls64,
           .opt_opc = vecop_list_mul,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
 
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, c, &g[vece]);
 }
@@ -1870,9 +1870,9 @@ void tcg_gen_gvec_ssadd(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd64,
           .opt_opc = vecop_list,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }
@@ -1896,9 +1896,9 @@ void tcg_gen_gvec_sssub(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub64,
           .opt_opc = vecop_list,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }
@@ -1940,9 +1940,9 @@ void tcg_gen_gvec_usadd(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }
@@ -1984,9 +1984,9 @@ void tcg_gen_gvec_ussub(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }
@@ -2012,9 +2012,9 @@ void tcg_gen_gvec_smin(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }
@@ -2040,9 +2040,9 @@ void tcg_gen_gvec_umin(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }
@@ -2068,9 +2068,9 @@ void tcg_gen_gvec_smax(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }
@@ -2096,9 +2096,9 @@ void tcg_gen_gvec_umax(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }
@@ -2171,10 +2171,10 @@ void tcg_gen_gvec_neg(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_neg64,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
 
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_2(dofs, aofs, oprsz, maxsz, &g[vece]);
 }
@@ -2234,10 +2234,10 @@ void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_abs64,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
 
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_2(dofs, aofs, oprsz, maxsz, &g[vece]);
 }
@@ -2382,7 +2382,7 @@ static const GVecGen2s gop_ands = {
     .fniv = tcg_gen_and_vec,
     .fno = gen_helper_gvec_ands,
     .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-    .vece = MO_64
+    .vece = MO_UQ
 };
 
 void tcg_gen_gvec_ands(unsigned vece, uint32_t dofs, uint32_t aofs,
@@ -2407,7 +2407,7 @@ static const GVecGen2s gop_xors = {
     .fniv = tcg_gen_xor_vec,
     .fno = gen_helper_gvec_xors,
     .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-    .vece = MO_64
+    .vece = MO_UQ
 };
 
 void tcg_gen_gvec_xors(unsigned vece, uint32_t dofs, uint32_t aofs,
@@ -2432,7 +2432,7 @@ static const GVecGen2s gop_ors = {
     .fniv = tcg_gen_or_vec,
     .fno = gen_helper_gvec_ors,
     .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-    .vece = MO_64
+    .vece = MO_UQ
 };
 
 void tcg_gen_gvec_ors(unsigned vece, uint32_t dofs, uint32_t aofs,
@@ -2491,10 +2491,10 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_shl64i,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
 
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_debug_assert(shift >= 0 && shift < (8 << vece));
     if (shift == 0) {
         tcg_gen_gvec_mov(vece, dofs, aofs, oprsz, maxsz);
@@ -2542,10 +2542,10 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_shr64i,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
 
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_debug_assert(shift >= 0 && shift < (8 << vece));
     if (shift == 0) {
         tcg_gen_gvec_mov(vece, dofs, aofs, oprsz, maxsz);
@@ -2607,10 +2607,10 @@ void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_sar64i,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
 
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_debug_assert(shift >= 0 && shift < (8 << vece));
     if (shift == 0) {
         tcg_gen_gvec_mov(vece, dofs, aofs, oprsz, maxsz);
@@ -2660,7 +2660,7 @@ do_gvec_shifts(unsigned vece, uint32_t dofs, uint32_t aofs, TCGv_i32 shift,
     check_overlap_2(dofs, aofs, maxsz);
 
     /* If the backend has a scalar expansion, great. */
-    type = choose_vector_type(g->s_list, vece, oprsz, vece == MO_64);
+    type = choose_vector_type(g->s_list, vece, oprsz, vece == MO_UQ);
     if (type) {
         const TCGOpcode *hold_list = tcg_swap_vecop_list(NULL);
         switch (type) {
@@ -2692,15 +2692,15 @@ do_gvec_shifts(unsigned vece, uint32_t dofs, uint32_t aofs, TCGv_i32 shift,
     }
 
     /* If the backend supports variable vector shifts, also cool. */
-    type = choose_vector_type(g->v_list, vece, oprsz, vece == MO_64);
+    type = choose_vector_type(g->v_list, vece, oprsz, vece == MO_UQ);
     if (type) {
         const TCGOpcode *hold_list = tcg_swap_vecop_list(NULL);
         TCGv_vec v_shift = tcg_temp_new_vec(type);
 
-        if (vece == MO_64) {
+        if (vece == MO_UQ) {
             TCGv_i64 sh64 = tcg_temp_new_i64();
             tcg_gen_extu_i32_i64(sh64, shift);
-            tcg_gen_dup_i64_vec(MO_64, v_shift, sh64);
+            tcg_gen_dup_i64_vec(MO_UQ, v_shift, sh64);
             tcg_temp_free_i64(sh64);
         } else {
             tcg_gen_dup_i32_vec(vece, v_shift, shift);
@@ -2738,7 +2738,7 @@ do_gvec_shifts(unsigned vece, uint32_t dofs, uint32_t aofs, TCGv_i32 shift,
     /* Otherwise fall back to integral... */
     if (vece == MO_UL && check_size_impl(oprsz, 4)) {
         expand_2s_i32(dofs, aofs, oprsz, shift, false, g->fni4);
-    } else if (vece == MO_64 && check_size_impl(oprsz, 8)) {
+    } else if (vece == MO_UQ && check_size_impl(oprsz, 8)) {
         TCGv_i64 sh64 = tcg_temp_new_i64();
         tcg_gen_extu_i32_i64(sh64, shift);
         expand_2s_i64(dofs, aofs, oprsz, sh64, false, g->fni8);
@@ -2785,7 +2785,7 @@ void tcg_gen_gvec_shls(unsigned vece, uint32_t dofs, uint32_t aofs,
         .v_list = { INDEX_op_shlv_vec, 0 },
     };
 
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     do_gvec_shifts(vece, dofs, aofs, shift, oprsz, maxsz, &g);
 }
@@ -2807,7 +2807,7 @@ void tcg_gen_gvec_shrs(unsigned vece, uint32_t dofs, uint32_t aofs,
         .v_list = { INDEX_op_shrv_vec, 0 },
     };
 
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     do_gvec_shifts(vece, dofs, aofs, shift, oprsz, maxsz, &g);
 }
@@ -2829,7 +2829,7 @@ void tcg_gen_gvec_sars(unsigned vece, uint32_t dofs, uint32_t aofs,
         .v_list = { INDEX_op_sarv_vec, 0 },
     };
 
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     do_gvec_shifts(vece, dofs, aofs, shift, oprsz, maxsz, &g);
 }
@@ -2895,10 +2895,10 @@ void tcg_gen_gvec_shlv(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_shl64v,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
 
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }
@@ -2958,10 +2958,10 @@ void tcg_gen_gvec_shrv(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_shr64v,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
 
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }
@@ -3021,10 +3021,10 @@ void tcg_gen_gvec_sarv(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_sar64v,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
 
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }
@@ -3140,7 +3140,7 @@ void tcg_gen_gvec_cmp(TCGCond cond, unsigned vece, uint32_t dofs,
      */
     hold_list = tcg_swap_vecop_list(cmp_list);
     type = choose_vector_type(cmp_list, vece, oprsz,
-                              TCG_TARGET_REG_BITS == 64 && vece == MO_64);
+                              TCG_TARGET_REG_BITS == 64 && vece == MO_UQ);
     switch (type) {
     case TCG_TYPE_V256:
         /* Recall that ARM SVE allows vector sizes that are not a
@@ -3166,7 +3166,7 @@ void tcg_gen_gvec_cmp(TCGCond cond, unsigned vece, uint32_t dofs,
         break;
 
     case 0:
-        if (vece == MO_64 && check_size_impl(oprsz, 8)) {
+        if (vece == MO_UQ && check_size_impl(oprsz, 8)) {
             expand_cmp_i64(dofs, aofs, bofs, oprsz, cond);
         } else if (vece == MO_UL && check_size_impl(oprsz, 4)) {
             expand_cmp_i32(dofs, aofs, bofs, oprsz, cond);
diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c
index ff723ab..e8aea38 100644
--- a/tcg/tcg-op-vec.c
+++ b/tcg/tcg-op-vec.c
@@ -216,7 +216,7 @@ void tcg_gen_mov_vec(TCGv_vec r, TCGv_vec a)
     }
 }
 
-#define MO_REG  (TCG_TARGET_REG_BITS == 64 ? MO_64 : MO_UL)
+#define MO_REG  (TCG_TARGET_REG_BITS == 64 ? MO_UQ : MO_UL)
 
 static void do_dupi_vec(TCGv_vec r, unsigned vece, TCGArg a)
 {
@@ -255,10 +255,10 @@ void tcg_gen_dup64i_vec(TCGv_vec r, uint64_t a)
     if (TCG_TARGET_REG_BITS == 32 && a == deposit64(a, 32, 32, a)) {
         do_dupi_vec(r, MO_UL, a);
     } else if (TCG_TARGET_REG_BITS == 64 || a == (uint64_t)(int32_t)a) {
-        do_dupi_vec(r, MO_64, a);
+        do_dupi_vec(r, MO_UQ, a);
     } else {
         TCGv_i64 c = tcg_const_i64(a);
-        tcg_gen_dup_i64_vec(MO_64, r, c);
+        tcg_gen_dup_i64_vec(MO_UQ, r, c);
         tcg_temp_free_i64(c);
     }
 }
@@ -292,10 +292,10 @@ void tcg_gen_dup_i64_vec(unsigned vece, TCGv_vec r, TCGv_i64 a)
     if (TCG_TARGET_REG_BITS == 64) {
         TCGArg ai = tcgv_i64_arg(a);
         vec_gen_2(INDEX_op_dup_vec, type, vece, ri, ai);
-    } else if (vece == MO_64) {
+    } else if (vece == MO_UQ) {
         TCGArg al = tcgv_i32_arg(TCGV_LOW(a));
         TCGArg ah = tcgv_i32_arg(TCGV_HIGH(a));
-        vec_gen_3(INDEX_op_dup2_vec, type, MO_64, ri, al, ah);
+        vec_gen_3(INDEX_op_dup2_vec, type, MO_UQ, ri, al, ah);
     } else {
         TCGArg ai = tcgv_i32_arg(TCGV_LOW(a));
         vec_gen_2(INDEX_op_dup_vec, type, vece, ri, ai);
@@ -709,10 +709,10 @@ static void do_shifts(unsigned vece, TCGv_vec r, TCGv_vec a,
     } else {
         TCGv_vec vec_s = tcg_temp_new_vec(type);
 
-        if (vece == MO_64) {
+        if (vece == MO_UQ) {
             TCGv_i64 s64 = tcg_temp_new_i64();
             tcg_gen_extu_i32_i64(s64, s);
-            tcg_gen_dup_i64_vec(MO_64, vec_s, s64);
+            tcg_gen_dup_i64_vec(MO_UQ, vec_s, s64);
             tcg_temp_free_i64(s64);
         } else {
             tcg_gen_dup_i32_vec(vece, vec_s, s);
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 447683d..a9f3e13 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -2730,7 +2730,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
             op &= ~MO_SIGN;
         }
         break;
-    case MO_64:
+    case MO_UQ:
         if (!is64) {
             tcg_abort();
         }
@@ -2862,7 +2862,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
 {
     TCGMemOp orig_memop;
 
-    if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
+    if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_UQ) {
         tcg_gen_qemu_ld_i32(TCGV_LOW(val), addr, idx, memop);
         if (memop & MO_SIGN) {
             tcg_gen_sari_i32(TCGV_HIGH(val), TCGV_LOW(val), 31);
@@ -2881,7 +2881,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) {
         memop &= ~MO_BSWAP;
         /* The bswap primitive requires zero-extended input. */
-        if ((memop & MO_SIGN) && (memop & MO_SIZE) < MO_64) {
+        if ((memop & MO_SIGN) && (memop & MO_SIZE) < MO_UQ) {
             memop &= ~MO_SIGN;
         }
     }
@@ -2902,7 +2902,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
             tcg_gen_ext32s_i64(val, val);
         }
         break;
-    case MO_64:
+    case MO_UQ:
         tcg_gen_bswap64_i64(val, val);
         break;
     default:
@@ -2915,7 +2915,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
 {
     TCGv_i64 swap = NULL;
 
-    if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
+    if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_UQ) {
         tcg_gen_qemu_st_i32(TCGV_LOW(val), addr, idx, memop);
         return;
     }
@@ -2936,7 +2936,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
             tcg_gen_ext32u_i64(swap, val);
             tcg_gen_bswap32_i64(swap, swap);
             break;
-        case MO_64:
+        case MO_UQ:
             tcg_gen_bswap64_i64(swap, val);
             break;
         default:
@@ -3029,8 +3029,8 @@ static void * const table_cmpxchg[16] = {
     [MO_UW | MO_BE] = gen_helper_atomic_cmpxchgw_be,
     [MO_UL | MO_LE] = gen_helper_atomic_cmpxchgl_le,
     [MO_UL | MO_BE] = gen_helper_atomic_cmpxchgl_be,
-    WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_cmpxchgq_le)
-    WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_cmpxchgq_be)
+    WITH_ATOMIC64([MO_UQ | MO_LE] = gen_helper_atomic_cmpxchgq_le)
+    WITH_ATOMIC64([MO_UQ | MO_BE] = gen_helper_atomic_cmpxchgq_be)
 };
 
 void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
@@ -3099,7 +3099,7 @@ void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
             tcg_gen_mov_i64(retv, t1);
         }
         tcg_temp_free_i64(t1);
-    } else if ((memop & MO_SIZE) == MO_64) {
+    } else if ((memop & MO_SIZE) == MO_UQ) {
 #ifdef CONFIG_ATOMIC64
         gen_atomic_cx_i64 gen;
@@ -3207,7 +3207,7 @@ static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
 {
     memop = tcg_canonicalize_memop(memop, 1, 0);
 
-    if ((memop & MO_SIZE) == MO_64) {
+    if ((memop & MO_SIZE) == MO_UQ) {
 #ifdef CONFIG_ATOMIC64
         gen_atomic_op_i64 gen;
@@ -3253,8 +3253,8 @@ static void * const table_##NAME[16] = {                \
     [MO_UW | MO_BE] = gen_helper_atomic_##NAME##w_be,                   \
     [MO_UL | MO_LE] = gen_helper_atomic_##NAME##l_le,                   \
     [MO_UL | MO_BE] = gen_helper_atomic_##NAME##l_be,                   \
-    WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_##NAME##q_le)     \
-    WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be)     \
+    WITH_ATOMIC64([MO_UQ | MO_LE] = gen_helper_atomic_##NAME##q_le)     \
+    WITH_ATOMIC64([MO_UQ | MO_BE] = gen_helper_atomic_##NAME##q_be)     \
 };                                                                      \
 void tcg_gen_atomic_##NAME##_i32                                        \
     (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, TCGMemOp memop) \
diff --git a/tcg/tcg.h b/tcg/tcg.h
index 4b6ee89..63e9897 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -371,28 +371,29 @@ typedef enum TCGMemOp {
     MO_UB    = MO_8,
     MO_UW    = MO_16,
     MO_UL    = MO_32,
+    MO_UQ    = MO_64,
     MO_SB    = MO_SIGN | MO_8,
     MO_SW    = MO_SIGN | MO_16,
     MO_SL    = MO_SIGN | MO_32,
-    MO_Q     = MO_64,
+    MO_SQ    = MO_SIGN | MO_64,
 
     MO_LEUW  = MO_LE | MO_UW,
     MO_LEUL  = MO_LE | MO_UL,
     MO_LESW  = MO_LE | MO_SW,
     MO_LESL  = MO_LE | MO_SL,
-    MO_LEQ   = MO_LE | MO_Q,
+    MO_LEQ   = MO_LE | MO_UQ,
 
     MO_BEUW  = MO_BE | MO_UW,
     MO_BEUL  = MO_BE | MO_UL,
     MO_BESW  = MO_BE | MO_SW,
     MO_BESL  = MO_BE | MO_SL,
-    MO_BEQ   = MO_BE | MO_Q,
+    MO_BEQ   = MO_BE | MO_UQ,
 
     MO_TEUW  = MO_TE | MO_UW,
     MO_TEUL  = MO_TE | MO_UL,
     MO_TESW  = MO_TE | MO_SW,
     MO_TESL  = MO_TE | MO_SL,
-    MO_TEQ   = MO_TE | MO_Q,
+    MO_TEQ   = MO_TE | MO_UQ,
 
     MO_SSIZE = MO_SIZE | MO_SIGN,
 } TCGMemOp;
-- 
1.8.3.1
From nobody Mon Feb 9 05:43:11 2026
Date: Mon, 22 Jul 2019 15:43:43 +0000
Message-ID: <1563810222730.76414@bt.com>
Subject: [Qemu-devel] [PATCH v2 05/20] tcg: Move size+sign+endian from TCGMemOp to MemOp
Cc: peter.maydell@linaro.org, walling@linux.ibm.com, mst@redhat.com, palmer@sifive.com, mark.cave-ayland@ilande.co.uk, Alistair.Francis@wdc.com, arikalo@wavecomp.com, david@redhat.com, pasic@linux.ibm.com, borntraeger@de.ibm.com, rth@twiddle.net, atar4qemu@gmail.com, ehabkost@redhat.com, sw@weilnetz.de, qemu-s390x@nongnu.org, qemu-arm@nongnu.org, david@gibson.dropbear.id.au, qemu-riscv@nongnu.org, cohuck@redhat.com, claudio.fontana@huawei.com, alex.williamson@redhat.com, qemu-ppc@nongnu.org, amarkovic@wavecomp.com, pbonzini@redhat.com, aurelien@aurel32.net

Preparation for modifying the memory API to take size+sign+endianness
instead of just size. The accelerator-independent MemOp enum is split
out of, and extended by, the TCG-specific TCGMemOp enum.

Signed-off-by: Tony Nguyen
---
 MAINTAINERS          |  1 +
 include/exec/memop.h | 27 +++++++++++++++++++++++++++
 tcg/tcg.h            | 15 +++++----------
 3 files changed, 33 insertions(+), 10 deletions(-)
 create mode 100644 include/exec/memop.h

diff --git a/MAINTAINERS b/MAINTAINERS
index cc9636b..3f148cd 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1890,6 +1890,7 @@ M: Paolo Bonzini
 S: Supported
 F: include/exec/ioport.h
 F: ioport.c
+F: include/exec/memop.h
 F: include/exec/memory.h
 F: include/exec/ram_addr.h
 F: memory.c
diff --git a/include/exec/memop.h b/include/exec/memop.h
new file mode 100644
index 0000000..43e99d7
--- /dev/null
+++ b/include/exec/memop.h
@@ -0,0 +1,27 @@
+/*
+ * Constants for memory operations
+ *
+ * Authors:
+ *  Richard Henderson
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef MEMOP_H
+#define MEMOP_H
+
+typedef enum MemOp {
+    MO_8     = 0,
+    MO_16    = 1,
+    MO_32    = 2,
+    MO_64    = 3,
+    MO_SIZE  = 3,   /* Mask for the above. */
+
+    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended. */
+
+    MO_BSWAP = 8,   /* Host reverse endian. */
+} MemOp;
+
+#endif
diff --git a/tcg/tcg.h b/tcg/tcg.h
index 63e9897..18b91fe 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -26,6 +26,7 @@
 #define TCG_H
 
 #include "cpu.h"
+#include "exec/memop.h"
 #include "exec/tb-context.h"
 #include "qemu/bitops.h"
 #include "qemu/queue.h"
@@ -309,17 +310,11 @@ typedef enum TCGType {
 #endif
 } TCGType;
 
-/* Constants for qemu_ld and qemu_st for the Memory Operation field. */
+/*
+ * Extend MemOp with constants for qemu_ld and qemu_st for the Memory
+ * Operation field.
+ */
 typedef enum TCGMemOp {
-    MO_8     = 0,
-    MO_16    = 1,
-    MO_32    = 2,
-    MO_64    = 3,
-    MO_SIZE  = 3,   /* Mask for the above. */
-
-    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended. */
-
-    MO_BSWAP = 8,   /* Host reverse endian. */
 #ifdef HOST_WORDS_BIGENDIAN
     MO_LE    = MO_BSWAP,
     MO_BE    = 0,
-- 
1.8.3.1
From nobody Mon Feb 9 05:43:11 2026
Date: Mon, 22 Jul 2019 15:44:24 +0000
Message-ID: <1563810264064.38406@bt.com>
Subject: [Qemu-devel] [PATCH v2 06/20] tcg: Rename get_memop to get_tcgmemop

Correct naming as there is now both MemOp and TCGMemOp.
Signed-off-by: Tony Nguyen
---
 accel/tcg/cputlb.c           |  6 +++---
 tcg/aarch64/tcg-target.inc.c |  8 ++++----
 tcg/arm/tcg-target.inc.c     |  8 ++++----
 tcg/i386/tcg-target.inc.c    |  8 ++++----
 tcg/mips/tcg-target.inc.c    | 10 +++++-----
 tcg/optimize.c               |  2 +-
 tcg/ppc/tcg-target.inc.c     |  8 ++++----
 tcg/riscv/tcg-target.inc.c   | 10 +++++-----
 tcg/s390/tcg-target.inc.c    |  8 ++++----
 tcg/sparc/tcg-target.inc.c   |  4 ++--
 tcg/tcg.c                    |  2 +-
 tcg/tcg.h                    |  4 ++--
 tcg/tci.c                    |  8 ++++----
 13 files changed, 43 insertions(+), 43 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index bb9897b..184fc54 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1133,7 +1133,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
     uintptr_t index = tlb_index(env, mmu_idx, addr);
     CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
     target_ulong tlb_addr = tlb_addr_write(tlbe);
-    TCGMemOp mop = get_memop(oi);
+    TCGMemOp mop = get_tcgmemop(oi);
     int a_bits = get_alignment_bits(mop);
     int s_bits = mop & MO_SIZE;
     void *hostaddr;
@@ -1257,7 +1257,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
         offsetof(CPUTLBEntry, addr_code) : offsetof(CPUTLBEntry, addr_read);
     const MMUAccessType access_type =
         code_read ? MMU_INST_FETCH : MMU_DATA_LOAD;
-    unsigned a_bits = get_alignment_bits(get_memop(oi));
+    unsigned a_bits = get_alignment_bits(get_tcgmemop(oi));
     void *haddr;
     uint64_t res;
@@ -1506,7 +1506,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
     CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
     target_ulong tlb_addr = tlb_addr_write(entry);
     const size_t tlb_off = offsetof(CPUTLBEntry, addr_write);
-    unsigned a_bits = get_alignment_bits(get_memop(oi));
+    unsigned a_bits = get_alignment_bits(get_tcgmemop(oi));
     void *haddr;
 
     /* Handle CPU specific unaligned behaviour */
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index d14afa9..886da51 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -1580,7 +1580,7 @@ static inline void tcg_out_adr(TCGContext *s, TCGReg rd, void *target)
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     TCGMemOp size = opc & MO_SIZE;
 
     if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
@@ -1605,7 +1605,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     TCGMemOp size = opc & MO_SIZE;
 
     if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
@@ -1804,7 +1804,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
 static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi, TCGType ext)
 {
-    TCGMemOp memop = get_memop(oi);
+    TCGMemOp memop = get_tcgmemop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
@@ -1829,7 +1829,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
 static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp memop = get_memop(oi);
+    TCGMemOp memop = get_tcgmemop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index 70eeb8a..98c5b47 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1348,7 +1348,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGReg argreg, datalo, datahi;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     void *func;
 
     if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
@@ -1412,7 +1412,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGReg argreg, datalo, datahi;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
 
     if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1589,7 +1589,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
     addrlo = *args++;
     addrhi = (TARGET_LONG_BITS == 64 ? *args++ : 0);
     oi = *args++;
-    opc = get_memop(oi);
+    opc = get_tcgmemop(oi);
 
 #ifdef CONFIG_SOFTMMU
     mem_index = get_mmuidx(oi);
@@ -1720,7 +1720,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
     addrlo = *args++;
     addrhi = (TARGET_LONG_BITS == 64 ? *args++ : 0);
     oi = *args++;
-    opc = get_memop(oi);
+    opc = get_tcgmemop(oi);
 
 #ifdef CONFIG_SOFTMMU
     mem_index = get_mmuidx(oi);
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 3a73334..e4525ca 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -1810,7 +1810,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, bool is_64,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     TCGReg data_reg;
     tcg_insn_unit **label_ptr = &l->label_ptr[0];
     int rexw = (l->type == TCG_TYPE_I64 ? P_REXW : 0);
@@ -1895,7 +1895,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     TCGMemOp s_bits = opc & MO_SIZE;
     tcg_insn_unit **label_ptr = &l->label_ptr[0];
     TCGReg retaddr;
@@ -2114,7 +2114,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
     addrlo = *args++;
     addrhi = (TARGET_LONG_BITS > TCG_TARGET_REG_BITS ? *args++ : 0);
     oi = *args++;
-    opc = get_memop(oi);
+    opc = get_tcgmemop(oi);
 
 #if defined(CONFIG_SOFTMMU)
     mem_index = get_mmuidx(oi);
@@ -2232,7 +2232,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
     addrlo = *args++;
     addrhi = (TARGET_LONG_BITS > TCG_TARGET_REG_BITS ?
*args++ : 0); oi =3D *args++; - opc =3D get_memop(oi); + opc =3D get_tcgmemop(oi); #if defined(CONFIG_SOFTMMU) mem_index =3D get_mmuidx(oi); diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c index ef31fc8..010afd0 100644 --- a/tcg/mips/tcg-target.inc.c +++ b/tcg/mips/tcg-target.inc.c @@ -1215,7 +1215,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg ba= se, TCGReg addrl, TCGReg addrh, TCGMemOpIdx oi, tcg_insn_unit *label_ptr[2], bool is_load) { - TCGMemOp opc =3D get_memop(oi); + TCGMemOp opc =3D get_tcgmemop(oi); unsigned s_bits =3D opc & MO_SIZE; unsigned a_bits =3D get_alignment_bits(opc); int mem_index =3D get_mmuidx(oi); @@ -1313,7 +1313,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is= _ld, TCGMemOpIdx oi, static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l) { TCGMemOpIdx oi =3D l->oi; - TCGMemOp opc =3D get_memop(oi); + TCGMemOp opc =3D get_tcgmemop(oi); TCGReg v0; int i; @@ -1363,7 +1363,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, = TCGLabelQemuLdst *l) static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) { TCGMemOpIdx oi =3D l->oi; - TCGMemOp opc =3D get_memop(oi); + TCGMemOp opc =3D get_tcgmemop(oi); TCGMemOp s_bits =3D opc & MO_SIZE; int i; @@ -1532,7 +1532,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGA= rg *args, bool is_64) addr_regl =3D *args++; addr_regh =3D (TCG_TARGET_REG_BITS < TARGET_LONG_BITS ? *args++ : 0); oi =3D *args++; - opc =3D get_memop(oi); + opc =3D get_tcgmemop(oi); #if defined(CONFIG_SOFTMMU) tcg_out_tlb_load(s, base, addr_regl, addr_regh, oi, label_ptr, 1); @@ -1635,7 +1635,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGA= rg *args, bool is_64) addr_regl =3D *args++; addr_regh =3D (TCG_TARGET_REG_BITS < TARGET_LONG_BITS ? 
*args++ : 0); oi =3D *args++; - opc =3D get_memop(oi); + opc =3D get_tcgmemop(oi); #if defined(CONFIG_SOFTMMU) tcg_out_tlb_load(s, base, addr_regl, addr_regh, oi, label_ptr, 0); diff --git a/tcg/optimize.c b/tcg/optimize.c index d2424de..422bcbb 100644 --- a/tcg/optimize.c +++ b/tcg/optimize.c @@ -1014,7 +1014,7 @@ void tcg_optimize(TCGContext *s) CASE_OP_32_64(qemu_ld): { TCGMemOpIdx oi =3D op->args[nb_oargs + nb_iargs]; - TCGMemOp mop =3D get_memop(oi); + TCGMemOp mop =3D get_tcgmemop(oi); if (!(mop & MO_SIGN)) { mask =3D (2ULL << ((8 << (mop & MO_SIZE)) - 1)) - 1; } diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c index 13a2437..0ab4faa 100644 --- a/tcg/ppc/tcg-target.inc.c +++ b/tcg/ppc/tcg-target.inc.c @@ -1633,7 +1633,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool i= s_ld, TCGMemOpIdx oi, static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) { TCGMemOpIdx oi =3D lb->oi; - TCGMemOp opc =3D get_memop(oi); + TCGMemOp opc =3D get_tcgmemop(oi); TCGReg hi, lo, arg =3D TCG_REG_R3; if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) { @@ -1680,7 +1680,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, = TCGLabelQemuLdst *lb) static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) { TCGMemOpIdx oi =3D lb->oi; - TCGMemOp opc =3D get_memop(oi); + TCGMemOp opc =3D get_tcgmemop(oi); TCGMemOp s_bits =3D opc & MO_SIZE; TCGReg hi, lo, arg =3D TCG_REG_R3; @@ -1755,7 +1755,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGA= rg *args, bool is_64) addrlo =3D *args++; addrhi =3D (TCG_TARGET_REG_BITS < TARGET_LONG_BITS ? *args++ : 0); oi =3D *args++; - opc =3D get_memop(oi); + opc =3D get_tcgmemop(oi); s_bits =3D opc & MO_SIZE; #ifdef CONFIG_SOFTMMU @@ -1830,7 +1830,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGA= rg *args, bool is_64) addrlo =3D *args++; addrhi =3D (TCG_TARGET_REG_BITS < TARGET_LONG_BITS ? 
*args++ : 0); oi =3D *args++; - opc =3D get_memop(oi); + opc =3D get_tcgmemop(oi); s_bits =3D opc & MO_SIZE; #ifdef CONFIG_SOFTMMU diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c index 90363df..ab4e035 100644 --- a/tcg/riscv/tcg-target.inc.c +++ b/tcg/riscv/tcg-target.inc.c @@ -970,7 +970,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg addr= l, TCGReg addrh, TCGMemOpIdx oi, tcg_insn_unit **label_ptr, bool is_load) { - TCGMemOp opc =3D get_memop(oi); + TCGMemOp opc =3D get_tcgmemop(oi); unsigned s_bits =3D opc & MO_SIZE; unsigned a_bits =3D get_alignment_bits(opc); tcg_target_long compare_mask; @@ -1044,7 +1044,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is= _ld, TCGMemOpIdx oi, static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l) { TCGMemOpIdx oi =3D l->oi; - TCGMemOp opc =3D get_memop(oi); + TCGMemOp opc =3D get_tcgmemop(oi); TCGReg a0 =3D tcg_target_call_iarg_regs[0]; TCGReg a1 =3D tcg_target_call_iarg_regs[1]; TCGReg a2 =3D tcg_target_call_iarg_regs[2]; @@ -1077,7 +1077,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, = TCGLabelQemuLdst *l) static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) { TCGMemOpIdx oi =3D l->oi; - TCGMemOp opc =3D get_memop(oi); + TCGMemOp opc =3D get_tcgmemop(oi); TCGMemOp s_bits =3D opc & MO_SIZE; TCGReg a0 =3D tcg_target_call_iarg_regs[0]; TCGReg a1 =3D tcg_target_call_iarg_regs[1]; @@ -1183,7 +1183,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGA= rg *args, bool is_64) addr_regl =3D *args++; addr_regh =3D (TCG_TARGET_REG_BITS < TARGET_LONG_BITS ? *args++ : 0); oi =3D *args++; - opc =3D get_memop(oi); + opc =3D get_tcgmemop(oi); #if defined(CONFIG_SOFTMMU) tcg_out_tlb_load(s, addr_regl, addr_regh, oi, label_ptr, 1); @@ -1254,7 +1254,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGA= rg *args, bool is_64) addr_regl =3D *args++; addr_regh =3D (TCG_TARGET_REG_BITS < TARGET_LONG_BITS ? 
*args++ : 0); oi =3D *args++; - opc =3D get_memop(oi); + opc =3D get_tcgmemop(oi); #if defined(CONFIG_SOFTMMU) tcg_out_tlb_load(s, addr_regl, addr_regh, oi, label_ptr, 0); diff --git a/tcg/s390/tcg-target.inc.c b/tcg/s390/tcg-target.inc.c index db1102e..4d8078b 100644 --- a/tcg/s390/tcg-target.inc.c +++ b/tcg/s390/tcg-target.inc.c @@ -1614,7 +1614,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, = TCGLabelQemuLdst *lb) TCGReg addr_reg =3D lb->addrlo_reg; TCGReg data_reg =3D lb->datalo_reg; TCGMemOpIdx oi =3D lb->oi; - TCGMemOp opc =3D get_memop(oi); + TCGMemOp opc =3D get_tcgmemop(oi); if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL, (intptr_t)s->code_ptr, 2)) { @@ -1639,7 +1639,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, = TCGLabelQemuLdst *lb) TCGReg addr_reg =3D lb->addrlo_reg; TCGReg data_reg =3D lb->datalo_reg; TCGMemOpIdx oi =3D lb->oi; - TCGMemOp opc =3D get_memop(oi); + TCGMemOp opc =3D get_tcgmemop(oi); if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL, (intptr_t)s->code_ptr, 2)) { @@ -1694,7 +1694,7 @@ static void tcg_prepare_user_ldst(TCGContext *s, TCGR= eg *addr_reg, static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_re= g, TCGMemOpIdx oi) { - TCGMemOp opc =3D get_memop(oi); + TCGMemOp opc =3D get_tcgmemop(oi); #ifdef CONFIG_SOFTMMU unsigned mem_index =3D get_mmuidx(oi); tcg_insn_unit *label_ptr; @@ -1721,7 +1721,7 @@ static void tcg_out_qemu_ld(TCGContext* s, TCGReg dat= a_reg, TCGReg addr_reg, static void tcg_out_qemu_st(TCGContext* s, TCGReg data_reg, TCGReg addr_re= g, TCGMemOpIdx oi) { - TCGMemOp opc =3D get_memop(oi); + TCGMemOp opc =3D get_tcgmemop(oi); #ifdef CONFIG_SOFTMMU unsigned mem_index =3D get_mmuidx(oi); tcg_insn_unit *label_ptr; diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c index 7c50118..e6cf2c4 100644 --- a/tcg/sparc/tcg-target.inc.c +++ b/tcg/sparc/tcg-target.inc.c @@ -1164,7 +1164,7 @@ static const int qemu_st_opc[16] =3D { static void tcg_out_qemu_ld(TCGContext *s, 
TCGReg data, TCGReg addr, TCGMemOpIdx oi, bool is_64) { - TCGMemOp memop =3D get_memop(oi); + TCGMemOp memop =3D get_tcgmemop(oi); #ifdef CONFIG_SOFTMMU unsigned memi =3D get_mmuidx(oi); TCGReg addrz, param; @@ -1246,7 +1246,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg dat= a, TCGReg addr, static void tcg_out_qemu_st(TCGContext *s, TCGReg data, TCGReg addr, TCGMemOpIdx oi) { - TCGMemOp memop =3D get_memop(oi); + TCGMemOp memop =3D get_tcgmemop(oi); #ifdef CONFIG_SOFTMMU unsigned memi =3D get_mmuidx(oi); TCGReg addrz, param; diff --git a/tcg/tcg.c b/tcg/tcg.c index be2c33c..492d7c6 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -2056,7 +2056,7 @@ static void tcg_dump_ops(TCGContext *s, bool have_pre= fs) case INDEX_op_qemu_st_i64: { TCGMemOpIdx oi =3D op->args[k++]; - TCGMemOp op =3D get_memop(oi); + TCGMemOp op =3D get_tcgmemop(oi); unsigned ix =3D get_mmuidx(oi); if (op & ~(MO_AMASK | MO_BSWAP | MO_SSIZE)) { diff --git a/tcg/tcg.h b/tcg/tcg.h index 18b91fe..8a3f912 100644 --- a/tcg/tcg.h +++ b/tcg/tcg.h @@ -1197,12 +1197,12 @@ static inline TCGMemOpIdx make_memop_idx(TCGMemOp o= p, unsigned idx) } /** - * get_memop + * get_tcgmemop * @oi: combined op/idx parameter * * Extract the memory operation from the combined value. 
*/ -static inline TCGMemOp get_memop(TCGMemOpIdx oi) +static inline TCGMemOp get_tcgmemop(TCGMemOpIdx oi) { return oi >> 4; } diff --git a/tcg/tci.c b/tcg/tci.c index 33edca1..b3c5795 100644 --- a/tcg/tci.c +++ b/tcg/tci.c @@ -1109,7 +1109,7 @@ uintptr_t tcg_qemu_tb_exec(CPUArchState *env, uint8_t= *tb_ptr) t0 =3D *tb_ptr++; taddr =3D tci_read_ulong(regs, &tb_ptr); oi =3D tci_read_i(&tb_ptr); - switch (get_memop(oi) & (MO_BSWAP | MO_SSIZE)) { + switch (get_tcgmemop(oi) & (MO_BSWAP | MO_SSIZE)) { case MO_UB: tmp32 =3D qemu_ld_ub; break; @@ -1146,7 +1146,7 @@ uintptr_t tcg_qemu_tb_exec(CPUArchState *env, uint8_t= *tb_ptr) } taddr =3D tci_read_ulong(regs, &tb_ptr); oi =3D tci_read_i(&tb_ptr); - switch (get_memop(oi) & (MO_BSWAP | MO_SSIZE)) { + switch (get_tcgmemop(oi) & (MO_BSWAP | MO_SSIZE)) { case MO_UB: tmp64 =3D qemu_ld_ub; break; @@ -1195,7 +1195,7 @@ uintptr_t tcg_qemu_tb_exec(CPUArchState *env, uint8_t= *tb_ptr) t0 =3D tci_read_r(regs, &tb_ptr); taddr =3D tci_read_ulong(regs, &tb_ptr); oi =3D tci_read_i(&tb_ptr); - switch (get_memop(oi) & (MO_BSWAP | MO_SIZE)) { + switch (get_tcgmemop(oi) & (MO_BSWAP | MO_SIZE)) { case MO_UB: qemu_st_b(t0); break; @@ -1219,7 +1219,7 @@ uintptr_t tcg_qemu_tb_exec(CPUArchState *env, uint8_t= *tb_ptr) tmp64 =3D tci_read_r64(regs, &tb_ptr); taddr =3D tci_read_ulong(regs, &tb_ptr); oi =3D tci_read_i(&tb_ptr); - switch (get_memop(oi) & (MO_BSWAP | MO_SIZE)) { + switch (get_tcgmemop(oi) & (MO_BSWAP | MO_SIZE)) { case MO_UB: qemu_st_b(tmp64); break; -- 1.8.3.1 From nobody Mon Feb 9 05:43:11 2026 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; 
From nobody Mon Feb 9 05:43:11 2026
Subject: [Qemu-devel] [PATCH v2 07/20] memory: Access MemoryRegion with MemOp
Date: Mon, 22 Jul 2019 15:45:09 +0000
Message-ID: <1563810308843.1378@bt.com>
Cc: peter.maydell@linaro.org, walling@linux.ibm.com, mst@redhat.com, palmer@sifive.com, mark.cave-ayland@ilande.co.uk, Alistair.Francis@wdc.com, arikalo@wavecomp.com, david@redhat.com, pasic@linux.ibm.com, borntraeger@de.ibm.com, rth@twiddle.net, atar4qemu@gmail.com, ehabkost@redhat.com, sw@weilnetz.de, qemu-s390x@nongnu.org, qemu-arm@nongnu.org, david@gibson.dropbear.id.au, qemu-riscv@nongnu.org, cohuck@redhat.com,
claudio.fontana@huawei.com, alex.williamson@redhat.com, qemu-ppc@nongnu.org, amarkovic@wavecomp.com, pbonzini@redhat.com, aurelien@aurel32.net
Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org
Sender: "Qemu-devel"
Content-Type: text/plain; charset="utf-8"

Replacing size with size+sign+endianness (MemOp) will enable us to collapse
the two byte swaps, adjust_endianness and handle_bswap, along the I/O path.

While interfaces are converted, callers will have existing unsigned size
coerced into a MemOp, and the callee will use this MemOp as an unsigned size.

Signed-off-by: Tony Nguyen
---
 include/exec/memop.h  | 4 ++++
 include/exec/memory.h | 9 +++++----
 memory.c              | 7 +++++--
 3 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/include/exec/memop.h b/include/exec/memop.h
index 43e99d7..73f1bf7 100644
--- a/include/exec/memop.h
+++ b/include/exec/memop.h
@@ -24,4 +24,8 @@ typedef enum MemOp {
     MO_BSWAP = 8,   /* Host reverse endian. */
 } MemOp;

+/* No-op while memory_region_dispatch_[read|write] is converted to MemOp */
+#define MEMOP_SIZE(op) (op)  /* MemOp to size. */
+#define SIZE_MEMOP(ul) (ul)  /* Size to MemOp. */
+
 #endif
diff --git a/include/exec/memory.h b/include/exec/memory.h
index bb0961d..30b1c58 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -19,6 +19,7 @@
 #include "exec/cpu-common.h"
 #include "exec/hwaddr.h"
 #include "exec/memattrs.h"
+#include "exec/memop.h"
 #include "exec/ramlist.h"
 #include "qemu/queue.h"
 #include "qemu/int128.h"
@@ -1731,13 +1732,13 @@ void mtree_info(bool flatview, bool dispatch_tree, bool owner);
  * @mr: #MemoryRegion to access
  * @addr: address within that region
  * @pval: pointer to uint64_t which the data is written to
- * @size: size of the access in bytes
+ * @op: encodes size of the access in bytes
  * @attrs: memory transaction attributes to use for the access
  */
 MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
                                         hwaddr addr,
                                         uint64_t *pval,
-                                        unsigned size,
+                                        MemOp op,
                                         MemTxAttrs attrs);
 /**
  * memory_region_dispatch_write: perform a write directly to the specified
@@ -1746,13 +1747,13 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
  * @mr: #MemoryRegion to access
  * @addr: address within that region
  * @data: data to write
- * @size: size of the access in bytes
+ * @op: encodes size of the access in bytes
  * @attrs: memory transaction attributes to use for the access
  */
 MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
                                          hwaddr addr,
                                          uint64_t data,
-                                         unsigned size,
+                                         MemOp op,
                                          MemTxAttrs attrs);

 /**
diff --git a/memory.c b/memory.c
index d4579bb..73cb345 100644
--- a/memory.c
+++ b/memory.c
@@ -1437,10 +1437,11 @@ static MemTxResult memory_region_dispatch_read1(MemoryRegion *mr,
 MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
                                         hwaddr addr,
                                         uint64_t *pval,
-                                        unsigned size,
+                                        MemOp op,
                                         MemTxAttrs attrs)
 {
     MemTxResult r;
+    unsigned size = MEMOP_SIZE(op);

     if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
         *pval = unassigned_mem_read(mr, addr, size);
@@ -1481,9 +1482,11 @@ static bool memory_region_dispatch_write_eventfds(MemoryRegion *mr,
 MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
                                          hwaddr addr,
                                          uint64_t data,
-                                         unsigned size,
+                                         MemOp op,
                                          MemTxAttrs attrs)
 {
+    unsigned size = MEMOP_SIZE(op);
+
     if (!memory_region_access_valid(mr, addr, size, true, attrs)) {
         unassigned_mem_write(mr, addr, data, size);
         return MEMTX_DECODE_ERROR;
-- 
1.8.3.1

From nobody Mon Feb 9 05:43:11 2026
Subject: [Qemu-devel] [PATCH v2 08/20] target/mips: Access MemoryRegion with MemOp
Date: Mon, 22 Jul 2019 15:45:56 +0000
Message-ID: <1563810356505.44472@bt.com>

Signed-off-by: Tony Nguyen
---
 target/mips/op_helper.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/mips/op_helper.c b/target/mips/op_helper.c
index 9e2e02f..dccb8df 100644
--- a/target/mips/op_helper.c
+++ b/target/mips/op_helper.c
@@ -24,6 +24,7 @@
 #include "exec/helper-proto.h"
 #include "exec/exec-all.h"
 #include "exec/cpu_ldst.h"
+#include "exec/memop.h"
 #include "sysemu/kvm.h"

 /*****************************************************************************/
@@ -4740,11 +4741,11 @@ void helper_cache(CPUMIPSState *env, target_ulong addr, uint32_t op)
     if (op == 9) {
         /* Index Store Tag */
         memory_region_dispatch_write(env->itc_tag, index, env->CP0_TagLo,
-                                     8, MEMTXATTRS_UNSPECIFIED);
+                                     SIZE_MEMOP(8), MEMTXATTRS_UNSPECIFIED);
     } else if (op == 5) {
         /* Index Load Tag */
         memory_region_dispatch_read(env->itc_tag, index, &env->CP0_TagLo,
-                                    8, MEMTXATTRS_UNSPECIFIED);
+                                    SIZE_MEMOP(8), MEMTXATTRS_UNSPECIFIED);
     }
 #endif
 }
-- 
1.8.3.1

From nobody Mon Feb 9 05:43:11 2026
Subject: [Qemu-devel] [PATCH v2 09/20] hw/s390x: Access MemoryRegion with MemOp
Date: Mon, 22 Jul 2019 15:46:36 +0000
Message-ID: <1563810395912.98369@bt.com>

Signed-off-by: Tony Nguyen
---
 hw/s390x/s390-pci-inst.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
index 0023514..c126bcc 100644
--- a/hw/s390x/s390-pci-inst.c
+++ b/hw/s390x/s390-pci-inst.c
@@ -15,6 +15,7 @@
 #include "cpu.h"
 #include "s390-pci-inst.h"
 #include "s390-pci-bus.h"
+#include "exec/memop.h"
 #include "exec/memory-internal.h"
 #include "qemu/error-report.h"
 #include "sysemu/hw_accel.h"
@@ -372,7 +373,7 @@ static MemTxResult zpci_read_bar(S390PCIBusDevice *pbdev, uint8_t pcias,
     mr = pbdev->pdev->io_regions[pcias].memory;
     mr = s390_get_subregion(mr, offset, len);
     offset -= mr->addr;
-    return memory_region_dispatch_read(mr, offset, data, len,
+    return memory_region_dispatch_read(mr, offset, data, SIZE_MEMOP(len),
                                        MEMTXATTRS_UNSPECIFIED);
 }

@@ -471,7 +472,7 @@ static MemTxResult zpci_write_bar(S390PCIBusDevice *pbdev, uint8_t pcias,
     mr = pbdev->pdev->io_regions[pcias].memory;
     mr = s390_get_subregion(mr, offset, len);
     offset -= mr->addr;
-    return memory_region_dispatch_write(mr, offset, data, len,
+    return memory_region_dispatch_write(mr, offset, data, SIZE_MEMOP(len),
                                         MEMTXATTRS_UNSPECIFIED);
 }

@@ -780,7 +781,8 @@ int pcistb_service_call(S390CPU *cpu, uint8_t r1, uint8_t r3, uint64_t gaddr,
     for (i = 0; i < len / 8; i++) {
         result = memory_region_dispatch_write(mr, offset + i * 8,
-                                              ldq_p(buffer + i * 8), 8,
+                                              ldq_p(buffer + i * 8),
+                                              SIZE_MEMOP(8),
                                               MEMTXATTRS_UNSPECIFIED);
         if (result != MEMTX_OK) {
             s390_program_interrupt(env, PGM_OPERAND, 6, ra);
-- 
1.8.3.1

From nobody Mon Feb 9 05:43:11 2026
Date: Mon, 22 Jul 2019 15:47:15 +0000
Message-ID: <1563810434810.27249@bt.com>
Subject: [Qemu-devel] [PATCH v2 10/20] hw/intc/armv7m_nvic: Access MemoryRegion with MemOp

Signed-off-by: Tony Nguyen
---
 hw/intc/armv7m_nvic.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index 9f8f0d3..25bb88a 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -18,6 +18,7 @@
 #include "hw/intc/armv7m_nvic.h"
 #include "target/arm/cpu.h"
 #include "exec/exec-all.h"
+#include "exec/memop.h"
 #include "qemu/log.h"
 #include "qemu/module.h"
 #include "trace.h"
@@ -2345,7 +2346,8 @@ static MemTxResult nvic_sysreg_ns_write(void *opaque, hwaddr addr,
     if (attrs.secure) {
         /* S accesses to the alias act like NS accesses to the real region */
         attrs.secure = 0;
-        return memory_region_dispatch_write(mr, addr, value, size, attrs);
+        return memory_region_dispatch_write(mr, addr, value, SIZE_MEMOP(size),
+                                            attrs);
     } else {
         /* NS attrs are RAZ/WI for privileged, and BusFault for user */
         if (attrs.user) {
@@ -2364,7 +2366,8 @@ static MemTxResult nvic_sysreg_ns_read(void *opaque, hwaddr addr,
     if (attrs.secure) {
         /* S accesses to the alias act like NS accesses to the real region */
         attrs.secure = 0;
-        return memory_region_dispatch_read(mr, addr, data, size, attrs);
+        return memory_region_dispatch_read(mr, addr, data, SIZE_MEMOP(size),
+                                           attrs);
     } else {
         /* NS attrs are RAZ/WI for privileged, and BusFault for user */
         if (attrs.user) {
@@ -2390,7 +2393,8 @@ static MemTxResult nvic_systick_write(void *opaque, hwaddr addr,
     /* Direct the access to the correct systick */
     mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->systick[attrs.secure]), 0);
-    return memory_region_dispatch_write(mr, addr, value, size, attrs);
+    return memory_region_dispatch_write(mr, addr, value, SIZE_MEMOP(size),
+                                        attrs);
 }

 static MemTxResult nvic_systick_read(void *opaque, hwaddr addr,
@@ -2402,7 +2406,7 @@ static MemTxResult nvic_systick_read(void *opaque, hwaddr addr,
     /* Direct the access to the correct systick */
     mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->systick[attrs.secure]), 0);
-    return memory_region_dispatch_read(mr, addr, data, size, attrs);
+    return memory_region_dispatch_read(mr, addr, data, SIZE_MEMOP(size), attrs);
 }

 static const MemoryRegionOps nvic_systick_ops = {
--
1.8.3.1

From nobody Mon Feb 9 05:43:11 2026
Date: Mon, 22 Jul 2019 15:48:00 +0000
Message-ID: <1563810480347.95681@bt.com>
Subject: [Qemu-devel] [PATCH v2 11/20] hw/virtio: Access MemoryRegion with MemOp

On 17/07/19 08:06, Paolo Bonzini wrote:
> My main concern is that MO_BE/MO_LE/MO_TE do not really apply to the
> memory.c paths. MO_BSWAP is never passed into the MemOp, even if target
> endianness != host endianness.
>
> Therefore, you could return MO_TE | MO_{8,16,32,64} from this function,
> and change memory_region_endianness_inverted to test
> HOST_WORDS_BIGENDIAN instead of TARGET_WORDS_BIGENDIAN. Then the two
> MO_BSWAPs (one from MO_TE, one from adjust_endianness because
> memory_region_endianness_inverted returns true) cancel out if the
> memory region's endianness is the same as the host's but different
> from the target's.
>
> Some care is needed in virtio_address_space_write and zpci_write_bar. I
> think the latter is okay, while the former could do something like this:
>
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index ce928f2429..61885f020c 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -541,16 +541,16 @@ void virtio_address_space_write(VirtIOPCIProxy *proxy, hwaddr addr,
>          val = pci_get_byte(buf);
>          break;
>      case 2:
> -        val = cpu_to_le16(pci_get_word(buf));
> +        val = pci_get_word(buf);
>          break;
>      case 4:
> -        val = cpu_to_le32(pci_get_long(buf));
> +        val = pci_get_long(buf);
>          break;
>      default:
>          /* As length is under guest control, handle illegal values. */
>          return;
>      }
> -    memory_region_dispatch_write(mr, addr, val, len, MEMTXATTRS_UNSPECIFIED);
> +    memory_region_dispatch_write(mr, addr, val, size_memop(len) & ~MO_BSWAP,
>                                  MEMTXATTRS_UNSPECIFIED);
>  }
>
>  static void

Sorry Paolo, I noted the need to take care in virtio_address_space_write
and zpci_write_bar but did not understand:

> Some care is needed in virtio_address_space_write and zpci_write_bar.

Is this advice for my v1 implementation, or for the
MO_TE | MO_{8,16,32,64} idea suggested in the paragraph before?

Signed-off-by: Tony Nguyen
---
 hw/virtio/virtio-pci.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index ce928f2..265f066 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -17,6 +17,7 @@
 #include "qemu/osdep.h"
+#include "exec/memop.h"
 #include "standard-headers/linux/virtio_pci.h"
 #include "hw/virtio/virtio.h"
 #include "hw/pci/pci.h"
@@ -550,7 +551,8 @@ void virtio_address_space_write(VirtIOPCIProxy *proxy, hwaddr addr,
         /* As length is under guest control, handle illegal values. */
         return;
     }
-    memory_region_dispatch_write(mr, addr, val, len, MEMTXATTRS_UNSPECIFIED);
+    memory_region_dispatch_write(mr, addr, val, SIZE_MEMOP(len),
+                                 MEMTXATTRS_UNSPECIFIED);
 }

 static void
@@ -573,7 +575,8 @@ virtio_address_space_read(VirtIOPCIProxy *proxy, hwaddr addr,
     /* Make sure caller aligned buf properly */
     assert(!(((uintptr_t)buf) & (len - 1)));

-    memory_region_dispatch_read(mr, addr, &val, len, MEMTXATTRS_UNSPECIFIED);
+    memory_region_dispatch_read(mr, addr, &val, SIZE_MEMOP(len),
+                                MEMTXATTRS_UNSPECIFIED);
     switch (len) {
     case 1:
         pci_set_byte(buf, val);
--
1.8.3.1

From nobody Mon Feb 9 05:43:11 2026
Date: Mon, 22 Jul 2019 15:48:39 +0000
Message-ID: <1563810519326.25905@bt.com>
Subject: [Qemu-devel] [PATCH v2 12/20] hw/vfio: Access MemoryRegion with MemOp

Signed-off-by: Tony Nguyen
---
 hw/vfio/pci-quirks.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index b35a640..3240afa 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1071,7 +1071,7 @@ static void vfio_rtl8168_quirk_address_write(void *opaque, hwaddr addr,
             /* Write to the proper guest MSI-X table instead */
             memory_region_dispatch_write(&vdev->pdev.msix_table_mmio,
-                                         offset, val, size,
+                                         offset, val, SIZE_MEMOP(size),
                                          MEMTXATTRS_UNSPECIFIED);
         }
         return; /* Do not write guest MSI-X data to hardware */
@@ -1102,7 +1102,8 @@ static uint64_t vfio_rtl8168_quirk_data_read(void *opaque,
     if (rtl->enabled && (vdev->pdev.cap_present & QEMU_PCI_CAP_MSIX)) {
         hwaddr offset = rtl->addr & 0xfff;
         memory_region_dispatch_read(&vdev->pdev.msix_table_mmio, offset,
-                                    &data, size, MEMTXATTRS_UNSPECIFIED);
+                                    &data, SIZE_MEMOP(size),
+                                    MEMTXATTRS_UNSPECIFIED);
         trace_vfio_quirk_rtl8168_msix_read(vdev->vbasedev.name, offset, data);
     }
--
1.8.3.1

From nobody Mon Feb 9 05:43:11 2026
Date: Mon, 22 Jul 2019 15:49:21 +0000
Message-ID: <1563810561191.16853@bt.com>
Subject: [Qemu-devel] [PATCH v2 13/20] exec: Access MemoryRegion with MemOp

Signed-off-by: Tony Nguyen
---
 exec.c            |  6 ++++--
 memory_ldst.inc.c | 18 +++++++++---------
 2 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/exec.c b/exec.c
index 3e78de3..5013864 100644
--- a/exec.c
+++ b/exec.c
@@ -3334,7 +3334,8 @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
             /* XXX: could force current_cpu to NULL to avoid potential bugs */
             val = ldn_p(buf, l);
-            result |= memory_region_dispatch_write(mr, addr1, val, l, attrs);
+            result |= memory_region_dispatch_write(mr, addr1, val,
+                                                   SIZE_MEMOP(l), attrs);
         } else {
             /* RAM case */
             ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
@@ -3395,7 +3396,8 @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
             /* I/O case */
             release_lock |= prepare_mmio_access(mr);
             l = memory_access_size(mr, l, addr1);
-            result |= memory_region_dispatch_read(mr, addr1, &val, l, attrs);
+            result |= memory_region_dispatch_read(mr, addr1, &val,
+                                                  SIZE_MEMOP(l), attrs);
             stn_p(buf, l, val);
         } else {
             /* RAM case */
diff --git a/memory_ldst.inc.c b/memory_ldst.inc.c
index acf865b..e073cf9 100644
--- a/memory_ldst.inc.c
+++ b/memory_ldst.inc.c
@@ -38,7 +38,7 @@ static inline uint32_t glue(address_space_ldl_internal, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);
         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 4, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(4), attrs);
 #if defined(TARGET_WORDS_BIGENDIAN)
         if (endian == DEVICE_LITTLE_ENDIAN) {
             val = bswap32(val);
@@ -114,7 +114,7 @@ static inline uint64_t glue(address_space_ldq_internal, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);
         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 8, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(8), attrs);
 #if defined(TARGET_WORDS_BIGENDIAN)
         if (endian == DEVICE_LITTLE_ENDIAN) {
             val = bswap64(val);
@@ -188,7 +188,7 @@ uint32_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);
         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 1, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(1), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -224,7 +224,7 @@ static inline uint32_t glue(address_space_lduw_internal, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);
         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 2, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(2), attrs);
 #if defined(TARGET_WORDS_BIGENDIAN)
         if (endian == DEVICE_LITTLE_ENDIAN) {
             val = bswap16(val);
@@ -300,7 +300,7 @@ void glue(address_space_stl_notdirty, SUFFIX)(ARG1_DECL,
     if (l < 4 || !memory_access_is_direct(mr, true)) {
         release_lock |= prepare_mmio_access(mr);
-        r = memory_region_dispatch_write(mr, addr1, val, 4, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(4), attrs);
     } else {
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
         stl_p(ptr, val);
@@ -346,7 +346,7 @@ static inline void glue(address_space_stl_internal, SUFFIX)(ARG1_DECL,
             val = bswap32(val);
         }
 #endif
-        r = memory_region_dispatch_write(mr, addr1, val, 4, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(4), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -408,7 +408,7 @@ void glue(address_space_stb, SUFFIX)(ARG1_DECL,
     mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (!memory_access_is_direct(mr, true)) {
         release_lock |= prepare_mmio_access(mr);
-        r = memory_region_dispatch_write(mr, addr1, val, 1, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(1), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -451,7 +451,7 @@ static inline void glue(address_space_stw_internal, SUFFIX)(ARG1_DECL,
             val = bswap16(val);
         }
 #endif
-        r = memory_region_dispatch_write(mr, addr1, val, 2, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(2), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -524,7 +524,7 @@ static void glue(address_space_stq_internal, SUFFIX)(ARG1_DECL,
             val = bswap64(val);
         }
 #endif
-        r = memory_region_dispatch_write(mr, addr1, val, 8, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(8), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
--
1.8.3.1

From nobody Mon Feb 9 05:43:11 2026
sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=bt.com ARC-Seal: i=1; a=rsa-sha256; t=1563810652; cv=none; d=zoho.com; s=zohoarc; b=FcNGP8dhgFIVvhgUAMDtgg4v3GzRKhesDtIw4LotVxvIJ6iLIzQJXnjtGiwGdOYp3UiOg1QDA689qXuTVliBvCP81CWaK6rmKXL004Jhd0BMCKb+OcYNsSQwDS23S0wnUt08SZUT2OuqrKAXUqLCpCQY0rOSGI7bmsBXjdagc5Q= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com; s=zohoarc; t=1563810652; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To:ARC-Authentication-Results; bh=m1yuYI2iZumNQJXhX3lOc6PwM+SheivCSrDBf/P+yrY=; b=n/zN7BeSMJyLOqpYkDZhBExRvDCNgeiDcv6yVHEoFj1C5yTIoDGF+nNgbKufTIuHitDpnsvLJIrkuL+vub+dLeyD+blhd6UZZF5mh9zrRFMSGjaU7PpPNRONw5CetsMlQJsp0bkrhiWwUPSt87x3OYwVZdFIgZuivxoUYKkaP/E= ARC-Authentication-Results: i=1; mx.zoho.com; spf=pass (zoho.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail header.from= (p=none dis=none) header.from= Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1563810652237974.3011707920155; Mon, 22 Jul 2019 08:50:52 -0700 (PDT) Received: from localhost ([::1]:35180 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.86_2) (envelope-from ) id 1hpaaR-0002J2-7E for importer@patchew.org; Mon, 22 Jul 2019 11:50:51 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:54367) by lists.gnu.org with esmtp (Exim 4.86_2) (envelope-from ) id 1hpaa0-0000xW-QN for qemu-devel@nongnu.org; Mon, 22 Jul 2019 11:50:25 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1hpaZz-0000lu-Ki for qemu-devel@nongnu.org; Mon, 22 Jul 2019 11:50:24 -0400 Received: from smtpe1.intersmtp.com ([213.121.35.78]:11597) by eggs.gnu.org with esmtps 
From nobody Mon Feb 9 05:43:11 2026
Subject: [Qemu-devel] [PATCH v2 14/20] cputlb: Access MemoryRegion with MemOp
Date: Mon, 22 Jul 2019 15:50:00 +0000
Message-ID: <1563810600056.6763@bt.com>
Content-Type: text/plain; charset="utf-8"

Signed-off-by: Tony Nguyen
---
 accel/tcg/cputlb.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 184fc54..97d7a64 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -906,8 +906,8 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_read(mr, mr_offset,
-                                    &val, size, iotlbentry->attrs);
+    r = memory_region_dispatch_read(mr, mr_offset, &val, SIZE_MEMOP(size),
+                                    iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
@@ -947,8 +947,8 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_write(mr, mr_offset,
-                                     val, size, iotlbentry->attrs);
+    r = memory_region_dispatch_write(mr, mr_offset, val, SIZE_MEMOP(size),
+                                     iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
-- 
1.8.3.1
From nobody Mon Feb 9 05:43:11 2026
Subject: [Qemu-devel] [PATCH v2 15/20] memory: Access MemoryRegion with MemOp semantics
Date: Mon, 22 Jul 2019 15:50:41 +0000
Message-ID: <1563810640556.47123@bt.com>
Content-Type: text/plain; charset="utf-8"

To convert the MemoryRegion access interfaces, the MEMOP_SIZE and
SIZE_MEMOP no-op stubs were introduced to change the syntax while keeping
the existing semantics. Now that the interfaces are converted, fill in the
stubs and use MemOp semantics.

Signed-off-by: Tony Nguyen
---
 include/exec/memop.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/exec/memop.h b/include/exec/memop.h
index 73f1bf7..dff6da2 100644
--- a/include/exec/memop.h
+++ b/include/exec/memop.h
@@ -24,8 +24,7 @@ typedef enum MemOp {
     MO_BSWAP = 8,   /* Host reverse endian. */
 } MemOp;
 
-/* No-op while memory_region_dispatch_[read|write] is converted to MemOp */
-#define MEMOP_SIZE(op)  (op)    /* MemOp to size. */
-#define SIZE_MEMOP(ul)  (ul)    /* Size to MemOp. */
+#define MEMOP_SIZE(op)  (1 << ((op) & MO_SIZE)) /* MemOp to size. */
+#define SIZE_MEMOP(ul)  (ctzl(ul))              /* Size to MemOp. */
 
 #endif
-- 
1.8.3.1
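An aside on the filled stubs: the mapping is just a log2 encoding, so for
power-of-two access sizes MEMOP_SIZE and SIZE_MEMOP are exact inverses. A
standalone sketch with illustrative names (the real definitions live in
include/exec/memop.h and use ctzl):

```c
#include <assert.h>

/* Log2-encoded sizes, mirroring QEMU's MemOp; MO_SIZE masks the size bits. */
enum { MO_8 = 0, MO_16 = 1, MO_32 = 2, MO_64 = 3, MO_SIZE = 3 };

/* MemOp to size in bytes: 1 << (op & MO_SIZE). */
static inline unsigned memop_size(unsigned op)
{
    return 1u << (op & MO_SIZE);
}

/* Size in bytes to MemOp: count trailing zeros, i.e. log2 of the size. */
static inline unsigned size_memop(unsigned long size)
{
    return (unsigned)__builtin_ctzl(size);
}
```

Because each direction is the inverse of the other, converting a size to a
MemOp and back yields the original size for 1, 2, 4 and 8 byte accesses.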
From nobody Mon Feb 9 05:43:11 2026
Subject: [Qemu-devel] [PATCH v2 16/20] memory: Single byte swap along the I/O path
Date: Mon, 22 Jul 2019 15:51:20 +0000
Message-ID: <1563810679138.99152@bt.com>
Content-Type: text/plain; charset="utf-8"

Now that MemOp has been pushed down into the memory API, we can collapse
the two byte swaps, adjust_endianness and handle_bswap, into the former.

Collapsing byte swaps along the I/O path enables additional endian
inversion logic, e.g. the SPARC64 Invert Endian TTE bit, with redundant
byte swaps cancelling out.
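An aside on why the collapse works: a byte swap is carried as the MO_BSWAP
bit on the op, and each inversion point along the path toggles that bit with
XOR, so two redundant swaps cancel instead of executing twice. A standalone
sketch with illustrative names (only the 16-bit case is shown):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors QEMU's MemOp encoding: low bits are the size, MO_BSWAP requests a swap. */
enum { MO_16 = 1, MO_SIZE = 3, MO_BSWAP = 8 };

static inline uint64_t bswap16_val(uint64_t v)
{
    return ((v & 0xff) << 8) | ((v >> 8) & 0xff);
}

/* One swap point on the path: swap only when the op carries MO_BSWAP. */
static uint64_t adjust(unsigned op, uint64_t val)
{
    if (op & MO_BSWAP) {
        switch (op & MO_SIZE) {
        case MO_16:
            return bswap16_val(val);
        default:
            return val;  /* other sizes elided in this sketch */
        }
    }
    return val;
}
```

Toggling MO_BSWAP twice (e.g. an inverted-endian mapping of a wrong-endian
device) leaves the op without the bit set, and the value passes through
untouched.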
Signed-off-by: Tony Nguyen
---
 accel/tcg/cputlb.c | 58 ++++++++++++++++++++++++++----------------------------
 memory.c           | 30 ++++++++++++++++------------
 2 files changed, 44 insertions(+), 44 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 97d7a64..6f5262c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -881,7 +881,7 @@ static void tlb_fill(CPUState *cpu, target_ulong addr, int size,
 
 static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                          int mmu_idx, target_ulong addr, uintptr_t retaddr,
-                         MMUAccessType access_type, int size)
+                         MMUAccessType access_type, MemOp op)
 {
     CPUState *cpu = env_cpu(env);
     hwaddr mr_offset;
@@ -906,14 +906,13 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_read(mr, mr_offset, &val, SIZE_MEMOP(size),
-                                    iotlbentry->attrs);
+    r = memory_region_dispatch_read(mr, mr_offset, &val, op, iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
             section->offset_within_region;
 
-        cpu_transaction_failed(cpu, physaddr, addr, size, access_type,
+        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op), access_type,
                                mmu_idx, iotlbentry->attrs, r, retaddr);
     }
     if (locked) {
@@ -925,7 +924,7 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
 
 static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                       int mmu_idx, uint64_t val, target_ulong addr,
-                      uintptr_t retaddr, int size)
+                      uintptr_t retaddr, MemOp op)
 {
     CPUState *cpu = env_cpu(env);
     hwaddr mr_offset;
@@ -947,15 +946,15 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_write(mr, mr_offset, val, SIZE_MEMOP(size),
-                                     iotlbentry->attrs);
+    r = memory_region_dispatch_write(mr, mr_offset, val, op, iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
             section->offset_within_region;
 
-        cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_STORE,
-                               mmu_idx, iotlbentry->attrs, r, retaddr);
+        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op),
+                               MMU_DATA_STORE, mmu_idx, iotlbentry->attrs, r,
+                               retaddr);
     }
     if (locked) {
         qemu_mutex_unlock_iothread();
@@ -1210,26 +1209,13 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
 #endif
 
 /*
- * Byte Swap Helper
+ * Byte Swap Checker
  *
- * This should all dead code away depending on the build host and
- * access type.
+ * Dead code should all go away depending on the build host and access type.
  */
-
-static inline uint64_t handle_bswap(uint64_t val, int size, bool big_endian)
+static inline bool need_bswap(bool big_endian)
 {
-    if ((big_endian && NEED_BE_BSWAP) || (!big_endian && NEED_LE_BSWAP)) {
-        switch (size) {
-        case 1: return val;
-        case 2: return bswap16(val);
-        case 4: return bswap32(val);
-        case 8: return bswap64(val);
-        default:
-            g_assert_not_reached();
-        }
-    } else {
-        return val;
-    }
+    return (big_endian && NEED_BE_BSWAP) || (!big_endian && NEED_LE_BSWAP);
 }
 
 /*
@@ -1260,6 +1246,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
     unsigned a_bits = get_alignment_bits(get_tcgmemop(oi));
     void *haddr;
     uint64_t res;
+    MemOp op;
 
     /* Handle CPU specific unaligned behaviour */
     if (addr & ((1 << a_bits) - 1)) {
@@ -1305,9 +1292,13 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
             }
         }
 
-        res = io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
-                       mmu_idx, addr, retaddr, access_type, size);
-        return handle_bswap(res, size, big_endian);
+        op = SIZE_MEMOP(size);
+        if (need_bswap(big_endian)) {
+            op ^= MO_BSWAP;
+        }
+
+        return io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
+                        mmu_idx, addr, retaddr, access_type, op);
     }
 
     /* Handle slow unaligned access (it spans two pages or IO). */
@@ -1508,6 +1499,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
     const size_t tlb_off = offsetof(CPUTLBEntry, addr_write);
     unsigned a_bits = get_alignment_bits(get_tcgmemop(oi));
     void *haddr;
+    MemOp op;
 
     /* Handle CPU specific unaligned behaviour */
     if (addr & ((1 << a_bits) - 1)) {
@@ -1553,9 +1545,13 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
             }
         }
 
+        op = SIZE_MEMOP(size);
+        if (need_bswap(big_endian)) {
+            op ^= MO_BSWAP;
+        }
+
         io_writex(env, &env_tlb(env)->d[mmu_idx].iotlb[index], mmu_idx,
-                  handle_bswap(val, size, big_endian),
-                  addr, retaddr, size);
+                  val, addr, retaddr, op);
         return;
     }
 
diff --git a/memory.c b/memory.c
index 73cb345..0aaa0a7 100644
--- a/memory.c
+++ b/memory.c
@@ -350,7 +350,7 @@ static bool memory_region_big_endian(MemoryRegion *mr)
 #endif
 }
 
-static bool memory_region_wrong_endianness(MemoryRegion *mr)
+static bool memory_region_endianness_inverted(MemoryRegion *mr)
 {
 #ifdef TARGET_WORDS_BIGENDIAN
     return mr->ops->endianness == DEVICE_LITTLE_ENDIAN;
@@ -359,23 +359,27 @@ static bool memory_region_wrong_endianness(MemoryRegion *mr)
 #endif
 }
 
-static void adjust_endianness(MemoryRegion *mr, uint64_t *data, unsigned size)
+static void adjust_endianness(MemoryRegion *mr, uint64_t *data, MemOp op)
 {
-    if (memory_region_wrong_endianness(mr)) {
-        switch (size) {
-        case 1:
+    if (memory_region_endianness_inverted(mr)) {
+        op ^= MO_BSWAP;
+    }
+
+    if (op & MO_BSWAP) {
+        switch (op & MO_SIZE) {
+        case MO_8:
             break;
-        case 2:
+        case MO_16:
             *data = bswap16(*data);
             break;
-        case 4:
+        case MO_32:
             *data = bswap32(*data);
             break;
-        case 8:
+        case MO_64:
             *data = bswap64(*data);
             break;
         default:
-            abort();
+            g_assert_not_reached();
         }
     }
 }
@@ -1449,7 +1453,7 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
     }
 
     r = memory_region_dispatch_read1(mr, addr, pval, size, attrs);
-    adjust_endianness(mr, pval, size);
+    adjust_endianness(mr, pval, op);
     return r;
 }
@@ -1492,7 +1496,7 @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
         return MEMTX_DECODE_ERROR;
     }
 
-    adjust_endianness(mr, &data, size);
+    adjust_endianness(mr, &data, op);
 
     if ((!kvm_eventfds_enabled()) &&
         memory_region_dispatch_write_eventfds(mr, addr, data, size, attrs)) {
@@ -2338,7 +2342,7 @@ void memory_region_add_eventfd(MemoryRegion *mr,
     }
     if (size) {
-        adjust_endianness(mr, &mrfd.data, size);
+        adjust_endianness(mr, &mrfd.data, SIZE_MEMOP(size));
     }
     memory_region_transaction_begin();
     for (i = 0; i < mr->ioeventfd_nb; ++i) {
@@ -2373,7 +2377,7 @@ void memory_region_del_eventfd(MemoryRegion *mr,
     unsigned i;
 
     if (size) {
-        adjust_endianness(mr, &mrfd.data, size);
+        adjust_endianness(mr, &mrfd.data, SIZE_MEMOP(size));
     }
     memory_region_transaction_begin();
     for (i = 0; i < mr->ioeventfd_nb; ++i) {
-- 
1.8.3.1
From nobody Mon Feb 9 05:43:11 2026
Subject: [Qemu-devel] [PATCH v2 17/20] cpu: TLB_FLAGS_MASK bit to force memory slow path
Date: Mon, 22 Jul 2019 15:51:56 +0000
Message-ID: <1563810716254.18886@bt.com>
Content-Type: text/plain; charset="utf-8"

The fast path is taken when TLB_FLAGS_MASK is all zero.

TLB_FORCE_SLOW is simply a TLB_FLAGS_MASK bit that forces the slow path;
it has no other side effects.
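An aside on why adding a flag bit is cheap: the fast-path test stays a
single mask-and-compare, so one more bit in the mask costs nothing on the
hot path. A standalone sketch with illustrative bit positions (QEMU derives
the real ones from TARGET_PAGE_BITS):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative flag bits living in the low bits of a TLB entry address. */
enum {
    TLB_INVALID    = 1 << 0,
    TLB_NOTDIRTY   = 1 << 1,
    TLB_MMIO       = 1 << 2,
    TLB_RECHECK    = 1 << 3,
    TLB_FORCE_SLOW = 1 << 4,   /* the bit this patch adds */
};
#define TLB_FLAGS_MASK \
    (TLB_INVALID | TLB_NOTDIRTY | TLB_MMIO | TLB_RECHECK | TLB_FORCE_SLOW)

/* The fast path is taken only when no flag bit is set in the entry. */
static bool take_fast_path(unsigned long tlb_addr)
{
    return (tlb_addr & TLB_FLAGS_MASK) == 0;
}
```

Setting TLB_FORCE_SLOW on an entry therefore diverts every access through
the slow path without changing any other behaviour.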
Signed-off-by: Tony Nguyen
---
 include/exec/cpu-all.h | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 536ea58..e496f99 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -331,12 +331,18 @@ CPUArchState *cpu_copy(CPUArchState *env);
 #define TLB_MMIO            (1 << (TARGET_PAGE_BITS - 3))
 /* Set if TLB entry must have MMU lookup repeated for every access */
 #define TLB_RECHECK         (1 << (TARGET_PAGE_BITS - 4))
+/* Set if TLB entry must take the slow path. */
+#define TLB_FORCE_SLOW      (1 << (TARGET_PAGE_BITS - 5))
 
 /* Use this mask to check interception with an alignment mask
  * in a TCG backend.
  */
-#define TLB_FLAGS_MASK  (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
-                         | TLB_RECHECK)
+#define TLB_FLAGS_MASK \
+    (TLB_INVALID_MASK \
+     | TLB_NOTDIRTY \
+     | TLB_MMIO \
+     | TLB_RECHECK \
+     | TLB_FORCE_SLOW)
 
 /**
  * tlb_hit_page: return true if page aligned @addr is a hit against the
-- 
1.8.3.1
22 Jul 2019 16:52:35 +0100 Received: from tpw09926dag18e.domain1.systemhost.net ([fe80::a946:6348:ccf4:fa6c]) by tpw09926dag18e.domain1.systemhost.net ([fe80::a946:6348:ccf4:fa6c%12]) with mapi id 15.00.1395.000; Mon, 22 Jul 2019 16:52:35 +0100 From: To: Thread-Topic: [Qemu-devel] [PATCH v2 18/20] cputlb: Byte swap memory transaction attribute Thread-Index: AQHVQKV7c9bUDlL56U+J3UKnlx1+xw== Date: Mon, 22 Jul 2019 15:52:35 +0000 Message-ID: <1563810755207.16357@bt.com> References: In-Reply-To: Accept-Language: en-AU, en-GB, en-US Content-Language: en-AU X-MS-Has-Attach: X-MS-TNEF-Correlator: x-ms-exchange-transport-fromentityheader: Hosted x-originating-ip: [10.187.101.37] MIME-Version: 1.0 X-detected-operating-system: by eggs.gnu.org: Genre and OS details not recognized. X-Received-From: 62.239.224.236 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.23 Subject: [Qemu-devel] [PATCH v2 18/20] cputlb: Byte swap memory transaction attribute X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: peter.maydell@linaro.org, walling@linux.ibm.com, mst@redhat.com, palmer@sifive.com, mark.cave-ayland@ilande.co.uk, Alistair.Francis@wdc.com, arikalo@wavecomp.com, david@redhat.com, pasic@linux.ibm.com, borntraeger@de.ibm.com, rth@twiddle.net, atar4qemu@gmail.com, ehabkost@redhat.com, sw@weilnetz.de, qemu-s390x@nongnu.org, qemu-arm@nongnu.org, david@gibson.dropbear.id.au, qemu-riscv@nongnu.org, cohuck@redhat.com, claudio.fontana@huawei.com, alex.williamson@redhat.com, qemu-ppc@nongnu.org, amarkovic@wavecomp.com, pbonzini@redhat.com, aurelien@aurel32.net Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" Content-Type: text/plain; charset="utf-8" Notice new attribute, byte swap, and force the transaction through the memory slow path. 
Required by architectures that can invert endianness of memory transaction, e.g. SPARC64 has the Invert Endian TTE bit. Signed-off-by: Tony Nguyen --- accel/tcg/cputlb.c | 11 +++++++++++ include/exec/memattrs.h | 2 ++ 2 files changed, 13 insertions(+) diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 6f5262c..619787b 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -738,6 +738,9 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulon= g vaddr, */ address |=3D TLB_RECHECK; } + if (attrs.byte_swap) { + address |=3D TLB_FORCE_SLOW; + } if (!memory_region_is_ram(section->mr) && !memory_region_is_romd(section->mr)) { /* IO memory case */ @@ -891,6 +894,10 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEn= try *iotlbentry, bool locked =3D false; MemTxResult r; + if (iotlbentry->attrs.byte_swap) { + op ^=3D MO_BSWAP; + } + section =3D iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs); mr =3D section->mr; mr_offset =3D (iotlbentry->addr & TARGET_PAGE_MASK) + addr; @@ -933,6 +940,10 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry= *iotlbentry, bool locked =3D false; MemTxResult r; + if (iotlbentry->attrs.byte_swap) { + op ^=3D MO_BSWAP; + } + section =3D iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs); mr =3D section->mr; mr_offset =3D (iotlbentry->addr & TARGET_PAGE_MASK) + addr; diff --git a/include/exec/memattrs.h b/include/exec/memattrs.h index d4a3477..a0644eb 100644 --- a/include/exec/memattrs.h +++ b/include/exec/memattrs.h @@ -37,6 +37,8 @@ typedef struct MemTxAttrs { unsigned int user:1; /* Requester ID (for MSI for example) */ unsigned int requester_id:16; + /* SPARC64: TTE invert endianness */ + unsigned int byte_swap:1; /* * The following are target-specific page-table bits. These are not * related to actual memory transactions at all. 
     * However, this structure
--
1.8.3.1

Subject: [Qemu-devel] [PATCH v2 19/20] target/sparc: Add TLB entry with attributes
Date: Mon, 22 Jul 2019 15:53:13 +0000
Message-ID: <1563810792776.27767@bt.com>

Append MemTxAttrs to interfaces so we can pass along the upcoming
Invert Endian TTE bit on SPARC64.
Signed-off-by: Tony Nguyen
---
 target/sparc/mmu_helper.c | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/target/sparc/mmu_helper.c b/target/sparc/mmu_helper.c
index cbd1e91..826e14b 100644
--- a/target/sparc/mmu_helper.c
+++ b/target/sparc/mmu_helper.c
@@ -88,7 +88,7 @@ static const int perm_table[2][8] = {
 };

 static int get_physical_address(CPUSPARCState *env, hwaddr *physical,
-                                int *prot, int *access_index,
+                                int *prot, int *access_index, MemTxAttrs *attrs,
                                 target_ulong address, int rw, int mmu_idx,
                                 target_ulong *page_size)
 {
@@ -219,6 +219,7 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     target_ulong vaddr;
     target_ulong page_size;
     int error_code = 0, prot, access_index;
+    MemTxAttrs attrs = {};

     /*
      * TODO: If we ever need tlb_vaddr_to_host for this target,
@@ -229,7 +230,7 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     assert(!probe);

     address &= TARGET_PAGE_MASK;
-    error_code = get_physical_address(env, &paddr, &prot, &access_index,
+    error_code = get_physical_address(env, &paddr, &prot, &access_index, &attrs,
                                       address, access_type,
                                       mmu_idx, &page_size);
     vaddr = address;
@@ -490,8 +491,8 @@ static inline int ultrasparc_tag_match(SparcTLBEntry *tlb,
     return 0;
 }

-static int get_physical_address_data(CPUSPARCState *env,
-                                     hwaddr *physical, int *prot,
+static int get_physical_address_data(CPUSPARCState *env, hwaddr *physical,
+                                     int *prot, MemTxAttrs *attrs,
                                      target_ulong address, int rw, int mmu_idx)
 {
     CPUState *cs = env_cpu(env);
@@ -608,8 +609,8 @@ static int get_physical_address_data(CPUSPARCState *env,
     return 1;
 }

-static int get_physical_address_code(CPUSPARCState *env,
-                                     hwaddr *physical, int *prot,
+static int get_physical_address_code(CPUSPARCState *env, hwaddr *physical,
+                                     int *prot, MemTxAttrs *attrs,
                                      target_ulong address, int mmu_idx)
 {
     CPUState *cs = env_cpu(env);
@@ -686,7 +687,7 @@ static int get_physical_address_code(CPUSPARCState *env,
 }

 static int get_physical_address(CPUSPARCState *env, hwaddr *physical,
-                                int *prot, int *access_index,
+                                int *prot, int *access_index, MemTxAttrs *attrs,
                                 target_ulong address, int rw, int mmu_idx,
                                 target_ulong *page_size)
 {
@@ -716,11 +717,11 @@ static int get_physical_address(CPUSPARCState *env, hwaddr *physical,
     }

     if (rw == 2) {
-        return get_physical_address_code(env, physical, prot, address,
+        return get_physical_address_code(env, physical, prot, attrs, address,
                                          mmu_idx);
     } else {
-        return get_physical_address_data(env, physical, prot, address, rw,
-                                         mmu_idx);
+        return get_physical_address_data(env, physical, prot, attrs, address,
+                                         rw, mmu_idx);
     }
 }

@@ -734,10 +735,11 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     target_ulong vaddr;
     hwaddr paddr;
     target_ulong page_size;
+    MemTxAttrs attrs = {};
     int error_code = 0, prot, access_index;

     address &= TARGET_PAGE_MASK;
-    error_code = get_physical_address(env, &paddr, &prot, &access_index,
+    error_code = get_physical_address(env, &paddr, &prot, &access_index, &attrs,
                                       address, access_type,
                                       mmu_idx, &page_size);

     if (likely(error_code == 0)) {
@@ -747,7 +749,8 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                                    env->dmmu.mmu_primary_context,
                                    env->dmmu.mmu_secondary_context);

-        tlb_set_page(cs, vaddr, paddr, prot, mmu_idx, page_size);
+        tlb_set_page_with_attrs(cs, vaddr, paddr, attrs, prot, mmu_idx,
+                                page_size);
         return true;
     }
     if (probe) {
@@ -849,9 +852,10 @@ static int cpu_sparc_get_phys_page(CPUSPARCState *env, hwaddr *phys,
 {
     target_ulong page_size;
     int prot, access_index;
+    MemTxAttrs attrs = {};

-    return get_physical_address(env, phys, &prot, &access_index, addr, rw,
-                                mmu_idx, &page_size);
+    return get_physical_address(env, phys, &prot, &access_index, &attrs, addr,
+                                rw, mmu_idx, &page_size);
 }

 #if defined(TARGET_SPARC64)
--
1.8.3.1

Subject: [Qemu-devel] [PATCH v2 20/20] target/sparc: sun4u Invert Endian TTE bit
Date: Mon, 22 Jul 2019 15:54:00 +0000
Message-ID: <1563810840218.14603@bt.com>

This bit configures the endianness of PCI MMIO devices. It is used by the
Solaris and OpenBSD sunhme drivers.

Tested working on OpenBSD. Unfortunately, Solaris 10 had an unrelated
keyboard issue blocking testing...
another inch towards Solaris 10 on SPARC64 =)

Signed-off-by: Tony Nguyen
---
 target/sparc/cpu.h        | 2 ++
 target/sparc/mmu_helper.c | 8 +++++++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/target/sparc/cpu.h b/target/sparc/cpu.h
index 8ed2250..77e8e07 100644
--- a/target/sparc/cpu.h
+++ b/target/sparc/cpu.h
@@ -277,6 +277,7 @@ enum {

 #define TTE_VALID_BIT       (1ULL << 63)
 #define TTE_NFO_BIT         (1ULL << 60)
+#define TTE_IE_BIT          (1ULL << 59)
 #define TTE_USED_BIT        (1ULL << 41)
 #define TTE_LOCKED_BIT      (1ULL << 6)
 #define TTE_SIDEEFFECT_BIT  (1ULL << 3)
@@ -293,6 +294,7 @@ enum {

 #define TTE_IS_VALID(tte)   ((tte) & TTE_VALID_BIT)
 #define TTE_IS_NFO(tte)     ((tte) & TTE_NFO_BIT)
+#define TTE_IS_IE(tte)      ((tte) & TTE_IE_BIT)
 #define TTE_IS_USED(tte)    ((tte) & TTE_USED_BIT)
 #define TTE_IS_LOCKED(tte)  ((tte) & TTE_LOCKED_BIT)
 #define TTE_IS_SIDEEFFECT(tte) ((tte) & TTE_SIDEEFFECT_BIT)
diff --git a/target/sparc/mmu_helper.c b/target/sparc/mmu_helper.c
index 826e14b..77dc86a 100644
--- a/target/sparc/mmu_helper.c
+++ b/target/sparc/mmu_helper.c
@@ -537,6 +537,10 @@ static int get_physical_address_data(CPUSPARCState *env, hwaddr *physical,
         if (ultrasparc_tag_match(&env->dtlb[i], address, context, physical)) {
             int do_fault = 0;

+            if (TTE_IS_IE(env->dtlb[i].tte)) {
+                attrs->byte_swap = true;
+            }
+
             /* access ok? */
             /* multiple bits in SFSR.FT may be set on TT_DFAULT */
             if (TTE_IS_PRIV(env->dtlb[i].tte) && is_user) {
@@ -792,7 +796,7 @@ void dump_mmu(CPUSPARCState *env)
             }
             if (TTE_IS_VALID(env->dtlb[i].tte)) {
                 qemu_printf("[%02u] VA: %" PRIx64 ", PA: %llx"
-                            ", %s, %s, %s, %s, ctx %" PRId64 " %s\n",
+                            ", %s, %s, %s, %s, ie %s, ctx %" PRId64 " %s\n",
                             i,
                             env->dtlb[i].tag & (uint64_t)~0x1fffULL,
                             TTE_PA(env->dtlb[i].tte),
@@ -801,6 +805,8 @@ void dump_mmu(CPUSPARCState *env)
                             TTE_IS_W_OK(env->dtlb[i].tte) ? "RW" : "RO",
                             TTE_IS_LOCKED(env->dtlb[i].tte) ?
                             "locked" : "unlocked",
+                            TTE_IS_IE(env->dtlb[i].tte) ?
+                            "yes" : "no",
                             env->dtlb[i].tag & (uint64_t)0x1fffULL,
                             TTE_IS_GLOBAL(env->dtlb[i].tte) ?
                             "global" : "local");
--
1.8.3.1