From: Philippe Mathieu-Daudé
To: qemu-devel@nongnu.org
Cc: Huacai Chen, Aurelien Jarno, Aleksandar Rikalo, Thomas Huth,
 Stefan Weil, qemu-riscv@nongnu.org, qemu-arm@nongnu.org, Jiaxun Yang,
 qemu-s390x@nongnu.org, Philippe Mathieu-Daudé, Cornelia Huck,
 Richard Henderson, Andrzej Zaborowski, Alistair Francis, Palmer Dabbelt
Subject: [PATCH 1/5] tcg/arm: Hoist common argument loads in tcg_out_op()
Date: Mon, 11 Jan 2021 16:01:10 +0100
Message-Id: <20210111150114.1415930-2-f4bug@amsat.org>
In-Reply-To: <20210111150114.1415930-1-f4bug@amsat.org>
References: <20210111150114.1415930-1-f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé
---
 tcg/arm/tcg-target.c.inc | 173 +++++++++++++++++++--------------------
 1 file changed, 86 insertions(+), 87 deletions(-)

diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc
index 0fd11264544..94cc12a0fc6 100644
--- a/tcg/arm/tcg-target.c.inc
+++ b/tcg/arm/tcg-target.c.inc
@@ -1747,15 +1747,24 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
 
 static void tcg_out_epilogue(TCGContext *s);
 
-static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
-                const TCGArg *args, const int *const_args)
+static void tcg_out_op(TCGContext *s, TCGOpcode opc,
+                       const TCGArg args[TCG_MAX_OP_ARGS],
+                       const int const_args[TCG_MAX_OP_ARGS])
 {
     TCGArg a0, a1, a2, a3, a4, a5;
     int c;
 
+    /* Hoist the loads of the most common arguments. */
+    a0 = args[0];
+    a1 = args[1];
+    a2 = args[2];
+    a3 = args[3];
+    a4 = args[4];
+    a5 = args[5];
+
     switch (opc) {
     case INDEX_op_exit_tb:
-        tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R0, args[0]);
+        tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R0, a0);
         tcg_out_epilogue(s);
         break;
     case INDEX_op_goto_tb:
@@ -1765,7 +1774,7 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
             TCGReg base = TCG_REG_PC;
 
             tcg_debug_assert(s->tb_jmp_insn_offset == 0);
-            ptr = (intptr_t)tcg_splitwx_to_rx(s->tb_jmp_target_addr + args[0]);
+            ptr = (intptr_t)tcg_splitwx_to_rx(s->tb_jmp_target_addr + a0);
             dif = tcg_pcrel_diff(s, (void *)ptr) - 8;
             dil = sextract32(dif, 0, 12);
             if (dif != dil) {
@@ -1778,39 +1787,39 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
                 tcg_out_movi32(s, COND_AL, base, ptr - dil);
             }
             tcg_out_ld32_12(s, COND_AL, TCG_REG_PC, base, dil);
-            set_jmp_reset_offset(s, args[0]);
+            set_jmp_reset_offset(s, a0);
         }
         break;
     case INDEX_op_goto_ptr:
-        tcg_out_bx(s, COND_AL, args[0]);
+        tcg_out_bx(s, COND_AL, a0);
        break;
     case INDEX_op_br:
-        tcg_out_goto_label(s, COND_AL, arg_label(args[0]));
+        tcg_out_goto_label(s, COND_AL, arg_label(a0));
         break;
 
     case INDEX_op_ld8u_i32:
-        tcg_out_ld8u(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_ld8u(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_ld8s_i32:
-        tcg_out_ld8s(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_ld8s(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_ld16u_i32:
-        tcg_out_ld16u(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_ld16u(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_ld16s_i32:
-        tcg_out_ld16s(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_ld16s(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_ld_i32:
-        tcg_out_ld32u(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_ld32u(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_st8_i32:
-        tcg_out_st8(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_st8(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_st16_i32:
-        tcg_out_st16(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_st16(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_st_i32:
-        tcg_out_st32(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_st32(s, COND_AL, a0, a1, a2);
         break;
 
     case INDEX_op_movcond_i32:
@@ -1818,34 +1827,33 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
          * so we only need to do "if condition passed, move v1 to dest".
          */
         tcg_out_dat_rIN(s, COND_AL, ARITH_CMP, ARITH_CMN, 0,
-                        args[1], args[2], const_args[2]);
-        tcg_out_dat_rIK(s, tcg_cond_to_arm_cond[args[5]], ARITH_MOV,
-                        ARITH_MVN, args[0], 0, args[3], const_args[3]);
+                        a1, a2, const_args[2]);
+        tcg_out_dat_rIK(s, tcg_cond_to_arm_cond[a5], ARITH_MOV,
+                        ARITH_MVN, a0, 0, a3, const_args[3]);
         break;
     case INDEX_op_add_i32:
         tcg_out_dat_rIN(s, COND_AL, ARITH_ADD, ARITH_SUB,
-                        args[0], args[1], args[2], const_args[2]);
+                        a0, a1, a2, const_args[2]);
         break;
     case INDEX_op_sub_i32:
         if (const_args[1]) {
             if (const_args[2]) {
-                tcg_out_movi32(s, COND_AL, args[0], args[1] - args[2]);
+                tcg_out_movi32(s, COND_AL, a0, a1 - a2);
             } else {
-                tcg_out_dat_rI(s, COND_AL, ARITH_RSB,
-                               args[0], args[2], args[1], 1);
+                tcg_out_dat_rI(s, COND_AL, ARITH_RSB, a0, a2, a1, 1);
             }
         } else {
             tcg_out_dat_rIN(s, COND_AL, ARITH_SUB, ARITH_ADD,
-                            args[0], args[1], args[2], const_args[2]);
+                            a0, a1, a2, const_args[2]);
         }
         break;
     case INDEX_op_and_i32:
         tcg_out_dat_rIK(s, COND_AL, ARITH_AND, ARITH_BIC,
-                        args[0], args[1], args[2], const_args[2]);
+                        a0, a1, a2, const_args[2]);
         break;
     case INDEX_op_andc_i32:
         tcg_out_dat_rIK(s, COND_AL, ARITH_BIC, ARITH_AND,
-                        args[0], args[1], args[2], const_args[2]);
+                        a0, a1, a2, const_args[2]);
         break;
     case INDEX_op_or_i32:
         c = ARITH_ORR;
@@ -1854,11 +1862,9 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
         c = ARITH_EOR;
         /* Fall through. */
     gen_arith:
-        tcg_out_dat_rI(s, COND_AL, c, args[0], args[1], args[2], const_args[2]);
+        tcg_out_dat_rI(s, COND_AL, c, a0, a1, a2, const_args[2]);
         break;
     case INDEX_op_add2_i32:
-        a0 = args[0], a1 = args[1], a2 = args[2];
-        a3 = args[3], a4 = args[4], a5 = args[5];
         if (a0 == a3 || (a0 == a5 && !const_args[5])) {
             a0 = TCG_REG_TMP;
         }
@@ -1866,11 +1872,9 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
                         a0, a2, a4, const_args[4]);
         tcg_out_dat_rIK(s, COND_AL, ARITH_ADC, ARITH_SBC,
                         a1, a3, a5, const_args[5]);
-        tcg_out_mov_reg(s, COND_AL, args[0], a0);
+        tcg_out_mov_reg(s, COND_AL, a0, a0);
         break;
     case INDEX_op_sub2_i32:
-        a0 = args[0], a1 = args[1], a2 = args[2];
-        a3 = args[3], a4 = args[4], a5 = args[5];
         if ((a0 == a3 && !const_args[3]) || (a0 == a5 && !const_args[5])) {
             a0 = TCG_REG_TMP;
         }
@@ -1894,68 +1898,64 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
             tcg_out_dat_rIK(s, COND_AL, ARITH_SBC, ARITH_ADC,
                             a1, a3, a5, const_args[5]);
         }
-        tcg_out_mov_reg(s, COND_AL, args[0], a0);
+        tcg_out_mov_reg(s, COND_AL, a0, a0);
         break;
     case INDEX_op_neg_i32:
-        tcg_out_dat_imm(s, COND_AL, ARITH_RSB, args[0], args[1], 0);
+        tcg_out_dat_imm(s, COND_AL, ARITH_RSB, a0, a1, 0);
         break;
     case INDEX_op_not_i32:
-        tcg_out_dat_reg(s, COND_AL,
-                        ARITH_MVN, args[0], 0, args[1], SHIFT_IMM_LSL(0));
+        tcg_out_dat_reg(s, COND_AL, ARITH_MVN, a0, 0, a1, SHIFT_IMM_LSL(0));
         break;
     case INDEX_op_mul_i32:
-        tcg_out_mul32(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_mul32(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_mulu2_i32:
-        tcg_out_umull32(s, COND_AL, args[0], args[1], args[2], args[3]);
+        tcg_out_umull32(s, COND_AL, a0, a1, a2, a3);
         break;
     case INDEX_op_muls2_i32:
-        tcg_out_smull32(s, COND_AL, args[0], args[1], args[2], args[3]);
+        tcg_out_smull32(s, COND_AL, a0, a1, a2, a3);
         break;
-    /* XXX: Perhaps args[2] & 0x1f is wrong */
+    /* XXX: Perhaps a2 & 0x1f is wrong */
     case INDEX_op_shl_i32:
         c = const_args[2] ?
-                SHIFT_IMM_LSL(args[2] & 0x1f) : SHIFT_REG_LSL(args[2]);
+                SHIFT_IMM_LSL(a2 & 0x1f) : SHIFT_REG_LSL(a2);
         goto gen_shift32;
     case INDEX_op_shr_i32:
-        c = const_args[2] ? (args[2] & 0x1f) ? SHIFT_IMM_LSR(args[2] & 0x1f) :
-                SHIFT_IMM_LSL(0) : SHIFT_REG_LSR(args[2]);
+        c = const_args[2] ? (a2 & 0x1f) ? SHIFT_IMM_LSR(a2 & 0x1f) :
+                SHIFT_IMM_LSL(0) : SHIFT_REG_LSR(a2);
         goto gen_shift32;
     case INDEX_op_sar_i32:
-        c = const_args[2] ? (args[2] & 0x1f) ? SHIFT_IMM_ASR(args[2] & 0x1f) :
-                SHIFT_IMM_LSL(0) : SHIFT_REG_ASR(args[2]);
+        c = const_args[2] ? (a2 & 0x1f) ? SHIFT_IMM_ASR(a2 & 0x1f) :
+                SHIFT_IMM_LSL(0) : SHIFT_REG_ASR(a2);
         goto gen_shift32;
     case INDEX_op_rotr_i32:
-        c = const_args[2] ? (args[2] & 0x1f) ? SHIFT_IMM_ROR(args[2] & 0x1f) :
-                SHIFT_IMM_LSL(0) : SHIFT_REG_ROR(args[2]);
+        c = const_args[2] ? (a2 & 0x1f) ? SHIFT_IMM_ROR(a2 & 0x1f) :
+                SHIFT_IMM_LSL(0) : SHIFT_REG_ROR(a2);
         /* Fall through. */
     gen_shift32:
-        tcg_out_dat_reg(s, COND_AL, ARITH_MOV, args[0], 0, args[1], c);
+        tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, a1, c);
         break;
 
     case INDEX_op_rotl_i32:
         if (const_args[2]) {
-            tcg_out_dat_reg(s, COND_AL, ARITH_MOV, args[0], 0, args[1],
-                            ((0x20 - args[2]) & 0x1f) ?
-                            SHIFT_IMM_ROR((0x20 - args[2]) & 0x1f) :
+            tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, a1,
+                            ((0x20 - a2) & 0x1f) ?
+                            SHIFT_IMM_ROR((0x20 - a2) & 0x1f) :
                             SHIFT_IMM_LSL(0));
         } else {
-            tcg_out_dat_imm(s, COND_AL, ARITH_RSB, TCG_REG_TMP, args[2], 0x20);
-            tcg_out_dat_reg(s, COND_AL, ARITH_MOV, args[0], 0, args[1],
+            tcg_out_dat_imm(s, COND_AL, ARITH_RSB, TCG_REG_TMP, a2, 0x20);
+            tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, a1,
                             SHIFT_REG_ROR(TCG_REG_TMP));
         }
         break;
 
     case INDEX_op_ctz_i32:
-        tcg_out_dat_reg(s, COND_AL, INSN_RBIT, TCG_REG_TMP, 0, args[1], 0);
+        tcg_out_dat_reg(s, COND_AL, INSN_RBIT, TCG_REG_TMP, 0, a1, 0);
         a1 = TCG_REG_TMP;
         goto do_clz;
 
     case INDEX_op_clz_i32:
-        a1 = args[1];
     do_clz:
-        a0 = args[0];
-        a2 = args[2];
         c = const_args[2];
         if (c && a2 == 32) {
             tcg_out_dat_reg(s, COND_AL, INSN_CLZ, a0, 0, a1, 0);
@@ -1970,28 +1970,28 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
 
     case INDEX_op_brcond_i32:
         tcg_out_dat_rIN(s, COND_AL, ARITH_CMP, ARITH_CMN, 0,
-                        args[0], args[1], const_args[1]);
-        tcg_out_goto_label(s, tcg_cond_to_arm_cond[args[2]],
-                           arg_label(args[3]));
+                        a0, a1, const_args[1]);
+        tcg_out_goto_label(s, tcg_cond_to_arm_cond[a2],
+                           arg_label(a3));
         break;
     case INDEX_op_setcond_i32:
         tcg_out_dat_rIN(s, COND_AL, ARITH_CMP, ARITH_CMN, 0,
-                        args[1], args[2], const_args[2]);
-        tcg_out_dat_imm(s, tcg_cond_to_arm_cond[args[3]],
-                        ARITH_MOV, args[0], 0, 1);
-        tcg_out_dat_imm(s, tcg_cond_to_arm_cond[tcg_invert_cond(args[3])],
-                        ARITH_MOV, args[0], 0, 0);
+                        a1, a2, const_args[2]);
+        tcg_out_dat_imm(s, tcg_cond_to_arm_cond[a3],
+                        ARITH_MOV, a0, 0, 1);
+        tcg_out_dat_imm(s, tcg_cond_to_arm_cond[tcg_invert_cond(a3)],
+                        ARITH_MOV, a0, 0, 0);
         break;
 
     case INDEX_op_brcond2_i32:
         c = tcg_out_cmp2(s, args, const_args);
-        tcg_out_goto_label(s, tcg_cond_to_arm_cond[c], arg_label(args[5]));
+        tcg_out_goto_label(s, tcg_cond_to_arm_cond[c], arg_label(a5));
         break;
     case INDEX_op_setcond2_i32:
         c = tcg_out_cmp2(s, args + 1, const_args + 1);
-        tcg_out_dat_imm(s, tcg_cond_to_arm_cond[c], ARITH_MOV, args[0], 0, 1);
+        tcg_out_dat_imm(s, tcg_cond_to_arm_cond[c], ARITH_MOV, a0, 0, 1);
         tcg_out_dat_imm(s, tcg_cond_to_arm_cond[tcg_invert_cond(c)],
-                        ARITH_MOV, args[0], 0, 0);
+                        ARITH_MOV, a0, 0, 0);
         break;
 
     case INDEX_op_qemu_ld_i32:
@@ -2008,63 +2008,62 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
 
     case INDEX_op_bswap16_i32:
-        tcg_out_bswap16(s, COND_AL, args[0], args[1]);
+        tcg_out_bswap16(s, COND_AL, a0, a1);
         break;
     case INDEX_op_bswap32_i32:
-        tcg_out_bswap32(s, COND_AL, args[0], args[1]);
+        tcg_out_bswap32(s, COND_AL, a0, a1);
         break;
 
     case INDEX_op_ext8s_i32:
-        tcg_out_ext8s(s, COND_AL, args[0], args[1]);
+        tcg_out_ext8s(s, COND_AL, a0, a1);
         break;
     case INDEX_op_ext16s_i32:
-        tcg_out_ext16s(s, COND_AL, args[0], args[1]);
+        tcg_out_ext16s(s, COND_AL, a0, a1);
         break;
     case INDEX_op_ext16u_i32:
-        tcg_out_ext16u(s, COND_AL, args[0], args[1]);
+        tcg_out_ext16u(s, COND_AL, a0, a1);
         break;
 
     case INDEX_op_deposit_i32:
-        tcg_out_deposit(s, COND_AL, args[0], args[2],
-                        args[3], args[4], const_args[2]);
+        tcg_out_deposit(s, COND_AL, a0, a2, a3, a4, const_args[2]);
         break;
     case INDEX_op_extract_i32:
-        tcg_out_extract(s, COND_AL, args[0], args[1], args[2], args[3]);
+        tcg_out_extract(s, COND_AL, a0, a1, a2, a3);
         break;
     case INDEX_op_sextract_i32:
-        tcg_out_sextract(s, COND_AL, args[0], args[1], args[2], args[3]);
+        tcg_out_sextract(s, COND_AL, a0, a1, a2, a3);
         break;
     case INDEX_op_extract2_i32:
         /* ??? These optimization vs zero should be generic. */
         /* ??? But we can't substitute 2 for 1 in the opcode stream yet. */
         if (const_args[1]) {
             if (const_args[2]) {
-                tcg_out_movi(s, TCG_TYPE_REG, args[0], 0);
+                tcg_out_movi(s, TCG_TYPE_REG, a0, 0);
             } else {
-                tcg_out_dat_reg(s, COND_AL, ARITH_MOV, args[0], 0,
-                                args[2], SHIFT_IMM_LSL(32 - args[3]));
+                tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0,
+                                a2, SHIFT_IMM_LSL(32 - a3));
             }
         } else if (const_args[2]) {
-            tcg_out_dat_reg(s, COND_AL, ARITH_MOV, args[0], 0,
-                            args[1], SHIFT_IMM_LSR(args[3]));
+            tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0,
+                            a1, SHIFT_IMM_LSR(a3));
        } else {
             /* We can do extract2 in 2 insns, vs the 3 required otherwise. */
             tcg_out_dat_reg(s, COND_AL, ARITH_MOV, TCG_REG_TMP, 0,
-                            args[2], SHIFT_IMM_LSL(32 - args[3]));
-            tcg_out_dat_reg(s, COND_AL, ARITH_ORR, args[0], TCG_REG_TMP,
-                            args[1], SHIFT_IMM_LSR(args[3]));
+                            a2, SHIFT_IMM_LSL(32 - a3));
+            tcg_out_dat_reg(s, COND_AL, ARITH_ORR, a0, TCG_REG_TMP,
+                            a1, SHIFT_IMM_LSR(a3));
         }
         break;
 
     case INDEX_op_div_i32:
-        tcg_out_sdiv(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_sdiv(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_divu_i32:
-        tcg_out_udiv(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_udiv(s, COND_AL, a0, a1, a2);
         break;
 
     case INDEX_op_mb:
-        tcg_out_mb(s, args[0]);
+        tcg_out_mb(s, a0);
         break;
 
     case INDEX_op_mov_i32:  /* Always emitted via tcg_out_mov. */
-- 
2.26.2