From: Philippe Mathieu-Daudé <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Pierrick Bouvier, Philippe Mathieu-Daudé, Jiaxun Yang, Aurelien Jarno,
 Aleksandar Rikalo, Anton Johansson, Siarhei Volkau
Subject: [PATCH-for-11.1 1/2] target/mips: Expand TCGv type as 32-bit for XBurst MXU
Date: Wed, 1 Apr 2026 16:45:01 +0200
Message-ID: <20260401144503.80510-2-philmd@linaro.org>
In-Reply-To: <20260401144503.80510-1-philmd@linaro.org>
References: <20260401144503.80510-1-philmd@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
The MXU extension is only built as 32-bit, so TCGv expands to TCGv_i32.
Use the latter, which is more explicit.

In gen_mxu_s32madd_sub() directly expand:
 - tcg_gen_ext[u]_tl_i64 -> tcg_gen_ext[u]_i32_i64
 - tcg_gen_concat_tl_i64 -> tcg_gen_concat_i32_i64
the rest being mechanical changes.

Cc: Siarhei Volkau
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier
---
 target/mips/tcg/mxu_translate.c | 1954 +++++++++++++++----------------
 1 file changed, 977 insertions(+), 977 deletions(-)

diff --git a/target/mips/tcg/mxu_translate.c b/target/mips/tcg/mxu_translate.c
index 35ebb0397da..7961b073144 100644
--- a/target/mips/tcg/mxu_translate.c
+++ b/target/mips/tcg/mxu_translate.c
@@ -606,8 +606,8 @@ enum {
 #define MXU_OPTN3_PTN7 7
 
 /* MXU registers */
-static TCGv mxu_gpr[NUMBER_OF_MXU_REGISTERS - 1];
-static TCGv mxu_CR;
+static TCGv_i32 mxu_gpr[NUMBER_OF_MXU_REGISTERS - 1];
+static TCGv_i32 mxu_CR;
 
 static const char mxuregnames[NUMBER_OF_MXU_REGISTERS][4] = {
     "XR1", "XR2", "XR3", "XR4", "XR5", "XR6", "XR7", "XR8",
@@ -628,42 +628,42 @@ void mxu_translate_init(void)
 }
 
 /* MXU General purpose registers moves.
  */
-static inline void gen_load_mxu_gpr(TCGv t, unsigned int reg)
+static inline void gen_load_mxu_gpr(TCGv_i32 t, unsigned int reg)
 {
     if (reg == 0) {
-        tcg_gen_movi_tl(t, 0);
+        tcg_gen_movi_i32(t, 0);
     } else if (reg <= 15) {
-        tcg_gen_mov_tl(t, mxu_gpr[reg - 1]);
+        tcg_gen_mov_i32(t, mxu_gpr[reg - 1]);
     }
 }
 
-static inline void gen_store_mxu_gpr(TCGv t, unsigned int reg)
+static inline void gen_store_mxu_gpr(TCGv_i32 t, unsigned int reg)
 {
     if (reg > 0 && reg <= 15) {
-        tcg_gen_mov_tl(mxu_gpr[reg - 1], t);
+        tcg_gen_mov_i32(mxu_gpr[reg - 1], t);
     }
 }
 
-static inline void gen_extract_mxu_gpr(TCGv t, unsigned int reg,
+static inline void gen_extract_mxu_gpr(TCGv_i32 t, unsigned int reg,
                                        unsigned int ofs, unsigned int len)
 {
     if (reg == 0) {
-        tcg_gen_movi_tl(t, 0);
+        tcg_gen_movi_i32(t, 0);
     } else if (reg <= 15) {
-        tcg_gen_extract_tl(t, mxu_gpr[reg - 1], ofs, len);
+        tcg_gen_extract_i32(t, mxu_gpr[reg - 1], ofs, len);
     }
 }
 
 /* MXU control register moves. */
-static inline void gen_load_mxu_cr(TCGv t)
+static inline void gen_load_mxu_cr(TCGv_i32 t)
 {
-    tcg_gen_mov_tl(t, mxu_CR);
+    tcg_gen_mov_i32(t, mxu_CR);
 }
 
-static inline void gen_store_mxu_cr(TCGv t)
+static inline void gen_store_mxu_cr(TCGv_i32 t)
 {
     /* TODO: Add handling of RW rules for MXU_CR.
      */
-    tcg_gen_mov_tl(mxu_CR, t);
+    tcg_gen_mov_i32(mxu_CR, t);
 }
 
 /*
@@ -671,10 +671,10 @@ static inline void gen_store_mxu_cr(TCGv t)
  */
 static void gen_mxu_s32i2m(DisasContext *ctx)
 {
-    TCGv t0;
+    TCGv_i32 t0;
     uint32_t XRa, Rb;
 
-    t0 = tcg_temp_new();
+    t0 = tcg_temp_new_i32();
 
     XRa = extract32(ctx->opcode, 6, 5);
     Rb = extract32(ctx->opcode, 16, 5);
@@ -692,10 +692,10 @@ static void gen_mxu_s32i2m(DisasContext *ctx)
  */
 static void gen_mxu_s32m2i(DisasContext *ctx)
 {
-    TCGv t0;
+    TCGv_i32 t0;
     uint32_t XRa, Rb;
 
-    t0 = tcg_temp_new();
+    t0 = tcg_temp_new_i32();
 
     XRa = extract32(ctx->opcode, 6, 5);
     Rb = extract32(ctx->opcode, 16, 5);
@@ -717,11 +717,11 @@ static void gen_mxu_s32m2i(DisasContext *ctx)
  */
 static void gen_mxu_s8ldd(DisasContext *ctx, bool postmodify)
 {
-    TCGv t0, t1;
+    TCGv_i32 t0, t1;
     uint32_t XRa, Rb, s8, optn3;
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
+    t0 = tcg_temp_new_i32();
+    t1 = tcg_temp_new_i32();
 
     XRa = extract32(ctx->opcode, 6, 4);
     s8 = extract32(ctx->opcode, 10, 8);
@@ -729,7 +729,7 @@ static void gen_mxu_s8ldd(DisasContext *ctx, bool postmodify)
     Rb = extract32(ctx->opcode, 21, 5);
 
     gen_load_gpr(t0, Rb);
-    tcg_gen_addi_tl(t0, t0, (int8_t)s8);
+    tcg_gen_addi_i32(t0, t0, (int8_t)s8);
     if (postmodify) {
         gen_store_gpr(t0, Rb);
     }
@@ -737,52 +737,52 @@ static void gen_mxu_s8ldd(DisasContext *ctx, bool postmodify)
     switch (optn3) {
     /* XRa[7:0] = tmp8 */
     case MXU_OPTN3_PTN0:
-        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UB);
+        tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UB);
         gen_load_mxu_gpr(t0, XRa);
-        tcg_gen_deposit_tl(t0, t0, t1, 0, 8);
+        tcg_gen_deposit_i32(t0, t0, t1, 0, 8);
         break;
     /* XRa[15:8] = tmp8 */
     case MXU_OPTN3_PTN1:
-        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UB);
+        tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UB);
         gen_load_mxu_gpr(t0, XRa);
-        tcg_gen_deposit_tl(t0, t0, t1, 8, 8);
+        tcg_gen_deposit_i32(t0, t0, t1, 8, 8);
         break;
     /* XRa[23:16] = tmp8 */
     case MXU_OPTN3_PTN2:
-        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UB);
+        tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UB);
         gen_load_mxu_gpr(t0, XRa);
-        tcg_gen_deposit_tl(t0, t0, t1, 16, 8);
+        tcg_gen_deposit_i32(t0, t0, t1, 16, 8);
         break;
     /* XRa[31:24] = tmp8 */
     case MXU_OPTN3_PTN3:
-        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UB);
+        tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UB);
         gen_load_mxu_gpr(t0, XRa);
-        tcg_gen_deposit_tl(t0, t0, t1, 24, 8);
+        tcg_gen_deposit_i32(t0, t0, t1, 24, 8);
         break;
     /* XRa = {8'b0, tmp8, 8'b0, tmp8} */
     case MXU_OPTN3_PTN4:
-        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UB);
-        tcg_gen_deposit_tl(t0, t1, t1, 16, 16);
+        tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UB);
+        tcg_gen_deposit_i32(t0, t1, t1, 16, 16);
         break;
     /* XRa = {tmp8, 8'b0, tmp8, 8'b0} */
     case MXU_OPTN3_PTN5:
-        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UB);
-        tcg_gen_shli_tl(t1, t1, 8);
-        tcg_gen_deposit_tl(t0, t1, t1, 16, 16);
+        tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UB);
+        tcg_gen_shli_i32(t1, t1, 8);
+        tcg_gen_deposit_i32(t0, t1, t1, 16, 16);
         break;
     /* XRa = {{8{sign of tmp8}}, tmp8, {8{sign of tmp8}}, tmp8} */
     case MXU_OPTN3_PTN6:
-        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_SB);
-        tcg_gen_mov_tl(t0, t1);
-        tcg_gen_andi_tl(t0, t0, 0xFF00FFFF);
-        tcg_gen_shli_tl(t1, t1, 16);
-        tcg_gen_or_tl(t0, t0, t1);
+        tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_SB);
+        tcg_gen_mov_i32(t0, t1);
+        tcg_gen_andi_i32(t0, t0, 0xFF00FFFF);
+        tcg_gen_shli_i32(t1, t1, 16);
+        tcg_gen_or_i32(t0, t0, t1);
         break;
     /* XRa = {tmp8, tmp8, tmp8, tmp8} */
     case MXU_OPTN3_PTN7:
-        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UB);
-        tcg_gen_deposit_tl(t1, t1, t1, 8, 8);
-        tcg_gen_deposit_tl(t0, t1, t1, 16, 16);
+        tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UB);
+        tcg_gen_deposit_i32(t1, t1, t1, 8, 8);
+        tcg_gen_deposit_i32(t0, t1, t1, 16, 16);
         break;
     }
 
@@ -797,11 +797,11 @@ static void gen_mxu_s8ldd(DisasContext *ctx, bool postmodify)
  */
 static void gen_mxu_s8std(DisasContext *ctx, bool postmodify)
 {
-    TCGv t0, t1;
+    TCGv_i32 t0, t1;
     uint32_t XRa, Rb, s8, optn3;
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
+    t0 = tcg_temp_new_i32();
+    t1 = tcg_temp_new_i32();
 
     XRa = extract32(ctx->opcode, 6, 4);
     s8 = extract32(ctx->opcode, 10, 8);
@@ -814,7 +814,7 @@ static void gen_mxu_s8std(DisasContext *ctx, bool postmodify)
     }
 
     gen_load_gpr(t0, Rb);
-    tcg_gen_addi_tl(t0, t0, (int8_t)s8);
+    tcg_gen_addi_i32(t0, t0, (int8_t)s8);
     if (postmodify) {
         gen_store_gpr(t0, Rb);
     }
@@ -823,23 +823,23 @@ static void gen_mxu_s8std(DisasContext *ctx, bool postmodify)
     switch (optn3) {
     /* XRa[7:0] => tmp8 */
     case MXU_OPTN3_PTN0:
-        tcg_gen_extract_tl(t1, t1, 0, 8);
+        tcg_gen_extract_i32(t1, t1, 0, 8);
         break;
     /* XRa[15:8] => tmp8 */
     case MXU_OPTN3_PTN1:
-        tcg_gen_extract_tl(t1, t1, 8, 8);
+        tcg_gen_extract_i32(t1, t1, 8, 8);
         break;
     /* XRa[23:16] => tmp8 */
     case MXU_OPTN3_PTN2:
-        tcg_gen_extract_tl(t1, t1, 16, 8);
+        tcg_gen_extract_i32(t1, t1, 16, 8);
         break;
     /* XRa[31:24] => tmp8 */
     case MXU_OPTN3_PTN3:
-        tcg_gen_extract_tl(t1, t1, 24, 8);
+        tcg_gen_extract_i32(t1, t1, 24, 8);
         break;
     }
 
-    tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_UB);
+    tcg_gen_qemu_st_i32(t1, t0, ctx->mem_idx, MO_UB);
 }
 
 /*
@@ -850,12 +850,12 @@ static void gen_mxu_s8std(DisasContext *ctx, bool postmodify)
  */
 static void gen_mxu_s16ldd(DisasContext *ctx, bool postmodify)
 {
-    TCGv t0, t1;
+    TCGv_i32 t0, t1;
     uint32_t XRa, Rb, optn2;
     int32_t s10;
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
+    t0 = tcg_temp_new_i32();
+    t1 = tcg_temp_new_i32();
 
     XRa = extract32(ctx->opcode, 6, 4);
     s10 = sextract32(ctx->opcode, 10, 9) * 2;
@@ -863,7 +863,7 @@ static void gen_mxu_s16ldd(DisasContext *ctx, bool postmodify)
     Rb = extract32(ctx->opcode, 21, 5);
 
     gen_load_gpr(t0, Rb);
-    tcg_gen_addi_tl(t0, t0, s10);
+    tcg_gen_addi_i32(t0, t0, s10);
     if (postmodify) {
         gen_store_gpr(t0, Rb);
     }
@@ -871,25 +871,25 @@ static void gen_mxu_s16ldd(DisasContext *ctx, bool postmodify)
     switch (optn2) {
     /* XRa[15:0] = tmp16 */
     case MXU_OPTN2_PTN0:
-        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UW);
+        tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UW);
         gen_load_mxu_gpr(t0, XRa);
-        tcg_gen_deposit_tl(t0, t0, t1, 0, 16);
+        tcg_gen_deposit_i32(t0, t0, t1, 0, 16);
         break;
     /* XRa[31:16] = tmp16 */
     case MXU_OPTN2_PTN1:
-        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UW);
+        tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UW);
         gen_load_mxu_gpr(t0, XRa);
-        tcg_gen_deposit_tl(t0, t0, t1, 16, 16);
+        tcg_gen_deposit_i32(t0, t0, t1, 16, 16);
         break;
     /* XRa = sign_extend(tmp16) */
     case MXU_OPTN2_PTN2:
-        tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_SW);
+        tcg_gen_qemu_ld_i32(t0, t0, ctx->mem_idx, MO_SW);
         break;
     /* XRa = {tmp16, tmp16} */
     case MXU_OPTN2_PTN3:
-        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UW);
-        tcg_gen_deposit_tl(t0, t1, t1, 0, 16);
-        tcg_gen_deposit_tl(t0, t1, t1, 16, 16);
+        tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UW);
+        tcg_gen_deposit_i32(t0, t1, t1, 0, 16);
+        tcg_gen_deposit_i32(t0, t1, t1, 16, 16);
         break;
     }
 
@@ -904,12 +904,12 @@ static void gen_mxu_s16ldd(DisasContext *ctx, bool postmodify)
  */
 static void gen_mxu_s16std(DisasContext *ctx, bool postmodify)
 {
-    TCGv t0, t1;
+    TCGv_i32 t0, t1;
     uint32_t XRa, Rb, optn2;
     int32_t s10;
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
+    t0 = tcg_temp_new_i32();
+    t1 = tcg_temp_new_i32();
 
     XRa = extract32(ctx->opcode, 6, 4);
     s10 = sextract32(ctx->opcode, 10, 9) * 2;
@@ -922,7 +922,7 @@ static void gen_mxu_s16std(DisasContext *ctx, bool postmodify)
     }
 
     gen_load_gpr(t0, Rb);
-    tcg_gen_addi_tl(t0, t0, s10);
+    tcg_gen_addi_i32(t0, t0, s10);
     if (postmodify) {
         gen_store_gpr(t0, Rb);
     }
@@ -931,15 +931,15 @@ static void gen_mxu_s16std(DisasContext *ctx, bool postmodify)
     switch (optn2) {
     /* XRa[15:0] => tmp16 */
     case MXU_OPTN2_PTN0:
-        tcg_gen_extract_tl(t1, t1, 0, 16);
+        tcg_gen_extract_i32(t1, t1, 0, 16);
         break;
     /* XRa[31:16] => tmp16 */
     case MXU_OPTN2_PTN1:
-        tcg_gen_extract_tl(t1, t1, 16, 16);
+        tcg_gen_extract_i32(t1, t1, 16, 16);
         break;
     }
 
-    tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_UW);
+    tcg_gen_qemu_st_i32(t1, t0, ctx->mem_idx, MO_UW);
 }
 
 /*
@@ -953,11 +953,11 @@ static void gen_mxu_s16std(DisasContext *ctx, bool postmodify)
  */
 static void gen_mxu_s32mul(DisasContext *ctx, bool mulu)
 {
-    TCGv t0, t1;
+    TCGv_i32 t0, t1;
     uint32_t XRa, XRd, rs, rt;
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
+    t0 = tcg_temp_new_i32();
+    t1 = tcg_temp_new_i32();
 
     XRa = extract32(ctx->opcode, 6, 4);
     XRd = extract32(ctx->opcode, 10, 4);
@@ -965,20 +965,20 @@ static void gen_mxu_s32mul(DisasContext *ctx, bool mulu)
     rt = extract32(ctx->opcode, 21, 5);
 
     if (unlikely(rs == 0 || rt == 0)) {
-        tcg_gen_movi_tl(t0, 0);
-        tcg_gen_movi_tl(t1, 0);
+        tcg_gen_movi_i32(t0, 0);
+        tcg_gen_movi_i32(t1, 0);
     } else {
         gen_load_gpr(t0, rs);
         gen_load_gpr(t1, rt);
 
         if (mulu) {
-            tcg_gen_mulu2_tl(t0, t1, t0, t1);
+            tcg_gen_mulu2_i32(t0, t1, t0, t1);
         } else {
-            tcg_gen_muls2_tl(t0, t1, t0, t1);
+            tcg_gen_muls2_i32(t0, t1, t0, t1);
         }
     }
-    tcg_gen_mov_tl(cpu_HI[0], t1);
-    tcg_gen_mov_tl(cpu_LO[0], t0);
+    tcg_gen_mov_i32(cpu_HI[0], t1);
+    tcg_gen_mov_i32(cpu_LO[0], t0);
     gen_store_mxu_gpr(t1, XRa);
     gen_store_mxu_gpr(t0, XRd);
 }
@@ -993,13 +993,13 @@ static void gen_mxu_s32mul(DisasContext *ctx, bool mulu)
 static void gen_mxu_d16mul(DisasContext *ctx, bool fractional,
                            bool packed_result)
 {
-    TCGv t0, t1, t2, t3;
+    TCGv_i32 t0, t1, t2, t3;
     uint32_t XRa, XRb, XRc, XRd, optn2;
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
-    t2 = tcg_temp_new();
-    t3 = tcg_temp_new();
+    t0 = tcg_temp_new_i32();
+    t1 = tcg_temp_new_i32();
+    t2 = tcg_temp_new_i32();
+    t3 = tcg_temp_new_i32();
 
     XRa = extract32(ctx->opcode, 6, 4);
     XRb = extract32(ctx->opcode, 10, 4);
@@ -1014,64 +1014,64 @@ static void gen_mxu_d16mul(DisasContext *ctx, bool fractional,
      */
 
     gen_load_mxu_gpr(t1, XRb);
-    tcg_gen_sextract_tl(t0, t1, 0, 16);
-    tcg_gen_sextract_tl(t1, t1, 16, 16);
+    tcg_gen_sextract_i32(t0, t1, 0, 16);
+    tcg_gen_sextract_i32(t1, t1, 16, 16);
     gen_load_mxu_gpr(t3, XRc);
-    tcg_gen_sextract_tl(t2, t3, 0, 16);
-    tcg_gen_sextract_tl(t3, t3, 16, 16);
+    tcg_gen_sextract_i32(t2, t3, 0, 16);
+    tcg_gen_sextract_i32(t3, t3, 16, 16);
 
     switch (optn2) {
     case MXU_OPTN2_WW: /* XRB.H*XRC.H == lop, XRB.L*XRC.L == rop */
-        tcg_gen_mul_tl(t3, t1, t3);
-        tcg_gen_mul_tl(t2, t0, t2);
+        tcg_gen_mul_i32(t3, t1, t3);
+        tcg_gen_mul_i32(t2, t0, t2);
        break;
    case MXU_OPTN2_LW: /* XRB.L*XRC.H == lop, XRB.L*XRC.L == rop */
-        tcg_gen_mul_tl(t3, t0, t3);
-        tcg_gen_mul_tl(t2, t0, t2);
+        tcg_gen_mul_i32(t3, t0, t3);
+        tcg_gen_mul_i32(t2, t0, t2);
        break;
    case MXU_OPTN2_HW: /* XRB.H*XRC.H == lop, XRB.H*XRC.L == rop */
-        tcg_gen_mul_tl(t3, t1, t3);
-        tcg_gen_mul_tl(t2, t1, t2);
+        tcg_gen_mul_i32(t3, t1, t3);
+        tcg_gen_mul_i32(t2, t1, t2);
        break;
    case MXU_OPTN2_XW: /* XRB.L*XRC.H == lop, XRB.H*XRC.L == rop */
-        tcg_gen_mul_tl(t3, t0, t3);
-        tcg_gen_mul_tl(t2, t1, t2);
+        tcg_gen_mul_i32(t3, t0, t3);
+        tcg_gen_mul_i32(t2, t1, t2);
        break;
    }
    if (fractional) {
        TCGLabel *l_done = gen_new_label();
-        TCGv rounding = tcg_temp_new();
+        TCGv_i32 rounding = tcg_temp_new_i32();
 
-        tcg_gen_shli_tl(t3, t3, 1);
-        tcg_gen_shli_tl(t2, t2, 1);
-        tcg_gen_andi_tl(rounding, mxu_CR, 0x2);
-        tcg_gen_brcondi_tl(TCG_COND_EQ, rounding, 0, l_done);
+        tcg_gen_shli_i32(t3, t3, 1);
+        tcg_gen_shli_i32(t2, t2, 1);
+        tcg_gen_andi_i32(rounding, mxu_CR, 0x2);
+        tcg_gen_brcondi_i32(TCG_COND_EQ, rounding, 0, l_done);
        if (packed_result) {
            TCGLabel *l_apply_bias_l = gen_new_label();
            TCGLabel *l_apply_bias_r = gen_new_label();
            TCGLabel *l_half_done = gen_new_label();
-            TCGv bias = tcg_temp_new();
+            TCGv_i32 bias = tcg_temp_new_i32();
 
            /*
             * D16MULF supports unbiased rounding aka "bankers rounding",
             * "round to even", "convergent rounding"
             */
-            tcg_gen_andi_tl(bias, mxu_CR, 0x4);
-            tcg_gen_brcondi_tl(TCG_COND_NE, bias, 0, l_apply_bias_l);
-            tcg_gen_andi_tl(t0, t3, 0x1ffff);
-            tcg_gen_brcondi_tl(TCG_COND_EQ, t0, 0x8000, l_half_done);
+            tcg_gen_andi_i32(bias, mxu_CR, 0x4);
+            tcg_gen_brcondi_i32(TCG_COND_NE, bias, 0, l_apply_bias_l);
+            tcg_gen_andi_i32(t0, t3, 0x1ffff);
+            tcg_gen_brcondi_i32(TCG_COND_EQ, t0, 0x8000, l_half_done);
            gen_set_label(l_apply_bias_l);
-            tcg_gen_addi_tl(t3, t3, 0x8000);
+            tcg_gen_addi_i32(t3, t3, 0x8000);
            gen_set_label(l_half_done);
-            tcg_gen_brcondi_tl(TCG_COND_NE, bias, 0, l_apply_bias_r);
-            tcg_gen_andi_tl(t0, t2, 0x1ffff);
-            tcg_gen_brcondi_tl(TCG_COND_EQ, t0, 0x8000, l_done);
+            tcg_gen_brcondi_i32(TCG_COND_NE, bias, 0, l_apply_bias_r);
+            tcg_gen_andi_i32(t0, t2, 0x1ffff);
+            tcg_gen_brcondi_i32(TCG_COND_EQ, t0, 0x8000, l_done);
            gen_set_label(l_apply_bias_r);
-            tcg_gen_addi_tl(t2, t2, 0x8000);
+            tcg_gen_addi_i32(t2, t2, 0x8000);
        } else {
            /* D16MULE doesn't support unbiased rounding */
-            tcg_gen_addi_tl(t3, t3, 0x8000);
-            tcg_gen_addi_tl(t2, t2, 0x8000);
+            tcg_gen_addi_i32(t3, t3, 0x8000);
+            tcg_gen_addi_i32(t2, t2, 0x8000);
        }
        gen_set_label(l_done);
    }
@@ -1079,9 +1079,9 @@ static void gen_mxu_d16mul(DisasContext *ctx, bool fractional,
         gen_store_mxu_gpr(t3, XRa);
         gen_store_mxu_gpr(t2, XRd);
     } else {
-        tcg_gen_andi_tl(t3, t3, 0xffff0000);
-        tcg_gen_shri_tl(t2, t2, 16);
-        tcg_gen_or_tl(t3, t3, t2);
+        tcg_gen_andi_i32(t3, t3, 0xffff0000);
+        tcg_gen_shri_i32(t2, t2, 16);
+        tcg_gen_or_i32(t3, t3, t2);
         gen_store_mxu_gpr(t3, XRa);
     }
 }
@@ -1097,13 +1097,13 @@ static void gen_mxu_d16mul(DisasContext *ctx, bool fractional,
 static void gen_mxu_d16mac(DisasContext *ctx, bool fractional,
                            bool packed_result)
 {
-    TCGv t0, t1, t2, t3;
+    TCGv_i32 t0, t1, t2, t3;
     uint32_t XRa, XRb, XRc, XRd, optn2, aptn2;
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
-    t2 = tcg_temp_new();
-    t3 = tcg_temp_new();
+    t0 = tcg_temp_new_i32();
+    t1 = tcg_temp_new_i32();
+    t2 = tcg_temp_new_i32();
+    t3 = tcg_temp_new_i32();
 
     XRa = extract32(ctx->opcode, 6, 4);
     XRb = extract32(ctx->opcode, 10, 4);
@@ -1113,90 +1113,90 @@ static void gen_mxu_d16mac(DisasContext *ctx, bool fractional,
     aptn2 = extract32(ctx->opcode, 24, 2);
 
     gen_load_mxu_gpr(t1, XRb);
-    tcg_gen_sextract_tl(t0, t1, 0, 16);
-    tcg_gen_sextract_tl(t1, t1, 16, 16);
+    tcg_gen_sextract_i32(t0, t1, 0, 16);
+    tcg_gen_sextract_i32(t1, t1, 16, 16);
 
     gen_load_mxu_gpr(t3, XRc);
-    tcg_gen_sextract_tl(t2, t3, 0, 16);
-    tcg_gen_sextract_tl(t3, t3, 16, 16);
+    tcg_gen_sextract_i32(t2, t3, 0, 16);
+    tcg_gen_sextract_i32(t3, t3, 16, 16);
 
     switch (optn2) {
     case MXU_OPTN2_WW: /* XRB.H*XRC.H == lop, XRB.L*XRC.L == rop */
-        tcg_gen_mul_tl(t3, t1, t3);
-        tcg_gen_mul_tl(t2, t0, t2);
+        tcg_gen_mul_i32(t3, t1, t3);
+        tcg_gen_mul_i32(t2, t0, t2);
         break;
     case MXU_OPTN2_LW: /* XRB.L*XRC.H == lop, XRB.L*XRC.L == rop */
-        tcg_gen_mul_tl(t3, t0, t3);
-        tcg_gen_mul_tl(t2, t0, t2);
+        tcg_gen_mul_i32(t3, t0, t3);
+        tcg_gen_mul_i32(t2, t0, t2);
         break;
     case MXU_OPTN2_HW: /* XRB.H*XRC.H == lop, XRB.H*XRC.L == rop */
-        tcg_gen_mul_tl(t3, t1, t3);
-        tcg_gen_mul_tl(t2, t1, t2);
+        tcg_gen_mul_i32(t3, t1, t3);
+        tcg_gen_mul_i32(t2, t1, t2);
         break;
     case MXU_OPTN2_XW: /* XRB.L*XRC.H == lop, XRB.H*XRC.L == rop */
-        tcg_gen_mul_tl(t3, t0, t3);
-        tcg_gen_mul_tl(t2, t1, t2);
+        tcg_gen_mul_i32(t3, t0, t3);
+        tcg_gen_mul_i32(t2, t1, t2);
         break;
     }
 
     if (fractional) {
-        tcg_gen_shli_tl(t3, t3, 1);
-        tcg_gen_shli_tl(t2, t2, 1);
+        tcg_gen_shli_i32(t3, t3, 1);
+        tcg_gen_shli_i32(t2, t2, 1);
     }
     gen_load_mxu_gpr(t0, XRa);
     gen_load_mxu_gpr(t1, XRd);
 
     switch (aptn2) {
     case MXU_APTN2_AA:
-        tcg_gen_add_tl(t3, t0, t3);
-        tcg_gen_add_tl(t2, t1, t2);
+        tcg_gen_add_i32(t3, t0, t3);
+        tcg_gen_add_i32(t2, t1, t2);
         break;
     case MXU_APTN2_AS:
-        tcg_gen_add_tl(t3, t0, t3);
-        tcg_gen_sub_tl(t2, t1, t2);
+        tcg_gen_add_i32(t3, t0, t3);
+        tcg_gen_sub_i32(t2, t1, t2);
         break;
     case MXU_APTN2_SA:
-        tcg_gen_sub_tl(t3, t0, t3);
-        tcg_gen_add_tl(t2, t1, t2);
+        tcg_gen_sub_i32(t3, t0, t3);
+        tcg_gen_add_i32(t2, t1, t2);
         break;
     case MXU_APTN2_SS:
-        tcg_gen_sub_tl(t3, t0, t3);
-        tcg_gen_sub_tl(t2, t1, t2);
+        tcg_gen_sub_i32(t3, t0, t3);
+        tcg_gen_sub_i32(t2, t1, t2);
         break;
     }
 
     if (fractional) {
         TCGLabel *l_done = gen_new_label();
-        TCGv rounding = tcg_temp_new();
+        TCGv_i32 rounding = tcg_temp_new_i32();
 
-        tcg_gen_andi_tl(rounding, mxu_CR, 0x2);
-        tcg_gen_brcondi_tl(TCG_COND_EQ, rounding, 0, l_done);
+        tcg_gen_andi_i32(rounding, mxu_CR, 0x2);
+        tcg_gen_brcondi_i32(TCG_COND_EQ, rounding, 0, l_done);
         if (packed_result) {
             TCGLabel *l_apply_bias_l = gen_new_label();
             TCGLabel *l_apply_bias_r = gen_new_label();
             TCGLabel *l_half_done = gen_new_label();
-            TCGv bias = tcg_temp_new();
+            TCGv_i32 bias = tcg_temp_new_i32();
 
             /*
              * D16MACF supports unbiased rounding aka "bankers rounding",
              * "round to even", "convergent rounding"
             */
-            tcg_gen_andi_tl(bias, mxu_CR, 0x4);
-            tcg_gen_brcondi_tl(TCG_COND_NE, bias, 0, l_apply_bias_l);
-            tcg_gen_andi_tl(t0, t3, 0x1ffff);
-            tcg_gen_brcondi_tl(TCG_COND_EQ, t0, 0x8000, l_half_done);
+            tcg_gen_andi_i32(bias, mxu_CR, 0x4);
+            tcg_gen_brcondi_i32(TCG_COND_NE, bias, 0, l_apply_bias_l);
+            tcg_gen_andi_i32(t0, t3, 0x1ffff);
+            tcg_gen_brcondi_i32(TCG_COND_EQ, t0, 0x8000, l_half_done);
            gen_set_label(l_apply_bias_l);
-            tcg_gen_addi_tl(t3, t3, 0x8000);
+            tcg_gen_addi_i32(t3, t3, 0x8000);
            gen_set_label(l_half_done);
-            tcg_gen_brcondi_tl(TCG_COND_NE, bias, 0, l_apply_bias_r);
-            tcg_gen_andi_tl(t0, t2, 0x1ffff);
-            tcg_gen_brcondi_tl(TCG_COND_EQ, t0, 0x8000, l_done);
+            tcg_gen_brcondi_i32(TCG_COND_NE, bias, 0, l_apply_bias_r);
+            tcg_gen_andi_i32(t0, t2, 0x1ffff);
+            tcg_gen_brcondi_i32(TCG_COND_EQ, t0, 0x8000, l_done);
            gen_set_label(l_apply_bias_r);
-            tcg_gen_addi_tl(t2, t2, 0x8000);
+            tcg_gen_addi_i32(t2, t2, 0x8000);
        } else {
            /* D16MACE doesn't support unbiased rounding */
-            tcg_gen_addi_tl(t3, t3, 0x8000);
-            tcg_gen_addi_tl(t2, t2, 0x8000);
+            tcg_gen_addi_i32(t3, t3, 0x8000);
+            tcg_gen_addi_i32(t2, t2, 0x8000);
        }
        gen_set_label(l_done);
    }
@@ -1205,9 +1205,9 @@ static void gen_mxu_d16mac(DisasContext *ctx, bool fractional,
         gen_store_mxu_gpr(t3, XRa);
         gen_store_mxu_gpr(t2, XRd);
     } else {
-        tcg_gen_andi_tl(t3, t3, 0xffff0000);
-        tcg_gen_shri_tl(t2, t2, 16);
-        tcg_gen_or_tl(t3, t3, t2);
+        tcg_gen_andi_i32(t3, t3, 0xffff0000);
+        tcg_gen_shri_i32(t2, t2, 16);
+        tcg_gen_or_i32(t3, t3, t2);
         gen_store_mxu_gpr(t3, XRa);
     }
 }
@@ -1218,13 +1218,13 @@ static void gen_mxu_d16mac(DisasContext *ctx, bool fractional,
  */
 static void gen_mxu_d16madl(DisasContext *ctx)
 {
-    TCGv t0, t1, t2, t3;
+    TCGv_i32 t0, t1, t2, t3;
     uint32_t XRa, XRb, XRc, XRd, optn2, aptn2;
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
-    t2 = tcg_temp_new();
-    t3 = tcg_temp_new();
+    t0 = tcg_temp_new_i32();
+    t1 = tcg_temp_new_i32();
+    t2 = tcg_temp_new_i32();
+    t3 = tcg_temp_new_i32();
 
     XRa = extract32(ctx->opcode, 6, 4);
     XRb = extract32(ctx->opcode, 10, 4);
@@ -1234,60 +1234,60 @@ static void gen_mxu_d16madl(DisasContext *ctx)
     aptn2 = extract32(ctx->opcode, 24, 2);
 
     gen_load_mxu_gpr(t1, XRb);
-    tcg_gen_sextract_tl(t0, t1, 0, 16);
-    tcg_gen_sextract_tl(t1, t1, 16, 16);
+    tcg_gen_sextract_i32(t0, t1, 0, 16);
+    tcg_gen_sextract_i32(t1, t1, 16, 16);
 
     gen_load_mxu_gpr(t3, XRc);
-    tcg_gen_sextract_tl(t2, t3, 0, 16);
-    tcg_gen_sextract_tl(t3, t3, 16, 16);
+    tcg_gen_sextract_i32(t2, t3, 0, 16);
+    tcg_gen_sextract_i32(t3, t3, 16, 16);
 
     switch (optn2) {
     case MXU_OPTN2_WW: /* XRB.H*XRC.H == lop, XRB.L*XRC.L == rop */
-        tcg_gen_mul_tl(t3, t1, t3);
-        tcg_gen_mul_tl(t2, t0, t2);
+        tcg_gen_mul_i32(t3, t1, t3);
+        tcg_gen_mul_i32(t2, t0, t2);
        break;
    case MXU_OPTN2_LW: /* XRB.L*XRC.H == lop, XRB.L*XRC.L == rop */
-        tcg_gen_mul_tl(t3, t0, t3);
-        tcg_gen_mul_tl(t2, t0, t2);
+        tcg_gen_mul_i32(t3, t0, t3);
+        tcg_gen_mul_i32(t2, t0, t2);
        break;
    case MXU_OPTN2_HW: /* XRB.H*XRC.H == lop, XRB.H*XRC.L == rop */
-        tcg_gen_mul_tl(t3, t1, t3);
-        tcg_gen_mul_tl(t2, t1, t2);
+        tcg_gen_mul_i32(t3, t1, t3);
+        tcg_gen_mul_i32(t2, t1, t2);
        break;
    case MXU_OPTN2_XW: /* XRB.L*XRC.H == lop, XRB.H*XRC.L == rop */
-        tcg_gen_mul_tl(t3, t0, t3);
-        tcg_gen_mul_tl(t2, t1, t2);
+        tcg_gen_mul_i32(t3, t0, t3);
+        tcg_gen_mul_i32(t2, t1, t2);
        break;
    }
-    tcg_gen_extract_tl(t2, t2, 0, 16);
-    tcg_gen_extract_tl(t3, t3, 0, 16);
+    tcg_gen_extract_i32(t2, t2, 0, 16);
+    tcg_gen_extract_i32(t3, t3, 0, 16);
 
     gen_load_mxu_gpr(t1, XRa);
-    tcg_gen_extract_tl(t0, t1, 0, 16);
-    tcg_gen_extract_tl(t1, t1, 16, 16);
+    tcg_gen_extract_i32(t0, t1, 0, 16);
+    tcg_gen_extract_i32(t1, t1, 16, 16);
 
     switch (aptn2) {
     case MXU_APTN2_AA:
-        tcg_gen_add_tl(t3, t1, t3);
-        tcg_gen_add_tl(t2, t0, t2);
+        tcg_gen_add_i32(t3, t1, t3);
+        tcg_gen_add_i32(t2, t0, t2);
        break;
    case MXU_APTN2_AS:
-        tcg_gen_add_tl(t3, t1, t3);
-        tcg_gen_sub_tl(t2, t0, t2);
+        tcg_gen_add_i32(t3, t1, t3);
+        tcg_gen_sub_i32(t2, t0, t2);
        break;
    case MXU_APTN2_SA:
-        tcg_gen_sub_tl(t3, t1, t3);
-        tcg_gen_add_tl(t2, t0, t2);
+        tcg_gen_sub_i32(t3, t1, t3);
+        tcg_gen_add_i32(t2, t0, t2);
        break;
    case MXU_APTN2_SS:
-        tcg_gen_sub_tl(t3, t1, t3);
-        tcg_gen_sub_tl(t2, t0, t2);
+        tcg_gen_sub_i32(t3, t1, t3);
+        tcg_gen_sub_i32(t2, t0, t2);
        break;
    }
 
-    tcg_gen_andi_tl(t2, t2, 0xffff);
-    tcg_gen_shli_tl(t3, t3, 16);
-    tcg_gen_or_tl(mxu_gpr[XRd - 1], t3, t2);
+    tcg_gen_andi_i32(t2, t2, 0xffff);
+    tcg_gen_shli_i32(t3, t3, 16);
+    tcg_gen_or_i32(mxu_gpr[XRd - 1], t3, t2);
 }
 
 /*
@@ -1296,11 +1296,11 @@ static void gen_mxu_d16madl(DisasContext *ctx)
  */
 static void gen_mxu_s16mad(DisasContext *ctx)
 {
-    TCGv t0, t1;
+    TCGv_i32 t0, t1;
     uint32_t XRa, XRb, XRc, XRd, optn2, aptn1, pad;
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
+    t0 = tcg_temp_new_i32();
+    t1 = tcg_temp_new_i32();
 
     XRa = extract32(ctx->opcode, 6, 4);
     XRb = extract32(ctx->opcode, 10, 4);
@@ -1319,32 +1319,32 @@ static void gen_mxu_s16mad(DisasContext *ctx)
 
     switch (optn2) {
     case MXU_OPTN2_WW: /* XRB.H*XRC.H */
-        tcg_gen_sextract_tl(t0, t0, 16, 16);
-        tcg_gen_sextract_tl(t1, t1, 16, 16);
+        tcg_gen_sextract_i32(t0, t0, 16, 16);
+        tcg_gen_sextract_i32(t1, t1, 16, 16);
         break;
     case MXU_OPTN2_LW: /* XRB.L*XRC.L */
-        tcg_gen_sextract_tl(t0, t0, 0, 16);
-        tcg_gen_sextract_tl(t1, t1, 0, 16);
+        tcg_gen_sextract_i32(t0, t0, 0, 16);
+        tcg_gen_sextract_i32(t1, t1, 0, 16);
         break;
     case MXU_OPTN2_HW: /* XRB.H*XRC.L */
-        tcg_gen_sextract_tl(t0, t0, 16, 16);
-        tcg_gen_sextract_tl(t1, t1, 0, 16);
+        tcg_gen_sextract_i32(t0, t0, 16, 16);
+        tcg_gen_sextract_i32(t1, t1, 0, 16);
         break;
     case MXU_OPTN2_XW: /* XRB.L*XRC.H */
-        tcg_gen_sextract_tl(t0, t0, 0, 16);
-        tcg_gen_sextract_tl(t1, t1, 16, 16);
+        tcg_gen_sextract_i32(t0, t0, 0, 16);
+        tcg_gen_sextract_i32(t1, t1, 16, 16);
         break;
     }
-    tcg_gen_mul_tl(t0, t0, t1);
+    tcg_gen_mul_i32(t0, t0, t1);
 
     gen_load_mxu_gpr(t1, XRa);
 
     switch (aptn1) {
     case MXU_APTN1_A:
-        tcg_gen_add_tl(t1, t1, t0);
+        tcg_gen_add_i32(t1, t1, t0);
         break;
     case MXU_APTN1_S:
-        tcg_gen_sub_tl(t1, t1, t0);
+        tcg_gen_sub_i32(t1, t1, t0);
         break;
     }
 
@@ -1361,17 +1361,17 @@ static void gen_mxu_s16mad(DisasContext *ctx)
  */
 static void gen_mxu_q8mul_mac(DisasContext *ctx, bool su, bool mac)
 {
-    TCGv t0, t1, t2, t3, t4, t5, t6, t7;
+    TCGv_i32 t0, t1, t2, t3, t4, t5, t6, t7;
     uint32_t XRa, XRb, XRc, XRd, aptn2;
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
-    t2 = tcg_temp_new();
-    t3 = tcg_temp_new();
-    t4 = tcg_temp_new();
-    t5 = tcg_temp_new();
-    t6 = tcg_temp_new();
-    t7 = tcg_temp_new();
+    t0 = tcg_temp_new_i32();
+    t1 = tcg_temp_new_i32();
+    t2 = tcg_temp_new_i32();
+    t3 = tcg_temp_new_i32();
+    t4 = tcg_temp_new_i32();
+    t5 = tcg_temp_new_i32();
+    t6 = tcg_temp_new_i32();
+    t7 = tcg_temp_new_i32();
 
     XRa = extract32(ctx->opcode, 6, 4);
     XRb = extract32(ctx->opcode, 10, 4);
@@ -1384,53 +1384,53 @@ static void gen_mxu_q8mul_mac(DisasContext *ctx, bool su, bool mac)
 
     if (su) {
         /* Q8MULSU / Q8MACSU */
-        tcg_gen_sextract_tl(t0, t3, 0, 8);
-        tcg_gen_sextract_tl(t1, t3, 8, 8);
-        tcg_gen_sextract_tl(t2, t3, 16, 8);
-        tcg_gen_sextract_tl(t3, t3, 24, 8);
+        tcg_gen_sextract_i32(t0, t3, 0, 8);
+        tcg_gen_sextract_i32(t1, t3, 8, 8);
+        tcg_gen_sextract_i32(t2, t3, 16, 8);
+        tcg_gen_sextract_i32(t3, t3, 24, 8);
     } else {
         /* Q8MUL / Q8MAC */
-        tcg_gen_extract_tl(t0, t3, 0, 8);
-        tcg_gen_extract_tl(t1, t3, 8, 8);
-        tcg_gen_extract_tl(t2, t3, 16, 8);
-        tcg_gen_extract_tl(t3, t3, 24, 8);
+        tcg_gen_extract_i32(t0, t3, 0, 8);
+        tcg_gen_extract_i32(t1, t3, 8, 8);
+        tcg_gen_extract_i32(t2, t3, 16, 8);
+        tcg_gen_extract_i32(t3, t3, 24, 8);
     }
 
-    tcg_gen_extract_tl(t4, t7, 0, 8);
-    tcg_gen_extract_tl(t5, t7, 8, 8);
-    tcg_gen_extract_tl(t6, t7, 16, 8);
-    tcg_gen_extract_tl(t7, t7, 24, 8);
+    tcg_gen_extract_i32(t4, t7, 0, 8);
+    tcg_gen_extract_i32(t5, t7, 8, 8);
+    tcg_gen_extract_i32(t6, t7, 16, 8);
+    tcg_gen_extract_i32(t7, t7, 24, 8);
 
-    tcg_gen_mul_tl(t0, t0, t4);
-    tcg_gen_mul_tl(t1, t1, t5);
-    tcg_gen_mul_tl(t2, t2, t6);
-    tcg_gen_mul_tl(t3, t3, t7);
+    tcg_gen_mul_i32(t0, t0, t4);
+    tcg_gen_mul_i32(t1, t1, t5);
+    tcg_gen_mul_i32(t2, t2, t6);
+    tcg_gen_mul_i32(t3, t3, t7);
 
     if (mac) {
         gen_load_mxu_gpr(t4, XRd);
         gen_load_mxu_gpr(t5, XRa);
-        tcg_gen_extract_tl(t6, t4, 0, 16);
-        tcg_gen_extract_tl(t7, t4, 16, 16);
+        tcg_gen_extract_i32(t6, t4, 0, 16);
+        tcg_gen_extract_i32(t7, t4, 16, 16);
         if (aptn2 & 1) {
-            tcg_gen_sub_tl(t0, t6, t0);
-            tcg_gen_sub_tl(t1, t7, t1);
+            tcg_gen_sub_i32(t0, t6, t0);
+            tcg_gen_sub_i32(t1, t7, t1);
        } else {
-            tcg_gen_add_tl(t0, t6, t0);
-            tcg_gen_add_tl(t1, t7, t1);
+            tcg_gen_add_i32(t0, t6, t0);
+            tcg_gen_add_i32(t1, t7, t1);
        }
-        tcg_gen_extract_tl(t6, t5, 0, 16);
-        tcg_gen_extract_tl(t7, t5, 16, 16);
+        tcg_gen_extract_i32(t6, t5, 0, 16);
+        tcg_gen_extract_i32(t7, t5, 16, 16);
        if (aptn2 & 2) {
-            tcg_gen_sub_tl(t2, t6, t2);
-            tcg_gen_sub_tl(t3, t7, t3);
+            tcg_gen_sub_i32(t2, t6, t2);
+            tcg_gen_sub_i32(t3, t7, t3);
        } else {
-            tcg_gen_add_tl(t2, t6, t2);
- tcg_gen_add_tl(t3, t7, t3); + tcg_gen_add_i32(t2, t6, t2); + tcg_gen_add_i32(t3, t7, t3); } } =20 - tcg_gen_deposit_tl(t0, t0, t1, 16, 16); - tcg_gen_deposit_tl(t1, t2, t3, 16, 16); + tcg_gen_deposit_i32(t0, t0, t1, 16, 16); + tcg_gen_deposit_i32(t1, t2, t3, 16, 16); =20 gen_store_mxu_gpr(t0, XRd); gen_store_mxu_gpr(t1, XRa); @@ -1443,17 +1443,17 @@ static void gen_mxu_q8mul_mac(DisasContext *ctx, bo= ol su, bool mac) */ static void gen_mxu_q8madl(DisasContext *ctx) { - TCGv t0, t1, t2, t3, t4, t5, t6, t7; + TCGv_i32 t0, t1, t2, t3, t4, t5, t6, t7; uint32_t XRa, XRb, XRc, XRd, aptn2; =20 - t0 =3D tcg_temp_new(); - t1 =3D tcg_temp_new(); - t2 =3D tcg_temp_new(); - t3 =3D tcg_temp_new(); - t4 =3D tcg_temp_new(); - t5 =3D tcg_temp_new(); - t6 =3D tcg_temp_new(); - t7 =3D tcg_temp_new(); + t0 =3D tcg_temp_new_i32(); + t1 =3D tcg_temp_new_i32(); + t2 =3D tcg_temp_new_i32(); + t3 =3D tcg_temp_new_i32(); + t4 =3D tcg_temp_new_i32(); + t5 =3D tcg_temp_new_i32(); + t6 =3D tcg_temp_new_i32(); + t7 =3D tcg_temp_new_i32(); =20 XRa =3D extract32(ctx->opcode, 6, 4); XRb =3D extract32(ctx->opcode, 10, 4); @@ -1464,45 +1464,45 @@ static void gen_mxu_q8madl(DisasContext *ctx) gen_load_mxu_gpr(t3, XRb); gen_load_mxu_gpr(t7, XRc); =20 - tcg_gen_extract_tl(t0, t3, 0, 8); - tcg_gen_extract_tl(t1, t3, 8, 8); - tcg_gen_extract_tl(t2, t3, 16, 8); - tcg_gen_extract_tl(t3, t3, 24, 8); + tcg_gen_extract_i32(t0, t3, 0, 8); + tcg_gen_extract_i32(t1, t3, 8, 8); + tcg_gen_extract_i32(t2, t3, 16, 8); + tcg_gen_extract_i32(t3, t3, 24, 8); =20 - tcg_gen_extract_tl(t4, t7, 0, 8); - tcg_gen_extract_tl(t5, t7, 8, 8); - tcg_gen_extract_tl(t6, t7, 16, 8); - tcg_gen_extract_tl(t7, t7, 24, 8); + tcg_gen_extract_i32(t4, t7, 0, 8); + tcg_gen_extract_i32(t5, t7, 8, 8); + tcg_gen_extract_i32(t6, t7, 16, 8); + tcg_gen_extract_i32(t7, t7, 24, 8); =20 - tcg_gen_mul_tl(t0, t0, t4); - tcg_gen_mul_tl(t1, t1, t5); - tcg_gen_mul_tl(t2, t2, t6); - tcg_gen_mul_tl(t3, t3, t7); + tcg_gen_mul_i32(t0, t0, t4); + 
tcg_gen_mul_i32(t1, t1, t5); + tcg_gen_mul_i32(t2, t2, t6); + tcg_gen_mul_i32(t3, t3, t7); =20 gen_load_mxu_gpr(t4, XRa); - tcg_gen_extract_tl(t6, t4, 0, 8); - tcg_gen_extract_tl(t7, t4, 8, 8); + tcg_gen_extract_i32(t6, t4, 0, 8); + tcg_gen_extract_i32(t7, t4, 8, 8); if (aptn2 & 1) { - tcg_gen_sub_tl(t0, t6, t0); - tcg_gen_sub_tl(t1, t7, t1); + tcg_gen_sub_i32(t0, t6, t0); + tcg_gen_sub_i32(t1, t7, t1); } else { - tcg_gen_add_tl(t0, t6, t0); - tcg_gen_add_tl(t1, t7, t1); + tcg_gen_add_i32(t0, t6, t0); + tcg_gen_add_i32(t1, t7, t1); } - tcg_gen_extract_tl(t6, t4, 16, 8); - tcg_gen_extract_tl(t7, t4, 24, 8); + tcg_gen_extract_i32(t6, t4, 16, 8); + tcg_gen_extract_i32(t7, t4, 24, 8); if (aptn2 & 2) { - tcg_gen_sub_tl(t2, t6, t2); - tcg_gen_sub_tl(t3, t7, t3); + tcg_gen_sub_i32(t2, t6, t2); + tcg_gen_sub_i32(t3, t7, t3); } else { - tcg_gen_add_tl(t2, t6, t2); - tcg_gen_add_tl(t3, t7, t3); + tcg_gen_add_i32(t2, t6, t2); + tcg_gen_add_i32(t3, t7, t3); } =20 - tcg_gen_andi_tl(t5, t0, 0xff); - tcg_gen_deposit_tl(t5, t5, t1, 8, 8); - tcg_gen_deposit_tl(t5, t5, t2, 16, 8); - tcg_gen_deposit_tl(t5, t5, t3, 24, 8); + tcg_gen_andi_i32(t5, t0, 0xff); + tcg_gen_deposit_i32(t5, t5, t1, 8, 8); + tcg_gen_deposit_i32(t5, t5, t2, 16, 8); + tcg_gen_deposit_i32(t5, t5, t3, 24, 8); =20 gen_store_mxu_gpr(t5, XRd); } @@ -1518,21 +1518,21 @@ static void gen_mxu_q8madl(DisasContext *ctx) */ static void gen_mxu_s32ldxx(DisasContext *ctx, bool reversed, bool postinc) { - TCGv t0, t1; + TCGv_i32 t0, t1; uint32_t XRa, Rb, s12; =20 - t0 =3D tcg_temp_new(); - t1 =3D tcg_temp_new(); + t0 =3D tcg_temp_new_i32(); + t1 =3D tcg_temp_new_i32(); =20 XRa =3D extract32(ctx->opcode, 6, 4); s12 =3D sextract32(ctx->opcode, 10, 10); Rb =3D extract32(ctx->opcode, 21, 5); =20 gen_load_gpr(t0, Rb); - tcg_gen_movi_tl(t1, s12 * 4); - tcg_gen_add_tl(t0, t0, t1); + tcg_gen_movi_i32(t1, s12 * 4); + tcg_gen_add_i32(t0, t0, t1); =20 - tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, + tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, 
MO_SL | mo_endian_rev(ctx, reversed) | ctx->default_tcg_memop_mask); gen_store_mxu_gpr(t1, XRa); @@ -1553,22 +1553,22 @@ static void gen_mxu_s32ldxx(DisasContext *ctx, bool= reversed, bool postinc) */ static void gen_mxu_s32stxx(DisasContext *ctx, bool reversed, bool postinc) { - TCGv t0, t1; + TCGv_i32 t0, t1; uint32_t XRa, Rb, s12; =20 - t0 =3D tcg_temp_new(); - t1 =3D tcg_temp_new(); + t0 =3D tcg_temp_new_i32(); + t1 =3D tcg_temp_new_i32(); =20 XRa =3D extract32(ctx->opcode, 6, 4); s12 =3D sextract32(ctx->opcode, 10, 10); Rb =3D extract32(ctx->opcode, 21, 5); =20 gen_load_gpr(t0, Rb); - tcg_gen_movi_tl(t1, s12 * 4); - tcg_gen_add_tl(t0, t0, t1); + tcg_gen_movi_i32(t1, s12 * 4); + tcg_gen_add_i32(t0, t0, t1); =20 gen_load_mxu_gpr(t1, XRa); - tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, + tcg_gen_qemu_st_i32(t1, t0, ctx->mem_idx, MO_SL | mo_endian_rev(ctx, reversed) | ctx->default_tcg_memop_mask); =20 @@ -1589,11 +1589,11 @@ static void gen_mxu_s32stxx(DisasContext *ctx, bool= reversed, bool postinc) static void gen_mxu_s32ldxvx(DisasContext *ctx, bool reversed, bool postinc, uint32_t strd2) { - TCGv t0, t1; + TCGv_i32 t0, t1; uint32_t XRa, Rb, Rc; =20 - t0 =3D tcg_temp_new(); - t1 =3D tcg_temp_new(); + t0 =3D tcg_temp_new_i32(); + t1 =3D tcg_temp_new_i32(); =20 XRa =3D extract32(ctx->opcode, 6, 4); Rc =3D extract32(ctx->opcode, 16, 5); @@ -1601,10 +1601,10 @@ static void gen_mxu_s32ldxvx(DisasContext *ctx, boo= l reversed, =20 gen_load_gpr(t0, Rb); gen_load_gpr(t1, Rc); - tcg_gen_shli_tl(t1, t1, strd2); - tcg_gen_add_tl(t0, t0, t1); + tcg_gen_shli_i32(t1, t1, strd2); + tcg_gen_add_i32(t0, t0, t1); =20 - tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, + tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_SL | mo_endian_rev(ctx, reversed) | ctx->default_tcg_memop_mask); gen_store_mxu_gpr(t1, XRa); @@ -1627,11 +1627,11 @@ static void gen_mxu_s32ldxvx(DisasContext *ctx, boo= l reversed, */ static void gen_mxu_lxx(DisasContext *ctx, uint32_t strd2, MemOp mop) { - TCGv t0, t1; + 
TCGv_i32 t0, t1; uint32_t Ra, Rb, Rc; =20 - t0 =3D tcg_temp_new(); - t1 =3D tcg_temp_new(); + t0 =3D tcg_temp_new_i32(); + t1 =3D tcg_temp_new_i32(); =20 Ra =3D extract32(ctx->opcode, 11, 5); Rc =3D extract32(ctx->opcode, 16, 5); @@ -1639,10 +1639,10 @@ static void gen_mxu_lxx(DisasContext *ctx, uint32_t= strd2, MemOp mop) =20 gen_load_gpr(t0, Rb); gen_load_gpr(t1, Rc); - tcg_gen_shli_tl(t1, t1, strd2); - tcg_gen_add_tl(t0, t0, t1); + tcg_gen_shli_i32(t1, t1, strd2); + tcg_gen_add_i32(t0, t0, t1); =20 - tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, mop | ctx->default_tcg_memop_= mask); + tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, mop | ctx->default_tcg_memop= _mask); gen_store_gpr(t1, Ra); } =20 @@ -1658,11 +1658,11 @@ static void gen_mxu_lxx(DisasContext *ctx, uint32_t= strd2, MemOp mop) static void gen_mxu_s32stxvx(DisasContext *ctx, bool reversed, bool postinc, uint32_t strd2) { - TCGv t0, t1; + TCGv_i32 t0, t1; uint32_t XRa, Rb, Rc; =20 - t0 =3D tcg_temp_new(); - t1 =3D tcg_temp_new(); + t0 =3D tcg_temp_new_i32(); + t1 =3D tcg_temp_new_i32(); =20 XRa =3D extract32(ctx->opcode, 6, 4); Rc =3D extract32(ctx->opcode, 16, 5); @@ -1670,11 +1670,11 @@ static void gen_mxu_s32stxvx(DisasContext *ctx, boo= l reversed, =20 gen_load_gpr(t0, Rb); gen_load_gpr(t1, Rc); - tcg_gen_shli_tl(t1, t1, strd2); - tcg_gen_add_tl(t0, t0, t1); + tcg_gen_shli_i32(t1, t1, strd2); + tcg_gen_add_i32(t0, t0, t1); =20 gen_load_mxu_gpr(t1, XRa); - tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, + tcg_gen_qemu_st_i32(t1, t0, ctx->mem_idx, MO_SL | mo_endian_rev(ctx, reversed) | ctx->default_tcg_memop_mask); =20 @@ -1859,23 +1859,23 @@ static void gen_mxu_d32sxx(DisasContext *ctx, bool = right, bool arithmetic) XRd =3D extract32(ctx->opcode, 18, 4); sft4 =3D extract32(ctx->opcode, 22, 4); =20 - TCGv t0 =3D tcg_temp_new(); - TCGv t1 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); =20 gen_load_mxu_gpr(t0, XRb); gen_load_mxu_gpr(t1, XRc); =20 if (right) { if 
(arithmetic) { - tcg_gen_sari_tl(t0, t0, sft4); - tcg_gen_sari_tl(t1, t1, sft4); + tcg_gen_sari_i32(t0, t0, sft4); + tcg_gen_sari_i32(t1, t1, sft4); } else { - tcg_gen_shri_tl(t0, t0, sft4); - tcg_gen_shri_tl(t1, t1, sft4); + tcg_gen_shri_i32(t0, t0, sft4); + tcg_gen_shri_i32(t1, t1, sft4); } } else { - tcg_gen_shli_tl(t0, t0, sft4); - tcg_gen_shli_tl(t1, t1, sft4); + tcg_gen_shli_i32(t0, t0, sft4); + tcg_gen_shli_i32(t1, t1, sft4); } gen_store_mxu_gpr(t0, XRa); gen_store_mxu_gpr(t1, XRd); @@ -1900,26 +1900,26 @@ static void gen_mxu_d32sxxv(DisasContext *ctx, bool= right, bool arithmetic) XRd =3D extract32(ctx->opcode, 14, 4); rs =3D extract32(ctx->opcode, 21, 5); =20 - TCGv t0 =3D tcg_temp_new(); - TCGv t1 =3D tcg_temp_new(); - TCGv t2 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); + TCGv_i32 t2 =3D tcg_temp_new_i32(); =20 gen_load_mxu_gpr(t0, XRa); gen_load_mxu_gpr(t1, XRd); gen_load_gpr(t2, rs); - tcg_gen_andi_tl(t2, t2, 0x0f); + tcg_gen_andi_i32(t2, t2, 0x0f); =20 if (right) { if (arithmetic) { - tcg_gen_sar_tl(t0, t0, t2); - tcg_gen_sar_tl(t1, t1, t2); + tcg_gen_sar_i32(t0, t0, t2); + tcg_gen_sar_i32(t1, t1, t2); } else { - tcg_gen_shr_tl(t0, t0, t2); - tcg_gen_shr_tl(t1, t1, t2); + tcg_gen_shr_i32(t0, t0, t2); + tcg_gen_shr_i32(t1, t1, t2); } } else { - tcg_gen_shl_tl(t0, t0, t2); - tcg_gen_shl_tl(t1, t1, t2); + tcg_gen_shl_i32(t0, t0, t2); + tcg_gen_shl_i32(t1, t1, t2); } gen_store_mxu_gpr(t0, XRa); gen_store_mxu_gpr(t1, XRd); @@ -1946,23 +1946,23 @@ static void gen_mxu_d32sarl(DisasContext *ctx, bool= sarw) if (unlikely(XRa =3D=3D 0)) { /* destination is zero register -> do nothing */ } else { - TCGv t0 =3D tcg_temp_new(); - TCGv t1 =3D tcg_temp_new(); - TCGv t2 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); + TCGv_i32 t2 =3D tcg_temp_new_i32(); =20 if (!sarw) { /* Make SFT4 from rb field */ - tcg_gen_movi_tl(t2, rb >> 1); + tcg_gen_movi_i32(t2, rb >> 1); } 
else { gen_load_gpr(t2, rb); - tcg_gen_andi_tl(t2, t2, 0x0f); + tcg_gen_andi_i32(t2, t2, 0x0f); } gen_load_mxu_gpr(t0, XRb); gen_load_mxu_gpr(t1, XRc); - tcg_gen_sar_tl(t0, t0, t2); - tcg_gen_sar_tl(t1, t1, t2); - tcg_gen_extract_tl(t2, t1, 0, 16); - tcg_gen_deposit_tl(t2, t2, t0, 16, 16); + tcg_gen_sar_i32(t0, t0, t2); + tcg_gen_sar_i32(t1, t1, t2); + tcg_gen_extract_i32(t2, t1, 0, 16); + tcg_gen_deposit_i32(t2, t2, t0, 16, 16); gen_store_mxu_gpr(t2, XRa); } } @@ -1988,46 +1988,46 @@ static void gen_mxu_q16sxx(DisasContext *ctx, bool = right, bool arithmetic) XRd =3D extract32(ctx->opcode, 18, 4); sft4 =3D extract32(ctx->opcode, 22, 4); =20 - TCGv t0 =3D tcg_temp_new(); - TCGv t1 =3D tcg_temp_new(); - TCGv t2 =3D tcg_temp_new(); - TCGv t3 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); + TCGv_i32 t2 =3D tcg_temp_new_i32(); + TCGv_i32 t3 =3D tcg_temp_new_i32(); =20 gen_load_mxu_gpr(t0, XRb); gen_load_mxu_gpr(t2, XRc); =20 if (arithmetic) { - tcg_gen_sextract_tl(t1, t0, 16, 16); - tcg_gen_sextract_tl(t0, t0, 0, 16); - tcg_gen_sextract_tl(t3, t2, 16, 16); - tcg_gen_sextract_tl(t2, t2, 0, 16); + tcg_gen_sextract_i32(t1, t0, 16, 16); + tcg_gen_sextract_i32(t0, t0, 0, 16); + tcg_gen_sextract_i32(t3, t2, 16, 16); + tcg_gen_sextract_i32(t2, t2, 0, 16); } else { - tcg_gen_extract_tl(t1, t0, 16, 16); - tcg_gen_extract_tl(t0, t0, 0, 16); - tcg_gen_extract_tl(t3, t2, 16, 16); - tcg_gen_extract_tl(t2, t2, 0, 16); + tcg_gen_extract_i32(t1, t0, 16, 16); + tcg_gen_extract_i32(t0, t0, 0, 16); + tcg_gen_extract_i32(t3, t2, 16, 16); + tcg_gen_extract_i32(t2, t2, 0, 16); } =20 if (right) { if (arithmetic) { - tcg_gen_sari_tl(t0, t0, sft4); - tcg_gen_sari_tl(t1, t1, sft4); - tcg_gen_sari_tl(t2, t2, sft4); - tcg_gen_sari_tl(t3, t3, sft4); + tcg_gen_sari_i32(t0, t0, sft4); + tcg_gen_sari_i32(t1, t1, sft4); + tcg_gen_sari_i32(t2, t2, sft4); + tcg_gen_sari_i32(t3, t3, sft4); } else { - tcg_gen_shri_tl(t0, t0, sft4); - tcg_gen_shri_tl(t1, t1, 
sft4); - tcg_gen_shri_tl(t2, t2, sft4); - tcg_gen_shri_tl(t3, t3, sft4); + tcg_gen_shri_i32(t0, t0, sft4); + tcg_gen_shri_i32(t1, t1, sft4); + tcg_gen_shri_i32(t2, t2, sft4); + tcg_gen_shri_i32(t3, t3, sft4); } } else { - tcg_gen_shli_tl(t0, t0, sft4); - tcg_gen_shli_tl(t1, t1, sft4); - tcg_gen_shli_tl(t2, t2, sft4); - tcg_gen_shli_tl(t3, t3, sft4); + tcg_gen_shli_i32(t0, t0, sft4); + tcg_gen_shli_i32(t1, t1, sft4); + tcg_gen_shli_i32(t2, t2, sft4); + tcg_gen_shli_i32(t3, t3, sft4); } - tcg_gen_deposit_tl(t0, t0, t1, 16, 16); - tcg_gen_deposit_tl(t2, t2, t3, 16, 16); + tcg_gen_deposit_i32(t0, t0, t1, 16, 16); + tcg_gen_deposit_i32(t2, t2, t3, 16, 16); =20 gen_store_mxu_gpr(t0, XRa); gen_store_mxu_gpr(t2, XRd); @@ -2052,50 +2052,50 @@ static void gen_mxu_q16sxxv(DisasContext *ctx, bool= right, bool arithmetic) XRd =3D extract32(ctx->opcode, 14, 4); rs =3D extract32(ctx->opcode, 21, 5); =20 - TCGv t0 =3D tcg_temp_new(); - TCGv t1 =3D tcg_temp_new(); - TCGv t2 =3D tcg_temp_new(); - TCGv t3 =3D tcg_temp_new(); - TCGv t5 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); + TCGv_i32 t2 =3D tcg_temp_new_i32(); + TCGv_i32 t3 =3D tcg_temp_new_i32(); + TCGv_i32 t5 =3D tcg_temp_new_i32(); =20 gen_load_mxu_gpr(t0, XRa); gen_load_mxu_gpr(t2, XRd); gen_load_gpr(t5, rs); - tcg_gen_andi_tl(t5, t5, 0x0f); + tcg_gen_andi_i32(t5, t5, 0x0f); =20 =20 if (arithmetic) { - tcg_gen_sextract_tl(t1, t0, 16, 16); - tcg_gen_sextract_tl(t0, t0, 0, 16); - tcg_gen_sextract_tl(t3, t2, 16, 16); - tcg_gen_sextract_tl(t2, t2, 0, 16); + tcg_gen_sextract_i32(t1, t0, 16, 16); + tcg_gen_sextract_i32(t0, t0, 0, 16); + tcg_gen_sextract_i32(t3, t2, 16, 16); + tcg_gen_sextract_i32(t2, t2, 0, 16); } else { - tcg_gen_extract_tl(t1, t0, 16, 16); - tcg_gen_extract_tl(t0, t0, 0, 16); - tcg_gen_extract_tl(t3, t2, 16, 16); - tcg_gen_extract_tl(t2, t2, 0, 16); + tcg_gen_extract_i32(t1, t0, 16, 16); + tcg_gen_extract_i32(t0, t0, 0, 16); + tcg_gen_extract_i32(t3, t2, 16, 
16); + tcg_gen_extract_i32(t2, t2, 0, 16); } =20 if (right) { if (arithmetic) { - tcg_gen_sar_tl(t0, t0, t5); - tcg_gen_sar_tl(t1, t1, t5); - tcg_gen_sar_tl(t2, t2, t5); - tcg_gen_sar_tl(t3, t3, t5); + tcg_gen_sar_i32(t0, t0, t5); + tcg_gen_sar_i32(t1, t1, t5); + tcg_gen_sar_i32(t2, t2, t5); + tcg_gen_sar_i32(t3, t3, t5); } else { - tcg_gen_shr_tl(t0, t0, t5); - tcg_gen_shr_tl(t1, t1, t5); - tcg_gen_shr_tl(t2, t2, t5); - tcg_gen_shr_tl(t3, t3, t5); + tcg_gen_shr_i32(t0, t0, t5); + tcg_gen_shr_i32(t1, t1, t5); + tcg_gen_shr_i32(t2, t2, t5); + tcg_gen_shr_i32(t3, t3, t5); } } else { - tcg_gen_shl_tl(t0, t0, t5); - tcg_gen_shl_tl(t1, t1, t5); - tcg_gen_shl_tl(t2, t2, t5); - tcg_gen_shl_tl(t3, t3, t5); + tcg_gen_shl_i32(t0, t0, t5); + tcg_gen_shl_i32(t1, t1, t5); + tcg_gen_shl_i32(t2, t2, t5); + tcg_gen_shl_i32(t3, t3, t5); } - tcg_gen_deposit_tl(t0, t0, t1, 16, 16); - tcg_gen_deposit_tl(t2, t2, t3, 16, 16); + tcg_gen_deposit_i32(t0, t0, t1, 16, 16); + tcg_gen_deposit_i32(t2, t2, t3, 16, 16); =20 gen_store_mxu_gpr(t0, XRa); gen_store_mxu_gpr(t2, XRd); @@ -2195,9 +2195,9 @@ static void gen_mxu_D16MAX_D16MIN(DisasContext *ctx) /* exactly one operand is zero register - find which one is not...= */ uint32_t XRx =3D XRb ? 
XRb : XRc; /* ...and do half-word-wise max/min with one operand 0 */ - TCGv_i32 t0 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); TCGv_i32 t1 =3D tcg_constant_i32(0); - TCGv_i32 t2 =3D tcg_temp_new(); + TCGv_i32 t2 =3D tcg_temp_new_i32(); =20 /* the left half-word first */ tcg_gen_andi_i32(t0, mxu_gpr[XRx - 1], 0xFFFF0000); @@ -2226,9 +2226,9 @@ static void gen_mxu_D16MAX_D16MIN(DisasContext *ctx) tcg_gen_mov_i32(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]); } else { /* the most general case */ - TCGv_i32 t0 =3D tcg_temp_new(); - TCGv_i32 t1 =3D tcg_temp_new(); - TCGv_i32 t2 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); + TCGv_i32 t2 =3D tcg_temp_new_i32(); =20 /* the left half-word first */ tcg_gen_andi_i32(t0, mxu_gpr[XRb - 1], 0xFFFF0000); @@ -2288,9 +2288,9 @@ static void gen_mxu_Q8MAX_Q8MIN(DisasContext *ctx) /* exactly one operand is zero register - make it be the first...*/ uint32_t XRx =3D XRb ? XRb : XRc; /* ...and do byte-wise max/min with one operand 0 */ - TCGv_i32 t0 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); TCGv_i32 t1 =3D tcg_constant_i32(0); - TCGv_i32 t2 =3D tcg_temp_new(); + TCGv_i32 t2 =3D tcg_temp_new_i32(); int32_t i; =20 /* the leftmost byte (byte 3) first */ @@ -2324,9 +2324,9 @@ static void gen_mxu_Q8MAX_Q8MIN(DisasContext *ctx) tcg_gen_mov_i32(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]); } else { /* the most general case */ - TCGv_i32 t0 =3D tcg_temp_new(); - TCGv_i32 t1 =3D tcg_temp_new(); - TCGv_i32 t2 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); + TCGv_i32 t2 =3D tcg_temp_new_i32(); int32_t i; =20 /* the leftmost bytes (bytes 3) first */ @@ -2387,32 +2387,32 @@ static void gen_mxu_q8slt(DisasContext *ctx, bool s= ltu) /* destination is zero register -> do nothing */ } else if (unlikely((XRb =3D=3D 0) && (XRc =3D=3D 0))) { /* both operands zero registers -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + 
tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else if (unlikely(XRb =3D=3D XRc)) { /* both operands same registers -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else { /* the most general case */ - TCGv t0 =3D tcg_temp_new(); - TCGv t1 =3D tcg_temp_new(); - TCGv t2 =3D tcg_temp_new(); - TCGv t3 =3D tcg_temp_new(); - TCGv t4 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); + TCGv_i32 t2 =3D tcg_temp_new_i32(); + TCGv_i32 t3 =3D tcg_temp_new_i32(); + TCGv_i32 t4 =3D tcg_temp_new_i32(); =20 gen_load_mxu_gpr(t3, XRb); gen_load_mxu_gpr(t4, XRc); - tcg_gen_movi_tl(t2, 0); + tcg_gen_movi_i32(t2, 0); =20 for (int i =3D 0; i < 4; i++) { if (sltu) { - tcg_gen_extract_tl(t0, t3, 8 * i, 8); - tcg_gen_extract_tl(t1, t4, 8 * i, 8); + tcg_gen_extract_i32(t0, t3, 8 * i, 8); + tcg_gen_extract_i32(t1, t4, 8 * i, 8); } else { - tcg_gen_sextract_tl(t0, t3, 8 * i, 8); - tcg_gen_sextract_tl(t1, t4, 8 * i, 8); + tcg_gen_sextract_i32(t0, t3, 8 * i, 8); + tcg_gen_sextract_i32(t1, t4, 8 * i, 8); } - tcg_gen_setcond_tl(TCG_COND_LT, t0, t0, t1); - tcg_gen_deposit_tl(t2, t2, t0, 8 * i, 8); + tcg_gen_setcond_i32(TCG_COND_LT, t0, t0, t1); + tcg_gen_deposit_i32(t2, t2, t0, 8 * i, 8); } gen_store_mxu_gpr(t2, XRa); } @@ -2438,18 +2438,18 @@ static void gen_mxu_S32SLT(DisasContext *ctx) /* destination is zero register -> do nothing */ } else if (unlikely((XRb =3D=3D 0) && (XRc =3D=3D 0))) { /* both operands zero registers -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else if (unlikely(XRb =3D=3D XRc)) { /* both operands same registers -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else { /* the most general case */ - TCGv t0 =3D tcg_temp_new(); - TCGv t1 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D 
tcg_temp_new_i32(); =20 gen_load_mxu_gpr(t0, XRb); gen_load_mxu_gpr(t1, XRc); - tcg_gen_setcond_tl(TCG_COND_LT, mxu_gpr[XRa - 1], t0, t1); + tcg_gen_setcond_i32(TCG_COND_LT, mxu_gpr[XRa - 1], t0, t1); } } =20 @@ -2474,28 +2474,28 @@ static void gen_mxu_D16SLT(DisasContext *ctx) /* destination is zero register -> do nothing */ } else if (unlikely((XRb =3D=3D 0) && (XRc =3D=3D 0))) { /* both operands zero registers -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else if (unlikely(XRb =3D=3D XRc)) { /* both operands same registers -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else { /* the most general case */ - TCGv t0 =3D tcg_temp_new(); - TCGv t1 =3D tcg_temp_new(); - TCGv t2 =3D tcg_temp_new(); - TCGv t3 =3D tcg_temp_new(); - TCGv t4 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); + TCGv_i32 t2 =3D tcg_temp_new_i32(); + TCGv_i32 t3 =3D tcg_temp_new_i32(); + TCGv_i32 t4 =3D tcg_temp_new_i32(); =20 gen_load_mxu_gpr(t3, XRb); gen_load_mxu_gpr(t4, XRc); - tcg_gen_sextract_tl(t0, t3, 16, 16); - tcg_gen_sextract_tl(t1, t4, 16, 16); - tcg_gen_setcond_tl(TCG_COND_LT, t0, t0, t1); - tcg_gen_shli_tl(t2, t0, 16); - tcg_gen_sextract_tl(t0, t3, 0, 16); - tcg_gen_sextract_tl(t1, t4, 0, 16); - tcg_gen_setcond_tl(TCG_COND_LT, t0, t0, t1); - tcg_gen_or_tl(mxu_gpr[XRa - 1], t2, t0); + tcg_gen_sextract_i32(t0, t3, 16, 16); + tcg_gen_sextract_i32(t1, t4, 16, 16); + tcg_gen_setcond_i32(TCG_COND_LT, t0, t0, t1); + tcg_gen_shli_i32(t2, t0, 16); + tcg_gen_sextract_i32(t0, t3, 0, 16); + tcg_gen_sextract_i32(t1, t4, 0, 16); + tcg_gen_setcond_i32(TCG_COND_LT, t0, t0, t1); + tcg_gen_or_i32(mxu_gpr[XRa - 1], t2, t0); } } =20 @@ -2525,36 +2525,36 @@ static void gen_mxu_d16avg(DisasContext *ctx, bool = round45) /* destination is zero register -> do nothing */ } else if (unlikely((XRb =3D=3D 0) && (XRc =3D=3D 
0))) { /* both operands zero registers -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else if (unlikely(XRb =3D=3D XRc)) { /* both operands same registers -> just set destination to same */ - tcg_gen_mov_tl(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]); + tcg_gen_mov_i32(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]); } else { /* the most general case */ - TCGv t0 =3D tcg_temp_new(); - TCGv t1 =3D tcg_temp_new(); - TCGv t2 =3D tcg_temp_new(); - TCGv t3 =3D tcg_temp_new(); - TCGv t4 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); + TCGv_i32 t2 =3D tcg_temp_new_i32(); + TCGv_i32 t3 =3D tcg_temp_new_i32(); + TCGv_i32 t4 =3D tcg_temp_new_i32(); =20 gen_load_mxu_gpr(t3, XRb); gen_load_mxu_gpr(t4, XRc); - tcg_gen_sextract_tl(t0, t3, 16, 16); - tcg_gen_sextract_tl(t1, t4, 16, 16); - tcg_gen_add_tl(t0, t0, t1); + tcg_gen_sextract_i32(t0, t3, 16, 16); + tcg_gen_sextract_i32(t1, t4, 16, 16); + tcg_gen_add_i32(t0, t0, t1); if (round45) { - tcg_gen_addi_tl(t0, t0, 1); + tcg_gen_addi_i32(t0, t0, 1); } - tcg_gen_shli_tl(t2, t0, 15); - tcg_gen_andi_tl(t2, t2, 0xffff0000); - tcg_gen_sextract_tl(t0, t3, 0, 16); - tcg_gen_sextract_tl(t1, t4, 0, 16); - tcg_gen_add_tl(t0, t0, t1); + tcg_gen_shli_i32(t2, t0, 15); + tcg_gen_andi_i32(t2, t2, 0xffff0000); + tcg_gen_sextract_i32(t0, t3, 0, 16); + tcg_gen_sextract_i32(t1, t4, 0, 16); + tcg_gen_add_i32(t0, t0, t1); if (round45) { - tcg_gen_addi_tl(t0, t0, 1); + tcg_gen_addi_i32(t0, t0, 1); } - tcg_gen_shri_tl(t0, t0, 1); - tcg_gen_deposit_tl(t2, t2, t0, 0, 16); + tcg_gen_shri_i32(t0, t0, 1); + tcg_gen_deposit_i32(t2, t2, t0, 0, 16); gen_store_mxu_gpr(t2, XRa); } } @@ -2585,31 +2585,31 @@ static void gen_mxu_q8avg(DisasContext *ctx, bool r= ound45) /* destination is zero register -> do nothing */ } else if (unlikely((XRb =3D=3D 0) && (XRc =3D=3D 0))) { /* both operands zero registers -> just set destination to zero */ - 
tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else if (unlikely(XRb =3D=3D XRc)) { /* both operands same registers -> just set destination to same */ - tcg_gen_mov_tl(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]); + tcg_gen_mov_i32(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]); } else { /* the most general case */ - TCGv t0 =3D tcg_temp_new(); - TCGv t1 =3D tcg_temp_new(); - TCGv t2 =3D tcg_temp_new(); - TCGv t3 =3D tcg_temp_new(); - TCGv t4 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); + TCGv_i32 t2 =3D tcg_temp_new_i32(); + TCGv_i32 t3 =3D tcg_temp_new_i32(); + TCGv_i32 t4 =3D tcg_temp_new_i32(); =20 gen_load_mxu_gpr(t3, XRb); gen_load_mxu_gpr(t4, XRc); - tcg_gen_movi_tl(t2, 0); + tcg_gen_movi_i32(t2, 0); =20 for (int i =3D 0; i < 4; i++) { - tcg_gen_extract_tl(t0, t3, 8 * i, 8); - tcg_gen_extract_tl(t1, t4, 8 * i, 8); - tcg_gen_add_tl(t0, t0, t1); + tcg_gen_extract_i32(t0, t3, 8 * i, 8); + tcg_gen_extract_i32(t1, t4, 8 * i, 8); + tcg_gen_add_i32(t0, t0, t1); if (round45) { - tcg_gen_addi_tl(t0, t0, 1); + tcg_gen_addi_i32(t0, t0, 1); } - tcg_gen_shri_tl(t0, t0, 1); - tcg_gen_deposit_tl(t2, t2, t0, 8 * i, 8); + tcg_gen_shri_i32(t0, t0, 1); + tcg_gen_deposit_i32(t2, t2, t0, 8 * i, 8); } gen_store_mxu_gpr(t2, XRa); } @@ -2636,10 +2636,10 @@ static void gen_mxu_q8movzn(DisasContext *ctx, TCGC= ond cond) XRb =3D extract32(ctx->opcode, 10, 4); XRc =3D extract32(ctx->opcode, 14, 4); =20 - TCGv t0 =3D tcg_temp_new(); - TCGv t1 =3D tcg_temp_new(); - TCGv t2 =3D tcg_temp_new(); - TCGv t3 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); + TCGv_i32 t2 =3D tcg_temp_new_i32(); + TCGv_i32 t3 =3D tcg_temp_new_i32(); TCGLabel *l_quarterdone =3D gen_new_label(); TCGLabel *l_halfdone =3D gen_new_label(); TCGLabel *l_quarterrest =3D gen_new_label(); @@ -2649,28 +2649,28 @@ static void gen_mxu_q8movzn(DisasContext *ctx, TCGC= ond cond) gen_load_mxu_gpr(t1, XRb); 
gen_load_mxu_gpr(t2, XRa); =20 - tcg_gen_extract_tl(t3, t1, 24, 8); - tcg_gen_brcondi_tl(cond, t3, 0, l_quarterdone); - tcg_gen_extract_tl(t3, t0, 24, 8); - tcg_gen_deposit_tl(t2, t2, t3, 24, 8); + tcg_gen_extract_i32(t3, t1, 24, 8); + tcg_gen_brcondi_i32(cond, t3, 0, l_quarterdone); + tcg_gen_extract_i32(t3, t0, 24, 8); + tcg_gen_deposit_i32(t2, t2, t3, 24, 8); =20 gen_set_label(l_quarterdone); - tcg_gen_extract_tl(t3, t1, 16, 8); - tcg_gen_brcondi_tl(cond, t3, 0, l_halfdone); - tcg_gen_extract_tl(t3, t0, 16, 8); - tcg_gen_deposit_tl(t2, t2, t3, 16, 8); + tcg_gen_extract_i32(t3, t1, 16, 8); + tcg_gen_brcondi_i32(cond, t3, 0, l_halfdone); + tcg_gen_extract_i32(t3, t0, 16, 8); + tcg_gen_deposit_i32(t2, t2, t3, 16, 8); =20 gen_set_label(l_halfdone); - tcg_gen_extract_tl(t3, t1, 8, 8); - tcg_gen_brcondi_tl(cond, t3, 0, l_quarterrest); - tcg_gen_extract_tl(t3, t0, 8, 8); - tcg_gen_deposit_tl(t2, t2, t3, 8, 8); + tcg_gen_extract_i32(t3, t1, 8, 8); + tcg_gen_brcondi_i32(cond, t3, 0, l_quarterrest); + tcg_gen_extract_i32(t3, t0, 8, 8); + tcg_gen_deposit_i32(t2, t2, t3, 8, 8); =20 gen_set_label(l_quarterrest); - tcg_gen_extract_tl(t3, t1, 0, 8); - tcg_gen_brcondi_tl(cond, t3, 0, l_done); - tcg_gen_extract_tl(t3, t0, 0, 8); - tcg_gen_deposit_tl(t2, t2, t3, 0, 8); + tcg_gen_extract_i32(t3, t1, 0, 8); + tcg_gen_brcondi_i32(cond, t3, 0, l_done); + tcg_gen_extract_i32(t3, t0, 0, 8); + tcg_gen_deposit_i32(t2, t2, t3, 0, 8); =20 gen_set_label(l_done); gen_store_mxu_gpr(t2, XRa); @@ -2697,10 +2697,10 @@ static void gen_mxu_d16movzn(DisasContext *ctx, TCG= Cond cond) XRb =3D extract32(ctx->opcode, 10, 4); XRc =3D extract32(ctx->opcode, 14, 4); =20 - TCGv t0 =3D tcg_temp_new(); - TCGv t1 =3D tcg_temp_new(); - TCGv t2 =3D tcg_temp_new(); - TCGv t3 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); + TCGv_i32 t2 =3D tcg_temp_new_i32(); + TCGv_i32 t3 =3D tcg_temp_new_i32(); TCGLabel *l_halfdone =3D gen_new_label(); TCGLabel *l_done =3D 
gen_new_label();

@@ -2708,16 +2708,16 @@ static void gen_mxu_d16movzn(DisasContext *ctx, TCGCond cond)
     gen_load_mxu_gpr(t1, XRb);
     gen_load_mxu_gpr(t2, XRa);

-    tcg_gen_extract_tl(t3, t1, 16, 16);
-    tcg_gen_brcondi_tl(cond, t3, 0, l_halfdone);
-    tcg_gen_extract_tl(t3, t0, 16, 16);
-    tcg_gen_deposit_tl(t2, t2, t3, 16, 16);
+    tcg_gen_extract_i32(t3, t1, 16, 16);
+    tcg_gen_brcondi_i32(cond, t3, 0, l_halfdone);
+    tcg_gen_extract_i32(t3, t0, 16, 16);
+    tcg_gen_deposit_i32(t2, t2, t3, 16, 16);

     gen_set_label(l_halfdone);
-    tcg_gen_extract_tl(t3, t1, 0, 16);
-    tcg_gen_brcondi_tl(cond, t3, 0, l_done);
-    tcg_gen_extract_tl(t3, t0, 0, 16);
-    tcg_gen_deposit_tl(t2, t2, t3, 0, 16);
+    tcg_gen_extract_i32(t3, t1, 0, 16);
+    tcg_gen_brcondi_i32(cond, t3, 0, l_done);
+    tcg_gen_extract_i32(t3, t0, 0, 16);
+    tcg_gen_deposit_i32(t2, t2, t3, 0, 16);

     gen_set_label(l_done);
     gen_store_mxu_gpr(t2, XRa);
@@ -2744,14 +2744,14 @@ static void gen_mxu_s32movzn(DisasContext *ctx, TCGCond cond)
     XRb = extract32(ctx->opcode, 10, 4);
     XRc = extract32(ctx->opcode, 14, 4);

-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
+    TCGv_i32 t0 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_new_i32();
     TCGLabel *l_done = gen_new_label();

     gen_load_mxu_gpr(t0, XRc);
     gen_load_mxu_gpr(t1, XRb);

-    tcg_gen_brcondi_tl(cond, t1, 0, l_done);
+    tcg_gen_brcondi_i32(cond, t1, 0, l_done);
     gen_store_mxu_gpr(t0, XRa);
     gen_set_label(l_done);
 }
@@ -2784,18 +2784,18 @@ static void gen_mxu_S32CPS(DisasContext *ctx)
         /* destination is zero register -> do nothing */
     } else if (unlikely(XRb == 0)) {
         /* XRc make no sense 0 - 0 = 0 -> just set destination to zero */
-        tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0);
+        tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0);
     } else if (unlikely(XRc == 0)) {
         /* condition always false -> just move XRb to XRa */
-        tcg_gen_mov_tl(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]);
+        tcg_gen_mov_i32(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]);
     } else {
         /* the most general case */
-        TCGv t0 = tcg_temp_new();
+        TCGv_i32 t0 = tcg_temp_new_i32();
         TCGLabel *l_not_less = gen_new_label();
         TCGLabel *l_done = gen_new_label();

-        tcg_gen_brcondi_tl(TCG_COND_GE, mxu_gpr[XRc - 1], 0, l_not_less);
-        tcg_gen_neg_tl(t0, mxu_gpr[XRb - 1]);
+        tcg_gen_brcondi_i32(TCG_COND_GE, mxu_gpr[XRc - 1], 0, l_not_less);
+        tcg_gen_neg_i32(t0, mxu_gpr[XRb - 1]);
         tcg_gen_br(l_done);
         gen_set_label(l_not_less);
         gen_load_mxu_gpr(t0, XRb);
@@ -2824,37 +2824,37 @@ static void gen_mxu_D16CPS(DisasContext *ctx)
         /* destination is zero register -> do nothing */
     } else if (unlikely(XRb == 0)) {
         /* XRc make no sense 0 - 0 = 0 -> just set destination to zero */
-        tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0);
+        tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0);
     } else if (unlikely(XRc == 0)) {
         /* condition always false -> just move XRb to XRa */
-        tcg_gen_mov_tl(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]);
+        tcg_gen_mov_i32(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]);
     } else {
         /* the most general case */
-        TCGv t0 = tcg_temp_new();
-        TCGv t1 = tcg_temp_new();
+        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t1 = tcg_temp_new_i32();
         TCGLabel *l_done_hi = gen_new_label();
         TCGLabel *l_not_less_lo = gen_new_label();
         TCGLabel *l_done_lo = gen_new_label();

-        tcg_gen_sextract_tl(t0, mxu_gpr[XRc - 1], 16, 16);
-        tcg_gen_sextract_tl(t1, mxu_gpr[XRb - 1], 16, 16);
-        tcg_gen_brcondi_tl(TCG_COND_GE, t0, 0, l_done_hi);
-        tcg_gen_subfi_tl(t1, 0, t1);
+        tcg_gen_sextract_i32(t0, mxu_gpr[XRc - 1], 16, 16);
+        tcg_gen_sextract_i32(t1, mxu_gpr[XRb - 1], 16, 16);
+        tcg_gen_brcondi_i32(TCG_COND_GE, t0, 0, l_done_hi);
+        tcg_gen_subfi_i32(t1, 0, t1);

         gen_set_label(l_done_hi);
         tcg_gen_shli_i32(t1, t1, 16);

-        tcg_gen_sextract_tl(t0, mxu_gpr[XRc - 1], 0, 16);
-        tcg_gen_brcondi_tl(TCG_COND_GE, t0, 0, l_not_less_lo);
-        tcg_gen_sextract_tl(t0, mxu_gpr[XRb - 1], 0, 16);
-        tcg_gen_subfi_tl(t0, 0, t0);
+        tcg_gen_sextract_i32(t0, mxu_gpr[XRc - 1], 0, 16);
+        tcg_gen_brcondi_i32(TCG_COND_GE, t0, 0, l_not_less_lo);
+        tcg_gen_sextract_i32(t0, mxu_gpr[XRb - 1], 0, 16);
+        tcg_gen_subfi_i32(t0, 0, t0);
         tcg_gen_br(l_done_lo);

         gen_set_label(l_not_less_lo);
-        tcg_gen_extract_tl(t0, mxu_gpr[XRb - 1], 0, 16);
+        tcg_gen_extract_i32(t0, mxu_gpr[XRb - 1], 0, 16);

         gen_set_label(l_done_lo);
-        tcg_gen_deposit_tl(mxu_gpr[XRa - 1], t1, t0, 0, 16);
+        tcg_gen_deposit_i32(mxu_gpr[XRa - 1], t1, t0, 0, 16);
     }
 }

@@ -2880,27 +2880,27 @@ static void gen_mxu_Q8ABD(DisasContext *ctx)
         /* destination is zero register -> do nothing */
     } else if (unlikely((XRb == 0) && (XRc == 0))) {
         /* both operands zero registers -> just set destination to zero */
-        tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0);
+        tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0);
     } else {
         /* the most general case */
-        TCGv t0 = tcg_temp_new();
-        TCGv t1 = tcg_temp_new();
-        TCGv t2 = tcg_temp_new();
-        TCGv t3 = tcg_temp_new();
-        TCGv t4 = tcg_temp_new();
+        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t1 = tcg_temp_new_i32();
+        TCGv_i32 t2 = tcg_temp_new_i32();
+        TCGv_i32 t3 = tcg_temp_new_i32();
+        TCGv_i32 t4 = tcg_temp_new_i32();

         gen_load_mxu_gpr(t3, XRb);
         gen_load_mxu_gpr(t4, XRc);
-        tcg_gen_movi_tl(t2, 0);
+        tcg_gen_movi_i32(t2, 0);

         for (int i = 0; i < 4; i++) {
-            tcg_gen_extract_tl(t0, t3, 8 * i, 8);
-            tcg_gen_extract_tl(t1, t4, 8 * i, 8);
+            tcg_gen_extract_i32(t0, t3, 8 * i, 8);
+            tcg_gen_extract_i32(t1, t4, 8 * i, 8);

-            tcg_gen_sub_tl(t0, t0, t1);
-            tcg_gen_abs_tl(t0, t0);
+            tcg_gen_sub_i32(t0, t0, t1);
+            tcg_gen_abs_i32(t0, t0);

-            tcg_gen_deposit_tl(t2, t2, t0, 8 * i, 8);
+            tcg_gen_deposit_i32(t2, t2, t0, 8 * i, 8);
         }
         gen_store_mxu_gpr(t2, XRa);
     }
@@ -2930,41 +2930,41 @@ static void gen_mxu_Q8ADD(DisasContext *ctx)
         tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0);
     } else {
         /* the most general case */
-        TCGv t0 = tcg_temp_new();
-        TCGv t1 = tcg_temp_new();
-        TCGv t2 = tcg_temp_new();
-        TCGv t3 = tcg_temp_new();
-        TCGv t4 = tcg_temp_new();
+        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t1 = tcg_temp_new_i32();
+        TCGv_i32 t2 = tcg_temp_new_i32();
+        TCGv_i32 t3 = tcg_temp_new_i32();
+        TCGv_i32 t4 = tcg_temp_new_i32();

         gen_load_mxu_gpr(t3, XRb);
         gen_load_mxu_gpr(t4, XRc);

         for (int i = 0; i < 4; i++) {
-            tcg_gen_andi_tl(t0, t3, 0xff);
-            tcg_gen_andi_tl(t1, t4, 0xff);
+            tcg_gen_andi_i32(t0, t3, 0xff);
+            tcg_gen_andi_i32(t1, t4, 0xff);

             if (i < 2) {
                 if (aptn2 & 0x01) {
-                    tcg_gen_sub_tl(t0, t0, t1);
+                    tcg_gen_sub_i32(t0, t0, t1);
                 } else {
-                    tcg_gen_add_tl(t0, t0, t1);
+                    tcg_gen_add_i32(t0, t0, t1);
                 }
             } else {
                 if (aptn2 & 0x02) {
-                    tcg_gen_sub_tl(t0, t0, t1);
+                    tcg_gen_sub_i32(t0, t0, t1);
                 } else {
-                    tcg_gen_add_tl(t0, t0, t1);
+                    tcg_gen_add_i32(t0, t0, t1);
                 }
             }
             if (i < 3) {
-                tcg_gen_shri_tl(t3, t3, 8);
-                tcg_gen_shri_tl(t4, t4, 8);
+                tcg_gen_shri_i32(t3, t3, 8);
+                tcg_gen_shri_i32(t4, t4, 8);
             }
             if (i > 0) {
-                tcg_gen_deposit_tl(t2, t2, t0, 8 * i, 8);
+                tcg_gen_deposit_i32(t2, t2, t0, 8 * i, 8);
             } else {
-                tcg_gen_andi_tl(t0, t0, 0xff);
-                tcg_gen_mov_tl(t2, t0);
+                tcg_gen_andi_i32(t0, t0, 0xff);
+                tcg_gen_mov_i32(t2, t0);
             }
         }
         gen_store_mxu_gpr(t2, XRa);
@@ -2999,19 +2999,19 @@ static void gen_mxu_q8adde(DisasContext *ctx, bool accumulate)
     if (unlikely((XRb == 0) && (XRc == 0))) {
         /* both operands zero registers -> just set destination to zero */
         if (XRa != 0) {
-            tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0);
+            tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0);
         }
         if (XRd != 0) {
-            tcg_gen_movi_tl(mxu_gpr[XRd - 1], 0);
+            tcg_gen_movi_i32(mxu_gpr[XRd - 1], 0);
         }
     } else {
         /* the most general case */
-        TCGv t0 = tcg_temp_new();
-        TCGv t1 = tcg_temp_new();
-        TCGv t2 = tcg_temp_new();
-        TCGv t3 = tcg_temp_new();
-        TCGv t4 = tcg_temp_new();
-        TCGv t5 = tcg_temp_new();
+        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t1 = tcg_temp_new_i32();
+        TCGv_i32 t2 = tcg_temp_new_i32();
+        TCGv_i32 t3 = tcg_temp_new_i32();
+        TCGv_i32 t4 = tcg_temp_new_i32();
+        TCGv_i32 t5 = tcg_temp_new_i32();

         if (XRa != 0) {
            gen_extract_mxu_gpr(t0, XRb, 16, 8);
@@ -3019,22 +3019,22 @@ static void gen_mxu_q8adde(DisasContext *ctx, bool accumulate)
             gen_extract_mxu_gpr(t2, XRb, 24, 8);
             gen_extract_mxu_gpr(t3, XRc, 24, 8);
             if (aptn2 & 2) {
-                tcg_gen_sub_tl(t0, t0, t1);
-                tcg_gen_sub_tl(t2, t2, t3);
+                tcg_gen_sub_i32(t0, t0, t1);
+                tcg_gen_sub_i32(t2, t2, t3);
             } else {
-                tcg_gen_add_tl(t0, t0, t1);
-                tcg_gen_add_tl(t2, t2, t3);
+                tcg_gen_add_i32(t0, t0, t1);
+                tcg_gen_add_i32(t2, t2, t3);
             }
             if (accumulate) {
                 gen_load_mxu_gpr(t5, XRa);
-                tcg_gen_extract_tl(t1, t5, 0, 16);
-                tcg_gen_extract_tl(t3, t5, 16, 16);
-                tcg_gen_add_tl(t0, t0, t1);
-                tcg_gen_add_tl(t2, t2, t3);
+                tcg_gen_extract_i32(t1, t5, 0, 16);
+                tcg_gen_extract_i32(t3, t5, 16, 16);
+                tcg_gen_add_i32(t0, t0, t1);
+                tcg_gen_add_i32(t2, t2, t3);
             }
-            tcg_gen_shli_tl(t2, t2, 16);
-            tcg_gen_extract_tl(t0, t0, 0, 16);
-            tcg_gen_or_tl(t4, t2, t0);
+            tcg_gen_shli_i32(t2, t2, 16);
+            tcg_gen_extract_i32(t0, t0, 0, 16);
+            tcg_gen_or_i32(t4, t2, t0);
         }
         if (XRd != 0) {
             gen_extract_mxu_gpr(t0, XRb, 0, 8);
@@ -3042,22 +3042,22 @@ static void gen_mxu_q8adde(DisasContext *ctx, bool accumulate)
             gen_extract_mxu_gpr(t2, XRb, 8, 8);
             gen_extract_mxu_gpr(t3, XRc, 8, 8);
             if (aptn2 & 1) {
-                tcg_gen_sub_tl(t0, t0, t1);
-                tcg_gen_sub_tl(t2, t2, t3);
+                tcg_gen_sub_i32(t0, t0, t1);
+                tcg_gen_sub_i32(t2, t2, t3);
             } else {
-                tcg_gen_add_tl(t0, t0, t1);
-                tcg_gen_add_tl(t2, t2, t3);
+                tcg_gen_add_i32(t0, t0, t1);
+                tcg_gen_add_i32(t2, t2, t3);
             }
             if (accumulate) {
                 gen_load_mxu_gpr(t5, XRd);
-                tcg_gen_extract_tl(t1, t5, 0, 16);
-                tcg_gen_extract_tl(t3, t5, 16, 16);
-                tcg_gen_add_tl(t0, t0, t1);
-                tcg_gen_add_tl(t2, t2, t3);
+                tcg_gen_extract_i32(t1, t5, 0, 16);
+                tcg_gen_extract_i32(t3, t5, 16, 16);
+                tcg_gen_add_i32(t0, t0, t1);
+                tcg_gen_add_i32(t2, t2, t3);
             }
-            tcg_gen_shli_tl(t2, t2, 16);
-            tcg_gen_extract_tl(t0, t0, 0, 16);
-            tcg_gen_or_tl(t5, t2, t0);
+            tcg_gen_shli_i32(t2, t2, 16);
+            tcg_gen_extract_i32(t0, t0, 0, 16);
+            tcg_gen_or_i32(t5, t2, t0);
         }

     gen_store_mxu_gpr(t4, XRa);
@@ -3090,46 +3090,46 @@ static void gen_mxu_d8sum(DisasContext *ctx, bool sumc)
         /* destination is zero register -> do nothing */
     } else if (unlikely((XRb == 0) && (XRc == 0))) {
         /* both operands zero registers -> just set destination to zero */
-        tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0);
+        tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0);
     } else {
         /* the most general case */
-        TCGv t0 = tcg_temp_new();
-        TCGv t1 = tcg_temp_new();
-        TCGv t2 = tcg_temp_new();
-        TCGv t3 = tcg_temp_new();
-        TCGv t4 = tcg_temp_new();
-        TCGv t5 = tcg_temp_new();
+        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t1 = tcg_temp_new_i32();
+        TCGv_i32 t2 = tcg_temp_new_i32();
+        TCGv_i32 t3 = tcg_temp_new_i32();
+        TCGv_i32 t4 = tcg_temp_new_i32();
+        TCGv_i32 t5 = tcg_temp_new_i32();

         if (XRb != 0) {
-            tcg_gen_extract_tl(t0, mxu_gpr[XRb - 1], 0, 8);
-            tcg_gen_extract_tl(t1, mxu_gpr[XRb - 1], 8, 8);
-            tcg_gen_extract_tl(t2, mxu_gpr[XRb - 1], 16, 8);
-            tcg_gen_extract_tl(t3, mxu_gpr[XRb - 1], 24, 8);
-            tcg_gen_add_tl(t4, t0, t1);
-            tcg_gen_add_tl(t4, t4, t2);
-            tcg_gen_add_tl(t4, t4, t3);
+            tcg_gen_extract_i32(t0, mxu_gpr[XRb - 1], 0, 8);
+            tcg_gen_extract_i32(t1, mxu_gpr[XRb - 1], 8, 8);
+            tcg_gen_extract_i32(t2, mxu_gpr[XRb - 1], 16, 8);
+            tcg_gen_extract_i32(t3, mxu_gpr[XRb - 1], 24, 8);
+            tcg_gen_add_i32(t4, t0, t1);
+            tcg_gen_add_i32(t4, t4, t2);
+            tcg_gen_add_i32(t4, t4, t3);
         } else {
-            tcg_gen_mov_tl(t4, 0);
+            tcg_gen_mov_i32(t4, 0);
         }
         if (XRc != 0) {
-            tcg_gen_extract_tl(t0, mxu_gpr[XRc - 1], 0, 8);
-            tcg_gen_extract_tl(t1, mxu_gpr[XRc - 1], 8, 8);
-            tcg_gen_extract_tl(t2, mxu_gpr[XRc - 1], 16, 8);
-            tcg_gen_extract_tl(t3, mxu_gpr[XRc - 1], 24, 8);
-            tcg_gen_add_tl(t5, t0, t1);
-            tcg_gen_add_tl(t5, t5, t2);
-            tcg_gen_add_tl(t5, t5, t3);
+            tcg_gen_extract_i32(t0, mxu_gpr[XRc - 1], 0, 8);
+            tcg_gen_extract_i32(t1, mxu_gpr[XRc - 1], 8, 8);
+            tcg_gen_extract_i32(t2, mxu_gpr[XRc - 1], 16, 8);
+            tcg_gen_extract_i32(t3, mxu_gpr[XRc - 1], 24, 8);
+            tcg_gen_add_i32(t5, t0, t1);
+            tcg_gen_add_i32(t5, t5, t2);
+            tcg_gen_add_i32(t5, t5, t3);
         } else {
-            tcg_gen_mov_tl(t5, 0);
+            tcg_gen_mov_i32(t5, 0);
         }

         if (sumc) {
-            tcg_gen_addi_tl(t4, t4, 2);
-            tcg_gen_addi_tl(t5, t5, 2);
+            tcg_gen_addi_i32(t4, t4, 2);
+            tcg_gen_addi_i32(t5, t5, 2);
         }
-        tcg_gen_shli_tl(t4, t4, 16);
+        tcg_gen_shli_i32(t4, t4, 16);

-        tcg_gen_or_tl(mxu_gpr[XRa - 1], t4, t5);
+        tcg_gen_or_i32(mxu_gpr[XRa - 1], t4, t5);
     }
 }

@@ -3148,74 +3148,74 @@ static void gen_mxu_q16add(DisasContext *ctx)
     XRb = extract32(ctx->opcode, 10, 4);
     XRa = extract32(ctx->opcode, 6, 4);

-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
-    TCGv t2 = tcg_temp_new();
-    TCGv t3 = tcg_temp_new();
-    TCGv t4 = tcg_temp_new();
-    TCGv t5 = tcg_temp_new();
+    TCGv_i32 t0 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_new_i32();
+    TCGv_i32 t2 = tcg_temp_new_i32();
+    TCGv_i32 t3 = tcg_temp_new_i32();
+    TCGv_i32 t4 = tcg_temp_new_i32();
+    TCGv_i32 t5 = tcg_temp_new_i32();

     gen_load_mxu_gpr(t1, XRb);
-    tcg_gen_extract_tl(t0, t1, 0, 16);
-    tcg_gen_extract_tl(t1, t1, 16, 16);
+    tcg_gen_extract_i32(t0, t1, 0, 16);
+    tcg_gen_extract_i32(t1, t1, 16, 16);

     gen_load_mxu_gpr(t3, XRc);
-    tcg_gen_extract_tl(t2, t3, 0, 16);
-    tcg_gen_extract_tl(t3, t3, 16, 16);
+    tcg_gen_extract_i32(t2, t3, 0, 16);
+    tcg_gen_extract_i32(t3, t3, 16, 16);

     switch (optn2) {
     case MXU_OPTN2_WW: /* XRB.H+XRC.H == lop, XRB.L+XRC.L == rop */
-        tcg_gen_mov_tl(t4, t1);
-        tcg_gen_mov_tl(t5, t0);
+        tcg_gen_mov_i32(t4, t1);
+        tcg_gen_mov_i32(t5, t0);
         break;
     case MXU_OPTN2_LW: /* XRB.L+XRC.H == lop, XRB.L+XRC.L == rop */
-        tcg_gen_mov_tl(t4, t0);
-        tcg_gen_mov_tl(t5, t0);
+        tcg_gen_mov_i32(t4, t0);
+        tcg_gen_mov_i32(t5, t0);
         break;
     case MXU_OPTN2_HW: /* XRB.H+XRC.H == lop, XRB.H+XRC.L == rop */
-        tcg_gen_mov_tl(t4, t1);
-        tcg_gen_mov_tl(t5, t1);
+        tcg_gen_mov_i32(t4, t1);
+        tcg_gen_mov_i32(t5, t1);
         break;
     case MXU_OPTN2_XW: /* XRB.L+XRC.H == lop, XRB.H+XRC.L == rop */
-        tcg_gen_mov_tl(t4, t0);
-        tcg_gen_mov_tl(t5, t1);
+        tcg_gen_mov_i32(t4, t0);
+        tcg_gen_mov_i32(t5, t1);
         break;
     }

     switch (aptn2) {
     case MXU_APTN2_AA: /* lop +, rop + */
-        tcg_gen_add_tl(t0, t4, t3);
-        tcg_gen_add_tl(t1, t5, t2);
-        tcg_gen_add_tl(t4, t4, t3);
-        tcg_gen_add_tl(t5, t5, t2);
+        tcg_gen_add_i32(t0, t4, t3);
+        tcg_gen_add_i32(t1, t5, t2);
+        tcg_gen_add_i32(t4, t4, t3);
+        tcg_gen_add_i32(t5, t5, t2);
         break;
     case MXU_APTN2_AS: /* lop +, rop + */
-        tcg_gen_sub_tl(t0, t4, t3);
-        tcg_gen_sub_tl(t1, t5, t2);
-        tcg_gen_add_tl(t4, t4, t3);
-        tcg_gen_add_tl(t5, t5, t2);
+        tcg_gen_sub_i32(t0, t4, t3);
+        tcg_gen_sub_i32(t1, t5, t2);
+        tcg_gen_add_i32(t4, t4, t3);
+        tcg_gen_add_i32(t5, t5, t2);
         break;
     case MXU_APTN2_SA: /* lop +, rop + */
-        tcg_gen_add_tl(t0, t4, t3);
-        tcg_gen_add_tl(t1, t5, t2);
-        tcg_gen_sub_tl(t4, t4, t3);
-        tcg_gen_sub_tl(t5, t5, t2);
+        tcg_gen_add_i32(t0, t4, t3);
+        tcg_gen_add_i32(t1, t5, t2);
+        tcg_gen_sub_i32(t4, t4, t3);
+        tcg_gen_sub_i32(t5, t5, t2);
         break;
     case MXU_APTN2_SS: /* lop +, rop + */
-        tcg_gen_sub_tl(t0, t4, t3);
-        tcg_gen_sub_tl(t1, t5, t2);
-        tcg_gen_sub_tl(t4, t4, t3);
-        tcg_gen_sub_tl(t5, t5, t2);
+        tcg_gen_sub_i32(t0, t4, t3);
+        tcg_gen_sub_i32(t1, t5, t2);
+        tcg_gen_sub_i32(t4, t4, t3);
+        tcg_gen_sub_i32(t5, t5, t2);
         break;
     }

-    tcg_gen_shli_tl(t0, t0, 16);
-    tcg_gen_extract_tl(t1, t1, 0, 16);
-    tcg_gen_shli_tl(t4, t4, 16);
-    tcg_gen_extract_tl(t5, t5, 0, 16);
+    tcg_gen_shli_i32(t0, t0, 16);
+    tcg_gen_extract_i32(t1, t1, 0, 16);
+    tcg_gen_shli_i32(t4, t4, 16);
+    tcg_gen_extract_i32(t5, t5, 0, 16);

-    tcg_gen_or_tl(mxu_gpr[XRa - 1], t4, t5);
-    tcg_gen_or_tl(mxu_gpr[XRd - 1], t0, t1);
+    tcg_gen_or_i32(mxu_gpr[XRa - 1], t4, t5);
+    tcg_gen_or_i32(mxu_gpr[XRd - 1], t0, t1);
 }

 /*
@@ -3232,66 +3232,66 @@ static void gen_mxu_q16acc(DisasContext *ctx)
     XRb = extract32(ctx->opcode, 10, 4);
     XRa = extract32(ctx->opcode, 6, 4);

-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
-    TCGv t2 = tcg_temp_new();
-    TCGv t3 = tcg_temp_new();
-    TCGv s3 = tcg_temp_new();
-    TCGv s2 = tcg_temp_new();
-    TCGv s1 = tcg_temp_new();
-    TCGv s0 = tcg_temp_new();
+    TCGv_i32 t0 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_new_i32();
+    TCGv_i32 t2 = tcg_temp_new_i32();
+    TCGv_i32 t3 = tcg_temp_new_i32();
+    TCGv_i32 s3 = tcg_temp_new_i32();
+    TCGv_i32 s2 = tcg_temp_new_i32();
+    TCGv_i32 s1 = tcg_temp_new_i32();
+    TCGv_i32 s0 = tcg_temp_new_i32();

     gen_load_mxu_gpr(t1, XRb);
-    tcg_gen_extract_tl(t0, t1, 0, 16);
-    tcg_gen_extract_tl(t1, t1, 16, 16);
+    tcg_gen_extract_i32(t0, t1, 0, 16);
+    tcg_gen_extract_i32(t1, t1, 16, 16);

     gen_load_mxu_gpr(t3, XRc);
-    tcg_gen_extract_tl(t2, t3, 0, 16);
-    tcg_gen_extract_tl(t3, t3, 16, 16);
+    tcg_gen_extract_i32(t2, t3, 0, 16);
+    tcg_gen_extract_i32(t3, t3, 16, 16);

     switch (aptn2) {
     case MXU_APTN2_AA: /* lop +, rop + */
-        tcg_gen_add_tl(s3, t1, t3);
-        tcg_gen_add_tl(s2, t0, t2);
-        tcg_gen_add_tl(s1, t1, t3);
-        tcg_gen_add_tl(s0, t0, t2);
+        tcg_gen_add_i32(s3, t1, t3);
+        tcg_gen_add_i32(s2, t0, t2);
+        tcg_gen_add_i32(s1, t1, t3);
+        tcg_gen_add_i32(s0, t0, t2);
         break;
     case MXU_APTN2_AS: /* lop +, rop - */
-        tcg_gen_sub_tl(s3, t1, t3);
-        tcg_gen_sub_tl(s2, t0, t2);
-        tcg_gen_add_tl(s1, t1, t3);
-        tcg_gen_add_tl(s0, t0, t2);
+        tcg_gen_sub_i32(s3, t1, t3);
+        tcg_gen_sub_i32(s2, t0, t2);
+        tcg_gen_add_i32(s1, t1, t3);
+        tcg_gen_add_i32(s0, t0, t2);
         break;
     case MXU_APTN2_SA: /* lop -, rop + */
-        tcg_gen_add_tl(s3, t1, t3);
-        tcg_gen_add_tl(s2, t0, t2);
-        tcg_gen_sub_tl(s1, t1, t3);
-        tcg_gen_sub_tl(s0, t0, t2);
+        tcg_gen_add_i32(s3, t1, t3);
+        tcg_gen_add_i32(s2, t0, t2);
+        tcg_gen_sub_i32(s1, t1, t3);
+        tcg_gen_sub_i32(s0, t0, t2);
         break;
     case MXU_APTN2_SS: /* lop -, rop - */
-        tcg_gen_sub_tl(s3, t1, t3);
-        tcg_gen_sub_tl(s2, t0, t2);
-        tcg_gen_sub_tl(s1, t1, t3);
-        tcg_gen_sub_tl(s0, t0, t2);
+        tcg_gen_sub_i32(s3, t1, t3);
+        tcg_gen_sub_i32(s2, t0, t2);
+        tcg_gen_sub_i32(s1, t1, t3);
+        tcg_gen_sub_i32(s0, t0, t2);
         break;
     }

     if (XRa != 0) {
-        tcg_gen_add_tl(t0, mxu_gpr[XRa - 1], s0);
-        tcg_gen_extract_tl(t0, t0, 0, 16);
-        tcg_gen_extract_tl(t1, mxu_gpr[XRa - 1], 16, 16);
-        tcg_gen_add_tl(t1, t1, s1);
-        tcg_gen_shli_tl(t1, t1, 16);
-        tcg_gen_or_tl(mxu_gpr[XRa - 1], t1, t0);
+        tcg_gen_add_i32(t0, mxu_gpr[XRa - 1], s0);
+        tcg_gen_extract_i32(t0, t0, 0, 16);
+        tcg_gen_extract_i32(t1, mxu_gpr[XRa - 1], 16, 16);
+        tcg_gen_add_i32(t1, t1, s1);
+        tcg_gen_shli_i32(t1, t1, 16);
+        tcg_gen_or_i32(mxu_gpr[XRa - 1], t1, t0);
     }

     if (XRd != 0) {
-        tcg_gen_add_tl(t0, mxu_gpr[XRd - 1], s2);
-        tcg_gen_extract_tl(t0, t0, 0, 16);
-        tcg_gen_extract_tl(t1, mxu_gpr[XRd - 1], 16, 16);
-        tcg_gen_add_tl(t1, t1, s3);
-        tcg_gen_shli_tl(t1, t1, 16);
-        tcg_gen_or_tl(mxu_gpr[XRd - 1], t1, t0);
+        tcg_gen_add_i32(t0, mxu_gpr[XRd - 1], s2);
+        tcg_gen_extract_i32(t0, t0, 0, 16);
+        tcg_gen_extract_i32(t1, mxu_gpr[XRd - 1], 16, 16);
+        tcg_gen_add_i32(t1, t1, s3);
+        tcg_gen_shli_i32(t1, t1, 16);
+        tcg_gen_or_i32(mxu_gpr[XRd - 1], t1, t0);
     }
 }

@@ -3309,58 +3309,58 @@ static void gen_mxu_q16accm(DisasContext *ctx)
     XRb = extract32(ctx->opcode, 10, 4);
     XRa = extract32(ctx->opcode, 6, 4);

-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
-    TCGv t2 = tcg_temp_new();
-    TCGv t3 = tcg_temp_new();
+    TCGv_i32 t0 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_new_i32();
+    TCGv_i32 t2 = tcg_temp_new_i32();
+    TCGv_i32 t3 = tcg_temp_new_i32();

     gen_load_mxu_gpr(t2, XRb);
     gen_load_mxu_gpr(t3, XRc);

     if (XRa != 0) {
-        TCGv a0 = tcg_temp_new();
-        TCGv a1 = tcg_temp_new();
+        TCGv_i32 a0 = tcg_temp_new_i32();
+        TCGv_i32 a1 = tcg_temp_new_i32();

-        tcg_gen_extract_tl(t0, t2, 0, 16);
-        tcg_gen_extract_tl(t1, t2, 16, 16);
+        tcg_gen_extract_i32(t0, t2, 0, 16);
+        tcg_gen_extract_i32(t1, t2, 16, 16);

         gen_load_mxu_gpr(a1, XRa);
-        tcg_gen_extract_tl(a0, a1, 0, 16);
-        tcg_gen_extract_tl(a1, a1, 16, 16);
+        tcg_gen_extract_i32(a0, a1, 0, 16);
+        tcg_gen_extract_i32(a1, a1, 16, 16);

         if (aptn2 & 2) {
-            tcg_gen_sub_tl(a0, a0, t0);
-            tcg_gen_sub_tl(a1, a1, t1);
+            tcg_gen_sub_i32(a0, a0, t0);
+            tcg_gen_sub_i32(a1, a1, t1);
         } else {
-            tcg_gen_add_tl(a0, a0, t0);
-            tcg_gen_add_tl(a1, a1, t1);
+            tcg_gen_add_i32(a0, a0, t0);
+            tcg_gen_add_i32(a1, a1, t1);
         }
-        tcg_gen_extract_tl(a0, a0, 0, 16);
-        tcg_gen_shli_tl(a1, a1, 16);
-        tcg_gen_or_tl(mxu_gpr[XRa - 1], a1, a0);
+        tcg_gen_extract_i32(a0, a0, 0, 16);
+        tcg_gen_shli_i32(a1, a1, 16);
+        tcg_gen_or_i32(mxu_gpr[XRa - 1], a1, a0);
     }

     if (XRd != 0) {
-        TCGv a0 = tcg_temp_new();
-        TCGv a1 = tcg_temp_new();
+        TCGv_i32 a0 = tcg_temp_new_i32();
+        TCGv_i32 a1 = tcg_temp_new_i32();

-        tcg_gen_extract_tl(t0, t3, 0, 16);
-        tcg_gen_extract_tl(t1, t3, 16, 16);
+        tcg_gen_extract_i32(t0, t3, 0, 16);
+        tcg_gen_extract_i32(t1, t3, 16, 16);

         gen_load_mxu_gpr(a1, XRd);
-        tcg_gen_extract_tl(a0, a1, 0, 16);
-        tcg_gen_extract_tl(a1, a1, 16, 16);
+        tcg_gen_extract_i32(a0, a1, 0, 16);
+        tcg_gen_extract_i32(a1, a1, 16, 16);

         if (aptn2 & 1) {
-            tcg_gen_sub_tl(a0, a0, t0);
-            tcg_gen_sub_tl(a1, a1, t1);
+            tcg_gen_sub_i32(a0, a0, t0);
+            tcg_gen_sub_i32(a1, a1, t1);
         } else {
-            tcg_gen_add_tl(a0, a0, t0);
-            tcg_gen_add_tl(a1, a1, t1);
+            tcg_gen_add_i32(a0, a0, t0);
+            tcg_gen_add_i32(a1, a1, t1);
         }
-        tcg_gen_extract_tl(a0, a0, 0, 16);
-        tcg_gen_shli_tl(a1, a1, 16);
-        tcg_gen_or_tl(mxu_gpr[XRd - 1], a1, a0);
+        tcg_gen_extract_i32(a0, a0, 0, 16);
+        tcg_gen_shli_i32(a1, a1, 16);
+        tcg_gen_or_i32(mxu_gpr[XRd - 1], a1, a0);
     }
 }

@@ -3379,33 +3379,33 @@ static void gen_mxu_d16asum(DisasContext *ctx)
     XRb = extract32(ctx->opcode, 10, 4);
     XRa = extract32(ctx->opcode, 6, 4);

-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
-    TCGv t2 = tcg_temp_new();
-    TCGv t3 = tcg_temp_new();
+    TCGv_i32 t0 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_new_i32();
+    TCGv_i32 t2 = tcg_temp_new_i32();
+    TCGv_i32 t3 = tcg_temp_new_i32();

     gen_load_mxu_gpr(t2, XRb);
     gen_load_mxu_gpr(t3, XRc);

     if (XRa != 0) {
-        tcg_gen_sextract_tl(t0, t2, 0, 16);
-        tcg_gen_sextract_tl(t1, t2, 16, 16);
-        tcg_gen_add_tl(t0, t0, t1);
+        tcg_gen_sextract_i32(t0, t2, 0, 16);
+        tcg_gen_sextract_i32(t1, t2, 16, 16);
+        tcg_gen_add_i32(t0, t0, t1);
         if (aptn2 & 2) {
-            tcg_gen_sub_tl(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0);
+            tcg_gen_sub_i32(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0);
         } else {
-            tcg_gen_add_tl(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0);
+            tcg_gen_add_i32(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0);
         }
     }

     if (XRd != 0) {
-        tcg_gen_sextract_tl(t0, t3, 0, 16);
-        tcg_gen_sextract_tl(t1, t3, 16, 16);
-        tcg_gen_add_tl(t0, t0, t1);
+        tcg_gen_sextract_i32(t0, t3, 0, 16);
+        tcg_gen_sextract_i32(t1, t3, 16, 16);
+        tcg_gen_add_i32(t0, t0, t1);
         if (aptn2 & 1) {
-            tcg_gen_sub_tl(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t0);
+            tcg_gen_sub_i32(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t0);
         } else {
-            tcg_gen_add_tl(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t0);
+            tcg_gen_add_i32(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t0);
         }
     }
 }
@@ -3428,10 +3428,10 @@ static void gen_mxu_d32add(DisasContext *ctx)
     XRb = extract32(ctx->opcode, 10, 4);
     XRa = extract32(ctx->opcode, 6, 4);

-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
-    TCGv t2 = tcg_temp_new();
-    TCGv cr = tcg_temp_new();
+    TCGv_i32 t0 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_new_i32();
+    TCGv_i32 t2 = tcg_temp_new_i32();
+    TCGv_i32 cr = tcg_temp_new_i32();

     if (unlikely(addc > 1)) {
         /* opcode incorrect -> do nothing */
@@ -3445,14 +3445,14 @@ static void gen_mxu_d32add(DisasContext *ctx)
         gen_load_mxu_gpr(t1, XRc);
         gen_load_mxu_cr(cr);
         if (XRa != 0) {
-            tcg_gen_extract_tl(t2, cr, 31, 1);
-            tcg_gen_add_tl(t0, t0, t2);
-            tcg_gen_add_tl(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0);
+            tcg_gen_extract_i32(t2, cr, 31, 1);
+            tcg_gen_add_i32(t0, t0, t2);
+            tcg_gen_add_i32(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0);
         }
         if (XRd != 0) {
-            tcg_gen_extract_tl(t2, cr, 30, 1);
-            tcg_gen_add_tl(t1, t1, t2);
-            tcg_gen_add_tl(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t1);
+            tcg_gen_extract_i32(t2, cr, 30, 1);
+            tcg_gen_add_i32(t1, t1, t2);
+            tcg_gen_add_i32(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t1);
         }
         }
     } else if (unlikely(XRa == 0 && XRd == 0)) {
@@ -3460,7 +3460,7 @@ static void gen_mxu_d32add(DisasContext *ctx)
         /* destinations are zero register -> do nothing */
     } else {
         /* common case */
         /* FIXME ??? What if XRa == XRd ??? */
-        TCGv carry = tcg_temp_new();
+        TCGv_i32 carry = tcg_temp_new_i32();

         gen_load_mxu_gpr(t0, XRb);
         gen_load_mxu_gpr(t1, XRc);
@@ -3468,27 +3468,27 @@ static void gen_mxu_d32add(DisasContext *ctx)
         if (XRa != 0) {
             if (aptn2 & 2) {
                 tcg_gen_sub_i32(t2, t0, t1);
-                tcg_gen_setcond_tl(TCG_COND_GTU, carry, t0, t1);
+                tcg_gen_setcond_i32(TCG_COND_GTU, carry, t0, t1);
             } else {
                 tcg_gen_add_i32(t2, t0, t1);
-                tcg_gen_setcond_tl(TCG_COND_GTU, carry, t0, t2);
+                tcg_gen_setcond_i32(TCG_COND_GTU, carry, t0, t2);
             }
-            tcg_gen_andi_tl(cr, cr, 0x7fffffff);
-            tcg_gen_shli_tl(carry, carry, 31);
-            tcg_gen_or_tl(cr, cr, carry);
+            tcg_gen_andi_i32(cr, cr, 0x7fffffff);
+            tcg_gen_shli_i32(carry, carry, 31);
+            tcg_gen_or_i32(cr, cr, carry);
             gen_store_mxu_gpr(t2, XRa);
         }
         if (XRd != 0) {
             if (aptn2 & 1) {
                 tcg_gen_sub_i32(t2, t0, t1);
-                tcg_gen_setcond_tl(TCG_COND_GTU, carry, t0, t1);
+                tcg_gen_setcond_i32(TCG_COND_GTU, carry, t0, t1);
             } else {
                 tcg_gen_add_i32(t2, t0, t1);
-                tcg_gen_setcond_tl(TCG_COND_GTU, carry, t0, t2);
+                tcg_gen_setcond_i32(TCG_COND_GTU, carry, t0, t2);
             }
-            tcg_gen_andi_tl(cr, cr, 0xbfffffff);
-            tcg_gen_shli_tl(carry, carry, 30);
-            tcg_gen_or_tl(cr, cr, carry);
+            tcg_gen_andi_i32(cr, cr, 0xbfffffff);
+            tcg_gen_shli_i32(carry, carry, 30);
+            tcg_gen_or_i32(cr, cr, carry);
             gen_store_mxu_gpr(t2, XRd);
         }
         gen_store_mxu_cr(cr);
@@ -3509,9 +3509,9 @@ static void gen_mxu_d32acc(DisasContext *ctx)
     XRb = extract32(ctx->opcode, 10, 4);
     XRa = extract32(ctx->opcode, 6, 4);

-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
-    TCGv t2 = tcg_temp_new();
+    TCGv_i32 t0 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_new_i32();
+    TCGv_i32 t2 = tcg_temp_new_i32();

     if (unlikely(XRa == 0 && XRd == 0)) {
         /* destinations are zero register -> do nothing */
@@ -3521,19 +3521,19 @@ static void gen_mxu_d32acc(DisasContext *ctx)
         gen_load_mxu_gpr(t1, XRc);
         if (XRa != 0) {
             if (aptn2 & 2) {
-                tcg_gen_sub_tl(t2, t0, t1);
+                tcg_gen_sub_i32(t2, t0, t1);
             } else {
-                tcg_gen_add_tl(t2, t0, t1);
+                tcg_gen_add_i32(t2, t0, t1);
             }
-            tcg_gen_add_tl(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t2);
+            tcg_gen_add_i32(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t2);
         }
         if (XRd != 0) {
             if (aptn2 & 1) {
-                tcg_gen_sub_tl(t2, t0, t1);
+                tcg_gen_sub_i32(t2, t0, t1);
             } else {
-                tcg_gen_add_tl(t2, t0, t1);
+                tcg_gen_add_i32(t2, t0, t1);
             }
-            tcg_gen_add_tl(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t2);
+            tcg_gen_add_i32(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t2);
         }
     }
 }
@@ -3552,9 +3552,9 @@ static void gen_mxu_d32accm(DisasContext *ctx)
     XRb = extract32(ctx->opcode, 10, 4);
    XRa = extract32(ctx->opcode, 6, 4);

-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
-    TCGv t2 = tcg_temp_new();
+    TCGv_i32 t0 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_new_i32();
+    TCGv_i32 t2 = tcg_temp_new_i32();

     if (unlikely(XRa == 0 && XRd == 0)) {
         /* destinations are zero register -> do nothing */
@@ -3563,19 +3563,19 @@ static void gen_mxu_d32accm(DisasContext *ctx)
         gen_load_mxu_gpr(t0, XRb);
         gen_load_mxu_gpr(t1, XRc);
         if (XRa != 0) {
-            tcg_gen_add_tl(t2, t0, t1);
+            tcg_gen_add_i32(t2, t0, t1);
             if (aptn2 & 2) {
-                tcg_gen_sub_tl(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t2);
+                tcg_gen_sub_i32(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t2);
             } else {
-                tcg_gen_add_tl(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t2);
+                tcg_gen_add_i32(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t2);
             }
         }
         if (XRd != 0) {
-            tcg_gen_sub_tl(t2, t0, t1);
+            tcg_gen_sub_i32(t2, t0, t1);
             if (aptn2 & 1) {
-                tcg_gen_sub_tl(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t2);
+                tcg_gen_sub_i32(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t2);
             } else {
-                tcg_gen_add_tl(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t2);
+                tcg_gen_add_i32(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t2);
             }
         }
     }
@@ -3595,8 +3595,8 @@ static void gen_mxu_d32asum(DisasContext *ctx)
     XRb = extract32(ctx->opcode, 10, 4);
     XRa = extract32(ctx->opcode, 6, 4);

-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
+    TCGv_i32 t0 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_new_i32();

     if (unlikely(XRa == 0 && XRd == 0)) {
         /* destinations are zero register -> do nothing */
@@ -3606,16 +3606,16 @@ static void gen_mxu_d32asum(DisasContext *ctx)
         gen_load_mxu_gpr(t1, XRc);
         if (XRa != 0) {
             if (aptn2 & 2) {
-                tcg_gen_sub_tl(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0);
+                tcg_gen_sub_i32(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0);
             } else {
-                tcg_gen_add_tl(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0);
+                tcg_gen_add_i32(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0);
             }
         }
         if (XRd != 0) {
             if (aptn2 & 1) {
-                tcg_gen_sub_tl(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t1);
+                tcg_gen_sub_i32(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t1);
             } else {
-                tcg_gen_add_tl(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t1);
+                tcg_gen_add_i32(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t1);
             }
         }
     }
@@ -3638,13 +3638,13 @@ static void gen_mxu_d32asum(DisasContext *ctx)
  */
 static void gen_mxu_s32extr(DisasContext *ctx)
 {
-    TCGv t0, t1, t2, t3;
+    TCGv_i32 t0, t1, t2, t3;
     uint32_t XRa, XRd, rs, bits5;

-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
-    t2 = tcg_temp_new();
-    t3 = tcg_temp_new();
+    t0 = tcg_temp_new_i32();
+    t1 = tcg_temp_new_i32();
+    t2 = tcg_temp_new_i32();
+    t3 = tcg_temp_new_i32();

     XRa = extract32(ctx->opcode, 6, 4);
     XRd = extract32(ctx->opcode, 10, 4);
@@ -3660,23 +3660,23 @@ static void gen_mxu_s32extr(DisasContext *ctx)
         gen_load_mxu_gpr(t0, XRd);
         gen_load_mxu_gpr(t1, XRa);
         gen_load_gpr(t2, rs);
-        tcg_gen_andi_tl(t2, t2, 0x1f);
-        tcg_gen_subfi_tl(t2, 32, t2);
-        tcg_gen_brcondi_tl(TCG_COND_GE, t2, bits5, l_xra_only);
-        tcg_gen_subfi_tl(t2, bits5, t2);
-        tcg_gen_subfi_tl(t3, 32, t2);
-        tcg_gen_shr_tl(t0, t0, t3);
-        tcg_gen_shl_tl(t1, t1, t2);
-        tcg_gen_or_tl(t0, t0, t1);
+        tcg_gen_andi_i32(t2, t2, 0x1f);
+        tcg_gen_subfi_i32(t2, 32, t2);
+        tcg_gen_brcondi_i32(TCG_COND_GE, t2, bits5, l_xra_only);
+        tcg_gen_subfi_i32(t2, bits5, t2);
+        tcg_gen_subfi_i32(t3, 32, t2);
+        tcg_gen_shr_i32(t0, t0, t3);
+        tcg_gen_shl_i32(t1, t1, t2);
+        tcg_gen_or_i32(t0, t0, t1);
         tcg_gen_br(l_done);
         gen_set_label(l_xra_only);
-        tcg_gen_subi_tl(t2, t2, bits5);
-        tcg_gen_shr_tl(t0, t1, t2);
+        tcg_gen_subi_i32(t2, t2, bits5);
+        tcg_gen_shr_i32(t0, t1, t2);
         gen_set_label(l_done);
-        tcg_gen_extract_tl(t0, t0, 0, bits5);
+        tcg_gen_extract_i32(t0, t0, 0, bits5);
     } else {
         /* unspecified behavior but matches tests on real hardware*/
-        tcg_gen_movi_tl(t0, 0);
+        tcg_gen_movi_i32(t0, 0);
     }
     gen_store_mxu_gpr(t0, XRa);
 }
@@ -3688,14 +3688,14 @@ static void gen_mxu_s32extr(DisasContext *ctx)
  */
 static void gen_mxu_s32extrv(DisasContext *ctx)
 {
-    TCGv t0, t1, t2, t3, t4;
+    TCGv_i32 t0, t1, t2, t3, t4;
     uint32_t XRa, XRd, rs, rt;

-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
-    t2 = tcg_temp_new();
-    t3 = tcg_temp_new();
-    t4 = tcg_temp_new();
+    t0 = tcg_temp_new_i32();
+    t1 = tcg_temp_new_i32();
+    t2 = tcg_temp_new_i32();
+    t3 = tcg_temp_new_i32();
+    t4 = tcg_temp_new_i32();
     TCGLabel *l_xra_only = gen_new_label();
     TCGLabel *l_done = gen_new_label();
     TCGLabel *l_zero = gen_new_label();
@@ -3711,32 +3711,32 @@ static void gen_mxu_s32extrv(DisasContext *ctx)
     gen_load_mxu_gpr(t1, XRa);
     gen_load_gpr(t2, rs);
     gen_load_gpr(t4, rt);
-    tcg_gen_brcondi_tl(TCG_COND_EQ, t4, 0, l_zero);
-    tcg_gen_andi_tl(t2, t2, 0x1f);
-    tcg_gen_subfi_tl(t2, 32, t2);
-    tcg_gen_brcond_tl(TCG_COND_GE, t2, t4, l_xra_only);
-    tcg_gen_sub_tl(t2, t4, t2);
-    tcg_gen_subfi_tl(t3, 32, t2);
-    tcg_gen_shr_tl(t0, t0, t3);
-    tcg_gen_shl_tl(t1, t1, t2);
-    tcg_gen_or_tl(t0, t0, t1);
+    tcg_gen_brcondi_i32(TCG_COND_EQ, t4, 0, l_zero);
+    tcg_gen_andi_i32(t2, t2, 0x1f);
+    tcg_gen_subfi_i32(t2, 32, t2);
+    tcg_gen_brcond_i32(TCG_COND_GE, t2, t4, l_xra_only);
+    tcg_gen_sub_i32(t2, t4, t2);
+    tcg_gen_subfi_i32(t3, 32, t2);
+    tcg_gen_shr_i32(t0, t0, t3);
+    tcg_gen_shl_i32(t1, t1, t2);
+    tcg_gen_or_i32(t0, t0, t1);
     tcg_gen_br(l_extract);

     gen_set_label(l_xra_only);
-    tcg_gen_sub_tl(t2, t2, t4);
-    tcg_gen_shr_tl(t0, t1, t2);
+    tcg_gen_sub_i32(t2, t2, t4);
+    tcg_gen_shr_i32(t0, t1, t2);
     tcg_gen_br(l_extract);

     /* unspecified behavior but matches tests on real hardware*/
     gen_set_label(l_zero);
-    tcg_gen_movi_tl(t0, 0);
+    tcg_gen_movi_i32(t0, 0);
     tcg_gen_br(l_done);

     /* {XRa} = extract({tmp}, 0, rt) */
     gen_set_label(l_extract);
-    tcg_gen_subfi_tl(t4, 32, t4);
-    tcg_gen_shl_tl(t0, t0, t4);
-    tcg_gen_shr_tl(t0, t0, t4);
+    tcg_gen_subfi_i32(t4, 32, t4);
+    tcg_gen_shl_i32(t0, t0, t4);
+    tcg_gen_shr_i32(t0, t0, t4);

     gen_set_label(l_done);
     gen_store_mxu_gpr(t0, XRa);
@@ -3762,33 +3762,33 @@ static void gen_mxu_s32lui(DisasContext *ctx)
         /* destination is zero register -> do nothing */
     } else {
         uint32_t s16;
-        TCGv t0 = tcg_temp_new();
+        TCGv_i32 t0 = tcg_temp_new_i32();

         switch (optn3) {
         case 0:
-            tcg_gen_movi_tl(t0, s8);
+            tcg_gen_movi_i32(t0, s8);
             break;
         case 1:
-            tcg_gen_movi_tl(t0, s8 << 8);
+            tcg_gen_movi_i32(t0, s8 << 8);
             break;
         case 2:
-            tcg_gen_movi_tl(t0, s8 << 16);
+            tcg_gen_movi_i32(t0, s8 << 16);
             break;
         case 3:
-            tcg_gen_movi_tl(t0, s8 << 24);
+            tcg_gen_movi_i32(t0, s8 << 24);
             break;
         case 4:
-            tcg_gen_movi_tl(t0, (s8 << 16) | s8);
+            tcg_gen_movi_i32(t0, (s8 << 16) | s8);
             break;
         case 5:
-            tcg_gen_movi_tl(t0, (s8 << 24) | (s8 << 8));
+            tcg_gen_movi_i32(t0, (s8 << 24) | (s8 << 8));
             break;
         case 6:
             s16 = (uint16_t)(int16_t)(int8_t)s8;
-            tcg_gen_movi_tl(t0, (s16 << 16) | s16);
+            tcg_gen_movi_i32(t0, (s16 << 16) | s16);
             break;
         case 7:
-            tcg_gen_movi_tl(t0, (s8 << 24) | (s8 << 16) | (s8 << 8) | s8);
+            tcg_gen_movi_i32(t0, (s8 << 24) | (s8 << 16) | (s8 << 8) | s8);
             break;
         }
         gen_store_mxu_gpr(t0, XRa);
@@ -3816,11 +3816,11 @@ static void gen_mxu_Q16SAT(DisasContext *ctx)
         /* destination is zero register -> do nothing */
     } else {
         /* the most general case */
-        TCGv t0 = tcg_temp_new();
-        TCGv t1 = tcg_temp_new();
-        TCGv t2 = tcg_temp_new();
+        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t1 = tcg_temp_new_i32();
+        TCGv_i32 t2 = tcg_temp_new_i32();

-        tcg_gen_movi_tl(t2, 0);
+        tcg_gen_movi_i32(t2, 0);
         if (XRb != 0) {
             TCGLabel *l_less_hi = gen_new_label();
             TCGLabel *l_less_lo = gen_new_label();
@@ -3829,32 +3829,32 @@ static void gen_mxu_Q16SAT(DisasContext *ctx)
             TCGLabel *l_greater_lo = gen_new_label();
             TCGLabel *l_done = gen_new_label();

-            tcg_gen_sari_tl(t0, mxu_gpr[XRb - 1], 16);
-            tcg_gen_brcondi_tl(TCG_COND_LT, t0, 0, l_less_hi);
-            tcg_gen_brcondi_tl(TCG_COND_GT, t0, 255, l_greater_hi);
+            tcg_gen_sari_i32(t0, mxu_gpr[XRb - 1], 16);
+            tcg_gen_brcondi_i32(TCG_COND_LT, t0, 0, l_less_hi);
+            tcg_gen_brcondi_i32(TCG_COND_GT, t0, 255, l_greater_hi);
             tcg_gen_br(l_lo);
             gen_set_label(l_less_hi);
-            tcg_gen_movi_tl(t0, 0);
+            tcg_gen_movi_i32(t0, 0);
             tcg_gen_br(l_lo);
             gen_set_label(l_greater_hi);
-            tcg_gen_movi_tl(t0, 255);
+            tcg_gen_movi_i32(t0, 255);

             gen_set_label(l_lo);
-            tcg_gen_shli_tl(t1, mxu_gpr[XRb - 1], 16);
-            tcg_gen_sari_tl(t1, t1, 16);
-            tcg_gen_brcondi_tl(TCG_COND_LT, t1, 0, l_less_lo);
-            tcg_gen_brcondi_tl(TCG_COND_GT, t1, 255, l_greater_lo);
+            tcg_gen_shli_i32(t1, mxu_gpr[XRb - 1], 16);
+            tcg_gen_sari_i32(t1, t1, 16);
+            tcg_gen_brcondi_i32(TCG_COND_LT, t1, 0, l_less_lo);
+            tcg_gen_brcondi_i32(TCG_COND_GT, t1, 255, l_greater_lo);
             tcg_gen_br(l_done);
             gen_set_label(l_less_lo);
-            tcg_gen_movi_tl(t1, 0);
+            tcg_gen_movi_i32(t1, 0);
             tcg_gen_br(l_done);
             gen_set_label(l_greater_lo);
-            tcg_gen_movi_tl(t1, 255);
+            tcg_gen_movi_i32(t1, 255);

             gen_set_label(l_done);
-            tcg_gen_shli_tl(t2, t0, 24);
-            tcg_gen_shli_tl(t1, t1, 16);
-            tcg_gen_or_tl(t2, t2, t1);
+            tcg_gen_shli_i32(t2, t0, 24);
+            tcg_gen_shli_i32(t1, t1, 16);
+            tcg_gen_or_i32(t2, t2, t1);
         }

         if (XRc != 0) {
@@ -3865,32 +3865,32 @@ static void gen_mxu_Q16SAT(DisasContext *ctx)
             TCGLabel *l_greater_lo = gen_new_label();
             TCGLabel *l_done = gen_new_label();

-            tcg_gen_sari_tl(t0, mxu_gpr[XRc - 1], 16);
-            tcg_gen_brcondi_tl(TCG_COND_LT, t0, 0, l_less_hi);
-            tcg_gen_brcondi_tl(TCG_COND_GT, t0, 255, l_greater_hi);
+            tcg_gen_sari_i32(t0, mxu_gpr[XRc - 1], 16);
+            tcg_gen_brcondi_i32(TCG_COND_LT, t0, 0, l_less_hi);
+            tcg_gen_brcondi_i32(TCG_COND_GT, t0, 255, l_greater_hi);
             tcg_gen_br(l_lo);
             gen_set_label(l_less_hi);
-            tcg_gen_movi_tl(t0, 0);
+            tcg_gen_movi_i32(t0, 0);
             tcg_gen_br(l_lo);
             gen_set_label(l_greater_hi);
-            tcg_gen_movi_tl(t0, 255);
+            tcg_gen_movi_i32(t0, 255);

             gen_set_label(l_lo);
-            tcg_gen_shli_tl(t1, mxu_gpr[XRc - 1], 16);
-            tcg_gen_sari_tl(t1, t1, 16);
-            tcg_gen_brcondi_tl(TCG_COND_LT, t1, 0, l_less_lo);
-            tcg_gen_brcondi_tl(TCG_COND_GT, t1, 255, l_greater_lo);
+            tcg_gen_shli_i32(t1, mxu_gpr[XRc - 1], 16);
+            tcg_gen_sari_i32(t1, t1, 16);
+            tcg_gen_brcondi_i32(TCG_COND_LT, t1, 0, l_less_lo);
+            tcg_gen_brcondi_i32(TCG_COND_GT, t1, 255, l_greater_lo);
             tcg_gen_br(l_done);
             gen_set_label(l_less_lo);
-            tcg_gen_movi_tl(t1, 0);
+            tcg_gen_movi_i32(t1, 0);
             tcg_gen_br(l_done);
             gen_set_label(l_greater_lo);
-            tcg_gen_movi_tl(t1, 255);
+            tcg_gen_movi_i32(t1, 255);

             gen_set_label(l_done);
-            tcg_gen_shli_tl(t0, t0, 8);
-            tcg_gen_or_tl(t2, t2, t0);
-            tcg_gen_or_tl(t2, t2, t1);
+            tcg_gen_shli_i32(t0, t0, 8);
+            tcg_gen_or_i32(t2, t2, t0);
+            tcg_gen_or_i32(t2, t2, t1);
         }
         gen_store_mxu_gpr(t2, XRa);
     }
@@ -3910,11 +3910,11 @@ static void gen_mxu_q16scop(DisasContext *ctx)
     XRb = extract32(ctx->opcode, 10, 4);
     XRa = extract32(ctx->opcode, 6, 4);

-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
-    TCGv t2 = tcg_temp_new();
-    TCGv t3 = tcg_temp_new();
-    TCGv t4 = tcg_temp_new();
+    TCGv_i32 t0 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_new_i32();
+    TCGv_i32 t2 = tcg_temp_new_i32();
+    TCGv_i32 t3 = tcg_temp_new_i32();
+    TCGv_i32 t4 = tcg_temp_new_i32();

     TCGLabel *l_b_hi_lt = gen_new_label();
     TCGLabel *l_b_hi_gt = gen_new_label();
@@ -3930,47 +3930,47 @@ static void gen_mxu_q16scop(DisasContext *ctx)
     gen_load_mxu_gpr(t0, XRb);
     gen_load_mxu_gpr(t1, XRc);

-    tcg_gen_sextract_tl(t2, t0, 16, 16);
-    tcg_gen_brcondi_tl(TCG_COND_LT, t2, 0, l_b_hi_lt);
-    tcg_gen_brcondi_tl(TCG_COND_GT, t2, 0, l_b_hi_gt);
-    tcg_gen_movi_tl(t3, 0);
+    tcg_gen_sextract_i32(t2, t0, 16, 16);
+    tcg_gen_brcondi_i32(TCG_COND_LT, t2, 0, l_b_hi_lt);
+    tcg_gen_brcondi_i32(TCG_COND_GT, t2, 0, l_b_hi_gt);
+    tcg_gen_movi_i32(t3, 0);
     tcg_gen_br(l_b_lo);
     gen_set_label(l_b_hi_lt);
-    tcg_gen_movi_tl(t3, 0xffff0000);
+    tcg_gen_movi_i32(t3, 0xffff0000);
     tcg_gen_br(l_b_lo);
     gen_set_label(l_b_hi_gt);
-    tcg_gen_movi_tl(t3, 0x00010000);
+    tcg_gen_movi_i32(t3, 0x00010000);

     gen_set_label(l_b_lo);
-    tcg_gen_sextract_tl(t2, t0, 0, 16);
-    tcg_gen_brcondi_tl(TCG_COND_EQ, t2, 0, l_c_hi);
-    tcg_gen_brcondi_tl(TCG_COND_LT, t2, 0, l_b_lo_lt);
-    tcg_gen_ori_tl(t3, t3, 0x00000001);
+    tcg_gen_sextract_i32(t2, t0, 0, 16);
+    tcg_gen_brcondi_i32(TCG_COND_EQ, t2, 0, l_c_hi);
+    tcg_gen_brcondi_i32(TCG_COND_LT, t2, 0, l_b_lo_lt);
+    tcg_gen_ori_i32(t3, t3, 0x00000001);
     tcg_gen_br(l_c_hi);
     gen_set_label(l_b_lo_lt);
-    tcg_gen_ori_tl(t3, t3, 0x0000ffff);
+    tcg_gen_ori_i32(t3, t3, 0x0000ffff);
     tcg_gen_br(l_c_hi);

     gen_set_label(l_c_hi);
-    tcg_gen_sextract_tl(t2, t1, 16, 16);
-    tcg_gen_brcondi_tl(TCG_COND_LT, t2, 0, l_c_hi_lt);
-    tcg_gen_brcondi_tl(TCG_COND_GT, t2, 0, l_c_hi_gt);
-    tcg_gen_movi_tl(t4, 0);
+    tcg_gen_sextract_i32(t2, t1, 16, 16);
+    tcg_gen_brcondi_i32(TCG_COND_LT, t2, 0, l_c_hi_lt);
+    tcg_gen_brcondi_i32(TCG_COND_GT, t2, 0, l_c_hi_gt);
+    tcg_gen_movi_i32(t4, 0);
     tcg_gen_br(l_c_lo);
     gen_set_label(l_c_hi_lt);
-    tcg_gen_movi_tl(t4, 0xffff0000);
+    tcg_gen_movi_i32(t4, 0xffff0000);
     tcg_gen_br(l_c_lo);
     gen_set_label(l_c_hi_gt);
-    tcg_gen_movi_tl(t4, 0x00010000);
+    tcg_gen_movi_i32(t4, 0x00010000);

     gen_set_label(l_c_lo);
-    tcg_gen_sextract_tl(t2, t1, 0, 16);
-    tcg_gen_brcondi_tl(TCG_COND_EQ, t2, 0, l_done);
-    tcg_gen_brcondi_tl(TCG_COND_LT, t2, 0, l_c_lo_lt);
-    tcg_gen_ori_tl(t4, t4, 0x00000001);
+    tcg_gen_sextract_i32(t2, t1, 0, 16);
+    tcg_gen_brcondi_i32(TCG_COND_EQ, t2, 0, l_done);
+    tcg_gen_brcondi_i32(TCG_COND_LT, t2, 0, l_c_lo_lt);
+    tcg_gen_ori_i32(t4, t4, 0x00000001);
     tcg_gen_br(l_done);
     gen_set_label(l_c_lo_lt);
-    tcg_gen_ori_tl(t4, t4, 0x0000ffff);
+    tcg_gen_ori_i32(t4, t4, 0x0000ffff);

     gen_set_label(l_done);
     gen_store_mxu_gpr(t3, XRa);
@@ -3991,62 +3991,62 @@ static void gen_mxu_s32sfl(DisasContext *ctx)
     XRa = extract32(ctx->opcode, 6, 4);
     ptn2 = extract32(ctx->opcode, 24, 2);

-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
-    TCGv t2 = tcg_temp_new();
-    TCGv t3 = tcg_temp_new();
+    TCGv_i32 t0 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_new_i32();
+    TCGv_i32 t2 = tcg_temp_new_i32();
+    TCGv_i32 t3 = tcg_temp_new_i32();

     gen_load_mxu_gpr(t0, XRb);
     gen_load_mxu_gpr(t1, XRc);

     switch (ptn2) {
     case 0:
-        tcg_gen_andi_tl(t2, t0, 0xff000000);
-        tcg_gen_andi_tl(t3, t1, 0x000000ff);
-        tcg_gen_deposit_tl(t3, t3, t0, 8, 8);
-        tcg_gen_shri_tl(t0, t0, 8);
-        tcg_gen_shri_tl(t1, t1, 8);
-        tcg_gen_deposit_tl(t3, t3, t0, 24, 8);
-        tcg_gen_deposit_tl(t3, t3, t1, 16, 8);
-        tcg_gen_shri_tl(t0, t0, 8);
-        tcg_gen_shri_tl(t1, t1, 8);
-        tcg_gen_deposit_tl(t2, t2, t0, 8, 8);
-        tcg_gen_deposit_tl(t2, t2, t1, 0, 8);
-        tcg_gen_shri_tl(t1, t1, 8);
-        tcg_gen_deposit_tl(t2, t2, t1, 16, 8);
+        tcg_gen_andi_i32(t2, t0, 0xff000000);
+        tcg_gen_andi_i32(t3, t1, 0x000000ff);
+        tcg_gen_deposit_i32(t3, t3, t0, 8, 8);
+        tcg_gen_shri_i32(t0, t0, 8);
+        tcg_gen_shri_i32(t1, t1, 8);
+        tcg_gen_deposit_i32(t3, t3, t0, 24, 8);
+        tcg_gen_deposit_i32(t3, t3, t1, 16, 8);
+        tcg_gen_shri_i32(t0, t0, 8);
+        tcg_gen_shri_i32(t1, t1, 8);
+        tcg_gen_deposit_i32(t2, t2, t0, 8, 8);
+        tcg_gen_deposit_i32(t2, t2, t1, 0, 8);
+        tcg_gen_shri_i32(t1, t1, 8);
+        tcg_gen_deposit_i32(t2, t2, t1, 16, 8);
8); break; case 1: - tcg_gen_andi_tl(t2, t0, 0xff000000); - tcg_gen_andi_tl(t3, t1, 0x000000ff); - tcg_gen_deposit_tl(t3, t3, t0, 16, 8); - tcg_gen_shri_tl(t0, t0, 8); - tcg_gen_shri_tl(t1, t1, 8); - tcg_gen_deposit_tl(t2, t2, t0, 16, 8); - tcg_gen_deposit_tl(t2, t2, t1, 0, 8); - tcg_gen_shri_tl(t0, t0, 8); - tcg_gen_shri_tl(t1, t1, 8); - tcg_gen_deposit_tl(t3, t3, t0, 24, 8); - tcg_gen_deposit_tl(t3, t3, t1, 8, 8); - tcg_gen_shri_tl(t1, t1, 8); - tcg_gen_deposit_tl(t2, t2, t1, 8, 8); + tcg_gen_andi_i32(t2, t0, 0xff000000); + tcg_gen_andi_i32(t3, t1, 0x000000ff); + tcg_gen_deposit_i32(t3, t3, t0, 16, 8); + tcg_gen_shri_i32(t0, t0, 8); + tcg_gen_shri_i32(t1, t1, 8); + tcg_gen_deposit_i32(t2, t2, t0, 16, 8); + tcg_gen_deposit_i32(t2, t2, t1, 0, 8); + tcg_gen_shri_i32(t0, t0, 8); + tcg_gen_shri_i32(t1, t1, 8); + tcg_gen_deposit_i32(t3, t3, t0, 24, 8); + tcg_gen_deposit_i32(t3, t3, t1, 8, 8); + tcg_gen_shri_i32(t1, t1, 8); + tcg_gen_deposit_i32(t2, t2, t1, 8, 8); break; case 2: - tcg_gen_andi_tl(t2, t0, 0xff00ff00); - tcg_gen_andi_tl(t3, t1, 0x00ff00ff); - tcg_gen_deposit_tl(t3, t3, t0, 8, 8); - tcg_gen_shri_tl(t0, t0, 16); - tcg_gen_shri_tl(t1, t1, 8); - tcg_gen_deposit_tl(t2, t2, t1, 0, 8); - tcg_gen_deposit_tl(t3, t3, t0, 24, 8); - tcg_gen_shri_tl(t1, t1, 16); - tcg_gen_deposit_tl(t2, t2, t1, 16, 8); + tcg_gen_andi_i32(t2, t0, 0xff00ff00); + tcg_gen_andi_i32(t3, t1, 0x00ff00ff); + tcg_gen_deposit_i32(t3, t3, t0, 8, 8); + tcg_gen_shri_i32(t0, t0, 16); + tcg_gen_shri_i32(t1, t1, 8); + tcg_gen_deposit_i32(t2, t2, t1, 0, 8); + tcg_gen_deposit_i32(t3, t3, t0, 24, 8); + tcg_gen_shri_i32(t1, t1, 16); + tcg_gen_deposit_i32(t2, t2, t1, 16, 8); break; case 3: - tcg_gen_andi_tl(t2, t0, 0xffff0000); - tcg_gen_andi_tl(t3, t1, 0x0000ffff); - tcg_gen_shri_tl(t1, t1, 16); - tcg_gen_deposit_tl(t2, t2, t1, 0, 16); - tcg_gen_deposit_tl(t3, t3, t0, 16, 16); + tcg_gen_andi_i32(t2, t0, 0xffff0000); + tcg_gen_andi_i32(t3, t1, 0x0000ffff); + tcg_gen_shri_i32(t1, t1, 16); + 
tcg_gen_deposit_i32(t2, t2, t1, 0, 16); + tcg_gen_deposit_i32(t3, t3, t0, 16, 16); break; } =20 @@ -4067,30 +4067,30 @@ static void gen_mxu_q8sad(DisasContext *ctx) XRb =3D extract32(ctx->opcode, 10, 4); XRa =3D extract32(ctx->opcode, 6, 4); =20 - TCGv t0 =3D tcg_temp_new(); - TCGv t1 =3D tcg_temp_new(); - TCGv t2 =3D tcg_temp_new(); - TCGv t3 =3D tcg_temp_new(); - TCGv t4 =3D tcg_temp_new(); - TCGv t5 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); + TCGv_i32 t2 =3D tcg_temp_new_i32(); + TCGv_i32 t3 =3D tcg_temp_new_i32(); + TCGv_i32 t4 =3D tcg_temp_new_i32(); + TCGv_i32 t5 =3D tcg_temp_new_i32(); =20 gen_load_mxu_gpr(t2, XRb); gen_load_mxu_gpr(t3, XRc); gen_load_mxu_gpr(t5, XRd); - tcg_gen_movi_tl(t4, 0); + tcg_gen_movi_i32(t4, 0); =20 for (int i =3D 0; i < 4; i++) { - tcg_gen_andi_tl(t0, t2, 0xff); - tcg_gen_andi_tl(t1, t3, 0xff); - tcg_gen_sub_tl(t0, t0, t1); - tcg_gen_abs_tl(t0, t0); - tcg_gen_add_tl(t4, t4, t0); + tcg_gen_andi_i32(t0, t2, 0xff); + tcg_gen_andi_i32(t1, t3, 0xff); + tcg_gen_sub_i32(t0, t0, t1); + tcg_gen_abs_i32(t0, t0); + tcg_gen_add_i32(t4, t4, t0); if (i < 3) { - tcg_gen_shri_tl(t2, t2, 8); - tcg_gen_shri_tl(t3, t3, 8); + tcg_gen_shri_i32(t2, t2, 8); + tcg_gen_shri_i32(t3, t3, 8); } } - tcg_gen_add_tl(t5, t5, t4); + tcg_gen_add_i32(t5, t5, t4); gen_store_mxu_gpr(t4, XRa); gen_store_mxu_gpr(t5, XRd); } @@ -4196,8 +4196,8 @@ static void gen_mxu_S32ALNI(DisasContext *ctx) /* XRa */ /* */ =20 - TCGv_i32 t0 =3D tcg_temp_new(); - TCGv_i32 t1 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); =20 tcg_gen_andi_i32(t0, mxu_gpr[XRb - 1], 0x00FFFFFF); tcg_gen_shli_i32(t0, t0, 8); @@ -4219,8 +4219,8 @@ static void gen_mxu_S32ALNI(DisasContext *ctx) /* XRa */ /* */ =20 - TCGv_i32 t0 =3D tcg_temp_new(); - TCGv_i32 t1 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); =20 tcg_gen_andi_i32(t0, mxu_gpr[XRb - 1], 
0x0000FFFF); tcg_gen_shli_i32(t0, t0, 16); @@ -4242,8 +4242,8 @@ static void gen_mxu_S32ALNI(DisasContext *ctx) /* XRa */ /* */ =20 - TCGv_i32 t0 =3D tcg_temp_new(); - TCGv_i32 t1 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); =20 tcg_gen_andi_i32(t0, mxu_gpr[XRb - 1], 0x000000FF); tcg_gen_shli_i32(t0, t0, 24); @@ -4290,13 +4290,13 @@ static void gen_mxu_S32ALN(DisasContext *ctx) /* destination is zero register -> do nothing */ } else if (unlikely((XRb =3D=3D 0) && (XRc =3D=3D 0))) { /* both operands zero registers -> just set destination to all 0s = */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else { /* the most general case */ - TCGv t0 =3D tcg_temp_new(); - TCGv t1 =3D tcg_temp_new(); - TCGv t2 =3D tcg_temp_new(); - TCGv t3 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); + TCGv_i32 t2 =3D tcg_temp_new_i32(); + TCGv_i32 t3 =3D tcg_temp_new_i32(); TCGLabel *l_exit =3D gen_new_label(); TCGLabel *l_b_only =3D gen_new_label(); TCGLabel *l_c_only =3D gen_new_label(); @@ -4304,20 +4304,20 @@ static void gen_mxu_S32ALN(DisasContext *ctx) gen_load_mxu_gpr(t0, XRb); gen_load_mxu_gpr(t1, XRc); gen_load_gpr(t2, rs); - tcg_gen_andi_tl(t2, t2, 0x07); + tcg_gen_andi_i32(t2, t2, 0x07); =20 /* do nothing for undefined cases */ - tcg_gen_brcondi_tl(TCG_COND_GE, t2, 5, l_exit); + tcg_gen_brcondi_i32(TCG_COND_GE, t2, 5, l_exit); =20 - tcg_gen_brcondi_tl(TCG_COND_EQ, t2, 0, l_b_only); - tcg_gen_brcondi_tl(TCG_COND_EQ, t2, 4, l_c_only); + tcg_gen_brcondi_i32(TCG_COND_EQ, t2, 0, l_b_only); + tcg_gen_brcondi_i32(TCG_COND_EQ, t2, 4, l_c_only); =20 - tcg_gen_shli_tl(t2, t2, 3); - tcg_gen_subfi_tl(t3, 32, t2); + tcg_gen_shli_i32(t2, t2, 3); + tcg_gen_subfi_i32(t3, 32, t2); =20 - tcg_gen_shl_tl(t0, t0, t2); - tcg_gen_shr_tl(t1, t1, t3); - tcg_gen_or_tl(mxu_gpr[XRa - 1], t0, t1); + tcg_gen_shl_i32(t0, t0, t2); + tcg_gen_shr_i32(t1, t1, t3); + 
tcg_gen_or_i32(mxu_gpr[XRa - 1], t0, t1); tcg_gen_br(l_exit); =20 gen_set_label(l_b_only); @@ -4359,8 +4359,8 @@ static void gen_mxu_s32madd_sub(DisasContext *ctx, bo= ol sub, bool uns) } else if (unlikely(XRa =3D=3D 0 && XRd =3D=3D 0)) { /* do nothing because result just dropped */ } else { - TCGv t0 =3D tcg_temp_new(); - TCGv t1 =3D tcg_temp_new(); + TCGv_i32 t0 =3D tcg_temp_new_i32(); + TCGv_i32 t1 =3D tcg_temp_new_i32(); TCGv_i64 t2 =3D tcg_temp_new_i64(); TCGv_i64 t3 =3D tcg_temp_new_i64(); =20 @@ -4368,18 +4368,18 @@ static void gen_mxu_s32madd_sub(DisasContext *ctx, = bool sub, bool uns) gen_load_gpr(t1, Rc); =20 if (uns) { - tcg_gen_extu_tl_i64(t2, t0); - tcg_gen_extu_tl_i64(t3, t1); + tcg_gen_extu_i32_i64(t2, t0); + tcg_gen_extu_i32_i64(t3, t1); } else { - tcg_gen_ext_tl_i64(t2, t0); - tcg_gen_ext_tl_i64(t3, t1); + tcg_gen_ext_i32_i64(t2, t0); + tcg_gen_ext_i32_i64(t3, t1); } tcg_gen_mul_i64(t2, t2, t3); =20 gen_load_mxu_gpr(t0, XRa); gen_load_mxu_gpr(t1, XRd); =20 - tcg_gen_concat_tl_i64(t3, t1, t0); + tcg_gen_concat_i32_i64(t3, t1, t0); if (sub) { tcg_gen_sub_i64(t3, t3, t2); } else { @@ -4388,8 +4388,8 @@ static void gen_mxu_s32madd_sub(DisasContext *ctx, bo= ol sub, bool uns) gen_move_low32(t1, t3); gen_move_high32(t0, t3); =20 - tcg_gen_mov_tl(cpu_HI[0], t0); - tcg_gen_mov_tl(cpu_LO[0], t1); + tcg_gen_mov_i32(cpu_HI[0], t0); + tcg_gen_mov_i32(cpu_LO[0], t1); =20 gen_store_mxu_gpr(t1, XRd); gen_store_mxu_gpr(t0, XRa); @@ -4936,12 +4936,12 @@ bool decode_ase_mxu(DisasContext *ctx, uint32_t ins= n) } =20 { - TCGv t_mxu_cr =3D tcg_temp_new(); + TCGv_i32 t_mxu_cr =3D tcg_temp_new_i32(); TCGLabel *l_exit =3D gen_new_label(); =20 gen_load_mxu_cr(t_mxu_cr); - tcg_gen_andi_tl(t_mxu_cr, t_mxu_cr, MXU_CR_MXU_EN); - tcg_gen_brcondi_tl(TCG_COND_NE, t_mxu_cr, MXU_CR_MXU_EN, l_exit); + tcg_gen_andi_i32(t_mxu_cr, t_mxu_cr, MXU_CR_MXU_EN); + tcg_gen_brcondi_i32(TCG_COND_NE, t_mxu_cr, MXU_CR_MXU_EN, l_exit); =20 switch (opcode) { case OPC_MXU_S32MADD: --=20 2.53.0 From 
nobody Wed Apr 1 20:40:53 2026
From: Philippe Mathieu-Daudé
To: qemu-devel@nongnu.org
Cc: Pierrick Bouvier, Philippe Mathieu-Daudé, Jiaxun Yang, Aurelien Jarno, Aleksandar Rikalo, Anton Johansson
Subject: [PATCH-for-11.1 2/2] target/mips: Expand TCGv type for 64-bit extensions
Date: Wed, 1 Apr 2026 16:45:02 +0200
Message-ID: <20260401144503.80510-3-philmd@linaro.org>
In-Reply-To: <20260401144503.80510-1-philmd@linaro.org>
References: <20260401144503.80510-1-philmd@linaro.org>

These TX79, Octeon and Loongarch extensions are only built as 64-bit,
so TCGv expands to TCGv_i64. Use the latter, which is more explicit.
Mechanical changes.

Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Pierrick Bouvier
---
 target/mips/tcg/lcsr_translate.c   | 16 +++---
 target/mips/tcg/loong_translate.c  | 92 +++++++++++++++---------------
 target/mips/tcg/octeon_translate.c | 60 +++++++++----------
 target/mips/tcg/tx79_translate.c   | 14 ++---
 4 files changed, 91 insertions(+), 91 deletions(-)

diff --git a/target/mips/tcg/lcsr_translate.c b/target/mips/tcg/lcsr_translate.c
index 352b0f43282..128c17a9181 100644
--- a/target/mips/tcg/lcsr_translate.c
+++ b/target/mips/tcg/lcsr_translate.c
@@ -18,8 +18,8 @@
 
 static bool trans_CPUCFG(DisasContext *ctx, arg_CPUCFG *a)
 {
-    TCGv dest = tcg_temp_new();
-    TCGv src1 = tcg_temp_new();
+    TCGv_i64 dest = tcg_temp_new_i64();
+    TCGv_i64 src1 = tcg_temp_new_i64();
 
     gen_load_gpr(src1, a->rs);
     gen_helper_lcsr_cpucfg(dest, tcg_env, src1);
@@ -30,10 +30,10 @@ static bool trans_CPUCFG(DisasContext *ctx, arg_CPUCFG *a)
 
 #ifndef CONFIG_USER_ONLY
 static bool gen_rdcsr(DisasContext *ctx, arg_r *a,
-                      void (*func)(TCGv, TCGv_ptr, TCGv))
+                      void (*func)(TCGv_i64, TCGv_ptr, TCGv_i64))
 {
-    TCGv dest = tcg_temp_new();
-    TCGv src1 = tcg_temp_new();
+    TCGv_i64 dest = tcg_temp_new_i64();
+    TCGv_i64 src1 = tcg_temp_new_i64();
 
     check_cp0_enabled(ctx);
     gen_load_gpr(src1, a->rs);
@@ -44,10 +44,10 @@ static bool gen_rdcsr(DisasContext *ctx, arg_r *a,
 }
 
 static bool gen_wrcsr(DisasContext *ctx, arg_r *a,
-                      void (*func)(TCGv_ptr, TCGv, TCGv))
+                      void (*func)(TCGv_ptr, TCGv_i64, TCGv_i64))
 {
-    TCGv val = tcg_temp_new();
-    TCGv addr = tcg_temp_new();
+    TCGv_i64 val = tcg_temp_new_i64();
+    TCGv_i64 addr = tcg_temp_new_i64();
 
     check_cp0_enabled(ctx);
     gen_load_gpr(addr, a->rs);
diff --git a/target/mips/tcg/loong_translate.c b/target/mips/tcg/loong_translate.c
index 7d74cc34f8a..797e3b5f721 100644
--- a/target/mips/tcg/loong_translate.c
+++ b/target/mips/tcg/loong_translate.c
@@ -28,7 +28,7 @@
 static bool gen_lext_DIV_G(DisasContext *s, int rd, int rs, int rt,
                            bool is_double)
 {
-    TCGv t0, t1;
+    TCGv_i64 t0, t1;
     TCGLabel *l1, *l2, *l3;
 
     if (rd == 0) {
@@ -36,8 +36,8 @@ static bool gen_lext_DIV_G(DisasContext *s, int rd, int rs, int rt,
         return true;
     }
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
+    t0 = tcg_temp_new_i64();
+    t1 = tcg_temp_new_i64();
     l1 = gen_new_label();
     l2 = gen_new_label();
     l3 = gen_new_label();
@@ -46,23 +46,23 @@ static bool gen_lext_DIV_G(DisasContext *s, int rd, int rs, int rt,
     gen_load_gpr(t1, rt);
 
     if (!is_double) {
-        tcg_gen_ext32s_tl(t0, t0);
-        tcg_gen_ext32s_tl(t1, t1);
+        tcg_gen_ext32s_i64(t0, t0);
+        tcg_gen_ext32s_i64(t1, t1);
     }
-    tcg_gen_brcondi_tl(TCG_COND_NE, t1, 0, l1);
-    tcg_gen_movi_tl(cpu_gpr[rd], 0);
+    tcg_gen_brcondi_i64(TCG_COND_NE, t1, 0, l1);
+    tcg_gen_movi_i64(cpu_gpr[rd], 0);
     tcg_gen_br(l3);
     gen_set_label(l1);
 
-    tcg_gen_brcondi_tl(TCG_COND_NE, t0, is_double ? LLONG_MIN : INT_MIN, l2);
-    tcg_gen_brcondi_tl(TCG_COND_NE, t1, -1LL, l2);
-    tcg_gen_mov_tl(cpu_gpr[rd], t0);
+    tcg_gen_brcondi_i64(TCG_COND_NE, t0, is_double ? LLONG_MIN : INT_MIN, l2);
+    tcg_gen_brcondi_i64(TCG_COND_NE, t1, -1LL, l2);
+    tcg_gen_mov_i64(cpu_gpr[rd], t0);
 
     tcg_gen_br(l3);
     gen_set_label(l2);
-    tcg_gen_div_tl(cpu_gpr[rd], t0, t1);
+    tcg_gen_div_i64(cpu_gpr[rd], t0, t1);
     if (!is_double) {
-        tcg_gen_ext32s_tl(cpu_gpr[rd], cpu_gpr[rd]);
+        tcg_gen_ext32s_i64(cpu_gpr[rd], cpu_gpr[rd]);
     }
     gen_set_label(l3);
 
@@ -82,7 +82,7 @@ static bool trans_DDIV_G(DisasContext *s, arg_muldiv *a)
 static bool gen_lext_DIVU_G(DisasContext *s, int rd, int rs, int rt,
                             bool is_double)
 {
-    TCGv t0, t1;
+    TCGv_i64 t0, t1;
     TCGLabel *l1, *l2;
 
     if (rd == 0) {
@@ -90,8 +90,8 @@ static bool gen_lext_DIVU_G(DisasContext *s, int rd, int rs, int rt,
         return true;
     }
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
+    t0 = tcg_temp_new_i64();
+    t1 = tcg_temp_new_i64();
     l1 = gen_new_label();
     l2 = gen_new_label();
 
@@ -99,17 +99,17 @@ static bool gen_lext_DIVU_G(DisasContext *s, int rd, int rs, int rt,
     gen_load_gpr(t1, rt);
 
     if (!is_double) {
-        tcg_gen_ext32u_tl(t0, t0);
-        tcg_gen_ext32u_tl(t1, t1);
+        tcg_gen_ext32u_i64(t0, t0);
+        tcg_gen_ext32u_i64(t1, t1);
     }
-    tcg_gen_brcondi_tl(TCG_COND_NE, t1, 0, l1);
-    tcg_gen_movi_tl(cpu_gpr[rd], 0);
+    tcg_gen_brcondi_i64(TCG_COND_NE, t1, 0, l1);
+    tcg_gen_movi_i64(cpu_gpr[rd], 0);
 
     tcg_gen_br(l2);
     gen_set_label(l1);
-    tcg_gen_divu_tl(cpu_gpr[rd], t0, t1);
+    tcg_gen_divu_i64(cpu_gpr[rd], t0, t1);
     if (!is_double) {
-        tcg_gen_ext32s_tl(cpu_gpr[rd], cpu_gpr[rd]);
+        tcg_gen_ext32s_i64(cpu_gpr[rd], cpu_gpr[rd]);
     }
     gen_set_label(l2);
 
@@ -129,7 +129,7 @@ static bool trans_DDIVU_G(DisasContext *s, arg_muldiv *a)
 static bool gen_lext_MOD_G(DisasContext *s, int rd, int rs, int rt,
                            bool is_double)
 {
-    TCGv t0, t1;
+    TCGv_i64 t0, t1;
     TCGLabel *l1, *l2, *l3;
 
     if (rd == 0) {
@@ -137,8 +137,8 @@ static bool gen_lext_MOD_G(DisasContext *s, int rd, int rs, int rt,
         return true;
     }
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
+    t0 = tcg_temp_new_i64();
+    t1 = tcg_temp_new_i64();
     l1 = gen_new_label();
     l2 = gen_new_label();
     l3 = gen_new_label();
@@ -147,19 +147,19 @@ static bool gen_lext_MOD_G(DisasContext *s, int rd, int rs, int rt,
     gen_load_gpr(t1, rt);
 
     if (!is_double) {
-        tcg_gen_ext32u_tl(t0, t0);
-        tcg_gen_ext32u_tl(t1, t1);
+        tcg_gen_ext32u_i64(t0, t0);
+        tcg_gen_ext32u_i64(t1, t1);
     }
-    tcg_gen_brcondi_tl(TCG_COND_EQ, t1, 0, l1);
-    tcg_gen_brcondi_tl(TCG_COND_NE, t0, is_double ? LLONG_MIN : INT_MIN, l2);
-    tcg_gen_brcondi_tl(TCG_COND_NE, t1, -1LL, l2);
+    tcg_gen_brcondi_i64(TCG_COND_EQ, t1, 0, l1);
+    tcg_gen_brcondi_i64(TCG_COND_NE, t0, is_double ? LLONG_MIN : INT_MIN, l2);
+    tcg_gen_brcondi_i64(TCG_COND_NE, t1, -1LL, l2);
     gen_set_label(l1);
-    tcg_gen_movi_tl(cpu_gpr[rd], 0);
+    tcg_gen_movi_i64(cpu_gpr[rd], 0);
     tcg_gen_br(l3);
     gen_set_label(l2);
-    tcg_gen_rem_tl(cpu_gpr[rd], t0, t1);
+    tcg_gen_rem_i64(cpu_gpr[rd], t0, t1);
     if (!is_double) {
-        tcg_gen_ext32s_tl(cpu_gpr[rd], cpu_gpr[rd]);
+        tcg_gen_ext32s_i64(cpu_gpr[rd], cpu_gpr[rd]);
     }
     gen_set_label(l3);
 
@@ -179,7 +179,7 @@ static bool trans_DMOD_G(DisasContext *s, arg_muldiv *a)
 static bool gen_lext_MODU_G(DisasContext *s, int rd, int rs, int rt,
                             bool is_double)
 {
-    TCGv t0, t1;
+    TCGv_i64 t0, t1;
     TCGLabel *l1, *l2;
 
    if (rd == 0) {
@@ -187,8 +187,8 @@ static bool gen_lext_MODU_G(DisasContext *s, int rd, int rs, int rt,
         return true;
     }
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
+    t0 = tcg_temp_new_i64();
+    t1 = tcg_temp_new_i64();
     l1 = gen_new_label();
     l2 = gen_new_label();
 
@@ -196,16 +196,16 @@ static bool gen_lext_MODU_G(DisasContext *s, int rd, int rs, int rt,
     gen_load_gpr(t1, rt);
 
     if (!is_double) {
-        tcg_gen_ext32u_tl(t0, t0);
-        tcg_gen_ext32u_tl(t1, t1);
+        tcg_gen_ext32u_i64(t0, t0);
+        tcg_gen_ext32u_i64(t1, t1);
     }
-    tcg_gen_brcondi_tl(TCG_COND_NE, t1, 0, l1);
-    tcg_gen_movi_tl(cpu_gpr[rd], 0);
+    tcg_gen_brcondi_i64(TCG_COND_NE, t1, 0, l1);
+    tcg_gen_movi_i64(cpu_gpr[rd], 0);
     tcg_gen_br(l2);
     gen_set_label(l1);
-    tcg_gen_remu_tl(cpu_gpr[rd], t0, t1);
+    tcg_gen_remu_i64(cpu_gpr[rd], t0, t1);
     if (!is_double) {
-        tcg_gen_ext32s_tl(cpu_gpr[rd], cpu_gpr[rd]);
+        tcg_gen_ext32s_i64(cpu_gpr[rd], cpu_gpr[rd]);
     }
     gen_set_label(l2);
 
@@ -225,22 +225,22 @@ static bool trans_DMODU_G(DisasContext *s, arg_muldiv *a)
 static bool gen_lext_MULT_G(DisasContext *s, int rd, int rs, int rt,
                             bool is_double)
 {
-    TCGv t0, t1;
+    TCGv_i64 t0, t1;
 
     if (rd == 0) {
         /* Treat as NOP. */
         return true;
     }
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
+    t0 = tcg_temp_new_i64();
+    t1 = tcg_temp_new_i64();
 
     gen_load_gpr(t0, rs);
     gen_load_gpr(t1, rt);
 
-    tcg_gen_mul_tl(cpu_gpr[rd], t0, t1);
+    tcg_gen_mul_i64(cpu_gpr[rd], t0, t1);
     if (!is_double) {
-        tcg_gen_ext32s_tl(cpu_gpr[rd], cpu_gpr[rd]);
+        tcg_gen_ext32s_i64(cpu_gpr[rd], cpu_gpr[rd]);
     }
 
     return true;
diff --git a/target/mips/tcg/octeon_translate.c b/target/mips/tcg/octeon_translate.c
index b2eca29e06c..e1f52d444aa 100644
--- a/target/mips/tcg/octeon_translate.c
+++ b/target/mips/tcg/octeon_translate.c
@@ -15,7 +15,7 @@
 
 static bool trans_BBIT(DisasContext *ctx, arg_BBIT *a)
 {
-    TCGv p;
+    TCGv_i64 p;
 
     if (ctx->hflags & MIPS_HFLAG_BMASK) {
         LOG_DISAS("Branch in delay / forbidden slot at PC 0x%" VADDR_PRIx "\n",
@@ -25,14 +25,14 @@ static bool trans_BBIT(DisasContext *ctx, arg_BBIT *a)
     }
 
     /* Load needed operands */
-    TCGv t0 = tcg_temp_new();
+    TCGv_i64 t0 = tcg_temp_new_i64();
     gen_load_gpr(t0, a->rs);
 
-    p = tcg_constant_tl(1ULL << a->p);
+    p = tcg_constant_i64(1ULL << a->p);
     if (a->set) {
-        tcg_gen_and_tl(bcond, p, t0);
+        tcg_gen_and_i64(bcond, p, t0);
     } else {
-        tcg_gen_andc_tl(bcond, p, t0);
+        tcg_gen_andc_i64(bcond, p, t0);
     }
 
     ctx->hflags |= MIPS_HFLAG_BC;
@@ -43,34 +43,34 @@
 
 static bool trans_BADDU(DisasContext *ctx, arg_BADDU *a)
 {
-    TCGv t0, t1;
+    TCGv_i64 t0, t1;
 
     if (a->rt == 0) {
         /* nop */
         return true;
     }
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
+    t0 = tcg_temp_new_i64();
+    t1 = tcg_temp_new_i64();
     gen_load_gpr(t0, a->rs);
     gen_load_gpr(t1, a->rt);
 
-    tcg_gen_add_tl(t0, t0, t1);
+    tcg_gen_add_i64(t0, t0, t1);
     tcg_gen_andi_i64(cpu_gpr[a->rd], t0, 0xff);
     return true;
 }
 
 static bool trans_DMUL(DisasContext *ctx, arg_DMUL *a)
 {
-    TCGv t0, t1;
+    TCGv_i64 t0, t1;
 
     if (a->rt == 0) {
         /* nop */
         return true;
     }
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
+    t0 = tcg_temp_new_i64();
+    t1 = tcg_temp_new_i64();
     gen_load_gpr(t0, a->rs);
     gen_load_gpr(t1, a->rt);
 
@@ -80,97 +80,97 @@ static bool trans_DMUL(DisasContext *ctx, arg_DMUL *a)
 
 static bool trans_EXTS(DisasContext *ctx, arg_EXTS *a)
 {
-    TCGv t0;
+    TCGv_i64 t0;
 
     if (a->rt == 0) {
         /* nop */
         return true;
     }
 
-    t0 = tcg_temp_new();
+    t0 = tcg_temp_new_i64();
     gen_load_gpr(t0, a->rs);
-    tcg_gen_sextract_tl(t0, t0, a->p, a->lenm1 + 1);
+    tcg_gen_sextract_i64(t0, t0, a->p, a->lenm1 + 1);
     gen_store_gpr(t0, a->rt);
     return true;
 }
 
 static bool trans_CINS(DisasContext *ctx, arg_CINS *a)
 {
-    TCGv t0;
+    TCGv_i64 t0;
 
     if (a->rt == 0) {
         /* nop */
         return true;
     }
 
-    t0 = tcg_temp_new();
+    t0 = tcg_temp_new_i64();
     gen_load_gpr(t0, a->rs);
-    tcg_gen_deposit_z_tl(t0, t0, a->p, a->lenm1 + 1);
+    tcg_gen_deposit_z_i64(t0, t0, a->p, a->lenm1 + 1);
     gen_store_gpr(t0, a->rt);
     return true;
 }
 
 static bool trans_POP(DisasContext *ctx, arg_POP *a)
 {
-    TCGv t0;
+    TCGv_i64 t0;
 
     if (a->rd == 0) {
         /* nop */
         return true;
     }
 
-    t0 = tcg_temp_new();
+    t0 = tcg_temp_new_i64();
     gen_load_gpr(t0, a->rs);
     if (!a->dw) {
         tcg_gen_andi_i64(t0, t0, 0xffffffff);
     }
-    tcg_gen_ctpop_tl(t0, t0);
+    tcg_gen_ctpop_i64(t0, t0);
     gen_store_gpr(t0, a->rd);
     return true;
 }
 
 static bool trans_SEQNE(DisasContext *ctx, arg_SEQNE *a)
 {
-    TCGv t0, t1;
+    TCGv_i64 t0, t1;
 
     if (a->rd == 0) {
         /* nop */
         return true;
     }
 
-    t0 = tcg_temp_new();
-    t1 = tcg_temp_new();
+    t0 = tcg_temp_new_i64();
+    t1 = tcg_temp_new_i64();
 
     gen_load_gpr(t0, a->rs);
     gen_load_gpr(t1, a->rt);
 
     if (a->ne) {
-        tcg_gen_setcond_tl(TCG_COND_NE, cpu_gpr[a->rd], t1, t0);
+        tcg_gen_setcond_i64(TCG_COND_NE, cpu_gpr[a->rd], t1, t0);
     } else {
-        tcg_gen_setcond_tl(TCG_COND_EQ, cpu_gpr[a->rd], t1, t0);
+        tcg_gen_setcond_i64(TCG_COND_EQ, cpu_gpr[a->rd], t1, t0);
     }
     return true;
 }
 
 static bool trans_SEQNEI(DisasContext *ctx, arg_SEQNEI *a)
 {
-    TCGv t0;
+    TCGv_i64 t0;
 
     if (a->rt == 0) {
         /* nop */
         return true;
     }
 
-    t0 = tcg_temp_new();
+    t0 = tcg_temp_new_i64();
 
     gen_load_gpr(t0, a->rs);
 
     /* Sign-extend to 64 bit value */
     target_ulong imm = a->imm;
     if (a->ne) {
-        tcg_gen_setcondi_tl(TCG_COND_NE, cpu_gpr[a->rt], t0, imm);
+        tcg_gen_setcondi_i64(TCG_COND_NE, cpu_gpr[a->rt], t0, imm);
     } else {
-        tcg_gen_setcondi_tl(TCG_COND_EQ, cpu_gpr[a->rt], t0, imm);
+        tcg_gen_setcondi_i64(TCG_COND_EQ, cpu_gpr[a->rt], t0, imm);
     }
     return true;
 }
diff --git a/target/mips/tcg/tx79_translate.c b/target/mips/tcg/tx79_translate.c
index ae3f5e19c43..e071c867631 100644
--- a/target/mips/tcg/tx79_translate.c
+++ b/target/mips/tcg/tx79_translate.c
@@ -241,8 +241,8 @@ static bool trans_parallel_compare(DisasContext *ctx, arg_r *a,
         return true;
     }
 
-    c0 = tcg_constant_tl(0);
-    c1 = tcg_constant_tl(0xffffffff);
+    c0 = tcg_constant_i64(0);
+    c1 = tcg_constant_i64(0xffffffff);
     ax = tcg_temp_new_i64();
     bx = tcg_temp_new_i64();
     t0 = tcg_temp_new_i64();
@@ -322,7 +322,7 @@ static bool trans_PCEQW(DisasContext *ctx, arg_r *a)
 static bool trans_LQ(DisasContext *ctx, arg_i *a)
 {
     TCGv_i64 t0;
-    TCGv addr;
+    TCGv_i64 addr;
 
     if (a->rt == 0) {
         /* nop */
@@ -330,14 +330,14 @@ static bool trans_LQ(DisasContext *ctx, arg_i *a)
     }
 
     t0 = tcg_temp_new_i64();
-    addr = tcg_temp_new();
+    addr = tcg_temp_new_i64();
 
     gen_base_offset_addr(ctx, addr, a->base, a->offset);
     /*
     * Clear least-significant four bits of the effective
     * address, effectively creating an aligned address.
     */
-    tcg_gen_andi_tl(addr, addr, ~0xf);
+    tcg_gen_andi_i64(addr, addr, ~0xf);
 
     /* Lower half */
     tcg_gen_qemu_ld_i64(t0, addr, ctx->mem_idx, mo_endian(ctx) | MO_UQ);
@@ -353,14 +353,14 @@ static bool trans_LQ(DisasContext *ctx, arg_i *a)
 static bool trans_SQ(DisasContext *ctx, arg_i *a)
 {
     TCGv_i64 t0 = tcg_temp_new_i64();
-    TCGv addr = tcg_temp_new();
+    TCGv_i64 addr = tcg_temp_new_i64();
 
     gen_base_offset_addr(ctx, addr, a->base, a->offset);
     /*
     * Clear least-significant four bits of the effective
     * address, effectively creating an aligned address.
     */
-    tcg_gen_andi_tl(addr, addr, ~0xf);
+    tcg_gen_andi_i64(addr, addr, ~0xf);
 
     /* Lower half */
     gen_load_gpr(t0, a->rt);
-- 
2.53.0