From: WANG Xuerui <git@xen0n.name>
Support for TCGConds in loongarch64 cmp_vec codegen is not uniform: NE
is not supported at all and will trip over assertions, and legalization
(currently just operand-swapping) is not done for reg-imm comparisons.
Since the TCG middle-end will not legalize the comparison conditions for
us, we have to do it ourselves like other targets.

Because EQ/LT/LTU/LE/LEU are natively supported, we only have to keep
the current operand swapping treatment for GT/GTU/GE/GEU but ensure it
is done for both reg-reg and reg-imm cases, and use a bitwise NOT to
help legalize NE.

Fixes: d8b6fa593d2d ("tcg/loongarch64: Lower cmp_vec to vseq/vsle/vslt")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/3237
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Philippe Mathieu-Daudé <philmd@linaro.org>
Reported-by: Bingwu Zhang <xtexchooser@duck.com>
Signed-off-by: WANG Xuerui <git@xen0n.name>
Message-ID: <20251207055626.3685415-1-i.qemu@xen0n.name>
[PMD: Split of bigger patch, part 2/2]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
tcg/loongarch64/tcg-target.c.inc | 45 +++++++++++++++++++++++++-------
1 file changed, 35 insertions(+), 10 deletions(-)
diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc
index dbb36a2a816..1a243a57beb 100644
--- a/tcg/loongarch64/tcg-target.c.inc
+++ b/tcg/loongarch64/tcg-target.c.inc
@@ -2184,6 +2184,33 @@ static void tcg_out_cmp_vec(TCGContext *s, bool lasx, unsigned vece,
bool a2_is_const, TCGCond cond)
{
LoongArchInsn insn;
+ bool need_invert = false;
+
+ switch (cond) {
+ case TCG_COND_EQ:
+ case TCG_COND_LE:
+ case TCG_COND_LEU:
+ case TCG_COND_LT:
+ case TCG_COND_LTU:
+ /* These are directly expressible. */
+ break;
+ case TCG_COND_NE:
+ need_invert = true;
+ cond = TCG_COND_EQ;
+ break;
+ case TCG_COND_GE:
+ case TCG_COND_GEU:
+ case TCG_COND_GT:
+ case TCG_COND_GTU:
+ {
+ TCGArg t;
+ t = a1, a1 = a2, a2 = t;
+ cond = tcg_swap_cond(cond);
+ break;
+ }
+ default:
+ g_assert_not_reached();
+ }
static const LoongArchInsn cmp_vec_insn[16][2][4] = {
[TCG_COND_EQ] = {
@@ -2236,32 +2263,30 @@ static void tcg_out_cmp_vec(TCGContext *s, bool lasx, unsigned vece,
* Try vseqi/vslei/vslti
*/
int64_t value = sextract64(a2, 0, 8 << vece);
+
+ insn = cmp_vec_imm_insn[cond][lasx][vece];
switch (cond) {
case TCG_COND_EQ:
case TCG_COND_LE:
case TCG_COND_LT:
- insn = cmp_vec_imm_insn[cond][lasx][vece];
tcg_out32(s, encode_vdvjsk5_insn(insn, a0, a1, value));
break;
case TCG_COND_LEU:
case TCG_COND_LTU:
- insn = cmp_vec_imm_insn[cond][lasx][vece];
tcg_out32(s, encode_vdvjuk5_insn(insn, a0, a1, value));
break;
default:
g_assert_not_reached();
}
+ } else {
+ insn = cmp_vec_insn[cond][lasx][vece];
+ tcg_out32(s, encode_vdvjvk_insn(insn, a0, a1, a2));
}
- insn = cmp_vec_insn[cond][lasx][vece];
- if (insn == 0) {
- TCGArg t;
- t = a1, a1 = a2, a2 = t;
- cond = tcg_swap_cond(cond);
- insn = cmp_vec_insn[cond][lasx][vece];
- tcg_debug_assert(insn != 0);
+ if (need_invert) {
+ insn = lasx ? OPC_XVNOR_V : OPC_VNOR_V;
+ tcg_out32(s, encode_vdvjvk_insn(insn, a0, a0, a0));
}
- tcg_out32(s, encode_vdvjvk_insn(insn, a0, a1, a2));
}
static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
--
2.51.0
On 12/8/25 03:53, Philippe Mathieu-Daudé wrote:
> [...]
> + case TCG_COND_GE:
> + case TCG_COND_GEU:
> + case TCG_COND_GT:
> + case TCG_COND_GTU:
> + {
> + TCGArg t;
> + t = a1, a1 = a2, a2 = t;
> + cond = tcg_swap_cond(cond);
> + break;
> + }
To repeat my review of v1, you can't swap here if a2_is_const.
r~
> [...]
On 8/12/25 10:53, Philippe Mathieu-Daudé wrote:
> [...]
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
On 8/12/25 10:55, Philippe Mathieu-Daudé wrote:
> On 8/12/25 10:53, Philippe Mathieu-Daudé wrote:
>> [...]
>
> Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>