From: liuzhiwei <zhiwei_liu@c-sky.com>
To: Alistair.Francis@wdc.com, palmer@sifive.com, sagark@eecs.berkeley.edu,
    kbastian@mail.uni-paderborn.de, riku.voipio@iki.fi, laurent@vivier.eu,
    wenmeng_zhang@c-sky.com
Cc: qemu-riscv@nongnu.org, qemu-devel@nongnu.org, wxy194768@alibaba-inc.com,
    LIU Zhiwei <zhiwei_liu@c-sky.com>
Date: Wed, 11 Sep 2019 14:25:39 +0800
Message-Id: <1568183141-67641-16-git-send-email-zhiwei_liu@c-sky.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1568183141-67641-1-git-send-email-zhiwei_liu@c-sky.com>
References: <1568183141-67641-1-git-send-email-zhiwei_liu@c-sky.com>
Subject: [Qemu-devel] [PATCH v2 15/17] RISC-V: add vector extension reduction instructions

From: LIU Zhiwei <zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
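Notes:

    For review, here is a minimal model of what a single-width .vs
    reduction computes. This is illustrative only: model_vredsum_vs and
    active[] are placeholder names, not part of QEMU or of this patch.

        #include <stdint.h>
        #include <stdbool.h>

        /*
         * vd[0] = vs1[0] op vs2[0] op ... op vs2[vl-1], skipping
         * masked-off elements; shown for vredsum.vs at SEW = 8.
         */
        static uint8_t model_vredsum_vs(const uint8_t *vs2, uint8_t vs1_0,
                                        const bool *active, int vl)
        {
            uint8_t sum = vs1_0;       /* the scalar operand seeds the sum */
            for (int i = 0; i < vl; i++) {
                if (active[i]) {       /* honor the v0 mask */
                    sum += vs2[i];
                }
            }
            return sum;                /* written to element 0 of vd */
        }

    The helpers below follow this shape, generalized over SEW
    (8/16/32/64) and over the reduction operator; they also clear the
    destination register before writing element 0.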
 target/riscv/helper.h                   |   17 +
 target/riscv/insn32.decode              |   17 +
 target/riscv/insn_trans/trans_rvv.inc.c |   17 +
 target/riscv/vector_helper.c            | 1275 ++++++++++++++++++++++++++++
 4 files changed, 1326 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index e2384eb..d36bd00 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -384,5 +384,22 @@ DEF_HELPER_4(vector_vfncvt_f_xu_v, void, env, i32, i32, i32)
 DEF_HELPER_4(vector_vfncvt_f_x_v, void, env, i32, i32, i32)
 DEF_HELPER_4(vector_vfncvt_f_f_v, void, env, i32, i32, i32)

+DEF_HELPER_5(vector_vredsum_vs, void, env, i32, i32, i32, i32)
+DEF_HELPER_5(vector_vredand_vs, void, env, i32, i32, i32, i32)
+DEF_HELPER_5(vector_vfredsum_vs, void, env, i32, i32, i32, i32)
+DEF_HELPER_5(vector_vredor_vs, void, env, i32, i32, i32, i32)
+DEF_HELPER_5(vector_vredxor_vs, void, env, i32, i32, i32, i32)
+DEF_HELPER_5(vector_vfredosum_vs, void, env, i32, i32, i32, i32)
+DEF_HELPER_5(vector_vredminu_vs, void, env, i32, i32, i32, i32)
+DEF_HELPER_5(vector_vredmin_vs, void, env, i32, i32, i32, i32)
+DEF_HELPER_5(vector_vfredmin_vs, void, env, i32, i32, i32, i32)
+DEF_HELPER_5(vector_vredmaxu_vs, void, env, i32, i32, i32, i32)
+DEF_HELPER_5(vector_vredmax_vs, void, env, i32, i32, i32, i32)
+DEF_HELPER_5(vector_vfredmax_vs, void, env, i32, i32, i32, i32)
+DEF_HELPER_5(vector_vwredsumu_vs, void, env, i32, i32, i32, i32)
+DEF_HELPER_5(vector_vwredsum_vs, void, env, i32, i32, i32, i32)
+DEF_HELPER_5(vector_vfwredsum_vs, void, env, i32, i32, i32, i32)
+DEF_HELPER_5(vector_vfwredosum_vs, void, env, i32, i32, i32, i32)
+
 DEF_HELPER_4(vector_vsetvli, void, env, i32, i32, i32)
 DEF_HELPER_4(vector_vsetvl, void, env, i32, i32, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 256d8ea..3f63bc1 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -524,5 +524,22 @@ vfncvt_f_xu_v   100010 . ..... 10010 001 ..... 1010111 @r2_vm
 vfncvt_f_x_v    100010 . ..... 10011 001 ..... 1010111 @r2_vm
 vfncvt_f_f_v    100010 . ..... 10100 001 ..... 1010111 @r2_vm

+vredsum_vs      000000 . ..... ..... 010 ..... 1010111 @r_vm
+vredand_vs      000001 . ..... ..... 010 ..... 1010111 @r_vm
+vredor_vs       000010 . ..... ..... 010 ..... 1010111 @r_vm
+vredxor_vs      000011 . ..... ..... 010 ..... 1010111 @r_vm
+vredminu_vs     000100 . ..... ..... 010 ..... 1010111 @r_vm
+vredmin_vs      000101 . ..... ..... 010 ..... 1010111 @r_vm
+vredmaxu_vs     000110 . ..... ..... 010 ..... 1010111 @r_vm
+vredmax_vs      000111 . ..... ..... 010 ..... 1010111 @r_vm
+vwredsumu_vs    110000 . ..... ..... 000 ..... 1010111 @r_vm
+vwredsum_vs     110001 . ..... ..... 000 ..... 1010111 @r_vm
+vfredsum_vs     000001 . ..... ..... 001 ..... 1010111 @r_vm
+vfredosum_vs    000011 . ..... ..... 001 ..... 1010111 @r_vm
+vfredmin_vs     000101 . ..... ..... 001 ..... 1010111 @r_vm
+vfredmax_vs     000111 . ..... ..... 001 ..... 1010111 @r_vm
+vfwredsum_vs    110001 . ..... ..... 001 ..... 1010111 @r_vm
+vfwredosum_vs   110011 . ..... ..... 001 ..... 1010111 @r_vm
+
 vsetvli         0 ........... ..... 111 ..... 1010111 @r2_zimm
 vsetvl          1000000 ..... ..... 111 ..... 1010111 @r
diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c
index e4d4576..9a3d31b 100644
--- a/target/riscv/insn_trans/trans_rvv.inc.c
+++ b/target/riscv/insn_trans/trans_rvv.inc.c
@@ -427,5 +427,22 @@ GEN_VECTOR_R2_VM(vfncvt_f_xu_v)
 GEN_VECTOR_R2_VM(vfncvt_f_x_v)
 GEN_VECTOR_R2_VM(vfncvt_f_f_v)

+GEN_VECTOR_R_VM(vredsum_vs)
+GEN_VECTOR_R_VM(vredand_vs)
+GEN_VECTOR_R_VM(vredor_vs)
+GEN_VECTOR_R_VM(vredxor_vs)
+GEN_VECTOR_R_VM(vredminu_vs)
+GEN_VECTOR_R_VM(vredmin_vs)
+GEN_VECTOR_R_VM(vredmaxu_vs)
+GEN_VECTOR_R_VM(vredmax_vs)
+GEN_VECTOR_R_VM(vwredsumu_vs)
+GEN_VECTOR_R_VM(vwredsum_vs)
+GEN_VECTOR_R_VM(vfredsum_vs)
+GEN_VECTOR_R_VM(vfredosum_vs)
+GEN_VECTOR_R_VM(vfredmin_vs)
+GEN_VECTOR_R_VM(vfredmax_vs)
+GEN_VECTOR_R_VM(vfwredsum_vs)
+GEN_VECTOR_R_VM(vfwredosum_vs)
+
 GEN_VECTOR_R2_ZIMM(vsetvli)
 GEN_VECTOR_R(vsetvl)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index fd2ecb7..4a9083b 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -22720,4 +22720,1279 @@ void VECTOR_HELPER(vfncvt_f_f_v)(CPURISCVState *env, uint32_t vm,
     return;
 }

+/* vredsum.vs vd, vs2, vs1, vm # vd[0] = sum(vs1[0], vs2[*]) */
+void VECTOR_HELPER(vredsum_vs)(CPURISCVState *env, uint32_t vm, uint32_t rs1,
+                               uint32_t rs2, uint32_t rd)
+{
+    int width, lmul, vl, vlmax;
+    int i, j, src2;
+    uint64_t sum = 0;
+
+    lmul = vector_get_lmul(env);
+    vector_lmul_check_reg(env, lmul, rs2, false);
+
+    if (vector_vtype_ill(env) || vector_overlap_vm_common(lmul, vm, rd)) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    if (env->vfp.vstart != 0) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+
+    vl = env->vfp.vl;
+    if (vl == 0) {
+        return;
+    }
+
+    width = vector_get_width(env);
+    vlmax = vector_get_vlmax(env);
+
+    for (i = 0; i < VLEN / 64; i++) {
+        env->vfp.vreg[rd].u64[i] = 0;
+    }
+
+    for (i = 0; i < vlmax; i++) {
+        src2 = rs2 + (i / (VLEN / width));
+        j = i % (VLEN / width);
+
+        if (i < vl) {
+            switch (width) {
+            case 8:
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    sum += env->vfp.vreg[src2].u8[j];
+                }
+                if (i == 0) {
+                    sum += env->vfp.vreg[rs1].u8[0];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u8[0] = sum;
+                }
+                break;
+            case 16:
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    sum += env->vfp.vreg[src2].u16[j];
+                }
+                if (i == 0) {
+                    sum += env->vfp.vreg[rs1].u16[0];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u16[0] = sum;
+                }
+                break;
+            case 32:
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    sum += env->vfp.vreg[src2].u32[j];
+                }
+                if (i == 0) {
+                    sum += env->vfp.vreg[rs1].u32[0];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u32[0] = sum;
+                }
+                break;
+            case 64:
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    sum += env->vfp.vreg[src2].u64[j];
+                }
+                if (i == 0) {
+                    sum += env->vfp.vreg[rs1].u64[0];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u64[0] = sum;
+                }
+                break;
+            default:
+                riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+                return;
+            }
+        }
+    }
+    env->vfp.vstart = 0;
+    return;
+}
+
+/* vredand.vs vd, vs2, vs1, vm # vd[0] = and(vs1[0], vs2[*]) */
+void VECTOR_HELPER(vredand_vs)(CPURISCVState *env, uint32_t vm, uint32_t rs1,
+                               uint32_t rs2, uint32_t rd)
+{
+    int width, lmul, vl, vlmax;
+    int i, j, src2;
+    uint64_t res = 0;
+
+    lmul = vector_get_lmul(env);
+    vector_lmul_check_reg(env, lmul, rs2, false);
+
+    if (vector_vtype_ill(env) || vector_overlap_vm_common(lmul, vm, rd)) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    if (env->vfp.vstart != 0) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    vl = env->vfp.vl;
+    if (vl == 0) {
+        return;
+    }
+
+    width = vector_get_width(env);
+    vlmax = vector_get_vlmax(env);
+
+    for (i = 0; i < VLEN / 64; i++) {
+        env->vfp.vreg[rd].u64[i] = 0;
+    }
+
+    for (i = 0; i < vlmax; i++) {
+        src2 = rs2 + (i / (VLEN / width));
+        j = i % (VLEN / width);
+
+        if (i < vl) {
+            switch (width) {
+            case 8:
+                if (i == 0) {
+                    res = env->vfp.vreg[rs1].u8[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    res &= env->vfp.vreg[src2].u8[j];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u8[0] = res;
+                }
+                break;
+            case 16:
+                if (i == 0) {
+                    res = env->vfp.vreg[rs1].u16[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    res &= env->vfp.vreg[src2].u16[j];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u16[0] = res;
+                }
+                break;
+            case 32:
+                if (i == 0) {
+                    res = env->vfp.vreg[rs1].u32[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    res &= env->vfp.vreg[src2].u32[j];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u32[0] = res;
+                }
+                break;
+            case 64:
+                if (i == 0) {
+                    res = env->vfp.vreg[rs1].u64[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    res &= env->vfp.vreg[src2].u64[j];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u64[0] = res;
+                }
+                break;
+            default:
+                riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+                return;
+            }
+        }
+    }
+    env->vfp.vstart = 0;
+    return;
+}
+
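+/*
+ * Note: the "unordered" sum below is in fact evaluated in ascending
+ * element order, which is why vfredosum (ordered sum) can reuse this
+ * helper.
+ */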
+/* vfredsum.vs vd, vs2, vs1, vm # Unordered sum */
+void VECTOR_HELPER(vfredsum_vs)(CPURISCVState *env, uint32_t vm, uint32_t rs1,
+                                uint32_t rs2, uint32_t rd)
+{
+    int width, lmul, vl, vlmax;
+    int i, j, src2;
+    float16 sum16 = 0.0f;
+    float32 sum32 = 0.0f;
+    float64 sum64 = 0.0f;
+
+    lmul = vector_get_lmul(env);
+    vector_lmul_check_reg(env, lmul, rs2, false);
+
+    if (vector_vtype_ill(env)) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    if (env->vfp.vstart != 0) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    vl = env->vfp.vl;
+    if (vl == 0) {
+        return;
+    }
+
+    width = vector_get_width(env);
+    vlmax = vector_get_vlmax(env);
+
+    for (i = 0; i < VLEN / 64; i++) {
+        env->vfp.vreg[rd].u64[i] = 0;
+    }
+
+    for (i = 0; i < vlmax; i++) {
+        src2 = rs2 + (i / (VLEN / width));
+        j = i % (VLEN / width);
+
+        if (i < vl) {
+            switch (width) {
+            case 16:
+                if (i == 0) {
+                    sum16 = env->vfp.vreg[rs1].f16[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    sum16 = float16_add(sum16, env->vfp.vreg[src2].f16[j],
+                                        &env->fp_status);
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].f16[0] = sum16;
+                }
+                break;
+            case 32:
+                if (i == 0) {
+                    sum32 = env->vfp.vreg[rs1].f32[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    sum32 = float32_add(sum32, env->vfp.vreg[src2].f32[j],
+                                        &env->fp_status);
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].f32[0] = sum32;
+                }
+                break;
+            case 64:
+                if (i == 0) {
+                    sum64 = env->vfp.vreg[rs1].f64[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    sum64 = float64_add(sum64, env->vfp.vreg[src2].f64[j],
+                                        &env->fp_status);
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].f64[0] = sum64;
+                }
+                break;
+            default:
+                riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+                return;
+            }
+        }
+    }
+    env->vfp.vstart = 0;
+    return;
+}
+
+/* vredor.vs vd, vs2, vs1, vm # vd[0] = or(vs1[0], vs2[*]) */
+void VECTOR_HELPER(vredor_vs)(CPURISCVState *env, uint32_t vm, uint32_t rs1,
+                              uint32_t rs2, uint32_t rd)
+{
+    int width, lmul, vl, vlmax;
+    int i, j, src2;
+    uint64_t res = 0;
+
+    lmul = vector_get_lmul(env);
+    vector_lmul_check_reg(env, lmul, rs2, false);
+
+    if (vector_vtype_ill(env) || vector_overlap_vm_common(lmul, vm, rd)) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    if (env->vfp.vstart != 0) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    vl = env->vfp.vl;
+    if (vl == 0) {
+        return;
+    }
+
+    width = vector_get_width(env);
+    vlmax = vector_get_vlmax(env);
+
+    for (i = 0; i < VLEN / 64; i++) {
+        env->vfp.vreg[rd].u64[i] = 0;
+    }
+
+    for (i = 0; i < vlmax; i++) {
+        src2 = rs2 + (i / (VLEN / width));
+        j = i % (VLEN / width);
+
+        if (i < vl) {
+            switch (width) {
+            case 8:
+                if (i == 0) {
+                    res = env->vfp.vreg[rs1].u8[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    res |= env->vfp.vreg[src2].u8[j];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u8[0] = res;
+                }
+                break;
+            case 16:
+                if (i == 0) {
+                    res = env->vfp.vreg[rs1].u16[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    res |= env->vfp.vreg[src2].u16[j];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u16[0] = res;
+                }
+                break;
+            case 32:
+                if (i == 0) {
+                    res = env->vfp.vreg[rs1].u32[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    res |= env->vfp.vreg[src2].u32[j];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u32[0] = res;
+                }
+                break;
+            case 64:
+                if (i == 0) {
+                    res = env->vfp.vreg[rs1].u64[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    res |= env->vfp.vreg[src2].u64[j];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u64[0] = res;
+                }
+                break;
+            default:
+                riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+                return;
+            }
+        }
+    }
+    env->vfp.vstart = 0;
+    return;
+}
+
+/* vredxor.vs vd, vs2, vs1, vm # vd[0] = xor(vs1[0], vs2[*]) */
+void VECTOR_HELPER(vredxor_vs)(CPURISCVState *env, uint32_t vm, uint32_t rs1,
+                               uint32_t rs2, uint32_t rd)
+{
+    int width, lmul, vl, vlmax;
+    int i, j, src2;
+    uint64_t res = 0;
+
+    lmul = vector_get_lmul(env);
+    vector_lmul_check_reg(env, lmul, rs2, false);
+
+    if (vector_vtype_ill(env) || vector_overlap_vm_common(lmul, vm, rd)) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    if (env->vfp.vstart != 0) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    vl = env->vfp.vl;
+    if (vl == 0) {
+        return;
+    }
+
+    width = vector_get_width(env);
+    vlmax = vector_get_vlmax(env);
+
+    for (i = 0; i < VLEN / 64; i++) {
+        env->vfp.vreg[rd].u64[i] = 0;
+    }
+
+    for (i = 0; i < vlmax; i++) {
+        src2 = rs2 + (i / (VLEN / width));
+        j = i % (VLEN / width);
+
+        if (i < vl) {
+            switch (width) {
+            case 8:
+                if (i == 0) {
+                    res = env->vfp.vreg[rs1].u8[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    res ^= env->vfp.vreg[src2].u8[j];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u8[0] = res;
+                }
+                break;
+            case 16:
+                if (i == 0) {
+                    res = env->vfp.vreg[rs1].u16[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    res ^= env->vfp.vreg[src2].u16[j];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u16[0] = res;
+                }
+                break;
+            case 32:
+                if (i == 0) {
+                    res = env->vfp.vreg[rs1].u32[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    res ^= env->vfp.vreg[src2].u32[j];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u32[0] = res;
+                }
+                break;
+            case 64:
+                if (i == 0) {
+                    res = env->vfp.vreg[rs1].u64[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    res ^= env->vfp.vreg[src2].u64[j];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u64[0] = res;
+                }
+                break;
+            default:
+                riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+                return;
+            }
+        }
+    }
+    env->vfp.vstart = 0;
+    return;
+}
+
+/* vfredosum.vs vd, vs2, vs1, vm # Ordered sum */
+void VECTOR_HELPER(vfredosum_vs)(CPURISCVState *env, uint32_t vm,
+                                 uint32_t rs1, uint32_t rs2, uint32_t rd)
+{
+    helper_vector_vfredsum_vs(env, vm, rs1, rs2, rd);
+    env->vfp.vstart = 0;
+    return;
+}
+
+/* vredminu.vs vd, vs2, vs1, vm # vd[0] = minu(vs1[0], vs2[*]) */
+void VECTOR_HELPER(vredminu_vs)(CPURISCVState *env, uint32_t vm, uint32_t rs1,
+                                uint32_t rs2, uint32_t rd)
+{
+    int width, lmul, vl, vlmax;
+    int i, j, src2;
+    uint64_t minu = 0;
+
+    lmul = vector_get_lmul(env);
+    vector_lmul_check_reg(env, lmul, rs2, false);
+
+    if (vector_vtype_ill(env)) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    if (env->vfp.vstart != 0) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    vl = env->vfp.vl;
+    if (vl == 0) {
+        return;
+    }
+
+    width = vector_get_width(env);
+    vlmax = vector_get_vlmax(env);
+
+    for (i = 0; i < VLEN / 64; i++) {
+        env->vfp.vreg[rd].u64[i] = 0;
+    }
+
+    for (i = 0; i < vlmax; i++) {
+        src2 = rs2 + (i / (VLEN / width));
+        j = i % (VLEN / width);
+
+        if (i < vl) {
+            switch (width) {
+            case 8:
+                if (i == 0) {
+                    minu = env->vfp.vreg[rs1].u8[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    if (minu > env->vfp.vreg[src2].u8[j]) {
+                        minu = env->vfp.vreg[src2].u8[j];
+                    }
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u8[0] = minu;
+                }
+                break;
+            case 16:
+                if (i == 0) {
+                    minu = env->vfp.vreg[rs1].u16[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    if (minu > env->vfp.vreg[src2].u16[j]) {
+                        minu = env->vfp.vreg[src2].u16[j];
+                    }
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u16[0] = minu;
+                }
+                break;
+            case 32:
+                if (i == 0) {
+                    minu = env->vfp.vreg[rs1].u32[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    if (minu > env->vfp.vreg[src2].u32[j]) {
+                        minu = env->vfp.vreg[src2].u32[j];
+                    }
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u32[0] = minu;
+                }
+                break;
+            case 64:
+                if (i == 0) {
+                    minu = env->vfp.vreg[rs1].u64[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    if (minu > env->vfp.vreg[src2].u64[j]) {
+                        minu = env->vfp.vreg[src2].u64[j];
+                    }
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u64[0] = minu;
+                }
+                break;
+            default:
+                riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+                return;
+            }
+        }
+    }
+    env->vfp.vstart = 0;
+    return;
+}
+
+/* vredmin.vs vd, vs2, vs1, vm # vd[0] = min(vs1[0], vs2[*]) */
+void VECTOR_HELPER(vredmin_vs)(CPURISCVState *env, uint32_t vm, uint32_t rs1,
+                               uint32_t rs2, uint32_t rd)
+{
+    int width, lmul, vl, vlmax;
+    int i, j, src2;
+    int64_t min = 0;
+
+    lmul = vector_get_lmul(env);
+    vector_lmul_check_reg(env, lmul, rs2, false);
+
+    if (vector_vtype_ill(env)) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    if (env->vfp.vstart != 0) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    vl = env->vfp.vl;
+    if (vl == 0) {
+        return;
+    }
+
+    width = vector_get_width(env);
+    vlmax = vector_get_vlmax(env);
+
+    for (i = 0; i < VLEN / 64; i++) {
+        env->vfp.vreg[rd].u64[i] = 0;
+    }
+
+    for (i = 0; i < vlmax; i++) {
+        src2 = rs2 + (i / (VLEN / width));
+        j = i % (VLEN / width);
+
+        if (i < vl) {
+            switch (width) {
+            case 8:
+                if (i == 0) {
+                    min = env->vfp.vreg[rs1].s8[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    if (min > env->vfp.vreg[src2].s8[j]) {
+                        min = env->vfp.vreg[src2].s8[j];
+                    }
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].s8[0] = min;
+                }
+                break;
+            case 16:
+                if (i == 0) {
+                    min = env->vfp.vreg[rs1].s16[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    if (min > env->vfp.vreg[src2].s16[j]) {
+                        min = env->vfp.vreg[src2].s16[j];
+                    }
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].s16[0] = min;
+                }
+                break;
+            case 32:
+                if (i == 0) {
+                    min = env->vfp.vreg[rs1].s32[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    if (min > env->vfp.vreg[src2].s32[j]) {
+                        min = env->vfp.vreg[src2].s32[j];
+                    }
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].s32[0] = min;
+                }
+                break;
+            case 64:
+                if (i == 0) {
+                    min = env->vfp.vreg[rs1].s64[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    if (min > env->vfp.vreg[src2].s64[j]) {
+                        min = env->vfp.vreg[src2].s64[j];
+                    }
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].s64[0] = min;
+                }
+                break;
+            default:
+                riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+                return;
+            }
+        }
+    }
+    env->vfp.vstart = 0;
+    return;
+}
+
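+/*
+ * The FP min/max reductions use the IEEE 754 minNum/maxNum operations,
+ * so when exactly one operand is a quiet NaN the numeric operand is
+ * propagated.
+ */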
+/* vfredmin.vs vd, vs2, vs1, vm # Minimum value */
+void VECTOR_HELPER(vfredmin_vs)(CPURISCVState *env, uint32_t vm, uint32_t rs1,
+                                uint32_t rs2, uint32_t rd)
+{
+    int width, lmul, vl, vlmax;
+    int i, j, src2;
+    float16 min16 = 0.0f;
+    float32 min32 = 0.0f;
+    float64 min64 = 0.0f;
+
+    lmul = vector_get_lmul(env);
+    vector_lmul_check_reg(env, lmul, rs2, false);
+
+    if (vector_vtype_ill(env)) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    if (env->vfp.vstart != 0) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    vl = env->vfp.vl;
+    if (vl == 0) {
+        return;
+    }
+
+    width = vector_get_width(env);
+    vlmax = vector_get_vlmax(env);
+
+    for (i = 0; i < VLEN / 64; i++) {
+        env->vfp.vreg[rd].u64[i] = 0;
+    }
+
+    for (i = 0; i < vlmax; i++) {
+        src2 = rs2 + (i / (VLEN / width));
+        j = i % (VLEN / width);
+
+        if (i < vl) {
+            switch (width) {
+            case 16:
+                if (i == 0) {
+                    min16 = env->vfp.vreg[rs1].f16[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    min16 = float16_minnum(min16, env->vfp.vreg[src2].f16[j],
+                                           &env->fp_status);
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].f16[0] = min16;
+                }
+                break;
+            case 32:
+                if (i == 0) {
+                    min32 = env->vfp.vreg[rs1].f32[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    min32 = float32_minnum(min32, env->vfp.vreg[src2].f32[j],
+                                           &env->fp_status);
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].f32[0] = min32;
+                }
+                break;
+            case 64:
+                if (i == 0) {
+                    min64 = env->vfp.vreg[rs1].f64[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    min64 = float64_minnum(min64, env->vfp.vreg[src2].f64[j],
+                                           &env->fp_status);
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].f64[0] = min64;
+                }
+                break;
+            default:
+                riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+                return;
+            }
+        }
+    }
+    env->vfp.vstart = 0;
+    return;
+}
+
+/* vredmaxu.vs vd, vs2, vs1, vm # vd[0] = maxu(vs1[0], vs2[*]) */
+void VECTOR_HELPER(vredmaxu_vs)(CPURISCVState *env, uint32_t vm, uint32_t rs1,
+                                uint32_t rs2, uint32_t rd)
+{
+    int width, lmul, vl, vlmax;
+    int i, j, src2;
+    uint64_t maxu = 0;
+
+    lmul = vector_get_lmul(env);
+    vector_lmul_check_reg(env, lmul, rs2, false);
+
+    if (vector_vtype_ill(env)) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    if (env->vfp.vstart != 0) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    vl = env->vfp.vl;
+    if (vl == 0) {
+        return;
+    }
+
+    width = vector_get_width(env);
+    vlmax = vector_get_vlmax(env);
+
+    for (i = 0; i < VLEN / 64; i++) {
+        env->vfp.vreg[rd].u64[i] = 0;
+    }
+
+    for (i = 0; i < vlmax; i++) {
+        src2 = rs2 + (i / (VLEN / width));
+        j = i % (VLEN / width);
+
+        if (i < vl) {
+            switch (width) {
+            case 8:
+                if (i == 0) {
+                    maxu = env->vfp.vreg[rs1].u8[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    if (maxu < env->vfp.vreg[src2].u8[j]) {
+                        maxu = env->vfp.vreg[src2].u8[j];
+                    }
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u8[0] = maxu;
+                }
+                break;
+            case 16:
+                if (i == 0) {
+                    maxu = env->vfp.vreg[rs1].u16[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    if (maxu < env->vfp.vreg[src2].u16[j]) {
+                        maxu = env->vfp.vreg[src2].u16[j];
+                    }
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u16[0] = maxu;
+                }
+                break;
+            case 32:
+                if (i == 0) {
+                    maxu = env->vfp.vreg[rs1].u32[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    if (maxu < env->vfp.vreg[src2].u32[j]) {
+                        maxu = env->vfp.vreg[src2].u32[j];
+                    }
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u32[0] = maxu;
+                }
+                break;
+            case 64:
+                if (i == 0) {
+                    maxu = env->vfp.vreg[rs1].u64[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    if (maxu < env->vfp.vreg[src2].u64[j]) {
+                        maxu = env->vfp.vreg[src2].u64[j];
+                    }
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u64[0] = maxu;
+                }
+                break;
+            default:
+                riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+                return;
+            }
+        }
+    }
+    env->vfp.vstart = 0;
+    return;
+}
+
+/* vredmax.vs vd, vs2, vs1, vm # vd[0] = max(vs1[0], vs2[*]) */
+void VECTOR_HELPER(vredmax_vs)(CPURISCVState *env, uint32_t vm, uint32_t rs1,
+                               uint32_t rs2, uint32_t rd)
+{
+    int width, lmul, vl, vlmax;
+    int i, j, src2;
+    int64_t max = 0;
+
+    lmul = vector_get_lmul(env);
+    vector_lmul_check_reg(env, lmul, rs2, false);
+
+    if (vector_vtype_ill(env)) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    if (env->vfp.vstart != 0) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    vl = env->vfp.vl;
+    if (vl == 0) {
+        return;
+    }
+
+    width = vector_get_width(env);
+    vlmax = vector_get_vlmax(env);
+
+    for (i = 0; i < VLEN / 64; i++) {
+        env->vfp.vreg[rd].u64[i] = 0;
+    }
+
+    for (i = 0; i < vlmax; i++) {
+        src2 = rs2 + (i / (VLEN / width));
+        j = i % (VLEN / width);
+
+        if (i < vl) {
+            switch (width) {
+            case 8:
+                if (i == 0) {
+                    max = env->vfp.vreg[rs1].s8[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    if (max < env->vfp.vreg[src2].s8[j]) {
+                        max = env->vfp.vreg[src2].s8[j];
+                    }
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].s8[0] = max;
+                }
+                break;
+            case 16:
+                if (i == 0) {
+                    max = env->vfp.vreg[rs1].s16[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    if (max < env->vfp.vreg[src2].s16[j]) {
+                        max = env->vfp.vreg[src2].s16[j];
+                    }
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].s16[0] = max;
+                }
+                break;
+            case 32:
+                if (i == 0) {
+                    max = env->vfp.vreg[rs1].s32[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    if (max < env->vfp.vreg[src2].s32[j]) {
+                        max = env->vfp.vreg[src2].s32[j];
+                    }
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].s32[0] = max;
+                }
+                break;
+            case 64:
+                if (i == 0) {
+                    max = env->vfp.vreg[rs1].s64[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    if (max < env->vfp.vreg[src2].s64[j]) {
+                        max = env->vfp.vreg[src2].s64[j];
+                    }
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].s64[0] = max;
+                }
+                break;
+            default:
+                riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+                return;
+            }
+        }
+    }
+    env->vfp.vstart = 0;
+    return;
+}
+
+/* vfredmax.vs vd, vs2, vs1, vm # Maximum value */
+void VECTOR_HELPER(vfredmax_vs)(CPURISCVState *env, uint32_t vm, uint32_t rs1,
+                                uint32_t rs2, uint32_t rd)
+{
+    int width, lmul, vl, vlmax;
+    int i, j, src2;
+    float16 max16 = 0.0f;
+    float32 max32 = 0.0f;
+    float64 max64 = 0.0f;
+
+    lmul = vector_get_lmul(env);
+    vector_lmul_check_reg(env, lmul, rs2, false);
+
+    if (vector_vtype_ill(env)) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    if (env->vfp.vstart != 0) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    vl = env->vfp.vl;
+    if (vl == 0) {
+        return;
+    }
+
+    width = vector_get_width(env);
+    vlmax = vector_get_vlmax(env);
+
+    for (i = 0; i < VLEN / 64; i++) {
+        env->vfp.vreg[rd].u64[i] = 0;
+    }
+
+    for (i = 0; i < vlmax; i++) {
+        src2 = rs2 + (i / (VLEN / width));
+        j = i % (VLEN / width);
+
+        if (i < vl) {
+            switch (width) {
+            case 16:
+                if (i == 0) {
+                    max16 = env->vfp.vreg[rs1].f16[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    max16 = float16_maxnum(max16, env->vfp.vreg[src2].f16[j],
+                                           &env->fp_status);
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].f16[0] = max16;
+                }
+                break;
+            case 32:
+                if (i == 0) {
+                    max32 = env->vfp.vreg[rs1].f32[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    max32 = float32_maxnum(max32, env->vfp.vreg[src2].f32[j],
+                                           &env->fp_status);
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].f32[0] = max32;
+                }
+                break;
+            case 64:
+                if (i == 0) {
+                    max64 = env->vfp.vreg[rs1].f64[0];
+                }
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    max64 = float64_maxnum(max64, env->vfp.vreg[src2].f64[j],
+                                           &env->fp_status);
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].f64[0] = max64;
+                }
+                break;
+            default:
+                riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+                return;
+            }
+        }
+    }
+    env->vfp.vstart = 0;
+    return;
+}
+
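+/*
+ * Widening reductions accumulate at 2*SEW. For SEW = 8 the model is:
+ *     int16_t sum = vs1[0];
+ *     for (i = 0; i < vl; i++)
+ *         if (active(i)) sum += (int16_t)vs2[i];  // extend SEW -> 2*SEW
+ *     vd[0] = sum;
+ * In vwredsum below, the shift pair "(intN_t)x << k >> k" is an
+ * explicit sign extension of the narrower source element.
+ */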
+/* vwredsumu.vs vd, vs2, vs1, vm # 2*SEW = 2*SEW + sum(zero-extend(SEW)) */
+void VECTOR_HELPER(vwredsumu_vs)(CPURISCVState *env, uint32_t vm,
+                                 uint32_t rs1, uint32_t rs2, uint32_t rd)
+{
+    int width, lmul, vl, vlmax;
+    int i, j, src2;
+    uint64_t sum = 0;
+
+    lmul = vector_get_lmul(env);
+    vector_lmul_check_reg(env, lmul, rs2, false);
+
+    if (vector_vtype_ill(env)) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    if (env->vfp.vstart != 0) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    vl = env->vfp.vl;
+    if (vl == 0) {
+        return;
+    }
+
+    width = vector_get_width(env);
+    vlmax = vector_get_vlmax(env);
+
+    for (i = 0; i < VLEN / 64; i++) {
+        env->vfp.vreg[rd].u64[i] = 0;
+    }
+
+    for (i = 0; i < vlmax; i++) {
+        src2 = rs2 + (i / (VLEN / width));
+        j = i % (VLEN / width);
+
+        if (i < vl) {
+            switch (width) {
+            case 8:
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    sum += env->vfp.vreg[src2].u8[j];
+                }
+                if (i == 0) {
+                    sum += env->vfp.vreg[rs1].u16[0];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u16[0] = sum;
+                }
+                break;
+            case 16:
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    sum += env->vfp.vreg[src2].u16[j];
+                }
+                if (i == 0) {
+                    sum += env->vfp.vreg[rs1].u32[0];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u32[0] = sum;
+                }
+                break;
+            case 32:
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    sum += env->vfp.vreg[src2].u32[j];
+                }
+                if (i == 0) {
+                    sum += env->vfp.vreg[rs1].u64[0];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].u64[0] = sum;
+                }
+                break;
+            default:
+                riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+                return;
+            }
+        }
+    }
+    env->vfp.vstart = 0;
+    return;
+}
+
+/* vwredsum.vs vd, vs2, vs1, vm # 2*SEW = 2*SEW + sum(sign-extend(SEW)) */
+void VECTOR_HELPER(vwredsum_vs)(CPURISCVState *env, uint32_t vm, uint32_t rs1,
+                                uint32_t rs2, uint32_t rd)
+{
+    int width, lmul, vl, vlmax;
+    int i, j, src2;
+    int64_t sum = 0;
+
+    lmul = vector_get_lmul(env);
+    vector_lmul_check_reg(env, lmul, rs2, false);
+
+    if (vector_vtype_ill(env)) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    if (env->vfp.vstart != 0) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+        return;
+    }
+    vl = env->vfp.vl;
+    if (vl == 0) {
+        return;
+    }
+
+    width = vector_get_width(env);
+    vlmax = vector_get_vlmax(env);
+
+    for (i = 0; i < VLEN / 64; i++) {
+        env->vfp.vreg[rd].u64[i] = 0;
+    }
+
+    for (i = 0; i < vlmax; i++) {
+        src2 = rs2 + (i / (VLEN / width));
+        j = i % (VLEN / width);
+
+        if (i < vl) {
+            switch (width) {
+            case 8:
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    sum += (int16_t)env->vfp.vreg[src2].s8[j] << 8 >> 8;
+                }
+                if (i == 0) {
+                    sum += env->vfp.vreg[rs1].s16[0];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].s16[0] = sum;
+                }
+                break;
+            case 16:
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    sum += (int32_t)env->vfp.vreg[src2].s16[j] << 16 >> 16;
+                }
+                if (i == 0) {
+                    sum += env->vfp.vreg[rs1].s32[0];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].s32[0] = sum;
+                }
+                break;
+            case 32:
+                if (vector_elem_mask(env, vm, width, lmul, i)) {
+                    sum += (int64_t)env->vfp.vreg[src2].s32[j] << 32 >> 32;
+                }
+                if (i == 0) {
+                    sum += env->vfp.vreg[rs1].s64[0];
+                }
+                if (i == vl - 1) {
+                    env->vfp.vreg[rd].s64[0] = sum;
+                }
+                break;
+            default:
+                riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+                return;
+            }
+        }
+    }
+    env->vfp.vstart = 0;
+    return;
+}
+
return; + } + vl =3D env->vfp.vl; + if (vl =3D=3D 0) { + return; + } + + width =3D vector_get_width(env); + vlmax =3D vector_get_vlmax(env); + + for (i =3D 0; i < VLEN / 64; i++) { + env->vfp.vreg[rd].u64[i] =3D 0; + } + + for (i =3D 0; i < vlmax; i++) { + src2 =3D rs2 + (i / (VLEN / width)); + j =3D i % (VLEN / width); + + if (i < vl) { + switch (width) { + case 16: + if (i =3D=3D 0) { + sum32 =3D env->vfp.vreg[rs1].f32[0]; + } + if (vector_elem_mask(env, vm, width, lmul, i)) { + sum32 =3D float32_add(sum32, + float16_to_float32(env->vfp.vreg[src2].f16= [j], + true, &env->fp_status), + &env->fp_status); + } + if (i =3D=3D vl - 1) { + env->vfp.vreg[rd].f32[0] =3D sum32; + } + break; + case 32: + if (i =3D=3D 0) { + sum64 =3D env->vfp.vreg[rs1].f64[0]; + } + if (vector_elem_mask(env, vm, width, lmul, i)) { + sum64 =3D float64_add(sum64, + float32_to_float64(env->vfp.vreg[src2].f32= [j], + &env->fp_status), + &env->fp_status); + } + if (i =3D=3D vl - 1) { + env->vfp.vreg[rd].f64[0] =3D sum64; + } + break; + default: + riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC(= )); + return; + } + } + } + env->vfp.vstart =3D 0; + return; +} + +/* + * vfwredosum.vs vd, vs2, vs1, vm # + * Ordered reduce 2*SEW =3D 2*SEW + sum(promote(SEW)) + */ +void VECTOR_HELPER(vfwredosum_vs)(CPURISCVState *env, uint32_t vm, + uint32_t rs1, uint32_t rs2, uint32_t rd) +{ + helper_vector_vfwredsum_vs(env, vm, rs1, rs2, rd); + env->vfp.vstart =3D 0; + return; +} --=20 2.7.4