From nobody Sat Nov 15 07:22:29 2025
From: Chao Liu <chao.liu@yeah.net>
To: richard.henderson@linaro.org, paolo.savini@embecosm.com,
    ebiggers@kernel.org, dbarboza@ventanamicro.com, palmer@dabbelt.com,
    alistair.francis@wdc.com, liwei1518@gmail.com,
    zhiwei_liu@linux.alibaba.com
Cc: qemu-riscv@nongnu.org, qemu-devel@nongnu.org, Chao Liu <chao.liu@yeah.net>
Subject: [PATCH v4 1/2] target/riscv: Generate strided vector loads/stores with tcg nodes.
Date: Sat, 16 Aug 2025 16:56:35 +0800
Message-ID: <98251cbcb170a4124642fc6e924bfad199c5b0b1.1755333616.git.chao.liu@yeah.net>

This commit improves the performance of QEMU when emulating strided vector
loads and stores by substituting the call to the helper function with the
generation of equivalent TCG operations.

Signed-off-by: Paolo Savini <paolo.savini@embecosm.com>
Signed-off-by: Chao Liu <chao.liu@yeah.net>
Tested-by: Eric Biggers <ebiggers@kernel.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
---
 target/riscv/insn_trans/trans_rvv.c.inc | 319 ++++++++++++++++++++----
 1 file changed, 269 insertions(+), 50 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index 71f98fb350..341e392064 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -864,32 +864,282 @@ GEN_VEXT_TRANS(vlm_v, MO_8, vlm_v, ld_us_mask_op, ld_us_mask_check)
 GEN_VEXT_TRANS(vsm_v, MO_8, vsm_v, st_us_mask_op, st_us_mask_check)
 
 /*
- *** stride load and store
+ * MAXSZ returns the maximum vector size can be operated in bytes,
+ * which is used in GVEC IR when vl_eq_vlmax flag is set to true
+ * to accelerate vector operation.
+ */
+static inline uint32_t MAXSZ(DisasContext *s)
+{
+    int max_sz = s->cfg_ptr->vlenb << 3;
+    return max_sz >> (3 - s->lmul);
+}
+
+static inline uint32_t get_log2(uint32_t a)
+{
+    assert(is_power_of_2(a));
+    return ctz32(a);
+}
+
+typedef void gen_tl_ldst(TCGv, TCGv_ptr, tcg_target_long);
+
+/*
+ * Simulate the strided load/store main loop:
+ *
+ * for (i = env->vstart; i < env->vl; env->vstart = ++i) {
+ *     k = 0;
+ *     while (k < nf) {
+ *         if (!vm && !vext_elem_mask(v0, i)) {
+ *             vext_set_elems_1s(vd, vma, (i + k * max_elems) * esz,
+ *                               (i + k * max_elems + 1) * esz);
+ *             k++;
+ *             continue;
+ *         }
+ *         target_ulong addr = base + stride * i + (k << log2_esz);
+ *         ldst(env, adjust_addr(env, addr), i + k * max_elems, vd, ra);
+ *         k++;
+ *     }
+ * }
  */
-typedef void gen_helper_ldst_stride(TCGv_ptr, TCGv_ptr, TCGv,
-                                    TCGv, TCGv_env, TCGv_i32);
+static void gen_ldst_stride_main_loop(DisasContext *s, TCGv dest, uint32_t rs1,
+                                      uint32_t rs2, uint32_t vm, uint32_t nf,
+                                      gen_tl_ldst *ld_fn, gen_tl_ldst *st_fn,
+                                      bool is_load)
+{
+    TCGv addr = tcg_temp_new();
+    TCGv base = get_gpr(s, rs1, EXT_NONE);
+    TCGv stride = get_gpr(s, rs2, EXT_NONE);
+
+    TCGv i = tcg_temp_new();
+    TCGv i_esz = tcg_temp_new();
+    TCGv k = tcg_temp_new();
+    TCGv k_esz = tcg_temp_new();
+    TCGv k_max = tcg_temp_new();
+    TCGv mask = tcg_temp_new();
+    TCGv mask_offs = tcg_temp_new();
+    TCGv mask_offs_64 = tcg_temp_new();
+    TCGv mask_elem = tcg_temp_new();
+    TCGv mask_offs_rem = tcg_temp_new();
+    TCGv vreg = tcg_temp_new();
+    TCGv dest_offs = tcg_temp_new();
+    TCGv stride_offs = tcg_temp_new();
+
+    uint32_t max_elems = MAXSZ(s) >> s->sew;
+
+    TCGLabel *start = gen_new_label();
+    TCGLabel *end = gen_new_label();
+    TCGLabel *start_k = gen_new_label();
+    TCGLabel *inc_k = gen_new_label();
+    TCGLabel *end_k = gen_new_label();
+
+    MemOp atomicity = MO_ATOM_NONE;
+    if (s->sew == 0) {
+        atomicity = MO_ATOM_NONE;
+    } else {
+        atomicity = MO_ATOM_IFALIGN_PAIR;
+    }
+
+    mark_vs_dirty(s);
+
+    tcg_gen_addi_tl(mask, (TCGv)tcg_env, vreg_ofs(s, 0));
+
+    /* Start of outer loop. */
+    tcg_gen_mov_tl(i, cpu_vstart);
+    gen_set_label(start);
+    tcg_gen_brcond_tl(TCG_COND_GE, i, cpu_vl, end);
+    tcg_gen_shli_tl(i_esz, i, s->sew);
+    /* Start of inner loop. */
+    tcg_gen_movi_tl(k, 0);
+    gen_set_label(start_k);
+    tcg_gen_brcond_tl(TCG_COND_GE, k, tcg_constant_tl(nf), end_k);
+    /*
+     * If we are in mask agnostic regime and the operation is not unmasked we
+     * set the inactive elements to 1.
+     */
+    if (!vm && s->vma) {
+        TCGLabel *active_element = gen_new_label();
+        /* (i + k * max_elems) * esz */
+        tcg_gen_shli_tl(mask_offs, k, get_log2(max_elems << s->sew));
+        tcg_gen_add_tl(mask_offs, mask_offs, i_esz);
+
+        /*
+         * Check whether the i bit of the mask is 0 or 1.
+         *
+         * static inline int vext_elem_mask(void *v0, int index)
+         * {
+         *     int idx = index / 64;
+         *     int pos = index % 64;
+         *     return (((uint64_t *)v0)[idx] >> pos) & 1;
+         * }
+         */
+        tcg_gen_shri_tl(mask_offs_64, mask_offs, 3);
+        tcg_gen_add_tl(mask_offs_64, mask_offs_64, mask);
+        tcg_gen_ld_i64((TCGv_i64)mask_elem, (TCGv_ptr)mask_offs_64, 0);
+        tcg_gen_rem_tl(mask_offs_rem, mask_offs, tcg_constant_tl(8));
+        tcg_gen_shr_tl(mask_elem, mask_elem, mask_offs_rem);
+        tcg_gen_andi_tl(mask_elem, mask_elem, 1);
+        tcg_gen_brcond_tl(TCG_COND_NE, mask_elem, tcg_constant_tl(0),
+                          active_element);
+        /*
+         * Set masked-off elements in the destination vector register to 1s.
+         * Store instructions simply skip this bit as memory ops access memory
+         * only for active elements.
+         */
+        if (is_load) {
+            tcg_gen_shli_tl(mask_offs, mask_offs, s->sew);
+            tcg_gen_add_tl(mask_offs, mask_offs, dest);
+            st_fn(tcg_constant_tl(-1), (TCGv_ptr)mask_offs, 0);
+        }
+        tcg_gen_br(inc_k);
+        gen_set_label(active_element);
+    }
+    /*
+     * The element is active, calculate the address with stride:
+     * target_ulong addr = base + stride * i + (k << log2_esz);
+     */
+    tcg_gen_mul_tl(stride_offs, stride, i);
+    tcg_gen_shli_tl(k_esz, k, s->sew);
+    tcg_gen_add_tl(stride_offs, stride_offs, k_esz);
+    tcg_gen_add_tl(addr, base, stride_offs);
+    /* Calculate the offset in the dst/src vector register. */
+    tcg_gen_shli_tl(k_max, k, get_log2(max_elems));
+    tcg_gen_add_tl(dest_offs, i, k_max);
+    tcg_gen_shli_tl(dest_offs, dest_offs, s->sew);
+    tcg_gen_add_tl(dest_offs, dest_offs, dest);
+    if (is_load) {
+        tcg_gen_qemu_ld_tl(vreg, addr, s->mem_idx, MO_LE | s->sew | atomicity);
+        st_fn((TCGv)vreg, (TCGv_ptr)dest_offs, 0);
+    } else {
+        ld_fn((TCGv)vreg, (TCGv_ptr)dest_offs, 0);
+        tcg_gen_qemu_st_tl(vreg, addr, s->mem_idx, MO_LE | s->sew | atomicity);
+    }
+    /*
+     * We don't execute the load/store above if the element was inactive.
+     * We jump instead directly to incrementing k and continuing the loop.
+     */
+    if (!vm && s->vma) {
+        gen_set_label(inc_k);
+    }
+    tcg_gen_addi_tl(k, k, 1);
+    tcg_gen_br(start_k);
+    /* End of the inner loop. */
+    gen_set_label(end_k);
+
+    tcg_gen_addi_tl(i, i, 1);
+    tcg_gen_mov_tl(cpu_vstart, i);
+    tcg_gen_br(start);
+
+    /* End of the outer loop. */
+    gen_set_label(end);
+
+    return;
+}
+
+
+/*
+ * Set the tail bytes of the strided loads/stores to 1:
+ *
+ * for (k = 0; k < nf; ++k) {
+ *     cnt = (k * max_elems + vl) * esz;
+ *     tot = (k * max_elems + max_elems) * esz;
+ *     for (i = cnt; i < tot; i += esz) {
+ *         store_1s(-1, vd[vl+i]);
+ *     }
+ * }
+ */
+static void gen_ldst_stride_tail_loop(DisasContext *s, TCGv dest, uint32_t nf,
+                                      gen_tl_ldst *st_fn)
+{
+    TCGv i = tcg_temp_new();
+    TCGv k = tcg_temp_new();
+    TCGv tail_cnt = tcg_temp_new();
+    TCGv tail_tot = tcg_temp_new();
+    TCGv tail_addr = tcg_temp_new();
+
+    TCGLabel *start = gen_new_label();
+    TCGLabel *end = gen_new_label();
+    TCGLabel *start_i = gen_new_label();
+    TCGLabel *end_i = gen_new_label();
+
+    uint32_t max_elems_b = MAXSZ(s);
+    uint32_t esz = 1 << s->sew;
+
+    /* Start of the outer loop. */
+    tcg_gen_movi_tl(k, 0);
+    tcg_gen_shli_tl(tail_cnt, cpu_vl, s->sew);
+    tcg_gen_movi_tl(tail_tot, max_elems_b);
+    tcg_gen_add_tl(tail_addr, dest, tail_cnt);
+    gen_set_label(start);
+    tcg_gen_brcond_tl(TCG_COND_GE, k, tcg_constant_tl(nf), end);
+    /* Start of the inner loop. */
+    tcg_gen_mov_tl(i, tail_cnt);
+    gen_set_label(start_i);
+    tcg_gen_brcond_tl(TCG_COND_GE, i, tail_tot, end_i);
+    /* store_1s(-1, vd[vl+i]); */
+    st_fn(tcg_constant_tl(-1), (TCGv_ptr)tail_addr, 0);
+    tcg_gen_addi_tl(tail_addr, tail_addr, esz);
+    tcg_gen_addi_tl(i, i, esz);
+    tcg_gen_br(start_i);
+    /* End of the inner loop. */
+    gen_set_label(end_i);
+    /* Update the counts */
+    tcg_gen_addi_tl(tail_cnt, tail_cnt, max_elems_b);
+    tcg_gen_addi_tl(tail_tot, tail_cnt, max_elems_b);
+    tcg_gen_addi_tl(k, k, 1);
+    tcg_gen_br(start);
+    /* End of the outer loop. */
+    gen_set_label(end);
+
+    return;
+}
 
 static bool ldst_stride_trans(uint32_t vd, uint32_t rs1, uint32_t rs2,
-                              uint32_t data, gen_helper_ldst_stride *fn,
-                              DisasContext *s)
+                              uint32_t data, DisasContext *s, bool is_load)
 {
-    TCGv_ptr dest, mask;
-    TCGv base, stride;
-    TCGv_i32 desc;
+    if (!s->vstart_eq_zero) {
+        return false;
+    }
 
-    dest = tcg_temp_new_ptr();
-    mask = tcg_temp_new_ptr();
-    base = get_gpr(s, rs1, EXT_NONE);
-    stride = get_gpr(s, rs2, EXT_NONE);
-    desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlenb,
-                                      s->cfg_ptr->vlenb, data));
+    TCGv dest = tcg_temp_new();
 
-    tcg_gen_addi_ptr(dest, tcg_env, vreg_ofs(s, vd));
-    tcg_gen_addi_ptr(mask, tcg_env, vreg_ofs(s, 0));
+    uint32_t nf = FIELD_EX32(data, VDATA, NF);
+    uint32_t vm = FIELD_EX32(data, VDATA, VM);
+
+    /* Destination register and mask register */
+    tcg_gen_addi_tl(dest, (TCGv)tcg_env, vreg_ofs(s, vd));
+
+    /*
+     * Select the appropriate load/store to retrieve data from the vector
+     * register given a specific sew.
+     */
+    static gen_tl_ldst * const ld_fns[4] = {
+        tcg_gen_ld8u_tl, tcg_gen_ld16u_tl,
+        tcg_gen_ld32u_tl, tcg_gen_ld_tl
+    };
+
+    static gen_tl_ldst * const st_fns[4] = {
+        tcg_gen_st8_tl, tcg_gen_st16_tl,
+        tcg_gen_st32_tl, tcg_gen_st_tl
+    };
+
+    gen_tl_ldst *ld_fn = ld_fns[s->sew];
+    gen_tl_ldst *st_fn = st_fns[s->sew];
+
+    if (ld_fn == NULL || st_fn == NULL) {
+        return false;
+    }
 
     mark_vs_dirty(s);
 
-    fn(dest, mask, base, stride, tcg_env, desc);
+    gen_ldst_stride_main_loop(s, dest, rs1, rs2, vm, nf, ld_fn, st_fn, is_load);
+
+    tcg_gen_movi_tl(cpu_vstart, 0);
+
+    /*
+     * Set the tail bytes to 1 if tail agnostic:
+     */
+    if (s->vta != 0 && is_load) {
+        gen_ldst_stride_tail_loop(s, dest, nf, st_fn);
+    }
 
     finalize_rvv_inst(s);
     return true;
@@ -898,16 +1148,6 @@ static bool ldst_stride_trans(uint32_t vd, uint32_t rs1, uint32_t rs2,
 static bool ld_stride_op(DisasContext *s, arg_rnfvm *a, uint8_t eew)
 {
     uint32_t data = 0;
-    gen_helper_ldst_stride *fn;
-    static gen_helper_ldst_stride * const fns[4] = {
-        gen_helper_vlse8_v, gen_helper_vlse16_v,
-        gen_helper_vlse32_v, gen_helper_vlse64_v
-    };
-
-    fn = fns[eew];
-    if (fn == NULL) {
-        return false;
-    }
 
     uint8_t emul = vext_get_emul(s, eew);
     data = FIELD_DP32(data, VDATA, VM, a->vm);
@@ -915,7 +1155,7 @@ static bool ld_stride_op(DisasContext *s, arg_rnfvm *a, uint8_t eew)
     data = FIELD_DP32(data, VDATA, NF, a->nf);
     data = FIELD_DP32(data, VDATA, VTA, s->vta);
     data = FIELD_DP32(data, VDATA, VMA, s->vma);
-    return ldst_stride_trans(a->rd, a->rs1, a->rs2, data, fn, s);
+    return ldst_stride_trans(a->rd, a->rs1, a->rs2, data, s, true);
 }
 
 static bool ld_stride_check(DisasContext *s, arg_rnfvm* a, uint8_t eew)
@@ -933,23 +1173,13 @@ GEN_VEXT_TRANS(vlse64_v, MO_64, rnfvm, ld_stride_op, ld_stride_check)
 static bool st_stride_op(DisasContext *s, arg_rnfvm *a, uint8_t eew)
 {
     uint32_t data = 0;
-    gen_helper_ldst_stride *fn;
-    static gen_helper_ldst_stride * const fns[4] = {
-        /* masked stride store */
-        gen_helper_vsse8_v, gen_helper_vsse16_v,
-        gen_helper_vsse32_v, gen_helper_vsse64_v
-    };
 
     uint8_t emul = vext_get_emul(s, eew);
     data = FIELD_DP32(data, VDATA, VM, a->vm);
     data = FIELD_DP32(data, VDATA, LMUL, emul);
     data = FIELD_DP32(data, VDATA, NF, a->nf);
-    fn = fns[eew];
-    if (fn == NULL) {
-        return false;
-    }
 
-    return ldst_stride_trans(a->rd, a->rs1, a->rs2, data, fn, s);
+    return ldst_stride_trans(a->rd, a->rs1, a->rs2, data, s, false);
 }
 
 static bool st_stride_check(DisasContext *s, arg_rnfvm* a, uint8_t eew)
@@ -1300,17 +1530,6 @@ GEN_LDST_WHOLE_TRANS(vs8r_v, int8_t, 8, false)
  *** Vector Integer Arithmetic Instructions
  */
 
-/*
- * MAXSZ returns the maximum vector size can be operated in bytes,
- * which is used in GVEC IR when vl_eq_vlmax flag is set to true
- * to accelerate vector operation.
- */
-static inline uint32_t MAXSZ(DisasContext *s)
-{
-    int max_sz = s->cfg_ptr->vlenb * 8;
-    return max_sz >> (3 - s->lmul);
-}
-
 static bool opivv_check(DisasContext *s, arg_rmrr *a)
 {
     return require_rvv(s) &&
-- 
2.50.1
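
For reference, the element-level semantics that the generated TCG loop mirrors
(the same loop sketched in the comment above gen_ldst_stride_main_loop) can be
written as plain C. This is only an illustrative sketch for readers; the names
strided_load_ref and guest_load are placeholders, not QEMU APIs, and the
tail-agnostic handling is omitted:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /*
     * Illustrative rendering of a strided, segmented (nf fields) vector load.
     * vd is the destination register group, v0 the mask register, and
     * guest_load stands in for the guest memory access of one element.
     */
    static void strided_load_ref(uint8_t *vd, const uint64_t *v0,
                                 uint64_t base, int64_t stride,
                                 uint32_t vstart, uint32_t vl, uint32_t nf,
                                 uint32_t esz, uint32_t max_elems,
                                 bool vm, bool vma,
                                 void (*guest_load)(uint64_t addr, void *dst,
                                                    uint32_t esz))
    {
        for (uint32_t i = vstart; i < vl; i++) {
            for (uint32_t k = 0; k < nf; k++) {
                uint32_t elem = i + k * max_elems;
                /* Inactive element: mask-agnostic mode may write all 1s. */
                if (!vm && !((v0[i / 64] >> (i % 64)) & 1)) {
                    if (vma) {
                        memset(vd + elem * esz, 0xff, esz);
                    }
                    continue;
                }
                /* addr = base + stride * i + (k << log2_esz) */
                guest_load(base + (uint64_t)stride * i + k * esz,
                           vd + elem * esz, esz);
            }
        }
    }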