On 1/13/24 08:38, Daniel Henrique Barboza wrote:
> Use the new 'vlenb' CPU config to validate fractional LMUL. The original
> comparison is done with 'vlen' and 'sew', both in bits. Use sew/8, or
> sew in bytes, to do a direct comparison with vlenb.
>
> Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
> ---
> target/riscv/vector_helper.c | 9 ++++++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
> index cb944229b0..0c1a485d1e 100644
> --- a/target/riscv/vector_helper.c
> +++ b/target/riscv/vector_helper.c
> @@ -45,9 +45,12 @@ target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
> xlen - 1 - R_VTYPE_RESERVED_SHIFT);
>
> if (lmul & 4) {
> - /* Fractional LMUL - check LMUL * VLEN >= SEW */
> - if (lmul == 4 ||
> - cpu->cfg.vlen >> (8 - lmul) < sew) {
> + /*
> + * Fractional LMUL - check LMUL * VLEN >= SEW, or
> + * vlen >> (8 - lmul) < sew. We have 'vlenb', so
> + * compare it with sew in bytes.
> + */
> + if (lmul == 4 || cpu->cfg.vlenb >> (8 - lmul) < (sew << 3)) {
You're shifting sew in the wrong direction: '<< 3' multiplies by 8, but vlenb is in bytes, so the comparison would need sew in bytes, i.e. 'sew >> 3'. But better to simply adjust the vlenb shift:
- cpu->cfg.vlen >> (8 - lmul) < sew
+ cpu->cfg.vlenb >> (8 - 3 - lmul) < sew
r~