[PATCH] target/riscv/vector_helper.c: Avoid shifting negative in fractional LMUL checking

Max Chou posted 1 patch 8 months, 3 weeks ago
Patches applied successfully
git fetch https://github.com/patchew-project/qemu tags/patchew/20240306161036.938931-1-max.chou@sifive.com
Maintainers: Palmer Dabbelt <palmer@dabbelt.com>, Alistair Francis <alistair.francis@wdc.com>, Bin Meng <bin.meng@windriver.com>, Weiwei Li <liwei1518@gmail.com>, Daniel Henrique Barboza <dbarboza@ventanamicro.com>, Liu Zhiwei <zhiwei_liu@linux.alibaba.com>
[PATCH] target/riscv/vector_helper.c: Avoid shifting negative in fractional LMUL checking
Posted by Max Chou 8 months, 3 weeks ago
When vlmul is larger than 5, the shift count in the original fractional
LMUL check goes negative, which is undefined behavior in C and may give
an unexpected result.

Signed-off-by: Max Chou <max.chou@sifive.com>
---
 target/riscv/vector_helper.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 84cec73eb20..adceec378fd 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -53,10 +53,9 @@ target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
          * VLEN * LMUL >= SEW
          * VLEN >> (8 - lmul) >= sew
          * (vlenb << 3) >> (8 - lmul) >= sew
-         * vlenb >> (8 - 3 - lmul) >= sew
          */
         if (vlmul == 4 ||
-            cpu->cfg.vlenb >> (8 - 3 - vlmul) < sew) {
+            ((cpu->cfg.vlenb << 3) >> (8 - vlmul)) < sew) {
             vill = true;
         }
     }
-- 
2.34.1
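For context, the derivation in the comment relies on fractional LMUL being
2^(vlmul - 8) for vlmul in {5, 6, 7}, so VLEN * LMUL >= SEW is equivalent to
VLEN >> (8 - vlmul) >= SEW. The deleted folding into
vlenb >> (8 - 3 - vlmul) is only valid while 8 - 3 - vlmul >= 0, i.e.
vlmul <= 5. The standalone sketch below is not QEMU code; vlenb = 16
(i.e. VLEN = 128 bits) and SEW = 8 are assumed example values chosen to
print the shift counts for each fractional encoding:

/*
 * Standalone sketch of the shift-count issue. For vlmul = 6 and 7 the
 * removed form "vlenb >> (8 - 3 - vlmul)" shifts by a negative count,
 * which is undefined behavior in C; the form kept by the patch always
 * shifts by 8 - vlmul, i.e. 3, 2 or 1.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint32_t vlenb = 16; /* assumed: VLEN = 128 bits */
    const uint32_t sew = 8;    /* assumed: SEW = 8 bits */

    for (int vlmul = 5; vlmul <= 7; vlmul++) {
        int folded_count = 8 - 3 - vlmul; /* 0, -1, -2: UB when < 0 */
        int safe_count = 8 - vlmul;       /* 3, 2, 1: always valid   */
        int ok = ((vlenb << 3) >> safe_count) >= sew;

        printf("vlmul=%d: safe shift=%d, folded shift=%d%s, check %s\n",
               vlmul, safe_count, folded_count,
               folded_count < 0 ? " (negative!)" : "",
               ok ? "passes" : "fails");
    }
    return 0;
}

With these values the check passes for all three fractional settings; the
point is only that the folded shift count is negative for vlmul = 6 and 7,
while the unfolded form stays in range.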
Re: [PATCH] target/riscv/vector_helper.c: Avoid shifting negative in fractional LMUL checking
Posted by Daniel Henrique Barboza 8 months, 3 weeks ago

On 3/6/24 13:10, Max Chou wrote:
> When vlmul is larger than 5, the shift count in the original fractional
> LMUL check goes negative, which is undefined behavior in C and may give
> an unexpected result.
> 
> Signed-off-by: Max Chou <max.chou@sifive.com>
> ---

There's already a fix for it in the ML:

"[PATCH v3] target/riscv: Fix shift count overflow"

https://lore.kernel.org/qemu-riscv/20240225174114.5298-1-demin.han@starfivetech.com/


Hopefully it'll be queued for the next PR. Thanks,


Daniel


Re: [PATCH] target/riscv/vector_helper.c: Avoid shifting negative in fractional LMUL checking
Posted by Max Chou 8 months, 3 weeks ago
Looks like I missed this one.

Thank you Daniel

Max.
