From: Alistair Francis <alistair.francis@wdc.com>

The following changes since commit d52dff5d8048d4982437db9606c27bb4127cf9d0:

  Merge tag 'pull-error-2024-05-27' of https://repo.or.cz/qemu/armbru into staging (2024-05-27 06:40:42 -0700)

are available in the Git repository at:

  https://github.com/alistair23/qemu.git tags/pull-riscv-to-apply-20240528

for you to fetch changes up to 1806da76cb81088ea026ca3441551782b850e393:

  target/riscv: raise an exception when CSRRS/CSRRC writes a read-only CSR (2024-05-28 12:20:27 +1000)

----------------------------------------------------------------
RISC-V PR for 9.1

* APLICs add child earlier than realize
* Fix exposure of Zkr
* Raise exceptions on wrs.nto
* Implement SBI debug console (DBCN) calls for KVM
* Support 64-bit addresses for initrd
* Change RISCV_EXCP_SEMIHOST exception number to 63
* Tolerate KVM disable ext errors
* Set tval in breakpoints
* Add support for Zve32x extension
* Add support for Zve64x extension
* Relax vector register check in RISCV gdbstub
* Fix the element agnostic Vector function problem
* Fix Zvkb extension config
* Implement dynamic establishment of custom decoder
* Add th.sxstatus CSR emulation
* Fix Zvfhmin checking for vfwcvt.f.f.v and vfncvt.f.f.w instructions
* Check single width operator for vector fp widen instructions
* Check single width operator for vfncvt.rod.f.f.w
* Remove redundant SEW checking for vector fp narrow/widen instructions
* Prioritize pmp errors in raise_mmu_exception()
* Do not set mtval2 for non guest-page faults
* Remove experimental prefix from "B" extension
* Fixup CBO extension register calculation
* Fix the hart bit setting of AIA
* Fix reg_width in ricsv_gen_dynamic_vector_feature()
* Decode all of the pmpcfg and pmpaddr CSRs
* Raise an exception when CSRRS/CSRRC writes a read-only CSR

----------------------------------------------------------------
Alexei Filippov (1):
      target/riscv: do not set mtval2 for non guest-page faults

Alistair Francis (2):
      target/riscv: rvzicbo: Fixup CBO extension register calculation
      disas/riscv: Decode all of the pmpcfg and pmpaddr CSRs

Andrew Jones (2):
      target/riscv/kvm: Fix exposure of Zkr
      target/riscv: Raise exceptions on wrs.nto

Cheng Yang (1):
      hw/riscv/boot.c: Support 64-bit address for initrd

Christoph Müllner (1):
      riscv: thead: Add th.sxstatus CSR emulation

Clément Léger (1):
      target/riscv: change RISCV_EXCP_SEMIHOST exception number to 63

Daniel Henrique Barboza (6):
      target/riscv/kvm: implement SBI debug console (DBCN) calls
      target/riscv/kvm: tolerate KVM disable ext errors
      target/riscv/debug: set tval=pc in breakpoint exceptions
      trans_privileged.c.inc: set (m|s)tval on ebreak breakpoint
      target/riscv: prioritize pmp errors in raise_mmu_exception()
      riscv, gdbstub.c: fix reg_width in ricsv_gen_dynamic_vector_feature()

Huang Tao (2):
      target/riscv: Fix the element agnostic function problem
      target/riscv: Implement dynamic establishment of custom decoder
Jason Chien (3):
      target/riscv: Add support for Zve32x extension
      target/riscv: Add support for Zve64x extension
      target/riscv: Relax vector register check in RISCV gdbstub

Max Chou (4):
      target/riscv: rvv: Fix Zvfhmin checking for vfwcvt.f.f.v and vfncvt.f.f.w instructions
      target/riscv: rvv: Check single width operator for vector fp widen instructions
      target/riscv: rvv: Check single width operator for vfncvt.rod.f.f.w
      target/riscv: rvv: Remove redudant SEW checking for vector fp narrow/widen instructions

Rob Bradford (1):
      target/riscv: Remove experimental prefix from "B" extension

Yangyu Chen (1):
      target/riscv/cpu.c: fix Zvkb extension config

Yong-Xuan Wang (1):
      target/riscv/kvm.c: Fix the hart bit setting of AIA

Yu-Ming Chang (1):
      target/riscv: raise an exception when CSRRS/CSRRC writes a read-only CSR

yang.zhang (1):
      hw/intc/riscv_aplic: APLICs should add child earlier than realize

 MAINTAINERS | 1 +
 target/riscv/cpu.h | 11 ++
 target/riscv/cpu_bits.h | 2 +-
 target/riscv/cpu_cfg.h | 2 +
 target/riscv/helper.h | 1 +
 target/riscv/sbi_ecall_interface.h | 17 +++
 target/riscv/tcg/tcg-cpu.h | 15 +++
 disas/riscv.c | 65 +++++++++-
 hw/intc/riscv_aplic.c | 8 +-
 hw/riscv/boot.c | 4 +-
 target/riscv/cpu.c | 10 +-
 target/riscv/cpu_helper.c | 37 +++---
 target/riscv/csr.c | 71 +++++++++--
 target/riscv/debug.c | 3 +
 target/riscv/gdbstub.c | 8 +-
 target/riscv/kvm/kvm-cpu.c | 157 ++++++++++++++++++++++++-
 target/riscv/op_helper.c | 17 ++-
 target/riscv/tcg/tcg-cpu.c | 50 +++++---
 target/riscv/th_csr.c | 79 +++++++++++++
 target/riscv/translate.c | 31 +++--
 target/riscv/vector_internals.c | 22 ++++
 target/riscv/insn_trans/trans_privileged.c.inc | 2 +
 target/riscv/insn_trans/trans_rvv.c.inc | 46 +++++---
 target/riscv/insn_trans/trans_rvzawrs.c.inc | 29 +++--
 target/riscv/insn_trans/trans_rvzicbo.c.inc | 16 ++-
 target/riscv/meson.build | 1 +
 26 files changed, 596 insertions(+), 109 deletions(-)
 create mode 100644 target/riscv/th_csr.c
Deleted patch
From: Vijai Kumar K <vijai@behindbytes.com>

Use a dedicated UART config (CONFIG_SHAKTI_UART) to select the
Shakti UART.

Signed-off-by: Vijai Kumar K <vijai@behindbytes.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210731190229.137483-1-vijai@behindbytes.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/char/Kconfig     | 3 +++
 hw/char/meson.build | 2 +-
 hw/riscv/Kconfig    | 5 +----
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/hw/char/Kconfig b/hw/char/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/Kconfig
+++ b/hw/char/Kconfig
@@ -XXX,XX +XXX,XX @@ config SIFIVE_UART

 config GOLDFISH_TTY
     bool
+
+config SHAKTI_UART
+    bool
diff --git a/hw/char/meson.build b/hw/char/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/meson.build
+++ b/hw/char/meson.build
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_SERIAL', if_true: files('serial.c'))
 softmmu_ss.add(when: 'CONFIG_SERIAL_ISA', if_true: files('serial-isa.c'))
 softmmu_ss.add(when: 'CONFIG_SERIAL_PCI', if_true: files('serial-pci.c'))
 softmmu_ss.add(when: 'CONFIG_SERIAL_PCI_MULTI', if_true: files('serial-pci-multi.c'))
-softmmu_ss.add(when: 'CONFIG_SHAKTI', if_true: files('shakti_uart.c'))
+softmmu_ss.add(when: 'CONFIG_SHAKTI_UART', if_true: files('shakti_uart.c'))
 softmmu_ss.add(when: 'CONFIG_VIRTIO_SERIAL', if_true: files('virtio-console.c'))
 softmmu_ss.add(when: 'CONFIG_XEN', if_true: files('xen_console.c'))
 softmmu_ss.add(when: 'CONFIG_XILINX', if_true: files('xilinx_uartlite.c'))
diff --git a/hw/riscv/Kconfig b/hw/riscv/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/Kconfig
+++ b/hw/riscv/Kconfig
@@ -XXX,XX +XXX,XX @@ config OPENTITAN
     select IBEX
     select UNIMP

-config SHAKTI
-    bool
-
 config SHAKTI_C
     bool
     select UNIMP
-    select SHAKTI
+    select SHAKTI_UART
     select SIFIVE_CLINT
     select SIFIVE_PLIC

--
2.31.1
Deleted patch
From: Bin Meng <bmeng.cn@gmail.com>

The flash is not inside the SoC, so it's inappropriate to put it
under the /soc node. Move it to root instead.

Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210807035641.22449-1-bmeng.cn@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/riscv/virt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -XXX,XX +XXX,XX @@ static void create_fdt(RISCVVirtState *s, const MemMapEntry *memmap,
     qemu_fdt_setprop_cell(fdt, name, "interrupts", RTC_IRQ);
     g_free(name);

-    name = g_strdup_printf("/soc/flash@%" PRIx64, flashbase);
+    name = g_strdup_printf("/flash@%" PRIx64, flashbase);
     qemu_fdt_add_subnode(mc->fdt, name);
     qemu_fdt_setprop_string(mc->fdt, name, "compatible", "cfi-flash");
     qemu_fdt_setprop_sized_cells(mc->fdt, name, "reg",
--
2.31.1
From: "yang.zhang" <yang.zhang@hexintek.com>

Since only root APLICs can have hw IRQ lines, aplic->parent should
be initialized first.

Fixes: e8f79343cf ("hw/intc: Add RISC-V AIA APLIC device emulation")
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: yang.zhang <yang.zhang@hexintek.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Message-ID: <20240409014445.278-1-gaoshanliukou@163.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/intc/riscv_aplic.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/intc/riscv_aplic.c b/hw/intc/riscv_aplic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/riscv_aplic.c
+++ b/hw/intc/riscv_aplic.c
@@ -XXX,XX +XXX,XX @@ DeviceState *riscv_aplic_create(hwaddr addr, hwaddr size,
     qdev_prop_set_bit(dev, "msimode", msimode);
     qdev_prop_set_bit(dev, "mmode", mmode);

+    if (parent) {
+        riscv_aplic_add_child(parent, dev);
+    }
+
     sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);

     if (!is_kvm_aia(msimode)) {
         sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, addr);
     }

-    if (parent) {
-        riscv_aplic_add_child(parent, dev);
-    }
-
     if (!msimode) {
         for (i = 0; i < num_harts; i++) {
             CPUState *cpu = cpu_by_arch_id(hartid_base + i);
--
2.45.1
From: Andrew Jones <ajones@ventanamicro.com>

The Zkr extension may only be exposed to KVM guests if the VMM
implements the SEED CSR. Use the same implementation as TCG.

Without this patch, running with a KVM which does not forward the
SEED CSR access to QEMU will result in an ILL exception being
injected into the guest (this results in Linux guests crashing on
boot). And, when running with a KVM which does forward the access,
QEMU will crash, since QEMU doesn't know what to do with the exit.

Fixes: 3108e2f1c69d ("target/riscv/kvm: update KVM exts to Linux 6.8")
Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Message-ID: <20240422134605.534207-2-ajones@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.h         |  3 +++
 target/riscv/csr.c         | 18 ++++++++++++++----
 target/riscv/kvm/kvm-cpu.c | 25 +++++++++++++++++++++++++
 3 files changed, 42 insertions(+), 4 deletions(-)

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -XXX,XX +XXX,XX @@ void riscv_set_csr_ops(int csrno, riscv_csr_operations *ops);

 void riscv_cpu_register_gdb_regs_for_features(CPUState *cs);

+target_ulong riscv_new_csr_seed(target_ulong new_value,
+                                target_ulong write_mask);
+
 uint8_t satp_mode_max_from_map(uint32_t map);
 const char *satp_mode_str(uint8_t satp_mode, bool is_32_bit);

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static RISCVException write_upmbase(CPURISCVState *env, int csrno,
 #endif

 /* Crypto Extension */
-static RISCVException rmw_seed(CPURISCVState *env, int csrno,
-                               target_ulong *ret_value,
-                               target_ulong new_value,
-                               target_ulong write_mask)
+target_ulong riscv_new_csr_seed(target_ulong new_value,
+                                target_ulong write_mask)
 {
     uint16_t random_v;
     Error *random_e = NULL;
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_seed(CPURISCVState *env, int csrno,
         rval = random_v | SEED_OPST_ES16;
     }

+    return rval;
+}
+
+static RISCVException rmw_seed(CPURISCVState *env, int csrno,
+                               target_ulong *ret_value,
+                               target_ulong new_value,
+                               target_ulong write_mask)
+{
+    target_ulong rval;
+
+    rval = riscv_new_csr_seed(new_value, write_mask);
+
     if (ret_value) {
         *ret_value = rval;
     }
diff --git a/target/riscv/kvm/kvm-cpu.c b/target/riscv/kvm/kvm-cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/kvm/kvm-cpu.c
+++ b/target/riscv/kvm/kvm-cpu.c
@@ -XXX,XX +XXX,XX @@ static int kvm_riscv_handle_sbi(CPUState *cs, struct kvm_run *run)
     return ret;
 }

+static int kvm_riscv_handle_csr(CPUState *cs, struct kvm_run *run)
+{
+    target_ulong csr_num = run->riscv_csr.csr_num;
+    target_ulong new_value = run->riscv_csr.new_value;
+    target_ulong write_mask = run->riscv_csr.write_mask;
+    int ret = 0;
+
+    switch (csr_num) {
+    case CSR_SEED:
+        run->riscv_csr.ret_value = riscv_new_csr_seed(new_value, write_mask);
+        break;
+    default:
+        qemu_log_mask(LOG_UNIMP,
+                      "%s: un-handled CSR EXIT for CSR %lx\n",
+                      __func__, csr_num);
+        ret = -1;
+        break;
+    }
+
+    return ret;
+}
+
 int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
 {
     int ret = 0;
@@ -XXX,XX +XXX,XX @@ int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
     case KVM_EXIT_RISCV_SBI:
         ret = kvm_riscv_handle_sbi(cs, run);
         break;
+    case KVM_EXIT_RISCV_CSR:
+        ret = kvm_riscv_handle_csr(cs, run);
+        break;
     default:
         qemu_log_mask(LOG_UNIMP, "%s: un-handled exit reason %d\n",
                       __func__, run->exit_reason);
--
2.45.1
From: Andrew Jones <ajones@ventanamicro.com>

Implementing wrs.nto to always just return is consistent with the
specification, as the instruction is permitted to terminate the
stall for any reason, but it's not useful for virtualization, where
we'd like the guest to trap to the hypervisor in order to allow
scheduling of the lock holding VCPU. Change to always immediately
raise exceptions when the appropriate conditions are present,
otherwise continue to just return. Note, immediately raising
exceptions is also consistent with the specification since the
time limit that should expire prior to the exception is
implementation-specific.

Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Christoph Müllner <christoph.muellner@vrull.eu>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240424142808.62936-2-ajones@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/helper.h                       |  1 +
 target/riscv/op_helper.c                    | 11 ++++++++
 target/riscv/insn_trans/trans_rvzawrs.c.inc | 29 ++++++++++++++-------
 3 files changed, 32 insertions(+), 9 deletions(-)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_6(csrrw_i128, tl, env, int, tl, tl, tl, tl)
 DEF_HELPER_1(sret, tl, env)
 DEF_HELPER_1(mret, tl, env)
 DEF_HELPER_1(wfi, void, env)
+DEF_HELPER_1(wrs_nto, void, env)
 DEF_HELPER_1(tlb_flush, void, env)
 DEF_HELPER_1(tlb_flush_all, void, env)
 /* Native Debug */
diff --git a/target/riscv/op_helper.c b/target/riscv/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/op_helper.c
+++ b/target/riscv/op_helper.c
@@ -XXX,XX +XXX,XX @@ void helper_wfi(CPURISCVState *env)
     }
 }

+void helper_wrs_nto(CPURISCVState *env)
+{
+    if (env->virt_enabled && (env->priv == PRV_S || env->priv == PRV_U) &&
+        get_field(env->hstatus, HSTATUS_VTW) &&
+        !get_field(env->mstatus, MSTATUS_TW)) {
+        riscv_raise_exception(env, RISCV_EXCP_VIRT_INSTRUCTION_FAULT, GETPC());
+    } else if (env->priv != PRV_M && get_field(env->mstatus, MSTATUS_TW)) {
+        riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+    }
+}
+
 void helper_tlb_flush(CPURISCVState *env)
 {
     CPUState *cs = env_cpu(env);
diff --git a/target/riscv/insn_trans/trans_rvzawrs.c.inc b/target/riscv/insn_trans/trans_rvzawrs.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvzawrs.c.inc
+++ b/target/riscv/insn_trans/trans_rvzawrs.c.inc
@@ -XXX,XX +XXX,XX @@
 * this program. If not, see <http://www.gnu.org/licenses/>.
 */

-static bool trans_wrs(DisasContext *ctx)
+static bool trans_wrs_sto(DisasContext *ctx, arg_wrs_sto *a)
 {
     if (!ctx->cfg_ptr->ext_zawrs) {
         return false;
73
@@ -XXX,XX +XXX,XX @@ static bool trans_wrs(DisasContext *ctx)
71
72
if (ret != RISCV_EXCP_NONE) {
73
riscv_raise_exception(env, ret, GETPC());
74
}
75
- return val;
76
}
77
78
-target_ulong helper_csrrc(CPURISCVState *env, target_ulong src,
79
- target_ulong csr, target_ulong rs1_pass)
80
+target_ulong helper_csrrw(CPURISCVState *env, int csr,
81
+ target_ulong src, target_ulong write_mask)
82
{
83
target_ulong val = 0;
84
- RISCVException ret = riscv_csrrw(env, csr, &val, 0, rs1_pass ? src : 0);
85
+ RISCVException ret = riscv_csrrw(env, csr, &val, src, write_mask);
86
87
if (ret != RISCV_EXCP_NONE) {
88
riscv_raise_exception(env, ret, GETPC());
89
diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/riscv/insn_trans/trans_rvi.c.inc
92
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
93
@@ -XXX,XX +XXX,XX @@ static bool trans_fence_i(DisasContext *ctx, arg_fence_i *a)
94
return true;
74
return true;
95
}
75
}
96
76
97
-#define RISCV_OP_CSR_PRE do {\
77
-#define GEN_TRANS_WRS(insn) \
98
- source1 = tcg_temp_new(); \
78
-static bool trans_ ## insn(DisasContext *ctx, arg_ ## insn *a) \
99
- csr_store = tcg_temp_new(); \
79
-{ \
100
- dest = tcg_temp_new(); \
80
- (void)a; \
101
- rs1_pass = tcg_temp_new(); \
81
- return trans_wrs(ctx); \
102
- gen_get_gpr(ctx, source1, a->rs1); \
82
-}
103
- tcg_gen_movi_tl(cpu_pc, ctx->base.pc_next); \
83
+static bool trans_wrs_nto(DisasContext *ctx, arg_wrs_nto *a)
104
- tcg_gen_movi_tl(rs1_pass, a->rs1); \
105
- tcg_gen_movi_tl(csr_store, a->csr); \
106
- gen_io_start();\
107
-} while (0)
108
-
109
-#define RISCV_OP_CSR_POST do {\
110
- gen_set_gpr(ctx, a->rd, dest); \
111
- tcg_gen_movi_tl(cpu_pc, ctx->pc_succ_insn); \
112
- exit_tb(ctx); \
113
- ctx->base.is_jmp = DISAS_NORETURN; \
114
- tcg_temp_free(source1); \
115
- tcg_temp_free(csr_store); \
116
- tcg_temp_free(dest); \
117
- tcg_temp_free(rs1_pass); \
118
-} while (0)
119
+static bool do_csr_post(DisasContext *ctx)
120
+{
84
+{
121
+ /* We may have changed important cpu state -- exit to main loop. */
85
+ if (!ctx->cfg_ptr->ext_zawrs) {
122
+ tcg_gen_movi_tl(cpu_pc, ctx->pc_succ_insn);
86
+ return false;
123
+ exit_tb(ctx);
87
+ }
124
+ ctx->base.is_jmp = DISAS_NORETURN;
88
125
+ return true;
89
-GEN_TRANS_WRS(wrs_nto)
90
-GEN_TRANS_WRS(wrs_sto)
91
+ /*
92
+ * Depending on the mode of execution, mstatus.TW and hstatus.VTW, wrs.nto
93
+ * should raise an exception when the implementation-specific bounded time
94
+ * limit has expired. Our time limit is zero, so we either return
95
+ * immediately, as does our implementation of wrs.sto, or raise an
96
+ * exception, as handled by the wrs.nto helper.
97
+ */
98
+#ifndef CONFIG_USER_ONLY
99
+ gen_helper_wrs_nto(tcg_env);
100
+#endif
101
+
102
+ /* We only get here when helper_wrs_nto() doesn't raise an exception. */
103
+ return trans_wrs_sto(ctx, NULL);
126
+}
104
+}
127
+
128
+static bool do_csrr(DisasContext *ctx, int rd, int rc)
129
+{
130
+ TCGv dest = dest_gpr(ctx, rd);
131
+ TCGv_i32 csr = tcg_constant_i32(rc);
132
+
133
+ if (tb_cflags(ctx->base.tb) & CF_USE_ICOUNT) {
134
+ gen_io_start();
135
+ }
136
+ gen_helper_csrr(dest, cpu_env, csr);
137
+ gen_set_gpr(ctx, rd, dest);
138
+ return do_csr_post(ctx);
139
+}
140
+
141
+static bool do_csrw(DisasContext *ctx, int rc, TCGv src)
142
+{
143
+ TCGv_i32 csr = tcg_constant_i32(rc);
144
145
+ if (tb_cflags(ctx->base.tb) & CF_USE_ICOUNT) {
146
+ gen_io_start();
147
+ }
148
+ gen_helper_csrw(cpu_env, csr, src);
149
+ return do_csr_post(ctx);
150
+}
151
+
152
+static bool do_csrrw(DisasContext *ctx, int rd, int rc, TCGv src, TCGv mask)
153
+{
154
+ TCGv dest = dest_gpr(ctx, rd);
155
+ TCGv_i32 csr = tcg_constant_i32(rc);
156
+
157
+ if (tb_cflags(ctx->base.tb) & CF_USE_ICOUNT) {
158
+ gen_io_start();
159
+ }
160
+ gen_helper_csrrw(dest, cpu_env, csr, src, mask);
161
+ gen_set_gpr(ctx, rd, dest);
162
+ return do_csr_post(ctx);
163
+}
164
165
static bool trans_csrrw(DisasContext *ctx, arg_csrrw *a)
166
{
167
- TCGv source1, csr_store, dest, rs1_pass;
168
- RISCV_OP_CSR_PRE;
169
- gen_helper_csrrw(dest, cpu_env, source1, csr_store);
170
- RISCV_OP_CSR_POST;
171
- return true;
172
+ TCGv src = get_gpr(ctx, a->rs1, EXT_NONE);
173
+
174
+ /*
175
+ * If rd == 0, the insn shall not read the csr, nor cause any of the
176
+ * side effects that might occur on a csr read.
177
+ */
178
+ if (a->rd == 0) {
179
+ return do_csrw(ctx, a->csr, src);
180
+ }
181
+
182
+ TCGv mask = tcg_constant_tl(-1);
183
+ return do_csrrw(ctx, a->rd, a->csr, src, mask);
184
}
185
186
static bool trans_csrrs(DisasContext *ctx, arg_csrrs *a)
187
{
188
- TCGv source1, csr_store, dest, rs1_pass;
189
- RISCV_OP_CSR_PRE;
190
- gen_helper_csrrs(dest, cpu_env, source1, csr_store, rs1_pass);
191
- RISCV_OP_CSR_POST;
192
- return true;
193
+ /*
194
+ * If rs1 == 0, the insn shall not write to the csr at all, nor
195
+ * cause any of the side effects that might occur on a csr write.
196
+ * Note that if rs1 specifies a register other than x0, holding
197
+ * a zero value, the instruction will still attempt to write the
198
+ * unmodified value back to the csr and will cause side effects.
199
+ */
200
+ if (a->rs1 == 0) {
201
+ return do_csrr(ctx, a->rd, a->csr);
202
+ }
203
+
204
+ TCGv ones = tcg_constant_tl(-1);
205
+ TCGv mask = get_gpr(ctx, a->rs1, EXT_ZERO);
206
+ return do_csrrw(ctx, a->rd, a->csr, ones, mask);
207
}
208
209
static bool trans_csrrc(DisasContext *ctx, arg_csrrc *a)
210
{
211
- TCGv source1, csr_store, dest, rs1_pass;
212
- RISCV_OP_CSR_PRE;
213
- gen_helper_csrrc(dest, cpu_env, source1, csr_store, rs1_pass);
214
- RISCV_OP_CSR_POST;
215
- return true;
216
+ /*
217
+ * If rs1 == 0, the insn shall not write to the csr at all, nor
218
+ * cause any of the side effects that might occur on a csr write.
219
+ * Note that if rs1 specifies a register other than x0, holding
220
+ * a zero value, the instruction will still attempt to write the
221
+ * unmodified value back to the csr and will cause side effects.
222
+ */
223
+ if (a->rs1 == 0) {
224
+ return do_csrr(ctx, a->rd, a->csr);
225
+ }
226
+
227
+ TCGv mask = get_gpr(ctx, a->rs1, EXT_ZERO);
228
+ return do_csrrw(ctx, a->rd, a->csr, ctx->zero, mask);
229
}
230
231
static bool trans_csrrwi(DisasContext *ctx, arg_csrrwi *a)
232
{
233
- TCGv source1, csr_store, dest, rs1_pass;
234
- RISCV_OP_CSR_PRE;
235
- gen_helper_csrrw(dest, cpu_env, rs1_pass, csr_store);
236
- RISCV_OP_CSR_POST;
237
- return true;
238
+ TCGv src = tcg_constant_tl(a->rs1);
239
+
240
+ /*
241
+ * If rd == 0, the insn shall not read the csr, nor cause any of the
242
+ * side effects that might occur on a csr read.
243
+ */
244
+ if (a->rd == 0) {
245
+ return do_csrw(ctx, a->csr, src);
246
+ }
247
+
248
+ TCGv mask = tcg_constant_tl(-1);
249
+ return do_csrrw(ctx, a->rd, a->csr, src, mask);
250
}
251
252
static bool trans_csrrsi(DisasContext *ctx, arg_csrrsi *a)
253
{
254
- TCGv source1, csr_store, dest, rs1_pass;
255
- RISCV_OP_CSR_PRE;
256
- gen_helper_csrrs(dest, cpu_env, rs1_pass, csr_store, rs1_pass);
257
- RISCV_OP_CSR_POST;
258
- return true;
259
+ /*
260
+ * If rs1 == 0, the insn shall not write to the csr at all, nor
261
+ * cause any of the side effects that might occur on a csr write.
262
+ * Note that if rs1 specifies a register other than x0, holding
263
+ * a zero value, the instruction will still attempt to write the
264
+ * unmodified value back to the csr and will cause side effects.
265
+ */
266
+ if (a->rs1 == 0) {
267
+ return do_csrr(ctx, a->rd, a->csr);
268
+ }
269
+
270
+ TCGv ones = tcg_constant_tl(-1);
271
+ TCGv mask = tcg_constant_tl(a->rs1);
272
+ return do_csrrw(ctx, a->rd, a->csr, ones, mask);
273
}
274
275
static bool trans_csrrci(DisasContext *ctx, arg_csrrci *a)
276
{
277
- TCGv source1, csr_store, dest, rs1_pass;
278
- RISCV_OP_CSR_PRE;
279
- gen_helper_csrrc(dest, cpu_env, rs1_pass, csr_store, rs1_pass);
280
- RISCV_OP_CSR_POST;
281
- return true;
282
+ /*
283
+ * If rs1 == 0, the insn shall not write to the csr at all, nor
284
+ * cause any of the side effects that might occur on a csr write.
285
+ * Note that if rs1 specifies a register other than x0, holding
286
+ * a zero value, the instruction will still attempt to write the
287
+ * unmodified value back to the csr and will cause side effects.
288
+ */
289
+ if (a->rs1 == 0) {
290
+ return do_csrr(ctx, a->rd, a->csr);
291
+ }
292
+
293
+ TCGv mask = tcg_constant_tl(a->rs1);
294
+ return do_csrrw(ctx, a->rd, a->csr, ctx->zero, mask);
295
}
296
--
105
--
297
2.31.1
106
2.45.1
298
107
299
108
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

SBI defines a Debug Console extension "DBCN" that will, in time, replace
the legacy console putchar and getchar SBI extensions.

The appeal of the DBCN extension is that it allows multiple bytes to be
read/written in the SBI console in a single SBI call.

As far as KVM goes, the DBCN calls are forwarded by an in-kernel KVM
module to userspace. But this will only happen if the KVM module
actually supports this SBI extension and we activate it.

We'll check for DBCN support during init time, checking if get-reg-list
is advertising KVM_RISCV_SBI_EXT_DBCN. In that case, we'll enable it via
kvm_set_one_reg() during kvm_arch_init_vcpu().

Finally, change kvm_riscv_handle_sbi() to handle the incoming calls for
SBI_EXT_DBCN, reading and writing as required.

A simple KVM guest with 'earlycon=sbi', running in an emulated RISC-V
host, takes around 20 seconds to boot without using DBCN. With this
patch we're taking around 14 seconds to boot due to the speed-up in the
terminal output. There's no change in boot time if the guest isn't
using earlycon.

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20240425155012.581366-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/sbi_ecall_interface.h |  17 +++++
 target/riscv/kvm/kvm-cpu.c         | 111 +++++++++++++++++++++++++++++
 2 files changed, 128 insertions(+)

diff --git a/target/riscv/sbi_ecall_interface.h b/target/riscv/sbi_ecall_interface.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/sbi_ecall_interface.h
+++ b/target/riscv/sbi_ecall_interface.h
@@ -XXX,XX +XXX,XX @@
 
 /* clang-format off */
 
+#define SBI_SUCCESS                0
+#define SBI_ERR_FAILED             -1
+#define SBI_ERR_NOT_SUPPORTED      -2
+#define SBI_ERR_INVALID_PARAM      -3
+#define SBI_ERR_DENIED             -4
+#define SBI_ERR_INVALID_ADDRESS    -5
+#define SBI_ERR_ALREADY_AVAILABLE  -6
+#define SBI_ERR_ALREADY_STARTED    -7
+#define SBI_ERR_ALREADY_STOPPED    -8
+#define SBI_ERR_NO_SHMEM           -9
+
 /* SBI Extension IDs */
 #define SBI_EXT_0_1_SET_TIMER 0x0
 #define SBI_EXT_0_1_CONSOLE_PUTCHAR 0x1
@@ -XXX,XX +XXX,XX @@
 #define SBI_EXT_IPI 0x735049
 #define SBI_EXT_RFENCE 0x52464E43
 #define SBI_EXT_HSM 0x48534D
+#define SBI_EXT_DBCN 0x4442434E
 
 /* SBI function IDs for BASE extension */
 #define SBI_EXT_BASE_GET_SPEC_VERSION 0x0
@@ -XXX,XX +XXX,XX @@
 #define SBI_EXT_HSM_HART_STOP 0x1
 #define SBI_EXT_HSM_HART_GET_STATUS 0x2
 
+/* SBI function IDs for DBCN extension */
+#define SBI_EXT_DBCN_CONSOLE_WRITE 0x0
+#define SBI_EXT_DBCN_CONSOLE_READ 0x1
+#define SBI_EXT_DBCN_CONSOLE_WRITE_BYTE 0x2
+
 #define SBI_HSM_HART_STATUS_STARTED 0x0
 #define SBI_HSM_HART_STATUS_STOPPED 0x1
 #define SBI_HSM_HART_STATUS_START_PENDING 0x2
diff --git a/target/riscv/kvm/kvm-cpu.c b/target/riscv/kvm/kvm-cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/kvm/kvm-cpu.c
+++ b/target/riscv/kvm/kvm-cpu.c
@@ -XXX,XX +XXX,XX @@ static KVMCPUConfig kvm_v_vlenb = {
     KVM_REG_RISCV_VECTOR_CSR_REG(vlenb)
 };
 
+static KVMCPUConfig kvm_sbi_dbcn = {
+    .name = "sbi_dbcn",
+    .kvm_reg_id = KVM_REG_RISCV | KVM_REG_SIZE_U64 |
+                  KVM_REG_RISCV_SBI_EXT | KVM_RISCV_SBI_EXT_DBCN
+};
+
 static void kvm_riscv_update_cpu_cfg_isa_ext(RISCVCPU *cpu, CPUState *cs)
 {
     CPURISCVState *env = &cpu->env;
@@ -XXX,XX +XXX,XX @@ static int uint64_cmp(const void *a, const void *b)
     return 0;
 }
 
+static void kvm_riscv_check_sbi_dbcn_support(RISCVCPU *cpu,
+                                             KVMScratchCPU *kvmcpu,
+                                             struct kvm_reg_list *reglist)
+{
+    struct kvm_reg_list *reg_search;
+
+    reg_search = bsearch(&kvm_sbi_dbcn.kvm_reg_id, reglist->reg, reglist->n,
+                         sizeof(uint64_t), uint64_cmp);
+
+    if (reg_search) {
+        kvm_sbi_dbcn.supported = true;
+    }
+}
+
 static void kvm_riscv_read_vlenb(RISCVCPU *cpu, KVMScratchCPU *kvmcpu,
                                  struct kvm_reg_list *reglist)
 {
@@ -XXX,XX +XXX,XX @@ static void kvm_riscv_init_multiext_cfg(RISCVCPU *cpu, KVMScratchCPU *kvmcpu)
     if (riscv_has_ext(&cpu->env, RVV)) {
         kvm_riscv_read_vlenb(cpu, kvmcpu, reglist);
     }
+
+    kvm_riscv_check_sbi_dbcn_support(cpu, kvmcpu, reglist);
 }
 
 static void riscv_init_kvm_registers(Object *cpu_obj)
@@ -XXX,XX +XXX,XX @@ static int kvm_vcpu_set_machine_ids(RISCVCPU *cpu, CPUState *cs)
     return ret;
 }
 
+static int kvm_vcpu_enable_sbi_dbcn(RISCVCPU *cpu, CPUState *cs)
+{
+    target_ulong reg = 1;
+
+    if (!kvm_sbi_dbcn.supported) {
+        return 0;
+    }
+
+    return kvm_set_one_reg(cs, kvm_sbi_dbcn.kvm_reg_id, &reg);
+}
+
 int kvm_arch_init_vcpu(CPUState *cs)
 {
     int ret = 0;
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
     kvm_riscv_update_cpu_misa_ext(cpu, cs);
     kvm_riscv_update_cpu_cfg_isa_ext(cpu, cs);
 
+    ret = kvm_vcpu_enable_sbi_dbcn(cpu, cs);
+
     return ret;
 }
 
@@ -XXX,XX +XXX,XX @@ bool kvm_arch_stop_on_emulation_error(CPUState *cs)
     return true;
 }
 
+static void kvm_riscv_handle_sbi_dbcn(CPUState *cs, struct kvm_run *run)
+{
+    g_autofree uint8_t *buf = NULL;
+    RISCVCPU *cpu = RISCV_CPU(cs);
+    target_ulong num_bytes;
+    uint64_t addr;
+    unsigned char ch;
+    int ret;
+
+    switch (run->riscv_sbi.function_id) {
+    case SBI_EXT_DBCN_CONSOLE_READ:
+    case SBI_EXT_DBCN_CONSOLE_WRITE:
+        num_bytes = run->riscv_sbi.args[0];
+
+        if (num_bytes == 0) {
+            run->riscv_sbi.ret[0] = SBI_SUCCESS;
+            run->riscv_sbi.ret[1] = 0;
+            break;
+        }
+
+        addr = run->riscv_sbi.args[1];
+
+        /*
+         * Handle the case where a 32 bit CPU is running in a
+         * 64 bit addressing env.
+         */
+        if (riscv_cpu_mxl(&cpu->env) == MXL_RV32) {
+            addr |= (uint64_t)run->riscv_sbi.args[2] << 32;
+        }
+
+        buf = g_malloc0(num_bytes);
+
+        if (run->riscv_sbi.function_id == SBI_EXT_DBCN_CONSOLE_READ) {
+            ret = qemu_chr_fe_read_all(serial_hd(0)->be, buf, num_bytes);
+            if (ret < 0) {
+                error_report("SBI_EXT_DBCN_CONSOLE_READ: error when "
+                             "reading chardev");
+                exit(1);
+            }
+
+            cpu_physical_memory_write(addr, buf, ret);
+        } else {
+            cpu_physical_memory_read(addr, buf, num_bytes);
+
+            ret = qemu_chr_fe_write_all(serial_hd(0)->be, buf, num_bytes);
+            if (ret < 0) {
+                error_report("SBI_EXT_DBCN_CONSOLE_WRITE: error when "
+                             "writing chardev");
+                exit(1);
+            }
+        }
+
+        run->riscv_sbi.ret[0] = SBI_SUCCESS;
+        run->riscv_sbi.ret[1] = ret;
+        break;
+    case SBI_EXT_DBCN_CONSOLE_WRITE_BYTE:
+        ch = run->riscv_sbi.args[0];
+        ret = qemu_chr_fe_write(serial_hd(0)->be, &ch, sizeof(ch));
+
+        if (ret < 0) {
+            error_report("SBI_EXT_DBCN_CONSOLE_WRITE_BYTE: error when "
+                         "writing chardev");
+            exit(1);
+        }
+
+        run->riscv_sbi.ret[0] = SBI_SUCCESS;
+        run->riscv_sbi.ret[1] = 0;
+        break;
+    default:
+        run->riscv_sbi.ret[0] = SBI_ERR_NOT_SUPPORTED;
+    }
+}
+
 static int kvm_riscv_handle_sbi(CPUState *cs, struct kvm_run *run)
 {
     int ret = 0;
@@ -XXX,XX +XXX,XX @@ static int kvm_riscv_handle_sbi(CPUState *cs, struct kvm_run *run)
         }
         ret = 0;
         break;
+    case SBI_EXT_DBCN:
+        kvm_riscv_handle_sbi_dbcn(cs, run);
+        break;
     default:
         qemu_log_mask(LOG_UNIMP,
                       "%s: un-handled SBI EXIT, specific reasons is %lu\n",
507
+ ctx->w = true;
508
+ return gen_shift_imm_tl(ctx, a, EXT_ZERO, gen_helper_grev);
509
}
510
511
static bool trans_gorcw(DisasContext *ctx, arg_gorcw *a)
512
{
513
REQUIRE_64BIT(ctx);
514
REQUIRE_EXT(ctx, RVB);
515
- return gen_shiftw(ctx, a, gen_gorcw);
516
+ ctx->w = true;
517
+ return gen_shift(ctx, a, EXT_ZERO, gen_helper_gorc);
518
}
519
520
static bool trans_gorciw(DisasContext *ctx, arg_gorciw *a)
521
{
522
REQUIRE_64BIT(ctx);
523
REQUIRE_EXT(ctx, RVB);
524
- return gen_shiftiw(ctx, a, gen_gorcw);
525
+ ctx->w = true;
526
+ return gen_shift_imm_tl(ctx, a, EXT_ZERO, gen_helper_gorc);
527
}
528
529
#define GEN_SHADD_UW(SHAMT) \
530
diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
531
index XXXXXXX..XXXXXXX 100644
532
--- a/target/riscv/insn_trans/trans_rvi.c.inc
533
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
534
@@ -XXX,XX +XXX,XX @@ static bool trans_andi(DisasContext *ctx, arg_andi *a)
535
536
static bool trans_slli(DisasContext *ctx, arg_slli *a)
537
{
538
- return gen_shifti(ctx, a, tcg_gen_shl_tl);
539
+ return gen_shift_imm_fn(ctx, a, EXT_NONE, tcg_gen_shli_tl);
540
}
541
542
static bool trans_srli(DisasContext *ctx, arg_srli *a)
543
{
544
- return gen_shifti(ctx, a, tcg_gen_shr_tl);
545
+ return gen_shift_imm_fn(ctx, a, EXT_ZERO, tcg_gen_shri_tl);
546
}
547
548
static bool trans_srai(DisasContext *ctx, arg_srai *a)
549
{
550
- return gen_shifti(ctx, a, tcg_gen_sar_tl);
551
+ return gen_shift_imm_fn(ctx, a, EXT_SIGN, tcg_gen_sari_tl);
552
}
553
554
static bool trans_add(DisasContext *ctx, arg_add *a)
555
@@ -XXX,XX +XXX,XX @@ static bool trans_sub(DisasContext *ctx, arg_sub *a)
556
557
static bool trans_sll(DisasContext *ctx, arg_sll *a)
558
{
559
- return gen_shift(ctx, a, &tcg_gen_shl_tl);
560
+ return gen_shift(ctx, a, EXT_NONE, tcg_gen_shl_tl);
561
}
562
563
static bool trans_slt(DisasContext *ctx, arg_slt *a)
564
@@ -XXX,XX +XXX,XX @@ static bool trans_xor(DisasContext *ctx, arg_xor *a)
565
566
static bool trans_srl(DisasContext *ctx, arg_srl *a)
567
{
568
- return gen_shift(ctx, a, &tcg_gen_shr_tl);
569
+ return gen_shift(ctx, a, EXT_ZERO, tcg_gen_shr_tl);
570
}
571
572
static bool trans_sra(DisasContext *ctx, arg_sra *a)
573
{
574
- return gen_shift(ctx, a, &tcg_gen_sar_tl);
575
+ return gen_shift(ctx, a, EXT_SIGN, tcg_gen_sar_tl);
576
}
577
578
static bool trans_or(DisasContext *ctx, arg_or *a)
579
@@ -XXX,XX +XXX,XX @@ static bool trans_addiw(DisasContext *ctx, arg_addiw *a)
580
static bool trans_slliw(DisasContext *ctx, arg_slliw *a)
581
{
582
REQUIRE_64BIT(ctx);
583
- return gen_shiftiw(ctx, a, tcg_gen_shl_tl);
584
+ ctx->w = true;
585
+ return gen_shift_imm_fn(ctx, a, EXT_NONE, tcg_gen_shli_tl);
586
}
587
588
static bool trans_srliw(DisasContext *ctx, arg_srliw *a)
589
{
590
REQUIRE_64BIT(ctx);
591
- TCGv t = tcg_temp_new();
592
- gen_get_gpr(ctx, t, a->rs1);
593
- tcg_gen_extract_tl(t, t, a->shamt, 32 - a->shamt);
594
- /* sign-extend for W instructions */
595
- tcg_gen_ext32s_tl(t, t);
596
- gen_set_gpr(ctx, a->rd, t);
597
- tcg_temp_free(t);
598
- return true;
599
+ ctx->w = true;
600
+ return gen_shift_imm_fn(ctx, a, EXT_ZERO, tcg_gen_shri_tl);
601
}
602
603
static bool trans_sraiw(DisasContext *ctx, arg_sraiw *a)
604
{
605
REQUIRE_64BIT(ctx);
606
- TCGv t = tcg_temp_new();
607
- gen_get_gpr(ctx, t, a->rs1);
608
- tcg_gen_sextract_tl(t, t, a->shamt, 32 - a->shamt);
609
- gen_set_gpr(ctx, a->rd, t);
610
- tcg_temp_free(t);
611
- return true;
612
+ ctx->w = true;
613
+ return gen_shift_imm_fn(ctx, a, EXT_SIGN, tcg_gen_sari_tl);
614
}
615
616
static bool trans_addw(DisasContext *ctx, arg_addw *a)
617
@@ -XXX,XX +XXX,XX @@ static bool trans_subw(DisasContext *ctx, arg_subw *a)
618
static bool trans_sllw(DisasContext *ctx, arg_sllw *a)
619
{
620
REQUIRE_64BIT(ctx);
621
- TCGv source1 = tcg_temp_new();
622
- TCGv source2 = tcg_temp_new();
623
-
624
- gen_get_gpr(ctx, source1, a->rs1);
625
- gen_get_gpr(ctx, source2, a->rs2);
626
-
627
- tcg_gen_andi_tl(source2, source2, 0x1F);
628
- tcg_gen_shl_tl(source1, source1, source2);
629
-
630
- tcg_gen_ext32s_tl(source1, source1);
631
- gen_set_gpr(ctx, a->rd, source1);
632
- tcg_temp_free(source1);
633
- tcg_temp_free(source2);
634
- return true;
635
+ ctx->w = true;
636
+ return gen_shift(ctx, a, EXT_NONE, tcg_gen_shl_tl);
637
}
638
639
static bool trans_srlw(DisasContext *ctx, arg_srlw *a)
640
{
641
REQUIRE_64BIT(ctx);
642
- TCGv source1 = tcg_temp_new();
643
- TCGv source2 = tcg_temp_new();
644
-
645
- gen_get_gpr(ctx, source1, a->rs1);
646
- gen_get_gpr(ctx, source2, a->rs2);
647
-
648
- /* clear upper 32 */
649
- tcg_gen_ext32u_tl(source1, source1);
650
- tcg_gen_andi_tl(source2, source2, 0x1F);
651
- tcg_gen_shr_tl(source1, source1, source2);
652
-
653
- tcg_gen_ext32s_tl(source1, source1);
654
- gen_set_gpr(ctx, a->rd, source1);
655
- tcg_temp_free(source1);
656
- tcg_temp_free(source2);
657
- return true;
658
+ ctx->w = true;
659
+ return gen_shift(ctx, a, EXT_ZERO, tcg_gen_shr_tl);
660
}
661
662
static bool trans_sraw(DisasContext *ctx, arg_sraw *a)
663
{
664
REQUIRE_64BIT(ctx);
665
- TCGv source1 = tcg_temp_new();
666
- TCGv source2 = tcg_temp_new();
667
-
668
- gen_get_gpr(ctx, source1, a->rs1);
669
- gen_get_gpr(ctx, source2, a->rs2);
670
-
671
- /*
672
- * first, trick to get it to act like working on 32 bits (get rid of
673
- * upper 32, sign extend to fill space)
674
- */
675
- tcg_gen_ext32s_tl(source1, source1);
676
- tcg_gen_andi_tl(source2, source2, 0x1F);
677
- tcg_gen_sar_tl(source1, source1, source2);
678
-
679
- gen_set_gpr(ctx, a->rd, source1);
680
- tcg_temp_free(source1);
681
- tcg_temp_free(source2);
682
-
683
- return true;
684
+ ctx->w = true;
685
+ return gen_shift(ctx, a, EXT_SIGN, tcg_gen_sar_tl);
686
}
687
688
static bool trans_fence(DisasContext *ctx, arg_fence *a)
689
--
241
--
690
2.31.1
242
2.45.1
691
692
diff view generated by jsdifflib
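The EXT_ZERO/EXT_SIGN arguments in the patch above encode which extension of the low 32 bits a "W" shift needs before the full-width TCG shift runs. As a plain-C illustration (these `model_*` helpers are not QEMU code, just a sketch of the RV64 semantics; the arithmetic right shift on a negative value assumes the usual two's-complement behaviour):

```c
#include <stdint.h>

/* srlw: zero-extend the low 32 bits of rs1 (EXT_ZERO), shift by
 * rs2 & 0x1f, then sign-extend the 32-bit result into rd. */
static int64_t model_srlw(int64_t rs1, int64_t rs2)
{
    uint32_t src = (uint32_t)rs1;
    return (int32_t)(src >> (rs2 & 0x1f));
}

/* sraw: sign-extend the low 32 bits of rs1 (EXT_SIGN) instead. */
static int64_t model_sraw(int64_t rs1, int64_t rs2)
{
    int64_t src = (int32_t)rs1;
    return (int32_t)(src >> (rs2 & 0x1f));
}
```

With the extension folded into the operand fetch, both instructions reduce to the same generic shift, which is what lets `gen_shift()` replace the per-instruction open-coded sequences.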
From: David Hoppenbrouwers <david@salt-inc.org>

`muldiv64` would overflow in cases where the final 96-bit value does not
fit in a `uint64_t`. This would result in small values that cause an
interrupt to be triggered much sooner than intended.

The overflow can be detected in most cases by checking if the new value is
smaller than the previous value. If the final result is larger than
`diff` it is either correct or it doesn't matter as it is effectively
infinite anyways.

`next` is an `uint64_t` value, but `timer_mod` takes an `int64_t`. This
resulted in high values such as `UINT64_MAX` being converted to `-1`,
which caused an immediate timer interrupt.

By limiting `next` to `INT64_MAX` no overflow will happen while the
timer will still be effectively set to "infinitely" far in the future.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/493
Signed-off-by: David Hoppenbrouwers <david@salt-inc.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210827152324.5201-1-david@salt-inc.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
hw/intc/sifive_clint.c | 25 +++++++++++++++++++++++--
1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/hw/intc/sifive_clint.c b/hw/intc/sifive_clint.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/sifive_clint.c
+++ b/hw/intc/sifive_clint.c
@@ -XXX,XX +XXX,XX @@ static void sifive_clint_write_timecmp(RISCVCPU *cpu, uint64_t value,
    riscv_cpu_update_mip(cpu, MIP_MTIP, BOOL_TO_MASK(0));
    diff = cpu->env.timecmp - rtc_r;
    /* back to ns (note args switched in muldiv64) */
-    next = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
-        muldiv64(diff, NANOSECONDS_PER_SECOND, timebase_freq);
+    uint64_t ns_diff = muldiv64(diff, NANOSECONDS_PER_SECOND, timebase_freq);
+
+    /*
+     * check if ns_diff overflowed and check if the addition would potentially
+     * overflow
+     */
+    if ((NANOSECONDS_PER_SECOND > timebase_freq && ns_diff < diff) ||
+        ns_diff > INT64_MAX) {
+        next = INT64_MAX;
+    } else {
+        /*
+         * as it is very unlikely qemu_clock_get_ns will return a value
+         * greater than INT64_MAX, no additional check is needed for an
+         * unsigned integer overflow.
+         */
+        next = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + ns_diff;
+        /*
+         * if ns_diff is INT64_MAX next may still be outside the range
+         * of a signed integer.
+         */
+        next = MIN(next, INT64_MAX);
+    }
+
    timer_mod(cpu->env.timer, next);
}

--
2.31.1

From: Cheng Yang <yangcheng.work@foxmail.com>

Use qemu_fdt_setprop_u64() instead of qemu_fdt_setprop_cell()
to set the address of initrd in FDT to support 64-bit address.

Signed-off-by: Cheng Yang <yangcheng.work@foxmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <tencent_A4482251DD0890F312758FA6B33F60815609@qq.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
hw/riscv/boot.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/riscv/boot.c b/hw/riscv/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/boot.c
+++ b/hw/riscv/boot.c
@@ -XXX,XX +XXX,XX @@ static void riscv_load_initrd(MachineState *machine, uint64_t kernel_entry)
    /* Some RISC-V machines (e.g. opentitan) don't have a fdt. */
    if (fdt) {
        end = start + size;
-        qemu_fdt_setprop_cell(fdt, "/chosen", "linux,initrd-start", start);
-        qemu_fdt_setprop_cell(fdt, "/chosen", "linux,initrd-end", end);
+        qemu_fdt_setprop_u64(fdt, "/chosen", "linux,initrd-start", start);
+        qemu_fdt_setprop_u64(fdt, "/chosen", "linux,initrd-end", end);
    }
}

--
2.45.1
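The core of the CLINT fix above is "saturate instead of wrap" when converting a tick delta to nanoseconds. A minimal sketch of that behaviour, modelling QEMU's 96-bit `muldiv64` with the GCC/Clang `__int128` extension (an assumption of this sketch, not what the patch itself uses):

```c
#include <stdint.h>

#define NS_PER_SEC 1000000000ull

/*
 * Convert a timer-tick delta at the given frequency to nanoseconds,
 * clamping to INT64_MAX so the result is always a valid expiry for a
 * signed-ns timer API (as timer_mod() expects).  Without the clamp,
 * a huge diff wraps to a small value and fires the timer immediately.
 */
static int64_t ticks_to_ns_sat(uint64_t diff, uint64_t freq)
{
    unsigned __int128 ns = (unsigned __int128)diff * NS_PER_SEC / freq;
    return ns > (unsigned __int128)INT64_MAX ? INT64_MAX : (int64_t)ns;
}
```

Clamping to INT64_MAX is safe here because an expiry that far in the future is effectively "never", which is exactly the intent when `timecmp` is set to a sentinel like UINT64_MAX.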
From: Richard Henderson <richard.henderson@linaro.org>

Use ctx->w for ctpopw, which is the only one that can
re-use the generic algorithm for the narrow operation.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-12-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/translate.c | 14 ++++++--------
target/riscv/insn_trans/trans_rvb.c.inc | 24 +++++++++---------------
2 files changed, 15 insertions(+), 23 deletions(-)

diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -XXX,XX +XXX,XX @@ static bool gen_shiftiw(DisasContext *ctx, arg_shift *a,
    return true;
}

-static bool gen_unary(DisasContext *ctx, arg_r2 *a,
-                      void(*func)(TCGv, TCGv))
+static bool gen_unary(DisasContext *ctx, arg_r2 *a, DisasExtend ext,
+                      void (*func)(TCGv, TCGv))
{
-    TCGv source = tcg_temp_new();
-
-    gen_get_gpr(ctx, source, a->rs1);
+    TCGv dest = dest_gpr(ctx, a->rd);
+    TCGv src1 = get_gpr(ctx, a->rs1, ext);

-    (*func)(source, source);
+    func(dest, src1);

-    gen_set_gpr(ctx, a->rd, source);
-    tcg_temp_free(source);
+    gen_set_gpr(ctx, a->rd, dest);
    return true;
}

diff --git a/target/riscv/insn_trans/trans_rvb.c.inc b/target/riscv/insn_trans/trans_rvb.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvb.c.inc
+++ b/target/riscv/insn_trans/trans_rvb.c.inc
@@ -XXX,XX +XXX,XX @@ static void gen_clz(TCGv ret, TCGv arg1)
static bool trans_clz(DisasContext *ctx, arg_clz *a)
{
    REQUIRE_EXT(ctx, RVB);
-    return gen_unary(ctx, a, gen_clz);
+    return gen_unary(ctx, a, EXT_ZERO, gen_clz);
}

static void gen_ctz(TCGv ret, TCGv arg1)
@@ -XXX,XX +XXX,XX @@ static void gen_ctz(TCGv ret, TCGv arg1)
static bool trans_ctz(DisasContext *ctx, arg_ctz *a)
{
    REQUIRE_EXT(ctx, RVB);
-    return gen_unary(ctx, a, gen_ctz);
+    return gen_unary(ctx, a, EXT_ZERO, gen_ctz);
}

static bool trans_cpop(DisasContext *ctx, arg_cpop *a)
{
    REQUIRE_EXT(ctx, RVB);
-    return gen_unary(ctx, a, tcg_gen_ctpop_tl);
+    return gen_unary(ctx, a, EXT_ZERO, tcg_gen_ctpop_tl);
}

static bool trans_andn(DisasContext *ctx, arg_andn *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_maxu(DisasContext *ctx, arg_maxu *a)
static bool trans_sext_b(DisasContext *ctx, arg_sext_b *a)
{
    REQUIRE_EXT(ctx, RVB);
-    return gen_unary(ctx, a, tcg_gen_ext8s_tl);
+    return gen_unary(ctx, a, EXT_NONE, tcg_gen_ext8s_tl);
}

static bool trans_sext_h(DisasContext *ctx, arg_sext_h *a)
{
    REQUIRE_EXT(ctx, RVB);
-    return gen_unary(ctx, a, tcg_gen_ext16s_tl);
+    return gen_unary(ctx, a, EXT_NONE, tcg_gen_ext16s_tl);
}

static void gen_sbop_mask(TCGv ret, TCGv shamt)
@@ -XXX,XX +XXX,XX @@ GEN_TRANS_SHADD(3)

static void gen_clzw(TCGv ret, TCGv arg1)
{
-    tcg_gen_ext32u_tl(ret, arg1);
    tcg_gen_clzi_tl(ret, ret, 64);
    tcg_gen_subi_tl(ret, ret, 32);
}
@@ -XXX,XX +XXX,XX @@ static bool trans_clzw(DisasContext *ctx, arg_clzw *a)
{
    REQUIRE_64BIT(ctx);
    REQUIRE_EXT(ctx, RVB);
-    return gen_unary(ctx, a, gen_clzw);
+    return gen_unary(ctx, a, EXT_ZERO, gen_clzw);
}

static void gen_ctzw(TCGv ret, TCGv arg1)
@@ -XXX,XX +XXX,XX @@ static bool trans_ctzw(DisasContext *ctx, arg_ctzw *a)
{
    REQUIRE_64BIT(ctx);
    REQUIRE_EXT(ctx, RVB);
-    return gen_unary(ctx, a, gen_ctzw);
-}
-
-static void gen_cpopw(TCGv ret, TCGv arg1)
-{
-    tcg_gen_ext32u_tl(arg1, arg1);
-    tcg_gen_ctpop_tl(ret, arg1);
+    return gen_unary(ctx, a, EXT_NONE, gen_ctzw);
}

static bool trans_cpopw(DisasContext *ctx, arg_cpopw *a)
{
    REQUIRE_64BIT(ctx);
    REQUIRE_EXT(ctx, RVB);
-    return gen_unary(ctx, a, gen_cpopw);
+    ctx->w = true;
+    return gen_unary(ctx, a, EXT_ZERO, tcg_gen_ctpop_tl);
}

static void gen_packw(TCGv ret, TCGv arg1, TCGv arg2)
--
2.31.1

From: Clément Léger <cleger@rivosinc.com>

The current semihost exception number (16) is a reserved number (range
[16-17]). The upcoming double trap specification uses that number for
the double trap exception. Since the privileged spec (Table 22) defines
ranges for custom uses, change the semihosting exception number to 63,
which belongs to the range [48-63], in order to avoid any future
collisions with reserved exceptions.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240422135840.1959967-1-cleger@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/cpu_bits.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_bits.h
+++ b/target/riscv/cpu_bits.h
@@ -XXX,XX +XXX,XX @@ typedef enum RISCVException {
    RISCV_EXCP_INST_PAGE_FAULT = 0xc, /* since: priv-1.10.0 */
    RISCV_EXCP_LOAD_PAGE_FAULT = 0xd, /* since: priv-1.10.0 */
    RISCV_EXCP_STORE_PAGE_FAULT = 0xf, /* since: priv-1.10.0 */
-    RISCV_EXCP_SEMIHOST = 0x10,
    RISCV_EXCP_INST_GUEST_PAGE_FAULT = 0x14,
    RISCV_EXCP_LOAD_GUEST_ACCESS_FAULT = 0x15,
    RISCV_EXCP_VIRT_INSTRUCTION_FAULT = 0x16,
    RISCV_EXCP_STORE_GUEST_AMO_ACCESS_FAULT = 0x17,
+    RISCV_EXCP_SEMIHOST = 0x3f,
} RISCVException;

#define RISCV_EXCP_INT_FLAG 0x80000000
--
2.45.1
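The range argument in the commit message above can be checked directly against the privileged spec's exception-code table: codes 24-31 and 48-63 are designated for custom use, while 16-17 are reserved. A tiny illustrative helper (not QEMU code) makes the before/after values concrete:

```c
#include <stdint.h>

/*
 * Sketch: true if an exception cause number falls in one of the
 * "designated for custom use" ranges of the privileged spec's
 * exception-code table.  The new RISCV_EXCP_SEMIHOST value 0x3f (63)
 * does; the old value 0x10 (16) sat in the reserved [16-17] range
 * that the double-trap spec now claims.
 */
static int excp_is_custom_use(uint32_t cause)
{
    return (cause >= 24 && cause <= 31) || (cause >= 48 && cause <= 63);
}
```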
From: Richard Henderson <richard.henderson@linaro.org>

Always use tcg_gen_deposit_z_tl; the special case for
shamt >= 32 is handled there.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-21-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/insn_trans/trans_rvb.c.inc | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvb.c.inc b/target/riscv/insn_trans/trans_rvb.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvb.c.inc
+++ b/target/riscv/insn_trans/trans_rvb.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_add_uw(DisasContext *ctx, arg_add_uw *a)
    return gen_arith(ctx, a, EXT_NONE, gen_add_uw);
}

+static void gen_slli_uw(TCGv dest, TCGv src, target_long shamt)
+{
+    tcg_gen_deposit_z_tl(dest, src, shamt, MIN(32, TARGET_LONG_BITS - shamt));
+}
+
static bool trans_slli_uw(DisasContext *ctx, arg_slli_uw *a)
{
    REQUIRE_64BIT(ctx);
    REQUIRE_EXT(ctx, RVB);
-
-    TCGv source1 = tcg_temp_new();
-    gen_get_gpr(ctx, source1, a->rs1);
-
-    if (a->shamt < 32) {
-        tcg_gen_deposit_z_tl(source1, source1, a->shamt, 32);
-    } else {
-        tcg_gen_shli_tl(source1, source1, a->shamt);
-    }
-
-    gen_set_gpr(ctx, a->rd, source1);
-    tcg_temp_free(source1);
-    return true;
+    return gen_shift_imm_fn(ctx, a, EXT_NONE, gen_slli_uw);
}
--
2.31.1

From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

Running a KVM guest using a 6.9-rc3 kernel, in a 6.8 host that has zkr
enabled, will fail with a kernel oops SIGILL right at the start. The
reason is that we can't expose zkr without implementing the SEED CSR.
Disabling zkr in the guest would be a workaround, but if the KVM doesn't
allow it we'll error out and never boot.

In hindsight this is too strict. If we keep proceeding, despite not
disabling the extension in the KVM vcpu, we'll not add the extension in
the riscv,isa. The guest kernel will be unaware of the extension, i.e.
it doesn't matter if the KVM vcpu has it enabled underneath or not. So
it's ok to keep booting in this case.

Change our current logic to not error out if we fail to disable an
extension in kvm_set_one_reg(), but show a warning and keep booting. It
is important to throw a warning because we must make the user aware that
the extension is still available in the vcpu, meaning that an
ill-behaved guest can ignore the riscv,isa settings and use the
extension.

The case we're handling happens with an EINVAL error code. If we fail to
disable the extension in KVM for any other reason, error out.

We'll also keep erroring out when we fail to enable an extension in KVM,
since adding the extension in riscv,isa at this point will cause a guest
malfunction because the extension isn't enabled in the vcpu.

Suggested-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Message-ID: <20240422171425.333037-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/kvm/kvm-cpu.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/riscv/kvm/kvm-cpu.c b/target/riscv/kvm/kvm-cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/kvm/kvm-cpu.c
+++ b/target/riscv/kvm/kvm-cpu.c
@@ -XXX,XX +XXX,XX @@ static void kvm_riscv_update_cpu_cfg_isa_ext(RISCVCPU *cpu, CPUState *cs)
        reg = kvm_cpu_cfg_get(cpu, multi_ext_cfg);
        ret = kvm_set_one_reg(cs, id, &reg);
        if (ret != 0) {
-            error_report("Unable to %s extension %s in KVM, error %d",
-                         reg ? "enable" : "disable",
-                         multi_ext_cfg->name, ret);
-            exit(EXIT_FAILURE);
+            if (!reg && ret == -EINVAL) {
+                warn_report("KVM cannot disable extension %s",
+                            multi_ext_cfg->name);
+            } else {
+                error_report("Unable to enable extension %s in KVM, error %d",
+                             multi_ext_cfg->name, ret);
+                exit(EXIT_FAILURE);
+            }
        }
    }
}
--
2.45.1
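The error policy the KVM patch above introduces is easy to state as a pure function: a failed *disable* that returns -EINVAL is tolerated with a warning (the extension is simply left out of riscv,isa), every other failure remains fatal. A model of that decision (illustrative names, not QEMU code):

```c
#include <errno.h>

enum ext_action { EXT_ACT_OK, EXT_ACT_WARN, EXT_ACT_FATAL };

/*
 * Decide how to react to kvm_set_one_reg() on an ISA-extension
 * register: 'enable' is the value we tried to write, 'ret' the
 * return code.  Only a refused disable (-EINVAL) warns and boots on.
 */
static enum ext_action ext_reg_policy(int enable, int ret)
{
    if (ret == 0) {
        return EXT_ACT_OK;
    }
    if (!enable && ret == -EINVAL) {
        return EXT_ACT_WARN;    /* ext stays on in the vcpu, but is
                                   not advertised to the guest */
    }
    return EXT_ACT_FATAL;       /* failed enables are still fatal */
}
```

The warning matters because, as the commit message notes, an ill-behaved guest could still use an extension the vcpu has enabled even though riscv,isa does not advertise it.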
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

We're not setting (s/m)tval when triggering breakpoints of type 2
(mcontrol) and 6 (mcontrol6). According to the debug spec section
5.7.12, "Match Control Type 6":

"The Privileged Spec says that breakpoint exceptions that occur on
instruction fetches, loads, or stores update the tval CSR with either
zero or the faulting virtual address. The faulting virtual address for
an mcontrol6 trigger with action = 0 is the address being accessed and
which caused that trigger to fire."

A similar text is also found in the Debug spec section 5.7.11 w.r.t.
mcontrol.

Note that what we're doing ATM is not violating the spec, but it's
simple enough to set mtval/stval and it makes life easier for any
software that relies on this info.

Given that we always use action = 0, save the faulting address for the
mcontrol and mcontrol6 trigger breakpoints into env->badaddr, which is
used as a scratch area for traps with address information. 'tval' is
then set during riscv_cpu_do_interrupt().

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-ID: <20240416230437.1869024-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/cpu_helper.c | 1 +
target/riscv/debug.c | 3 +++
2 files changed, 4 insertions(+)

diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_do_interrupt(CPUState *cs)
        tval = env->bins;
        break;
    case RISCV_EXCP_BREAKPOINT:
+        tval = env->badaddr;
        if (cs->watchpoint_hit) {
            tval = cs->watchpoint_hit->hitaddr;
            cs->watchpoint_hit = NULL;
diff --git a/target/riscv/debug.c b/target/riscv/debug.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/debug.c
+++ b/target/riscv/debug.c
@@ -XXX,XX +XXX,XX @@ bool riscv_cpu_debug_check_breakpoint(CPUState *cs)
            if ((ctrl & TYPE2_EXEC) && (bp->pc == pc)) {
                /* check U/S/M bit against current privilege level */
                if ((ctrl >> 3) & BIT(env->priv)) {
+                    env->badaddr = pc;
                    return true;
                }
            }
@@ -XXX,XX +XXX,XX @@ bool riscv_cpu_debug_check_breakpoint(CPUState *cs)
                if (env->virt_enabled) {
                    /* check VU/VS bit against current privilege level */
                    if ((ctrl >> 23) & BIT(env->priv)) {
+                        env->badaddr = pc;
                        return true;
                    }
                } else {
                    /* check U/S/M bit against current privilege level */
                    if ((ctrl >> 3) & BIT(env->priv)) {
+                        env->badaddr = pc;
                        return true;
                    }
                }

From: Richard Henderson <richard.henderson@linaro.org>

Exit early if check_access fails.
Split out do_hlv, do_hsv, do_hlvx subroutines.
Use dest_gpr, get_gpr in the new subroutines.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-24-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/insn32.decode | 1 +
target/riscv/insn_trans/trans_rvh.c.inc | 266 +++++-------------------
2 files changed, 57 insertions(+), 210 deletions(-)

diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -XXX,XX +XXX,XX @@
&j imm rd
&r rd rs1 rs2
&r2 rd rs1
+&r2_s rs1 rs2
&s imm rs1 rs2
&u imm rd
&shift shamt rs1 rd
diff --git a/target/riscv/insn_trans/trans_rvh.c.inc b/target/riscv/insn_trans/trans_rvh.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvh.c.inc
+++ b/target/riscv/insn_trans/trans_rvh.c.inc
@@ -XXX,XX +XXX,XX @@
 */

#ifndef CONFIG_USER_ONLY
-static void check_access(DisasContext *ctx) {
+static bool check_access(DisasContext *ctx)
+{
    if (!ctx->hlsx) {
        if (ctx->virt_enabled) {
            generate_exception(ctx, RISCV_EXCP_VIRT_INSTRUCTION_FAULT);
        } else {
            generate_exception(ctx, RISCV_EXCP_ILLEGAL_INST);
        }
+        return false;
    }
+    return true;
}
#endif

-static bool trans_hlv_b(DisasContext *ctx, arg_hlv_b *a)
+static bool do_hlv(DisasContext *ctx, arg_r2 *a, MemOp mop)
{
-    REQUIRE_EXT(ctx, RVH);
-#ifndef CONFIG_USER_ONLY
-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
-
-    check_access(ctx);
-
-    gen_get_gpr(ctx, t0, a->rs1);
-
-    tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_SB);
-    gen_set_gpr(ctx, a->rd, t1);
-
-    tcg_temp_free(t0);
-    tcg_temp_free(t1);
-    return true;
-#else
+#ifdef CONFIG_USER_ONLY
    return false;
+#else
+    if (check_access(ctx)) {
+        TCGv dest = dest_gpr(ctx, a->rd);
+        TCGv addr = get_gpr(ctx, a->rs1, EXT_NONE);
+        int mem_idx = ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK;
+        tcg_gen_qemu_ld_tl(dest, addr, mem_idx, mop);
+        gen_set_gpr(ctx, a->rd, dest);
+    }
+    return true;
#endif
}

-static bool trans_hlv_h(DisasContext *ctx, arg_hlv_h *a)
+static bool trans_hlv_b(DisasContext *ctx, arg_hlv_b *a)
{
    REQUIRE_EXT(ctx, RVH);
-#ifndef CONFIG_USER_ONLY
-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
-
-    check_access(ctx);
-
-    gen_get_gpr(ctx, t0, a->rs1);
-
-    tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_TESW);
-    gen_set_gpr(ctx, a->rd, t1);
+    return do_hlv(ctx, a, MO_SB);
+}

-    tcg_temp_free(t0);
-    tcg_temp_free(t1);
-    return true;
-#else
-    return false;
-#endif
+static bool trans_hlv_h(DisasContext *ctx, arg_hlv_h *a)
+{
+    REQUIRE_EXT(ctx, RVH);
+    return do_hlv(ctx, a, MO_TESW);
}

static bool trans_hlv_w(DisasContext *ctx, arg_hlv_w *a)
{
    REQUIRE_EXT(ctx, RVH);
-#ifndef CONFIG_USER_ONLY
-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
-
-    check_access(ctx);
-
-    gen_get_gpr(ctx, t0, a->rs1);
-
-    tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_TESL);
-    gen_set_gpr(ctx, a->rd, t1);
-
-    tcg_temp_free(t0);
-    tcg_temp_free(t1);
-    return true;
-#else
-    return false;
-#endif
+    return do_hlv(ctx, a, MO_TESL);
}

static bool trans_hlv_bu(DisasContext *ctx, arg_hlv_bu *a)
{
    REQUIRE_EXT(ctx, RVH);
-#ifndef CONFIG_USER_ONLY
-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
-
-    check_access(ctx);
-
-    gen_get_gpr(ctx, t0, a->rs1);
-
-    tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_UB);
-    gen_set_gpr(ctx, a->rd, t1);
-
-    tcg_temp_free(t0);
-    tcg_temp_free(t1);
-    return true;
-#else
-    return false;
-#endif
+    return do_hlv(ctx, a, MO_UB);
}

static bool trans_hlv_hu(DisasContext *ctx, arg_hlv_hu *a)
{
    REQUIRE_EXT(ctx, RVH);
-#ifndef CONFIG_USER_ONLY
-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
-
-    check_access(ctx);
-
-    gen_get_gpr(ctx, t0, a->rs1);
-    tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_TEUW);
-    gen_set_gpr(ctx, a->rd, t1);
+    return do_hlv(ctx, a, MO_TEUW);
+}

-    tcg_temp_free(t0);
-    tcg_temp_free(t1);
-    return true;
-#else
+static bool do_hsv(DisasContext *ctx, arg_r2_s *a, MemOp mop)
+{
+#ifdef CONFIG_USER_ONLY
    return false;
+#else
+    if (check_access(ctx)) {
+        TCGv addr = get_gpr(ctx, a->rs1, EXT_NONE);
+        TCGv data = get_gpr(ctx, a->rs2, EXT_NONE);
+        int mem_idx = ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK;
+        tcg_gen_qemu_st_tl(data, addr, mem_idx, mop);
+    }
+    return true;
#endif
}

static bool trans_hsv_b(DisasContext *ctx, arg_hsv_b *a)
{
    REQUIRE_EXT(ctx, RVH);
-#ifndef CONFIG_USER_ONLY
-    TCGv t0 = tcg_temp_new();
-    TCGv dat = tcg_temp_new();
-
-    check_access(ctx);
-
-    gen_get_gpr(ctx, t0, a->rs1);
-    gen_get_gpr(ctx, dat, a->rs2);
-
-    tcg_gen_qemu_st_tl(dat, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_SB);
-
-    tcg_temp_free(t0);
-    tcg_temp_free(dat);
-    return true;
-#else
-    return false;
-#endif
+    return do_hsv(ctx, a, MO_SB);
}

static bool trans_hsv_h(DisasContext *ctx, arg_hsv_h *a)
{
    REQUIRE_EXT(ctx, RVH);
-#ifndef CONFIG_USER_ONLY
-    TCGv t0 = tcg_temp_new();
-    TCGv dat = tcg_temp_new();
-
-    check_access(ctx);
-
-    gen_get_gpr(ctx, t0, a->rs1);
-    gen_get_gpr(ctx, dat, a->rs2);
-
-    tcg_gen_qemu_st_tl(dat, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_TESW);
-
-    tcg_temp_free(t0);
-    tcg_temp_free(dat);
-    return true;
-#else
-    return false;
-#endif
+    return do_hsv(ctx, a, MO_TESW);
}

static bool trans_hsv_w(DisasContext *ctx, arg_hsv_w *a)
{
    REQUIRE_EXT(ctx, RVH);
-#ifndef CONFIG_USER_ONLY
-    TCGv t0 = tcg_temp_new();
-    TCGv dat = tcg_temp_new();
-
-    check_access(ctx);
-
-    gen_get_gpr(ctx, t0, a->rs1);
-    gen_get_gpr(ctx, dat, a->rs2);
-
-    tcg_gen_qemu_st_tl(dat, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_TESL);
-
-    tcg_temp_free(t0);
-    tcg_temp_free(dat);
-    return true;
-#else
-    return false;
-#endif
+    return do_hsv(ctx, a, MO_TESL);
}

static bool trans_hlv_wu(DisasContext *ctx, arg_hlv_wu *a)
{
    REQUIRE_64BIT(ctx);
    REQUIRE_EXT(ctx, RVH);
-
-#ifndef CONFIG_USER_ONLY
-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
-
-    check_access(ctx);
-
-    gen_get_gpr(ctx, t0, a->rs1);
-
-    tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_TEUL);
-    gen_set_gpr(ctx, a->rd, t1);
-
-    tcg_temp_free(t0);
-    tcg_temp_free(t1);
-    return true;
-#else
-    return false;
-#endif
+    return do_hlv(ctx, a, MO_TEUL);
}

static bool trans_hlv_d(DisasContext *ctx, arg_hlv_d *a)
{
    REQUIRE_64BIT(ctx);
    REQUIRE_EXT(ctx, RVH);
-
-#ifndef CONFIG_USER_ONLY
-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
-
-    check_access(ctx);
-
-    gen_get_gpr(ctx, t0, a->rs1);
-
-    tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_TEQ);
-    gen_set_gpr(ctx, a->rd, t1);
-
-    tcg_temp_free(t0);
-    tcg_temp_free(t1);
-    return true;
-#else
-    return false;
-#endif
+    return do_hlv(ctx, a, MO_TEQ);
}

static bool trans_hsv_d(DisasContext *ctx, arg_hsv_d *a)
{
    REQUIRE_64BIT(ctx);
    REQUIRE_EXT(ctx, RVH);
+    return do_hsv(ctx, a, MO_TEQ);
+}

#ifndef CONFIG_USER_ONLY
-    TCGv t0 = tcg_temp_new();
-    TCGv dat = tcg_temp_new();
-
-    check_access(ctx);
-
-    gen_get_gpr(ctx, t0, a->rs1);
-    gen_get_gpr(ctx, dat, a->rs2);
-
-    tcg_gen_qemu_st_tl(dat, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_TEQ);
-
-    tcg_temp_free(t0);
-    tcg_temp_free(dat);
+static bool do_hlvx(DisasContext *ctx, arg_r2 *a,
+                    void (*func)(TCGv, TCGv_env, TCGv))
+{
+    if (check_access(ctx)) {
+        TCGv dest = dest_gpr(ctx, a->rd);
+        TCGv addr = get_gpr(ctx, a->rs1, EXT_NONE);
+        func(dest, cpu_env, addr);
+        gen_set_gpr(ctx, a->rd, dest);
+    }
    return true;
-#else
-    return false;
-#endif
}
+#endif

static bool trans_hlvx_hu(DisasContext *ctx, arg_hlvx_hu *a)
{
    REQUIRE_EXT(ctx, RVH);
#ifndef CONFIG_USER_ONLY
-    TCGv t0 = tcg_temp_new();
-    TCGv t1 = tcg_temp_new();
-
-    check_access(ctx);
-
-    gen_get_gpr(ctx, t0, a->rs1);
-
-    gen_helper_hyp_hlvx_hu(t1, cpu_env, t0);
361
- gen_set_gpr(ctx, a->rd, t1);
362
-
363
- tcg_temp_free(t0);
364
- tcg_temp_free(t1);
365
- return true;
366
+ return do_hlvx(ctx, a, gen_helper_hyp_hlvx_hu);
367
#else
368
return false;
369
#endif
370
@@ -XXX,XX +XXX,XX @@ static bool trans_hlvx_wu(DisasContext *ctx, arg_hlvx_wu *a)
371
{
372
REQUIRE_EXT(ctx, RVH);
373
#ifndef CONFIG_USER_ONLY
374
- TCGv t0 = tcg_temp_new();
375
- TCGv t1 = tcg_temp_new();
376
-
377
- check_access(ctx);
378
-
379
- gen_get_gpr(ctx, t0, a->rs1);
380
-
381
- gen_helper_hyp_hlvx_wu(t1, cpu_env, t0);
382
- gen_set_gpr(ctx, a->rd, t1);
383
-
384
- tcg_temp_free(t0);
385
- tcg_temp_free(t1);
386
- return true;
387
+ return do_hlvx(ctx, a, gen_helper_hyp_hlvx_wu);
388
#else
389
return false;
390
#endif
391
--
73
--
392
2.31.1
74
2.45.1
393
394
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
2
2
3
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
3
Privileged spec section 4.1.9 mentions:
4
5
"When a trap is taken into S-mode, stval is written with
6
exception-specific information to assist software in handling the trap.
7
(...)
8
9
If stval is written with a nonzero value when a breakpoint,
10
address-misaligned, access-fault, or page-fault exception occurs on an
11
instruction fetch, load, or store, then stval will contain the faulting
12
virtual address."
13
14
A similar text is found for mtval in section 3.1.16.
15
16
Setting mtval/stval in this scenario is optional, but some software reads
these regs when handling ebreaks.
18
19
Write 'badaddr' in all ebreak breakpoints so that the appropriate
'tval' is reported during riscv_cpu_do_interrupt().
21
22
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
4
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
23
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
24
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
6
Message-id: 20210823195529.560295-16-richard.henderson@linaro.org
25
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
26
Message-ID: <20240416230437.1869024-3-dbarboza@ventanamicro.com>
7
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
27
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
8
---
28
---
9
target/riscv/insn_trans/trans_rvi.c.inc | 38 +++++++++++++------------
29
target/riscv/insn_trans/trans_privileged.c.inc | 2 ++
10
1 file changed, 20 insertions(+), 18 deletions(-)
30
1 file changed, 2 insertions(+)
11
31
12
diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
32
diff --git a/target/riscv/insn_trans/trans_privileged.c.inc b/target/riscv/insn_trans/trans_privileged.c.inc
13
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
14
--- a/target/riscv/insn_trans/trans_rvi.c.inc
34
--- a/target/riscv/insn_trans/trans_privileged.c.inc
15
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
35
+++ b/target/riscv/insn_trans/trans_privileged.c.inc
16
@@ -XXX,XX +XXX,XX @@ static bool trans_bgeu(DisasContext *ctx, arg_bgeu *a)
36
@@ -XXX,XX +XXX,XX @@ static bool trans_ebreak(DisasContext *ctx, arg_ebreak *a)
17
37
if (pre == 0x01f01013 && ebreak == 0x00100073 && post == 0x40705013) {
18
static bool gen_load(DisasContext *ctx, arg_lb *a, MemOp memop)
38
generate_exception(ctx, RISCV_EXCP_SEMIHOST);
19
{
39
} else {
20
- TCGv t0 = tcg_temp_new();
40
+ tcg_gen_st_tl(tcg_constant_tl(ebreak_addr), tcg_env,
21
- TCGv t1 = tcg_temp_new();
41
+ offsetof(CPURISCVState, badaddr));
22
- gen_get_gpr(ctx, t0, a->rs1);
42
generate_exception(ctx, RISCV_EXCP_BREAKPOINT);
23
- tcg_gen_addi_tl(t0, t0, a->imm);
43
}
24
-
25
- tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, memop);
26
- gen_set_gpr(ctx, a->rd, t1);
27
- tcg_temp_free(t0);
28
- tcg_temp_free(t1);
29
+ TCGv dest = dest_gpr(ctx, a->rd);
30
+ TCGv addr = get_gpr(ctx, a->rs1, EXT_NONE);
31
+
32
+ if (a->imm) {
33
+ TCGv temp = temp_new(ctx);
34
+ tcg_gen_addi_tl(temp, addr, a->imm);
35
+ addr = temp;
36
+ }
37
+
38
+ tcg_gen_qemu_ld_tl(dest, addr, ctx->mem_idx, memop);
39
+ gen_set_gpr(ctx, a->rd, dest);
40
return true;
44
return true;
41
}
42
43
@@ -XXX,XX +XXX,XX @@ static bool trans_lhu(DisasContext *ctx, arg_lhu *a)
44
45
static bool gen_store(DisasContext *ctx, arg_sb *a, MemOp memop)
46
{
47
- TCGv t0 = tcg_temp_new();
48
- TCGv dat = tcg_temp_new();
49
- gen_get_gpr(ctx, t0, a->rs1);
50
- tcg_gen_addi_tl(t0, t0, a->imm);
51
- gen_get_gpr(ctx, dat, a->rs2);
52
+ TCGv addr = get_gpr(ctx, a->rs1, EXT_NONE);
53
+ TCGv data = get_gpr(ctx, a->rs2, EXT_NONE);
54
55
- tcg_gen_qemu_st_tl(dat, t0, ctx->mem_idx, memop);
56
- tcg_temp_free(t0);
57
- tcg_temp_free(dat);
58
+ if (a->imm) {
59
+ TCGv temp = temp_new(ctx);
60
+ tcg_gen_addi_tl(temp, addr, a->imm);
61
+ addr = temp;
62
+ }
63
+
64
+ tcg_gen_qemu_st_tl(data, addr, ctx->mem_idx, memop);
65
return true;
66
}
67
68
-
69
static bool trans_sb(DisasContext *ctx, arg_sb *a)
70
{
71
return gen_store(ctx, a, MO_SB);
72
--
45
--
73
2.31.1
46
2.45.1
74
75
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Jason Chien <jason.chien@sifive.com>
2
2
3
Replace those uses of tcg_const_* where the allocate and free occur close together.
3
Add support for Zve32x extension and replace some checks for Zve32f with
4
Zve32x, since Zve32f depends on Zve32x.
4
5
5
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
6
Signed-off-by: Jason Chien <jason.chien@sifive.com>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Frank Chang <frank.chang@sifive.com>
7
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
8
Reviewed-by: Max Chou <max.chou@sifive.com>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
9
Message-id: 20210823195529.560295-2-richard.henderson@linaro.org
10
Message-ID: <20240328022343.6871-2-jason.chien@sifive.com>
10
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
11
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
11
---
12
---
12
target/riscv/translate.c | 36 ++++----------
13
target/riscv/cpu_cfg.h | 1 +
13
target/riscv/insn_trans/trans_rvf.c.inc | 3 +-
14
target/riscv/cpu.c | 2 ++
14
target/riscv/insn_trans/trans_rvv.c.inc | 65 +++++++++----------------
15
target/riscv/cpu_helper.c | 2 +-
15
3 files changed, 34 insertions(+), 70 deletions(-)
16
target/riscv/csr.c | 2 +-
17
target/riscv/tcg/tcg-cpu.c | 16 ++++++++--------
18
target/riscv/insn_trans/trans_rvv.c.inc | 4 ++--
19
6 files changed, 15 insertions(+), 12 deletions(-)
16
20
17
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
21
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
18
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
19
--- a/target/riscv/translate.c
23
--- a/target/riscv/cpu_cfg.h
20
+++ b/target/riscv/translate.c
24
+++ b/target/riscv/cpu_cfg.h
21
@@ -XXX,XX +XXX,XX @@ static void gen_nanbox_s(TCGv_i64 out, TCGv_i64 in)
25
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
22
*/
26
bool ext_zhinx;
23
static void gen_check_nanbox_s(TCGv_i64 out, TCGv_i64 in)
27
bool ext_zhinxmin;
28
bool ext_zve32f;
29
+ bool ext_zve32x;
30
bool ext_zve64f;
31
bool ext_zve64d;
32
bool ext_zvbb;
33
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/riscv/cpu.c
36
+++ b/target/riscv/cpu.c
37
@@ -XXX,XX +XXX,XX @@ const RISCVIsaExtData isa_edata_arr[] = {
38
ISA_EXT_DATA_ENTRY(zvbb, PRIV_VERSION_1_12_0, ext_zvbb),
39
ISA_EXT_DATA_ENTRY(zvbc, PRIV_VERSION_1_12_0, ext_zvbc),
40
ISA_EXT_DATA_ENTRY(zve32f, PRIV_VERSION_1_10_0, ext_zve32f),
41
+ ISA_EXT_DATA_ENTRY(zve32x, PRIV_VERSION_1_10_0, ext_zve32x),
42
ISA_EXT_DATA_ENTRY(zve64f, PRIV_VERSION_1_10_0, ext_zve64f),
43
ISA_EXT_DATA_ENTRY(zve64d, PRIV_VERSION_1_10_0, ext_zve64d),
44
ISA_EXT_DATA_ENTRY(zvfbfmin, PRIV_VERSION_1_12_0, ext_zvfbfmin),
45
@@ -XXX,XX +XXX,XX @@ const RISCVCPUMultiExtConfig riscv_cpu_extensions[] = {
46
MULTI_EXT_CFG_BOOL("zfh", ext_zfh, false),
47
MULTI_EXT_CFG_BOOL("zfhmin", ext_zfhmin, false),
48
MULTI_EXT_CFG_BOOL("zve32f", ext_zve32f, false),
49
+ MULTI_EXT_CFG_BOOL("zve32x", ext_zve32x, false),
50
MULTI_EXT_CFG_BOOL("zve64f", ext_zve64f, false),
51
MULTI_EXT_CFG_BOOL("zve64d", ext_zve64d, false),
52
MULTI_EXT_CFG_BOOL("zvfbfmin", ext_zvfbfmin, false),
53
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/target/riscv/cpu_helper.c
56
+++ b/target/riscv/cpu_helper.c
57
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPURISCVState *env, vaddr *pc,
58
*pc = env->xl == MXL_RV32 ? env->pc & UINT32_MAX : env->pc;
59
*cs_base = 0;
60
61
- if (cpu->cfg.ext_zve32f) {
62
+ if (cpu->cfg.ext_zve32x) {
63
/*
64
* If env->vl equals to VLMAX, we can use generic vector operation
65
* expanders (GVEC) to accelerate the vector operations.
66
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/riscv/csr.c
69
+++ b/target/riscv/csr.c
70
@@ -XXX,XX +XXX,XX @@ static RISCVException fs(CPURISCVState *env, int csrno)
71
72
static RISCVException vs(CPURISCVState *env, int csrno)
24
{
73
{
25
- TCGv_i64 t_max = tcg_const_i64(0xffffffff00000000ull);
74
- if (riscv_cpu_cfg(env)->ext_zve32f) {
26
- TCGv_i64 t_nan = tcg_const_i64(0xffffffff7fc00000ull);
75
+ if (riscv_cpu_cfg(env)->ext_zve32x) {
27
+ TCGv_i64 t_max = tcg_constant_i64(0xffffffff00000000ull);
76
#if !defined(CONFIG_USER_ONLY)
28
+ TCGv_i64 t_nan = tcg_constant_i64(0xffffffff7fc00000ull);
77
if (!env->debugger && !riscv_cpu_vector_enabled(env)) {
29
78
return RISCV_EXCP_ILLEGAL_INST;
30
tcg_gen_movcond_i64(TCG_COND_GEU, out, in, t_max, in, t_nan);
79
diff --git a/target/riscv/tcg/tcg-cpu.c b/target/riscv/tcg/tcg-cpu.c
31
- tcg_temp_free_i64(t_max);
80
index XXXXXXX..XXXXXXX 100644
32
- tcg_temp_free_i64(t_nan);
81
--- a/target/riscv/tcg/tcg-cpu.c
33
}
82
+++ b/target/riscv/tcg/tcg-cpu.c
34
83
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
35
static void generate_exception(DisasContext *ctx, int excp)
36
{
37
tcg_gen_movi_tl(cpu_pc, ctx->base.pc_next);
38
- TCGv_i32 helper_tmp = tcg_const_i32(excp);
39
- gen_helper_raise_exception(cpu_env, helper_tmp);
40
- tcg_temp_free_i32(helper_tmp);
41
+ gen_helper_raise_exception(cpu_env, tcg_constant_i32(excp));
42
ctx->base.is_jmp = DISAS_NORETURN;
43
}
44
45
@@ -XXX,XX +XXX,XX @@ static void generate_exception_mtval(DisasContext *ctx, int excp)
46
{
47
tcg_gen_movi_tl(cpu_pc, ctx->base.pc_next);
48
tcg_gen_st_tl(cpu_pc, cpu_env, offsetof(CPURISCVState, badaddr));
49
- TCGv_i32 helper_tmp = tcg_const_i32(excp);
50
- gen_helper_raise_exception(cpu_env, helper_tmp);
51
- tcg_temp_free_i32(helper_tmp);
52
+ gen_helper_raise_exception(cpu_env, tcg_constant_i32(excp));
53
ctx->base.is_jmp = DISAS_NORETURN;
54
}
55
56
static void gen_exception_debug(void)
57
{
58
- TCGv_i32 helper_tmp = tcg_const_i32(EXCP_DEBUG);
59
- gen_helper_raise_exception(cpu_env, helper_tmp);
60
- tcg_temp_free_i32(helper_tmp);
61
+ gen_helper_raise_exception(cpu_env, tcg_constant_i32(EXCP_DEBUG));
62
}
63
64
/* Wrapper around tcg_gen_exit_tb that handles single stepping */
65
@@ -XXX,XX +XXX,XX @@ static void gen_div(TCGv ret, TCGv source1, TCGv source2)
66
*/
67
cond1 = tcg_temp_new();
68
cond2 = tcg_temp_new();
69
- zeroreg = tcg_const_tl(0);
70
+ zeroreg = tcg_constant_tl(0);
71
resultopt1 = tcg_temp_new();
72
73
tcg_gen_movi_tl(resultopt1, (target_ulong)-1);
74
@@ -XXX,XX +XXX,XX @@ static void gen_div(TCGv ret, TCGv source1, TCGv source2)
75
76
tcg_temp_free(cond1);
77
tcg_temp_free(cond2);
78
- tcg_temp_free(zeroreg);
79
tcg_temp_free(resultopt1);
80
}
81
82
@@ -XXX,XX +XXX,XX @@ static void gen_divu(TCGv ret, TCGv source1, TCGv source2)
83
TCGv cond1, zeroreg, resultopt1;
84
cond1 = tcg_temp_new();
85
86
- zeroreg = tcg_const_tl(0);
87
+ zeroreg = tcg_constant_tl(0);
88
resultopt1 = tcg_temp_new();
89
90
tcg_gen_setcondi_tl(TCG_COND_EQ, cond1, source2, 0);
91
@@ -XXX,XX +XXX,XX @@ static void gen_divu(TCGv ret, TCGv source1, TCGv source2)
92
tcg_gen_divu_tl(ret, source1, source2);
93
94
tcg_temp_free(cond1);
95
- tcg_temp_free(zeroreg);
96
tcg_temp_free(resultopt1);
97
}
98
99
@@ -XXX,XX +XXX,XX @@ static void gen_rem(TCGv ret, TCGv source1, TCGv source2)
100
101
cond1 = tcg_temp_new();
102
cond2 = tcg_temp_new();
103
- zeroreg = tcg_const_tl(0);
104
+ zeroreg = tcg_constant_tl(0);
105
resultopt1 = tcg_temp_new();
106
107
tcg_gen_movi_tl(resultopt1, 1L);
108
@@ -XXX,XX +XXX,XX @@ static void gen_rem(TCGv ret, TCGv source1, TCGv source2)
109
110
tcg_temp_free(cond1);
111
tcg_temp_free(cond2);
112
- tcg_temp_free(zeroreg);
113
tcg_temp_free(resultopt1);
114
}
115
116
@@ -XXX,XX +XXX,XX @@ static void gen_remu(TCGv ret, TCGv source1, TCGv source2)
117
{
118
TCGv cond1, zeroreg, resultopt1;
119
cond1 = tcg_temp_new();
120
- zeroreg = tcg_const_tl(0);
121
+ zeroreg = tcg_constant_tl(0);
122
resultopt1 = tcg_temp_new();
123
124
tcg_gen_movi_tl(resultopt1, (target_ulong)1);
125
@@ -XXX,XX +XXX,XX @@ static void gen_remu(TCGv ret, TCGv source1, TCGv source2)
126
source1);
127
128
tcg_temp_free(cond1);
129
- tcg_temp_free(zeroreg);
130
tcg_temp_free(resultopt1);
131
}
132
133
@@ -XXX,XX +XXX,XX @@ static inline void mark_fs_dirty(DisasContext *ctx) { }
134
135
static void gen_set_rm(DisasContext *ctx, int rm)
136
{
137
- TCGv_i32 t0;
138
-
139
if (ctx->frm == rm) {
140
return;
84
return;
141
}
85
}
142
ctx->frm = rm;
86
143
- t0 = tcg_const_i32(rm);
87
- if (cpu->cfg.ext_zve32f && !riscv_has_ext(env, RVF)) {
144
- gen_helper_set_rounding_mode(cpu_env, t0);
88
- error_setg(errp, "Zve32f/Zve64f extensions require F extension");
145
- tcg_temp_free_i32(t0);
89
- return;
146
+ gen_helper_set_rounding_mode(cpu_env, tcg_constant_i32(rm));
90
+ /* The Zve32f extension depends on the Zve32x extension */
147
}
91
+ if (cpu->cfg.ext_zve32f) {
148
92
+ if (!riscv_has_ext(env, RVF)) {
149
static int ex_plus_1(DisasContext *ctx, int nf)
93
+ error_setg(errp, "Zve32f/Zve64f extensions require F extension");
150
diff --git a/target/riscv/insn_trans/trans_rvf.c.inc b/target/riscv/insn_trans/trans_rvf.c.inc
94
+ return;
151
index XXXXXXX..XXXXXXX 100644
95
+ }
152
--- a/target/riscv/insn_trans/trans_rvf.c.inc
96
+ cpu_cfg_ext_auto_update(cpu, CPU_CFG_OFFSET(ext_zve32x), true);
153
+++ b/target/riscv/insn_trans/trans_rvf.c.inc
154
@@ -XXX,XX +XXX,XX @@ static bool trans_fsgnjn_s(DisasContext *ctx, arg_fsgnjn_s *a)
155
* Replace bit 31 in rs1 with inverse in rs2.
156
* This formulation retains the nanboxing of rs1.
157
*/
158
- mask = tcg_const_i64(~MAKE_64BIT_MASK(31, 1));
159
+ mask = tcg_constant_i64(~MAKE_64BIT_MASK(31, 1));
160
tcg_gen_nor_i64(rs2, rs2, mask);
161
tcg_gen_and_i64(rs1, mask, rs1);
162
tcg_gen_or_i64(cpu_fpr[a->rd], rs1, rs2);
163
164
- tcg_temp_free_i64(mask);
165
tcg_temp_free_i64(rs2);
166
}
97
}
167
tcg_temp_free_i64(rs1);
98
99
if (cpu->cfg.ext_zvfh) {
100
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
101
cpu_cfg_ext_auto_update(cpu, CPU_CFG_OFFSET(ext_zvbc), true);
102
}
103
104
- /*
105
- * In principle Zve*x would also suffice here, were they supported
106
- * in qemu
107
- */
108
if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkb || cpu->cfg.ext_zvkg ||
109
cpu->cfg.ext_zvkned || cpu->cfg.ext_zvknha || cpu->cfg.ext_zvksed ||
110
- cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32f) {
111
+ cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32x) {
112
error_setg(errp,
113
"Vector crypto extensions require V or Zve* extensions");
114
return;
168
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
115
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
169
index XXXXXXX..XXXXXXX 100644
116
index XXXXXXX..XXXXXXX 100644
170
--- a/target/riscv/insn_trans/trans_rvv.c.inc
117
--- a/target/riscv/insn_trans/trans_rvv.c.inc
171
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
118
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
172
@@ -XXX,XX +XXX,XX @@ static bool trans_vsetvl(DisasContext *ctx, arg_vsetvl *a)
119
@@ -XXX,XX +XXX,XX @@ static bool do_vsetvl(DisasContext *s, int rd, int rs1, TCGv s2)
173
/* Using x0 as the rs1 register specifier, encodes an infinite AVL */
120
{
174
if (a->rs1 == 0) {
121
TCGv s1, dst;
175
/* As the mask is at least one bit, RV_VLEN_MAX is >= VLMAX */
122
176
- s1 = tcg_const_tl(RV_VLEN_MAX);
123
- if (!require_rvv(s) || !s->cfg_ptr->ext_zve32f) {
177
+ s1 = tcg_constant_tl(RV_VLEN_MAX);
124
+ if (!require_rvv(s) || !s->cfg_ptr->ext_zve32x) {
178
} else {
179
s1 = tcg_temp_new();
180
gen_get_gpr(s1, a->rs1);
181
@@ -XXX,XX +XXX,XX @@ static bool trans_vsetvli(DisasContext *ctx, arg_vsetvli *a)
182
return false;
125
return false;
183
}
126
}
184
127
185
- s2 = tcg_const_tl(a->zimm);
128
@@ -XXX,XX +XXX,XX @@ static bool do_vsetivli(DisasContext *s, int rd, TCGv s1, TCGv s2)
186
+ s2 = tcg_constant_tl(a->zimm);
129
{
187
dst = tcg_temp_new();
130
TCGv dst;
188
131
189
/* Using x0 as the rs1 register specifier, encodes an infinite AVL */
132
- if (!require_rvv(s) || !s->cfg_ptr->ext_zve32f) {
190
if (a->rs1 == 0) {
133
+ if (!require_rvv(s) || !s->cfg_ptr->ext_zve32x) {
191
/* As the mask is at least one bit, RV_VLEN_MAX is >= VLMAX */
134
return false;
192
- s1 = tcg_const_tl(RV_VLEN_MAX);
193
+ s1 = tcg_constant_tl(RV_VLEN_MAX);
194
} else {
195
s1 = tcg_temp_new();
196
gen_get_gpr(s1, a->rs1);
197
@@ -XXX,XX +XXX,XX @@ static bool trans_vsetvli(DisasContext *ctx, arg_vsetvli *a)
198
ctx->base.is_jmp = DISAS_NORETURN;
199
200
tcg_temp_free(s1);
201
- tcg_temp_free(s2);
202
tcg_temp_free(dst);
203
return true;
204
}
205
@@ -XXX,XX +XXX,XX @@ static bool ldst_us_trans(uint32_t vd, uint32_t rs1, uint32_t data,
206
* The first part is vlen in bytes, encoded in maxsz of simd_desc.
207
* The second part is lmul, encoded in data of simd_desc.
208
*/
209
- desc = tcg_const_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
210
+ desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
211
212
gen_get_gpr(base, rs1);
213
tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
214
@@ -XXX,XX +XXX,XX @@ static bool ldst_us_trans(uint32_t vd, uint32_t rs1, uint32_t data,
215
tcg_temp_free_ptr(dest);
216
tcg_temp_free_ptr(mask);
217
tcg_temp_free(base);
218
- tcg_temp_free_i32(desc);
219
gen_set_label(over);
220
return true;
221
}
222
@@ -XXX,XX +XXX,XX @@ static bool ldst_stride_trans(uint32_t vd, uint32_t rs1, uint32_t rs2,
223
mask = tcg_temp_new_ptr();
224
base = tcg_temp_new();
225
stride = tcg_temp_new();
226
- desc = tcg_const_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
227
+ desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
228
229
gen_get_gpr(base, rs1);
230
gen_get_gpr(stride, rs2);
231
@@ -XXX,XX +XXX,XX @@ static bool ldst_stride_trans(uint32_t vd, uint32_t rs1, uint32_t rs2,
232
tcg_temp_free_ptr(mask);
233
tcg_temp_free(base);
234
tcg_temp_free(stride);
235
- tcg_temp_free_i32(desc);
236
gen_set_label(over);
237
return true;
238
}
239
@@ -XXX,XX +XXX,XX @@ static bool ldst_index_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
240
mask = tcg_temp_new_ptr();
241
index = tcg_temp_new_ptr();
242
base = tcg_temp_new();
243
- desc = tcg_const_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
244
+ desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
245
246
gen_get_gpr(base, rs1);
247
tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
248
@@ -XXX,XX +XXX,XX @@ static bool ldst_index_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
249
tcg_temp_free_ptr(mask);
250
tcg_temp_free_ptr(index);
251
tcg_temp_free(base);
252
- tcg_temp_free_i32(desc);
253
gen_set_label(over);
254
return true;
255
}
256
@@ -XXX,XX +XXX,XX @@ static bool ldff_trans(uint32_t vd, uint32_t rs1, uint32_t data,
257
dest = tcg_temp_new_ptr();
258
mask = tcg_temp_new_ptr();
259
base = tcg_temp_new();
260
- desc = tcg_const_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
261
+ desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
262
263
gen_get_gpr(base, rs1);
264
tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
265
@@ -XXX,XX +XXX,XX @@ static bool ldff_trans(uint32_t vd, uint32_t rs1, uint32_t data,
266
tcg_temp_free_ptr(dest);
267
tcg_temp_free_ptr(mask);
268
tcg_temp_free(base);
269
- tcg_temp_free_i32(desc);
270
gen_set_label(over);
271
return true;
272
}
273
@@ -XXX,XX +XXX,XX @@ static bool amo_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
274
mask = tcg_temp_new_ptr();
275
index = tcg_temp_new_ptr();
276
base = tcg_temp_new();
277
- desc = tcg_const_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
278
+ desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
279
280
gen_get_gpr(base, rs1);
281
tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
282
@@ -XXX,XX +XXX,XX @@ static bool amo_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
283
tcg_temp_free_ptr(mask);
284
tcg_temp_free_ptr(index);
285
tcg_temp_free(base);
286
- tcg_temp_free_i32(desc);
287
gen_set_label(over);
288
return true;
289
}
290
@@ -XXX,XX +XXX,XX @@ static bool opivx_trans(uint32_t vd, uint32_t rs1, uint32_t vs2, uint32_t vm,
291
data = FIELD_DP32(data, VDATA, MLEN, s->mlen);
292
data = FIELD_DP32(data, VDATA, VM, vm);
293
data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
294
- desc = tcg_const_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
295
+ desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
296
297
tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
298
tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, vs2));
299
@@ -XXX,XX +XXX,XX @@ static bool opivx_trans(uint32_t vd, uint32_t rs1, uint32_t vs2, uint32_t vm,
300
tcg_temp_free_ptr(mask);
301
tcg_temp_free_ptr(src2);
302
tcg_temp_free(src1);
303
- tcg_temp_free_i32(desc);
304
gen_set_label(over);
305
return true;
306
}
307
@@ -XXX,XX +XXX,XX @@ static bool opivi_trans(uint32_t vd, uint32_t imm, uint32_t vs2, uint32_t vm,
308
mask = tcg_temp_new_ptr();
309
src2 = tcg_temp_new_ptr();
310
if (zx) {
311
- src1 = tcg_const_tl(imm);
312
+ src1 = tcg_constant_tl(imm);
313
} else {
314
- src1 = tcg_const_tl(sextract64(imm, 0, 5));
315
+ src1 = tcg_constant_tl(sextract64(imm, 0, 5));
316
}
135
}
317
data = FIELD_DP32(data, VDATA, MLEN, s->mlen);
318
data = FIELD_DP32(data, VDATA, VM, vm);
319
data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
320
- desc = tcg_const_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
321
+ desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
322
323
tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
324
tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, vs2));
325
@@ -XXX,XX +XXX,XX @@ static bool opivi_trans(uint32_t vd, uint32_t imm, uint32_t vs2, uint32_t vm,
326
tcg_temp_free_ptr(dest);
327
tcg_temp_free_ptr(mask);
328
tcg_temp_free_ptr(src2);
329
- tcg_temp_free(src1);
330
- tcg_temp_free_i32(desc);
331
gen_set_label(over);
332
return true;
333
}
334
@@ -XXX,XX +XXX,XX @@ GEN_OPIVI_GVEC_TRANS(vadd_vi, 0, vadd_vx, addi)
335
static void tcg_gen_gvec_rsubi(unsigned vece, uint32_t dofs, uint32_t aofs,
336
int64_t c, uint32_t oprsz, uint32_t maxsz)
337
{
338
- TCGv_i64 tmp = tcg_const_i64(c);
339
+ TCGv_i64 tmp = tcg_constant_i64(c);
340
tcg_gen_gvec_rsubs(vece, dofs, aofs, tmp, oprsz, maxsz);
341
- tcg_temp_free_i64(tmp);
342
}
343
344
GEN_OPIVI_GVEC_TRANS(vrsub_vi, 0, vrsub_vx, rsubi)
345
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_x(DisasContext *s, arg_vmv_v_x *a)
346
tcg_gen_gvec_dup_tl(s->sew, vreg_ofs(s, a->rd),
347
MAXSZ(s), MAXSZ(s), s1);
348
} else {
349
- TCGv_i32 desc ;
350
+ TCGv_i32 desc;
351
TCGv_i64 s1_i64 = tcg_temp_new_i64();
352
TCGv_ptr dest = tcg_temp_new_ptr();
353
uint32_t data = FIELD_DP32(0, VDATA, LMUL, s->lmul);
354
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_x(DisasContext *s, arg_vmv_v_x *a)
355
};
356
357
tcg_gen_ext_tl_i64(s1_i64, s1);
358
- desc = tcg_const_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
359
+ desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
360
tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, a->rd));
361
fns[s->sew](dest, s1_i64, cpu_env, desc);
362
363
tcg_temp_free_ptr(dest);
364
- tcg_temp_free_i32(desc);
365
tcg_temp_free_i64(s1_i64);
366
}
367
368
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_i(DisasContext *s, arg_vmv_v_i *a)
369
TCGLabel *over = gen_new_label();
370
tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
371
372
- s1 = tcg_const_i64(simm);
373
+ s1 = tcg_constant_i64(simm);
374
dest = tcg_temp_new_ptr();
375
- desc = tcg_const_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
376
+ desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
377
tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, a->rd));
378
fns[s->sew](dest, s1, cpu_env, desc);
379
380
tcg_temp_free_ptr(dest);
381
- tcg_temp_free_i32(desc);
382
- tcg_temp_free_i64(s1);
383
gen_set_label(over);
384
}
385
return true;
386
@@ -XXX,XX +XXX,XX @@ static bool opfvf_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
387
dest = tcg_temp_new_ptr();
388
mask = tcg_temp_new_ptr();
389
src2 = tcg_temp_new_ptr();
390
- desc = tcg_const_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
391
+ desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
392
393
tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
394
tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, vs2));
395
@@ -XXX,XX +XXX,XX @@ static bool opfvf_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
396
tcg_temp_free_ptr(dest);
397
tcg_temp_free_ptr(mask);
398
tcg_temp_free_ptr(src2);
399
- tcg_temp_free_i32(desc);
400
gen_set_label(over);
401
return true;
402
}
403
@@ -XXX,XX +XXX,XX @@ static bool trans_vfmv_v_f(DisasContext *s, arg_vfmv_v_f *a)
404
tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
405
406
dest = tcg_temp_new_ptr();
407
- desc = tcg_const_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
408
+ desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
409
tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, a->rd));
410
fns[s->sew - 1](dest, cpu_fpr[a->rs1], cpu_env, desc);
411
412
tcg_temp_free_ptr(dest);
413
- tcg_temp_free_i32(desc);
414
gen_set_label(over);
415
}
416
return true;
417
@@ -XXX,XX +XXX,XX @@ static bool trans_vmpopc_m(DisasContext *s, arg_rmr *a)
418
mask = tcg_temp_new_ptr();
419
src2 = tcg_temp_new_ptr();
420
dst = tcg_temp_new();
421
- desc = tcg_const_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
422
+ desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
423
424
tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, a->rs2));
425
tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
426
@@ -XXX,XX +XXX,XX @@ static bool trans_vmpopc_m(DisasContext *s, arg_rmr *a)
427
tcg_temp_free_ptr(mask);
428
tcg_temp_free_ptr(src2);
429
tcg_temp_free(dst);
430
- tcg_temp_free_i32(desc);
431
return true;
432
}
433
return false;
434
@@ -XXX,XX +XXX,XX @@ static bool trans_vmfirst_m(DisasContext *s, arg_rmr *a)
435
mask = tcg_temp_new_ptr();
436
src2 = tcg_temp_new_ptr();
437
dst = tcg_temp_new();
438
- desc = tcg_const_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
439
+ desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
440
441
tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, a->rs2));
442
tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
443
@@ -XXX,XX +XXX,XX @@ static bool trans_vmfirst_m(DisasContext *s, arg_rmr *a)
444
tcg_temp_free_ptr(mask);
445
tcg_temp_free_ptr(src2);
446
tcg_temp_free(dst);
447
- tcg_temp_free_i32(desc);
448
return true;
449
}
450
return false;
451
@@ -XXX,XX +XXX,XX @@ static void vec_element_loadx(DisasContext *s, TCGv_i64 dest,
452
tcg_temp_free_i32(ofs);
453
454
/* Flush out-of-range indexing to zero. */
455
- t_vlmax = tcg_const_i64(vlmax);
456
- t_zero = tcg_const_i64(0);
457
+ t_vlmax = tcg_constant_i64(vlmax);
458
+ t_zero = tcg_constant_i64(0);
459
tcg_gen_extu_tl_i64(t_idx, idx);
460
461
tcg_gen_movcond_i64(TCG_COND_LTU, dest, t_idx,
462
t_vlmax, dest, t_zero);
463
464
- tcg_temp_free_i64(t_vlmax);
465
- tcg_temp_free_i64(t_zero);
466
tcg_temp_free_i64(t_idx);
467
}
468
136
469
--
137
--
470
2.31.1
138
2.45.1
471
472
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Jason Chien <jason.chien@sifive.com>
2
2
3
Narrow the scope of t0 in trans_jalr.
3
Add support for Zve64x extension. Enabling Zve64f enables Zve64x and
4
enabling Zve64x enables Zve32x, following their dependency chain.
4
5
5
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
6
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2107
6
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
7
Signed-off-by: Jason Chien <jason.chien@sifive.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Frank Chang <frank.chang@sifive.com>
8
Message-id: 20210823195529.560295-15-richard.henderson@linaro.org
9
Reviewed-by: Max Chou <max.chou@sifive.com>
10
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
11
Message-ID: <20240328022343.6871-3-jason.chien@sifive.com>
9
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
12
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
10
---
13
---
11
target/riscv/insn_trans/trans_rvi.c.inc | 25 ++++++++++---------------
14
target/riscv/cpu_cfg.h | 1 +
12
1 file changed, 10 insertions(+), 15 deletions(-)
15
target/riscv/cpu.c | 2 ++
16
target/riscv/tcg/tcg-cpu.c | 17 +++++++++++------
17
3 files changed, 14 insertions(+), 6 deletions(-)
13
18
14
diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
19
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
15
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
16
--- a/target/riscv/insn_trans/trans_rvi.c.inc
21
--- a/target/riscv/cpu_cfg.h
17
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
22
+++ b/target/riscv/cpu_cfg.h
18
@@ -XXX,XX +XXX,XX @@ static bool trans_jal(DisasContext *ctx, arg_jal *a)
23
@@ -XXX,XX +XXX,XX @@ struct RISCVCPUConfig {
19
24
bool ext_zve32x;
20
static bool trans_jalr(DisasContext *ctx, arg_jalr *a)
25
bool ext_zve64f;
21
{
26
bool ext_zve64d;
22
- /* no chaining with JALR */
27
+ bool ext_zve64x;
23
TCGLabel *misaligned = NULL;
28
bool ext_zvbb;
24
- TCGv t0 = tcg_temp_new();
29
bool ext_zvbc;
25
-
30
bool ext_zvkb;
26
31
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
27
- gen_get_gpr(ctx, cpu_pc, a->rs1);
32
index XXXXXXX..XXXXXXX 100644
28
- tcg_gen_addi_tl(cpu_pc, cpu_pc, a->imm);
33
--- a/target/riscv/cpu.c
29
+ tcg_gen_addi_tl(cpu_pc, get_gpr(ctx, a->rs1, EXT_NONE), a->imm);
34
+++ b/target/riscv/cpu.c
30
tcg_gen_andi_tl(cpu_pc, cpu_pc, (target_ulong)-2);
35
@@ -XXX,XX +XXX,XX @@ const RISCVIsaExtData isa_edata_arr[] = {
31
36
ISA_EXT_DATA_ENTRY(zve32x, PRIV_VERSION_1_10_0, ext_zve32x),
32
if (!has_ext(ctx, RVC)) {
37
ISA_EXT_DATA_ENTRY(zve64f, PRIV_VERSION_1_10_0, ext_zve64f),
33
+ TCGv t0 = tcg_temp_new();
38
ISA_EXT_DATA_ENTRY(zve64d, PRIV_VERSION_1_10_0, ext_zve64d),
34
+
39
+ ISA_EXT_DATA_ENTRY(zve64x, PRIV_VERSION_1_10_0, ext_zve64x),
35
misaligned = gen_new_label();
40
ISA_EXT_DATA_ENTRY(zvfbfmin, PRIV_VERSION_1_12_0, ext_zvfbfmin),
36
tcg_gen_andi_tl(t0, cpu_pc, 0x2);
41
ISA_EXT_DATA_ENTRY(zvfbfwma, PRIV_VERSION_1_12_0, ext_zvfbfwma),
37
tcg_gen_brcondi_tl(TCG_COND_NE, t0, 0x0, misaligned);
42
ISA_EXT_DATA_ENTRY(zvfh, PRIV_VERSION_1_12_0, ext_zvfh),
38
+ tcg_temp_free(t0);
43
@@ -XXX,XX +XXX,XX @@ const RISCVCPUMultiExtConfig riscv_cpu_extensions[] = {
44
MULTI_EXT_CFG_BOOL("zve32x", ext_zve32x, false),
45
MULTI_EXT_CFG_BOOL("zve64f", ext_zve64f, false),
46
MULTI_EXT_CFG_BOOL("zve64d", ext_zve64d, false),
47
+ MULTI_EXT_CFG_BOOL("zve64x", ext_zve64x, false),
48
MULTI_EXT_CFG_BOOL("zvfbfmin", ext_zvfbfmin, false),
49
MULTI_EXT_CFG_BOOL("zvfbfwma", ext_zvfbfwma, false),
50
MULTI_EXT_CFG_BOOL("zvfh", ext_zvfh, false),
51
diff --git a/target/riscv/tcg/tcg-cpu.c b/target/riscv/tcg/tcg-cpu.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/target/riscv/tcg/tcg-cpu.c
54
+++ b/target/riscv/tcg/tcg-cpu.c
55
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
56
57
/* The Zve64d extension depends on the Zve64f extension */
58
if (cpu->cfg.ext_zve64d) {
59
+ if (!riscv_has_ext(env, RVD)) {
60
+ error_setg(errp, "Zve64d/V extensions require D extension");
61
+ return;
62
+ }
63
cpu_cfg_ext_auto_update(cpu, CPU_CFG_OFFSET(ext_zve64f), true);
39
}
64
}
40
65
41
if (a->rd != 0) {
66
- /* The Zve64f extension depends on the Zve32f extension */
42
tcg_gen_movi_tl(cpu_gpr[a->rd], ctx->pc_succ_insn);
67
+ /* The Zve64f extension depends on the Zve64x and Zve32f extensions */
68
if (cpu->cfg.ext_zve64f) {
69
+ cpu_cfg_ext_auto_update(cpu, CPU_CFG_OFFSET(ext_zve64x), true);
70
cpu_cfg_ext_auto_update(cpu, CPU_CFG_OFFSET(ext_zve32f), true);
43
}
71
}
44
+
72
45
+ /* No chaining with JALR. */
73
- if (cpu->cfg.ext_zve64d && !riscv_has_ext(env, RVD)) {
46
lookup_and_goto_ptr(ctx);
74
- error_setg(errp, "Zve64d/V extensions require D extension");
47
75
- return;
48
if (misaligned) {
76
+ /* The Zve64x extension depends on the Zve32x extension */
49
@@ -XXX,XX +XXX,XX @@ static bool trans_jalr(DisasContext *ctx, arg_jalr *a)
77
+ if (cpu->cfg.ext_zve64x) {
78
+ cpu_cfg_ext_auto_update(cpu, CPU_CFG_OFFSET(ext_zve32x), true);
50
}
79
}
51
ctx->base.is_jmp = DISAS_NORETURN;
80
52
81
/* The Zve32f extension depends on the Zve32x extension */
53
- tcg_temp_free(t0);
82
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
54
return true;
83
return;
55
}
56
57
static bool gen_branch(DisasContext *ctx, arg_b *a, TCGCond cond)
58
{
59
TCGLabel *l = gen_new_label();
60
- TCGv source1, source2;
61
- source1 = tcg_temp_new();
62
- source2 = tcg_temp_new();
63
- gen_get_gpr(ctx, source1, a->rs1);
64
- gen_get_gpr(ctx, source2, a->rs2);
65
+ TCGv src1 = get_gpr(ctx, a->rs1, EXT_SIGN);
66
+ TCGv src2 = get_gpr(ctx, a->rs2, EXT_SIGN);
67
68
- tcg_gen_brcond_tl(cond, source1, source2, l);
69
+ tcg_gen_brcond_tl(cond, src1, src2, l);
70
gen_goto_tb(ctx, 1, ctx->pc_succ_insn);
71
+
72
gen_set_label(l); /* branch taken */
73
74
if (!has_ext(ctx, RVC) && ((ctx->base.pc_next + a->imm) & 0x3)) {
75
@@ -XXX,XX +XXX,XX @@ static bool gen_branch(DisasContext *ctx, arg_b *a, TCGCond cond)
76
}
84
}
77
ctx->base.is_jmp = DISAS_NORETURN;
85
78
86
- if ((cpu->cfg.ext_zvbc || cpu->cfg.ext_zvknhb) && !cpu->cfg.ext_zve64f) {
79
- tcg_temp_free(source1);
87
+ if ((cpu->cfg.ext_zvbc || cpu->cfg.ext_zvknhb) && !cpu->cfg.ext_zve64x) {
80
- tcg_temp_free(source2);
88
error_setg(
81
-
89
errp,
82
return true;
90
- "Zvbc and Zvknhb extensions require V or Zve64{f,d} extensions");
83
}
91
+ "Zvbc and Zvknhb extensions require V or Zve64x extensions");
92
return;
93
}
84
94
85
--
95
--
86
2.31.1
96
2.45.1
87
88
diff view generated by jsdifflib
From: Jason Chien <jason.chien@sifive.com>

In the current implementation, the gdbstub allows reading vector registers
only if the V extension is supported. However, all vector extensions and
vector crypto extensions have the vector registers and they all depend
on Zve32x. The gdbstub should check for Zve32x instead.

Signed-off-by: Jason Chien <jason.chien@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Max Chou <max.chou@sifive.com>
Message-ID: <20240328022343.6871-4-jason.chien@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/gdbstub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/gdbstub.c b/target/riscv/gdbstub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/gdbstub.c
+++ b/target/riscv/gdbstub.c
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_register_gdb_regs_for_features(CPUState *cs)
                                  gdb_find_static_feature("riscv-32bit-fpu.xml"),
                                  0);
     }
-    if (env->misa_ext & RVV) {
+    if (cpu->cfg.ext_zve32x) {
         gdb_register_coprocessor(cs, riscv_gdb_get_vector,
                                  riscv_gdb_set_vector,
                                  ricsv_gen_dynamic_vector_feature(cs, cs->gdb_num_regs),
--
2.45.1

From: Huang Tao <eric.huang@linux.alibaba.com>

In RVV and vcrypto instructions, the masked and tail elements are set to 1s
using the vext_set_elems_1s function if the vma/vta bit is set. This is the
element agnostic policy.

However, this function can't deal with the big-endian situation. This patch
fixes the problem by adding handling of such a case.

Signed-off-by: Huang Tao <eric.huang@linux.alibaba.com>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Message-ID: <20240325021654.6594-1-eric.huang@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/vector_internals.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/target/riscv/vector_internals.c b/target/riscv/vector_internals.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/vector_internals.c
+++ b/target/riscv/vector_internals.c
@@ -XXX,XX +XXX,XX @@ void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
     if (tot - cnt == 0) {
         return ;
     }
+
+    if (HOST_BIG_ENDIAN) {
+        /*
+         * Deal with the situation when the elements are inside
+         * only one uint64 block, including setting the
+         * masked-off element.
+         */
+        if (((tot - 1) ^ cnt) < 8) {
+            memset(base + H1(tot - 1), -1, tot - cnt);
+            return;
+        }
+        /*
+         * Otherwise, at least cross two uint64_t blocks.
+         * Set first unaligned block.
+         */
+        if (cnt % 8 != 0) {
+            uint32_t j = ROUND_UP(cnt, 8);
+            memset(base + H1(j - 1), -1, j - cnt);
+            cnt = j;
+        }
+        /* Set other 64-bit aligned blocks */
+    }
     memset(base + cnt, -1, tot - cnt);
 }

--
2.45.1

From: Yangyu Chen <cyy@cyyself.name>

This code has a typo that writes zvkb to zvkg, so users can't
enable zvkb through the config. This patch fixes it.

Signed-off-by: Yangyu Chen <cyy@cyyself.name>
Fixes: ea61ef7097d0 ("target/riscv: Move vector crypto extensions to riscv_cpu_extensions")
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Weiwei Li <liwei1518@gmail.com>
Message-ID: <tencent_7E34EEF0F90B9A68BF38BEE09EC6D4877C0A@qq.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ const RISCVCPUMultiExtConfig riscv_cpu_extensions[] = {
     /* Vector cryptography extensions */
     MULTI_EXT_CFG_BOOL("zvbb", ext_zvbb, false),
     MULTI_EXT_CFG_BOOL("zvbc", ext_zvbc, false),
-    MULTI_EXT_CFG_BOOL("zvkb", ext_zvkg, false),
+    MULTI_EXT_CFG_BOOL("zvkb", ext_zvkb, false),
     MULTI_EXT_CFG_BOOL("zvkg", ext_zvkg, false),
     MULTI_EXT_CFG_BOOL("zvkned", ext_zvkned, false),
     MULTI_EXT_CFG_BOOL("zvknha", ext_zvknha, false),
--
2.45.1

From: Huang Tao <eric.huang@linux.alibaba.com>

In this patch, we modify the decoder to be a freely composable data
structure instead of a hardcoded one. It can be dynamically built up
according to the extensions.
This approach has several benefits:
1. It provides support for heterogeneous CPU architectures. As we add the
decoder to RISCVCPU, each CPU can have its own decoder, and the decoders can
differ according to the CPU's features.
2. It improves decoding efficiency. We run the guard_func to see if a decoder
can be added to the dynamic decoder list when building it up. Therefore,
there is no need to run the guard_func when decoding each instruction.
3. For vendor or dynamic CPUs, it allows them to customize their own decoder
functions to improve decoding efficiency, especially as vendor-defined
instruction sets grow. Because the list is built dynamically, the other
decoder guard functions can be skipped when decoding.
4. It is a preparatory patch for allowing a vendor decoder to be added before
decode_insn32() with minimal overhead for users that don't need that
particular vendor decoder.

Signed-off-by: Huang Tao <eric.huang@linux.alibaba.com>
Suggested-by: Christoph Muellner <christoph.muellner@vrull.eu>
Co-authored-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240506023607.29544-1-eric.huang@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.h         |  1 +
 target/riscv/tcg/tcg-cpu.h | 15 +++++++++++++++
 target/riscv/cpu.c         |  1 +
 target/riscv/tcg/tcg-cpu.c | 15 +++++++++++++++
 target/riscv/translate.c   | 31 +++++++++++++++----------------
 5 files changed, 47 insertions(+), 16 deletions(-)

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
     uint32_t pmu_avail_ctrs;
     /* Mapping of events to counters */
     GHashTable *pmu_event_ctr_map;
+    const GPtrArray *decoders;
 };

 /**
diff --git a/target/riscv/tcg/tcg-cpu.h b/target/riscv/tcg/tcg-cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/tcg/tcg-cpu.h
+++ b/target/riscv/tcg/tcg-cpu.h
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp);
 void riscv_tcg_cpu_finalize_features(RISCVCPU *cpu, Error **errp);
 bool riscv_cpu_tcg_compatible(RISCVCPU *cpu);

+struct DisasContext;
+struct RISCVCPUConfig;
+typedef struct RISCVDecoder {
+    bool (*guard_func)(const struct RISCVCPUConfig *);
+    bool (*riscv_cpu_decode_fn)(struct DisasContext *, uint32_t);
+} RISCVDecoder;
+
+typedef bool (*riscv_cpu_decode_fn)(struct DisasContext *, uint32_t);
+
+extern const size_t decoder_table_size;
+
+extern const RISCVDecoder decoder_table[];
+
+void riscv_tcg_cpu_finalize_dynamic_decoder(RISCVCPU *cpu);
+
 #endif
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_finalize_features(RISCVCPU *cpu, Error **errp)
             error_propagate(errp, local_err);
             return;
         }
+        riscv_tcg_cpu_finalize_dynamic_decoder(cpu);
     } else if (kvm_enabled()) {
         riscv_kvm_cpu_finalize_features(cpu, &local_err);
         if (local_err != NULL) {
diff --git a/target/riscv/tcg/tcg-cpu.c b/target/riscv/tcg/tcg-cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/tcg/tcg-cpu.c
+++ b/target/riscv/tcg/tcg-cpu.c
@@ -XXX,XX +XXX,XX @@ void riscv_tcg_cpu_finalize_features(RISCVCPU *cpu, Error **errp)
         }
     }

+void riscv_tcg_cpu_finalize_dynamic_decoder(RISCVCPU *cpu)
+{
+    GPtrArray *dynamic_decoders;
+    dynamic_decoders = g_ptr_array_sized_new(decoder_table_size);
+    for (size_t i = 0; i < decoder_table_size; ++i) {
+        if (decoder_table[i].guard_func &&
+            decoder_table[i].guard_func(&cpu->cfg)) {
+            g_ptr_array_add(dynamic_decoders,
+                            (gpointer)decoder_table[i].riscv_cpu_decode_fn);
+        }
+    }
+
+    cpu->decoders = dynamic_decoders;
+}
+
 bool riscv_cpu_tcg_compatible(RISCVCPU *cpu)
 {
     return object_dynamic_cast(OBJECT(cpu), TYPE_RISCV_CPU_HOST) == NULL;
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/helper-info.c.inc"
 #undef HELPER_H

+#include "tcg/tcg-cpu.h"
+
 /* global register indices */
 static TCGv cpu_gpr[32], cpu_gprh[32], cpu_pc, cpu_vl, cpu_vstart;
 static TCGv_i64 cpu_fpr[32]; /* assume F and D extensions */
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
     /* FRM is known to contain a valid value. */
     bool frm_valid;
     bool insn_start_updated;
+    const GPtrArray *decoders;
 } DisasContext;

 static inline bool has_ext(DisasContext *ctx, uint32_t ext)
@@ -XXX,XX +XXX,XX @@ static inline int insn_len(uint16_t first_word)
     return (first_word & 3) == 3 ? 4 : 2;
 }

+const RISCVDecoder decoder_table[] = {
+    { always_true_p, decode_insn32 },
+    { has_xthead_p, decode_xthead},
+    { has_XVentanaCondOps_p, decode_XVentanaCodeOps},
+};
+
+const size_t decoder_table_size = ARRAY_SIZE(decoder_table);
+
 static void decode_opc(CPURISCVState *env, DisasContext *ctx, uint16_t opcode)
 {
-    /*
-     * A table with predicate (i.e., guard) functions and decoder functions
-     * that are tested in-order until a decoder matches onto the opcode.
-     */
-    static const struct {
-        bool (*guard_func)(const RISCVCPUConfig *);
-        bool (*decode_func)(DisasContext *, uint32_t);
-    } decoders[] = {
-        { always_true_p, decode_insn32 },
-        { has_xthead_p, decode_xthead },
-        { has_XVentanaCondOps_p, decode_XVentanaCodeOps },
-    };
-
     ctx->virt_inst_excp = false;
     ctx->cur_insn_len = insn_len(opcode);
     /* Check for compressed insn */
@@ -XXX,XX +XXX,XX @@ static void decode_opc(CPURISCVState *env, DisasContext *ctx, uint16_t opcode)
                                              ctx->base.pc_next + 2));
         ctx->opcode = opcode32;

-        for (size_t i = 0; i < ARRAY_SIZE(decoders); ++i) {
-            if (decoders[i].guard_func(ctx->cfg_ptr) &&
-                decoders[i].decode_func(ctx, opcode32)) {
+        for (guint i = 0; i < ctx->decoders->len; ++i) {
+            riscv_cpu_decode_fn func = g_ptr_array_index(ctx->decoders, i);
+            if (func(ctx, opcode32)) {
                 return;
             }
         }
@@ -XXX,XX +XXX,XX @@ static void riscv_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
     ctx->itrigger = FIELD_EX32(tb_flags, TB_FLAGS, ITRIGGER);
     ctx->zero = tcg_constant_tl(0);
     ctx->virt_inst_excp = false;
+    ctx->decoders = cpu->decoders;
 }

 static void riscv_tr_tb_start(DisasContextBase *db, CPUState *cpu)
--
2.45.1

From: Christoph Müllner <christoph.muellner@vrull.eu>

The th.sxstatus CSR can be used to identify available custom extensions
on T-Head CPUs. The CSR is documented here:
https://github.com/T-head-Semi/thead-extension-spec/blob/master/xtheadsxstatus.adoc

An important property of this patch is that the th.sxstatus MAEE field
is not set (indicating that XTheadMae is not available).
XTheadMae is a memory attribute extension (similar to Svpbmt) which is
implemented in many T-Head CPUs (C906, C910, etc.) and utilizes bits
in PTEs that are marked as reserved. QEMU maintainers prefer not to
implement XTheadMae, so we need to give kernels a mechanism to identify
whether XTheadMae is available in a system or not. This patch introduces
that mechanism in QEMU in a way that's compatible with real HW
(i.e., probing the th.sxstatus.MAEE bit).

Further context can be found on the list:
https://lists.gnu.org/archive/html/qemu-devel/2024-02/msg00775.html

Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Christoph Müllner <christoph.muellner@vrull.eu>
Message-ID: <20240429073656.2486732-1-christoph.muellner@vrull.eu>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 MAINTAINERS              |  1 +
 target/riscv/cpu.h       |  3 ++
 target/riscv/cpu.c       |  1 +
 target/riscv/th_csr.c    | 79 ++++++++++++++++++++++++++++++++++++++++
 target/riscv/meson.build |  1 +
 5 files changed, 85 insertions(+)
 create mode 100644 target/riscv/th_csr.c

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ L: qemu-riscv@nongnu.org
 S: Supported
 F: target/riscv/insn_trans/trans_xthead.c.inc
 F: target/riscv/xthead*.decode
+F: target/riscv/th_*
 F: disas/riscv-xthead*

 RISC-V XVentanaCondOps extension
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -XXX,XX +XXX,XX @@ target_ulong riscv_new_csr_seed(target_ulong new_value,
 uint8_t satp_mode_max_from_map(uint32_t map);
 const char *satp_mode_str(uint8_t satp_mode, bool is_32_bit);

+/* Implemented in th_csr.c */
+void th_register_custom_csrs(RISCVCPU *cpu);
+
 #endif /* RISCV_CPU_H */
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static void rv64_thead_c906_cpu_init(Object *obj)
     cpu->cfg.mvendorid = THEAD_VENDOR_ID;
 #ifndef CONFIG_USER_ONLY
     set_satp_mode_max_supported(cpu, VM_1_10_SV39);
+    th_register_custom_csrs(cpu);
 #endif

     /* inherited from parent obj via riscv_cpu_init() */
diff --git a/target/riscv/th_csr.c b/target/riscv/th_csr.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/riscv/th_csr.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * T-Head-specific CSRs.
+ *
+ * Copyright (c) 2024 VRULL GmbH
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "cpu_vendorid.h"
+
+#define CSR_TH_SXSTATUS 0x5c0
+
+/* TH_SXSTATUS bits */
+#define TH_SXSTATUS_UCME BIT(16)
+#define TH_SXSTATUS_MAEE BIT(21)
+#define TH_SXSTATUS_THEADISAEE BIT(22)
+
+typedef struct {
+    int csrno;
+    int (*insertion_test)(RISCVCPU *cpu);
+    riscv_csr_operations csr_ops;
+} riscv_csr;
+
+static RISCVException smode(CPURISCVState *env, int csrno)
49
+
50
+int main (void)
51
+{
112
+{
52
+ int i;
113
+ if (riscv_has_ext(env, RVS)) {
53
+
114
+ return RISCV_EXCP_NONE;
54
+ for (i = 0; i < ARRAY_SIZE(test_s); i++) {
55
+ long q, r;
56
+
57
+ asm("div %0, %2, %3\n\t"
58
+ "rem %1, %2, %3"
59
+ : "=&r" (q), "=r" (r)
60
+ : "r" (test_s[i].x), "r" (test_s[i].y));
61
+
62
+ assert(q == test_s[i].q);
63
+ assert(r == test_s[i].r);
64
+ }
115
+ }
65
+
116
+
66
+ for (i = 0; i < ARRAY_SIZE(test_u); i++) {
117
+ return RISCV_EXCP_ILLEGAL_INST;
67
+ unsigned long q, r;
118
+}
68
+
119
+
69
+ asm("divu %0, %2, %3\n\t"
120
+static int test_thead_mvendorid(RISCVCPU *cpu)
70
+ "remu %1, %2, %3"
121
+{
71
+ : "=&r" (q), "=r" (r)
122
+ if (cpu->cfg.mvendorid != THEAD_VENDOR_ID) {
72
+ : "r" (test_u[i].x), "r" (test_u[i].y));
123
+ return -1;
73
+
74
+ assert(q == test_u[i].q);
75
+ assert(r == test_u[i].r);
76
+ }
124
+ }
77
+
125
+
78
+ return 0;
126
+ return 0;
79
+}
127
+}
80
diff --git a/tests/tcg/riscv64/Makefile.target b/tests/tcg/riscv64/Makefile.target
81
new file mode 100644
82
index XXXXXXX..XXXXXXX
83
--- /dev/null
84
+++ b/tests/tcg/riscv64/Makefile.target
85
@@ -XXX,XX +XXX,XX @@
86
+# -*- Mode: makefile -*-
87
+# RISC-V specific tweaks
88
+
128
+
89
+VPATH += $(SRC_PATH)/tests/tcg/riscv64
129
+static RISCVException read_th_sxstatus(CPURISCVState *env, int csrno,
90
+TESTS += test-div
130
+ target_ulong *val)
131
+{
132
+ /* We don't set MAEE here, because QEMU does not implement MAEE. */
133
+ *val = TH_SXSTATUS_UCME | TH_SXSTATUS_THEADISAEE;
134
+ return RISCV_EXCP_NONE;
135
+}
136
+
137
+static riscv_csr th_csr_list[] = {
138
+ {
139
+ .csrno = CSR_TH_SXSTATUS,
140
+ .insertion_test = test_thead_mvendorid,
141
+ .csr_ops = { "th.sxstatus", smode, read_th_sxstatus }
142
+ }
143
+};
144
+
145
+void th_register_custom_csrs(RISCVCPU *cpu)
146
+{
147
+ for (size_t i = 0; i < ARRAY_SIZE(th_csr_list); i++) {
148
+ int csrno = th_csr_list[i].csrno;
149
+ riscv_csr_operations *csr_ops = &th_csr_list[i].csr_ops;
150
+ if (!th_csr_list[i].insertion_test(cpu)) {
151
+ riscv_set_csr_ops(csrno, csr_ops);
152
+ }
153
+ }
154
+}
155
diff --git a/target/riscv/meson.build b/target/riscv/meson.build
156
index XXXXXXX..XXXXXXX 100644
157
--- a/target/riscv/meson.build
158
+++ b/target/riscv/meson.build
159
@@ -XXX,XX +XXX,XX @@ riscv_system_ss.add(files(
160
'monitor.c',
161
'machine.c',
162
'pmu.c',
163
+ 'th_csr.c',
164
'time_helper.c',
165
'riscv-qmp-cmds.c',
166
))
91
--
167
--
92
2.31.1
168
2.45.1
93
169
94
170
diff view generated by jsdifflib
From: Max Chou <max.chou@sifive.com>

According v spec 18.4, only the vfwcvt.f.f.v and vfncvt.f.f.w
instructions will be affected by Zvfhmin extension.
And the vfwcvt.f.f.v and vfncvt.f.f.w instructions only support the
conversions of

* From 1*SEW(16/32) to 2*SEW(32/64)
* From 2*SEW(32/64) to 1*SEW(16/32)

Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Message-ID: <20240322092600.1198921-2-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvv.c.inc | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -XXX,XX +XXX,XX @@ static bool require_rvf(DisasContext *s)
     }
 }

+static bool require_rvfmin(DisasContext *s)
+{
+    if (s->mstatus_fs == EXT_STATUS_DISABLED) {
+        return false;
+    }
+
+    switch (s->sew) {
+    case MO_16:
+        return s->cfg_ptr->ext_zvfhmin;
+    case MO_32:
+        return s->cfg_ptr->ext_zve32f;
+    default:
+        return false;
+    }
+}
+
 static bool require_scale_rvf(DisasContext *s)
 {
     if (s->mstatus_fs == EXT_STATUS_DISABLED) {
@@ -XXX,XX +XXX,XX @@ static bool require_scale_rvfmin(DisasContext *s)
     }

     switch (s->sew) {
-    case MO_8:
-        return s->cfg_ptr->ext_zvfhmin;
     case MO_16:
         return s->cfg_ptr->ext_zve32f;
     case MO_32:
@@ -XXX,XX +XXX,XX @@ static bool opxfv_widen_check(DisasContext *s, arg_rmr *a)
 static bool opffv_widen_check(DisasContext *s, arg_rmr *a)
 {
     return opfv_widen_check(s, a) &&
+           require_rvfmin(s) &&
            require_scale_rvfmin(s) &&
            (s->sew != MO_8);
 }
@@ -XXX,XX +XXX,XX @@ static bool opfxv_narrow_check(DisasContext *s, arg_rmr *a)
 static bool opffv_narrow_check(DisasContext *s, arg_rmr *a)
 {
     return opfv_narrow_check(s, a) &&
+           require_rvfmin(s) &&
            require_scale_rvfmin(s) &&
            (s->sew != MO_8);
 }
--
2.45.1

From: Max Chou <max.chou@sifive.com>

The require_scale_rvf function only checks the double width operator for
the vector floating point widen instructions, so most of the widen
checking functions need to add require_rvf for single width operator.

The vfwcvt.f.x.v and vfwcvt.f.xu.v instructions convert single width
integer to double width float, so the opfxv_widen_check function doesn’t
need require_rvf for the single width operator(integer).

Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Message-ID: <20240322092600.1198921-3-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvv.c.inc | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -XXX,XX +XXX,XX @@ GEN_OPFVF_TRANS(vfrsub_vf, opfvf_check)
 static bool opfvv_widen_check(DisasContext *s, arg_rmrr *a)
 {
     return require_rvv(s) &&
+           require_rvf(s) &&
            require_scale_rvf(s) &&
            (s->sew != MO_8) &&
            vext_check_isa_ill(s) &&
@@ -XXX,XX +XXX,XX @@ GEN_OPFVV_WIDEN_TRANS(vfwsub_vv, opfvv_widen_check)
 static bool opfvf_widen_check(DisasContext *s, arg_rmrr *a)
 {
     return require_rvv(s) &&
+           require_rvf(s) &&
            require_scale_rvf(s) &&
            (s->sew != MO_8) &&
            vext_check_isa_ill(s) &&
@@ -XXX,XX +XXX,XX @@ GEN_OPFVF_WIDEN_TRANS(vfwsub_vf)
 static bool opfwv_widen_check(DisasContext *s, arg_rmrr *a)
 {
     return require_rvv(s) &&
+           require_rvf(s) &&
            require_scale_rvf(s) &&
            (s->sew != MO_8) &&
            vext_check_isa_ill(s) &&
@@ -XXX,XX +XXX,XX @@ GEN_OPFWV_WIDEN_TRANS(vfwsub_wv)
 static bool opfwf_widen_check(DisasContext *s, arg_rmrr *a)
 {
     return require_rvv(s) &&
+           require_rvf(s) &&
            require_scale_rvf(s) &&
            (s->sew != MO_8) &&
            vext_check_isa_ill(s) &&
@@ -XXX,XX +XXX,XX @@ GEN_OPFVV_TRANS(vfredmin_vs, freduction_check)
 static bool freduction_widen_check(DisasContext *s, arg_rmrr *a)
 {
     return reduction_widen_check(s, a) &&
+           require_rvf(s) &&
            require_scale_rvf(s) &&
            (s->sew != MO_8);
 }
--
2.45.1

From: Richard Henderson <richard.henderson@linaro.org>

We will require the context to handle RV64 word operations.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-5-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/translate.c                | 58 ++++++++++++-------------
 target/riscv/insn_trans/trans_rva.c.inc | 18 ++++----
 target/riscv/insn_trans/trans_rvb.c.inc |  4 +-
 target/riscv/insn_trans/trans_rvd.c.inc | 32 +++++++-------
 target/riscv/insn_trans/trans_rvf.c.inc | 32 +++++++-------
 target/riscv/insn_trans/trans_rvh.c.inc | 52 +++++++++++-----------
 target/riscv/insn_trans/trans_rvi.c.inc | 44 +++++++++----------
 target/riscv/insn_trans/trans_rvm.c.inc | 12 ++---
 target/riscv/insn_trans/trans_rvv.c.inc | 36 +++++++--------
 9 files changed, 144 insertions(+), 144 deletions(-)

From: Max Chou <max.chou@sifive.com>

The opfv_narrow_check needs to check the single width float operator by
require_rvf.

Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Message-ID: <20240322092600.1198921-4-max.chou@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvv.c.inc | 1 +
 1 file changed, 1 insertion(+)
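As a rough model of the interface change described above (an interpreter-style sketch, not QEMU's TCG API; the `xlen` field is an assumed placeholder for the per-translation state that later RV64 word-operation patches consult): the GPR accessors gain a DisasContext parameter while keeping x0's read-as-zero, discard-writes behaviour.

```c
/* Hypothetical, simplified model of the gen_{get,set}_gpr refactor. */
typedef struct DisasContext {
    int xlen;             /* example per-translation state (assumed field) */
} DisasContext;

static long gpr[32];      /* x0..x31; x0 has no backing storage in QEMU */

/* Read a GPR; x0 always reads as constant zero. */
static long get_gpr(DisasContext *ctx, int reg_num)
{
    (void)ctx;            /* unused today; needed by RV64 word ops later */
    return reg_num == 0 ? 0 : gpr[reg_num];
}

/* Write a GPR; writes to x0 are discarded. */
static void set_gpr(DisasContext *ctx, int reg_num, long val)
{
    (void)ctx;
    if (reg_num != 0) {
        gpr[reg_num] = val;
    }
}
```

The patch itself only threads the context through; the behaviour of every call site is unchanged.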
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
/* Wrapper for getting reg values - need to check of reg is zero since
 * cpu_gpr[0] is not actually allocated
 */
-static inline void gen_get_gpr(TCGv t, int reg_num)
+static void gen_get_gpr(DisasContext *ctx, TCGv t, int reg_num)
{
    if (reg_num == 0) {
        tcg_gen_movi_tl(t, 0);
@@ -XXX,XX +XXX,XX @@ static inline void gen_get_gpr(TCGv t, int reg_num)
 * since we usually avoid calling the OP_TYPE_gen function if we see a write to
 * $zero
 */
-static inline void gen_set_gpr(int reg_num_dst, TCGv t)
+static void gen_set_gpr(DisasContext *ctx, int reg_num_dst, TCGv t)
{
    if (reg_num_dst != 0) {
        tcg_gen_mov_tl(cpu_gpr[reg_num_dst], t);
@@ -XXX,XX +XXX,XX @@ static bool gen_arith_imm_fn(DisasContext *ctx, arg_i *a,
    TCGv source1;
    source1 = tcg_temp_new();

-   gen_get_gpr(source1, a->rs1);
+   gen_get_gpr(ctx, source1, a->rs1);

    (*func)(source1, source1, a->imm);

-   gen_set_gpr(a->rd, source1);
+   gen_set_gpr(ctx, a->rd, source1);
    tcg_temp_free(source1);
    return true;
}
@@ -XXX,XX +XXX,XX @@ static bool gen_arith_imm_tl(DisasContext *ctx, arg_i *a,
    source1 = tcg_temp_new();
    source2 = tcg_temp_new();

-   gen_get_gpr(source1, a->rs1);
+   gen_get_gpr(ctx, source1, a->rs1);
    tcg_gen_movi_tl(source2, a->imm);

    (*func)(source1, source1, source2);

-   gen_set_gpr(a->rd, source1);
+   gen_set_gpr(ctx, a->rd, source1);
    tcg_temp_free(source1);
    tcg_temp_free(source2);
    return true;
@@ -XXX,XX +XXX,XX @@ static bool gen_arith_div_w(DisasContext *ctx, arg_r *a,
    source1 = tcg_temp_new();
    source2 = tcg_temp_new();

-   gen_get_gpr(source1, a->rs1);
-   gen_get_gpr(source2, a->rs2);
+   gen_get_gpr(ctx, source1, a->rs1);
+   gen_get_gpr(ctx, source2, a->rs2);
    tcg_gen_ext32s_tl(source1, source1);
    tcg_gen_ext32s_tl(source2, source2);

    (*func)(source1, source1, source2);

    tcg_gen_ext32s_tl(source1, source1);
-   gen_set_gpr(a->rd, source1);
+   gen_set_gpr(ctx, a->rd, source1);
    tcg_temp_free(source1);
    tcg_temp_free(source2);
    return true;
@@ -XXX,XX +XXX,XX @@ static bool gen_arith_div_uw(DisasContext *ctx, arg_r *a,
    source1 = tcg_temp_new();
    source2 = tcg_temp_new();

-   gen_get_gpr(source1, a->rs1);
-   gen_get_gpr(source2, a->rs2);
+   gen_get_gpr(ctx, source1, a->rs1);
+   gen_get_gpr(ctx, source2, a->rs2);
    tcg_gen_ext32u_tl(source1, source1);
    tcg_gen_ext32u_tl(source2, source2);

    (*func)(source1, source1, source2);

    tcg_gen_ext32s_tl(source1, source1);
-   gen_set_gpr(a->rd, source1);
+   gen_set_gpr(ctx, a->rd, source1);
    tcg_temp_free(source1);
    tcg_temp_free(source2);
    return true;
@@ -XXX,XX +XXX,XX @@ static bool gen_grevi(DisasContext *ctx, arg_grevi *a)
    TCGv source1 = tcg_temp_new();
    TCGv source2;

-   gen_get_gpr(source1, a->rs1);
+   gen_get_gpr(ctx, source1, a->rs1);

    if (a->shamt == (TARGET_LONG_BITS - 8)) {
        /* rev8, byte swaps */
@@ -XXX,XX +XXX,XX @@ static bool gen_grevi(DisasContext *ctx, arg_grevi *a)
        tcg_temp_free(source2);
    }

-   gen_set_gpr(a->rd, source1);
+   gen_set_gpr(ctx, a->rd, source1);
    tcg_temp_free(source1);
    return true;
}
@@ -XXX,XX +XXX,XX @@ static bool gen_arith(DisasContext *ctx, arg_r *a,
    source1 = tcg_temp_new();
    source2 = tcg_temp_new();

-   gen_get_gpr(source1, a->rs1);
-   gen_get_gpr(source2, a->rs2);
+   gen_get_gpr(ctx, source1, a->rs1);
+   gen_get_gpr(ctx, source2, a->rs2);

    (*func)(source1, source1, source2);

-   gen_set_gpr(a->rd, source1);
+   gen_set_gpr(ctx, a->rd, source1);
    tcg_temp_free(source1);
    tcg_temp_free(source2);
    return true;
@@ -XXX,XX +XXX,XX @@ static bool gen_shift(DisasContext *ctx, arg_r *a,
    TCGv source1 = tcg_temp_new();
    TCGv source2 = tcg_temp_new();

-   gen_get_gpr(source1, a->rs1);
-   gen_get_gpr(source2, a->rs2);
+   gen_get_gpr(ctx, source1, a->rs1);
+   gen_get_gpr(ctx, source2, a->rs2);

    tcg_gen_andi_tl(source2, source2, TARGET_LONG_BITS - 1);
    (*func)(source1, source1, source2);

-   gen_set_gpr(a->rd, source1);
+   gen_set_gpr(ctx, a->rd, source1);
    tcg_temp_free(source1);
    tcg_temp_free(source2);
    return true;
@@ -XXX,XX +XXX,XX @@ static bool gen_shifti(DisasContext *ctx, arg_shift *a,
    TCGv source1 = tcg_temp_new();
    TCGv source2 = tcg_temp_new();

-   gen_get_gpr(source1, a->rs1);
+   gen_get_gpr(ctx, source1, a->rs1);

    tcg_gen_movi_tl(source2, a->shamt);
    (*func)(source1, source1, source2);

-   gen_set_gpr(a->rd, source1);
+   gen_set_gpr(ctx, a->rd, source1);
    tcg_temp_free(source1);
    tcg_temp_free(source2);
    return true;
@@ -XXX,XX +XXX,XX @@ static bool gen_shiftw(DisasContext *ctx, arg_r *a,
    TCGv source1 = tcg_temp_new();
    TCGv source2 = tcg_temp_new();

-   gen_get_gpr(source1, a->rs1);
-   gen_get_gpr(source2, a->rs2);
+   gen_get_gpr(ctx, source1, a->rs1);
+   gen_get_gpr(ctx, source2, a->rs2);

    tcg_gen_andi_tl(source2, source2, 31);
    (*func)(source1, source1, source2);
    tcg_gen_ext32s_tl(source1, source1);

-   gen_set_gpr(a->rd, source1);
+   gen_set_gpr(ctx, a->rd, source1);
    tcg_temp_free(source1);
    tcg_temp_free(source2);
    return true;
@@ -XXX,XX +XXX,XX @@ static bool gen_shiftiw(DisasContext *ctx, arg_shift *a,
    TCGv source1 = tcg_temp_new();
    TCGv source2 = tcg_temp_new();

-   gen_get_gpr(source1, a->rs1);
+   gen_get_gpr(ctx, source1, a->rs1);
    tcg_gen_movi_tl(source2, a->shamt);

    (*func)(source1, source1, source2);
    tcg_gen_ext32s_tl(source1, source1);

-   gen_set_gpr(a->rd, source1);
+   gen_set_gpr(ctx, a->rd, source1);
    tcg_temp_free(source1);
    tcg_temp_free(source2);
    return true;
@@ -XXX,XX +XXX,XX @@ static bool gen_unary(DisasContext *ctx, arg_r2 *a,
{
    TCGv source = tcg_temp_new();

-   gen_get_gpr(source, a->rs1);
+   gen_get_gpr(ctx, source, a->rs1);

    (*func)(source, source);

-   gen_set_gpr(a->rd, source);
+   gen_set_gpr(ctx, a->rd, source);
    tcg_temp_free(source);
    return true;
}
diff --git a/target/riscv/insn_trans/trans_rva.c.inc b/target/riscv/insn_trans/trans_rva.c.inc
227
index XXXXXXX..XXXXXXX 100644
228
--- a/target/riscv/insn_trans/trans_rva.c.inc
229
+++ b/target/riscv/insn_trans/trans_rva.c.inc
230
@@ -XXX,XX +XXX,XX @@ static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, MemOp mop)
231
{
232
TCGv src1 = tcg_temp_new();
233
/* Put addr in load_res, data in load_val. */
234
- gen_get_gpr(src1, a->rs1);
235
+ gen_get_gpr(ctx, src1, a->rs1);
236
if (a->rl) {
237
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
238
}
239
@@ -XXX,XX +XXX,XX @@ static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, MemOp mop)
240
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
241
}
242
tcg_gen_mov_tl(load_res, src1);
243
- gen_set_gpr(a->rd, load_val);
244
+ gen_set_gpr(ctx, a->rd, load_val);
245
246
tcg_temp_free(src1);
247
return true;
248
@@ -XXX,XX +XXX,XX @@ static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, MemOp mop)
249
TCGLabel *l1 = gen_new_label();
250
TCGLabel *l2 = gen_new_label();
251
252
- gen_get_gpr(src1, a->rs1);
253
+ gen_get_gpr(ctx, src1, a->rs1);
254
tcg_gen_brcond_tl(TCG_COND_NE, load_res, src1, l1);
255
256
- gen_get_gpr(src2, a->rs2);
257
+ gen_get_gpr(ctx, src2, a->rs2);
258
/*
259
* Note that the TCG atomic primitives are SC,
260
* so we can ignore AQ/RL along this path.
261
@@ -XXX,XX +XXX,XX @@ static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, MemOp mop)
262
tcg_gen_atomic_cmpxchg_tl(src1, load_res, load_val, src2,
263
ctx->mem_idx, mop);
264
tcg_gen_setcond_tl(TCG_COND_NE, dat, src1, load_val);
265
- gen_set_gpr(a->rd, dat);
266
+ gen_set_gpr(ctx, a->rd, dat);
267
tcg_gen_br(l2);
268
269
gen_set_label(l1);
270
@@ -XXX,XX +XXX,XX @@ static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, MemOp mop)
271
*/
272
tcg_gen_mb(TCG_MO_ALL + a->aq * TCG_BAR_LDAQ + a->rl * TCG_BAR_STRL);
273
tcg_gen_movi_tl(dat, 1);
274
- gen_set_gpr(a->rd, dat);
275
+ gen_set_gpr(ctx, a->rd, dat);
276
277
gen_set_label(l2);
278
/*
279
@@ -XXX,XX +XXX,XX @@ static bool gen_amo(DisasContext *ctx, arg_atomic *a,
280
TCGv src1 = tcg_temp_new();
281
TCGv src2 = tcg_temp_new();
282
283
- gen_get_gpr(src1, a->rs1);
284
- gen_get_gpr(src2, a->rs2);
285
+ gen_get_gpr(ctx, src1, a->rs1);
286
+ gen_get_gpr(ctx, src2, a->rs2);
287
288
(*func)(src2, src1, src2, ctx->mem_idx, mop);
289
290
- gen_set_gpr(a->rd, src2);
291
+ gen_set_gpr(ctx, a->rd, src2);
292
tcg_temp_free(src1);
293
tcg_temp_free(src2);
294
return true;
295
diff --git a/target/riscv/insn_trans/trans_rvb.c.inc b/target/riscv/insn_trans/trans_rvb.c.inc
296
index XXXXXXX..XXXXXXX 100644
297
--- a/target/riscv/insn_trans/trans_rvb.c.inc
298
+++ b/target/riscv/insn_trans/trans_rvb.c.inc
299
@@ -XXX,XX +XXX,XX @@ static bool trans_slli_uw(DisasContext *ctx, arg_slli_uw *a)
300
REQUIRE_EXT(ctx, RVB);
301
302
TCGv source1 = tcg_temp_new();
303
- gen_get_gpr(source1, a->rs1);
304
+ gen_get_gpr(ctx, source1, a->rs1);
305
306
if (a->shamt < 32) {
307
tcg_gen_deposit_z_tl(source1, source1, a->shamt, 32);
308
@@ -XXX,XX +XXX,XX @@ static bool trans_slli_uw(DisasContext *ctx, arg_slli_uw *a)
309
tcg_gen_shli_tl(source1, source1, a->shamt);
310
}
311
312
- gen_set_gpr(a->rd, source1);
313
+ gen_set_gpr(ctx, a->rd, source1);
314
tcg_temp_free(source1);
315
return true;
316
}
diff --git a/target/riscv/insn_trans/trans_rvd.c.inc b/target/riscv/insn_trans/trans_rvd.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvd.c.inc
+++ b/target/riscv/insn_trans/trans_rvd.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_fld(DisasContext *ctx, arg_fld *a)
    REQUIRE_FPU;
    REQUIRE_EXT(ctx, RVD);
    TCGv t0 = tcg_temp_new();
-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);
    tcg_gen_addi_tl(t0, t0, a->imm);

    tcg_gen_qemu_ld_i64(cpu_fpr[a->rd], t0, ctx->mem_idx, MO_TEQ);
@@ -XXX,XX +XXX,XX @@ static bool trans_fsd(DisasContext *ctx, arg_fsd *a)
    REQUIRE_FPU;
    REQUIRE_EXT(ctx, RVD);
    TCGv t0 = tcg_temp_new();
-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);
    tcg_gen_addi_tl(t0, t0, a->imm);

    tcg_gen_qemu_st_i64(cpu_fpr[a->rs2], t0, ctx->mem_idx, MO_TEQ);
@@ -XXX,XX +XXX,XX @@ static bool trans_feq_d(DisasContext *ctx, arg_feq_d *a)

    TCGv t0 = tcg_temp_new();
    gen_helper_feq_d(t0, cpu_env, cpu_fpr[a->rs1], cpu_fpr[a->rs2]);
-   gen_set_gpr(a->rd, t0);
+   gen_set_gpr(ctx, a->rd, t0);
    tcg_temp_free(t0);

    return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_flt_d(DisasContext *ctx, arg_flt_d *a)

    TCGv t0 = tcg_temp_new();
    gen_helper_flt_d(t0, cpu_env, cpu_fpr[a->rs1], cpu_fpr[a->rs2]);
-   gen_set_gpr(a->rd, t0);
+   gen_set_gpr(ctx, a->rd, t0);
    tcg_temp_free(t0);

    return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_fle_d(DisasContext *ctx, arg_fle_d *a)

    TCGv t0 = tcg_temp_new();
    gen_helper_fle_d(t0, cpu_env, cpu_fpr[a->rs1], cpu_fpr[a->rs2]);
-   gen_set_gpr(a->rd, t0);
+   gen_set_gpr(ctx, a->rd, t0);
    tcg_temp_free(t0);

    return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_fclass_d(DisasContext *ctx, arg_fclass_d *a)

    TCGv t0 = tcg_temp_new();
    gen_helper_fclass_d(t0, cpu_fpr[a->rs1]);
-   gen_set_gpr(a->rd, t0);
+   gen_set_gpr(ctx, a->rd, t0);
    tcg_temp_free(t0);
    return true;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_fcvt_w_d(DisasContext *ctx, arg_fcvt_w_d *a)
    TCGv t0 = tcg_temp_new();
    gen_set_rm(ctx, a->rm);
    gen_helper_fcvt_w_d(t0, cpu_env, cpu_fpr[a->rs1]);
-   gen_set_gpr(a->rd, t0);
+   gen_set_gpr(ctx, a->rd, t0);
    tcg_temp_free(t0);

    return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_fcvt_wu_d(DisasContext *ctx, arg_fcvt_wu_d *a)
    TCGv t0 = tcg_temp_new();
    gen_set_rm(ctx, a->rm);
    gen_helper_fcvt_wu_d(t0, cpu_env, cpu_fpr[a->rs1]);
-   gen_set_gpr(a->rd, t0);
+   gen_set_gpr(ctx, a->rd, t0);
    tcg_temp_free(t0);

    return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_fcvt_d_w(DisasContext *ctx, arg_fcvt_d_w *a)
    REQUIRE_EXT(ctx, RVD);

    TCGv t0 = tcg_temp_new();
-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    gen_set_rm(ctx, a->rm);
    gen_helper_fcvt_d_w(cpu_fpr[a->rd], cpu_env, t0);
@@ -XXX,XX +XXX,XX @@ static bool trans_fcvt_d_wu(DisasContext *ctx, arg_fcvt_d_wu *a)
    REQUIRE_EXT(ctx, RVD);

    TCGv t0 = tcg_temp_new();
-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    gen_set_rm(ctx, a->rm);
    gen_helper_fcvt_d_wu(cpu_fpr[a->rd], cpu_env, t0);
@@ -XXX,XX +XXX,XX @@ static bool trans_fcvt_l_d(DisasContext *ctx, arg_fcvt_l_d *a)
    TCGv t0 = tcg_temp_new();
    gen_set_rm(ctx, a->rm);
    gen_helper_fcvt_l_d(t0, cpu_env, cpu_fpr[a->rs1]);
-   gen_set_gpr(a->rd, t0);
+   gen_set_gpr(ctx, a->rd, t0);
    tcg_temp_free(t0);
    return true;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_fcvt_lu_d(DisasContext *ctx, arg_fcvt_lu_d *a)
    TCGv t0 = tcg_temp_new();
    gen_set_rm(ctx, a->rm);
    gen_helper_fcvt_lu_d(t0, cpu_env, cpu_fpr[a->rs1]);
-   gen_set_gpr(a->rd, t0);
+   gen_set_gpr(ctx, a->rd, t0);
    tcg_temp_free(t0);
    return true;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_fmv_x_d(DisasContext *ctx, arg_fmv_x_d *a)
    REQUIRE_EXT(ctx, RVD);

#ifdef TARGET_RISCV64
-   gen_set_gpr(a->rd, cpu_fpr[a->rs1]);
+   gen_set_gpr(ctx, a->rd, cpu_fpr[a->rs1]);
    return true;
#else
    qemu_build_not_reached();
@@ -XXX,XX +XXX,XX @@ static bool trans_fcvt_d_l(DisasContext *ctx, arg_fcvt_d_l *a)
    REQUIRE_EXT(ctx, RVD);

    TCGv t0 = tcg_temp_new();
-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    gen_set_rm(ctx, a->rm);
    gen_helper_fcvt_d_l(cpu_fpr[a->rd], cpu_env, t0);
@@ -XXX,XX +XXX,XX @@ static bool trans_fcvt_d_lu(DisasContext *ctx, arg_fcvt_d_lu *a)
    REQUIRE_EXT(ctx, RVD);

    TCGv t0 = tcg_temp_new();
-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    gen_set_rm(ctx, a->rm);
    gen_helper_fcvt_d_lu(cpu_fpr[a->rd], cpu_env, t0);
@@ -XXX,XX +XXX,XX @@ static bool trans_fmv_d_x(DisasContext *ctx, arg_fmv_d_x *a)

#ifdef TARGET_RISCV64
    TCGv t0 = tcg_temp_new();
-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    tcg_gen_mov_tl(cpu_fpr[a->rd], t0);
    tcg_temp_free(t0);
diff --git a/target/riscv/insn_trans/trans_rvf.c.inc b/target/riscv/insn_trans/trans_rvf.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvf.c.inc
+++ b/target/riscv/insn_trans/trans_rvf.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_flw(DisasContext *ctx, arg_flw *a)
    REQUIRE_FPU;
    REQUIRE_EXT(ctx, RVF);
    TCGv t0 = tcg_temp_new();
-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);
    tcg_gen_addi_tl(t0, t0, a->imm);

    tcg_gen_qemu_ld_i64(cpu_fpr[a->rd], t0, ctx->mem_idx, MO_TEUL);
@@ -XXX,XX +XXX,XX @@ static bool trans_fsw(DisasContext *ctx, arg_fsw *a)
    REQUIRE_FPU;
    REQUIRE_EXT(ctx, RVF);
    TCGv t0 = tcg_temp_new();
-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    tcg_gen_addi_tl(t0, t0, a->imm);

@@ -XXX,XX +XXX,XX @@ static bool trans_fcvt_w_s(DisasContext *ctx, arg_fcvt_w_s *a)
    TCGv t0 = tcg_temp_new();
    gen_set_rm(ctx, a->rm);
    gen_helper_fcvt_w_s(t0, cpu_env, cpu_fpr[a->rs1]);
-   gen_set_gpr(a->rd, t0);
+   gen_set_gpr(ctx, a->rd, t0);
    tcg_temp_free(t0);

    return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_fcvt_wu_s(DisasContext *ctx, arg_fcvt_wu_s *a)
    TCGv t0 = tcg_temp_new();
    gen_set_rm(ctx, a->rm);
    gen_helper_fcvt_wu_s(t0, cpu_env, cpu_fpr[a->rs1]);
-   gen_set_gpr(a->rd, t0);
+   gen_set_gpr(ctx, a->rd, t0);
    tcg_temp_free(t0);

    return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_fmv_x_w(DisasContext *ctx, arg_fmv_x_w *a)
    tcg_gen_extrl_i64_i32(t0, cpu_fpr[a->rs1]);
#endif

-   gen_set_gpr(a->rd, t0);
+   gen_set_gpr(ctx, a->rd, t0);
    tcg_temp_free(t0);

    return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_feq_s(DisasContext *ctx, arg_feq_s *a)
    REQUIRE_EXT(ctx, RVF);
    TCGv t0 = tcg_temp_new();
    gen_helper_feq_s(t0, cpu_env, cpu_fpr[a->rs1], cpu_fpr[a->rs2]);
-   gen_set_gpr(a->rd, t0);
+   gen_set_gpr(ctx, a->rd, t0);
    tcg_temp_free(t0);
    return true;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_flt_s(DisasContext *ctx, arg_flt_s *a)
    REQUIRE_EXT(ctx, RVF);
    TCGv t0 = tcg_temp_new();
    gen_helper_flt_s(t0, cpu_env, cpu_fpr[a->rs1], cpu_fpr[a->rs2]);
-   gen_set_gpr(a->rd, t0);
+   gen_set_gpr(ctx, a->rd, t0);
    tcg_temp_free(t0);
    return true;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_fle_s(DisasContext *ctx, arg_fle_s *a)
    REQUIRE_EXT(ctx, RVF);
    TCGv t0 = tcg_temp_new();
    gen_helper_fle_s(t0, cpu_env, cpu_fpr[a->rs1], cpu_fpr[a->rs2]);
-   gen_set_gpr(a->rd, t0);
+   gen_set_gpr(ctx, a->rd, t0);
    tcg_temp_free(t0);
    return true;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_fclass_s(DisasContext *ctx, arg_fclass_s *a)

    gen_helper_fclass_s(t0, cpu_fpr[a->rs1]);

-   gen_set_gpr(a->rd, t0);
+   gen_set_gpr(ctx, a->rd, t0);
    tcg_temp_free(t0);

    return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_fcvt_s_w(DisasContext *ctx, arg_fcvt_s_w *a)
    REQUIRE_EXT(ctx, RVF);

    TCGv t0 = tcg_temp_new();
-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    gen_set_rm(ctx, a->rm);
    gen_helper_fcvt_s_w(cpu_fpr[a->rd], cpu_env, t0);
@@ -XXX,XX +XXX,XX @@ static bool trans_fcvt_s_wu(DisasContext *ctx, arg_fcvt_s_wu *a)
    REQUIRE_EXT(ctx, RVF);

    TCGv t0 = tcg_temp_new();
-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    gen_set_rm(ctx, a->rm);
    gen_helper_fcvt_s_wu(cpu_fpr[a->rd], cpu_env, t0);
@@ -XXX,XX +XXX,XX @@ static bool trans_fmv_w_x(DisasContext *ctx, arg_fmv_w_x *a)
    REQUIRE_EXT(ctx, RVF);

    TCGv t0 = tcg_temp_new();
-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    tcg_gen_extu_tl_i64(cpu_fpr[a->rd], t0);
    gen_nanbox_s(cpu_fpr[a->rd], cpu_fpr[a->rd]);
@@ -XXX,XX +XXX,XX @@ static bool trans_fcvt_l_s(DisasContext *ctx, arg_fcvt_l_s *a)
    TCGv t0 = tcg_temp_new();
    gen_set_rm(ctx, a->rm);
    gen_helper_fcvt_l_s(t0, cpu_env, cpu_fpr[a->rs1]);
-   gen_set_gpr(a->rd, t0);
+   gen_set_gpr(ctx, a->rd, t0);
    tcg_temp_free(t0);
    return true;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_fcvt_lu_s(DisasContext *ctx, arg_fcvt_lu_s *a)
    TCGv t0 = tcg_temp_new();
    gen_set_rm(ctx, a->rm);
    gen_helper_fcvt_lu_s(t0, cpu_env, cpu_fpr[a->rs1]);
-   gen_set_gpr(a->rd, t0);
+   gen_set_gpr(ctx, a->rd, t0);
    tcg_temp_free(t0);
    return true;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_fcvt_s_l(DisasContext *ctx, arg_fcvt_s_l *a)
    REQUIRE_EXT(ctx, RVF);

    TCGv t0 = tcg_temp_new();
-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    gen_set_rm(ctx, a->rm);
    gen_helper_fcvt_s_l(cpu_fpr[a->rd], cpu_env, t0);
@@ -XXX,XX +XXX,XX @@ static bool trans_fcvt_s_lu(DisasContext *ctx, arg_fcvt_s_lu *a)
    REQUIRE_EXT(ctx, RVF);

    TCGv t0 = tcg_temp_new();
-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    gen_set_rm(ctx, a->rm);
    gen_helper_fcvt_s_lu(cpu_fpr[a->rd], cpu_env, t0);
diff --git a/target/riscv/insn_trans/trans_rvh.c.inc b/target/riscv/insn_trans/trans_rvh.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvh.c.inc
+++ b/target/riscv/insn_trans/trans_rvh.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_hlv_b(DisasContext *ctx, arg_hlv_b *a)

    check_access(ctx);

-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_SB);
-   gen_set_gpr(a->rd, t1);
+   gen_set_gpr(ctx, a->rd, t1);

    tcg_temp_free(t0);
    tcg_temp_free(t1);
@@ -XXX,XX +XXX,XX @@ static bool trans_hlv_h(DisasContext *ctx, arg_hlv_h *a)

    check_access(ctx);

-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_TESW);
-   gen_set_gpr(a->rd, t1);
+   gen_set_gpr(ctx, a->rd, t1);

    tcg_temp_free(t0);
    tcg_temp_free(t1);
@@ -XXX,XX +XXX,XX @@ static bool trans_hlv_w(DisasContext *ctx, arg_hlv_w *a)

    check_access(ctx);

-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_TESL);
-   gen_set_gpr(a->rd, t1);
+   gen_set_gpr(ctx, a->rd, t1);

    tcg_temp_free(t0);
    tcg_temp_free(t1);
@@ -XXX,XX +XXX,XX @@ static bool trans_hlv_bu(DisasContext *ctx, arg_hlv_bu *a)

    check_access(ctx);

-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_UB);
-   gen_set_gpr(a->rd, t1);
+   gen_set_gpr(ctx, a->rd, t1);

    tcg_temp_free(t0);
    tcg_temp_free(t1);
@@ -XXX,XX +XXX,XX @@ static bool trans_hlv_hu(DisasContext *ctx, arg_hlv_hu *a)

    check_access(ctx);

-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);
    tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_TEUW);
-   gen_set_gpr(a->rd, t1);
+   gen_set_gpr(ctx, a->rd, t1);

    tcg_temp_free(t0);
    tcg_temp_free(t1);
@@ -XXX,XX +XXX,XX @@ static bool trans_hsv_b(DisasContext *ctx, arg_hsv_b *a)

    check_access(ctx);

-   gen_get_gpr(t0, a->rs1);
-   gen_get_gpr(dat, a->rs2);
+   gen_get_gpr(ctx, t0, a->rs1);
+   gen_get_gpr(ctx, dat, a->rs2);

    tcg_gen_qemu_st_tl(dat, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_SB);

@@ -XXX,XX +XXX,XX @@ static bool trans_hsv_h(DisasContext *ctx, arg_hsv_h *a)

    check_access(ctx);

-   gen_get_gpr(t0, a->rs1);
-   gen_get_gpr(dat, a->rs2);
+   gen_get_gpr(ctx, t0, a->rs1);
+   gen_get_gpr(ctx, dat, a->rs2);

    tcg_gen_qemu_st_tl(dat, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_TESW);

@@ -XXX,XX +XXX,XX @@ static bool trans_hsv_w(DisasContext *ctx, arg_hsv_w *a)

    check_access(ctx);

-   gen_get_gpr(t0, a->rs1);
-   gen_get_gpr(dat, a->rs2);
+   gen_get_gpr(ctx, t0, a->rs1);
+   gen_get_gpr(ctx, dat, a->rs2);

    tcg_gen_qemu_st_tl(dat, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_TESL);

@@ -XXX,XX +XXX,XX @@ static bool trans_hlv_wu(DisasContext *ctx, arg_hlv_wu *a)

    check_access(ctx);

-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_TEUL);
-   gen_set_gpr(a->rd, t1);
+   gen_set_gpr(ctx, a->rd, t1);

    tcg_temp_free(t0);
    tcg_temp_free(t1);
@@ -XXX,XX +XXX,XX @@ static bool trans_hlv_d(DisasContext *ctx, arg_hlv_d *a)

    check_access(ctx);

-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_TEQ);
-   gen_set_gpr(a->rd, t1);
+   gen_set_gpr(ctx, a->rd, t1);

    tcg_temp_free(t0);
    tcg_temp_free(t1);
@@ -XXX,XX +XXX,XX @@ static bool trans_hsv_d(DisasContext *ctx, arg_hsv_d *a)

    check_access(ctx);

-   gen_get_gpr(t0, a->rs1);
-   gen_get_gpr(dat, a->rs2);
+   gen_get_gpr(ctx, t0, a->rs1);
+   gen_get_gpr(ctx, dat, a->rs2);

    tcg_gen_qemu_st_tl(dat, t0, ctx->mem_idx | TB_FLAGS_PRIV_HYP_ACCESS_MASK, MO_TEQ);

@@ -XXX,XX +XXX,XX @@ static bool trans_hlvx_hu(DisasContext *ctx, arg_hlvx_hu *a)

    check_access(ctx);

-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    gen_helper_hyp_hlvx_hu(t1, cpu_env, t0);
-   gen_set_gpr(a->rd, t1);
+   gen_set_gpr(ctx, a->rd, t1);

    tcg_temp_free(t0);
    tcg_temp_free(t1);
@@ -XXX,XX +XXX,XX @@ static bool trans_hlvx_wu(DisasContext *ctx, arg_hlvx_wu *a)

    check_access(ctx);

-   gen_get_gpr(t0, a->rs1);
+   gen_get_gpr(ctx, t0, a->rs1);

    gen_helper_hyp_hlvx_wu(t1, cpu_env, t0);
-   gen_set_gpr(a->rd, t1);
+   gen_set_gpr(ctx, a->rd, t1);

    tcg_temp_free(t0);
    tcg_temp_free(t1);
diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
778
index XXXXXXX..XXXXXXX 100644
779
--- a/target/riscv/insn_trans/trans_rvi.c.inc
780
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
781
@@ -XXX,XX +XXX,XX @@ static bool trans_jalr(DisasContext *ctx, arg_jalr *a)
782
TCGv t0 = tcg_temp_new();
783
784
785
- gen_get_gpr(cpu_pc, a->rs1);
786
+ gen_get_gpr(ctx, cpu_pc, a->rs1);
787
tcg_gen_addi_tl(cpu_pc, cpu_pc, a->imm);
788
tcg_gen_andi_tl(cpu_pc, cpu_pc, (target_ulong)-2);
789
790
@@ -XXX,XX +XXX,XX @@ static bool gen_branch(DisasContext *ctx, arg_b *a, TCGCond cond)
791
TCGv source1, source2;
792
source1 = tcg_temp_new();
793
source2 = tcg_temp_new();
794
- gen_get_gpr(source1, a->rs1);
795
- gen_get_gpr(source2, a->rs2);
796
+ gen_get_gpr(ctx, source1, a->rs1);
797
+ gen_get_gpr(ctx, source2, a->rs2);
798
799
tcg_gen_brcond_tl(cond, source1, source2, l);
800
gen_goto_tb(ctx, 1, ctx->pc_succ_insn);
801
@@ -XXX,XX +XXX,XX @@ static bool gen_load(DisasContext *ctx, arg_lb *a, MemOp memop)
802
{
803
TCGv t0 = tcg_temp_new();
804
TCGv t1 = tcg_temp_new();
805
- gen_get_gpr(t0, a->rs1);
806
+ gen_get_gpr(ctx, t0, a->rs1);
807
tcg_gen_addi_tl(t0, t0, a->imm);
808
809
tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, memop);
810
- gen_set_gpr(a->rd, t1);
811
+ gen_set_gpr(ctx, a->rd, t1);
812
tcg_temp_free(t0);
813
tcg_temp_free(t1);
814
return true;
815
@@ -XXX,XX +XXX,XX @@ static bool gen_store(DisasContext *ctx, arg_sb *a, MemOp memop)
816
{
817
TCGv t0 = tcg_temp_new();
818
TCGv dat = tcg_temp_new();
819
- gen_get_gpr(t0, a->rs1);
820
+ gen_get_gpr(ctx, t0, a->rs1);
821
tcg_gen_addi_tl(t0, t0, a->imm);
822
- gen_get_gpr(dat, a->rs2);
823
+ gen_get_gpr(ctx, dat, a->rs2);
824
825
tcg_gen_qemu_st_tl(dat, t0, ctx->mem_idx, memop);
826
tcg_temp_free(t0);
827
@@ -XXX,XX +XXX,XX @@ static bool trans_srliw(DisasContext *ctx, arg_srliw *a)
828
{
829
REQUIRE_64BIT(ctx);
830
TCGv t = tcg_temp_new();
831
- gen_get_gpr(t, a->rs1);
832
+ gen_get_gpr(ctx, t, a->rs1);
833
tcg_gen_extract_tl(t, t, a->shamt, 32 - a->shamt);
834
/* sign-extend for W instructions */
835
tcg_gen_ext32s_tl(t, t);
836
- gen_set_gpr(a->rd, t);
837
+ gen_set_gpr(ctx, a->rd, t);
838
tcg_temp_free(t);
839
return true;
840
}
841
@@ -XXX,XX +XXX,XX @@ static bool trans_sraiw(DisasContext *ctx, arg_sraiw *a)
842
{
843
REQUIRE_64BIT(ctx);
844
TCGv t = tcg_temp_new();
845
- gen_get_gpr(t, a->rs1);
846
+ gen_get_gpr(ctx, t, a->rs1);
847
tcg_gen_sextract_tl(t, t, a->shamt, 32 - a->shamt);
848
- gen_set_gpr(a->rd, t);
849
+ gen_set_gpr(ctx, a->rd, t);
850
tcg_temp_free(t);
851
return true;
852
}
853
@@ -XXX,XX +XXX,XX @@ static bool trans_sllw(DisasContext *ctx, arg_sllw *a)
854
TCGv source1 = tcg_temp_new();
855
TCGv source2 = tcg_temp_new();
856
857
- gen_get_gpr(source1, a->rs1);
858
- gen_get_gpr(source2, a->rs2);
859
+ gen_get_gpr(ctx, source1, a->rs1);
860
+ gen_get_gpr(ctx, source2, a->rs2);
861
862
tcg_gen_andi_tl(source2, source2, 0x1F);
863
tcg_gen_shl_tl(source1, source1, source2);
864
865
tcg_gen_ext32s_tl(source1, source1);
866
- gen_set_gpr(a->rd, source1);
867
+ gen_set_gpr(ctx, a->rd, source1);
868
tcg_temp_free(source1);
869
tcg_temp_free(source2);
870
return true;
871
@@ -XXX,XX +XXX,XX @@ static bool trans_srlw(DisasContext *ctx, arg_srlw *a)
872
TCGv source1 = tcg_temp_new();
873
TCGv source2 = tcg_temp_new();
874
875
- gen_get_gpr(source1, a->rs1);
876
- gen_get_gpr(source2, a->rs2);
877
+ gen_get_gpr(ctx, source1, a->rs1);
878
+ gen_get_gpr(ctx, source2, a->rs2);
879
880
/* clear upper 32 */
881
tcg_gen_ext32u_tl(source1, source1);
882
@@ -XXX,XX +XXX,XX @@ static bool trans_srlw(DisasContext *ctx, arg_srlw *a)
883
tcg_gen_shr_tl(source1, source1, source2);
884
885
tcg_gen_ext32s_tl(source1, source1);
886
- gen_set_gpr(a->rd, source1);
887
+ gen_set_gpr(ctx, a->rd, source1);
888
tcg_temp_free(source1);
889
tcg_temp_free(source2);
890
return true;
891
@@ -XXX,XX +XXX,XX @@ static bool trans_sraw(DisasContext *ctx, arg_sraw *a)
892
TCGv source1 = tcg_temp_new();
893
TCGv source2 = tcg_temp_new();
894
895
- gen_get_gpr(source1, a->rs1);
896
- gen_get_gpr(source2, a->rs2);
897
+ gen_get_gpr(ctx, source1, a->rs1);
898
+ gen_get_gpr(ctx, source2, a->rs2);
899
900
/*
901
* first, trick to get it to act like working on 32 bits (get rid of
902
@@ -XXX,XX +XXX,XX @@ static bool trans_sraw(DisasContext *ctx, arg_sraw *a)
903
tcg_gen_andi_tl(source2, source2, 0x1F);
904
tcg_gen_sar_tl(source1, source1, source2);
905
906
- gen_set_gpr(a->rd, source1);
907
+ gen_set_gpr(ctx, a->rd, source1);
908
tcg_temp_free(source1);
909
tcg_temp_free(source2);
910
911
@@ -XXX,XX +XXX,XX @@ static bool trans_fence_i(DisasContext *ctx, arg_fence_i *a)
912
csr_store = tcg_temp_new(); \
913
dest = tcg_temp_new(); \
914
rs1_pass = tcg_temp_new(); \
915
- gen_get_gpr(source1, a->rs1); \
916
+ gen_get_gpr(ctx, source1, a->rs1); \
917
tcg_gen_movi_tl(cpu_pc, ctx->base.pc_next); \
918
tcg_gen_movi_tl(rs1_pass, a->rs1); \
919
tcg_gen_movi_tl(csr_store, a->csr); \
920
@@ -XXX,XX +XXX,XX @@ static bool trans_fence_i(DisasContext *ctx, arg_fence_i *a)
921
} while (0)
922
923
#define RISCV_OP_CSR_POST do {\
924
- gen_set_gpr(a->rd, dest); \
925
+ gen_set_gpr(ctx, a->rd, dest); \
926
tcg_gen_movi_tl(cpu_pc, ctx->pc_succ_insn); \
927
exit_tb(ctx); \
928
ctx->base.is_jmp = DISAS_NORETURN; \
diff --git a/target/riscv/insn_trans/trans_rvm.c.inc b/target/riscv/insn_trans/trans_rvm.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvm.c.inc
+++ b/target/riscv/insn_trans/trans_rvm.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_mulh(DisasContext *ctx, arg_mulh *a)
    REQUIRE_EXT(ctx, RVM);
    TCGv source1 = tcg_temp_new();
    TCGv source2 = tcg_temp_new();
-   gen_get_gpr(source1, a->rs1);
-   gen_get_gpr(source2, a->rs2);
+   gen_get_gpr(ctx, source1, a->rs1);
+   gen_get_gpr(ctx, source2, a->rs2);

    tcg_gen_muls2_tl(source2, source1, source1, source2);

-   gen_set_gpr(a->rd, source1);
+   gen_set_gpr(ctx, a->rd, source1);
    tcg_temp_free(source1);
    tcg_temp_free(source2);
    return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_mulhu(DisasContext *ctx, arg_mulhu *a)
    REQUIRE_EXT(ctx, RVM);
    TCGv source1 = tcg_temp_new();
    TCGv source2 = tcg_temp_new();
-   gen_get_gpr(source1, a->rs1);
-   gen_get_gpr(source2, a->rs2);
+   gen_get_gpr(ctx, source1, a->rs1);
+   gen_get_gpr(ctx, source2, a->rs2);

    tcg_gen_mulu2_tl(source2, source1, source1, source2);

-   gen_set_gpr(a->rd, source1);
+   gen_set_gpr(ctx, a->rd, source1);
    tcg_temp_free(source1);
    tcg_temp_free(source2);
    return true;
965
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
15
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
966
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
967
--- a/target/riscv/insn_trans/trans_rvv.c.inc
17
--- a/target/riscv/insn_trans/trans_rvv.c.inc
968
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
18
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
969
@@ -XXX,XX +XXX,XX @@ static bool trans_vsetvl(DisasContext *ctx, arg_vsetvl *a)
19
@@ -XXX,XX +XXX,XX @@ static bool opffv_narrow_check(DisasContext *s, arg_rmr *a)
970
s1 = tcg_constant_tl(RV_VLEN_MAX);
20
static bool opffv_rod_narrow_check(DisasContext *s, arg_rmr *a)
971
} else {
21
{
972
s1 = tcg_temp_new();
22
return opfv_narrow_check(s, a) &&
973
- gen_get_gpr(s1, a->rs1);
23
+ require_rvf(s) &&
974
+ gen_get_gpr(ctx, s1, a->rs1);
24
require_scale_rvf(s) &&
975
}
25
(s->sew != MO_8);
976
- gen_get_gpr(s2, a->rs2);
26
}
977
+ gen_get_gpr(ctx, s2, a->rs2);
978
gen_helper_vsetvl(dst, cpu_env, s1, s2);
979
- gen_set_gpr(a->rd, dst);
980
+ gen_set_gpr(ctx, a->rd, dst);
981
tcg_gen_movi_tl(cpu_pc, ctx->pc_succ_insn);
982
lookup_and_goto_ptr(ctx);
983
ctx->base.is_jmp = DISAS_NORETURN;
984
@@ -XXX,XX +XXX,XX @@ static bool trans_vsetvli(DisasContext *ctx, arg_vsetvli *a)
985
s1 = tcg_constant_tl(RV_VLEN_MAX);
986
} else {
987
s1 = tcg_temp_new();
988
- gen_get_gpr(s1, a->rs1);
989
+ gen_get_gpr(ctx, s1, a->rs1);
990
}
991
gen_helper_vsetvl(dst, cpu_env, s1, s2);
992
- gen_set_gpr(a->rd, dst);
993
+ gen_set_gpr(ctx, a->rd, dst);
994
gen_goto_tb(ctx, 0, ctx->pc_succ_insn);
995
ctx->base.is_jmp = DISAS_NORETURN;
996
997
@@ -XXX,XX +XXX,XX @@ static bool ldst_us_trans(uint32_t vd, uint32_t rs1, uint32_t data,
998
*/
999
desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
1000
1001
- gen_get_gpr(base, rs1);
1002
+ gen_get_gpr(s, base, rs1);
1003
tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
1004
tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
1005
1006
@@ -XXX,XX +XXX,XX @@ static bool ldst_stride_trans(uint32_t vd, uint32_t rs1, uint32_t rs2,
1007
stride = tcg_temp_new();
1008
desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
1009
1010
- gen_get_gpr(base, rs1);
1011
- gen_get_gpr(stride, rs2);
1012
+ gen_get_gpr(s, base, rs1);
1013
+ gen_get_gpr(s, stride, rs2);
1014
tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
1015
tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
1016
1017
@@ -XXX,XX +XXX,XX @@ static bool ldst_index_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
1018
base = tcg_temp_new();
1019
desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
1020
1021
- gen_get_gpr(base, rs1);
1022
+ gen_get_gpr(s, base, rs1);
1023
tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
1024
tcg_gen_addi_ptr(index, cpu_env, vreg_ofs(s, vs2));
1025
tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
1026
@@ -XXX,XX +XXX,XX @@ static bool ldff_trans(uint32_t vd, uint32_t rs1, uint32_t data,
1027
base = tcg_temp_new();
1028
desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
1029
1030
- gen_get_gpr(base, rs1);
1031
+ gen_get_gpr(s, base, rs1);
1032
tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
1033
tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
1034
1035
@@ -XXX,XX +XXX,XX @@ static bool amo_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
1036
base = tcg_temp_new();
1037
desc = tcg_constant_i32(simd_desc(s->vlen / 8, s->vlen / 8, data));
1038
1039
- gen_get_gpr(base, rs1);
1040
+ gen_get_gpr(s, base, rs1);
1041
tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
1042
tcg_gen_addi_ptr(index, cpu_env, vreg_ofs(s, vs2));
1043
tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
1044
@@ -XXX,XX +XXX,XX @@ static bool opivx_trans(uint32_t vd, uint32_t rs1, uint32_t vs2, uint32_t vm,
1045
mask = tcg_temp_new_ptr();
1046
src2 = tcg_temp_new_ptr();
1047
src1 = tcg_temp_new();
1048
- gen_get_gpr(src1, rs1);
1049
+ gen_get_gpr(s, src1, rs1);
1050
1051
data = FIELD_DP32(data, VDATA, MLEN, s->mlen);
1052
data = FIELD_DP32(data, VDATA, VM, vm);
1053
@@ -XXX,XX +XXX,XX @@ do_opivx_gvec(DisasContext *s, arg_rmrr *a, GVecGen2sFn *gvec_fn,
1054
TCGv_i64 src1 = tcg_temp_new_i64();
1055
TCGv tmp = tcg_temp_new();
1056
1057
- gen_get_gpr(tmp, a->rs1);
1058
+ gen_get_gpr(s, tmp, a->rs1);
1059
tcg_gen_ext_tl_i64(src1, tmp);
1060
gvec_fn(s->sew, vreg_ofs(s, a->rd), vreg_ofs(s, a->rs2),
1061
src1, MAXSZ(s), MAXSZ(s));
1062
@@ -XXX,XX +XXX,XX @@ do_opivx_gvec_shift(DisasContext *s, arg_rmrr *a, GVecGen2sFn32 *gvec_fn,
1063
TCGv_i32 src1 = tcg_temp_new_i32();
1064
TCGv tmp = tcg_temp_new();
1065
1066
- gen_get_gpr(tmp, a->rs1);
1067
+ gen_get_gpr(s, tmp, a->rs1);
1068
tcg_gen_trunc_tl_i32(src1, tmp);
1069
tcg_gen_extract_i32(src1, src1, 0, s->sew + 3);
1070
gvec_fn(s->sew, vreg_ofs(s, a->rd), vreg_ofs(s, a->rs2),
1071
@@ -XXX,XX +XXX,XX @@ static bool trans_vmv_v_x(DisasContext *s, arg_vmv_v_x *a)
1072
tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
1073
1074
s1 = tcg_temp_new();
1075
- gen_get_gpr(s1, a->rs1);
1076
+ gen_get_gpr(s, s1, a->rs1);
1077
1078
if (s->vl_eq_vlmax) {
1079
tcg_gen_gvec_dup_tl(s->sew, vreg_ofs(s, a->rd),
1080
@@ -XXX,XX +XXX,XX @@ static bool trans_vmpopc_m(DisasContext *s, arg_rmr *a)
1081
tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
1082
1083
gen_helper_vmpopc_m(dst, mask, src2, cpu_env, desc);
1084
- gen_set_gpr(a->rd, dst);
1085
+ gen_set_gpr(s, a->rd, dst);
1086
1087
tcg_temp_free_ptr(mask);
1088
tcg_temp_free_ptr(src2);
1089
@@ -XXX,XX +XXX,XX @@ static bool trans_vmfirst_m(DisasContext *s, arg_rmr *a)
1090
tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
1091
1092
gen_helper_vmfirst_m(dst, mask, src2, cpu_env, desc);
1093
- gen_set_gpr(a->rd, dst);
1094
+ gen_set_gpr(s, a->rd, dst);
1095
1096
tcg_temp_free_ptr(mask);
1097
tcg_temp_free_ptr(src2);
1098
@@ -XXX,XX +XXX,XX @@ static bool trans_vext_x_v(DisasContext *s, arg_r *a)
1099
vec_element_loadx(s, tmp, a->rs2, cpu_gpr[a->rs1], vlmax);
1100
}
1101
tcg_gen_trunc_i64_tl(dest, tmp);
1102
- gen_set_gpr(a->rd, dest);
1103
+ gen_set_gpr(s, a->rd, dest);
1104
1105
tcg_temp_free(dest);
1106
tcg_temp_free_i64(tmp);
1107
--
27
--
1108
2.31.1
28
2.45.1
1109
1110
1111
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Max Chou <max.chou@sifive.com>
2
2
3
Utilize the condition in the movcond more; this allows some of
3
If the checking functions check both the single- and double-width
4
the setcond that were feeding into movcond to be removed.
4
operators at the same time, then the single-width operator checking
5
Do not write into source1 and source2. Rename "condN" to "tempN"
5
functions (require_rvf[min]) will check whether the SEW is 8.
6
and use the temporaries for more than holding conditions.
7
6
8
Tested-by: Bin Meng <bmeng.cn@gmail.com>
7
Signed-off-by: Max Chou <max.chou@sifive.com>
9
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
8
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
10
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
9
Cc: qemu-stable <qemu-stable@nongnu.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-ID: <20240322092600.1198921-5-max.chou@sifive.com>
12
Message-id: 20210823195529.560295-4-richard.henderson@linaro.org
13
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
11
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
14
---
12
---
15
target/riscv/translate.c | 174 ++++++++++++++++++++-------------------
13
target/riscv/insn_trans/trans_rvv.c.inc | 16 ++++------------
16
1 file changed, 91 insertions(+), 83 deletions(-)
14
1 file changed, 4 insertions(+), 12 deletions(-)
17
15
18
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
16
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
19
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
20
--- a/target/riscv/translate.c
18
--- a/target/riscv/insn_trans/trans_rvv.c.inc
21
+++ b/target/riscv/translate.c
19
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
22
@@ -XXX,XX +XXX,XX @@ static void gen_mulhsu(TCGv ret, TCGv arg1, TCGv arg2)
20
@@ -XXX,XX +XXX,XX @@ static bool opfvv_widen_check(DisasContext *s, arg_rmrr *a)
23
21
return require_rvv(s) &&
24
static void gen_div(TCGv ret, TCGv source1, TCGv source2)
22
require_rvf(s) &&
23
require_scale_rvf(s) &&
24
- (s->sew != MO_8) &&
25
vext_check_isa_ill(s) &&
26
vext_check_dss(s, a->rd, a->rs1, a->rs2, a->vm);
27
}
28
@@ -XXX,XX +XXX,XX @@ static bool opfvf_widen_check(DisasContext *s, arg_rmrr *a)
29
return require_rvv(s) &&
30
require_rvf(s) &&
31
require_scale_rvf(s) &&
32
- (s->sew != MO_8) &&
33
vext_check_isa_ill(s) &&
34
vext_check_ds(s, a->rd, a->rs2, a->vm);
35
}
36
@@ -XXX,XX +XXX,XX @@ static bool opfwv_widen_check(DisasContext *s, arg_rmrr *a)
37
return require_rvv(s) &&
38
require_rvf(s) &&
39
require_scale_rvf(s) &&
40
- (s->sew != MO_8) &&
41
vext_check_isa_ill(s) &&
42
vext_check_dds(s, a->rd, a->rs1, a->rs2, a->vm);
43
}
44
@@ -XXX,XX +XXX,XX @@ static bool opfwf_widen_check(DisasContext *s, arg_rmrr *a)
45
return require_rvv(s) &&
46
require_rvf(s) &&
47
require_scale_rvf(s) &&
48
- (s->sew != MO_8) &&
49
vext_check_isa_ill(s) &&
50
vext_check_dd(s, a->rd, a->rs2, a->vm);
51
}
52
@@ -XXX,XX +XXX,XX @@ static bool opffv_widen_check(DisasContext *s, arg_rmr *a)
25
{
53
{
26
- TCGv cond1, cond2, zeroreg, resultopt1;
54
return opfv_widen_check(s, a) &&
27
+ TCGv temp1, temp2, zero, one, mone, min;
55
require_rvfmin(s) &&
28
+
56
- require_scale_rvfmin(s) &&
29
+ temp1 = tcg_temp_new();
57
- (s->sew != MO_8);
30
+ temp2 = tcg_temp_new();
58
+ require_scale_rvfmin(s);
31
+ zero = tcg_constant_tl(0);
32
+ one = tcg_constant_tl(1);
33
+ mone = tcg_constant_tl(-1);
34
+ min = tcg_constant_tl(1ull << (TARGET_LONG_BITS - 1));
35
+
36
/*
37
- * Handle by altering args to tcg_gen_div to produce req'd results:
38
- * For overflow: want source1 in source1 and 1 in source2
39
- * For div by zero: want -1 in source1 and 1 in source2 -> -1 result
40
+ * If overflow, set temp2 to 1, else source2.
41
+ * This produces the required result of min.
42
*/
43
- cond1 = tcg_temp_new();
44
- cond2 = tcg_temp_new();
45
- zeroreg = tcg_constant_tl(0);
46
- resultopt1 = tcg_temp_new();
47
-
48
- tcg_gen_movi_tl(resultopt1, (target_ulong)-1);
49
- tcg_gen_setcondi_tl(TCG_COND_EQ, cond2, source2, (target_ulong)(~0L));
50
- tcg_gen_setcondi_tl(TCG_COND_EQ, cond1, source1,
51
- ((target_ulong)1) << (TARGET_LONG_BITS - 1));
52
- tcg_gen_and_tl(cond1, cond1, cond2); /* cond1 = overflow */
53
- tcg_gen_setcondi_tl(TCG_COND_EQ, cond2, source2, 0); /* cond2 = div 0 */
54
- /* if div by zero, set source1 to -1, otherwise don't change */
55
- tcg_gen_movcond_tl(TCG_COND_EQ, source1, cond2, zeroreg, source1,
56
- resultopt1);
57
- /* if overflow or div by zero, set source2 to 1, else don't change */
58
- tcg_gen_or_tl(cond1, cond1, cond2);
59
- tcg_gen_movi_tl(resultopt1, (target_ulong)1);
60
- tcg_gen_movcond_tl(TCG_COND_EQ, source2, cond1, zeroreg, source2,
61
- resultopt1);
62
- tcg_gen_div_tl(ret, source1, source2);
63
-
64
- tcg_temp_free(cond1);
65
- tcg_temp_free(cond2);
66
- tcg_temp_free(resultopt1);
67
+ tcg_gen_setcond_tl(TCG_COND_EQ, temp1, source1, min);
68
+ tcg_gen_setcond_tl(TCG_COND_EQ, temp2, source2, mone);
69
+ tcg_gen_and_tl(temp1, temp1, temp2);
70
+ tcg_gen_movcond_tl(TCG_COND_NE, temp2, temp1, zero, one, source2);
71
+
72
+ /*
73
+ * If div by zero, set temp1 to -1 and temp2 to 1 to
74
+ * produce the required result of -1.
75
+ */
76
+ tcg_gen_movcond_tl(TCG_COND_EQ, temp1, source2, zero, mone, source1);
77
+ tcg_gen_movcond_tl(TCG_COND_EQ, temp2, source2, zero, one, temp2);
78
+
79
+ tcg_gen_div_tl(ret, temp1, temp2);
80
+
81
+ tcg_temp_free(temp1);
82
+ tcg_temp_free(temp2);
83
}
59
}
84
60
85
static void gen_divu(TCGv ret, TCGv source1, TCGv source2)
61
#define GEN_OPFV_WIDEN_TRANS(NAME, CHECK, HELPER, FRM) \
62
@@ -XXX,XX +XXX,XX @@ static bool opffv_narrow_check(DisasContext *s, arg_rmr *a)
86
{
63
{
87
- TCGv cond1, zeroreg, resultopt1;
64
return opfv_narrow_check(s, a) &&
88
- cond1 = tcg_temp_new();
65
require_rvfmin(s) &&
89
+ TCGv temp1, temp2, zero, one, max;
66
- require_scale_rvfmin(s) &&
90
67
- (s->sew != MO_8);
91
- zeroreg = tcg_constant_tl(0);
68
+ require_scale_rvfmin(s);
92
- resultopt1 = tcg_temp_new();
93
+ temp1 = tcg_temp_new();
94
+ temp2 = tcg_temp_new();
95
+ zero = tcg_constant_tl(0);
96
+ one = tcg_constant_tl(1);
97
+ max = tcg_constant_tl(~0);
98
99
- tcg_gen_setcondi_tl(TCG_COND_EQ, cond1, source2, 0);
100
- tcg_gen_movi_tl(resultopt1, (target_ulong)-1);
101
- tcg_gen_movcond_tl(TCG_COND_EQ, source1, cond1, zeroreg, source1,
102
- resultopt1);
103
- tcg_gen_movi_tl(resultopt1, (target_ulong)1);
104
- tcg_gen_movcond_tl(TCG_COND_EQ, source2, cond1, zeroreg, source2,
105
- resultopt1);
106
- tcg_gen_divu_tl(ret, source1, source2);
107
+ /*
108
+ * If div by zero, set temp1 to max and temp2 to 1 to
109
+ * produce the required result of max.
110
+ */
111
+ tcg_gen_movcond_tl(TCG_COND_EQ, temp1, source2, zero, max, source1);
112
+ tcg_gen_movcond_tl(TCG_COND_EQ, temp2, source2, zero, one, source2);
113
+ tcg_gen_divu_tl(ret, temp1, temp2);
114
115
- tcg_temp_free(cond1);
116
- tcg_temp_free(resultopt1);
117
+ tcg_temp_free(temp1);
118
+ tcg_temp_free(temp2);
119
}
69
}
120
70
121
static void gen_rem(TCGv ret, TCGv source1, TCGv source2)
71
static bool opffv_rod_narrow_check(DisasContext *s, arg_rmr *a)
122
{
72
{
123
- TCGv cond1, cond2, zeroreg, resultopt1;
73
return opfv_narrow_check(s, a) &&
124
-
74
require_rvf(s) &&
125
- cond1 = tcg_temp_new();
75
- require_scale_rvf(s) &&
126
- cond2 = tcg_temp_new();
76
- (s->sew != MO_8);
127
- zeroreg = tcg_constant_tl(0);
77
+ require_scale_rvf(s);
128
- resultopt1 = tcg_temp_new();
129
-
130
- tcg_gen_movi_tl(resultopt1, 1L);
131
- tcg_gen_setcondi_tl(TCG_COND_EQ, cond2, source2, (target_ulong)-1);
132
- tcg_gen_setcondi_tl(TCG_COND_EQ, cond1, source1,
133
- (target_ulong)1 << (TARGET_LONG_BITS - 1));
134
- tcg_gen_and_tl(cond2, cond1, cond2); /* cond1 = overflow */
135
- tcg_gen_setcondi_tl(TCG_COND_EQ, cond1, source2, 0); /* cond2 = div 0 */
136
- /* if overflow or div by zero, set source2 to 1, else don't change */
137
- tcg_gen_or_tl(cond2, cond1, cond2);
138
- tcg_gen_movcond_tl(TCG_COND_EQ, source2, cond2, zeroreg, source2,
139
- resultopt1);
140
- tcg_gen_rem_tl(resultopt1, source1, source2);
141
- /* if div by zero, just return the original dividend */
142
- tcg_gen_movcond_tl(TCG_COND_EQ, ret, cond1, zeroreg, resultopt1,
143
- source1);
144
-
145
- tcg_temp_free(cond1);
146
- tcg_temp_free(cond2);
147
- tcg_temp_free(resultopt1);
148
+ TCGv temp1, temp2, zero, one, mone, min;
149
+
150
+ temp1 = tcg_temp_new();
151
+ temp2 = tcg_temp_new();
152
+ zero = tcg_constant_tl(0);
153
+ one = tcg_constant_tl(1);
154
+ mone = tcg_constant_tl(-1);
155
+ min = tcg_constant_tl(1ull << (TARGET_LONG_BITS - 1));
156
+
157
+ /*
158
+ * If overflow, set temp1 to 0, else source1.
159
+ * This avoids a possible host trap, and produces the required result of 0.
160
+ */
161
+ tcg_gen_setcond_tl(TCG_COND_EQ, temp1, source1, min);
162
+ tcg_gen_setcond_tl(TCG_COND_EQ, temp2, source2, mone);
163
+ tcg_gen_and_tl(temp1, temp1, temp2);
164
+ tcg_gen_movcond_tl(TCG_COND_NE, temp1, temp1, zero, zero, source1);
165
+
166
+ /*
167
+ * If div by zero, set temp2 to 1, else source2.
168
+ * This avoids a possible host trap, but produces an incorrect result.
169
+ */
170
+ tcg_gen_movcond_tl(TCG_COND_EQ, temp2, source2, zero, one, source2);
171
+
172
+ tcg_gen_rem_tl(temp1, temp1, temp2);
173
+
174
+ /* If div by zero, the required result is the original dividend. */
175
+ tcg_gen_movcond_tl(TCG_COND_EQ, ret, source2, zero, source1, temp1);
176
+
177
+ tcg_temp_free(temp1);
178
+ tcg_temp_free(temp2);
179
}
78
}
180
79
181
static void gen_remu(TCGv ret, TCGv source1, TCGv source2)
80
#define GEN_OPFV_NARROW_TRANS(NAME, CHECK, HELPER, FRM) \
81
@@ -XXX,XX +XXX,XX @@ static bool freduction_widen_check(DisasContext *s, arg_rmrr *a)
182
{
82
{
183
- TCGv cond1, zeroreg, resultopt1;
83
return reduction_widen_check(s, a) &&
184
- cond1 = tcg_temp_new();
84
require_rvf(s) &&
185
- zeroreg = tcg_constant_tl(0);
85
- require_scale_rvf(s) &&
186
- resultopt1 = tcg_temp_new();
86
- (s->sew != MO_8);
187
-
87
+ require_scale_rvf(s);
188
- tcg_gen_movi_tl(resultopt1, (target_ulong)1);
189
- tcg_gen_setcondi_tl(TCG_COND_EQ, cond1, source2, 0);
190
- tcg_gen_movcond_tl(TCG_COND_EQ, source2, cond1, zeroreg, source2,
191
- resultopt1);
192
- tcg_gen_remu_tl(resultopt1, source1, source2);
193
- /* if div by zero, just return the original dividend */
194
- tcg_gen_movcond_tl(TCG_COND_EQ, ret, cond1, zeroreg, resultopt1,
195
- source1);
196
-
197
- tcg_temp_free(cond1);
198
- tcg_temp_free(resultopt1);
199
+ TCGv temp, zero, one;
200
+
201
+ temp = tcg_temp_new();
202
+ zero = tcg_constant_tl(0);
203
+ one = tcg_constant_tl(1);
204
+
205
+ /*
206
+ * If div by zero, set temp to 1, else source2.
207
+ * This avoids a possible host trap, but produces an incorrect result.
208
+ */
209
+ tcg_gen_movcond_tl(TCG_COND_EQ, temp, source2, zero, one, source2);
210
+
211
+ tcg_gen_remu_tl(temp, source1, temp);
212
+
213
+ /* If div by zero, the required result is the original dividend. */
214
+ tcg_gen_movcond_tl(TCG_COND_EQ, ret, source2, zero, source1, temp);
215
+
216
+ tcg_temp_free(temp);
217
}
88
}
218
89
219
static void gen_jal(DisasContext *ctx, int rd, target_ulong imm)
90
GEN_OPFVV_WIDEN_TRANS(vfwredusum_vs, freduction_widen_check)
220
--
91
--
221
2.31.1
92
2.45.1
222
223
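The movcond rewrites in gen_div/gen_divu/gen_rem/gen_remu above all serve one purpose: supply the RISC-V-mandated results for division by zero and signed overflow without letting the host's division instruction trap. As a rough standalone model of those required results (hypothetical rv_* helper names, assuming a 64-bit target_long; this is not QEMU code):

```c
#include <assert.h>
#include <stdint.h>

/* Required M-extension results for the edge cases the movcond
 * sequences steer around: div by zero and INT_MIN / -1 overflow. */
static int64_t rv_div(int64_t a, int64_t b)
{
    if (b == 0) {
        return -1;                        /* div by zero: all ones */
    }
    if (a == INT64_MIN && b == -1) {
        return INT64_MIN;                 /* overflow: the minimum value */
    }
    return a / b;
}

static uint64_t rv_divu(uint64_t a, uint64_t b)
{
    return b == 0 ? UINT64_MAX : a / b;   /* div by zero: all ones (max) */
}

static int64_t rv_rem(int64_t a, int64_t b)
{
    if (b == 0) {
        return a;                         /* rem by zero: the dividend */
    }
    if (a == INT64_MIN && b == -1) {
        return 0;                         /* overflow: remainder is 0 */
    }
    return a % b;
}

static uint64_t rv_remu(uint64_t a, uint64_t b)
{
    return b == 0 ? a : a % b;            /* rem by zero: the dividend */
}
```

The generated TCG uses movcond rather than branches so that each helper stays a single straight-line sequence; the operands fed to the real host division are patched to safe values, and the architecturally required result is selected afterwards.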
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
2
2
3
These operations can be done in one instruction on some hosts.
3
raise_mmu_exception(), as is today, is prioritizing guest page faults by
4
checking first if virt_enabled && !first_stage, and then considering the
5
regular inst/load/store faults.
4
6
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
There's no mention in the spec about guest page fault being a higher
6
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
8
priority than PMP faults. In fact, privileged spec section 3.7.1 says:
9
10
"Attempting to fetch an instruction from a PMP region that does not have
11
execute permissions raises an instruction access-fault exception.
12
Attempting to execute a load or load-reserved instruction which accesses
13
a physical address within a PMP region without read permissions raises a
14
load access-fault exception. Attempting to execute a store,
15
store-conditional, or AMO instruction which accesses a physical address
16
within a PMP region without write permissions raises a store
17
access-fault exception."
18
19
So, in fact, we're doing it wrong - PMP faults should always be thrown,
20
regardless of also being a first or second stage fault.
21
22
The way riscv_cpu_tlb_fill() and get_physical_address() work is
23
adequate: a TRANSLATE_PMP_FAIL error is immediately reported and
24
reflected in the 'pmp_violation' flag. What we need is to change
25
raise_mmu_exception() to prioritize it.
26
27
Reported-by: Joseph Chan <jchan@ventanamicro.com>
28
Fixes: 82d53adfbb ("target/riscv/cpu_helper.c: Invalid exception on MMU translation stage")
29
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
7
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
30
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
8
Message-id: 20210823195529.560295-14-richard.henderson@linaro.org
31
Message-ID: <20240413105929.7030-1-alexei.filippov@syntacore.com>
32
Cc: qemu-stable <qemu-stable@nongnu.org>
9
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
33
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
10
---
34
---
11
target/riscv/insn_trans/trans_rvi.c.inc | 14 ++++++++++++--
35
target/riscv/cpu_helper.c | 22 ++++++++++++----------
12
1 file changed, 12 insertions(+), 2 deletions(-)
36
1 file changed, 12 insertions(+), 10 deletions(-)
13
37
14
diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
38
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
15
index XXXXXXX..XXXXXXX 100644
39
index XXXXXXX..XXXXXXX 100644
16
--- a/target/riscv/insn_trans/trans_rvi.c.inc
40
--- a/target/riscv/cpu_helper.c
17
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
41
+++ b/target/riscv/cpu_helper.c
18
@@ -XXX,XX +XXX,XX @@ static bool trans_slliw(DisasContext *ctx, arg_slliw *a)
42
@@ -XXX,XX +XXX,XX @@ static void raise_mmu_exception(CPURISCVState *env, target_ulong address,
19
return gen_shift_imm_fn(ctx, a, EXT_NONE, tcg_gen_shli_tl);
43
20
}
44
switch (access_type) {
21
45
case MMU_INST_FETCH:
22
+static void gen_srliw(TCGv dst, TCGv src, target_long shamt)
46
- if (env->virt_enabled && !first_stage) {
23
+{
47
+ if (pmp_violation) {
24
+ tcg_gen_extract_tl(dst, src, shamt, 32 - shamt);
48
+ cs->exception_index = RISCV_EXCP_INST_ACCESS_FAULT;
25
+}
49
+ } else if (env->virt_enabled && !first_stage) {
26
+
50
cs->exception_index = RISCV_EXCP_INST_GUEST_PAGE_FAULT;
27
static bool trans_srliw(DisasContext *ctx, arg_srliw *a)
51
} else {
28
{
52
- cs->exception_index = pmp_violation ?
29
REQUIRE_64BIT(ctx);
53
- RISCV_EXCP_INST_ACCESS_FAULT : RISCV_EXCP_INST_PAGE_FAULT;
30
ctx->w = true;
54
+ cs->exception_index = RISCV_EXCP_INST_PAGE_FAULT;
31
- return gen_shift_imm_fn(ctx, a, EXT_ZERO, tcg_gen_shri_tl);
55
}
32
+ return gen_shift_imm_fn(ctx, a, EXT_NONE, gen_srliw);
56
break;
33
+}
57
case MMU_DATA_LOAD:
34
+
58
- if (two_stage && !first_stage) {
35
+static void gen_sraiw(TCGv dst, TCGv src, target_long shamt)
59
+ if (pmp_violation) {
36
+{
60
+ cs->exception_index = RISCV_EXCP_LOAD_ACCESS_FAULT;
37
+ tcg_gen_sextract_tl(dst, src, shamt, 32 - shamt);
61
+ } else if (two_stage && !first_stage) {
38
}
62
cs->exception_index = RISCV_EXCP_LOAD_GUEST_ACCESS_FAULT;
39
63
} else {
40
static bool trans_sraiw(DisasContext *ctx, arg_sraiw *a)
64
- cs->exception_index = pmp_violation ?
41
{
65
- RISCV_EXCP_LOAD_ACCESS_FAULT : RISCV_EXCP_LOAD_PAGE_FAULT;
42
REQUIRE_64BIT(ctx);
66
+ cs->exception_index = RISCV_EXCP_LOAD_PAGE_FAULT;
43
ctx->w = true;
67
}
44
- return gen_shift_imm_fn(ctx, a, EXT_SIGN, tcg_gen_sari_tl);
68
break;
45
+ return gen_shift_imm_fn(ctx, a, EXT_NONE, gen_sraiw);
69
case MMU_DATA_STORE:
46
}
70
- if (two_stage && !first_stage) {
47
71
+ if (pmp_violation) {
48
static bool trans_addw(DisasContext *ctx, arg_addw *a)
72
+ cs->exception_index = RISCV_EXCP_STORE_AMO_ACCESS_FAULT;
73
+ } else if (two_stage && !first_stage) {
74
cs->exception_index = RISCV_EXCP_STORE_GUEST_AMO_ACCESS_FAULT;
75
} else {
76
- cs->exception_index = pmp_violation ?
77
- RISCV_EXCP_STORE_AMO_ACCESS_FAULT :
78
- RISCV_EXCP_STORE_PAGE_FAULT;
79
+ cs->exception_index = RISCV_EXCP_STORE_PAGE_FAULT;
80
}
81
break;
82
default:
49
--
83
--
50
2.31.1
84
2.45.1
51
52
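The gen_srliw/gen_sraiw helpers above fold the W-form "shift the low 32 bits, then sign-extend bit 31 of the 32-bit result" semantics into a single extract/sextract op. A standalone model of the architectural behaviour (hypothetical rv64_* names, not the QEMU helpers):

```c
#include <assert.h>
#include <stdint.h>

/* RV64 srliw: logical right shift of the low 32 bits of rs1,
 * then sign-extend the 32-bit result to 64 bits. */
static int64_t rv64_srliw(int64_t rs1, unsigned shamt)
{
    uint32_t r = (uint32_t)rs1 >> (shamt & 31);  /* logical 32-bit shift */
    return (int64_t)(int32_t)r;                  /* sign-extend the result */
}

/* RV64 sraiw: arithmetic right shift of the low 32 bits, sign-extended.
 * (Right shift of a negative int32_t is implementation-defined in C but
 * arithmetic on all mainstream compilers; fine for a sketch.) */
static int64_t rv64_sraiw(int64_t rs1, unsigned shamt)
{
    int32_t r = (int32_t)rs1 >> (shamt & 31);
    return (int64_t)r;
}
```

For shamt >= 1 the logical 32-bit shift leaves bit 31 of the result clear, so a zero-extending tcg_gen_extract_tl matches the required sign extension; presumably the shamt == 0 case is handled separately by gen_shift_imm_fn before these callbacks run.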
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Alexei Filippov <alexei.filippov@syntacore.com>
2
2
3
Use ctx->w and the enhanced gen_arith function.
3
The previous patch fixed the PMP priority in raise_mmu_exception(), but we're still
4
setting mtval2 incorrectly. In riscv_cpu_tlb_fill(), after pmp check in 2 stage
5
translation part, mtval2 will be set in the case of a successful 2-stage translation but a
6
failed pmp check.
4
7
5
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
8
In this case we would set mtval2 via env->guest_phys_fault_addr in the context of
9
riscv_cpu_tlb_fill(), as if this were a guest page fault, but it was not one, and mtval2
10
should be zero, according to RISCV privileged spec sect. 9.4.4: When a guest
11
page-fault is taken into M-mode, mtval2 is written with either zero or guest
12
physical address that faulted, shifted by 2 bits. *For other traps, mtval2
13
is set to zero...*
14
15
Signed-off-by: Alexei Filippov <alexei.filippov@syntacore.com>
16
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
6
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
17
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
18
Message-ID: <20240503103052.6819-1-alexei.filippov@syntacore.com>
8
Message-id: 20210823195529.560295-8-richard.henderson@linaro.org
19
Cc: qemu-stable <qemu-stable@nongnu.org>
9
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
20
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
10
---
21
---
11
target/riscv/translate.c | 42 -------------------------
22
target/riscv/cpu_helper.c | 12 ++++++------
12
target/riscv/insn_trans/trans_rvm.c.inc | 16 +++++-----
23
1 file changed, 6 insertions(+), 6 deletions(-)
13
2 files changed, 8 insertions(+), 50 deletions(-)
14
24
15
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
25
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
16
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
17
--- a/target/riscv/translate.c
27
--- a/target/riscv/cpu_helper.c
18
+++ b/target/riscv/translate.c
28
+++ b/target/riscv/cpu_helper.c
19
@@ -XXX,XX +XXX,XX @@ static bool gen_arith_imm_tl(DisasContext *ctx, arg_i *a, DisasExtend ext,
29
@@ -XXX,XX +XXX,XX @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
20
return true;
30
__func__, pa, ret, prot_pmp, tlb_size);
21
}
31
22
32
prot &= prot_pmp;
23
-static bool gen_arith_div_w(DisasContext *ctx, arg_r *a,
33
- }
24
- void(*func)(TCGv, TCGv, TCGv))
25
-{
26
- TCGv source1, source2;
27
- source1 = tcg_temp_new();
28
- source2 = tcg_temp_new();
29
-
34
-
30
- gen_get_gpr(ctx, source1, a->rs1);
35
- if (ret != TRANSLATE_SUCCESS) {
31
- gen_get_gpr(ctx, source2, a->rs2);
36
+ } else {
32
- tcg_gen_ext32s_tl(source1, source1);
37
/*
33
- tcg_gen_ext32s_tl(source2, source2);
38
* Guest physical address translation failed, this is a HS
34
-
39
* level exception
35
- (*func)(source1, source1, source2);
40
*/
36
-
41
first_stage_error = false;
37
- tcg_gen_ext32s_tl(source1, source1);
42
- env->guest_phys_fault_addr = (im_address |
38
- gen_set_gpr(ctx, a->rd, source1);
43
- (address &
39
- tcg_temp_free(source1);
44
- (TARGET_PAGE_SIZE - 1))) >> 2;
40
- tcg_temp_free(source2);
45
+ if (ret != TRANSLATE_PMP_FAIL) {
41
- return true;
46
+ env->guest_phys_fault_addr = (im_address |
42
-}
47
+ (address &
43
-
48
+ (TARGET_PAGE_SIZE - 1))) >> 2;
44
-static bool gen_arith_div_uw(DisasContext *ctx, arg_r *a,
49
+ }
45
- void(*func)(TCGv, TCGv, TCGv))
50
}
46
-{
51
}
47
- TCGv source1, source2;
52
} else {
48
- source1 = tcg_temp_new();
49
- source2 = tcg_temp_new();
50
-
51
- gen_get_gpr(ctx, source1, a->rs1);
52
- gen_get_gpr(ctx, source2, a->rs2);
53
- tcg_gen_ext32u_tl(source1, source1);
54
- tcg_gen_ext32u_tl(source2, source2);
55
-
56
- (*func)(source1, source1, source2);
57
-
58
- tcg_gen_ext32s_tl(source1, source1);
59
- gen_set_gpr(ctx, a->rd, source1);
60
- tcg_temp_free(source1);
61
- tcg_temp_free(source2);
62
- return true;
63
-}
64
-
65
static void gen_pack(TCGv ret, TCGv arg1, TCGv arg2)
66
{
67
tcg_gen_deposit_tl(ret, arg1, arg2,
68
diff --git a/target/riscv/insn_trans/trans_rvm.c.inc b/target/riscv/insn_trans/trans_rvm.c.inc
69
index XXXXXXX..XXXXXXX 100644
70
--- a/target/riscv/insn_trans/trans_rvm.c.inc
71
+++ b/target/riscv/insn_trans/trans_rvm.c.inc
72
@@ -XXX,XX +XXX,XX @@ static bool trans_divw(DisasContext *ctx, arg_divw *a)
73
{
74
REQUIRE_64BIT(ctx);
75
REQUIRE_EXT(ctx, RVM);
76
-
77
- return gen_arith_div_w(ctx, a, &gen_div);
78
+ ctx->w = true;
79
+ return gen_arith(ctx, a, EXT_SIGN, gen_div);
80
}
81
82
static bool trans_divuw(DisasContext *ctx, arg_divuw *a)
83
{
84
REQUIRE_64BIT(ctx);
85
REQUIRE_EXT(ctx, RVM);
86
-
87
- return gen_arith_div_uw(ctx, a, &gen_divu);
88
+ ctx->w = true;
89
+ return gen_arith(ctx, a, EXT_ZERO, gen_divu);
90
}
91
92
static bool trans_remw(DisasContext *ctx, arg_remw *a)
93
{
94
REQUIRE_64BIT(ctx);
95
REQUIRE_EXT(ctx, RVM);
96
-
97
- return gen_arith_div_w(ctx, a, &gen_rem);
98
+ ctx->w = true;
99
+ return gen_arith(ctx, a, EXT_SIGN, gen_rem);
100
}
101
102
static bool trans_remuw(DisasContext *ctx, arg_remuw *a)
103
{
104
REQUIRE_64BIT(ctx);
105
REQUIRE_EXT(ctx, RVM);
106
-
107
- return gen_arith_div_uw(ctx, a, &gen_remu);
108
+ ctx->w = true;
109
+ return gen_arith(ctx, a, EXT_ZERO, gen_remu);
110
}
111
--
53
--
112
2.31.1
54
2.45.1
113
114
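Replacing gen_arith_div_w with ctx->w plus gen_arith(EXT_SIGN/EXT_ZERO, ...) relies on the standard W-form pattern: extend the 32-bit operands, run the full-width helper, then sign-extend the low 32 bits of the result. A sketch of divw under that scheme (hypothetical name, illustrative only):

```c
#include <assert.h>
#include <stdint.h>

/* RV64 divw modelled as: narrow operands to 32 bits (EXT_SIGN),
 * divide with the RISC-V edge-case rules, sign-extend the result. */
static int64_t rv64_divw(int64_t rs1, int64_t rs2)
{
    int32_t a = (int32_t)rs1, b = (int32_t)rs2;
    int32_t q;

    if (b == 0) {
        q = -1;                /* div by zero: all ones */
    } else if (a == INT32_MIN && b == -1) {
        q = INT32_MIN;         /* overflow: the minimum value */
    } else {
        q = a / b;
    }
    return (int64_t)q;         /* sign-extend the 32-bit quotient */
}
```

divuw follows the same shape with EXT_ZERO extension of the operands, while the final result is still sign-extended, as the shared gen_arith path does when ctx->w is set.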
1
From: LIU Zhiwei <zhiwei_liu@c-sky.com>
1
From: Rob Bradford <rbradford@rivosinc.com>
2
2
3
For some CPUs, the ISA version has already been set in the cpu init function.
3
This extension has now been ratified:
4
Thus, only override the ISA version when it is not set, or when
4
https://jira.riscv.org/browse/RVS-2006 so the "x-" prefix can be
5
users set a different ISA version explicitly via cpu parameters.
5
removed.
6
6
7
Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
7
Since this is now a ratified extension add it to the list of extensions
8
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
8
included in the "max" CPU variant.
9
Message-id: 20210811144612.68674-1-zhiwei_liu@c-sky.com
9
10
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
11
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-ID: <20240514110217.22516-1-rbradford@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.c         | 2 +-
 target/riscv/tcg/tcg-cpu.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static const MISAExtInfo misa_ext_info_arr[] = {
     MISA_EXT_INFO(RVJ, "x-j", "Dynamic translated languages"),
     MISA_EXT_INFO(RVV, "v", "Vector operations"),
     MISA_EXT_INFO(RVG, "g", "General purpose (IMAFD_Zicsr_Zifencei)"),
-    MISA_EXT_INFO(RVB, "x-b", "Bit manipulation (Zba_Zbb_Zbs)")
+    MISA_EXT_INFO(RVB, "b", "Bit manipulation (Zba_Zbb_Zbs)")
 };
 
 static void riscv_cpu_validate_misa_mxl(RISCVCPUClass *mcc)
diff --git a/target/riscv/tcg/tcg-cpu.c b/target/riscv/tcg/tcg-cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/tcg/tcg-cpu.c
+++ b/target/riscv/tcg/tcg-cpu.c
@@ -XXX,XX +XXX,XX @@ static void riscv_init_max_cpu_extensions(Object *obj)
     const RISCVCPUMultiExtConfig *prop;
 
     /* Enable RVG, RVJ and RVV that are disabled by default */
-    riscv_cpu_set_misa_ext(env, env->misa_ext | RVG | RVJ | RVV);
+    riscv_cpu_set_misa_ext(env, env->misa_ext | RVB | RVG | RVJ | RVV);
 
     for (prop = riscv_cpu_extensions; prop && prop->name; prop++) {
         isa_ext_update_enabled(cpu, prop->offset, true);
--
2.45.1
From: Alistair Francis <alistair23@gmail.com>

When running the instruction

```
cbo.flush 0(x0)
```

QEMU would segfault.

The issue was in cpu_gpr[a->rs1] as QEMU does not have cpu_gpr[0]
allocated.

In order to fix this let's use the existing get_address()
helper. This also has the benefit of performing pointer mask
calculations on the address specified in rs1.

The pointer masking specification specifically states:

"""
Cache Management Operations: All instructions in Zicbom, Zicbop and Zicboz
"""

So this is the correct behaviour and we previously have been incorrectly
not masking the address.

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reported-by: Fabian Thomas <fabian.thomas@cispa.de>
Fixes: e05da09b7cfd ("target/riscv: implement Zicbom extension")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: qemu-stable <qemu-stable@nongnu.org>
Message-ID: <20240514023910.301766-1-alistair.francis@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvzicbo.c.inc | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/target/riscv/insn_trans/trans_rvzicbo.c.inc b/target/riscv/insn_trans/trans_rvzicbo.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvzicbo.c.inc
+++ b/target/riscv/insn_trans/trans_rvzicbo.c.inc
@@ -XXX,XX +XXX,XX @@
 static bool trans_cbo_clean(DisasContext *ctx, arg_cbo_clean *a)
 {
     REQUIRE_ZICBOM(ctx);
-    gen_helper_cbo_clean_flush(tcg_env, cpu_gpr[a->rs1]);
+    TCGv src = get_address(ctx, a->rs1, 0);
+
+    gen_helper_cbo_clean_flush(tcg_env, src);
     return true;
 }
 
 static bool trans_cbo_flush(DisasContext *ctx, arg_cbo_flush *a)
 {
     REQUIRE_ZICBOM(ctx);
-    gen_helper_cbo_clean_flush(tcg_env, cpu_gpr[a->rs1]);
+    TCGv src = get_address(ctx, a->rs1, 0);
+
+    gen_helper_cbo_clean_flush(tcg_env, src);
     return true;
 }
 
 static bool trans_cbo_inval(DisasContext *ctx, arg_cbo_inval *a)
 {
     REQUIRE_ZICBOM(ctx);
-    gen_helper_cbo_inval(tcg_env, cpu_gpr[a->rs1]);
+    TCGv src = get_address(ctx, a->rs1, 0);
+
+    gen_helper_cbo_inval(tcg_env, src);
     return true;
 }
 
 static bool trans_cbo_zero(DisasContext *ctx, arg_cbo_zero *a)
 {
     REQUIRE_ZICBOZ(ctx);
-    gen_helper_cbo_zero(tcg_env, cpu_gpr[a->rs1]);
+    TCGv src = get_address(ctx, a->rs1, 0);
+
+    gen_helper_cbo_zero(tcg_env, src);
     return true;
 }
--
2.45.1
From: Yong-Xuan Wang <yongxuan.wang@sifive.com>

In the AIA spec, each hart (or each hart within a group) has a unique hart
number to locate the memory pages of interrupt files in the address
space. The number of bits required to represent any hart number is equal
to ceil(log2(hmax + 1)), where hmax is the largest hart number among
groups.

However, if the largest hart number among groups is a power of 2, QEMU
will pass an inaccurate hart-index-bit setting to Linux. For example, when
the guest OS has 4 harts, only ceil(log2(3 + 1)) = 2 bits are sufficient
to represent 4 harts, but we pass 3 to Linux. The code needs to be
updated to ensure accurate hart-index-bit settings.

Additionally, a Linux patch[1] is necessary to correctly recover the hart
index when the guest OS has only 1 hart, where the hart-index-bit is 0.

[1] https://lore.kernel.org/lkml/20240415064905.25184-1-yongxuan.wang@sifive.com/t/

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Message-ID: <20240515091129.28116-1-yongxuan.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/kvm/kvm-cpu.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/target/riscv/kvm/kvm-cpu.c b/target/riscv/kvm/kvm-cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/kvm/kvm-cpu.c
+++ b/target/riscv/kvm/kvm-cpu.c
@@ -XXX,XX +XXX,XX @@ void kvm_riscv_aia_create(MachineState *machine, uint64_t group_shift,
         }
     }
 
-    hart_bits = find_last_bit(&max_hart_per_socket, BITS_PER_LONG) + 1;
+
+    if (max_hart_per_socket > 1) {
+        max_hart_per_socket--;
+        hart_bits = find_last_bit(&max_hart_per_socket, BITS_PER_LONG) + 1;
+    } else {
+        hart_bits = 0;
+    }
+
     ret = kvm_device_access(aia_fd, KVM_DEV_RISCV_AIA_GRP_CONFIG,
                             KVM_DEV_RISCV_AIA_CONFIG_HART_BITS,
                             &hart_bits, true, NULL);
--
2.45.1
From: LIU Zhiwei <zhiwei_liu@c-sky.com>

For U-mode CSRs, read-only check is also needed.

Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-id: 20210810014552.4884-1-zhiwei_liu@c-sky.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/csr.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ RISCVException riscv_csrrw(CPURISCVState *env, int csrno,
     RISCVException ret;
     target_ulong old_value;
     RISCVCPU *cpu = env_archcpu(env);
+    int read_only = get_field(csrno, 0xC00) == 3;
 
     /* check privileges and return RISCV_EXCP_ILLEGAL_INST if check fails */
 #if !defined(CONFIG_USER_ONLY)
     int effective_priv = env->priv;
-    int read_only = get_field(csrno, 0xC00) == 3;
 
     if (riscv_has_ext(env, RVH) &&
         env->priv == PRV_S &&
@@ -XXX,XX +XXX,XX @@ RISCVException riscv_csrrw(CPURISCVState *env, int csrno,
         effective_priv++;
     }
 
-    if ((write_mask && read_only) ||
-        (!env->debugger && (effective_priv < get_field(csrno, 0x300)))) {
+    if (!env->debugger && (effective_priv < get_field(csrno, 0x300))) {
         return RISCV_EXCP_ILLEGAL_INST;
     }
 #endif
+    if (write_mask && read_only) {
+        return RISCV_EXCP_ILLEGAL_INST;
+    }
 
     /* ensure the CSR extension is enabled. */
     if (!cpu->cfg.ext_icsr) {
--
2.31.1

From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

Commit 33a24910ae changed 'reg_width' to use 'vlenb', i.e. vector length
in bytes, when in this context we want 'reg_width' as the length in
bits.

Fix 'reg_width' back to the value in bits like 7cb59921c05a
("target/riscv/gdbstub.c: use 'vlenb' instead of shifting 'vlen'") set
beforehand.

While we're at it, rename 'reg_width' to 'bitsize' to provide a bit more
clarity about what the variable represents. 'bitsize' is also used in
riscv_gen_dynamic_csr_feature() with the same purpose, i.e. as an input to
gdb_feature_builder_append_reg().

Cc: Akihiko Odaki <akihiko.odaki@daynix.com>
Cc: Alex Bennée <alex.bennee@linaro.org>
Reported-by: Robin Dapp <rdapp.gcc@gmail.com>
Fixes: 33a24910ae ("target/riscv: Use GDBFeature for dynamic XML")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Message-ID: <20240517203054.880861-2-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/gdbstub.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/riscv/gdbstub.c b/target/riscv/gdbstub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/gdbstub.c
+++ b/target/riscv/gdbstub.c
@@ -XXX,XX +XXX,XX @@ static GDBFeature *riscv_gen_dynamic_csr_feature(CPUState *cs, int base_reg)
 static GDBFeature *ricsv_gen_dynamic_vector_feature(CPUState *cs, int base_reg)
 {
     RISCVCPU *cpu = RISCV_CPU(cs);
-    int reg_width = cpu->cfg.vlenb;
+    int bitsize = cpu->cfg.vlenb << 3;
     GDBFeatureBuilder builder;
     int i;
 
@@ -XXX,XX +XXX,XX @@ static GDBFeature *ricsv_gen_dynamic_vector_feature(CPUState *cs, int base_reg)
 
     /* First define types and totals in a whole VL */
     for (i = 0; i < ARRAY_SIZE(vec_lanes); i++) {
-        int count = reg_width / vec_lanes[i].size;
+        int count = bitsize / vec_lanes[i].size;
         gdb_feature_builder_append_tag(
             &builder, "<vector id=\"%s\" type=\"%s\" count=\"%d\"/>",
             vec_lanes[i].id, vec_lanes[i].gdb_type, count);
@@ -XXX,XX +XXX,XX @@ static GDBFeature *ricsv_gen_dynamic_vector_feature(CPUState *cs, int base_reg)
     /* Define vector registers */
     for (i = 0; i < 32; i++) {
         gdb_feature_builder_append_reg(&builder, g_strdup_printf("v%d", i),
-                                       reg_width, i, "riscv_vector", "vector");
+                                       bitsize, i, "riscv_vector", "vector");
     }
 
     gdb_feature_builder_end(&builder);
--
2.45.1
Deleted patch
From: Peter Maydell <peter.maydell@linaro.org>

In the riscv virt machine init function, we assemble a string
plic_hart_config which is a comma-separated list of N copies of the
VIRT_PLIC_HART_CONFIG string. The code that does this has a
misunderstanding of the strncat() length argument. If the source
string is too large strncat() will write a maximum of length+1 bytes
(length bytes from the source string plus a trailing NUL), but the
code here assumes that it will write only length bytes at most.

This isn't an actual bug because the code has correctly precalculated
the amount of memory it needs to allocate so that it will never be
too small (i.e. we could have used plain old strcat()), but it does
mean that the code looks like it has a guard against accidental
overrun when it doesn't.

Rewrite the string handling here to use the glib g_strjoinv()
function, which means we don't need to do careful accountancy of
string lengths, and makes it clearer that what we're doing is
"create a comma-separated string".

Fixes: Coverity 1460752
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210812144647.10516-1-peter.maydell@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 hw/riscv/virt.c | 33 ++++++++++++++++++++-------------
 1 file changed, 20 insertions(+), 13 deletions(-)

diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -XXX,XX +XXX,XX @@ static FWCfgState *create_fw_cfg(const MachineState *mc)
     return fw_cfg;
 }
 
+/*
+ * Return the per-socket PLIC hart topology configuration string
+ * (caller must free with g_free())
+ */
+static char *plic_hart_config_string(int hart_count)
+{
+    g_autofree const char **vals = g_new(const char *, hart_count + 1);
+    int i;
+
+    for (i = 0; i < hart_count; i++) {
+        vals[i] = VIRT_PLIC_HART_CONFIG;
+    }
+    vals[i] = NULL;
+
+    /* g_strjoinv() obliges us to cast away const here */
+    return g_strjoinv(",", (char **)vals);
+}
+
 static void virt_machine_init(MachineState *machine)
 {
     const MemMapEntry *memmap = virt_memmap;
@@ -XXX,XX +XXX,XX @@ static void virt_machine_init(MachineState *machine)
     MemoryRegion *main_mem = g_new(MemoryRegion, 1);
     MemoryRegion *mask_rom = g_new(MemoryRegion, 1);
     char *plic_hart_config, *soc_name;
-    size_t plic_hart_config_len;
     target_ulong start_addr = memmap[VIRT_DRAM].base;
     target_ulong firmware_end_addr, kernel_start_addr;
     uint32_t fdt_load_addr;
     uint64_t kernel_entry;
     DeviceState *mmio_plic, *virtio_plic, *pcie_plic;
-    int i, j, base_hartid, hart_count;
+    int i, base_hartid, hart_count;
 
     /* Check socket count limit */
     if (VIRT_SOCKETS_MAX < riscv_socket_count(machine)) {
@@ -XXX,XX +XXX,XX @@ static void virt_machine_init(MachineState *machine)
                             SIFIVE_CLINT_TIMEBASE_FREQ, true);
 
         /* Per-socket PLIC hart topology configuration string */
-        plic_hart_config_len =
-            (strlen(VIRT_PLIC_HART_CONFIG) + 1) * hart_count;
-        plic_hart_config = g_malloc0(plic_hart_config_len);
-        for (j = 0; j < hart_count; j++) {
-            if (j != 0) {
-                strncat(plic_hart_config, ",", plic_hart_config_len);
-            }
-            strncat(plic_hart_config, VIRT_PLIC_HART_CONFIG,
-                    plic_hart_config_len);
-            plic_hart_config_len -= (strlen(VIRT_PLIC_HART_CONFIG) + 1);
-        }
+        plic_hart_config = plic_hart_config_string(hart_count);
 
         /* Per-socket PLIC */
         s->plic[i] = sifive_plic_create(
--
2.31.1
Deleted patch
From: Joe Komlodi <joe.komlodi@xilinx.com>

We already have some utilities to handle 64-bit wide registers, so this just
adds some more for:
- Initializing 64-bit registers
- Extracting and depositing to an array of 64-bit registers

Signed-off-by: Joe Komlodi <joe.komlodi@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 1626805903-162860-2-git-send-email-joe.komlodi@xilinx.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 include/hw/register.h       |  8 ++++++++
 include/hw/registerfields.h |  8 ++++++++
 hw/core/register.c          | 12 ++++++++++++
 3 files changed, 28 insertions(+)

diff --git a/include/hw/register.h b/include/hw/register.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/register.h
+++ b/include/hw/register.h
@@ -XXX,XX +XXX,XX @@ RegisterInfoArray *register_init_block32(DeviceState *owner,
                                          bool debug_enabled,
                                          uint64_t memory_size);
 
+RegisterInfoArray *register_init_block64(DeviceState *owner,
+                                         const RegisterAccessInfo *rae,
+                                         int num, RegisterInfo *ri,
+                                         uint64_t *data,
+                                         const MemoryRegionOps *ops,
+                                         bool debug_enabled,
+                                         uint64_t memory_size);
+
 /**
  * This function should be called to cleanup the registers that were initialized
  * when calling register_init_block32(). This function should only be called
diff --git a/include/hw/registerfields.h b/include/hw/registerfields.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/registerfields.h
+++ b/include/hw/registerfields.h
@@ -XXX,XX +XXX,XX @@
     enum { A_ ## reg = (addr) };                                          \
     enum { R_ ## reg = (addr) / 2 };
 
+#define REG64(reg, addr)                                                  \
+    enum { A_ ## reg = (addr) };                                          \
+    enum { R_ ## reg = (addr) / 8 };
+
 /* Define SHIFT, LENGTH and MASK constants for a field within a register */
 
 /* This macro will define R_FOO_BAR_MASK, R_FOO_BAR_SHIFT and R_FOO_BAR_LENGTH
@@ -XXX,XX +XXX,XX @@
 /* Extract a field from an array of registers */
 #define ARRAY_FIELD_EX32(regs, reg, field)                                \
     FIELD_EX32((regs)[R_ ## reg], reg, field)
+#define ARRAY_FIELD_EX64(regs, reg, field)                                \
+    FIELD_EX64((regs)[R_ ## reg], reg, field)
 
 /* Deposit a register field.
  * Assigning values larger then the target field will result in
@@ -XXX,XX +XXX,XX @@
 /* Deposit a field to array of registers. */
 #define ARRAY_FIELD_DP32(regs, reg, field, val)                           \
     (regs)[R_ ## reg] = FIELD_DP32((regs)[R_ ## reg], reg, field, val);
+#define ARRAY_FIELD_DP64(regs, reg, field, val)                           \
+    (regs)[R_ ## reg] = FIELD_DP64((regs)[R_ ## reg], reg, field, val);
 
 #endif
diff --git a/hw/core/register.c b/hw/core/register.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/core/register.c
+++ b/hw/core/register.c
@@ -XXX,XX +XXX,XX @@ RegisterInfoArray *register_init_block32(DeviceState *owner,
                                data, ops, debug_enabled, memory_size, 32);
 }
 
+RegisterInfoArray *register_init_block64(DeviceState *owner,
+                                         const RegisterAccessInfo *rae,
+                                         int num, RegisterInfo *ri,
+                                         uint64_t *data,
+                                         const MemoryRegionOps *ops,
+                                         bool debug_enabled,
+                                         uint64_t memory_size)
+{
+    return register_init_block(owner, rae, num, ri, (void *)
+                               data, ops, debug_enabled, memory_size, 64);
+}
+
 void register_finalize_block(RegisterInfoArray *r_array)
 {
     object_unparent(OBJECT(&r_array->mem));
--
2.31.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Most arithmetic does not require extending the inputs.
Exceptions include division, comparison and minmax.

Begin using ctx->w, which allows elimination of gen_addw,
gen_subw, gen_mulw.

Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-7-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/translate.c                | 69 +++++++------------------
 target/riscv/insn_trans/trans_rvb.c.inc | 30 +++++------
 target/riscv/insn_trans/trans_rvi.c.inc | 39 ++++++++------
 target/riscv/insn_trans/trans_rvm.c.inc | 16 +++---
 4 files changed, 64 insertions(+), 90 deletions(-)

diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_get_gpr(DisasContext *ctx, TCGv t, int reg_num)
     tcg_gen_mov_tl(t, get_gpr(ctx, reg_num, EXT_NONE));
 }
 
-static TCGv __attribute__((unused)) dest_gpr(DisasContext *ctx, int reg_num)
+static TCGv dest_gpr(DisasContext *ctx, int reg_num)
 {
     if (reg_num == 0 || ctx->w) {
         return temp_new(ctx);
@@ -XXX,XX +XXX,XX @@ static int ex_rvc_shifti(DisasContext *ctx, int imm)
 /* Include the auto-generated decoder for 32 bit insn */
 #include "decode-insn32.c.inc"
 
-static bool gen_arith_imm_fn(DisasContext *ctx, arg_i *a,
+static bool gen_arith_imm_fn(DisasContext *ctx, arg_i *a, DisasExtend ext,
                              void (*func)(TCGv, TCGv, target_long))
 {
-    TCGv source1;
-    source1 = tcg_temp_new();
-
-    gen_get_gpr(ctx, source1, a->rs1);
+    TCGv dest = dest_gpr(ctx, a->rd);
+    TCGv src1 = get_gpr(ctx, a->rs1, ext);
 
-    (*func)(source1, source1, a->imm);
+    func(dest, src1, a->imm);
 
-    gen_set_gpr(ctx, a->rd, source1);
-    tcg_temp_free(source1);
+    gen_set_gpr(ctx, a->rd, dest);
     return true;
 }
 
-static bool gen_arith_imm_tl(DisasContext *ctx, arg_i *a,
+static bool gen_arith_imm_tl(DisasContext *ctx, arg_i *a, DisasExtend ext,
                              void (*func)(TCGv, TCGv, TCGv))
 {
-    TCGv source1, source2;
-    source1 = tcg_temp_new();
-    source2 = tcg_temp_new();
+    TCGv dest = dest_gpr(ctx, a->rd);
+    TCGv src1 = get_gpr(ctx, a->rs1, ext);
+    TCGv src2 = tcg_constant_tl(a->imm);
 
-    gen_get_gpr(ctx, source1, a->rs1);
-    tcg_gen_movi_tl(source2, a->imm);
+    func(dest, src1, src2);
 
-    (*func)(source1, source1, source2);
-
-    gen_set_gpr(ctx, a->rd, source1);
-    tcg_temp_free(source1);
-    tcg_temp_free(source2);
+    gen_set_gpr(ctx, a->rd, dest);
     return true;
 }
 
-static void gen_addw(TCGv ret, TCGv arg1, TCGv arg2)
-{
-    tcg_gen_add_tl(ret, arg1, arg2);
-    tcg_gen_ext32s_tl(ret, ret);
-}
-
-static void gen_subw(TCGv ret, TCGv arg1, TCGv arg2)
-{
-    tcg_gen_sub_tl(ret, arg1, arg2);
-    tcg_gen_ext32s_tl(ret, ret);
-}
-
-static void gen_mulw(TCGv ret, TCGv arg1, TCGv arg2)
-{
-    tcg_gen_mul_tl(ret, arg1, arg2);
-    tcg_gen_ext32s_tl(ret, ret);
-}
-
 static bool gen_arith_div_w(DisasContext *ctx, arg_r *a,
                             void(*func)(TCGv, TCGv, TCGv))
 {
@@ -XXX,XX +XXX,XX @@ static void gen_add_uw(TCGv ret, TCGv arg1, TCGv arg2)
     tcg_gen_add_tl(ret, arg1, arg2);
 }
 
-static bool gen_arith(DisasContext *ctx, arg_r *a,
-                      void(*func)(TCGv, TCGv, TCGv))
+static bool gen_arith(DisasContext *ctx, arg_r *a, DisasExtend ext,
+                      void (*func)(TCGv, TCGv, TCGv))
 {
-    TCGv source1, source2;
-    source1 = tcg_temp_new();
-    source2 = tcg_temp_new();
+    TCGv dest = dest_gpr(ctx, a->rd);
+    TCGv src1 = get_gpr(ctx, a->rs1, ext);
+    TCGv src2 = get_gpr(ctx, a->rs2, ext);
 
-    gen_get_gpr(ctx, source1, a->rs1);
-    gen_get_gpr(ctx, source2, a->rs2);
+    func(dest, src1, src2);
 
-    (*func)(source1, source1, source2);
-
-    gen_set_gpr(ctx, a->rd, source1);
-    tcg_temp_free(source1);
-    tcg_temp_free(source2);
+    gen_set_gpr(ctx, a->rd, dest);
     return true;
 }
 
diff --git a/target/riscv/insn_trans/trans_rvb.c.inc b/target/riscv/insn_trans/trans_rvb.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvb.c.inc
+++ b/target/riscv/insn_trans/trans_rvb.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_cpop(DisasContext *ctx, arg_cpop *a)
 static bool trans_andn(DisasContext *ctx, arg_andn *a)
 {
     REQUIRE_EXT(ctx, RVB);
-    return gen_arith(ctx, a, tcg_gen_andc_tl);
+    return gen_arith(ctx, a, EXT_NONE, tcg_gen_andc_tl);
 }
 
 static bool trans_orn(DisasContext *ctx, arg_orn *a)
 {
     REQUIRE_EXT(ctx, RVB);
-    return gen_arith(ctx, a, tcg_gen_orc_tl);
+    return gen_arith(ctx, a, EXT_NONE, tcg_gen_orc_tl);
 }
 
 static bool trans_xnor(DisasContext *ctx, arg_xnor *a)
 {
     REQUIRE_EXT(ctx, RVB);
-    return gen_arith(ctx, a, tcg_gen_eqv_tl);
+    return gen_arith(ctx, a, EXT_NONE, tcg_gen_eqv_tl);
 }
 
 static bool trans_pack(DisasContext *ctx, arg_pack *a)
 {
     REQUIRE_EXT(ctx, RVB);
-    return gen_arith(ctx, a, gen_pack);
+    return gen_arith(ctx, a, EXT_NONE, gen_pack);
 }
 
 static bool trans_packu(DisasContext *ctx, arg_packu *a)
 {
     REQUIRE_EXT(ctx, RVB);
-    return gen_arith(ctx, a, gen_packu);
+    return gen_arith(ctx, a, EXT_NONE, gen_packu);
 }
 
 static bool trans_packh(DisasContext *ctx, arg_packh *a)
 {
     REQUIRE_EXT(ctx, RVB);
-    return gen_arith(ctx, a, gen_packh);
+    return gen_arith(ctx, a, EXT_NONE, gen_packh);
 }
 
 static bool trans_min(DisasContext *ctx, arg_min *a)
 {
     REQUIRE_EXT(ctx, RVB);
-    return gen_arith(ctx, a, tcg_gen_smin_tl);
+    return gen_arith(ctx, a, EXT_SIGN, tcg_gen_smin_tl);
 }
 
 static bool trans_max(DisasContext *ctx, arg_max *a)
 {
     REQUIRE_EXT(ctx, RVB);
-    return gen_arith(ctx, a, tcg_gen_smax_tl);
+    return gen_arith(ctx, a, EXT_SIGN, tcg_gen_smax_tl);
 }
 
 static bool trans_minu(DisasContext *ctx, arg_minu *a)
 {
     REQUIRE_EXT(ctx, RVB);
-    return gen_arith(ctx, a, tcg_gen_umin_tl);
+    return gen_arith(ctx, a, EXT_SIGN, tcg_gen_umin_tl);
 }
 
 static bool trans_maxu(DisasContext *ctx, arg_maxu *a)
 {
     REQUIRE_EXT(ctx, RVB);
-    return gen_arith(ctx, a, tcg_gen_umax_tl);
+    return gen_arith(ctx, a, EXT_SIGN, tcg_gen_umax_tl);
 }
 
 static bool trans_sext_b(DisasContext *ctx, arg_sext_b *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_gorci(DisasContext *ctx, arg_gorci *a)
 static bool trans_sh##SHAMT##add(DisasContext *ctx, arg_sh##SHAMT##add *a) \
 { \
     REQUIRE_EXT(ctx, RVB); \
-    return gen_arith(ctx, a, gen_sh##SHAMT##add); \
+    return gen_arith(ctx, a, EXT_NONE, gen_sh##SHAMT##add); \
 }
 
 GEN_TRANS_SHADD(1)
@@ -XXX,XX +XXX,XX @@ static bool trans_packw(DisasContext *ctx, arg_packw *a)
 {
     REQUIRE_64BIT(ctx);
     REQUIRE_EXT(ctx, RVB);
-    return gen_arith(ctx, a, gen_packw);
+    return gen_arith(ctx, a, EXT_NONE, gen_packw);
 }
 
 static bool trans_packuw(DisasContext *ctx, arg_packuw *a)
 {
     REQUIRE_64BIT(ctx);
     REQUIRE_EXT(ctx, RVB);
-    return gen_arith(ctx, a, gen_packuw);
+    return gen_arith(ctx, a, EXT_NONE, gen_packuw);
 }
 
 static bool trans_bsetw(DisasContext *ctx, arg_bsetw *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_sh##SHAMT##add_uw(DisasContext *ctx, \
 { \
     REQUIRE_64BIT(ctx); \
     REQUIRE_EXT(ctx, RVB); \
-    return gen_arith(ctx, a, gen_sh##SHAMT##add_uw); \
+    return gen_arith(ctx, a, EXT_NONE, gen_sh##SHAMT##add_uw); \
 }
 
 GEN_TRANS_SHADD_UW(1)
@@ -XXX,XX +XXX,XX @@ static bool trans_add_uw(DisasContext *ctx, arg_add_uw *a)
 {
     REQUIRE_64BIT(ctx);
     REQUIRE_EXT(ctx, RVB);
-    return gen_arith(ctx, a, gen_add_uw);
+    return gen_arith(ctx, a, EXT_NONE, gen_add_uw);
 }
 
 static bool trans_slli_uw(DisasContext *ctx, arg_slli_uw *a)
diff --git a/target/riscv/insn_trans/trans_rvi.c.inc b/target/riscv/insn_trans/trans_rvi.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvi.c.inc
+++ b/target/riscv/insn_trans/trans_rvi.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_sd(DisasContext *ctx, arg_sd *a)
 
 static bool trans_addi(DisasContext *ctx, arg_addi *a)
 {
-    return gen_arith_imm_fn(ctx, a, &tcg_gen_addi_tl);
+    return gen_arith_imm_fn(ctx, a, EXT_NONE, tcg_gen_addi_tl);
 }
 
 static void gen_slt(TCGv ret, TCGv s1, TCGv s2)
@@ -XXX,XX +XXX,XX @@ static void gen_sltu(TCGv ret, TCGv s1, TCGv s2)
     tcg_gen_setcond_tl(TCG_COND_LTU, ret, s1, s2);
 }
 
-
 static bool trans_slti(DisasContext *ctx, arg_slti *a)
 {
-    return gen_arith_imm_tl(ctx, a, &gen_slt);
+    return gen_arith_imm_tl(ctx, a, EXT_SIGN, gen_slt);
 }
 
 static bool trans_sltiu(DisasContext *ctx, arg_sltiu *a)
 {
-    return gen_arith_imm_tl(ctx, a, &gen_sltu);
+    return gen_arith_imm_tl(ctx, a, EXT_SIGN, gen_sltu);
 }
 
 static bool trans_xori(DisasContext *ctx, arg_xori *a)
 {
-    return gen_arith_imm_fn(ctx, a, &tcg_gen_xori_tl);
+    return gen_arith_imm_fn(ctx, a, EXT_NONE, tcg_gen_xori_tl);
 }
+
 static bool trans_ori(DisasContext *ctx, arg_ori *a)
 {
-    return gen_arith_imm_fn(ctx, a, &tcg_gen_ori_tl);
+    return gen_arith_imm_fn(ctx, a, EXT_NONE, tcg_gen_ori_tl);
 }
+
 static bool trans_andi(DisasContext *ctx, arg_andi *a)
 {
-    return gen_arith_imm_fn(ctx, a, &tcg_gen_andi_tl);
+    return gen_arith_imm_fn(ctx, a, EXT_NONE, tcg_gen_andi_tl);
 }
+
 static bool trans_slli(DisasContext *ctx, arg_slli *a)
 {
     return gen_shifti(ctx, a, tcg_gen_shl_tl);
@@ -XXX,XX +XXX,XX @@ static bool trans_srai(DisasContext *ctx, arg_srai *a)
 
 static bool trans_add(DisasContext *ctx, arg_add *a)
 {
-    return gen_arith(ctx, a, &tcg_gen_add_tl);
+    return gen_arith(ctx, a, EXT_NONE, tcg_gen_add_tl);
 }
 
 static bool trans_sub(DisasContext *ctx, arg_sub *a)
 {
-    return gen_arith(ctx, a, &tcg_gen_sub_tl);
+    return gen_arith(ctx, a, EXT_NONE, tcg_gen_sub_tl);
 }
 
 static bool trans_sll(DisasContext *ctx, arg_sll *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_sll(DisasContext *ctx, arg_sll *a)
 
 static bool trans_slt(DisasContext *ctx, arg_slt *a)
 {
-    return gen_arith(ctx, a, &gen_slt);
+    return gen_arith(ctx, a, EXT_SIGN, gen_slt);
 }
 
 static bool trans_sltu(DisasContext *ctx, arg_sltu *a)
 {
-    return gen_arith(ctx, a, &gen_sltu);
+    return gen_arith(ctx, a, EXT_SIGN, gen_sltu);
 }
 
 static bool trans_xor(DisasContext *ctx, arg_xor *a)
 {
-    return gen_arith(ctx, a, &tcg_gen_xor_tl);
+    return gen_arith(ctx, a, EXT_NONE, tcg_gen_xor_tl);
 }
 
 static bool trans_srl(DisasContext *ctx, arg_srl *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_sra(DisasContext *ctx, arg_sra *a)
 
 static bool trans_or(DisasContext *ctx, arg_or *a)
 {
-    return gen_arith(ctx, a, &tcg_gen_or_tl);
+    return gen_arith(ctx, a, EXT_NONE, tcg_gen_or_tl);
 }
 
 static bool trans_and(DisasContext *ctx, arg_and *a)
 {
-    return gen_arith(ctx, a, &tcg_gen_and_tl);
+    return gen_arith(ctx, a, EXT_NONE, tcg_gen_and_tl);
 }
 
 static bool trans_addiw(DisasContext *ctx, arg_addiw *a)
 {
     REQUIRE_64BIT(ctx);
-    return gen_arith_imm_tl(ctx, a, &gen_addw);
+    ctx->w = true;
+    return gen_arith_imm_fn(ctx, a, EXT_NONE, tcg_gen_addi_tl);
 }
 
 static bool trans_slliw(DisasContext *ctx, arg_slliw *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_sraiw(DisasContext *ctx, arg_sraiw *a)
 static bool trans_addw(DisasContext *ctx, arg_addw *a)
 {
     REQUIRE_64BIT(ctx);
-    return gen_arith(ctx, a, &gen_addw);
+    ctx->w = true;
+    return gen_arith(ctx, a, EXT_NONE, tcg_gen_add_tl);
 }
 
 static bool trans_subw(DisasContext *ctx, arg_subw *a)
 {
     REQUIRE_64BIT(ctx);
-    return gen_arith(ctx, a, &gen_subw);
+    ctx->w = true;
+    return gen_arith(ctx, a, EXT_NONE, tcg_gen_sub_tl);
 }
 
 static bool trans_sllw(DisasContext *ctx, arg_sllw *a)
diff --git a/target/riscv/insn_trans/trans_rvm.c.inc b/target/riscv/insn_trans/trans_rvm.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvm.c.inc
+++ b/target/riscv/insn_trans/trans_rvm.c.inc
@@ -XXX,XX +XXX,XX @@
 static bool trans_mul(DisasContext *ctx, arg_mul *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith(ctx, a, &tcg_gen_mul_tl);
+    return gen_arith(ctx, a, EXT_NONE, tcg_gen_mul_tl);
 }
 
 static bool trans_mulh(DisasContext *ctx, arg_mulh *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_mulh(DisasContext *ctx, arg_mulh *a)
 static bool trans_mulhsu(DisasContext *ctx, arg_mulhsu *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith(ctx, a, &gen_mulhsu);
+    return gen_arith(ctx, a, EXT_NONE, gen_mulhsu);
 }
 
 static bool trans_mulhu(DisasContext *ctx, arg_mulhu *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_mulhu(DisasContext *ctx, arg_mulhu *a)
 static bool trans_div(DisasContext *ctx, arg_div *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith(ctx, a, &gen_div);
+    return gen_arith(ctx, a, EXT_SIGN, gen_div);
 }
 
 static bool trans_divu(DisasContext *ctx, arg_divu *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith(ctx, a, &gen_divu);
+    return gen_arith(ctx, a, EXT_ZERO, gen_divu);
 }
 
 static bool trans_rem(DisasContext *ctx, arg_rem *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith(ctx, a, &gen_rem);
+    return gen_arith(ctx, a, EXT_SIGN, gen_rem);
 }
 
 static bool trans_remu(DisasContext *ctx, arg_remu *a)
 {
     REQUIRE_EXT(ctx, RVM);
-    return gen_arith(ctx, a, &gen_remu);
+    return gen_arith(ctx, a, EXT_ZERO, gen_remu);
 }
 
 static bool trans_mulw(DisasContext *ctx, arg_mulw *a)
 {
     REQUIRE_64BIT(ctx);
     REQUIRE_EXT(ctx, RVM);
-
-    return gen_arith(ctx, a, &gen_mulw);
+    ctx->w = true;
+    return gen_arith(ctx, a, EXT_NONE, tcg_gen_mul_tl);
 }
 
 static bool trans_divw(DisasContext *ctx, arg_divw *a)
--
2.31.1
From: Richard Henderson <richard.henderson@linaro.org>

Split out gen_mulh and gen_mulhu and use the common helper.

Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210823195529.560295-9-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/insn_trans/trans_rvm.c.inc | 40 +++++++++++--------------
 1 file changed, 18 insertions(+), 22 deletions(-)

From: Alistair Francis <alistair23@gmail.com>

Previously we only listed a single pmpcfg CSR and the first 16 pmpaddr
CSRs. This patch fixes this so that all 16 pmpcfg and all 64 pmpaddr
CSRs are part of the disassembly.

Reported-by: Eric DeVolder <eric_devolder@yahoo.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Fixes: ea10325917 ("RISC-V Disassembler")
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Message-ID: <20240514051615.330979-1-alistair.francis@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 disas/riscv.c | 65 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 64 insertions(+), 1 deletion(-)
diff --git a/target/riscv/insn_trans/trans_rvm.c.inc b/target/riscv/insn_trans/trans_rvm.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/insn_trans/trans_rvm.c.inc
+++ b/target/riscv/insn_trans/trans_rvm.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_mul(DisasContext *ctx, arg_mul *a)
     return gen_arith(ctx, a, EXT_NONE, tcg_gen_mul_tl);
 }
 
-static bool trans_mulh(DisasContext *ctx, arg_mulh *a)
+static void gen_mulh(TCGv ret, TCGv s1, TCGv s2)
 {
-    REQUIRE_EXT(ctx, RVM);
-    TCGv source1 = tcg_temp_new();
-    TCGv source2 = tcg_temp_new();
-    gen_get_gpr(ctx, source1, a->rs1);
-    gen_get_gpr(ctx, source2, a->rs2);
+    TCGv discard = tcg_temp_new();
 
-    tcg_gen_muls2_tl(source2, source1, source1, source2);
+    tcg_gen_muls2_tl(discard, ret, s1, s2);
+    tcg_temp_free(discard);
+}
 
-    gen_set_gpr(ctx, a->rd, source1);
-    tcg_temp_free(source1);
-    tcg_temp_free(source2);
-    return true;
+static bool trans_mulh(DisasContext *ctx, arg_mulh *a)
+{
+    REQUIRE_EXT(ctx, RVM);
+    return gen_arith(ctx, a, EXT_NONE, gen_mulh);
 }
 
 static bool trans_mulhsu(DisasContext *ctx, arg_mulhsu *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_mulhsu(DisasContext *ctx, arg_mulhsu *a)
     return gen_arith(ctx, a, EXT_NONE, gen_mulhsu);
 }
 
-static bool trans_mulhu(DisasContext *ctx, arg_mulhu *a)
+static void gen_mulhu(TCGv ret, TCGv s1, TCGv s2)
 {
-    REQUIRE_EXT(ctx, RVM);
-    TCGv source1 = tcg_temp_new();
-    TCGv source2 = tcg_temp_new();
-    gen_get_gpr(ctx, source1, a->rs1);
-    gen_get_gpr(ctx, source2, a->rs2);
+    TCGv discard = tcg_temp_new();
 
-    tcg_gen_mulu2_tl(source2, source1, source1, source2);
+    tcg_gen_mulu2_tl(discard, ret, s1, s2);
+    tcg_temp_free(discard);
+}
 
-    gen_set_gpr(ctx, a->rd, source1);
-    tcg_temp_free(source1);
-    tcg_temp_free(source2);
-    return true;
+static bool trans_mulhu(DisasContext *ctx, arg_mulhu *a)
+{
+    REQUIRE_EXT(ctx, RVM);
+    return gen_arith(ctx, a, EXT_NONE, gen_mulhu);
 }
 
 static bool trans_div(DisasContext *ctx, arg_div *a)

diff --git a/disas/riscv.c b/disas/riscv.c
index XXXXXXX..XXXXXXX 100644
--- a/disas/riscv.c
+++ b/disas/riscv.c
@@ -XXX,XX +XXX,XX @@ static const char *csr_name(int csrno)
     case 0x0383: return "mibound";
     case 0x0384: return "mdbase";
     case 0x0385: return "mdbound";
-    case 0x03a0: return "pmpcfg3";
+    case 0x03a0: return "pmpcfg0";
+    case 0x03a1: return "pmpcfg1";
+    case 0x03a2: return "pmpcfg2";
+    case 0x03a3: return "pmpcfg3";
+    case 0x03a4: return "pmpcfg4";
+    case 0x03a5: return "pmpcfg5";
+    case 0x03a6: return "pmpcfg6";
+    case 0x03a7: return "pmpcfg7";
+    case 0x03a8: return "pmpcfg8";
+    case 0x03a9: return "pmpcfg9";
+    case 0x03aa: return "pmpcfg10";
+    case 0x03ab: return "pmpcfg11";
+    case 0x03ac: return "pmpcfg12";
+    case 0x03ad: return "pmpcfg13";
+    case 0x03ae: return "pmpcfg14";
+    case 0x03af: return "pmpcfg15";
     case 0x03b0: return "pmpaddr0";
     case 0x03b1: return "pmpaddr1";
     case 0x03b2: return "pmpaddr2";
@@ -XXX,XX +XXX,XX @@ static const char *csr_name(int csrno)
     case 0x03bd: return "pmpaddr13";
     case 0x03be: return "pmpaddr14";
     case 0x03bf: return "pmpaddr15";
+    case 0x03c0: return "pmpaddr16";
+    case 0x03c1: return "pmpaddr17";
+    case 0x03c2: return "pmpaddr18";
+    case 0x03c3: return "pmpaddr19";
+    case 0x03c4: return "pmpaddr20";
+    case 0x03c5: return "pmpaddr21";
+    case 0x03c6: return "pmpaddr22";
+    case 0x03c7: return "pmpaddr23";
+    case 0x03c8: return "pmpaddr24";
+    case 0x03c9: return "pmpaddr25";
+    case 0x03ca: return "pmpaddr26";
+    case 0x03cb: return "pmpaddr27";
+    case 0x03cc: return "pmpaddr28";
+    case 0x03cd: return "pmpaddr29";
+    case 0x03ce: return "pmpaddr30";
+    case 0x03cf: return "pmpaddr31";
+    case 0x03d0: return "pmpaddr32";
+    case 0x03d1: return "pmpaddr33";
+    case 0x03d2: return "pmpaddr34";
+    case 0x03d3: return "pmpaddr35";
+    case 0x03d4: return "pmpaddr36";
+    case 0x03d5: return "pmpaddr37";
+    case 0x03d6: return "pmpaddr38";
+    case 0x03d7: return "pmpaddr39";
+    case 0x03d8: return "pmpaddr40";
+    case 0x03d9: return "pmpaddr41";
+    case 0x03da: return "pmpaddr42";
+    case 0x03db: return "pmpaddr43";
+    case 0x03dc: return "pmpaddr44";
+    case 0x03dd: return "pmpaddr45";
+    case 0x03de: return "pmpaddr46";
+    case 0x03df: return "pmpaddr47";
+    case 0x03e0: return "pmpaddr48";
+    case 0x03e1: return "pmpaddr49";
+    case 0x03e2: return "pmpaddr50";
+    case 0x03e3: return "pmpaddr51";
+    case 0x03e4: return "pmpaddr52";
+    case 0x03e5: return "pmpaddr53";
+    case 0x03e6: return "pmpaddr54";
+    case 0x03e7: return "pmpaddr55";
+    case 0x03e8: return "pmpaddr56";
+    case 0x03e9: return "pmpaddr57";
+    case 0x03ea: return "pmpaddr58";
+    case 0x03eb: return "pmpaddr59";
94
+ case 0x03ec: return "pmpaddr60";
95
+ case 0x03ed: return "pmpaddr61";
96
+ case 0x03ee: return "pmpaddr62";
97
+ case 0x03ef: return "pmpaddr63";
98
case 0x0780: return "mtohost";
99
case 0x0781: return "mfromhost";
100
case 0x0782: return "mreset";
78
--
101
--
79
2.31.1
102
2.45.1
80
81
diff view generated by jsdifflib
From: Richard Henderson <richard.henderson@linaro.org>

We failed to write into *val for these read functions;
replace them with read_zero. Only warn about unsupported
non-zero value when writing a non-zero value.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-18-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/csr.c | 26 ++++++++------------------
 1 file changed, 8 insertions(+), 18 deletions(-)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static RISCVException write_hcounteren(CPURISCVState *env, int csrno,
     return RISCV_EXCP_NONE;
 }

-static RISCVException read_hgeie(CPURISCVState *env, int csrno,
-                                 target_ulong *val)
-{
-    qemu_log_mask(LOG_UNIMP, "No support for a non-zero GEILEN.");
-    return RISCV_EXCP_NONE;
-}
-
 static RISCVException write_hgeie(CPURISCVState *env, int csrno,
                                   target_ulong val)
 {
-    qemu_log_mask(LOG_UNIMP, "No support for a non-zero GEILEN.");
+    if (val) {
+        qemu_log_mask(LOG_UNIMP, "No support for a non-zero GEILEN.");
+    }
     return RISCV_EXCP_NONE;
 }

@@ -XXX,XX +XXX,XX @@ static RISCVException write_htinst(CPURISCVState *env, int csrno,
     return RISCV_EXCP_NONE;
 }

-static RISCVException read_hgeip(CPURISCVState *env, int csrno,
-                                 target_ulong *val)
-{
-    qemu_log_mask(LOG_UNIMP, "No support for a non-zero GEILEN.");
-    return RISCV_EXCP_NONE;
-}
-
 static RISCVException write_hgeip(CPURISCVState *env, int csrno,
                                   target_ulong val)
 {
-    qemu_log_mask(LOG_UNIMP, "No support for a non-zero GEILEN.");
+    if (val) {
+        qemu_log_mask(LOG_UNIMP, "No support for a non-zero GEILEN.");
+    }
     return RISCV_EXCP_NONE;
 }

@@ -XXX,XX +XXX,XX @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     [CSR_HIP] = { "hip", hmode, NULL, NULL, rmw_hip },
     [CSR_HIE] = { "hie", hmode, read_hie, write_hie },
     [CSR_HCOUNTEREN] = { "hcounteren", hmode, read_hcounteren, write_hcounteren },
-    [CSR_HGEIE] = { "hgeie", hmode, read_hgeie, write_hgeie },
+    [CSR_HGEIE] = { "hgeie", hmode, read_zero, write_hgeie },
     [CSR_HTVAL] = { "htval", hmode, read_htval, write_htval },
     [CSR_HTINST] = { "htinst", hmode, read_htinst, write_htinst },
-    [CSR_HGEIP] = { "hgeip", hmode, read_hgeip, write_hgeip },
+    [CSR_HGEIP] = { "hgeip", hmode, read_zero, write_hgeip },
     [CSR_HGATP] = { "hgatp", hmode, read_hgatp, write_hgatp },
     [CSR_HTIMEDELTA] = { "htimedelta", hmode, read_htimedelta, write_htimedelta },
     [CSR_HTIMEDELTAH] = { "htimedeltah", hmode32, read_htimedeltah, write_htimedeltah },
--
2.31.1

From: Yu-Ming Chang <yumin686@andestech.com>

Both CSRRS and CSRRC always read the addressed CSR and cause any read side
effects regardless of rs1 and rd fields. Note that if rs1 specifies a register
holding a zero value other than x0, the instruction will still attempt to write
the unmodified value back to the CSR and will cause any attendant side effects.

So if CSRRS or CSRRC tries to write a read-only CSR with rs1 which specifies
a register holding a zero value, an illegal instruction exception should be
raised.

Signed-off-by: Yu-Ming Chang <yumin686@andestech.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240403070823.80897-1-yumin686@andestech.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu.h       |  4 ++++
 target/riscv/csr.c       | 51 ++++++++++++++++++++++++++++++++++++----
 target/riscv/op_helper.c |  6 ++---
 3 files changed, 53 insertions(+), 8 deletions(-)

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPURISCVState *env, vaddr *pc,
 void riscv_cpu_update_mask(CPURISCVState *env);
 bool riscv_cpu_is_32bit(RISCVCPU *cpu);

+RISCVException riscv_csrr(CPURISCVState *env, int csrno,
+                          target_ulong *ret_value);
 RISCVException riscv_csrrw(CPURISCVState *env, int csrno,
                            target_ulong *ret_value,
                            target_ulong new_value, target_ulong write_mask);
@@ -XXX,XX +XXX,XX @@ typedef RISCVException (*riscv_csr_op_fn)(CPURISCVState *env, int csrno,
                                           target_ulong new_value,
                                           target_ulong write_mask);

+RISCVException riscv_csrr_i128(CPURISCVState *env, int csrno,
+                               Int128 *ret_value);
 RISCVException riscv_csrrw_i128(CPURISCVState *env, int csrno,
                                 Int128 *ret_value,
                                 Int128 new_value, Int128 write_mask);
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -XXX,XX +XXX,XX @@ static RISCVException rmw_seed(CPURISCVState *env, int csrno,

 static inline RISCVException riscv_csrrw_check(CPURISCVState *env,
                                                int csrno,
-                                               bool write_mask)
+                                               bool write)
 {
     /* check privileges and return RISCV_EXCP_ILLEGAL_INST if check fails */
     bool read_only = get_field(csrno, 0xC00) == 3;
@@ -XXX,XX +XXX,XX @@ static inline RISCVException riscv_csrrw_check(CPURISCVState *env,
     }

     /* read / write check */
-    if (write_mask && read_only) {
+    if (write && read_only) {
         return RISCV_EXCP_ILLEGAL_INST;
     }

@@ -XXX,XX +XXX,XX @@ static RISCVException riscv_csrrw_do64(CPURISCVState *env, int csrno,
     return RISCV_EXCP_NONE;
 }

+RISCVException riscv_csrr(CPURISCVState *env, int csrno,
+                          target_ulong *ret_value)
+{
+    RISCVException ret = riscv_csrrw_check(env, csrno, false);
+    if (ret != RISCV_EXCP_NONE) {
+        return ret;
+    }
+
+    return riscv_csrrw_do64(env, csrno, ret_value, 0, 0);
+}
+
 RISCVException riscv_csrrw(CPURISCVState *env, int csrno,
                            target_ulong *ret_value,
                            target_ulong new_value, target_ulong write_mask)
 {
-    RISCVException ret = riscv_csrrw_check(env, csrno, write_mask);
+    RISCVException ret = riscv_csrrw_check(env, csrno, true);
     if (ret != RISCV_EXCP_NONE) {
         return ret;
     }
@@ -XXX,XX +XXX,XX @@ static RISCVException riscv_csrrw_do128(CPURISCVState *env, int csrno,
     return RISCV_EXCP_NONE;
 }

+RISCVException riscv_csrr_i128(CPURISCVState *env, int csrno,
+                               Int128 *ret_value)
+{
+    RISCVException ret;
+
+    ret = riscv_csrrw_check(env, csrno, false);
+    if (ret != RISCV_EXCP_NONE) {
+        return ret;
+    }
+
+    if (csr_ops[csrno].read128) {
+        return riscv_csrrw_do128(env, csrno, ret_value,
+                                 int128_zero(), int128_zero());
+    }
+
+    /*
+     * Fall back to 64-bit version for now, if the 128-bit alternative isn't
+     * at all defined.
+     * Note, some CSRs don't need to extend to MXLEN (64 upper bits non
+     * significant), for those, this fallback is correctly handling the
+     * accesses
+     */
+    target_ulong old_value;
+    ret = riscv_csrrw_do64(env, csrno, &old_value,
+                           (target_ulong)0,
+                           (target_ulong)0);
+    if (ret == RISCV_EXCP_NONE && ret_value) {
+        *ret_value = int128_make64(old_value);
+    }
+    return ret;
+}
+
 RISCVException riscv_csrrw_i128(CPURISCVState *env, int csrno,
                                 Int128 *ret_value,
                                 Int128 new_value, Int128 write_mask)
 {
     RISCVException ret;

-    ret = riscv_csrrw_check(env, csrno, int128_nz(write_mask));
+    ret = riscv_csrrw_check(env, csrno, true);
     if (ret != RISCV_EXCP_NONE) {
         return ret;
     }
diff --git a/target/riscv/op_helper.c b/target/riscv/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/op_helper.c
+++ b/target/riscv/op_helper.c
@@ -XXX,XX +XXX,XX @@ target_ulong helper_csrr(CPURISCVState *env, int csr)
     }

     target_ulong val = 0;
-    RISCVException ret = riscv_csrrw(env, csr, &val, 0, 0);
+    RISCVException ret = riscv_csrr(env, csr, &val);

     if (ret != RISCV_EXCP_NONE) {
         riscv_raise_exception(env, ret, GETPC());
@@ -XXX,XX +XXX,XX @@ target_ulong helper_csrrw(CPURISCVState *env, int csr,
 target_ulong helper_csrr_i128(CPURISCVState *env, int csr)
 {
     Int128 rv = int128_zero();
-    RISCVException ret = riscv_csrrw_i128(env, csr, &rv,
-                                          int128_zero(),
-                                          int128_zero());
+    RISCVException ret = riscv_csrr_i128(env, csr, &rv);

     if (ret != RISCV_EXCP_NONE) {
         riscv_raise_exception(env, ret, GETPC());
--
2.45.1